WorldWideScience

Sample records for semi-analytical model reconstruction

  1. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    International Nuclear Information System (INIS)

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a two-dimensional 13-term source expansion. In order to achieve a better approximation of the source distribution, the least-squares fitting method is employed. The eight exponential terms represent a part of the analytically obtained homogeneous solution, and the eight coefficients are determined by imposing constraints on the four surface-average currents and four corner-point fluxes. The surface-average currents determined from a transverse-integrated nodal solution are used directly, whereas the corner-point fluxes are determined during the course of the reconstruction by employing an iterative scheme that realizes the corner-point balance condition. A new corner-point flux determination scheme based on outgoing currents is introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)
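
    The least-squares fitting of the polynomial source expansion described in this record can be sketched as follows. The abstract does not list the 13 terms, so the monomial basis below is an illustrative assumption; the fit itself uses NumPy's standard `lstsq`.

```python
import numpy as np

# Hypothetical 13-term 2-D polynomial basis (the exact set used by the
# authors is not given in the abstract; this choice is for illustration).
def basis(x, y):
    return np.stack([np.ones_like(x), x, y, x*x, y*y, x*y,
                     x**3, y**3, x*x*y, x*y*y,
                     x**4, y**4, x*x*y*y], axis=-1)

def fit_source(xs, ys, q):
    """Least-squares coefficients of the polynomial source expansion."""
    A = basis(xs, ys)                      # (npoints, 13) design matrix
    coef, *_ = np.linalg.lstsq(A, q, rcond=None)
    return coef

# Sample a smooth 'source' on one node and fit it.
g = np.linspace(-1.0, 1.0, 9)
X, Y = np.meshgrid(g, g)
xs, ys = X.ravel(), Y.ravel()
q = 1.0 + 0.3*xs - 0.2*ys + 0.1*xs*ys      # lies in the span of the basis
c = fit_source(xs, ys, q)
resid = np.max(np.abs(basis(xs, ys) @ c - q))  # ~0 for in-span sources
```

    For a source that lies in the span of the basis, the fit recovers the coefficients exactly; for a general source it gives the best approximation in the least-squares sense, which is the role it plays in the reconstruction.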

  2. Semi-analytical MBS Pricing

    DEFF Research Database (Denmark)

    Rom-Poulsen, Niels

    2007-01-01

    This paper presents a multi-factor valuation model for fixed-rate callable mortgage backed securities (MBS). The model yields semi-analytic solutions for the value of MBS in the sense that the MBS value is found by solving a system of ordinary differential equations. Instead of modelling the cond… […] interest rate model. However, if the pool size is specified in a way that makes the expectations solvable using transform methods, semi-analytic pricing formulas are achieved. The affine and quadratic pricing frameworks are combined to get flexible and sophisticated prepayment functions. We show…

  3. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the model using a proposed semi-analytical model (SAA). Unlike the QAA procedure, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model performs better than the QAA model in absorption retrieval: using the SAA model to retrieve absorption coefficients of optically active constituents on the West Florida Shelf decreases the random uncertainty of estimation by more than 23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal water; Absorption coefficient; Semi-analytical model

  4. A fast semi-analytical model for the slotted structure of induction motors

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    A fast, semi-analytical model for induction motors (IMs) is presented. In comparison to traditional analytical models for IMs, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous distribution of the magnetic flux density in…

  5. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. A semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function in the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated by slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  6. Comparison of a semi-analytic and a CFD model of uranium combustion to experimental data

    International Nuclear Information System (INIS)

    Clarksean, R.

    1998-01-01

    Two numerical models were developed and compared for the analysis of uranium combustion and ignition in a furnace: a semi-analytical solution and a computational fluid dynamics (CFD) numerical solution. Prediction of uranium oxidation rates is important for fuel storage applications, fuel processing, and the development of spent fuel metal waste forms. The semi-analytical model was based on heat transfer correlations, a semi-analytical model of flow over a flat surface, and simple radiative heat transfer from the material surface. The CFD model numerically determined the flow field over the object of interest, calculated the heat and mass transfer to the material of interest, and calculated the radiative heat exchange of the material with the furnace. The semi-analytical model is much less detailed than the CFD model, but yields reasonable results and assists in understanding the physical process; its short computation times allowed the analyst to study numerous scenarios. The CFD model had significantly longer run times and some physical limitations that were not easily modified, but it was better able to yield details of the heat and mass transfer and the flow field once code limitations were overcome.
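
    The kind of heat-transfer correlation such a semi-analytical model is built on can be illustrated with the classical laminar flat-plate relation Nu = 0.664 Re^(1/2) Pr^(1/3). The abstract does not state which correlations the author actually used, so this standard one is a stand-in.

```python
import math

def flat_plate_h(u, L, rho, mu, k, cp):
    """Average convective coefficient for laminar flow over a flat plate,
    using the classical correlation Nu = 0.664 * Re**0.5 * Pr**(1/3).
    (Illustrative stand-in; not necessarily the paper's correlation.)"""
    re = rho * u * L / mu                    # Reynolds number
    pr = cp * mu / k                         # Prandtl number
    nu = 0.664 * math.sqrt(re) * pr ** (1.0 / 3.0)
    return nu * k / L                        # h = Nu * k / L  [W/(m^2 K)]

# Air at ~300 K flowing at 2 m/s over a 0.1 m plate.
h = flat_plate_h(u=2.0, L=0.1, rho=1.18, mu=1.85e-5, k=0.026, cp=1005.0)
```

    A correlation like this, combined with a surface energy balance including radiation, is what lets a semi-analytical model run in a fraction of the CFD model's time.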

  7. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed… …linear. With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind turbines and subsequently solved successively in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit…

  8. Semi-analytical solutions of the Schnakenberg model of a reaction-diffusion cell with feedback

    Science.gov (United States)

    Al Noufaey, K. S.

    2018-06-01

    This paper considers the application of a semi-analytical method to the Schnakenberg model of a reaction-diffusion cell. The semi-analytical method is based on the Galerkin method which approximates the original governing partial differential equations as a system of ordinary differential equations. Steady-state curves, bifurcation diagrams and the region of parameter space in which Hopf bifurcations occur are presented for semi-analytical solutions and the numerical solution. The effect of feedback control, via altering various concentrations in the boundary reservoirs in response to concentrations in the cell centre, is examined. It is shown that increasing the magnitude of feedback leads to destabilization of the system, whereas decreasing this parameter to negative values of large magnitude stabilizes the system. The semi-analytical solutions agree well with numerical solutions of the governing equations.
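
    A minimal sketch of the Galerkin reduction described in this record, applied to a scalar reaction-diffusion equation rather than the two-species Schnakenberg system to keep the algebra short; the projection steps are the same. For u_t = u_xx + λu(1 − u) with u(0,t) = u(1,t) = 0 and the one-term trial function u ≈ A(t) sin(πx), projecting onto sin(πx) gives the ODE A' = (λ − π²)A − (8λ/3π)A².

```python
import numpy as np

# One-term Galerkin reduction of a reaction-diffusion PDE to an ODE.
# PDE:    u_t = u_xx + lam*u*(1 - u),  u(0,t) = u(1,t) = 0
# Trial:  u(x,t) ~= A(t)*sin(pi*x)
# Projection onto sin(pi*x) yields:
#         A' = (lam - pi**2)*A - (8*lam/(3*pi))*A**2
lam = 15.0

def rhs(A):
    return (lam - np.pi**2) * A - (8.0 * lam / (3.0 * np.pi)) * A**2

# Integrate the reduced ODE with classical fixed-step RK4 to t = 50.
A, dt = 0.05, 0.01
for _ in range(5000):
    k1 = rhs(A)
    k2 = rhs(A + 0.5*dt*k1)
    k3 = rhs(A + 0.5*dt*k2)
    k4 = rhs(A + dt*k3)
    A += dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

# Nonzero steady state of the reduced model (exists for lam > pi**2).
A_star = 3.0*np.pi*(lam - np.pi**2)/(8.0*lam)
```

    Steady states and bifurcations of the reduced ODE system are exactly what the paper's steady-state curves and Hopf bifurcation diagrams are computed from.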

  9. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Science.gov (United States)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over a disputed, sensitive area of interest. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distance, large area and sensitive location. Remote sensing data provide an effective, non-contact way to estimate bathymetry over large areas through the relationship between spectral information and water depth. Targeting the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depth. First, a semi-analytical optimization model based on the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. Meanwhile, OpenMP parallel computing is introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
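
    The genetic-algorithm inversion step in this record can be sketched as follows. The forward reflectance model, its parameters, and the GA settings below are illustrative assumptions, not the paper's: the toy model attenuates bottom reflectance exponentially with depth, and the GA searches for the depth that reproduces an observed reflectance.

```python
import math
import random

# Toy semi-analytical forward model: reflectance as a function of depth
# (assumed form and parameters; the paper's model is not given here).
def forward(depth, k=0.12, r_bottom=0.30, r_deep=0.05):
    return r_deep + (r_bottom - r_deep) * math.exp(-2.0 * k * depth)

observed = forward(7.5)            # synthetic 'measured' reflectance

def fitness(depth):
    return -(forward(depth) - observed) ** 2   # maximize => minimize misfit

def ga(pop_size=40, gens=60, lo=0.0, hi=20.0, pm=0.2):
    """Minimal real-coded genetic algorithm with truncation selection,
    arithmetic crossover and Gaussian mutation."""
    random.seed(1)
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            c = 0.5 * (a + b)                  # arithmetic crossover
            if random.random() < pm:           # Gaussian mutation
                c = min(hi, max(lo, c + random.gauss(0.0, 0.5)))
            children.append(c)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()   # estimated depth, close to the true 7.5 m
```

    In the paper the fitness would compare modelled and Worldview-2 band reflectances per pixel, which is also where the OpenMP parallelization pays off.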

  10. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F., E-mail: fabien.formosa@univ-savoie.f [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France)

    2011-05-15

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA-RE1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on non-linear dissipative phenomena, such as the pressure drop loss within heat exchangers, which is dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.

  11. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    International Nuclear Information System (INIS)

    Formosa, F.

    2011-01-01

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA-RE1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on non-linear dissipative phenomena, such as the pressure drop loss within heat exchangers, which is dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.
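
    The coupled iterative strategy described in these two records amounts to a fixed-point iteration between the thermodynamic and dynamic models: each supplies the other's inputs until they agree. The two scalar surrogate "models" below are assumed toy functions, not the paper's equations; only the loop structure is the point.

```python
# Toy surrogate: mean gas temperature ratio as a function of piston stroke
# (assumed form for illustration).
def thermo_model(stroke):
    return 2.0 + 0.1 * stroke

# Toy surrogate: steady-state stroke as a function of temperature ratio
# (assumed form for illustration).
def dynamic_model(temp_ratio):
    return 1.0 + 0.3 * temp_ratio

# Fixed-point iteration: alternate the two models until they are mutually
# consistent, i.e. neither output changes any more.
stroke, it = 1.0, 0
for it in range(1, 101):
    temp_ratio = thermo_model(stroke)
    new_stroke = dynamic_model(temp_ratio)
    if abs(new_stroke - stroke) < 1e-10:   # converged: models agree
        stroke = new_stroke
        break
    stroke = new_stroke
```

    Because the composed map is a contraction here, the loop converges in a handful of iterations; the same convergence question governs the paper's global simulation.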

  12. Magnetic saturation in semi-analytical harmonic modeling for electric machine analysis

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.

    2016-01-01

    A semi-analytical method based on the harmonic modeling (HM) technique is presented for the analysis of the magneto-static field distribution in the slotted structure of rotating electric machines. In contrast to the existing literature, the proposed model does not require the assumption of infinite…

  13. A comparison of galaxy group luminosity functions from semi-analytic models

    NARCIS (Netherlands)

    Snaith, Owain N.; Gibson, Brad K.; Brook, Chris B.; Courty, Stéphanie; Sánchez-Blázquez, Patricia; Kawata, Daisuke; Knebe, Alexander; Sales, Laura V.

    Semi-analytic models (SAMs) are currently one of the primary tools with which we model statistically significant ensembles of galaxies. The underlying physical prescriptions inherent to each SAM are, in many cases, different from one another. Several SAMs have been applied to the dark matter merger…

  14. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  15. Semi-analytical model for hollow-core anti-resonant fibers

    Directory of Open Access Journals (Sweden)

    Wei Ding

    2015-03-01

    Full Text Available: We describe in detail a recently developed semi-analytical method to quantitatively calculate the light transmission properties of hollow-core anti-resonant fibers (HC-ARFs). The formation of an equiphase interface at the fiber's outermost boundary, and outward light emission governed by the Helmholtz equation in the fiber's transverse plane, constitute the basis of this method. Our semi-analytical results agree well with those of precise simulations and clarify the dependence of light leakage on azimuthal angle, geometrical shape and polarization. Using this method, we investigate HC-ARFs having various core shapes (e.g. polygonal, hypocycloidal) with single- and multi-layered core-surrounds. The polarization properties of ARFs are also studied. Our semi-analytical method provides clear physical insight into light guidance in ARFs and can serve as a fast and useful design aid for better ARFs.

  16. A semi-analytical stationary model of a point-to-plane corona discharge

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F

    2012-01-01

    A semi-analytical model of a dc corona discharge is formulated to determine the spatial distribution of charged particles (electrons, negative ions and positive ions) and the electric field in pure oxygen using a point-to-plane electrode system. A key point in the modeling is the integration of Gauss' law and the continuity equation of charged species along the electric field lines, and the use of Warburg's law and the corona current–voltage characteristics as input data in the boundary conditions. The electric field distribution predicted by the model is compared with the numerical solution obtained using a finite-element technique. The semi-analytical solutions are obtained at a negligible computational cost, and provide useful information to characterize and control the corona discharge in different technological applications. (paper)

  17. Accuracy of semi-analytical finite elements for modelling wave propagation in rails

    CSIR Research Space (South Africa)

    Andhavarapu, EV

    2010-01-01

    Full Text Available The semi-analytical finite element method (SAFE) is a popular method for analysing guided wave propagation in elastic waveguides of complex cross-section such as rails. The convergence of these models has previously been studied for linear...

  18. A semi-analytic model of magnetized liner inertial fusion

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Ryan D.; Slutz, Stephen A. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2015-05-15

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.

  19. Simplified semi-analytical model for mass transport simulation in unsaturated zone

    International Nuclear Information System (INIS)

    Sa, Bernadete L. Vieira de; Hiromoto, Goro

    2001-01-01

    This paper describes a simple model to determine the flux of radionuclides released from a concrete vault repository, and its implementation in a computer program. The radionuclide leach rate from the waste is calculated using a model based on simple first-order kinetics, and the transport through the porous media below the waste is determined using a semi-analytical solution of the mass transport equation. Results obtained in the IAEA intercomparison program are also reported in this communication. (author)

  20. An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams.

    Science.gov (United States)

    Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing

    2017-07-07

    With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
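
    The core of SOBP reconstruction, weighting mono-energetic Bragg curves so their sum is flat over the target depth interval, can be sketched numerically. The paper derives a modified analytic weighting formula; here the weights are instead obtained by least squares against a flat target dose, and the depth-dose curve is a toy Gaussian-peaked shape (an assumed form, not a physical Bragg curve).

```python
import numpy as np

def bragg(depth, r):
    """Toy depth-dose curve: a broad build-up plus a peak at range r
    (assumed shape for illustration only)."""
    return 0.3 * np.exp(-((depth - r) / 8.0) ** 2) \
         + np.exp(-((depth - r) / 1.5) ** 2)

depth = np.linspace(0.0, 16.0, 161)          # depth grid [cm]
ranges = np.linspace(8.0, 12.0, 9)           # nine energy layers
D = np.stack([bragg(depth, r) for r in ranges], axis=1)

plateau = (depth >= 8.0) & (depth <= 12.0)   # where the dose should be flat
w, *_ = np.linalg.lstsq(D[plateau], np.ones(plateau.sum()), rcond=None)

sobp = D @ w                                  # reconstructed SOBP
rms = np.sqrt(np.mean((sobp[plateau] - 1.0) ** 2))  # plateau flatness
```

    For laser-accelerated beams, each energy layer additionally carries the shot-to-shot spectrum, which is why the paper converts the weights into a number of laser shots per energy interval.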

  1. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    International Nuclear Information System (INIS)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs

  2. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García [Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba (Argentina); Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D. [Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz, E-mail: andresnicolas@oac.uncor.edu [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile)

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
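
    The PSO technique used in these two calibration records, in its standard global-best form, can be sketched as follows. A simple 2-D quadratic objective stands in for the actual likelihood comparing SAM galaxy statistics with observations.

```python
import random

# Stand-in objective (the real one compares model galaxy statistics to
# observations); minimum at (1, -2).
def objective(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Standard global-best PSO: inertia w, cognitive pull c1 toward each
    particle's personal best, social pull c2 toward the swarm's best."""
    random.seed(7)
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: objective(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(*pos[i]) < objective(*pbest[i]):
                pbest[i] = pos[i][:]
                if objective(*pos[i]) < objective(*gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()   # converges near the optimum (1, -2)
```

    Each objective evaluation here corresponds to a full SAM run in the papers, which is why needing an order of magnitude fewer evaluations than MCMC matters.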

  3. Neuromantic - from semi manual to semi automatic reconstruction of neuron morphology

    Directory of Open Access Journals (Sweden)

    Darren Myatt

    2012-03-01

    Full Text Available: The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstructions from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing the speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualise large image stacks on standard computing platforms provides a versatile tool that can help address the reconstruction-availability bottleneck. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed.

  4. A fluid-coupled transmitting CMUT operated in collapse mode : Semi-analytic modeling and experiments

    NARCIS (Netherlands)

    Pekař, Martin; van Nispen, Stephan H.M.; Fey, Rob H.B.; Shulepov, Sergei; Mihajlović, Nenad; Nijmeijer, Henk

    2017-01-01

    An electro-mechanical, semi-analytic, reduced-order (RO) model of a fluid-loaded transmitting capacitive-micromachined ultrasound transducer (CMUT) operated in collapse mode is developed. Simulations of static deflections, approximated by a linear combination of six mode shapes, are benchmarked…

  5. Simulation of reactive geochemical transport in groundwater using a semi-analytical screening model

    Science.gov (United States)

    McNab, Walt W.

    1997-10-01

    A reactive geochemical transport model, based on a semi-analytical solution to the advective-dispersive transport equation in two dimensions, is developed as a screening tool for evaluating the impact of reactive contaminants on aquifer hydrogeochemistry. Because the model utilizes an analytical solution to the transport equation, it is less computationally intensive than models based on numerical transport schemes, is faster, and it is not subject to numerical dispersion effects. Although the assumptions used to construct the model preclude consideration of reactions between the aqueous and solid phases, thermodynamic mineral saturation indices are calculated to provide qualitative insight into such reactions. Test problems involving acid mine drainage and hydrocarbon biodegradation signatures illustrate the utility of the model in simulating essential hydrogeochemical phenomena.
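
    The closed-form building block of such a screening model can be sketched with the exact 2-D advection-dispersion solution for an instantaneous unit point release (an illustrative kernel; McNab's actual source configuration is not given in the abstract). Because the solution is analytic, there is no grid and hence no numerical dispersion, which is the advantage the abstract emphasizes.

```python
import numpy as np

def plume(x, y, t, v=1.0, Dx=0.5, Dy=0.05):
    """Exact 2-D advection-dispersion solution for a unit instantaneous
    point release at the origin, uniform velocity v along x, and
    longitudinal/transverse dispersion coefficients Dx, Dy."""
    return (1.0 / (4.0 * np.pi * t * np.sqrt(Dx * Dy))
            * np.exp(-((x - v * t) ** 2 / (4.0 * Dx * t)
                       + y ** 2 / (4.0 * Dy * t))))

# Evaluate the plume at t = 10 on a grid and check mass conservation.
x = np.linspace(-20.0, 40.0, 601)
y = np.linspace(-10.0, 10.0, 401)
X, Y = np.meshgrid(x, y)
C = plume(X, Y, t=10.0)

# The concentration field integrates to ~1 (the released mass) at any time.
mass = C.sum() * (x[1] - x[0]) * (y[1] - y[0])
```

    Continuous sources and reacting species are then handled by superposing such kernels in time and per species, with saturation indices computed from the resulting concentrations.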

  6. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    Science.gov (United States)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream owing to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature have not covered spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously transforms the concentrations in the stream and in the storage zones using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature, with good agreement between the computed and field data. Illustrative examples demonstrate applications of the present solution. It is shown that solute transport can be greatly affected by variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.

  7. Self-consistent semi-analytic models of the first stars

    Science.gov (United States)

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  8. Massive quiescent galaxies at z > 3 in the Millennium simulation populated by a semi-analytic galaxy formation model

    Science.gov (United States)

    Rong, Yu; Jing, Yingjie; Gao, Liang; Guo, Qi; Wang, Jie; Sun, Shuangpeng; Wang, Lin; Pan, Jun

    2017-10-01

    We take advantage of the statistical power of the large-volume dark-matter-only Millennium simulation (MS), combined with a sophisticated semi-analytic galaxy formation model, to explore whether the recently reported z = 3.7 quiescent galaxy ZF-COSMOS-20115 (ZF) can be accommodated in current galaxy formation models. In our model, a population of quiescent galaxies with stellar masses and star formation rates comparable to those of ZF naturally emerges at high redshifts. Although massive QGs at z > 3.5 are rare (about 2 per cent of the galaxies with similar stellar masses), the existing AGN feedback model implemented in the semi-analytic galaxy formation model can successfully explain the formation of these high-redshift QGs, as it does for their lower-redshift counterparts.

  9. Numerical and semi-analytical modelling of the process induced distortions in pultrusion

    DEFF Research Database (Denmark)

    Baran, Ismet; Carlone, P.; Hattel, Jesper Henri

    2013-01-01

    The transient distortions are inferred by adopting a semi-analytical procedure, i.e. post-processing numerical results by means of analytical methods. The predictions of the process-induced distortion development using the aforementioned methods are found to be qualitatively close to each other...

  10. Control Chart on Semi Analytical Weighting

    Science.gov (United States)

    Miranda, G. S.; Oliveira, C. C.; Silva, T. B. S. C.; Stellato, T. B.; Monteiro, L. R.; Marques, J. R.; Faustino, M. G.; Soares, S. M. V.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.

    2018-03-01

    Semi-analytical balance verification intends to assess balance performance using graphs that illustrate measurement dispersion through time, and to demonstrate that measurements were performed in a reliable manner. This study presents the internal quality control of a semi-analytical balance (GEHAKA BG400) using control charts. From 2013 to 2016, two weight standards were monitored before any balance operation. This work evaluates whether any significant difference or bias was present in the weighing procedure over time, in order to check the reliability of the generated data, and exemplifies how control intervals are established.
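
A minimal sketch of the kind of control chart such balance verification relies on, assuming a Shewhart-style chart with warning and action limits at two and three standard deviations of an in-control baseline; the GEHAKA BG400 data are not reproduced here, so the check-weighings are simulated.

```python
import numpy as np

# Hedged sketch: Shewhart-style control limits for repeated check-weighings
# of a standard mass. Baseline weighings are simulated (hypothetical values);
# limits sit at mean +/- 2s (warning) and mean +/- 3s (action).

def control_limits(baseline):
    m, s = np.mean(baseline), np.std(baseline, ddof=1)
    return {"center": m,
            "warning": (m - 2 * s, m + 2 * s),
            "action": (m - 3 * s, m + 3 * s)}

def classify(value, limits):
    lo3, hi3 = limits["action"]
    lo2, hi2 = limits["warning"]
    if not lo3 <= value <= hi3:
        return "out of control"
    if not lo2 <= value <= hi2:
        return "warning"
    return "in control"

rng = np.random.default_rng(0)
baseline = 100.000 + rng.normal(0.0, 0.0005, size=30)   # grams, simulated
limits = control_limits(baseline)
```

Each new weighing of the standard is then classified against the baseline limits before the balance is used.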

  11. Semi-analytic techniques for calculating bubble wall profiles

    International Nuclear Information System (INIS)

    Akula, Sujeet; Balazs, Csaba; White, Graham A.

    2016-01-01

    We present semi-analytic techniques for finding bubble wall profiles during first order phase transitions with multiple scalar fields. Our method involves reducing the problem to an equation with a single field, finding an approximate analytic solution and perturbing around it. The perturbations can be written in a semi-analytic form. We assert that our technique lacks convergence problems and demonstrate the speed of convergence on an example potential. (orig.)

  12. Reconstruction of binary geological images using analytical edge and object models

    Science.gov (United States)

    Abdollahifard, Mohammad J.; Ahmadi, Sadegh

    2016-04-01

    Reconstruction of fields from partial measurements is of vital importance in different applications in geosciences. Solving such an ill-posed problem requires a well-chosen model. In recent years, training images (TIs) have been widely employed as strong prior models for solving these problems. However, in the absence of sufficient evidence it is difficult to find an adequate TI capable of describing the field behavior properly. In this paper a very simple and general model is introduced which is applicable to a fairly wide range of binary images without any modification. The model is motivated by the fact that nearly all binary images are composed of simple linear edges at the micro-scale. The analytic essence of this model allows us to formulate the template matching problem as a convex optimization problem having efficient and fast solutions. The model has the potential to incorporate qualitative and quantitative information provided by geologists. The image reconstruction problem is also formulated as an optimization problem and solved using an iterative greedy approach. The proposed method is capable of recovering unknown image values with accuracies of about 90% given samples representing as few as 2% of the original image.

  13. Semi-analytic solution to planar Helmholtz equation

    Directory of Open Access Journals (Sweden)

    Tukač M.

    2013-06-01

    Full Text Available Acoustic solutions of interior domains are of great interest. Solving acoustic pressure fields faster and with lower computational requirements is in demand. A novel solution technique based on the analytic solution to the Helmholtz equation in a rectangular domain is presented. This semi-analytic solution is compared with the finite element method, which is taken as the reference. Results show that the presented method is as precise as the finite element method. As the semi-analytic method does not require spatial discretization, it can be used for small and very large acoustic problems at the same computational cost.
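
As a hedged illustration of the semi-analytic idea, the sketch below expands the response of a forced 2-D Helmholtz problem on a rectangle with rigid (Neumann) walls in cosine eigenfunctions; the geometry, wavenumber and mode counts are assumptions for illustration, and the paper's actual boundary conditions may differ.

```python
import numpy as np

# Hedged sketch: modal (semi-analytic) solution of the forced 2-D Helmholtz
# equation on a rectangle with rigid walls. A point source at (xs, ys) is
# expanded in cosine eigenfunctions, so no spatial mesh is needed; the
# pressure follows from the modal Green's function
#   p = sum_mn phi_mn(xs,ys) phi_mn(x,y) / (k_mn^2 - k^2).

Lx, Ly, k = 2.0, 1.0, 4.0        # room size and acoustic wavenumber (assumed)
M = N = 20                       # retained modes per direction

def helmholtz_pressure(source, x, y):
    """Pressure at (x, y) due to a point source, via the modal expansion."""
    xs, ys = source
    p = 0.0
    for m in range(M):
        for n in range(N):
            kmn2 = (m * np.pi / Lx) ** 2 + (n * np.pi / Ly) ** 2
            norm = (Lx / 2 if m else Lx) * (Ly / 2 if n else Ly)
            phi_s = np.cos(m * np.pi * xs / Lx) * np.cos(n * np.pi * ys / Ly)
            phi_r = np.cos(m * np.pi * x / Lx) * np.cos(n * np.pi * y / Ly)
            p += phi_s * phi_r / (norm * (kmn2 - k ** 2))
    return p
```

Acoustic reciprocity (swapping source and receiver leaves the pressure unchanged) gives a quick sanity check of the expansion.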

  14. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  15. A fast semi-analytical model for the slotted structure of induction motors with 36/28 stator/rotor slot combination

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    2014-01-01

    A fast, semi-analytical model for induction motors (IMs) with a 36/28 stator/rotor slot combination is presented. In comparison to traditional analytical models for IMs, such as lumped-parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous...

  16. Semi-analytical approach to modelling the dynamic behaviour of soil excited by embedded foundations

    DEFF Research Database (Denmark)

    Bucinskas, Paulius; Andersen, Lars Vabbersgaard

    2017-01-01

    The underlying soil has a significant effect on the dynamic behaviour of structures. The paper proposes a semi-analytical approach based on a Green’s function solution in frequency–wavenumber domain. The procedure allows calculating the dynamic stiffness for points on the soil surface as well...... are analysed. It is determined how simplification of the numerical model affects the overall dynamic behaviour. © 2017 The Authors. Published by Elsevier Ltd....

  17. A Novel Analytical Model for Network-on-Chip using Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    WANG, J.

    2011-02-01

    Full Text Available Network-on-Chip (NoC) communication architecture is proposed to resolve the bottleneck of multi-processor communication in a single chip. In this paper, a performance analytical model using a Semi-Markov Process (SMP) is presented to obtain the NoC performance. More precisely, given the related parameters, an SMP is used to describe the behavior of each channel, and the header flit routing time on each channel can be calculated by analyzing the SMP. Then, the average packet latency in the NoC can be calculated. The accuracy of our model is illustrated through simulation. Indeed, the experimental results show that the proposed model can be used to obtain NoC performance and that it performs better than state-of-the-art models. Therefore, our model can be used as a useful tool to guide the NoC design process.
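
A hedged sketch of the semi-Markov ingredient behind such channel models: for a semi-Markov process, the long-run fraction of time spent in state i is pi_i h_i / sum_j pi_j h_j, where pi is the stationary distribution of the embedded Markov chain and h_i are the mean holding (sojourn) times. The two-state "free"/"blocked" channel below is a toy illustration, not the paper's model.

```python
import numpy as np

# Hedged sketch: time fractions of a semi-Markov process (SMP) from the
# embedded chain's stationary distribution and the mean holding times.

def smp_time_fractions(P, holding):
    """P: embedded transition matrix; holding: mean sojourn time per state."""
    n = P.shape[0]
    # solve pi = pi P together with sum(pi) = 1 as a least-squares system
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = pi * holding
    return w / w.sum()

# toy two-state channel: "free" <-> "blocked", mean sojourns 3 and 1
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
fractions = smp_time_fractions(P, np.array([3.0, 1.0]))
```

With equal visit frequencies but a 3:1 ratio of holding times, the channel spends three quarters of the time free.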

  18. A semi-analytical refrigeration cycle modelling approach for a heat pump hot water heater

    Science.gov (United States)

    Panaras, G.; Mathioulakis, E.; Belessiotis, V.

    2018-04-01

    The use of heat pump systems in applications such as the production of hot water or space heating makes modelling of the processes important, both for evaluating the performance of existing systems and for design purposes. The proposed semi-analytical model offers the opportunity to estimate the performance of a heat pump system producing hot water without using detailed geometrical data or any performance data. This is important, as for many commercial systems the type and characteristics of the subcomponents involved can hardly be determined, preventing the implementation of more analytical approaches or the exploitation of the manufacturers' catalogue performance data. The analysis copes with the issues related to developing models of the subcomponents involved in the studied system, including issues not discussed thoroughly in the existing literature, such as the refrigerant mass inventory when an accumulator is present.

  19. Analytical model for cratering of semi-infinite metallic targets by long rod penetrators

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    An analytical model is presented herein to predict the diameter of the crater in semi-infinite metallic targets struck by a long-rod penetrator. Based on the observation that two mechanisms, mushrooming and cavitation, are involved in cavity expansion by a long-rod penetrator, the model is constructed using the laws of conservation of mass, momentum and energy, together with the u-v relationship of the newly suggested 1D theory of long-rod penetration (see Lan and Wen, Sci China Tech Sci, 2010, 53(5): 1364-1373). It is demonstrated that the model predictions are in good agreement with available experimental data and numerical simulations obtained for combinations of penetrators and targets made of different materials.

  20. SEMI-ANALYTIC GALAXY EVOLUTION (SAGE): MODEL CALIBRATION AND BASIC RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M. [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria 3122 (Australia)

    2016-02-15

    This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling–radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.

  1. SEMI-ANALYTIC GALAXY EVOLUTION (SAGE): MODEL CALIBRATION AND BASIC RESULTS

    International Nuclear Information System (INIS)

    Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M.

    2016-01-01

    This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling–radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.

  2. A semi-analytical foreshock model for energetic storm particle events inside 1 AU

    Directory of Open Access Journals (Sweden)

    Vainio Rami

    2014-02-01

    Full Text Available We have constructed a semi-analytical model of the energetic-ion foreshock of a CME-driven coronal/interplanetary shock wave responsible for the acceleration of large solar energetic particle (SEP) events. The model is based on the analytical model of diffusive shock acceleration of Bell (1978), appended with a temporal dependence of the cut-off momentum of the energetic particles accelerated at the shock, derived from the theory. Parameters of the model are re-calibrated using a fully time-dependent, self-consistent simulation model of the coupled particle acceleration and Alfvén-wave generation upstream of the shock. Our results show that analytical estimates of the cut-off energy resulting from the simplified theory and frequently used in SEP modelling overestimate the cut-off momentum at the shock by one order of magnitude. We also show that the cut-off momentum observed remotely far upstream of the shock (e.g., at 1 AU) can be used to infer the properties of the foreshock and the resulting energetic storm particle (ESP) event while the shock is still at small distances from the Sun, inaccessible to in-situ observations. Our results can be used in ESP event modelling for future missions to the inner heliosphere, like Solar Orbiter and Solar Probe Plus, as well as in developing acceleration models for SEP events in the solar corona.
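
The Bell (1978) ingredient the abstract refers to can be written down compactly. The sketch below evaluates the standard test-particle power-law spectrum of diffusive shock acceleration, with spectral index s = 3r/(r - 1) for a shock of compression ratio r, multiplied by an exponential roll-over at an assumed cut-off momentum (the quantity whose temporal evolution the paper models); normalization and cut-off values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: Bell-type power-law spectrum with an assumed exponential
# cut-off. For a strong shock (r = 4) the index is s = 4.

def bell_spectrum(p, p_inj, p_cut, r=4.0):
    s = 3.0 * r / (r - 1.0)                 # s = 4 for a strong shock
    return (p / p_inj) ** (-s) * np.exp(-p / p_cut)

p = np.logspace(0.0, 3.0, 100)              # momentum / injection momentum
f = bell_spectrum(p, p_inj=1.0, p_cut=100.0)
```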

  3. Universality and Realistic Extensions to the Semi-Analytic Simulation Principle in GNSS Signal Processing

    Directory of Open Access Journals (Sweden)

    O. Jakubov

    2012-06-01

    Full Text Available The semi-analytic simulation principle in GNSS signal processing bypasses bit-true operations at high sampling frequency. Instead, the signals at the output branches of the integrate&dump blocks are successfully modeled, thus making extensive Monte Carlo simulations feasible. Methods for the simulation of code and carrier tracking loops with BPSK and BOC signals have been introduced in the literature, and Matlab toolboxes were designed and published. In this paper, we further extend the applicability of the approach. Firstly, we describe any GNSS signal as a special instance of linear multi-dimensional modulation, thereby stating a universal framework for the classification of differently modulated signals. Using such a description, we derive the semi-analytic models generally. Secondly, we extend the model to realistic scenarios including delay in the feedback, slowly fading multipath effects, finite bandwidth, phase noise, and combinations of these. Finally, a discussion of the connection between this semi-analytic model and the position-velocity-time estimator is delivered, as well as a comparison of theoretical and simulated characteristics produced by a prototype simulator developed at CTU in Prague.

  4. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Science.gov (United States)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer race structure, and presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs, due to their assumptions of rigidity. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier-series-based compliance approximation. The resulting model is computationally inexpensive and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  5. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.

  6. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    Science.gov (United States)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass...
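
The Pareto-optimality notion behind those trade-off sets is easy to make concrete. The sketch below filters a set of hypothetical (cost, mass) designs, stand-ins for an evolutionary algorithm's population, down to those not dominated by any other design (no other design has both lower cost and higher injected mass).

```python
# Hedged sketch: Pareto front extraction for the two objectives in the
# abstract, minimizing total cost while maximizing injected CO2 mass.
# The candidate pairs are hypothetical illustration values.

def pareto_front(candidates):
    """Keep candidates not dominated by any other (lower cost, higher mass)."""
    front = []
    for cost, mass in candidates:
        dominated = any(
            c <= cost and m >= mass and (c, m) != (cost, mass)
            for c, m in candidates
        )
        if not dominated:
            front.append((cost, mass))
    return sorted(front)

designs = [(10, 5), (8, 4), (12, 9), (9, 4), (8, 6), (15, 9)]
front = pareto_front(designs)
```

In a real optimizer this dominance test drives the selection step; here it simply exposes the trade-off curve among the candidates.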

  7. Galaxy modelling. II. Multi-wavelength faint counts from a semi-analytic model of galaxy formation

    Science.gov (United States)

    Devriendt, J. E. G.; Guiderdoni, B.

    2000-11-01

    This paper predicts self-consistent faint galaxy counts from the UV to the submm wavelength range. The stardust spectral energy distributions described in Devriendt et al. (Paper I) are embedded within the explicit cosmological framework of a simple semi-analytic model of galaxy formation and evolution. We begin with a description of the non-dissipative and dissipative collapses of primordial perturbations, and plug in standard recipes for star formation, stellar evolution and feedback. We also model the absorption of starlight by dust and its re-processing in the IR and submm. We then build a class of models which capture the luminosity budget of the universe through faint galaxy counts and redshift distributions in the whole wavelength range spanned by our spectra. In contrast with a rather stable behaviour in the optical and even in the far-IR, the submm counts are dramatically sensitive to variations in the cosmological parameters and changes in the star formation history. Faint submm counts are more easily accommodated within an open universe with a low value of Omega_0, or a flat universe with a non-zero cosmological constant. We confirm the suggestion of Guiderdoni et al. (1998) that matching the current multi-wavelength data requires a population of heavily-extinguished, massive galaxies with large star formation rates ( ~ 500 M_sun yr-1) at intermediate and high redshift (z >= 1.5). Such a population of objects probably is the consequence of an increase of interaction and merging activity at high redshift, but a realistic quantitative description can only be obtained through more detailed modelling of such processes. This study illustrates the implementation of multi-wavelength spectra into a semi-analytic model. In spite of its simplicity, it already provides fair fits of the current data of faint counts, and a physically motivated way of interpolating and extrapolating these data to other wavelengths and fainter flux...

  8. Modal instability of rod fiber amplifiers: a semi-analytic approach

    DEFF Research Database (Denmark)

    Jørgensen, Mette Marie; Hansen, Kristian Rymann; Laurila, Marko

    2013-01-01

    The modal instability (MI) threshold is estimated for four rod fiber designs by combining a semi-analytic model with the finite element method. The thermal load due to the quantum defect is calculated and used to numerically determine the mode distributions on which the expression for the onset of...

  9. A two-dimensional, semi-analytic expansion method for nodal calculations

    International Nuclear Information System (INIS)

    Palmtag, S.P.

    1995-08-01

    Most modern nodal methods used today are based upon the transverse integration procedure, in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly-enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are (1) the leakage terms from the node are modeled explicitly, and therefore the transverse integration procedure is not used, (2) the corner-point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction, (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions to the thermal flux diffusion equation, and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly-enriched MOX fuel assemblies. The results from this calculation are compared to the results found using a code based on the traditional transverse integration procedure.

  10. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Hélène Macher

    2017-10-01

    Full Text Available The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal, since they allow collecting information about object geometry in the form of point clouds and provide a large amount of accurate data in a very fast way and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains largely a manual process, which is time-consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, walls and slabs of buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. The assessment of the approach is carried out on two datasets. The evaluation items are the degree of automation, the transferability of the approach and the geometric quality of the results of the 3D reconstruction. Additionally, quality indexes are introduced to inspect the results in order to be able to detect potential errors of reconstruction.
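
The planar segmentation at the heart of extracting grounds, ceilings and walls can be sketched with a minimal RANSAC plane fit on synthetic points; a real scan-to-BIM pipeline layers normal estimation, region growing and IFC export on top of a step like this, and the point cloud below is purely synthetic.

```python
import numpy as np

# Hedged sketch: RANSAC plane segmentation of a synthetic indoor point cloud
# (a noisy horizontal "floor" plus scattered clutter). Repeatedly fit a plane
# through three random points and keep the plane with the most inliers.

def ransac_plane(points, n_iter=200, tol=0.02, rng=None):
    """Return a boolean inlier mask for the best-supported plane."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        dist = np.abs((points - p1) @ (normal / norm))
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 5, 500), rng.uniform(0, 4, 500),
                         rng.normal(0, 0.005, 500)])      # noisy floor plane
clutter = rng.uniform(0, 5, (100, 3))                     # scattered points
inliers = ransac_plane(np.vstack([floor, clutter]))
```

Segmenting the cloud plane by plane (removing each plane's inliers and re-running) is the usual way such a step is iterated for walls and ceilings.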

  11. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    Science.gov (United States)

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA), in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD from 8 to 0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte shown to be most sensitive to the decrease in pharmaceutical quality, with the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
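
The bioequivalence metrics compared in the abstract, Cmax and AUC, and their test/reference ratios can be sketched directly. The one-compartment profiles below are toy stand-ins, not the paper's NONMEM model; a smaller absorption rate constant plays the role of a slower-dissolving test formulation, and all rate constants are hypothetical.

```python
import numpy as np

# Hedged sketch: Cmax read off a concentration-time profile, AUC by the
# trapezoidal rule, and the test/reference ratios used to compare
# formulations. Profiles and constants are illustrative only.

def cmax_auc(t, conc):
    auc = float(np.sum((conc[1:] + conc[:-1]) * np.diff(t)) / 2.0)
    return float(np.max(conc)), auc

def ratios(t, test, reference):
    cmax_t, auc_t = cmax_auc(t, test)
    cmax_r, auc_r = cmax_auc(t, reference)
    return cmax_t / cmax_r, auc_t / auc_r

t = np.linspace(0.0, 12.0, 241)              # hours
ke, ka_ref, ka_test = 0.3, 2.0, 0.5          # 1/h, hypothetical constants

def profile(ka):
    # one-compartment oral absorption, unit dose/volume
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

cmax_ratio, auc_ratio = ratios(t, profile(ka_test), profile(ka_ref))
```

For this toy model, slower dissolution depresses Cmax markedly while the truncated AUC changes little, mirroring why the parent drug's Cmax is the sensitive metric.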

  12. An analytical statistical approach to the 3D reconstruction problem

    Energy Technology Data Exchange (ETDEWEB)

    Cierniak, Robert [Czestochowa Univ. of Technology (Poland). Inst. of Computer Engineering

    2011-07-01

    The approach presented here is concerned with the reconstruction problem of 3D spiral X-ray tomography. The reconstruction problem is formulated taking into consideration the statistical properties of the signals obtained in X-ray CT. Additionally, the image processing performed in our approach follows an analytical methodology. This conception significantly improves the quality of the reconstructed images and decreases the complexity of the reconstruction problem in comparison with other approaches. Computer simulations proved that the reconstruction algorithm schematically described here outperforms conventional analytical methods in obtained image quality. (orig.)

  13. Semi-analytical models of hydroelastic sloshing impact in tanks of liquefied natural gas vessels.

    Science.gov (United States)

    Ten, I; Malenica, Š; Korobkin, A

    2011-07-28

    The present paper deals with methods for the evaluation of the hydroelastic interactions that appear during violent sloshing impacts inside the tanks of liquefied natural gas carriers. The complexity of both the fluid flow and the structural behaviour (containment system and ship structure) does not allow for a fully consistent direct approach according to the present state of the art. Several simplifications are thus necessary in order to isolate the most dominant physical aspects and to treat them properly. In this paper, semi-analytical modelling was chosen for the hydrodynamic part and finite-element modelling for the structural part. Depending on the impact type, different hydrodynamic models are proposed, and the basic principles of hydroelastic coupling are clearly described and validated with respect to the accuracy and convergence of the numerical results.

  14. Semi-analytic calculations for the impact parameter dependence of electromagnetic multi-lepton pair production

    International Nuclear Information System (INIS)

    Gueclue, M.C.

    2000-01-01

    We provide a new general semi-analytic derivation of the impact parameter dependence of lowest order electromagnetic lepton-pair production in relativistic heavy-ion collisions. By using this result we have also calculated the related analytic multiple-pair production in the two-photon external-field model. We have compared our results with the equivalent-photon approximation and other calculations

  15. A semi-analytical study of positive corona discharge in wire–plane electrode configuration

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F; Chen, J H

    2013-01-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables. (paper)
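
    A classic closed-form result in the same spirit, the space-charge-corrected field of a unipolar coaxial corona, can indeed be evaluated in negligible time. The sketch below is illustrative only: the wire radius, onset field, current and ion mobility values are hypothetical, and the coaxial geometry is a stand-in for the paper's wire-to-plane configuration.

```python
import numpy as np

eps0 = 8.854e-12     # vacuum permittivity, F/m
mu = 2.0e-4          # ion mobility, m^2/(V s)   (illustrative)
r0 = 100e-6          # wire radius, m            (illustrative)
E0 = 6.0e6           # corona onset field at the wire, V/m (illustrative)
I_lin = 50e-6        # corona current per unit length, A/m (illustrative)

def field_unipolar(r):
    """Space-charge-corrected field of a unipolar coaxial corona:
    E(r)^2 = (E0*r0/r)^2 + I/(2*pi*eps0*mu) * (1 - (r0/r)**2)."""
    return np.sqrt((E0 * r0 / r) ** 2
                   + I_lin / (2 * np.pi * eps0 * mu) * (1 - (r0 / r) ** 2))

r = np.geomspace(r0, 0.02, 200)          # from the wire surface out to 2 cm
E = field_unipolar(r)
# Far from the wire the field approaches the space-charge-limited plateau
E_plateau = np.sqrt(I_lin / (2 * np.pi * eps0 * mu))
```

    The evaluation is a single vectorized expression, which illustrates why such semi-analytical solutions cost essentially nothing compared with a finite-element solve.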

  16. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    Science.gov (United States)

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables.

  17. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    Science.gov (United States)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

    Accident reconstruction is important in the general context of increasing road traffic safety. Among traffic accidents, those caused by tire explosions are critical in the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequence of the road event backwards. The authors also validate the two distinct analytical approaches against each other by calibrating the calculation algorithms on a case study.

  18. A semi-analytical treatment of xenon oscillations

    International Nuclear Information System (INIS)

    Zarei, M.; Minuchehr, A.; Ghaderi, R.

    2017-01-01

    Highlights: • A two-group two-region kinetic core model is developed employing the eigenvalue separation index. • Poison dynamics are investigated within an adiabatic approach. • The overall nonlinear reactor model is recast into a linear time-varying framework incorporating the matrix exponential numerical scheme. • The largest Lyapunov exponent is employed to analytically verify model stability. - Abstract: A novel approach is developed to investigate xenon oscillations within a two-group two-region coupled core reactor model incorporating thermal feedback and poison effects. Group-wise neutronic coupling coefficients between the core regions are calculated applying the associated fundamental and first mode eigenvalue separation values. The resultant nonlinear state-space representation of the core behavior is quite suitable for evaluation of reactivity-induced power transients such as load-following operation. The model, however, comprises a multi-physics coupling of sub-systems with extremely distant relaxation times, whose stiffness would require costly multistep implicit numerical methods. An adiabatic treatment of the sluggish poison dynamics is therefore proposed as a way out. The approach makes it possible to investigate the nonlinear system within a linear time-varying (LTV) framework, whereby a semi-analytical scheme is established. This scheme incorporates a matrix exponential analytical solution of the perturbed system as a quite efficient tool for the study of load-following operation and control purposes. Poison dynamics are updated within larger intervals, which excludes the need for specific numerical schemes for stiff systems. Simulation results for the axial offset of a VVER-1000 reactor at the beginning (BOC) and end of cycle (EOC) display quite acceptable results compared with available benchmarks. The LTV reactor model is further investigated within a stability analysis of the associated time-varying systems at these two stages.
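
    The matrix-exponential propagation of a linear time-varying system can be sketched in a few lines: freeze A(t) on each interval and advance with expm(A*dt). The 2x2 rotation matrix in the usage example is purely illustrative and stands in for the reactor state matrix.

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via scaling-and-squaring with a truncated Taylor series
    (adequate for the small, well-scaled A*dt matrices used here)."""
    n = int(max(0, np.ceil(np.log2(max(1e-16, np.linalg.norm(A, np.inf))))))
    As = A / 2.0**n
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ As / k
        E = E + term
    for _ in range(n):          # undo the scaling by repeated squaring
        E = E @ E
    return E

def propagate_ltv(A_of_t, x0, t_grid):
    """Piecewise-constant LTV propagation: x_{k+1} = expm(A(t_k)*dt) @ x_k."""
    x = np.asarray(x0, dtype=float)
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        x = expm(A_of_t(t0) * (t1 - t0)) @ x
    return x
```

    Because the slow poison dynamics only update A(t) at coarse intervals, each interval is advanced exactly (for the frozen matrix) without any stiff integrator.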

  19. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M*|Mh), which specify the average number of galaxies of stellar mass M* that reside in a halo of mass Mh. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  20. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    Science.gov (United States)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
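
    The computational advantage of the convolution theorem can be illustrated with a toy 1D analogue: the surface response to a distribution of force couples is a spatial convolution, evaluated as a pointwise product in the wavenumber domain. The exp(-|k|·d) transfer function below is a hypothetical stand-in for the code's viscoelastic kernel, chosen only because it has the right smoothing character.

```python
import numpy as np

def surface_response(source, dx, d):
    """Evaluate a spatial convolution as a product in k-space:
    response = ifft( fft(source) * transfer(k) )."""
    k = 2 * np.pi * np.fft.fftfreq(source.size, d=dx)
    transfer = np.exp(-np.abs(k) * d)      # hypothetical smoothing kernel, depth d
    return np.fft.ifft(np.fft.fft(source) * transfer).real

x = np.linspace(-50.0, 50.0, 512)
dx = x[1] - x[0]
source = np.exp(-x**2 / 2.0)               # localized force-couple distribution
u = surface_response(source, dx, d=5.0)    # smoothed, spread-out surface signal
```

    The cost is two FFTs regardless of how complicated the source distribution is, which is the point of working in the Fourier domain.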

  1. A semi-analytical beam model for the vibration of railway tracks

    Science.gov (United States)

    Kostovasilis, D.; Thompson, D. J.; Hussein, M. F. M.

    2017-04-01

    The high frequency dynamic behaviour of railway tracks, in both vertical and lateral directions, strongly affects the generation of rolling noise as well as other phenomena such as rail corrugation. An improved semi-analytical model of a beam on an elastic foundation is introduced that accounts for the coupling of the vertical and lateral vibration. The model includes the effects of cross-section asymmetry, shear deformation, rotational inertia and restrained warping. Consideration is given to the fact that the loads at the rail head, as well as those exerted by the railpads at the rail foot, may not act through the centroid of the section. The response is evaluated for a harmonic load and the solution is obtained in the wavenumber domain. Results are presented as dispersion curves for free and supported rails and are validated with the aid of a Finite Element (FE) and a waveguide finite element (WFE) model. Closed form expressions are derived for the forced response, and validated against the WFE model. Track mobilities and decay rates are presented to assess the potential implications for rolling noise and the influence of the various sources of vertical-lateral coupling. Comparison is also made with measured data. Overall, the model presented performs very well, especially for the lateral vibration, although it does not contain the high frequency cross-section deformation modes. The most significant effects on the response are shown to be the inclusion of torsion and foundation eccentricity, which mainly affect the lateral response.
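
    The cut-on behaviour underlying such dispersion curves can be sketched with the simplest vertical-rail analogue: an Euler-Bernoulli beam on a continuous elastic foundation, for which EI·k^4 + s = m·ω^2. The parameter values below are illustrative round numbers, not the paper's track data, and the model omits the shear, torsion and coupling effects the paper adds.

```python
import numpy as np

EI = 6.4e6      # bending stiffness, N m^2          (illustrative)
m = 60.0        # rail mass per unit length, kg/m   (illustrative)
s = 1.0e8       # foundation stiffness per unit length, N/m^2 (illustrative)

def wavenumber(omega):
    """Propagating wavenumber k = ((m*w^2 - s)/EI)^(1/4); NaN below cut-on,
    where free waves do not propagate on the supported rail."""
    val = (m * np.asarray(omega, dtype=float) ** 2 - s) / EI
    return np.where(val > 0, np.abs(val) ** 0.25, np.nan)

f_cuton = np.sqrt(s / m) / (2 * np.pi)   # foundation cut-on frequency, Hz
```

    Below f_cuton the foundation blocks free propagation; above it the dispersion curve rises toward the free-beam branch, which is the qualitative shape the semi-analytical model refines.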

  2. Bessel Fourier Orientation Reconstruction (BFOR): An Analytical Diffusion Propagator Reconstruction for Hybrid Diffusion Imaging and Computation of q-Space Indices

    Science.gov (United States)

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853
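
    The q-space indices mentioned above have closed forms for the simplest possible EAP, the isotropic Gaussian of free diffusion: Po = (4πDt)^(-3/2) and MSD = 6Dt. The sketch below checks both by direct radial quadrature; the values of D and t are illustrative, not from the paper's acquisitions.

```python
import numpy as np

D, t = 2.0e-3, 0.05                     # diffusivity (mm^2/s) and time (s), illustrative

def eap(r):
    """Isotropic Gaussian ensemble average propagator P(r) for free diffusion."""
    return (4 * np.pi * D * t) ** -1.5 * np.exp(-r**2 / (4 * D * t))

r = np.linspace(0.0, 0.2, 200001)       # displacement grid, mm
dr = r[1] - r[0]
radial = 4 * np.pi * r**2 * eap(r)      # radial probability density
norm = radial.sum() * dr                # ~1: the EAP integrates to unity
msd = (r**2 * radial).sum() * dr        # mean squared displacement, ~6*D*t
Po = eap(0.0)                           # zero-displacement probability density
```

    For real data the same integrals are taken over the reconstructed EAP basis coefficients rather than a known analytic propagator, but the indices are the same moments.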

  3. Semi-infinite photocarrier radiometric model for the characterization of semiconductor wafer

    International Nuclear Information System (INIS)

    Liu Xianming; Li Bincheng; Huang Qiuping

    2010-01-01

    The analytical expression is derived to describe the photocarrier radiometric (PCR) signal for a semi-infinite semiconductor wafer excited by a square-wave modulated laser. For comparative study, the PCR signals are calculated by the semi-infinite model and the finite thickness model with several thicknesses. The fitted errors of the electronic transport properties by semi-infinite model are analyzed. From these results it is evident that for thick samples or at high modulation frequency, the semiconductor can be considered as semi-infinite.

  4. Analytic 3D image reconstruction using all detected events

    International Nuclear Information System (INIS)

    Kinahan, P.E.; Rogers, J.G.

    1988-11-01

    We present the results of testing a previously presented algorithm for three-dimensional image reconstruction that uses all gamma-ray coincidence events detected by a PET volume-imaging scanner. By using two iterations of an analytic filter-backprojection method, the algorithm is not constrained by the requirement of a spatially invariant detector point spread function, which limits normal analytic techniques. Removing this constraint allows the incorporation of all detected events, regardless of orientation, which improves the statistical quality of the final reconstructed image

  5. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    Science.gov (United States)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).

  6. A Semi-analytical model for creep life prediction of butt-welded joints in cylindrical vessels

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    There have been many investigations of the life assessment of high-temperature weldments used in cylindrical pressure vessels, pipes and tubes over the last two decades or so. But to the author's knowledge, there currently exists no practical, economical and relatively accurate model for creep life assessment of butt-welded joints in cylindrical pressure vessels. This paper describes a semi-analytical and economical model for creep life assessment of butt-welded joints. The first stage of the development of the model is described, where the model takes into account the material discontinuities at the welded joint only. The development of the model to include other factors such as geometrical stress concentrations, residual stresses, etc. will be reported separately. It has been shown that the proposed model can estimate the redistribution of stresses in the weld and HAZ with an error of less than 4%. It has also been shown that the proposed model can conservatively predict the creep life of a butt-welded joint with an error of less than 16%.

  7. On accuracy problems for semi-analytical sensitivity analyses

    DEFF Research Database (Denmark)

    Pedersen, P.; Cheng, G.; Rasmussen, John

    1989-01-01

    The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried out... pseudo loads in order to obtain general load equilibrium with rigid body motions. Such a method would be readily applicable for any element type, whether analytical expressions for the element stiffnesses are available or not. This topic is postponed for a future study.

  8. Semi-analytical calculation of fuel parameters for shock ignition fusion

    Directory of Open Access Journals (Sweden)

    S A Ghasemi

    2017-02-01

    Full Text Available In this paper, semi-analytical relations for the total energy, fuel gain and hot-spot radius in a non-isobaric model have been derived and compared with the numerical calculations of Schmitt (2010) for the shock ignition scenario in nuclear fusion. Results indicate that the approximations used by Rosen (1983) and Schmitt (2010) for the calculation of the burn-up fraction are not accurate enough compared with numerical simulation. Meanwhile, it is shown that the obtained formulas of the non-isobaric model cannot determine the model parameters of total energy, fuel gain and hot-spot radius uniquely. Therefore, employing more appropriate approximations, improved semi-analytical relations for the non-isobaric model have been presented, which are in better agreement with the numerical calculations of shock ignition by Schmitt (2010).

  9. Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.

    Science.gov (United States)

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2010-01-01

    How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe the fiber direction. However, the ODF is just one of the features of the EAP. Compared with the ODF, the EAP carries the full information about the diffusion process that reflects the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods to estimate the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI needs many samples and very large b values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method, which needs almost no assumptions about the data and does not require many samples. SPFI naturally combines the DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.

  10. Analytical reconstruction schemes for coarse-mesh spectral nodal solution of slab-geometry SN transport problems

    International Nuclear Information System (INIS)

    Barros, R. C.; Filho, H. A.; Platt, G. M.; Oliveira, F. B. S.; Militao, D. S.

    2009-01-01

    Coarse-mesh numerical methods are very efficient in the sense that they generate accurate results in short computational time, as the number of floating point operations generally decrease, as a result of the reduced number of mesh points. On the other hand, they generate numerical solutions that do not give detailed information on the problem solution profile, as the grid points can be located considerably away from each other. In this paper we describe two analytical reconstruction schemes for the coarse-mesh solution generated by the spectral nodal method for the neutral particle discrete ordinates (S_N) transport model in slab geometry. The first scheme we describe is based on the analytical reconstruction of the coarse-mesh solution within each discretization cell of the spatial grid set up on the slab. The second scheme is based on the angular reconstruction of the discrete ordinates solution between two contiguous ordinates of the angular quadrature set used in the S_N model. Numerical results are given so we can illustrate the accuracy of the two reconstruction schemes, as described in this paper. (authors)

  11. The ''2T'' ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    International Nuclear Information System (INIS)

    Ferguson, Jim Michael

    2016-01-01

    This report documents an effort to generate the semi-analytic '2T' ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write an xRAGE input deck that matches the boundary conditions imposed by the solution, and into determining what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent in xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent '2T' ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  12. Elasto-plastic strain analysis by a semi-analytical method

    Indian Academy of Sciences (India)

    deformation problems following a semi-analytical method, incorporating the com- ..... The set of equations in (8) is non-linear in nature, and is solved by direct ...... Here, [K] and [M] are the stiffness matrix and mass matrix, which are of the form ...

  13. PROBING THE ROLE OF DYNAMICAL FRICTION IN SHAPING THE BSS RADIAL DISTRIBUTION. I. SEMI-ANALYTICAL MODELS AND PRELIMINARY N-BODY SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Miocchi, P.; Lanzoni, B.; Ferraro, F. R.; Dalessandro, E.; Alessandrini, E. [Dipartimento di Fisica e Astronomia, Università di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna (Italy); Pasquato, M.; Lee, Y.-W. [Department of Astronomy and Center for Galaxy Evolution Research, Yonsei University, Seoul 120-749 (Korea, Republic of); Vesperini, E. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States)

    2015-01-20

    We present semi-analytical models and simplified N-body simulations with 10^4 particles aimed at probing the role of dynamical friction (DF) in determining the radial distribution of blue straggler stars (BSSs) in globular clusters. The semi-analytical models show that DF (which is the only evolutionary mechanism at work) is responsible for the formation of a bimodal distribution with a dip progressively moving toward the external regions of the cluster. However, these models fail to reproduce the formation of the long-lived central peak observed in all dynamically evolved clusters. The results of N-body simulations confirm the formation of a sharp central peak, which remains as a stable feature over time regardless of the initial concentration of the system. In spite of noisy behavior, a bimodal distribution forms in many cases, with the size of the dip increasing as a function of time. In the most advanced stages, the distribution becomes monotonic. These results are in agreement with the observations. Also, the shape of the peak and the location of the minimum (which, in most of cases, is within 10 core radii) turn out to be consistent with observational results. For a more detailed and close comparison with observations, including a proper calibration of the timescales of the dynamical processes driving the evolution of the BSS spatial distribution, more realistic simulations will be necessary.
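
    The outward-moving dip follows from the radial scaling of the Chandrasekhar-style sinking time, which grows roughly as r^2, so heavy stars at small radii sink first. The sketch below uses an illustrative coefficient and unit conversion, not the paper's calibration.

```python
import numpy as np

G = 4.30e-3                      # gravitational constant, pc (km/s)^2 / Msun

def t_df_myr(r_pc, sigma_kms, m_msun, ln_lambda=10.0):
    """Order-of-magnitude dynamical-friction sinking time,
    t_df ~ 1.17 * r^2 * sigma / (G * m * lnLambda), with 1 pc/(km/s) ~ 0.978 Myr.
    Coefficients are illustrative, not the paper's calibration."""
    return 1.17 * r_pc**2 * sigma_kms / (G * m_msun * ln_lambda) * 0.978

# BSS-like stars, heavier than the average cluster star, sink faster, and the
# r^2 dependence means the depleted zone (the dip) expands outward with time.
```

    Evaluating this for a range of radii reproduces the qualitative picture in the abstract: small radii deplete early (feeding the central peak) while the dip migrates outward.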

  14. Porous media: Analysis, reconstruction and percolation

    DEFF Research Database (Denmark)

    Rogon, Thomas Alexander

    1995-01-01

    functions of Gaussian fields and spatial autocorrelation functions of binary fields. An enhanced approach which embodies semi-analytical solutions for the conversions has been made. The scope and limitations of the method have been analysed in terms of realizability of different model correlation functions...... stereological methods. The measured sample autocorrelations are modeled by analytical correlation functions. A method for simulating porous networks from their porosity and spatial correlation originally developed by Joshi (14) is presented. This method is based on a conversion between spatial autocorrelation...... in binary fields. Percolation threshold of reconstructed porous media has been determined for different discretizations of a selected model correlation function. Also critical exponents such as the correlation length exponent v, the strength of the infinite network and the mean size of finite clusters have...

  15. A semi-analytical analysis of electro-thermo-hydrodynamic stability in dielectric nanofluids using Buongiorno's mathematical model together with more realistic boundary conditions

    Science.gov (United States)

    Wakif, Abderrahim; Boulahia, Zoubair; Sehaqui, Rachid

    2018-06-01

    The main aim of the present analysis is to examine the electroconvection phenomenon that takes place in a dielectric nanofluid under the influence of a perpendicularly applied alternating electric field. In this investigation, we assume that the nanofluid has a Newtonian rheological behavior and verifies the Buongiorno's mathematical model, in which the effects of thermophoretic and Brownian diffusions are incorporated explicitly in the governing equations. Moreover, the nanofluid layer is taken to be confined horizontally between two parallel plate electrodes, heated from below and cooled from above. In a fast pulse electric field, the onset of electroconvection is due principally to the buoyancy forces and the dielectrophoretic forces. Within the framework of the Oberbeck-Boussinesq approximation and the linear stability theory, the governing stability equations are solved semi-analytically by means of the power series method for isothermal, no-slip and non-penetrability conditions. In addition, the computational implementation with the impermeability condition implies that there exists no nanoparticles mass flux on the electrodes. On the other hand, the obtained analytical solutions are validated by comparing them to those available in the literature for the limiting case of dielectric fluids. In order to check the accuracy of our semi-analytical results obtained for the case of dielectric nanofluids, we perform further numerical and semi-analytical computations by means of the Runge-Kutta-Fehlberg method, the Chebyshev-Gauss-Lobatto spectral method, the Galerkin weighted residuals technique, the polynomial collocation method and the Wakif-Galerkin weighted residuals technique. In this analysis, the electro-thermo-hydrodynamic stability of the studied nanofluid is controlled through the critical AC electric Rayleigh number Rec , whose value depends on several physical parameters. Furthermore, the effects of various pertinent parameters on the electro

  16. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.
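
    The core idea, that one axial strain component plus a model assumption suffices for a relative modulus map, can be shown with the simplest case: under uniform axial stress, a stiff inclusion strains less, so relative Young's modulus goes as the reciprocal of strain. This toy sketch is illustrative and is not the paper's analytic forward solution for the imaged geometries.

```python
import numpy as np

def relative_modulus(axial_strain, eps=1e-6):
    """Relative Young's modulus under a uniform-axial-stress assumption:
    E_rel ~ 1/|strain|, normalized to its maximum; eps guards divide-by-zero."""
    inv = 1.0 / (np.abs(axial_strain) + eps)
    return inv / inv.max()

strain = np.full((64, 64), 0.02)       # soft background strains 2%
strain[24:40, 24:40] = 0.005           # stiff inclusion strains 4x less
E = relative_modulus(strain)           # inclusion maps to 1.0, background to ~0.25
```

    Real reconstructions replace this reciprocal rule with the analytic solution of the forward elastic problem for the assumed geometry, which is what makes the method fast.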

  17. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    Science.gov (United States)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and probability of a ship rolling under the random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by the numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of parametric and forced excitations. The stochastic energy envelope averaging method was used to solve the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and the heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in the oblique seas.

  18. A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications

    Science.gov (United States)

    Robin, J.; Tanter, M.; Pernot, M.

    2017-09-01

    Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device only depends on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device’s gain is usually a cumbersome process, mostly empirical, and requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity and multiple scattering in a multiple scattering medium. It was validated numerically and experimentally using a 2D-TRC and numerically using a 3D-TRC. Finally, the model was used to determine rapidly the optimal parameters of the 3D-TRC which had been confirmed by numerical simulations.
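
    The statement that the peak pressure gain depends only on the temporal length of the impulse response can be illustrated with a toy 1D refocusing experiment: re-emitting the time-reversed impulse response h produces the autocorrelation of h, whose central peak is the response energy. The decay envelope and tap counts below are illustrative.

```python
import numpy as np

def tr_peak_gain(n_taps, seed=0):
    """Peak-to-sidelobe gain of a time-reversal refocus for a synthetic
    reverberant impulse response of n_taps samples (illustrative model)."""
    rng = np.random.default_rng(seed)
    # random reverberant response with an exponential decay envelope
    h = rng.standard_normal(n_taps) * np.exp(-np.arange(n_taps) / n_taps)
    y = np.convolve(h, h[::-1])            # refocused field at the focus
    peak = np.max(np.abs(y))               # t = 0 peak equals the energy of h
    sidelobes = np.delete(y, np.argmax(np.abs(y)))
    return peak / np.std(sidelobes)

# A longer reverberation time (longer impulse response) yields a higher gain,
# which is why optimising the cavity's impulse-response length matters.
```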

  19. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  20. Semi-analytic variable charge solitary waves involving dust phase-space vortices (holes)

    Energy Technology Data Exchange (ETDEWEB)

    Tribeche, Mouloud; Younsi, Smain; Amour, Rabia; Aoutou, Kamel [Plasma Physics Group, Faculty of Sciences-Physics, Theoretical Physics Laboratory, University of Bab-Ezzouar, USTHB BP 32, El Alia, Algiers 16111 (Algeria)], E-mail: mtribeche@usthb.dz

    2009-09-15

    A semi-analytic model for highly nonlinear solitary waves involving dust phase-space vortices (holes) is outlined. The variable dust charge is expressed in terms of the Lambert function and we take advantage of this transcendental function to investigate the localized structures that may occur in a dusty plasma with variable charge trapped dust particles. Our results which complement the previously published work on this problem (Schamel et al 2001 Phys. Plasmas 8 671) should be of basic interest for experiments that involve the trapping of dust particles in ultra-low-frequency dust acoustic modes.
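The Lambert function W mentioned above is defined by W(x)·exp(W(x)) = x and is available in SciPy; a quick numerical check of the defining relation (the paper's actual charge equation is not reproduced here):

```python
import numpy as np
from scipy.special import lambertw

x = np.linspace(0.1, 5.0, 50)
w = lambertw(x).real                 # principal branch, real for x >= -1/e
residual = np.max(np.abs(w * np.exp(w) - x))
```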

  1. Semi-analytic variable charge solitary waves involving dust phase-space vortices (holes)

    International Nuclear Information System (INIS)

    Tribeche, Mouloud; Younsi, Smain; Amour, Rabia; Aoutou, Kamel

    2009-01-01

    A semi-analytic model for highly nonlinear solitary waves involving dust phase-space vortices (holes) is outlined. The variable dust charge is expressed in terms of the Lambert function and we take advantage of this transcendental function to investigate the localized structures that may occur in a dusty plasma with variable charge trapped dust particles. Our results which complement the previously published work on this problem (Schamel et al 2001 Phys. Plasmas 8 671) should be of basic interest for experiments that involve the trapping of dust particles in ultra-low-frequency dust acoustic modes.

  2. Semi-analytical solution to arbitrarily shaped beam scattering

    Science.gov (United States)

    Wang, Wenjie; Zhang, Huayong; Sun, Yufa

    2017-07-01

    Based on field expansions in terms of appropriate spherical vector wave functions and a method of moments scheme, an exact semi-analytical solution to the scattering of an arbitrarily shaped beam is given. For incidence of a Gaussian beam, a zero-order Bessel beam, and Hertzian electric dipole radiation, numerical results of the normalized differential scattering cross section are presented for a spheroid and a circular cylinder of finite length, and the scattering properties are analyzed concisely.

  3. Electro-osmotic and pressure-driven flow of viscoelastic fluids in microchannels: Analytical and semi-analytical solutions

    Science.gov (United States)

    Ferrás, L. L.; Afonso, A. M.; Alves, M. A.; Nóbrega, J. M.; Pinho, F. T.

    2016-09-01

    In this work, we present a series of solutions for combined electro-osmotic and pressure-driven flows of viscoelastic fluids in microchannels. The solutions are semi-analytical, a feature made possible by the use of the Debye-Hückel approximation for the electrokinetic fields, thus restricted to cases with small electric double-layers, in which the distance between the microfluidic device walls is at least one order of magnitude larger than the electric double-layer thickness. To describe the complex fluid rheology, several viscoelastic differential constitutive models were used, namely, the simplified Phan-Thien-Tanner model with linear, quadratic or exponential kernel for the stress coefficient function, the Johnson-Segalman model, and the Giesekus model. The results obtained illustrate the effects of the Weissenberg number, the Johnson-Segalman slip parameter, the Giesekus mobility parameter, and the relative strengths of the electro-osmotic and pressure gradient-driven forcings on the dynamics of these viscoelastic flows.
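A feel for the combined flow shape can be had from the Newtonian limiting case under the same Debye-Hückel (thin double layer) assumption: a plug-like electro-osmotic profile superposed on a Poiseuille parabola. The paper treats viscoelastic fluids, so this is only a sketch of the velocity shape, with illustrative nondimensional parameters.

```python
import numpy as np

H = 1.0               # channel half-height
kappa = 20.0          # inverse Debye length (kappa * H >> 1: thin double layer)
u_eo, u_p = 1.0, 0.5  # electro-osmotic and pressure-driven velocity scales

y = np.linspace(-H, H, 401)
# plug-like EO profile (Debye-Hueckel) plus the Poiseuille parabola
u = u_eo * (1.0 - np.cosh(kappa * y) / np.cosh(kappa * H)) \
    + u_p * (1.0 - (y / H) ** 2)
```

The no-slip condition holds at both walls, and for large kappa*H the centerline velocity approaches u_eo + u_p.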

  4. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes, which combines computer vision techniques with efficient interaction. We use panoramas, which are popular for visualizing indoor scenes thanks to their wide field of view but are clearly unable to convey depth, as

  5. Analytical and Experimental Evaluation of Digital Control Systems for the Semi-Span Super-Sonic Transport (S4T) Wind Tunnel Model

    Science.gov (United States)

    Wieseman, Carol D.; Christhilf, David; Perry, Boyd, III

    2012-01-01

    An important objective of the Semi-Span Super-Sonic Transport (S4T) wind tunnel model program was the demonstration of Flutter Suppression (FS), Gust Load Alleviation (GLA), and Ride Quality Enhancement (RQE). It was critical to evaluate the stability and robustness of these control laws analytically before testing them, and experimentally while testing them, to ensure the safety of the model and the wind tunnel. MATLAB-based software was applied to evaluate the performance of the closed-loop systems in terms of stability and robustness. Existing software tools were extended to use analytical representations of the S4T and the control laws to analyze and evaluate the control laws prior to testing. Lessons were learned about the complex wind-tunnel model and experimental testing. The open-loop flutter boundary was determined from the closed-loop systems. A MATLAB/Simulink simulation developed under the program is available for future work to improve the CPE process. This paper is one of a series that comprises a special session summarizing the S4T wind-tunnel program.

  6. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

    The aims of the study were (1) to evaluate reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method was implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions of ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with iterative statistical methods: the ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM methods. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate, and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs, and binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with
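The idea of interpolating across detector gaps before analytical reconstruction can be sketched in one dimension: fill a masked stretch of a smooth sinogram row by cubic interpolation along the radial coordinate. This is a simplified stand-in for the transradial bicubic scheme, with illustrative toy data.

```python
import numpy as np
from scipy.interpolate import interp1d

r = np.linspace(-1.0, 1.0, 100)        # radial sample positions in one row
row = np.exp(-4.0 * r ** 2)            # smooth projection profile (toy data)

gap = np.zeros_like(r, dtype=bool)
gap[40:46] = True                      # simulated block gap in the sinogram

filler = interp1d(r[~gap], row[~gap], kind='cubic')
filled = row.copy()
filled[gap] = filler(r[gap])           # interpolate only the missing bins
gap_error = np.max(np.abs(filled[gap] - row[gap]))
```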

  7. Application of Semi-analytical Satellite Theory orbit propagator to orbit determination for space object catalog maintenance

    Science.gov (United States)

    Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke

    2016-05-01

    Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever-increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short-periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with high-fidelity force models for a comprehensive grid of low-, medium-, and high-altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size, and gravity field order to assess performance for a full range of relevant use cases. For use in orbit determination, robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short-periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.
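A flavor of what "semi-analytical" buys: instead of integrating the full equations of motion, mean elements are advanced with closed-form secular rates. The sketch below implements only the dominant J2 secular drift of the node and perigee (standard textbook formulas, not the DSST force model), with illustrative orbit values.

```python
import numpy as np

MU = 398600.4418       # Earth GM, km^3/s^2
RE = 6378.137          # Earth equatorial radius, km
J2 = 1.08262668e-3

def j2_secular_rates(a, e, i):
    """Secular rates (rad/s) of RAAN and argument of perigee due to J2."""
    n = np.sqrt(MU / a ** 3)               # mean motion
    p = a * (1 - e ** 2)                   # semi-latus rectum
    f = 1.5 * J2 * (RE / p) ** 2 * n
    raan_dot = -f * np.cos(i)
    argp_dot = 0.5 * f * (5 * np.cos(i) ** 2 - 1)
    return raan_dot, argp_dot

# sun-synchronous-like LEO: for i > 90 deg the node drifts eastward,
# at roughly 2*pi per year (~1.99e-7 rad/s)
raan_dot, argp_dot = j2_secular_rates(7078.0, 0.001, np.radians(98.0))
```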

  8. Semi-analytical Study of a One-dimensional Contaminant Flow in a ...

    African Journals Online (AJOL)

    ADOWIE PERE

    ABSTRACT: The Bubnov-Galerkin weighted residual method was used to solve a one-dimensional contaminant flow problem in this paper. The governing equation of the contaminant flow, which is characterized by advection, dispersion and adsorption, was discretized and solved to obtain the semi-analytical solution.
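A minimal Bubnov-Galerkin sketch in the same spirit (advection-dispersion only; the adsorption term and the paper's actual coefficients are omitted): expand the concentration in sine trial functions on top of a linear lift and require the weighted residuals to vanish. All numbers are illustrative.

```python
import numpy as np

# Steady 1-D advection-dispersion: -D c'' + v c' = 0, c(0) = 1, c(L) = 0.
D, v, L, N = 0.5, 1.0, 1.0, 10
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def integ(y):                           # composite trapezoid along last axis
    return (y[..., :-1] + y[..., 1:]).sum(axis=-1) * dx / 2.0

k = np.arange(1, N + 1)[:, None] * np.pi / L
phi = np.sin(k * x)                     # (N, nx) sine trial/test functions
dphi = k * np.cos(k * x)
d2phi = -(k ** 2) * np.sin(k * x)

# Galerkin system: sum_n a_n <phi_m, -D phi_n'' + v phi_n'> = <phi_m, v/L>,
# the right-hand side coming from the linear lift c_lin = 1 - x/L.
A = integ(phi[:, None, :] * (-D * d2phi + v * dphi)[None, :, :])
b = integ(phi * (v / L))
a = np.linalg.solve(A, b)

c = (1.0 - x / L) + a @ phi
exact = (np.exp(v * L / D) - np.exp(v * x / D)) / (np.exp(v * L / D) - 1.0)
max_err = np.max(np.abs(c - exact))
```

With ten modes at this mild Péclet number the Galerkin solution tracks the exact exponential profile closely; adding a first-order decay/adsorption term only changes the operator inside `integ`.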

  9. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    Science.gov (United States)

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparing the accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed best. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the normalized root mean square error (NRMSE) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
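The generic shape of such band-ratio semi-analytical algorithms is Chla ≈ A·[R(λ1)⁻¹ − R(λ2)⁻¹]·R(λ3) + B (the three-band form); the band positions and calibration below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def three_band_index(R1, R2, R3):
    # three-band reflectance index: (1/R1 - 1/R2) * R3
    return (1.0 / R1 - 1.0 / R2) * R3

# toy calibration against a synthetic Chla "truth" that is exactly linear in
# the index (real calibrations regress against in-situ measurements)
rng = np.random.default_rng(1)
R1 = rng.uniform(0.01, 0.05, 30)   # e.g. a red band (hypothetical position)
R2 = rng.uniform(0.02, 0.06, 30)   # e.g. a red-edge band
R3 = rng.uniform(0.03, 0.07, 30)   # e.g. a near-infrared band
X = three_band_index(R1, R2, R3)
chla = 10.0 * X + 2.0              # synthetic "measured" Chla
A, B = np.polyfit(X, chla, 1)      # recovered calibration slope/intercept
```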

  10. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    Science.gov (United States)

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel-beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the reference algebraic methodology, which is widely explored in the literature and exploited in various commercial implementations.

  11. Quantum state engineering and reconstruction in cavity QED. An analytical approach

    International Nuclear Information System (INIS)

    Lougovski, P.

    2004-01-01

    Models of a strongly driven micromaser and a one-atom laser are developed. Their analytical solutions are obtained by means of phase-space techniques. It is shown how to exploit the one-atom laser model for simultaneous generation and monitoring of the decoherence of atom-field ''Schroedinger cat'' states. Similar machinery applied to the problem of generating maximally entangled states of two atoms placed inside an optical cavity permits its analytical solution. The steady-state solution of the problem exhibits a structure in which the two-atom maximally entangled state correlates with the vacuum state of the cavity. As a consequence, it is demonstrated that the atomic maximally entangled state, depending on the coupling regime, can be produced via a single no-photon measurement or a sequence of them. The question of implementing a quantum memory device using a dispersive interaction between the collective internal ground state of an atomic ensemble and two orthogonal modes of a cavity is addressed. The problem of quantum state reconstruction in the context of cavity quantum electrodynamics is considered. The optimal operational definition of the Wigner function of a cavity field is worked out; it is based on the Fresnel transform of the atomic inversion of a probe atom. The general integral transformation for the Wigner function reconstruction of a particle in an arbitrary symmetric potential is derived.

  12. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    International Nuclear Information System (INIS)

    Gao, Z.; Xu, Y.; Downar, T.

    2013-01-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, the sine functions become zero, and the cosine functions become unity. In this case, the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. The flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine and its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
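The proposed fix can be sketched directly: replace sin(Bx) by sin(Bx)/B, which tends to x rather than 0 as the buckling frequency B approaches zero, so the basis stays linearly independent at a critical node. A numerically safe implementation (hypothetical helper name):

```python
import numpy as np

def msin(B, x):
    # modified sine sin(B*x)/B via np.sinc (np.sinc(z) = sin(pi*z)/(pi*z));
    # finite and equal to x in the critical-node limit B -> 0
    return x * np.sinc(B * x / np.pi)

x = 0.7
near_critical = msin(1e-6, x)   # ~ x: the linear shape is preserved
critical = msin(0.0, x)         # exactly x, no division by zero
```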

  13. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Z. [Rice University, MS 318, 6100 Main Street, Houston, TX 77005 (United States); Xu, Y. [Argonne National Laboratory, 9700 South Case Ave., Argonne, IL 60439 (United States); Downar, T. [Department of Nuclear Engineering, University of Michigan, 2355 Bonisteel blvd., Ann Arbor, MI 48109 (United States)

    2013-07-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, the sine functions become zero, and the cosine functions become unity. In this case, the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. The flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine and its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)

  14. A SEMI-ANALYTICAL LINE TRANSFER MODEL TO INTERPRET THE SPECTRA OF GALAXY OUTFLOWS

    International Nuclear Information System (INIS)

    Scarlata, C.; Panagia, N.

    2015-01-01

    We present a semi-analytical line transfer (SALT) model to study absorption and re-emission line profiles from expanding galactic envelopes. The envelopes are described as a superposition of shells with density and velocity varying with distance from the center. We adopt the Sobolev approximation to describe the interaction between the photons escaping from each shell and the remainder of the envelope. We include the effect of multiple scatterings within each shell, properly accounting for the atomic structure of the scattering ions. We also account for the effect of a finite circular aperture on actual observations. For equal geometries and density distributions, our models reproduce the main features of the profiles generated with more complicated transfer codes. Our SALT line profiles also nicely reproduce the typical asymmetric resonant absorption line profiles observed in star-forming/starburst galaxies, whereas these absorption profiles cannot be reproduced with thin shells moving at a fixed outflow velocity. We show that scattered resonant emission fills in the resonant absorption profiles, with a strength that differs for each transition. Observationally, the effect of resonant filling depends on both the outflow geometry and the size of the outflow relative to the spectroscopic aperture. Neglecting these effects will lead to incorrect values of the gas covering fraction and column density. When a fluorescent channel is available, the resonant profiles alone cannot be used to infer the presence of scattered re-emission. Conversely, the presence of emission lines of fluorescent transitions reveals that emission filling cannot be neglected.
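In the Sobolev approximation the radiative coupling reduces to a local escape probability β(τ) = (1 − e^(−τ))/τ per shell. This standard expression (not the full SALT profile calculation) is sketched below.

```python
import numpy as np

def sobolev_beta(tau):
    # escape probability beta(tau) = (1 - exp(-tau)) / tau, with the
    # tau -> 0 limit (beta -> 1) handled explicitly
    tau = np.asarray(tau, dtype=float)
    safe = np.maximum(tau, 1e-300)          # avoid 0/0 at tau == 0
    return np.where(tau < 1e-12, 1.0, -np.expm1(-tau) / safe)

beta = sobolev_beta([0.0, 0.1, 1.0, 10.0])  # optically thin -> optically thick
```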

  15. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    International Nuclear Information System (INIS)

    Lu, Yu; Wechsler, Risa H.; Somerville, Rachel S.; Croton, Darren; Porter, Lauren; Primack, Joel; Moody, Chris; Behroozi, Peter S.; Ferguson, Henry C.; Koo, David C.; Guo, Yicheng; Safarzadeh, Mohammadtaher; White, Catherine E.; Finlator, Kristian; Castellano, Marco; Sommariva, Veronica

    2014-01-01

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  16. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yu; Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Somerville, Rachel S. [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Croton, Darren [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, VIC 3122 (Australia); Porter, Lauren; Primack, Joel; Moody, Chris [Department of Physics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Behroozi, Peter S.; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Koo, David C.; Guo, Yicheng [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Safarzadeh, Mohammadtaher; White, Catherine E. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Finlator, Kristian [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen (Denmark); Castellano, Marco; Sommariva, Veronica, E-mail: luyu@stanford.edu, E-mail: rwechsler@stanford.edu [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monteporzio (Italy)

    2014-11-10

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  17. Analytical reconstructions for PET and spect employing L1-denoising

    KAUST Repository

    Barbano, PE.

    2009-07-01

    We propose an efficient, deterministic algorithm designed to reconstruct images from real Radon-transform and attenuated-Radon-transform data. Its input consists of a small family of recorded signals, each sampling the same composite photon or positron emission scene over a non-Gaussian, noisy channel. The reconstruction is performed by combining a novel numerical implementation of an analytical inversion formula [1] with a novel signal processing technique inspired by the work of Tao and Candes [2] on code reconstruction. Our approach is proven to be optimal under a variety of realistic assumptions. We also indicate several medical imaging applications for which the new technology achieves high fidelity, even when dealing with real data subject to substantial non-Gaussian distortions.
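One standard ingredient of such L1-based recovery schemes is the soft-thresholding operator, the proximal map of the l1 norm used inside iterative sparse denoising; the sketch below is a generic illustration, not the paper's inversion pipeline.

```python
import numpy as np

def soft_threshold(x, lam):
    # prox of lam*||.||_1: shrink each entry toward zero by lam, clip at zero
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

noisy = np.array([3.0, -0.2, 0.05, -2.5, 0.4])
denoised = soft_threshold(noisy, 0.5)   # small entries vanish, large ones shrink
```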

  18. Modeling Change of Topographic Spatial Structures with DEM Resolution Using Semi-Variogram Analysis and Filter Bank

    Directory of Open Access Journals (Sweden)

    Chunmei Wang

    2016-06-01

    In this paper, the way topographic spatial information changes with resolution was investigated using semi-variograms and an Independent Structures Model (ISM) to identify the mechanisms involved in changes of topographic parameters as resolution becomes coarser or finer. A typical loess hilly area in the Loess Plateau of China was taken as the study area. DEMs with resolutions of 2.5 m and 25 m were derived from topographic maps at a map scale of 1:10,000 using ANUDEM software. The ISM, in which the semi-variogram is modeled as the sum of component semi-variograms, was used to model the measured semi-variogram of the elevation surface. Components were modeled using an analytic ISM model, and the corresponding landscape components were identified using Kriging and filter bank analyses. The change in the spatial components as resolution became coarser was investigated by modeling upscaling as a low-pass linear filter and applying a general result to obtain an analytic model for the scaling process in terms of semi-variance. This investigation demonstrated how topographic structures can be effectively characterised over varying scales using the ISM model for the semi-variogram. The loss of information in the short-range components with resolution is a major driver of the observed change in derived topographic parameters such as slope. This paper helps to quantify how information is distributed among scale components and how it is lost in natural terrain surfaces as resolution becomes coarser, and it is a basis for further applications in the field of geomorphometry.
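The basic measurement here is the empirical semi-variogram, γ(h) = 0.5·mean[(z(x+h) − z(x))²]. The sketch below computes it for a synthetic 1-D "terrain" profile; an ISM fit would then model γ as a sum of component semi-variograms (the fit itself is omitted, and the profile is illustrative, not the study's DEM data).

```python
import numpy as np

rng = np.random.default_rng(2)
z = np.cumsum(rng.standard_normal(2000))   # Brownian-like elevation profile

def semivariogram(z, lags):
    # gamma(h) = 0.5 * mean of squared increments at lag h
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

lags = np.array([1, 5, 10, 20, 40])
gamma = semivariogram(z, lags)             # grows with lag for this profile
```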

  19. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Theory and user's manual

    International Nuclear Information System (INIS)

    Rood, A.S.

    1992-03-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and nonradioactive substances from surface or buried sources. The code was designed for implementation in the Track 1 and Track 2 assessments of Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) sites identified as low-probability hazards at the Idaho National Engineering Laboratory (DOE, 1991). The code calculates the limiting soil concentration such that regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation for a transient mass flux input.
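The saturated-zone step can be illustrated with the textbook 1-D advection-dispersion solution for an instantaneous unit-mass release (illustrative, nondimensional parameters; GWSCREEN's actual solution handles a transient mass flux input):

```python
import numpy as np

# c(x, t) = M / sqrt(4*pi*D*t) * exp(-(x - v*t)^2 / (4*D*t))
M, v, D, t = 1.0, 1.0, 0.1, 5.0
x = np.linspace(-5.0, 15.0, 4001)
c = M / np.sqrt(4 * np.pi * D * t) * np.exp(-((x - v * t) ** 2) / (4 * D * t))

dx = x[1] - x[0]
mass = ((c[:-1] + c[1:]) / 2).sum() * dx   # trapezoid: total mass is conserved
peak_x = x[np.argmax(c)]                   # plume center advects to x = v*t
```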

  20. Using Fourier and Taylor series expansion in semi-analytical deformation analysis of thick-walled isotropic and wound composite structures

    Directory of Open Access Journals (Sweden)

    Jiran L.

    2016-06-01

    Full Text Available Thick-walled tubes made from isotropic and anisotropic materials are subjected to an internal pressure while the semi-analytical method is employed to investigate their elastic deformations. The contribution and novelty of this method is that it works universally for different loads, different boundary conditions, and different geometries of analyzed structures. Moreover, even when composite material is considered, the method requires no simplifying assumptions. The method uses a curvilinear tensor calculus and it works with the analytical expression of the total potential energy, while the unknown displacement functions are approximated using appropriate series expansions. Fourier and Taylor series expansions are incorporated into the analysis, in which they are tested and compared. The main potential of the proposed method is in analyses of wound composite structures, where a simple description of the geometry is made in a curvilinear coordinate system while material properties are described in their inherent Cartesian coordinate system. Validations of the introduced semi-analytical method are performed by comparing results with those obtained from three-dimensional finite element analysis (FEA). Calculations with Fourier series expansion show noticeable disagreement with results from the finite element model because Fourier series expansion is not able to capture the course of radial deformation. Therefore, it can be used only for rough estimations of the shape after deformation. On the other hand, the semi-analytical method with Taylor series expansion works very well for both types of material. Its predictions of deformations are reliable and widely exploitable.

  1. The effect of gas dynamics on semi-analytic modelling of cluster galaxies

    Science.gov (United States)

    Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.

    2008-12-01

    We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of `orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.

  2. Semi-analytical solution for flow in a leaky unconfined aquifer toward a partially penetrating pumping well

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; Barrash, Warren

    2008-07-01

    A semi-analytical solution is presented for the problem of flow in a system consisting of unconfined and confined aquifers, separated by an aquitard. The unconfined aquifer is pumped continuously at a constant rate from a well of infinitesimal radius that partially penetrates its saturated thickness. The solution is termed semi-analytical because the exact solution obtained in double Laplace-Hankel transform space is inverted numerically. The solution presented here is more general than similar solutions obtained for confined aquifer flow as we do not adopt the assumption of unidirectional flow in the confined aquifer (typically assumed to be horizontal) and the aquitard (typically assumed to be vertical). Model predicted results show significant departure from the solution that does not take into account the effect of leakage, even for cases where aquitard hydraulic conductivities are two orders of magnitude smaller than those of the aquifers. The results show low sensitivity to changes in radial hydraulic conductivities for aquitards that are two or more orders of magnitude smaller than those of the aquifers, in conformity with findings of earlier workers that radial flow in aquitards may be neglected under such conditions. Hence, for cases where aquitard hydraulic conductivities are two or more orders of magnitude smaller than aquifer conductivities, the simpler models that restrict flow to the radial direction in aquifers and to the vertical direction in aquitards may be sufficient. However, the model developed here can be used to model flow in aquifer-aquitard systems where radial flow is significant in aquitards.
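
    The solution above is obtained in transform space and inverted numerically. One standard tool for inverting Laplace-space drawdown solutions is the Gaver-Stehfest algorithm, sketched below (the paper's actual inversion of the double Laplace-Hankel transform may differ; `N=12` and the test transforms are illustrative):

    ```python
    import math

    def stehfest_invert(F, t, N=12):
        """Gaver-Stehfest numerical inversion of a Laplace transform F(s)
        at time t > 0. N must be even; accuracy is eventually limited by
        floating-point cancellation among the large alternating weights."""
        ln2 = math.log(2.0)
        half = N // 2
        total = 0.0
        for i in range(1, N + 1):
            # Stehfest weight V_i
            v = 0.0
            for k in range((i + 1) // 2, min(i, half) + 1):
                v += (k ** half * math.factorial(2 * k)
                      / (math.factorial(half - k) * math.factorial(k)
                         * math.factorial(k - 1) * math.factorial(i - k)
                         * math.factorial(2 * k - i)))
            v *= (-1) ** (half + i)
            total += v * F(i * ln2 / t)
        return total * ln2 / t
    ```

    The scheme only needs F(s) at real s, which makes it popular for well-hydraulics solutions, though it can struggle with oscillatory or discontinuous time behaviour.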

  3. Bessel Fourier orientation reconstruction: an analytical EAP reconstruction using multiple shell acquisitions in diffusion MRI.

    Science.gov (United States)

    Hosseinbor, Ameer Pasha; Chung, Moo K; Wu, Yu-Chien; Alexander, Andrew L

    2011-01-01

    The estimation of the ensemble average propagator (EAP) directly from q-space DWI signals is an open problem in diffusion MRI. Diffusion spectrum imaging (DSI) is one common technique to compute the EAP directly from the diffusion signal, but it is burdened by the large sampling required. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed. One, in particular, is Diffusion Propagator Imaging (DPI), which is based on Laplace's equation estimation of the diffusion signal for each shell acquisition. Viewed intuitively in terms of the heat equation, the DPI solution is obtained when the heat distribution between temperature measurements at each shell is at steady state. We propose a generalized extension of DPI, Bessel Fourier Orientation Reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition. That is, the heat distribution between shell measurements is no longer at steady state. In addition to being analytical, the BFOR solution also includes an intrinsic exponential smoothing term. We illustrate the effectiveness of the proposed method by showing results on both synthetic and real MR datasets.

  4. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Theory and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rood, A.S.

    1992-03-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track 1 and Track 2 assessment of Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1991). The code calculates the limiting soil concentration such that regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: Contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation for transient mass flux input.

  5. A semi-analytical computation of the theoretical uncertainties of the solar neutrino flux

    DEFF Research Database (Denmark)

    Jorgensen, Andreas C. S.; Christensen-Dalsgaard, Jorgen

    2017-01-01

    We present a comparison between Monte Carlo simulations and a semi-analytical approach that reproduces the theoretical probability distribution functions of the solar neutrino fluxes, stemming from the pp, pep, hep, Be-7, B-8, N-13, O-15 and F-17 source reactions. We obtain good agreement between...
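
    The record above compares Monte Carlo sampling against a semi-analytic propagation of input uncertainties. A toy illustration of that comparison, for a flux depending on power laws of independent log-normal inputs (the exponents and spreads are invented for illustration, not solar-model values):

    ```python
    import math, random

    def analytic_log_sd(exponents, log_sds):
        """Semi-analytic propagation: if Phi = prod(p_i**a_i) with independent
        log-normal inputs p_i, then sd(ln Phi) = sqrt(sum((a_i*sd_i)**2))."""
        return math.sqrt(sum((a * s) ** 2 for a, s in zip(exponents, log_sds)))

    def monte_carlo_log_sd(exponents, log_sds, n=20000, seed=1):
        """Monte Carlo estimate of the same spread, by sampling ln p_i."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n):
            ln_phi = sum(a * rng.gauss(0.0, s)
                         for a, s in zip(exponents, log_sds))
            samples.append(ln_phi)
        mean = sum(samples) / n
        var = sum((x - mean) ** 2 for x in samples) / (n - 1)
        return math.sqrt(var)
    ```

    Agreement between the two estimates is the kind of cross-check the abstract reports, with the semi-analytic route avoiding the sampling cost entirely.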

  6. Full energy peak efficiency of NaI(Tl) gamma detectors and its analytical and semi-empirical representations

    International Nuclear Information System (INIS)

    Sudarshan, M.; Joseph, J.; Singh, R.

    1992-01-01

    The validity of various analytical functions and semi-empirical formulae proposed for representing the full energy peak efficiency (FEPE) curves of Ge(Li) and HPGe detectors has been tested for the FEPE of 7.6 cm x 7.6 cm and 5 cm x 5 cm NaI(Tl) detectors in the gamma energy range from 59.5 to 1408.03 keV. The functions proposed by East, and McNelles and Campbell provide by far the best representations of the present data. The semi-empirical formula of Mowatt describes the present data very well. The present investigation shows that some of the analytical functions and semi-empirical formulae, which represent the FEPE of the Ge(Li) and HPGe detectors very well, can be quite fruitfully used for NaI(Tl) detectors. (Author)
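
    The simplest of the analytic FEPE shapes tested in work of this kind is a linear fit of ln(efficiency) against ln(energy). A sketch, with synthetic data standing in for measured NaI(Tl) efficiencies (the power-law test curve is invented for illustration):

    ```python
    import math

    def fit_loglog(energies_kev, efficiencies):
        """Least-squares fit of ln(eff) = a + b*ln(E): the simplest analytic
        full-energy-peak-efficiency representation."""
        xs = [math.log(e) for e in energies_kev]
        ys = [math.log(f) for f in efficiencies]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = my - b * mx
        return a, b

    def fepe(e_kev, a, b):
        """Evaluate the fitted efficiency curve at energy e_kev."""
        return math.exp(a + b * math.log(e_kev))
    ```

    Higher-order shapes (polynomials in ln E, as in McNelles and Campbell) extend the same fit with more terms.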

  7. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
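
    A canonical member of the iterative category described above is the Kaczmarz (ART) update, which repeatedly projects the current image estimate onto the hyperplane defined by each projection equation. A minimal dense-matrix sketch (real CT codes use sparse system matrices and relaxation factors):

    ```python
    def kaczmarz(A, b, x0, sweeps=50):
        """ART/Kaczmarz iteration for A x = b: for each row a_i, move x onto
        the hyperplane a_i . x = b_i by the orthogonal projection update
        x <- x + (b_i - a_i.x)/|a_i|^2 * a_i."""
        x = list(x0)
        for _ in range(sweeps):
            for row, bi in zip(A, b):
                dot = sum(r * xj for r, xj in zip(row, x))
                norm2 = sum(r * r for r in row)
                lam = (bi - dot) / norm2
                x = [xj + lam * r for xj, r in zip(x, row)]
        return x
    ```

    For a consistent system the sweeps converge to a solution; this "improve the estimate row by row" structure is exactly the mechanism the abstract contrasts with formal analytic inversion.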

  8. A Semi-Analytical Method for Rapid Estimation of Near-Well Saturation, Temperature, Pressure and Stress in Non-Isothermal CO2 Injection

    Science.gov (United States)

    LaForce, T.; Ennis-King, J.; Paterson, L.

    2015-12-01

    Reservoir cooling near the wellbore is expected when fluids are injected into a reservoir or aquifer in CO2 storage, enhanced oil or gas recovery, enhanced geothermal systems, and water injection for disposal. Ignoring thermal effects near the well can lead to under-prediction of changes in reservoir pressure and stress due to competition between increased pressure and contraction of the rock in the cooled near-well region. In this work a previously developed semi-analytical model for immiscible, nonisothermal fluid injection is generalised to include partitioning of components between two phases. Advection-dominated radial flow is assumed so that the coupled two-phase flow and thermal conservation laws can be solved analytically. The temperature and saturation profiles are used to find the increase in reservoir pressure, tangential, and radial stress near the wellbore in a semi-analytical, forward-coupled model. Saturation, temperature, pressure, and stress profiles are found for parameters representative of several CO2 storage demonstration projects around the world. General results on maximum injection rates vs depth for common reservoir parameters are also presented. Prior to drilling an injection well there is often little information about the properties that will determine the injection rate that can be achieved without exceeding fracture pressure, yet injection rate and pressure are key parameters in well design and placement decisions. Analytical solutions to simplified models such as these can quickly provide order of magnitude estimates for flow and stress near the well based on a range of likely parameters.

  9. Semi-Analytical method for the pricing of barrier options in case of time-dependent parameters (with Matlab® codes)

    Directory of Open Access Journals (Sweden)

    Guardasoni C.

    2018-03-01

    Full Text Available A Semi-Analytical method for pricing of Barrier Options (SABO) is presented. The method is based on the foundations of Boundary Integral Methods, which are recast here for application to barrier option pricing in the Black-Scholes model with time-dependent interest rate, volatility and dividend yield. The validity of the numerical method is illustrated by several numerical examples and comparisons.
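
    Under constant parameters the barrier problem above has a closed form that any time-dependent scheme such as SABO should reproduce in the limit. A sketch of the down-and-out call via the reflection principle (constant rate and volatility, no dividends, barrier at or below the strike; the parameter values in the usage are illustrative):

    ```python
    import math

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bs_call(s, k, r, sigma, t):
        """Black-Scholes European call (no dividends)."""
        d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        d2 = d1 - sigma * math.sqrt(t)
        return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

    def down_and_out_call(s, k, h, r, sigma, t):
        """Closed-form down-and-out call for barrier h <= strike k:
        C_do(S) = C_BS(S) - (H/S)**(2r/sigma^2 - 1) * C_BS(H^2/S)."""
        if s <= h:
            return 0.0  # barrier already breached
        power = 2.0 * r / sigma ** 2 - 1.0
        return (bs_call(s, k, r, sigma, t)
                - (h / s) ** power * bs_call(h * h / s, k, r, sigma, t))
    ```

    The knocked-out value is strictly below the vanilla price and vanishes at the barrier, two properties that make the formula a useful benchmark for boundary-integral implementations.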

  10. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    Science.gov (United States)

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has always been a cause of concern in a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation have been the first, and remain the most convenient, way of modeling air pollutant dispersion, as it is easy to handle the dispersion parameters and related physics in them. A mathematical model describing the crosswind-integrated concentration is presented. The analytical solution to the resulting advection-diffusion equation is limited to constant and simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground and eddy diffusivity profiles on the downwind distance from the source as well as the vertical height. In the present model, a method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system, in general, can be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix becomes non-commutative (Martin et al., 1967). An approach based on Taylor series expansion is introduced to find the numerical solution of the first order system. The method is applied to various profiles of wind speed and eddy diffusivities. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated with the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by considering the Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
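
    The reduction described above, a first-order system C'(x) = A(x) C with an x-dependent coefficient matrix, can be stepped numerically by freezing A on each step and applying a truncated Taylor series of the matrix exponential. A minimal sketch of that idea (step count, series length, and the midpoint-freezing choice are illustrative, not the paper's scheme):

    ```python
    import math

    def taylor_step(A, c, h, terms=6):
        """One step of C' = A C with A frozen on the step: multiply c by the
        truncated Taylor series of exp(h*A), accumulating (hA)^n c / n!."""
        dim = len(c)
        out = list(c)
        term = list(c)
        for n in range(1, terms + 1):
            term = [h / n * sum(A[i][j] * term[j] for j in range(dim))
                    for i in range(dim)]
            out = [o + t for o, t in zip(out, term)]
        return out

    def integrate_system(A_of_x, c0, x0, x1, steps=200):
        """March C' = A(x) C from x0 to x1, freezing A at each step midpoint."""
        h = (x1 - x0) / steps
        c = list(c0)
        for i in range(steps):
            xm = x0 + (i + 0.5) * h  # midpoint evaluation of the varying matrix
            c = taylor_step(A_of_x(xm), c, h)
        return c
    ```

    For a scalar system c' = x c the exact answer c(1) = c(0) e^{1/2} is recovered, which makes a convenient unit test before applying the stepper to a full eigenfunction-expansion matrix.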

  11. Determination of flexibility factors in curved pipes with end restraints using a semi-analytic formulation

    International Nuclear Information System (INIS)

    Fonseca, E.M.M.; Melo, F.J.M.Q. de; Oliveira, C.A.M.

    2002-01-01

    Piping systems are structural sets used in the chemical industry, conventional or nuclear power plants and fluid transport in general-purpose process equipment. They include curved elements built as parts of toroidal thin-walled structures. The mechanical behaviour of such structural assemblies is of leading importance for satisfactory performance and safety standards of the installations. This paper presents a semi-analytic formulation based on Fourier trigonometric series for solving the pure bending problem in curved pipes. A pipe element is considered as a part of a toroidal shell, and a displacement formulation pipe element was developed with Fourier series. The problem reduces to a system of differential equations, which is solved using mathematical software. To build up the solution, a simple but efficient deformation model, based on semi-membrane behaviour, was followed here, given the geometry and the thin-shell assumption. The flexibility factors are compared with the ASME code for some elbow dimensions adopted from ISO 1127. The stress field distribution was also calculated.

  12. Heat Conduction Analysis Using Semi Analytical Finite Element Method

    International Nuclear Information System (INIS)

    Wargadipura, A. H. S.

    1997-01-01

    Heat conduction problems are very often found in science and engineering fields. It is of crucial importance to obtain quantitative descriptions of this important physical phenomenon. This paper discusses the development and application of a numerical formulation and computation that can be used to analyze heat conduction problems. The mathematical equation which governs the physical behaviour of heat conduction takes the form of second order partial differential equations. The numerical solution in this paper is obtained using the finite element method combined with Fourier series, which is known as the semi-analytical finite element method. The formulation results in simultaneous algebraic equations, which are solved using Gauss elimination. The computer implementation is carried out using the FORTRAN language. In the final part of the paper, a heat conduction problem in a rectangular plate domain with isothermal boundary conditions on its edges is solved to show the application of the computer program developed, and a comparison with the analytical solution is discussed to assess the accuracy of the numerical solution obtained.
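
    A scaled-down illustration of the pipeline described above, finite element assembly followed by Gauss elimination, for steady 1-D conduction with fixed-temperature ends (the record's code is FORTRAN and two-dimensional with a Fourier expansion; this Python sketch only mirrors the assemble-and-solve step):

    ```python
    def gauss_solve(a, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(b)
        m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
        return x

    def fe_heat_1d(nodes, length, t_left, t_right):
        """Linear-element FE solution of steady 1-D conduction (no source)
        with prescribed end temperatures; returns all nodal temperatures."""
        n = nodes - 2  # interior unknowns
        h = length / (nodes - 1)
        A = [[0.0] * n for _ in range(n)]
        b = [0.0] * n
        for i in range(n):  # assemble the tridiagonal stiffness matrix
            A[i][i] = 2.0 / h
            if i > 0:
                A[i][i - 1] = -1.0 / h
            if i < n - 1:
                A[i][i + 1] = -1.0 / h
        b[0] += t_left / h       # Dirichlet contributions
        b[n - 1] += t_right / h
        return [t_left] + gauss_solve(A, b) + [t_right]
    ```

    With no internal source the exact solution is linear, and linear elements reproduce it at the nodes, a standard verification case.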

  13. Active learning for semi-supervised clustering based on locally linear propagation reconstruction.

    Science.gov (United States)

    Chang, Chin-Chun; Lin, Po-Yi

    2015-03-01

    The success of semi-supervised clustering relies on the effectiveness of side information. To get effective side information, a new active learner learning pairwise constraints known as must-link and cannot-link constraints is proposed in this paper. Three novel techniques are developed for learning effective pairwise constraints. The first technique is used to identify samples less important to cluster structures. This technique makes use of a kernel version of locally linear embedding for manifold learning. Samples neither important to locally linear propagation reconstructions of other samples nor on flat patches in the learned manifold are regarded as unimportant samples. The second is a novel criterion for query selection. This criterion considers not only the importance of a sample to expanding the space coverage of the learned samples but also the expected number of queries needed to learn the sample. To facilitate semi-supervised clustering, the third technique yields inferred must-links for passing information about flat patches in the learned manifold to semi-supervised clustering algorithms. Experimental results have shown that the learned pairwise constraints can capture the underlying cluster structures and have proved the feasibility of the proposed approach.

  14. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Theory and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rood, A.S.

    1992-03-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track 1 and Track 2 assessment of Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1991). The code calculates the limiting soil concentration such that regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: Contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation for transient mass flux input.

  15. A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.

    Science.gov (United States)

    Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo

    2010-01-01

    In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.

  16. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    Science.gov (United States)

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM is feasible to reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administration in assessing the pavement's state.

  17. Analytic nuclear scattering theories

    International Nuclear Information System (INIS)

    Di Marzio, F.; University of Melbourne, Parkville, VIC

    1999-01-01

    A wide range of nuclear reactions are examined in an analytical version of the usual distorted wave Born approximation. This new approach provides either semi-analytic or fully analytic descriptions of the nuclear scattering processes. The resulting computational simplifications, when used within the limits of validity, allow very detailed tests of both nuclear interaction models and large basis models of nuclear structure to be performed.

  18. Semi-analytic equations to the Cox-Thompson inverse scattering method at fixed energy for special cases

    International Nuclear Information System (INIS)

    Palmai, T.; Apagyi, B.; Horvath, M.

    2008-01-01

    Solution of the Cox-Thompson inverse scattering problem at fixed energy [1-3] is reformulated resulting in semi-analytic equations. The new set of equations for the normalization constants and the nonphysical (shifted) angular momenta are free of matrix inversion operations. This simplification is a result of treating only the input phase shifts of partial waves of a given parity. Therefore, the proposed method can be applied for identical particle scattering of the bosonic type (or for certain cases of identical fermionic scattering). The new formulae are expected to be numerically more efficient than the previous ones. Based on the semi-analytic equations an approximate method is proposed for the generic inverse scattering problem, when partial waves of arbitrary parity are considered. (author)

  19. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05), with measurements differing between algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.
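
    The Bland-Altman method used above characterizes agreement between two measurement sets by the mean difference (bias) and the 95% limits of agreement. A minimal sketch (the test data are illustrative, not the study's measurements):

    ```python
    import math

    def bland_altman(a, b):
        """Bland-Altman agreement statistics for paired measurements:
        returns (bias, lower limit, upper limit), where the limits of
        agreement are bias +/- 1.96 * sd of the pairwise differences."""
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        bias = sum(diffs) / n
        sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
        return bias, bias - 1.96 * sd, bias + 1.96 * sd
    ```

    Two methods "agree" in the Bland-Altman sense when the limits of agreement are narrow enough to be clinically unimportant, which is how the study can report significant differences that nonetheless fall within measurement variability.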

  20. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, obtaining the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way in 3D space to reconstruct the signal of a sound source in an environment with airflow, instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  1. Analytical and semi-analytical formalism for the voltage and the current sources of a superconducting cavity under dynamic detuning

    CERN Document Server

    Doleans, M

    2003-01-01

    Elliptical superconducting radio frequency (SRF) cavities are sensitive to frequency detuning because they have a high Q value in comparison with normal conducting cavities and weak mechanical properties. Radiation pressure on the cavity walls, microphonics, and tuning system are possible sources of dynamic detuning during SRF cavity-pulsed operation. A general analytic relation between the cavity voltage, the dynamic detuning function, and the RF control function is developed. This expression for the voltage envelope in a cavity under dynamic detuning and dynamic RF controls is analytically expressed through an integral formulation. A semi-analytical scheme is derived to calculate the voltage behavior in any practical case. Examples of voltage envelope behavior for different cases of dynamic detuning and RF control functions are shown. The RF control function for a cavity under dynamic detuning is also investigated and as an application various filling schemes are presented.

  2. A semi-analytical modelling of multistage bunch compression with collective effects

    International Nuclear Information System (INIS)

    Zagorodnov, Igor; Dohlus, Martin

    2010-07-01

    In this paper we introduce an analytical solution (up to the third order) for a multistage bunch compression and acceleration system without collective effects. The solution for the system with collective effects is found by an iterative procedure based on this analytical result. The developed formalism is applied to the FLASH facility at DESY. Analytical estimations of RF tolerances are given. (orig.)

  3. A semi-analytical modelling of multistage bunch compression with collective effects

    Energy Technology Data Exchange (ETDEWEB)

    Zagorodnov, Igor; Dohlus, Martin

    2010-07-15

    In this paper we introduce an analytical solution (up to the third order) for a multistage bunch compression and acceleration system without collective effects. The solution for the system with collective effects is found by an iterative procedure based on this analytical result. The developed formalism is applied to the FLASH facility at DESY. Analytical estimations of RF tolerances are given. (orig.)

  4. A semi-analytical solution to accelerate spin-up of a coupled carbon and nitrogen land model to steady state

    Directory of Open Access Journals (Sweden)

    J. Y. Xia

    2012-10-01

    Full Text Available The spin-up of land models to steady state of coupled carbon–nitrogen processes is computationally so costly that it becomes a bottleneck issue for global analysis. In this study, we introduced a semi-analytical solution (SAS) for the spin-up issue. SAS is fundamentally based on the analytic solution to a set of equations that describe carbon transfers within ecosystems over time. SAS is implemented by three steps: (1) having an initial spin-up with prior pool-size values until net primary productivity (NPP) reaches stabilization, (2) calculating quasi-steady-state pool sizes by letting fluxes of the equations equal zero, and (3) having a final spin-up to meet the criterion of steady state. Step 2 is enabled by averaged time-varying variables over one period of repeated driving forcings. SAS was applied to both site-level and global scale spin-up of the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model. For the carbon-cycle-only simulations, SAS saved 95.7% and 92.4% of computational time for site-level and global spin-up, respectively, in comparison with the traditional method (a long-term iterative simulation to achieve the steady states of variables). For the carbon–nitrogen coupled simulations, SAS reduced computational cost by 84.5% and 86.6% for site-level and global spin-up, respectively. The estimated steady-state pool sizes represent the ecosystem carbon storage capacity, which was 12.1 kg C m−2 with the coupled carbon–nitrogen global model, 14.6% lower than that with the carbon-only model. The nitrogen down-regulation in modeled carbon storage is partly due to the 4.6% decrease in carbon influx (i.e., net primary productivity) and partly due to the 10.5% reduction in residence times. This steady-state analysis accelerated by the SAS method can facilitate comparative studies of structural differences in determining the ecosystem carbon storage capacity among biogeochemical models. Overall, the
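
    The essence of SAS step (2) above is that, for donor-controlled linear pool dynamics, steady-state pool sizes follow from setting the flux equations to zero and solving algebraically, instead of integrating for centuries of model time. A two-pool toy model contrasting the two routes (the turnover rates, transfer fraction, and NPP value are invented for illustration, not CABLE parameters):

    ```python
    def spin_up(turnover, transfer, npp_in, years, dt=1.0):
        """Traditional spin-up: integrate a fast and a slow pool forward
        in time until they (numerically) reach steady state."""
        x = [0.0, 0.0]
        for _ in range(int(years / dt)):
            flux0 = turnover[0] * x[0]   # outflow from fast pool
            flux1 = turnover[1] * x[1]   # outflow from slow pool
            x[0] += dt * (npp_in - flux0)
            x[1] += dt * (transfer * flux0 - flux1)
        return x

    def sas_steady_state(turnover, transfer, npp_in):
        """SAS-style step: set both pool fluxes to zero and solve directly:
        x0 = NPP/k0 and x1 = f*k0*x0/k1."""
        x0 = npp_in / turnover[0]
        x1 = transfer * turnover[0] * x0 / turnover[1]
        return [x0, x1]
    ```

    The slow pool is what makes brute-force spin-up expensive: its convergence time scales with 1/k1, while the algebraic solve is immediate regardless of turnover time.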

  5. Task-based data-acquisition optimization for sparse image reconstruction systems

    Science.gov (United States)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  6. Semi-analytic approach to higher-order corrections in simple muonic bound systems: vacuum polarization, self-energy and radiative-recoil

    International Nuclear Information System (INIS)

    Jentschura, U.D.; Wundt, B.J.

    2011-01-01

    The current discrepancy of theory and experiment observed recently in muonic hydrogen necessitates a reinvestigation of all corrections contributing to the Lamb shift in muonic hydrogen (μH), muonic deuterium (μD), the muonic ³He ion (denoted here as μ³He⁺), as well as the muonic ⁴He ion (μ⁴He⁺). Here, we choose a semi-analytic approach and evaluate a number of higher-order corrections to vacuum polarization (VP) semi-analytically, while the remaining integrals over the spectral density of VP are performed numerically. We obtain semi-analytic results for the second-order correction, and for the relativistic correction to VP. The self-energy correction to VP is calculated, including the perturbations of the Bethe logarithms by vacuum polarization. Sub-leading logarithmic terms in the radiative-recoil correction to the 2S–2P Lamb shift of order α(Zα)⁵ μ³ ln(Zα)/(m_μ m_N), where α is the fine-structure constant, are also obtained. All calculations are nonperturbative in the mass ratio of the orbiting particle and nucleus. (authors)

  7. Restoration of the analytically reconstructed OpenPET images by the method of convex projections

    Energy Technology Data Exchange (ETDEWEB)

    Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering

    2011-07-01

    We have proposed the OpenPET geometry, which has gaps between detector rings and a physically opened field-of-view. Image reconstruction for OpenPET is classified as an incomplete problem because it does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstruct images in the gap area. However, the imaging process of the iterative methods in OpenPET imaging is not clear. Therefore, the aim of this study is to analyze OpenPET imaging analytically and estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections to restore images reconstructed analytically, in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved both in ML-EM and in the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components in OpenPET imaging. (orig.)
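
For reference, the ML-EM update mentioned above is a simple multiplicative iteration; a minimal sketch with a small random system matrix (not an OpenPET geometry, and noiseless data for illustration):

```python
import numpy as np

# Minimal ML-EM sketch: x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k))
rng = np.random.default_rng(0)
A = rng.random((40, 16))              # system matrix: 40 LORs x 16 voxels
x_true = rng.random(16) + 0.5
y = A @ x_true                        # noiseless projection data

x = np.ones(16)                       # non-negative initial image
sens = A.T @ np.ones(40)              # sensitivity image (A^T 1)
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens # multiplicative EM update keeps x >= 0

print(np.max(np.abs(A @ x - y)))      # forward projection approaches the data
```

The update is multiplicative, so non-negativity of the image is preserved automatically, one of the implicit constraints the abstract alludes to.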

  8. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    Science.gov (United States)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. Substituting these into the PSWF differential equation yields a much smaller matrix eigenvalue problem than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, at the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
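
For contrast, the direct numerical K-L route that the paper improves upon is just an eigendecomposition of the sample covariance matrix of the records; a small synthetic sketch (the "wave" records below are made-up random-phase sinusoid sums, not real ocean data):

```python
import numpy as np

# Direct (non-PSWF) K-L sketch: eigendecomposition of the sample covariance.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
# 500 synthetic irregular records: random-amplitude, random-phase sinusoids
records = sum(rng.random((500, 1)) * np.sin(k * t + 2*np.pi*rng.random((500, 1)))
              for k in range(1, 6))

C = np.cov(records, rowvar=False)                   # 200 x 200 covariance
eigvals, eigvecs = np.linalg.eigh(C)                # K-L modes = eigenvectors
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# Fraction of variance captured by the leading 10 K-L modes
print(eigvals[:10].sum() / eigvals.sum())
```

For longer records the covariance matrix grows quadratically in the number of sample points, which is the memory and accuracy bottleneck the PSWF-based route avoids.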

  9. Criterion of Semi-Markov Dependent Risk Model

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun MO; Xiang Qun YANG

    2014-01-01

    A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results clarify the relations between elements of the semi-Markov dependent risk model and are also applicable to the Markov dependent risk model.

  10. Variations of the stellar initial mass function in semi-analytical models - II. The impact of cosmic ray regulation

    Science.gov (United States)

    Fontanot, Fabio; De Lucia, Gabriella; Xie, Lizhi; Hirschmann, Michaela; Bruzual, Gustavo; Charlot, Stéphane

    2018-04-01

    Recent studies proposed that cosmic rays (CRs) are a key ingredient in setting the conditions for star formation, thanks to their ability to alter the thermal and chemical state of dense gas in the ultraviolet-shielded cores of molecular clouds. In this paper, we explore their role as regulators of the stellar initial mass function (IMF) variations, using the semi-analytic model for GAlaxy Evolution and Assembly (GAEA). The new model confirms our previous results obtained using the integrated galaxy-wide IMF (IGIMF) theory. Both variable IMF models reproduce the observed increase of α-enhancement as a function of stellar mass and the measured z = 0 excess of dynamical mass-to-light ratios with respect to photometric estimates assuming a universal IMF. We focus here on the mismatch between the photometrically derived (M⋆^app) and intrinsic (M⋆) stellar masses, by analysing in detail the evolution of model galaxies with different values of M⋆/M⋆^app. We find that galaxies with small deviations (i.e. formally consistent with a universal IMF hypothesis) are characterized by more extended star formation histories and live in less massive haloes with respect to the bulk of the galaxy population. In particular, the IGIMF theory does not change significantly the mean evolution of model galaxies with respect to the reference model; a CR-regulated IMF instead implies shorter star formation histories and higher peaks of star formation for objects more massive than 10^10.5 M⊙. However, we also show that it is difficult to unveil this behaviour from observations, as the key physical quantities are typically derived assuming a universal IMF.

  11. Determination of Transport Properties From Flowing Fluid Temperature Logging In Unsaturated Fractured Rocks: Theory And Semi-Analytical Solution

    International Nuclear Information System (INIS)

    Mukhopadhyay, Sumit; Tsang, Yvonne W.

    2008-01-01

    Flowing fluid temperature logging (FFTL) has recently been proposed as a method to locate flowing fractures. We argue that FFTL, backed up by data from high-precision distributed temperature sensors, can be a useful tool in locating flowing fractures and in estimating the transport properties of unsaturated fractured rocks. We have developed the theoretical background needed to analyze data from FFTL. In this paper, we present a simplified conceptualization of FFTL in unsaturated fractured rock, and develop a semi-analytical solution for the spatial and temporal variations of pressure and temperature inside a borehole in response to an applied perturbation (pumping of air from the borehole). We compare the semi-analytical solution with predictions from the TOUGH2 numerical simulator. Based on the semi-analytical solution, we propose a method to estimate the permeability of the fracture continuum surrounding the borehole. Using this proposed method, we estimated the effective fracture continuum permeability of the unsaturated rock hosting the Drift Scale Test (DST) at Yucca Mountain, Nevada. Our estimate compares well with previous independent estimates of the fracture permeability of the DST host rock. The conceptual model of FFTL presented in this paper is based on the assumptions of single-phase flow, convection-only heat transfer, and negligible change in the system state of the rock formation. In a sequel paper (Mukhopadhyay et al., 2008), we extend the conceptual model to evaluate some of these assumptions. We also perform inverse modeling of FFTL data to estimate, in addition to permeability, other transport parameters (such as porosity and thermal conductivity) of unsaturated fractured rocks.

  12. Analytic and numerical realizations of a disc galaxy

    Science.gov (United States)

    Stringer, M. J.; Brooks, A. M.; Benson, A. J.; Governato, F.

    2010-09-01

    Recent focus on the importance of cold, unshocked gas accretion in galaxy formation - not explicitly included in semi-analytic studies - motivates the following detailed comparison between two inherently different modelling techniques: direct hydrodynamical simulation and semi-analytic modelling. By analysing the physical assumptions built into the GASOLINE simulation, formulae for the emergent behaviour are derived which allow immediate and accurate translation of these assumptions to the GALFORM semi-analytic model. The simulated halo merger history is then extracted and evolved using these equivalent equations, predicting a strikingly similar galactic system. This exercise demonstrates that it is the initial conditions and physical assumptions which are responsible for the predicted evolution, not the choice of modelling technique. On this level playing field, a previously published GALFORM model is applied (including additional physics such as chemical enrichment and feedback from active galactic nuclei) which leads to starkly different predictions.

  13. An Analytical Model for Prediction of Magnetic Flux Leakage from Surface Defects in Ferromagnetic Tubes

    Directory of Open Access Journals (Sweden)

    Suresh V.

    2016-02-01

    Full Text Available In this paper, an analytical model is proposed to predict magnetic flux leakage (MFL) signals from surface defects in ferromagnetic tubes. The analytical expression consists of elliptic integrals of the first kind based on the magnetic dipole model. The radial (Bz) component of the leakage fields is computed from cylindrical holes in ferromagnetic tubes. The effectiveness of the model has been studied by analyzing MFL signals as a function of the defect parameters and lift-off. The model-predicted results are verified against experimental results, and good agreement is observed between the analytical and experimental results. This analytical expression could be used for quick prediction of MFL signals and also as input data for defect reconstruction in the inverse MFL problem.
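
The magnetic dipole picture underlying such models is easy to illustrate: the axial field of a single point dipole scanned at a fixed lift-off already shows the characteristic leakage-like signature (geometry values below are arbitrary, and a real defect model superposes many such dipoles with elliptic-integral closed forms):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)

def dipole_bz(x, z, m=1.0):
    """Axial field of a point magnetic dipole (moment m along z) at (x, 0, z)."""
    r = np.sqrt(x**2 + z**2)
    return MU0 / (4*np.pi) * m * (3*z**2 - r**2) / r**5

# Scan line at 2 mm lift-off above the dipole
x = np.linspace(-0.01, 0.01, 201)     # scan coordinate (m)
bz = dipole_bz(x, 0.002)
print(bz.argmax())                    # strongest signal directly above the dipole
```

The central peak with sign reversal on either side is the qualitative shape that the defect-parameter and lift-off studies in the abstract quantify.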

  14. Casting the Coronal Magnetic Field Reconstructions with Magnetic Field Constraints above the Photosphere in 3D Using MHD Bifrost Model

    Science.gov (United States)

    Fleishman, G. D.; Anfinogentov, S.; Loukitcheva, M.; Mysh'yakov, I.; Stupishin, A.

    2017-12-01

    Measuring and modeling the coronal magnetic field, especially above active regions (ARs), remains one of the central problems of solar physics, given that solar coronal magnetism is the key driver of all solar activity. Nowadays the coronal magnetic field is often modelled using methods of nonlinear force-free field reconstruction, whose accuracy has not yet been comprehensively assessed. Given that coronal magnetic probing is routinely unavailable, only morphological tests have been applied to evaluate the performance of the reconstruction methods, along with a few direct tests using an available semi-analytical force-free field solution. Here we report a detailed casting of various tools used for nonlinear force-free field reconstruction, such as disambiguation methods, photospheric field preprocessing methods, and volume reconstruction methods in a 3D domain, using a 3D snapshot of the publicly available full-fledged radiative MHD model. We take advantage of the fact that the realistic MHD model gives us the magnetic field vector distribution in the entire 3D domain, which enables us to perform "voxel-by-voxel" comparison of the restored magnetic field against the true magnetic field in the 3D model volume. Our tests show that the available disambiguation methods often fail in quiet-Sun areas, where the magnetic structure is dominated by small-scale magnetic elements, while they work well at the AR photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in an effective 'elevation' of the magnetic field components. The effective 'elevation' height turns out to be different for the longitudinal and transverse components of the magnetic field, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolation performed starting from the actual AR photospheric magnetogram (i.e., without preprocessing) are

  15. Semi-implicit semi-Lagrangian modelling of the atmosphere: a Met Office perspective

    Directory of Open Access Journals (Sweden)

    Benacchio Tommaso

    2016-09-01

    Full Text Available The semi-Lagrangian numerical method, in conjunction with semi-implicit time integration, provides numerical weather prediction models with numerical stability for large time steps, accurate modes of interest, and good representation of hydrostatic and geostrophic balance. Drawing on the legacy of dynamical cores at the Met Office, the use of the semi-implicit semi-Lagrangian method in an operational numerical weather prediction context is surveyed, together with details of the solution approach and associated issues and challenges. The numerical properties and performance of the current operational version of the Met Office’s numerical model are then investigated in a simplified setting along with the impact of different modelling choices.

  16. Elliptic-cylindrical analytical flux-rope model for ICMEs

    Science.gov (United States)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in interplanetary coronal mass ejections. The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux-rope model, under the concept developed by Hidalgo et al. (2002). Elliptic-cylindrical geometry establishes the first level of complexity in a series of models. The model attempts to describe the magnetic flux-rope topology with a distorted cross-section as a possible consequence of interaction with the solar wind. In this model, the flux rope is completely described in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus consistently with the chosen geometry, invariance along the axial component, and the single assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for individual cases of different pairs of indices for the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of in situ ICME events with different in situ signatures. The successful results are limited to cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects such as axial curvature, expansion, and/or interaction could be incorporated in the future to fully understand the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: a toroidal flux-rope analytical model.

  17. The effect of inclined soil layers on surface vibration from underground railways using a semi-analytical approach

    International Nuclear Information System (INIS)

    Jones, S; Hunt, H

    2009-01-01

    Ground vibration due to underground railways is a significant source of disturbance for people living or working near the subways. The numerical models used to predict vibration levels have inherent uncertainty which must be understood to give confidence in the predictions. A semi-analytical approach is developed herein to investigate the effect of soil layering on the surface vibration of a half-space where both soil properties and layer inclination angles are varied. The study suggests that both the material properties and the inclination angle of the layers have a significant effect (±10 dB) on the surface vibration response.

  18. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative instead of mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.

  19. A semi-analytical solution for slug tests in an unconfined aquifer considering unsaturated flow

    Science.gov (United States)

    Sun, Hongbing

    2016-01-01

    A semi-analytical solution considering vertical unsaturated flow is developed in Laplace space for groundwater flow in response to a slug test in an unconfined aquifer. The new solution incorporates the effects of partial penetration, anisotropy, vertical unsaturated flow, and a moving water table boundary. Compared to the Kansas Geological Survey (KGS) model, the new solution significantly improves the fit of the modeled to the measured hydraulic heads at the late stage of slug tests in an unconfined aquifer, particularly when the slug well has a partially submerged screen and moisture drainage above the water table is significant. The radial hydraulic conductivities estimated with the new solution are comparable to those from the KGS, Bouwer and Rice, and Hvorslev methods. In addition, the new solution can also be used to examine the vertical conductivity, specific storage, specific yield, and the moisture retention parameters in an unconfined aquifer based on slug test data.
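
Since the solution is given in Laplace space, recovering time-domain heads requires numerical inversion; the Gaver-Stehfest algorithm is a common choice for such hydrogeological solutions. A generic sketch, verified here against a known transform pair rather than the slug-test solution itself:

```python
import numpy as np
from math import factorial

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-space function F(s)."""
    ln2 = np.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (N must be even)
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j**(N // 2) * factorial(2 * j)) / (
                factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        v *= (-1)**(k + N // 2)
        total += v * F(k * ln2 / t)   # sample F at real points s = k ln2 / t
    return total * ln2 / t

# Check against a known pair: L{e^{-t}} = 1/(s+1)
print(abs(stehfest(lambda s: 1.0 / (s + 1.0), 1.0) - np.exp(-1.0)))
```

Stehfest inversion only needs F(s) at real s, which suits semi-analytical head solutions, but it assumes a smooth, non-oscillatory time-domain response.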

  20. Semi-analytic approach to higher-order corrections in simple muonic bound systems: vacuum polarization, self-energy and radiative-recoil

    Energy Technology Data Exchange (ETDEWEB)

    Jentschura, U.D. [Department of Physics, Missouri University of Science and Technology, Rolla MO65409 (United States); Institut fur Theoretische Physik, Universitat Heidelberg, Philosophenweg 16, 69120 Heidelberg (Germany); Wundt, B.J. [Department of Physics, Missouri University of Science and Technology, Rolla MO65409 (United States)

    2011-12-15

    The current discrepancy of theory and experiment observed recently in muonic hydrogen necessitates a reinvestigation of all corrections contributing to the Lamb shift in muonic hydrogen (μH), muonic deuterium (μD), the muonic ³He ion (denoted here as μ³He⁺), as well as the muonic ⁴He ion (μ⁴He⁺). Here, we choose a semi-analytic approach and evaluate a number of higher-order corrections to vacuum polarization (VP) semi-analytically, while the remaining integrals over the spectral density of VP are performed numerically. We obtain semi-analytic results for the second-order correction, and for the relativistic correction to VP. The self-energy correction to VP is calculated, including the perturbations of the Bethe logarithms by vacuum polarization. Sub-leading logarithmic terms in the radiative-recoil correction to the 2S–2P Lamb shift of order α(Zα)⁵ μ³ ln(Zα)/(m_μ m_N), where α is the fine structure constant, are also obtained. All calculations are nonperturbative in the mass ratio of orbiting particle and nucleus. (authors)

  1. Semi-analytic flux formulas for shielding calculations

    International Nuclear Information System (INIS)

    Wallace, O.J.

    1976-06-01

    A special coordinate system based on the work of H. Ono and A. Tsuro has been used to derive exact semi-analytic formulas for the flux from cylindrical, spherical, toroidal, rectangular, annular and truncated cone volume sources; from cylindrical, spherical, truncated cone, disk and rectangular surface sources; and from curved and tilted line sources. In most of the cases where the source is curved, shields of the same curvature are allowed in addition to the standard slab shields; cylindrical shields are also allowed in the rectangular volume source flux formula. An especially complete treatment of a cylindrical volume source is given, in which dose points may be arbitrarily located both within and outside the source, and a finite cylindrical shield may be considered. Detector points may also be specified as lying within spherical and annular source volumes. The integral functions encountered in these formulas require at most two-dimensional numerical integration to evaluate the flux values. The classic flux formulas involving only slab shields and slab, disk, line, sphere and truncated cone sources are recovered as special cases, given in addition to the more general formulas mentioned above.

  2. From GCode to STL: Reconstruct Models from 3D Printing as a Service

    Science.gov (United States)

    Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus

    2017-12-01

    The authors present a method to reverse engineer 3D printer specific machine instructions (GCode) into a point cloud representation and then an STL (Stereolithography) file. GCode is a machine code used for 3D printing, among other applications such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator; in the case of Fused Deposition Modeling (FDM), the printhead extrudes semi-molten plastic. The reverse engineering method presented here is based on digital simulation of the extrusion process of FDM-type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is performed in Python and relies on open source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model's extrusion boundary and considers mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) Web service.
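
A naive version of the GCode-parsing step can be sketched as follows (this is an illustration, not the paper's implementation: absolute XYZ/E coordinates are assumed, and real slicer output would also need G91/M83 relative modes, arcs, and retraction handling):

```python
import re

def gcode_to_points(lines):
    """Collect (x, y, z) endpoints of extruding G1 moves into a point cloud."""
    points, state = [], {"X": 0.0, "Y": 0.0, "Z": 0.0, "E": 0.0}
    for line in lines:
        line = line.split(";")[0]                # strip GCode comments
        if not line.startswith(("G0", "G1")):
            continue
        prev_e = state["E"]
        for word, val in re.findall(r"([XYZE])([-+]?\d*\.?\d+)", line):
            state[word] = float(val)
        if line.startswith("G1") and state["E"] > prev_e:   # material extruded
            points.append((state["X"], state["Y"], state["Z"]))
    return points

demo = ["G0 X0 Y0 Z0.2", "G1 X10 Y0 E1.0", "G1 X10 Y10 E2.0", "G0 X0 Y0"]
print(gcode_to_points(demo))
```

Only moves that advance the extruder axis E contribute points, which is how travel moves are excluded from the reconstructed surface.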

  3. Reconstructing see-saw models

    International Nuclear Information System (INIS)

    Ibarra, Alejandro

    2007-01-01

    In this talk we discuss the prospects to reconstruct the high-energy see-saw Lagrangian from low energy experiments in supersymmetric scenarios. We show that the model with three right-handed neutrinos could be reconstructed in theory, but not in practice. Then, we discuss the prospects to reconstruct the model with two right-handed neutrinos, which is the minimal see-saw model able to accommodate neutrino observations. We identify the relevant processes to achieve this goal, and comment on the sensitivity of future experiments to them. We find the prospects much more promising and we emphasize in particular the importance of the observation of rare leptonic decays for the reconstruction of the right-handed neutrino masses

  4. Semi-analytical prediction of hydraulic resistance and heat transfer for pipe and channel flows of water at supercritical pressure

    International Nuclear Information System (INIS)

    Laurien, E.

    2012-01-01

    Within the Generation IV International Forum the Supercritical Water Reactor is investigated. For its core design and safety analysis, the efficient prediction of flow and heat transfer parameters such as the wall shear stress and the heat transfer coefficient for pipe and channel flows is needed. For circular pipe flows, a numerical model based on the one-dimensional conservation equations of mass, momentum and energy in the radial direction is presented, referred to as a 'semi-analytical' method. An accurate, high-order numerical method is employed to evaluate previously derived analytical solutions of the governing equations. Flow turbulence is modeled using the algebraic approach of Prandtl/von Kármán, including a model for the buffer layer. The influence of wall roughness is taken into account by a new modified numerical damping function of the turbulence model. The thermo-hydraulic properties of water are implemented according to the international standard of 1997. This method has the potential to be used within a sub-channel analysis code and as wall functions for CFD codes to predict the wall shear stress and the wall temperature. The present study presents a validation of the method, comparing model results with experiments and multi-dimensional computational (CFD) studies over a wide range of flow parameters. The focus is laid on forced convection flows related to reactor design and near-design conditions. It is found that the method can accurately predict the wall temperature even under deterioration conditions as they occur in the selected experiments (Yamagata et al. 1972 at 24.5 MPa, Ornatski et al. 1971 at 25.5 MPa, and Swenson et al. 1963 at 22.75 MPa). A comparison of the friction coefficient under high heat flux conditions, including significant viscosity and density reductions near the wall, with various correlations for the hydraulic resistance is presented; the best agreement is achieved with the correlation of Pioro et al. 2004. It is

  5. Strategy for solving semi-analytically three-dimensional transient flow in a coupled N-layer aquifer system

    NARCIS (Netherlands)

    Veling, E.J.M.; Maas, C.

    2008-01-01

    Efficient strategies for solving semi-analytically the transient groundwater head in a coupled N-layer aquifer system φ_i(r, z, t), i = 1, ..., N, with radial symmetry, with full z-dependency, and partially penetrating wells are presented. Aquitards are treated as aquifers with their own

  6. A semi-analytical iterative technique for solving chemistry problems

    Directory of Open Access Journals (Sweden)

    Majeed Ahmed AL-Jawary

    2017-07-01

    Full Text Available The main aim and contribution of the current paper is to implement a semi-analytical iterative method suggested by Temimi and Ansari in 2011, namely TAM, to solve two chemistry problems. The approximate solution obtained by the TAM provides fast convergence. The chemical problems considered are the absorption of carbon dioxide into phenyl glycidyl ether and a chemical kinetics problem. These problems are represented by systems of nonlinear ordinary differential equations with boundary and initial conditions. Error analysis of the approximate solutions is studied using the error remainder and the maximal error remainder, and an exponential rate of convergence is observed. For both problems the results of the TAM are compared with results obtained by previous methods available in the literature. The results demonstrate that the method has many merits: it is derivative-free, and it overcomes the difficulty of calculating Adomian polynomials to handle the nonlinear terms in the Adomian Decomposition Method (ADM). It does not require calculating the Lagrange multiplier as in the Variational Iteration Method (VIM), in which the terms of the sequence become complex after several iterations, so that analytical evaluation of terms becomes very difficult or impossible. Nor does it require constructing a homotopy and solving the corresponding algebraic equations as in the Homotopy Perturbation Method (HPM). The MATHEMATICA® 9 software was used to evaluate terms in the iterative process.
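
The flavor of the TAM iteration can be shown on a toy ODE rather than the paper's chemistry problems: for u' + u² = 0, u(0) = 1 (exact solution 1/(1+t)), each iterate solves the linear part L(u) = u' with the previous nonlinearity as a source, here done symbolically with SymPy:

```python
import sympy as sp

t = sp.symbols("t")

# TAM-style sketch for u' + u^2 = 0, u(0) = 1 (exact: 1/(1+t)).
# L(u) = u', N(u) = u^2; each step solves u_{n+1}' = -N(u_n), u_{n+1}(0) = 1,
# i.e. u_{n+1}(t) = 1 - integral_0^t u_n(s)^2 ds.
u = sp.Integer(1)                       # u_0 solves L(u0) = 0 with u0(0) = 1
for _ in range(4):
    u = 1 - sp.integrate(u**2, (t, 0, t))

approx = sp.lambdify(t, u)
print(abs(approx(0.1) - 1 / 1.1))       # small error near t = 0
```

Each pass costs only one symbolic integration, which is the derivative-free, polynomial-free character the abstract contrasts with ADM and VIM.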

  7. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    Science.gov (United States)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.

  8. Calculation of photon attenuation coefficients of elements and compounds from approximate semi-analytical formulae

    International Nuclear Information System (INIS)

    Roteta, M.; Baro, J.; Fernandez-Varea, J.M.; Salvat, F.

    1994-01-01

    The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within approximately 1% in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included.
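    The adaptive 20-point Gauss scheme mentioned above can be sketched generically: apply the fixed rule to an interval, bisect it, and accept when the two half-interval estimates reproduce the whole-interval estimate to the requested tolerance. This is an illustrative reimplementation (in Python rather than FORTRAN 77), not the PHOTAC code; the sharply peaked test integrand merely stands in for a differential cross section:

```python
import numpy as np

# nodes and weights for the 20-point Gauss-Legendre rule on [-1, 1]
NODES, WEIGHTS = np.polynomial.legendre.leggauss(20)

def gauss20(f, a, b):
    """20-point Gauss-Legendre estimate of the integral of f over [a, b]."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * np.sum(WEIGHTS * f(mid + half * NODES))

def adaptive_gauss(f, a, b, tol=1e-10):
    """Bisect recursively until the two halves agree with the whole interval."""
    whole = gauss20(f, a, b)
    mid = 0.5 * (a + b)
    left, right = gauss20(f, a, mid), gauss20(f, mid, b)
    if abs(left + right - whole) < tol * max(1.0, abs(whole)):
        return left + right
    return adaptive_gauss(f, a, mid, tol) + adaptive_gauss(f, mid, b, tol)

# example: a peaked integrand with the exact value (1/a) * arctan(1/a), a = 0.01
val = adaptive_gauss(lambda x: 1.0 / (1e-4 + x * x), 0.0, 1.0)
exact = np.arctan(100.0) / 0.01
print(abs(val - exact))  # agrees with the analytical result to high accuracy
```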

  9. Semi Active Control of Civil Structures, Analytical and Numerical Studies

    Science.gov (United States)

    Kerboua, M.; Benguediab, M.; Megnounif, A.; Benrahou, K. H.; Kaoulala, F.

    Structural control for civil structures was born out of a need to provide safer and more efficient designs given the reality of limited resources. The purpose of structural control is to absorb and to reflect the energy introduced by dynamic loads such as winds, waves, earthquakes, and traffic. Today, the protection of civil structures from severe dynamic loading is typically achieved by allowing the structures to be damaged. Semi-active control devices, also called "smart" control devices, combine the positive aspects of both passive and active control devices. A semi-active control strategy is similar to an active control strategy, but the control actuator does not apply force directly to the structure; instead, it is used to adjust the properties of a passive energy device, a controllable passive damper. Semi-active control strategies can be used in many of the same civil applications as passive and active control. One method of operating smart cable dampers is in a purely passive capacity, supplying the dampers with a constant optimal voltage. The advantages of this strategy are the relative simplicity of implementation compared to a smart or active control strategy, and that the dampers are more easily tuned optimally in place, eliminating the need for passive dampers with unique optimal damping coefficients. This research investigated semi-active control of civil structures for natural hazard mitigation. The research has two components: the seismic protection of buildings and the mitigation of wind-induced vibration in structures. An idealized semi-active equation of motion for a composite beam, consisting of a cantilever beam bonded with a PZT patch, was derived using Hamilton's principle and Galerkin's method. Series R-L and parallel R-L shunt circuits are coupled into the equation of motion by means of the constitutive relation of the piezoelectric material and Kirchhoff's law to control the beam vibration. A

  10. DEVELOPMENT OF ANALYTICAL MODELS FOR THE ANALYSIS OF FOUNDATIONS OF BUILDINGS AND STRUCTURES IN THE DENSE URBAN ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Koreneva Elena Borisovna

    2012-10-01

    Full Text Available The author proposes analytical methods for the analysis of foundation slabs in the dense environment of present-day cities and towns. Two analytical models, based on semi-infinite and finite beams, are considered. The influence produced by adjacent tunnels, deep excavations, and foundation pits is examined. Bedding properties are described using the Winkler model. Additional deflections and deflection angles must be taken into account in the above-mentioned cases.

  11. Semi-Markov Arnason-Schwarz models.

    Science.gov (United States)

    King, Ruth; Langrock, Roland

    2016-06-01

    We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
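    The dwell-time point made above is easy to illustrate numerically: a time-homogeneous first-order Markov chain forces a geometric dwell-time distribution whose mode is always one, whereas a shifted-Poisson dwell time can place the mode wherever the data require. A small sketch (the parameter values are arbitrary, chosen only for illustration):

```python
import math

def geometric_pmf(k, p):
    """P(dwell = k) when the chain leaves the state with probability p each step."""
    return (1 - p) ** (k - 1) * p

def shifted_poisson_pmf(k, lam):
    """Shifted Poisson dwell time: 1 + Poisson(lam), so support starts at k = 1."""
    return math.exp(-lam) * lam ** (k - 1) / math.factorial(k - 1)

ks = range(1, 11)
geo_mode = max(ks, key=lambda k: geometric_pmf(k, 0.3))
sp_mode = max(ks, key=lambda k: shifted_poisson_pmf(k, 3.5))
print(geo_mode, sp_mode)  # geometric dwell is always modal at 1; shifted Poisson is not
```

    In the semi-Markov Arnason-Schwarz model the second distribution (or a negative binomial) replaces the implicit geometric one, at the cost of only the extra dwell-time parameters.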

  12. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    Science.gov (United States)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique operates directly on tensors via tensor-to-vector projection, imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test the approach, and extensive experiments yield high-quality hallucinated color faces. The experimental results clearly demonstrate that photorealistic color face images can be generated by using the SO-MPCA subspace with a linear regression model.

  13. Selection bias in dynamically measured supermassive black hole samples: scaling relations and correlations between residuals in semi-analytic galaxy formation models

    Science.gov (United States)

    Barausse, Enrico; Shankar, Francesco; Bernardi, Mariangela; Dubois, Yohan; Sheth, Ravi K.

    2017-07-01

    Recent work has confirmed that the scaling relations between the masses of supermassive black holes and host-galaxy properties such as stellar masses and velocity dispersions may be biased high. Much of this may be caused by the requirement that the black hole sphere of influence must be resolved for the black hole mass to be reliably estimated. We revisit this issue with a comprehensive galaxy evolution semi-analytic model. Once tuned to reproduce the (mean) correlation of black hole mass with velocity dispersion, the model cannot account for the correlation with stellar mass. This is independent of the model's parameters, thus suggesting an internal inconsistency in the data. The predicted distributions, especially at the low-mass end, are also much broader than observed. However, if selection effects are included, the model's predictions tend to align with the observations. We also demonstrate that the correlations between the residuals of the scaling relations are more effective than the relations themselves at constraining models for the feedback of active galactic nuclei (AGNs). In fact, we find that our model, while in apparent broad agreement with the scaling relations when accounting for selection biases, yields very weak correlations between their residuals at fixed stellar mass, in stark contrast with observations. This problem persists when changing the AGN feedback strength, and is also present in the hydrodynamic cosmological simulation Horizon-AGN, which includes state-of-the-art treatments of AGN feedback. This suggests that current AGN feedback models are too weak or simply not capturing the effect of the black hole on the stellar velocity dispersion.

  14. Computation of potentials from current electrodes in cylindrically stratified media: A stable, rescaled semi-analytical formulation

    Science.gov (United States)

    Moon, Haksu; Teixeira, Fernando L.; Donderici, Burkay

    2015-01-01

    We present an efficient and robust semi-analytical formulation to compute the electric potential due to arbitrary-located point electrodes in three-dimensional cylindrically stratified media, where the radial thickness and the medium resistivity of each cylindrical layer can vary by many orders of magnitude. A basic roadblock for robust potential computations in such scenarios is the poor scaling of modified-Bessel functions used for computation of the semi-analytical solution, for extreme arguments and/or orders. To accommodate this, we construct a set of rescaled versions of modified-Bessel functions, which avoids underflows and overflows in finite precision arithmetic, and minimizes round-off errors. In addition, several extrapolation methods are applied and compared to expedite the numerical evaluation of the (otherwise slowly convergent) associated Sommerfeld-type integrals. The proposed algorithm is verified in a number of scenarios relevant to geophysical exploration, but the general formulation presented is also applicable to other problems governed by Poisson equation such as Newtonian gravity, heat flow, and potential flow in fluid mechanics, involving cylindrically stratified environments.
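    The rescaling idea described in this record corresponds, in readily available libraries, to the exponentially scaled modified-Bessel functions. A short sketch (using SciPy's `ive`/`kve`, which is an assumption about tooling, not the authors' implementation) showing how the scaled versions avoid the overflow and underflow that defeat the unscaled functions, while combinations such as I₀(x)K₀(x) are recovered exactly because the exponential factors cancel:

```python
import numpy as np
from scipy.special import iv, kv, ive, kve

x = 800.0
print(iv(0, x), kv(0, x))    # unscaled: overflow to inf and underflow to 0.0
print(ive(0, x), kve(0, x))  # rescaled values remain well within double precision

# The exponential factors cancel in the product, so the physically meaningful
# combination I_0(x) * K_0(x) ~ 1/(2x) is recovered from the scaled functions:
product = ive(0, x) * kve(0, x)  # = [iv(0,x)*exp(-x)] * [kv(0,x)*exp(x)]
print(product, 1 / (2 * x))
```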

  15. Semi-Analytic Galaxies - I. Synthesis of environmental and star-forming regulation mechanisms

    Science.gov (United States)

    Cora, Sofía A.; Vega-Martínez, Cristian A.; Hough, Tomás; Ruiz, Andrés N.; Orsi, Álvaro; Muñoz Arancibia, Alejandra M.; Gargiulo, Ignacio D.; Collacchioni, Florencia; Padilla, Nelson D.; Gottlöber, Stefan; Yepes, Gustavo

    2018-05-01

    We present results from the semi-analytic model of galaxy formation SAG applied on the MULTIDARK simulation MDPL2. SAG features an updated supernova (SN) feedback scheme and a robust modelling of the environmental effects on satellite galaxies. This incorporates a gradual starvation of the hot gas halo driven by the action of ram pressure stripping (RPS), which can affect the cold gas disc, and tidal stripping (TS), which can act on all baryonic components. Galaxy orbits of orphan satellites are integrated, providing adequate positions and velocities for the estimation of RPS and TS. The star formation history and stellar mass assembly of galaxies are sensitive to the redshift dependence implemented in the SN feedback model. We discuss a variant of our model that allows us to reconcile the predicted star formation rate density at z ≳ 3 with the observed one, at the expense of an excess in the faint end of the stellar mass function at z = 2. The fractions of passive galaxies as functions of stellar mass, halo mass and halo-centric distance are consistent with observational measurements. The model also reproduces the evolution of the main sequence of star-forming central and satellite galaxies. The similarity between them is a result of the gradual starvation of the hot gas halo suffered by satellites, in which RPS plays a dominant role. RPS of the cold gas does not affect the fraction of quenched satellites, but it contributes to reaching the right atomic hydrogen gas content for more massive satellites (M⋆ ≳ 10¹⁰ M⊙).

  16. A semi-analytical approach for solving of nonlinear systems of functional differential equations with delay

    Science.gov (United States)

    Rebenda, Josef; Šmarda, Zdeněk

    2017-07-01

    In the paper, we propose a correct and efficient semi-analytical approach to solve initial value problem for systems of functional differential equations with delay. The idea is to combine the method of steps and differential transformation method (DTM). In the latter, formulas for proportional arguments and nonlinear terms are used. An example of using this technique for a system with constant and proportional delays is presented.

  17. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2013-01-01

    An accurate and efficient method for the pin-by-pin reconstruction of the nuclear power density distribution, based on the analytical solution of the two-dimensional multigroup neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)
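    The reconstruction step described above (a smooth homogeneous nodal flux multiplied by a precomputed heterogeneous form function) can be sketched in one dimension. Everything here is illustrative: the flux shape, the form function, and the 17-pin row are invented stand-ins, not data from the paper:

```python
import numpy as np

# pin powers = smooth homogeneous nodal flux  x  heterogeneous form function
pins = np.arange(17)

# smooth homogeneous flux shape across the assembly (from the nodal solution)
phi_hom = 1.0 + 0.2 * np.cos(np.pi * (pins - 8) / 16)

# lattice-code form function capturing pin-level heterogeneity (assumed known);
# here power is depressed at hypothetical guide-tube positions
form = np.where(pins % 4 == 0, 0.85, 1.05)

pin_power = phi_hom * form
pin_power /= pin_power.mean()  # normalize to unit assembly-average power
print(pin_power.round(3))
```

    The nodal solution supplies only the smooth envelope; all pin-level structure comes from the form function, which is why the quality of the homogeneous solution at node surfaces and corners matters.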

  18. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for the pin-by-pin reconstruction of the nuclear power density distribution, based on the analytical solution of the two-dimensional multigroup neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)

  19. Semi-analytic calculation of the gravitational wave signal from the electroweak phase transition for general quartic scalar effective potentials

    International Nuclear Information System (INIS)

    Kehayias, John; Profumo, Stefano

    2010-01-01

    Upcoming gravitational wave (GW) detectors might detect a stochastic background of GWs potentially arising from many possible sources, including bubble collisions from a strongly first-order electroweak phase transition. We investigate whether it is possible to connect, via a semi-analytical approximation to the tunneling rate of scalar fields with quartic potentials, the GW signal through detonations with the parameters entering the potential that drives the electroweak phase transition. To this end, we consider a finite temperature effective potential similar in form to the Higgs potential in the Standard Model (SM). In the context of a semi-analytic approximation to the three dimensional Euclidean action, we derive a general approximate form for the tunneling temperature and the relevant GW parameters. We explore the GW signal across the parameter space describing the potential which drives the phase transition. We comment on the potential detectability of a GW signal with future experiments, and physical relevance of the associated potential parameters in the context of theories which have effective potentials similar in form to that of the SM. In particular we consider singlet, triplet, higher dimensional operators, and top-flavor extensions to the Higgs sector of the SM. We find that the addition of a temperature independent cubic term in the potential, arising from a gauge singlet for instance, can greatly enhance the GW power. The other parameters have milder, but potentially noticeable, effects

  20. Semi-analytical treatment of fracture/matrix flow in a dual-porosity simulator for unsaturated fractured rock masses

    International Nuclear Information System (INIS)

    Zimmerman, R.W.; Bodvarsson, G.S.

    1992-04-01

    A semi-analytical dual-porosity simulator for unsaturated flow in fractured rock masses has been developed. Fluid flow between the fracture network and the matrix blocks is described by analytical expressions that have been derived from approximate solutions to the imbibition equation. These expressions have been programmed into the unsaturated flow simulator, TOUGH, as a source/sink term. Flow processes are then simulated using only fracture elements in the computational grid. The modified code is used to simulate flow along single fractures, and infiltration into pervasively fractured formations

  1. Semi-blind sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  2. Use of an object model in three dimensional image reconstruction. Application in medical imaging

    International Nuclear Information System (INIS)

    Delageniere-Guillot, S.

    1993-02-01

    Three-dimensional image reconstruction from projections corresponds to a set of techniques which give information on the inner structure of the studied object. These techniques are mainly used in medical imaging or in non-destructive evaluation. Image reconstruction is an ill-posed problem, so the inversion has to be regularized. This thesis deals with the introduction of a priori information into the reconstruction algorithm. The knowledge is introduced through an object model. The proposed scheme is applied to the medical domain for cone-beam geometry. We address two specific problems. First, we study the reconstruction of high-contrast objects. This can be applied to bony morphology (bone/soft tissue) or to angiography (vascular structures opacified by injection of a contrast agent). With noisy projections, the filtering steps of standard methods tend to smooth the natural transitions of the investigated object. In order to regularize the reconstruction while preserving contrast, we introduce a model of classes based on Markov random field theory, and we develop an analytic reconstruction-reprojection scheme. Then, we address the case of an object changing during the acquisition. This can be applied to angiography when the contrast agent is moving through the vascular tree. The problem is then stated as a dynamic reconstruction. We define an autoregressive (AR) evolution model and use an algebraic reconstruction method. We represent the object at a particular moment as an intermediate state between the states of the object at the beginning and at the end of the acquisition. We test both methods on simulated and real data, and show how the use of an a priori model can improve the results. (author)

  3. Analytic reconstruction of magnetic resonance imaging signal obtained from a periodic encoding field.

    Science.gov (United States)

    Rybicki, F J; Hrovat, M I; Patz, S

    2000-09-01

    We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x,y) = g(y)y cos(q(x)x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution used a finite sum to approximate an infinite summation and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also provides a proof of the system function for the PERL transform under specific conditions.

  4. A semi-analytical approach towards plane wave analysis of local resonance metamaterials using a multiscale enriched continuum description

    NARCIS (Netherlands)

    Sridhar, A.; Kouznetsova, V.; Geers, M.G.D.

    2017-01-01

    This work presents a novel multiscale semi-analytical technique for the acoustic plane wave analysis of (negative) dynamic mass density type local resonance metamaterials with complex micro-structural geometry. A two step solution strategy is adopted, in which the unit cell problem at the

  5. A semi-analytical study of the vibrations induced by flow in the piping of nuclear power plants

    International Nuclear Information System (INIS)

    Maneschy, J.E.

    1981-01-01

    A semi-analytical method is presented to evaluate the piping system safety due to internal flow vibration excitation. The method is based on the application of a plane spectrum on the system, resulted by measured modal accelerations. A criteria is established to verify stress levels and compare with the allowable levels. (Author) [pt

  6. CrossRef Energy Reconstruction in a High Granularity Semi-Digital Hadronic Calorimeter for ILC Experiments

    CERN Document Server

    Mannai, S; Cortina, E; Laktineh, I

    2016-01-01

    Abstract: The Semi-Digital Hadronic CALorimeter (SDHCAL) is one of the two hadronic calorimeter options proposed by the International Large Detector (ILD) project for the future International Linear Collider (ILC) experiments. It is a sampling calorimeter with 48 active layers made of Glass Resistive Plate Chambers (GRPCs) and their embedded electronics. A fine lateral segmentation is obtained thanks to pickup pads of 1 cm². This ensures the high granularity required for the application of the Particle Flow Algorithm (PFA) in order to improve the jet energy resolution in the ILC experiments. The performance of the SDHCAL technological prototype was tested successfully in several beam tests at CERN. The main point to be discussed here concerns the energy reconstruction in SDHCAL. Based on Monte Carlo simulation of the SDHCAL prototype using the GEANT4 package, we present different energy reconstruction methods to study the energy linearity and resolution of the detector response to single hadrons. In particula...

  7. Reconstructing the dark sector interaction with LISA

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Rong-Gen; Yang, Tao [CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190 (China); Tamanini, Nicola, E-mail: cairg@itp.ac.cn, E-mail: nicola.tamanini@cea.fr, E-mail: yangtao@itp.ac.cn [Institut de Physique Théorique, CEA-Saclay, CNRS UMR 3681, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France)

    2017-05-01

    We perform a forecast analysis of the ability of the LISA space-based interferometer to reconstruct the dark sector interaction using gravitational wave standard sirens at high redshift. We employ Gaussian process methods to reconstruct the distance-redshift relation in a model independent way. We adopt simulated catalogues of standard sirens given by merging massive black hole binaries visible by LISA, with an electromagnetic counterpart detectable by future telescopes. The catalogues are based on three different astrophysical scenarios for the evolution of massive black hole mergers based on the semi-analytic model of E. Barausse, Mon. Not. Roy. Astron. Soc. 423 (2012) 2533. We first use these standard siren datasets to assess the potential of LISA in reconstructing a possible interaction between vacuum dark energy and dark matter. Then we combine the LISA cosmological data with supernovae data simulated for the Dark Energy Survey. We consider two scenarios distinguished by the time duration of the LISA mission: 5 and 10 years. Using only LISA standard siren data, the dark sector interaction can be well reconstructed from redshift z ∼1 to z ∼3 (for a 5 years mission) and z ∼1 up to z ∼5 (for a 10 years mission), though the reconstruction is inefficient at lower redshift. When combined with the DES datasets, the interaction is well reconstructed in the whole redshift region from z ∼0 to z ∼3 (5 yr) and from z ∼0 to z ∼5 (10 yr), respectively. Massive black hole binary standard sirens can thus be used to constrain the dark sector interaction at redshift ranges not reachable by usual supernovae datasets, which probe only the z ≲ 1.5 range. Gravitational wave standard sirens will not only constitute a complementary and alternative way, with respect to familiar electromagnetic observations, to probe the cosmic expansion, but will also provide new tests to constrain possible deviations from the standard ΛCDM dynamics, especially at high redshift.
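    The model-independent Gaussian-process reconstruction mentioned above can be sketched with a plain squared-exponential-kernel regression of a distance-like quantity against redshift. The smooth relation, noise level, and kernel hyperparameters below are invented for illustration; the actual analysis uses simulated LISA standard-siren catalogues rather than this toy dataset:

```python
import numpy as np

def gp_reconstruct(z_train, d_train, z_test, ell=0.5, sigma_f=1.0, noise=0.05):
    """GP posterior mean with a squared-exponential kernel: no parametric
    cosmological model is assumed for the distance-redshift relation."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(z_train, z_train) + noise**2 * np.eye(len(z_train))
    alpha = np.linalg.solve(K, d_train)
    return k(z_test, z_train) @ alpha

rng = np.random.default_rng(0)
z = np.sort(rng.uniform(0.1, 3.0, 40))     # mock "standard siren" redshifts
d_true = np.log(1 + z)                     # stand-in smooth relation, not a cosmology
d_obs = d_true + 0.05 * rng.normal(size=z.size)
z_grid = np.linspace(0.2, 2.8, 50)
d_rec = gp_reconstruct(z, d_obs, z_grid)
print(np.max(np.abs(d_rec - np.log(1 + z_grid))))  # reconstruction error stays small
```

    In the forecast itself the same machinery is applied to mock luminosity distances, and the interaction term is then obtained from derivatives of the reconstructed relation.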

  8. Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Spain and Grupo de Imaxe Molecular, IDIS, Santiago de Compostela 15706 (Spain); Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Spain and Servei de Física Médica i Protecció Radiológica, Institut Catalá d' Oncologia, Barcelona 08036 (Spain); Silva-Rodríguez, Jesús [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Santiago de Compostela 15706 (Spain); Pavía, Javier [Servei de Medicina Nuclear, Hospital Clínic, Barcelona (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ros, Doménec [Unitat de Biofísica, Facultat de Medicina, Casanova 143 (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ruibal, Álvaro [Servicio Medicina Nuclear, CHUS (Spain); Grupo de Imaxe Molecular, Facultade de Medicina (USC), IDIS, Santiago de Compostela 15706 (Spain); Fundación Tejerina, Madrid (Spain); and others

    2014-03-15

    Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the

  9. Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction

    International Nuclear Information System (INIS)

    Aguiar, Pablo; Pino, Francisco; Silva-Rodríguez, Jesús; Pavía, Javier; Ros, Doménec; Ruibal, Álvaro

    2014-01-01

    Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the

  10. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation so that the spatial variation of the source term becomes small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method has no negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)

  11. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    Full Text Available The Global Navigation Satellite System-Interferometry and Reflectometry (GNSS-IR) technique for soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. This model aims at reconstructing the direct and reflected signals from SNR data while extracting the frequency and phase information that is affected by soil moisture, as proposed by K. M. Larson et al. This is achieved empirically by approximating the direct and reflected signals with second-order and fourth-order polynomials, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al., T. Yang et al.), this model can improve the Quality of Fit (QoF) with little prior knowledge needed and allows soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed through simulations under the bare-soil assumption how noise affects the receiver SNR estimation and thus the model performance. Results showed that the reconstructed signals with a grazing angle of 5°–15° were better for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information. However, we found that the improvement in phase estimation could be neglected. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model could ensure good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer to the ground truth measurements. A deeper insight into the Larson model and the proposed model was given at this stage, which formed a possible explanation of this fact. Furthermore, frequency and phase information
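The detrend-then-fit idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the function name, the polynomial order and the candidate-frequency grid are assumptions. The direct signal is approximated by a low-order polynomial in sin(elevation), and the residual multipath oscillation is fitted by least squares over a grid of candidate frequencies.

```python
import numpy as np

# Illustrative sketch: detrend SNR with a 2nd-order polynomial in sin(e)
# (direct signal), then least-squares fit the residual oscillation over a
# grid of candidate frequencies. Names and values are assumptions.

def fit_snr(elev_deg, snr, poly_order=2):
    x = np.sin(np.radians(elev_deg))               # sin(elevation)
    direct = np.polyval(np.polyfit(x, snr, poly_order), x)
    residual = snr - direct                        # reflected / multipath part
    best = None
    for f in np.linspace(1.0, 100.0, 2000):        # candidate frequencies
        A = np.column_stack([np.cos(2*np.pi*f*x), np.sin(2*np.pi*f*x)])
        coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
        sse = np.sum((residual - A @ coef)**2)
        if best is None or sse < best[0]:
            best = (sse, f, coef)
    _, f_hat, (a, b) = best
    return f_hat, np.hypot(a, b), np.arctan2(-b, a)  # frequency, amplitude, phase

# synthetic check with a known multipath frequency of 40 cycles per unit sin(e)
e = np.linspace(5.0, 15.0, 400)                    # grazing angles 5-15 deg
xs = np.sin(np.radians(e))
snr = 30 + 5*xs - 2*xs**2 + 1.5*np.cos(2*np.pi*40*xs + 0.3)
f_hat, amp, phase = fit_snr(e, snr)
```

On the synthetic data the known frequency and amplitude are recovered; with real SNR data the estimated frequency is what encodes the reflector height and, indirectly, soil moisture.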

  12. The semi-Lagrangian method on curvilinear grids

    Directory of Open Access Journals (Sweden)

    Hamiaz Adnane

    2016-09-01

    Full Text Available We study the semi-Lagrangian method on curvilinear grids. The classical backward semi-Lagrangian method [1] preserves constant states but is not mass conservative. Nevertheless, natural reconstruction of the field permits at least first-order-in-time conservation of mass, even if the spatial error is large. Interpolation is performed with classical cubic splines and also with cubic Hermite interpolation with arbitrary reconstruction order of the derivatives. High odd-order reconstruction of the derivatives is shown to be a good substitute for cubic splines, which do not behave very well as the time step tends to zero. A conservative semi-Lagrangian scheme along the lines of [2] is then described; here conservation of mass is automatically satisfied and constant states are shown to be preserved up to first order in time.
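A minimal 1-D sketch of the classical backward semi-Lagrangian step on a uniform periodic grid (illustrative only; the paper works on curvilinear grids with cubic splines and Hermite interpolation). Because the cubic Lagrange interpolation weights sum to one, constant states are preserved exactly; mass conservation is not guaranteed in general, although in this uniform-velocity periodic test it happens to hold by symmetry.

```python
import numpy as np

# Backward semi-Lagrangian advection sketch: trace each grid point back to
# its departure point and interpolate the field there with a 4-point cubic
# Lagrange stencil on a periodic grid.

def cubic_interp_periodic(f, xq, dx):
    n = f.size
    j = np.floor(xq / dx).astype(int)           # left grid index of each query
    t = xq / dx - j                             # fractional position in cell
    idx = [(j + k) % n for k in (-1, 0, 1, 2)]  # periodic 4-point stencil
    w = [-t*(t-1)*(t-2)/6, (t+1)*(t-1)*(t-2)/2,
         -t*(t+1)*(t-2)/2, t*(t+1)*(t-1)/6]     # Lagrange weights, sum to 1
    return sum(wk * f[i] for wk, i in zip(w, idx))

def advect(f, a, dt, dx, nsteps):
    n = f.size
    x = dx * np.arange(n)
    for _ in range(nsteps):
        xd = (x - a*dt) % (n*dx)                # departure points (constant a)
        f = cubic_interp_periodic(f, xd, dx)
    return f

n, dx, a = 128, 1.0/128, 1.0
dt = 0.3 * dx / a                               # CFL 0.3 (SL allows larger)
x = dx * np.arange(n)
const = advect(np.ones(n), a, dt, dx, 50)       # constant state: preserved
bump = np.exp(-200.0*(x - 0.5)**2)
out = advect(bump, a, dt, dx, 50)               # bump shifted by 15 cells
```

The constant field comes back exactly as ones, and the Gaussian bump arrives at the expected shifted position with only mild interpolation damping.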

  13. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modeling tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, i.e. a collection of semi-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  14. Analytical approach to determine vertical dynamics of a semi-trailer truck from the point of view of goods protection

    Science.gov (United States)

    Pidl, Renáta

    2018-01-01

    The overwhelming majority of intercontinental long-haul transportation of goods is carried out by road using semi-trailer trucks. Vibration has a major effect on the safety of the transport, the load and the transported goods. This paper deals with the logistics goals from the point of view of vibration and summarizes the methods to predict or measure the vibration load in order to design a proper system. Of these methods, the focus of this paper is on computer simulation of the vibration. An analytical method is presented to calculate the vertical dynamics of a semi-trailer truck with general viscous damping exposed to harmonic base excitation. For the purpose of better understanding, the method is presented through a simplified four degrees-of-freedom (DOF) half-vehicle model, which neglects the stiffness and damping of the tires; thus the four degrees of freedom are the vertical and angular displacements of the truck and the trailer. From the vertical and angular accelerations of the trailer, the vertical acceleration of each point of the platform of the trailer can easily be determined, from which the forces acting on the transported goods are obtained. As a result, the response of the full platform-load-packaging system to any kind of vehicle, load and road condition can be analyzed. The peak acceleration of any point on the platform can be determined by the presented analytical method.
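The frequency-domain solution underlying such an analysis can be sketched for a generic linear, viscously damped multi-DOF system under harmonic base excitation. The 2-DOF quarter-car matrices and parameter values below are hypothetical stand-ins, not the paper's 4-DOF half-vehicle model (which, unlike this sketch, neglects tire stiffness).

```python
import numpy as np

# Steady-state harmonic response of a linear damped multi-DOF system:
#   (K - w^2 M + i w C) X = F,   x(t) = Re(X e^{i w t}).
# The 2-DOF quarter-car below is a hypothetical stand-in.

def steady_state_amplitude(M, C, K, F, w):
    Z = K - w**2 * M + 1j*w*C          # dynamic stiffness at frequency w
    return np.linalg.solve(Z, F)       # complex displacement amplitudes

m1, m2 = 300.0, 40.0                   # sprung / unsprung mass [kg]
k1, k2 = 2.0e4, 1.8e5                  # suspension / tire stiffness [N/m]
c1 = 1.5e3                             # suspension damping [N s/m]
M = np.diag([m1, m2])
K = np.array([[k1, -k1], [-k1, k1 + k2]])
C = np.array([[c1, -c1], [-c1, c1]])

w = 2*np.pi*1.5                        # 1.5 Hz harmonic road input
y0 = 0.01                              # 10 mm base amplitude
F = np.array([0.0, k2*y0])             # base excitation enters through the tire
X = steady_state_amplitude(M, C, K, F, w)
accel = np.abs(-w**2 * X)              # acceleration amplitudes of both masses
```

In the static limit (w = 0) both masses simply follow the base displacement, which is a quick sanity check on the matrices.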

  15. Flow modeling in a porous cylinder with regressing walls using semi analytical approach

    Directory of Open Access Journals (Sweden)

    M Azimi

    2016-10-01

    Full Text Available In this paper, the mathematical modeling of the flow in a porous cylinder with a focus on applications to solid rocket motors is presented. As usual, the cylindrical propellant grain of a solid rocket motor is modeled as a long tube with one end closed at the headwall, while the other remains open. The cylindrical wall is assumed to be permeable so as to simulate the propellant burning and normal gas injection. First, the problem description and formulation are considered. The Navier-Stokes equations for the viscous flow in a porous cylinder with regressing walls are reduced to a nonlinear ODE by using a similarity transformation in time and space. The Differential Transformation Method (DTM) has been successfully applied as an approximate analytical method. Finally, the results are presented for various cases.

  16. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    International Nuclear Information System (INIS)

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from the release to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data is limited. It was not intended to be a predictive tool
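As an illustration of the kind of semi-analytical advection-dispersion solution mentioned above, the classic 1-D Ogata-Banks solution for a constant-concentration inlet can be written down directly. This is a textbook sketch, not GWSCREEN's actual 3-D algorithm, and the parameter values are hypothetical.

```python
from math import erfc, exp, sqrt

# Textbook 1-D Ogata-Banks solution for a constant-concentration inlet at
# x = 0: a sketch of the advection-dispersion class of solutions only.

def ogata_banks(x, t, v, D, c0=1.0):
    """C(x, t) for pore velocity v and dispersion coefficient D."""
    if t <= 0.0:
        return 0.0
    a = (x - v*t) / (2.0*sqrt(D*t))
    b = (x + v*t) / (2.0*sqrt(D*t))
    try:
        second = exp(v*x/D) * erfc(b)   # erfc(b) usually underflows before exp overflows
    except OverflowError:
        second = 0.0
    return 0.5*c0*(erfc(a) + second)

# hypothetical values: v = 0.1 m/d, D = 0.5 m^2/d; the front sits near x = v*t
c_front = ogata_banks(x=10.0, t=100.0, v=0.1, D=0.5)
```

At x = 0 the solution reproduces the inlet concentration exactly, and concentrations decay monotonically with distance ahead of the front.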

  17. A semi-spring and semi-edge combined contact model in CDEM and its application to analysis of Jiweishan landslide

    Directory of Open Access Journals (Sweden)

    Chun Feng

    2014-02-01

    Full Text Available Continuum-based discrete element method (CDEM) is an explicit numerical method used for simulation of the progressive failure of a geological body. To improve the efficiency of contact detection and simplify the calculation steps for contact forces, semi-springs and semi-edges are introduced in the calculation. A semi-spring is derived from a block vertex, and formed by indenting the block vertex into each face (24 semi-springs for a hexahedral element). The formation process of a semi-edge is the same as that of a semi-spring (24 semi-edges for a hexahedral element). Based on the semi-springs and semi-edges, a new type of combined contact model is presented. According to this model, six contact types can be reduced to two, i.e. the semi-spring target-face contact and the semi-edge target-edge contact. With the combined model, the contact force can be calculated directly (the information on contact type is not necessary), and the failure judgment can be executed in a straightforward way (each semi-spring and semi-edge owns its characteristic area). The algorithm has been successfully implemented in a C++ program. Some simple numerical cases are presented to show the validity and accuracy of this model. Finally, the failure mode, sliding distance and critical friction angle of the Jiweishan landslide are studied with the combined model.

  18. Frames and semi-frames

    International Nuclear Information System (INIS)

    Antoine, Jean-Pierre; Balazs, Peter

    2011-01-01

    Loosely speaking, a semi-frame is a generalized frame for which one of the frame bounds is absent. More precisely, given a total sequence in a Hilbert space, we speak of an upper (resp. lower) semi-frame if only the upper (resp. lower) frame bound is valid. Equivalently, for an upper semi-frame, the frame operator is bounded, but has an unbounded inverse, whereas a lower semi-frame has an unbounded frame operator, with a bounded inverse. We study mostly upper semi-frames, both in the continuous and discrete case, and give some remarks for the dual situation. In particular, we show that reconstruction is still possible in certain cases.

  19. Target normal sheath acceleration analytical modeling, comparative study and developments

    International Nuclear Information System (INIS)

    Perego, C.; Batani, D.; Zani, A.; Passoni, M.

    2012-01-01

    Ultra-intense laser interaction with solid targets appears to be an extremely promising technique to accelerate ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of the published experimental results can be theoretically explained in the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation, various analytical or semi-analytical TNSA models have been published in recent years, each of them trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to clarify the present situation of TNSA modeling and experimental results by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of such an analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are presented.

  20. Weighted-indexed semi-Markov models for modeling financial returns

    International Nuclear Information System (INIS)

    D’Amico, Guglielmo; Petroni, Filippo

    2012-01-01

    In this paper we propose a new stochastic model based on a generalization of semi-Markov chains for studying the high frequency price dynamics of traded stocks. We assume that the financial returns are described by a weighted-indexed semi-Markov chain model. We show, through Monte Carlo simulations, that the model is able to reproduce important stylized facts of financial time series such as the first-passage-time distributions and the persistence of volatility. The model is applied to data from the Italian and German stock markets from 1 January 2007 until the end of December 2010. (paper)
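A plain (unweighted, unindexed) semi-Markov chain can be simulated in a few lines; the paper's weighted-indexed generalization adds an index process on top of this basic scheme. The states, transition probabilities and sojourn-time laws below are illustrative, not taken from the paper.

```python
import random

# Ordinary semi-Markov chain simulator: embedded Markov transitions plus
# state-dependent random sojourn times. Illustrative states and laws only.

def simulate_smc(P, sojourn, s0, horizon, rng):
    """P: dict state -> {next_state: prob}; sojourn: dict state -> sampler."""
    t, s, path = 0.0, s0, []
    while t < horizon:
        tau = sojourn[s](rng)                          # holding time in state s
        path.append((s, tau))
        t += tau
        s = rng.choices(list(P[s]), weights=list(P[s].values()))[0]
    return path

P = {"down": {"down": 0.6, "up": 0.4},
     "up":   {"down": 0.4, "up": 0.6}}
sojourn = {"down": lambda r: r.expovariate(1.0),       # mean 1.0 time units
           "up":   lambda r: r.expovariate(2.0)}       # mean 0.5 time units
rng = random.Random(42)
path = simulate_smc(P, sojourn, "down", horizon=1000.0, rng=rng)
```

A long simulated path lets empirical sojourn statistics be checked against the input laws, which is the same Monte Carlo logic the paper uses to compare the model with stylized facts of returns.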

  1. The performance of a hybrid analytical-Monte Carlo system response matrix in pinhole SPECT reconstruction

    International Nuclear Information System (INIS)

    El Bitar, Z; Pino, F; Candela, C; Ros, D; Pavía, J; Rannou, F R; Ruibal, A; Aguiar, P

    2014-01-01

    It is well-known that in pinhole SPECT (single-photon-emission computed tomography), iterative reconstruction methods including accurate estimations of the system response matrix can lead to submillimeter spatial resolution. There are two different methods for obtaining the system response matrix: those that model the system analytically using an approach including an experimental characterization of the detector response, and those that make use of Monte Carlo simulations. Methods based on analytical approaches are faster and handle the statistical noise better than those based on Monte Carlo simulations, but they require tedious experimental measurements of the detector response. One suggested approach for avoiding an experimental characterization, circumventing the problem of statistical noise introduced by Monte Carlo simulations, is to perform an analytical computation of the system response matrix combined with a Monte Carlo characterization of the detector response. Our findings showed that this approach can achieve high spatial resolution similar to that obtained when the system response matrix computation includes an experimental characterization. Furthermore, we have shown that using simulated detector responses has the advantage of yielding a precise estimate of the shift between the point of entry of the photon beam into the detector and the point of interaction inside the detector. Considering this, it was possible to slightly improve the spatial resolution in the edge of the field of view. (paper)

  2. Cake filtration modeling: Analytical cake filtration model and filter medium characterization

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Michael

    2008-05-15

    Cake filtration is a unit operation to separate solids from fluids in industrial processes. The build-up of a filter cake is usually accompanied by a decrease in overall permeability over the filter, leading to an increased pressure drop over the filter. For an incompressible filter cake that builds up on a homogeneous filter cloth, a linear pressure drop profile over time is expected for a constant fluid volume flow. However, experiments show curved pressure drop profiles, which are also attributed to inhomogeneities of the filter (filter medium and/or residual filter cake). In this work, a mathematical filter model is developed to describe the relationship between time and overall permeability. The model considers a filter with an inhomogeneous permeability and accounts for fluid mechanics by a one-dimensional formulation of Darcy's law and for the cake build-up by solid continuity. The model can be solved analytically in the time domain. The analytic solution allows for the unambiguous inversion of the model to determine the inhomogeneous permeability from the time-resolved overall permeability, e.g. pressure drop measurements. An error estimation of the method is provided by rewriting the model as a convolution transformation. This method is applied to simulated and experimental pressure drop data of gas filters with textile filter cloths, and various non-uniform flow situations in practical problems are explored. A routine is developed to generate characteristic filter cycles from semi-continuous filter plant operation. The model is modified to investigate the impact of non-uniform dust concentrations. (author). 34 refs., 40 figs., 1 tab
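The expected linear pressure-drop growth for an incompressible cake on a homogeneous medium at constant flow follows from Darcy's law plus solids continuity, as in this sketch (all parameter values are hypothetical, SI units):

```python
# Darcy's law plus solids continuity for an incompressible cake at constant
# flow: dp(t) = mu * (Q/A) * (Rm + alpha*c*Q*t/A), i.e. linear in time.
# All parameter values below are hypothetical.

def pressure_drop(t, Q, A, mu, Rm, alpha, c):
    V = Q * t                        # cumulative filtrate volume at constant flow
    Rc = alpha * c * V / A           # cake resistance (specific resistance alpha,
                                     # solids concentration c) from continuity
    return mu * (Q / A) * (Rm + Rc)  # medium resistance Rm plus cake resistance

params = dict(Q=1e-4, A=0.1, mu=1e-3, Rm=1e10, alpha=1e11, c=5.0)
dp = [pressure_drop(t, **params) for t in (0.0, 600.0, 1200.0)]
```

The curved profiles observed in experiments are deviations from exactly this linear baseline, which is what motivates the inhomogeneous-permeability model of the paper.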

  3. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2005-01-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections

  4. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties of Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together under both typical rural and urban conditions using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, however, not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme

  5. 'Semi-realistic' F-term inflation model building in supergravity

    International Nuclear Information System (INIS)

    Kain, Ben

    2008-01-01

    We describe methods for building 'semi-realistic' models of F-term inflation. By semi-realistic we mean that they are built in, and obey the requirements of, 'semi-realistic' particle physics models. The particle physics models are taken to be effective supergravity theories derived from orbifold compactifications of string theory, and their requirements are taken to be modular invariance, absence of mass terms and stabilization of moduli. We review the particle physics models, their requirements and tools and methods for building inflation models

  6. Thermal Analysis of Disposal of High-Level Nuclear Waste in a Generic Bedded Salt repository using the Semi-Analytical Method.

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Matteo, Edward N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    An example case is presented for testing analytical thermal models. The example case represents thermal analysis of a generic repository in bedded salt at 500 m depth. The analysis is part of the study reported in Matteo et al. (2016). An ambient average ground surface temperature of 15°C and a natural geothermal gradient of 25°C/km were assumed to calculate the temperature in the near field. For the generic salt repository concept, crushed salt backfill is assumed. For the semi-analytical analysis a crushed salt thermal conductivity of 0.57 W/m-K was used. With time, the crushed salt is expected to consolidate into intact salt. In this study a backfill thermal conductivity of 3.2 W/m-K (same as intact salt) is used for sensitivity analysis. Decay heat data for SRS glass are given in Table 1. The rest of the parameter values are shown below. Results for peak temperatures at the waste package surface are given in Table 2.
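The stated surface temperature and geothermal gradient fix the near-field ambient temperature at depth; for the 500 m repository horizon this gives 15 + 25 × 0.5 = 27.5 °C:

```python
# Near-field ambient temperature from the surface temperature (15 degC) and
# geothermal gradient (25 degC/km) stated in the abstract above.

def ambient_temperature(depth_m, t_surface_c=15.0, gradient_c_per_km=25.0):
    return t_surface_c + gradient_c_per_km * depth_m / 1000.0

t_repo = ambient_temperature(500.0)   # 27.5 degC at the 500 m repository horizon
```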

  7. IPOLE - semi-analytic scheme for relativistic polarized radiative transport

    Science.gov (United States)

    Mościbrodzka, M.; Gammie, C. F.

    2018-03-01

    We describe IPOLE, a new public ray-tracing code for covariant, polarized radiative transport. The code extends the IBOTHROS scheme for covariant, unpolarized transport using two representations of the polarized radiation field: In the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate-independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, IPOLE is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space, and that it produces results that converge to those produced by Dexter's GRTRANS polarized transport code on a complicated model problem. We expect IPOLE will mainly find applications in modelling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems such as modelling for the IXPE mission.

  8. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from the release to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data is limited. It was not intended to be a predictive tool.

  9. SPET reconstruction with a non-uniform attenuation coefficient using an analytical regularizing iterative method

    International Nuclear Information System (INIS)

    Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.

    1982-09-01

    The aim of this study is to evaluate the potential of the RIM technique when used in brain studies. The analytical Regularizing Iterative Method (RIM) is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio as compared to those obtained with the FBP (Filtered Back Projection) technique. Preliminary results obtained in brain studies using AMPI-123 (isopropyl-amphetamine I-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure remains to be demonstrated in our institution, by comparing quantitative data in heart or liver studies where control values can be obtained

  10. Semi-supervised Learning with Deep Generative Models

    NARCIS (Netherlands)

    Kingma, D.P.; Rezende, D.J.; Mohamed, S.; Welling, M.

    2014-01-01

    The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and

  11. A semi-analytical model for the acoustic impedance of finite length circular holes with mean flow

    Science.gov (United States)

    Yang, Dong; Morgans, Aimee S.

    2016-12-01

    The acoustic response of a circular hole with mean flow passing through it is highly relevant to Helmholtz resonators, fuel injectors, perforated plates, screens, liners and many other engineering applications. A widely used analytical model [M.S. Howe, "On the theory of unsteady high Reynolds number flow through a circular aperture", Proc. of the Royal Soc. A 366, 1725 (1979), 205-223], which assumes an infinitesimally short hole, was recently shown to be insufficient for predicting the impedance of holes with a finite length. In the present work, an analytical model based on a Green's function method is developed to take the hole length into consideration for "short" holes. The importance of capturing the modified vortex noise accurately is shown. The vortices shed at the hole inlet edge are convected to the hole outlet and further downstream to form a vortex sheet. This couples with the acoustic waves, and this coupling has the potential to generate as well as absorb acoustic energy in the low frequency region. The impedance predicted by this model shows the importance of capturing the path of the shed vortex. When the vortex path is captured accurately, the impedance predictions agree well with previous experimental and CFD results, for example predicting the potential for generation of acoustic energy at higher frequencies. For "long" holes, a simplified model which combines Howe's model with plane acoustic waves within the hole is developed. It is shown that the most important effect in this case is the acoustic non-compactness of the hole.

  12. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the exhaust flue gas temperature of boilers is one of the most effective ways to improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of the boiler, so accurate prediction of the acid dew point is essential. By investigating previous models for acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, validated against both field-test and experimental data, and compared with the previous models.
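
A well-known example of the kind of semi-empirical correlation this record surveys is the Verhoff-Banchero (1974) acid dew point formula, as commonly quoted in the flue-gas literature; this sketch implements that classic correlation, not the improved model the paper proposes, and the flue-gas composition used below is illustrative.

```python
import math

def acid_dew_point_K(p_h2o_mmhg: float, p_so3_mmhg: float) -> float:
    """Sulfuric acid dew point from the Verhoff-Banchero (1974) correlation.

    Partial pressures are in mmHg; the result is in kelvin.  This is a
    classic semi-empirical correlation of the type the paper reviews,
    NOT the improved model proposed by Xiang et al.
    """
    ln_w, ln_s = math.log(p_h2o_mmhg), math.log(p_so3_mmhg)
    inv_t = (2.276 - 0.0294 * ln_w - 0.0858 * ln_s
             + 0.0062 * ln_w * ln_s) / 1000.0
    return 1.0 / inv_t

# Illustrative coal-fired flue gas: ~8 vol% H2O, ~10 ppm SO3 at 1 atm (760 mmHg).
t_dp = acid_dew_point_K(0.08 * 760.0, 10e-6 * 760.0)
print(f"acid dew point ≈ {t_dp - 273.15:.0f} °C")
```

Typical coal-fired conditions give dew points in the 120-140 °C range, which is why exhaust-gas temperatures cannot be lowered arbitrarily.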

  13. Simplified analytical model to simulate radionuclide release from radioactive waste trenches

    International Nuclear Information System (INIS)

    Sa, Bernardete Lemes Vieira de

    2001-01-01

    In order to evaluate postclosure off-site doses from low-level radioactive waste disposal facilities, a computer code was developed to simulate radionuclide release from the waste form, transport through the vadose zone and transport in the saturated zone. This paper describes the methodology used to model these processes. Radionuclide release from the waste is calculated using a model based on first-order kinetics, and transport through porous media is determined using a semi-analytical solution of the mass transport equation, considering the limiting case of unidirectional convective transport with three-dimensional dispersion in an isotropic medium. The results obtained in this work were compared with other codes, showing good agreement. (author)
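
The first-order source term described above can be sketched in a few lines: the inventory is depleted by both leaching and radioactive decay, and the release rate is the leach-rate constant times the remaining inventory. The nuclide and rate constants below are illustrative, not values from the paper.

```python
import math

def release_rate(n0: float, leach_rate: float, decay_const: float, t: float) -> float:
    """Radionuclide release rate from a waste form under first-order kinetics.

    The inventory decays as N(t) = N0 * exp(-(k + lambda) * t), where k is the
    first-order leach-rate constant and lambda the radioactive decay constant,
    and the release rate to the vadose zone is R(t) = k * N(t).  Units follow
    the inputs (e.g. Bq and 1/yr).  A minimal sketch of the abstract's source
    model; the rate constants here are illustrative.
    """
    n_t = n0 * math.exp(-(leach_rate + decay_const) * t)
    return leach_rate * n_t

# Example: Cs-137 (half-life ~30.1 yr), leach rate 0.01 / yr, unit inventory.
lam = math.log(2) / 30.1
print(release_rate(1.0, 0.01, lam, t=10.0))
```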

  14. Online detector response calculations for high-resolution PET image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Pratx, Guillem [Department of Radiation Oncology, Stanford University, Stanford, CA 94305 (United States); Levin, Craig, E-mail: cslevin@stanford.edu [Departments of Radiology, Physics and Electrical Engineering, and Molecular Imaging Program at Stanford, Stanford University, Stanford, CA 94305 (United States)

    2011-07-07

    Positron emission tomography systems are best described by a linear shift-varying model. However, image reconstruction often assumes simplified shift-invariant models to the detriment of image quality and quantitative accuracy. We investigated a shift-varying model of the geometrical system response based on an analytical formulation. The model was incorporated within a list-mode, fully 3D iterative reconstruction process in which the system response coefficients are calculated online on a graphics processing unit (GPU). The implementation requires less than 512 MB of GPU memory and can process two million events per minute (forward and backprojection). For small detector volume elements, the analytical model compared well to reference calculations. Images reconstructed with the shift-varying model achieved higher quality and quantitative accuracy than those that used a simpler shift-invariant model. For an 8 mm sphere in a warm background, the contrast recovery was 95.8% for the shift-varying model versus 85.9% for the shift-invariant model. In addition, the spatial resolution was more uniform across the field-of-view: for an array of 1.75 mm hot spheres in air, the variation in reconstructed sphere size was 0.5 mm RMS for the shift-invariant model, compared to 0.07 mm RMS for the shift-varying model.
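
For context, the reconstruction step belongs to the list-mode MLEM family; a toy version of the update is sketched below. In the paper the system-response coefficients a_ij are recomputed online on the GPU from the shift-varying geometric model rather than stored; here a small random system matrix stands in for them, so this is a generic sketch, not the paper's implementation.

```python
import numpy as np

def listmode_mlem(event_rows, sens, lam, n_iter=10):
    """Toy list-mode MLEM.  event_rows[i] holds the system-response
    coefficients a_ij of detected event i over the J voxels; sens[j] is the
    voxel sensitivity s_j.  Each iteration forward-projects every event,
    backprojects the reciprocals, and applies the multiplicative EM update
      lambda_j <- (lambda_j / s_j) * sum_i a_ij / (a_i . lambda).
    """
    for _ in range(n_iter):
        back = np.zeros_like(lam)
        for a in event_rows:                 # forward project, then backproject
            back += a / max(a @ lam, 1e-12)
        lam = lam / sens * back              # multiplicative EM update
    return lam

rng = np.random.default_rng(0)
rows = rng.random((200, 4))                  # 200 detected events, 4 voxels
sens = rows.sum(axis=0)
est = listmode_mlem(rows, sens, np.ones(4))
# MLEM conserves total counts: sum_j s_j * lambda_j equals the event count.
print(np.dot(sens, est))                     # ≈ 200.0 (up to round-off)
```

The count-conservation property printed at the end is a useful sanity check on any MLEM implementation, independent of the system model used.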

  15. Hydrodynamical simulations and semi-analytic models of galaxy formation: two sides of the same coin

    Science.gov (United States)

    Neistein, Eyal; Khochfar, Sadegh; Dalla Vecchia, Claudio; Schaye, Joop

    2012-04-01

    In this work we develop a new method to turn a state-of-the-art hydrodynamical cosmological simulation of galaxy formation (HYD) into a simple semi-analytic model (SAM). This is achieved by summarizing the efficiencies of accretion, cooling, star formation and feedback given by the HYD, as functions of the halo mass and redshift. The SAM then uses these functions to evolve galaxies within merger trees that are extracted from the same HYD. Surprisingly, by turning the HYD into a SAM, we conserve the mass of individual galaxies, with deviations at the level of 0.1 dex, on an object-by-object basis, with no significant systematics. This is true for all redshifts, and for the mass of stars and gas components, although the agreement reaches 0.2 dex for satellite galaxies at low redshift. We show that the same level of accuracy is obtained even when the SAM uses only one phase of gas within each galaxy. Moreover, we demonstrate that the formation history of one massive galaxy provides sufficient information for the SAM to reproduce the population of galaxies within the entire cosmological box. The reasons for the small scatter between the HYD and SAM galaxies are as follows. (i) The efficiencies are matched as functions of the halo mass and redshift, meaning that the evolution within merger trees agrees on average. (ii) For a given galaxy, efficiencies fluctuate around the mean value on time-scales of 0.2-2 Gyr. (iii) The various mass components of galaxies are obtained by integrating the efficiencies over time, averaging out these fluctuations. We compare the efficiencies found here to standard SAM recipes and find that they often deviate significantly. For example, here the HYD shows smooth accretion that is less effective for low-mass haloes, and is always composed of hot or dilute gas; cooling is less effective at high redshift, and star formation changes only mildly with cosmic time. The method developed here can be applied in general to any HYD, and can thus

  16. A simple analytical model for electronic conductance in a one dimensional atomic chain across a defect

    International Nuclear Information System (INIS)

    Khater, Antoine; Szczesniak, Dominik

    2011-01-01

    An analytical model is presented for the electronic conductance in a one dimensional atomic chain across an isolated defect. The model system consists of two semi infinite lead atomic chains with the defect atom making the junction between the two leads. The calculation is based on a linear combination of atomic orbitals in the tight-binding approximation, with a single atomic one s-like orbital chosen in the present case. The matching method is used to derive analytical expressions for the scattering cross sections for the reflection and transmission processes across the defect, in the Landauer-Buttiker representation. These analytical results verify the known limits for an infinite atomic chain with no defects. The model can be applied numerically for one dimensional atomic systems supported by appropriate templates. It is also of interest since it would help establish efficient procedures for ensemble averages over a field of impurity configurations in real physical systems.
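
The simplest special case of the matching-method result described above (a single substitutional defect with unmodified hoppings, one s-like orbital) has a closed-form transmission, sketched below; conventions and parameters are illustrative.

```python
import numpy as np

def transmission(energy, eps_d, t=1.0):
    """Transmission T(E) across a single substitutional defect (on-site energy
    eps_d) in a 1D tight-binding chain with hopping t and zero lead on-site
    energy.  Matching plane waves at the defect site gives the closed form
        T(E) = 1 / (1 + (eps_d / (2 t sin k))^2),  with  E = -2 t cos k.
    Simplest analytic limit of the kind of result the abstract derives."""
    k = np.arccos(np.clip(-np.asarray(energy, float) / (2.0 * t), -1.0, 1.0))
    v = 2.0 * t * np.sin(k)                  # lead group-velocity factor
    return 1.0 / (1.0 + (eps_d / v) ** 2)

E = np.array([-1.5, 0.0, 1.5])               # energies inside the band [-2t, 2t]
print(transmission(E, eps_d=0.0))            # perfect chain: T = 1 everywhere
print(transmission(E, eps_d=0.5))            # defect scattering reduces T
```

As the abstract notes for the full model, T recovers the clean-chain limit (T = 1) when the defect disappears, and it vanishes toward the band edges where the group velocity goes to zero.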

  17. A SEMI-ANALYTICAL MODEL OF VISIBLE-WAVELENGTH PHASE CURVES OF EXOPLANETS AND APPLICATIONS TO KEPLER-7B AND KEPLER-10B

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Renyu [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Demory, Brice-Olivier [Astrophysics Group, Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Seager, Sara; Lewis, Nikole [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Showman, Adam P., E-mail: renyu.hu@jpl.nasa.gov [Department of Planetary Sciences, University of Arizona, Tucson, AZ 85721 (United States)

    2015-03-20

    Kepler has detected numerous exoplanet transits by measuring stellar light in a single visible-wavelength band. In addition to detection, the precise photometry provides phase curves of exoplanets, which can be used to study the dynamic processes on these planets. However, the interpretation of these observations can be complicated by the fact that visible-wavelength phase curves can represent both thermal emission and scattering from the planets. Here we present a semi-analytical model framework that can be applied to study Kepler and future visible-wavelength phase curve observations of exoplanets. The model efficiently computes reflection and thermal emission components for both rocky and gaseous planets, considering both homogeneous and inhomogeneous surfaces or atmospheres. We analyze the phase curves of the gaseous planet Kepler-7b and the rocky planet Kepler-10b using the model. In general, we find that a hot exoplanet’s visible-wavelength phase curve having a significant phase offset can usually be explained by two classes of solutions: one class requires a thermal hot spot shifted to one side of the substellar point, and the other class requires reflective clouds concentrated on the same side of the substellar point. Particularly for Kepler-7b, reflective clouds located on the west side of the substellar point can best explain its phase curve. The reflectivity of the clear part of the atmosphere should be less than 7% and that of the cloudy part should be greater than 80%, and the cloud boundary should be located at 11° ± 3° to the west of the substellar point. We suggest single-band photometry surveys could yield valuable information on exoplanet atmospheres and surfaces.
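
The reflected-light component in its simplest homogeneous limit can be sketched with a Lambert phase function; the paper's framework generalizes this with thermal emission and inhomogeneous (patchy-cloud) atmospheres, which this sketch omits, and the parameter values below are illustrative rather than fitted Kepler values.

```python
import math

def lambert_phase_function(alpha: float) -> float:
    """Phase function of a Lambertian sphere; alpha is the star-planet-observer
    phase angle in radians (0 = full phase, pi = new phase)."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

def reflected_contrast(a_g: float, rp_over_a: float, alpha: float) -> float:
    """Planet-to-star flux ratio from reflected light alone, for a homogeneous
    Lambertian planet with geometric albedo a_g and radius-to-semi-major-axis
    ratio rp_over_a.  The simplest limit of the reflection component in the
    paper's model; illustrative only."""
    return a_g * rp_over_a ** 2 * lambert_phase_function(alpha)

# Illustrative hot-Jupiter numbers (not fitted Kepler-7b values):
print(reflected_contrast(a_g=0.3, rp_over_a=0.07, alpha=0.0))  # peak contrast
```

Shifting the reflective region away from the substellar point, as inferred for Kepler-7b, skews this otherwise symmetric curve and produces the observed phase offset.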

  18. Calculation of photon attenuation coefficients of elements and compounds from approximate semi-analytical formulae

    Energy Technology Data Exchange (ETDEWEB)

    Roteta, M; Baro, J; Fernandez-Varea, J M; Salvat, F

    1994-07-01

    The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Cross sections for the photoelectric effect, coherent and incoherent scattering, and pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within ~1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.
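
The "20-point Gauss adaptive integration to a pre-selected accuracy" strategy can be illustrated as follows: integrate with one fixed-order Gauss-Legendre panel, compare against its two halves, and bisect until they agree. This is a generic re-implementation of the idea, not code from PHOTAC.

```python
import numpy as np

def adaptive_gauss(f, a, b, tol=1e-9, order=20):
    """Adaptive Gauss-Legendre quadrature of f on [a, b].

    A single n-point panel is compared with the sum of its two halves; the
    interval is bisected (with the tolerance split between halves) until the
    two estimates agree.  Mirrors the strategy the abstract describes for
    integrating differential cross sections; illustrative sketch only."""
    x, w = np.polynomial.legendre.leggauss(order)

    def panel(lo, hi):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        return half * np.sum(w * f(mid + half * x))

    def recurse(lo, hi, whole, eps):
        mid = 0.5 * (lo + hi)
        left, right = panel(lo, mid), panel(mid, hi)
        if abs(left + right - whole) < eps:
            return left + right
        return recurse(lo, mid, left, eps / 2) + recurse(mid, hi, right, eps / 2)

    return recurse(a, b, panel(a, b), tol)

print(adaptive_gauss(np.exp, 0.0, 1.0))   # ≈ e - 1
```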

  19. Calculation of photon attenuation coefficients of elements and compounds from approximate semi-analytical formulae

    International Nuclear Information System (INIS)

    Roteta, M.; Baro, J.; Fernandez-Varea, J. M.; Salvat, F.

    1994-01-01

    The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Cross sections for the photoelectric effect, coherent and incoherent scattering, and pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within ~1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs

  20. Long-Term Prediction of Satellite Orbit Using Analytical Method

    Directory of Open Access Journals (Sweden)

    Jae-Cheol Yoon

    1997-12-01

    Full Text Available A long-term prediction algorithm for geostationary orbits was developed using an analytical method. The perturbation force models include the geopotential up to fifth order and degree, luni-solar gravitation, and solar radiation pressure. All of the perturbation effects were analyzed via secular, short-period, and long-period variations of the equinoctial elements, i.e., the semi-major axis, eccentricity vector, inclination vector, and mean longitude of the satellite. Results of the analytical orbit propagator were compared with those of a Cowell orbit propagator for KOREASAT. The comparison indicated that the analytical solution can predict the semi-major axis with an accuracy better than ~35 meters over a period of 3 months.

  1. Computing dispersion curves of elastic/viscoelastic transversely-isotropic bone plates coupled with soft tissue and marrow using semi-analytical finite element (SAFE) method.

    Science.gov (United States)

    Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H

    2017-08-01

    We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A Semi-Analytic Model for Estimating Total Suspended Sediment Concentration in Turbid Coastal Waters of Northern Western Australia Using MODIS-Aqua 250 m Data

    Directory of Open Access Journals (Sweden)

    Passang Dorji

    2016-06-01

    Full Text Available Knowledge of the concentration of total suspended sediment (TSS) in coastal waters is of significance to marine environmental monitoring agencies for determining the turbidity of water, which serves as a proxy for the availability of light at depth for benthic habitats. TSS models applicable to satellite sensor data can determine TSS with reasonable accuracy and with adequate spatial and temporal resolution for coastal water quality monitoring. We therefore present a study in which we develop a semi-analytic sediment model (SASM) applicable to any sensor with red and near infrared (NIR) bands. Calibration and validation of the SASM using bootstrap and cross-validation methods showed that the SASM applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-Aqua band 1 data retrieved TSS with a root mean square error (RMSE) of 5.75 mg/L and a mean averaged relative error (MARE) of 33.33%. The application of the SASM over our study region using MODIS-Aqua band 1 data showed that the SASM can be used to monitor pre-, on-going and post-dredging activities and to identify daily TSS anomalies caused by natural and anthropogenic processes in coastal waters of northern Western Australia.

  3. Model-Based Photoacoustic Image Reconstruction using Compressed Sensing and Smoothed L0 Norm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    Photoacoustic imaging (PAI) is a novel medical imaging modality that combines the spatial resolution of ultrasound imaging with the high contrast of pure optical imaging. Analytical algorithms are usually employed to reconstruct the photoacoustic (PA) images because of their simple implementation. However, they provide images of limited accuracy. Model-based (MB) algorithms are used to improve the image quality and accuracy while a large number of transducers and data acquisition a...
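
A generic smoothed-L0 (SL0) recovery loop of the kind the title cites can be sketched as follows: approximate the L0 norm by n - sum(exp(-x_i^2 / 2 sigma^2)), take gradient steps on the smoothed objective, and project back onto the data-consistency set while gradually shrinking sigma. The paper applies this to a photoacoustic forward model; here a random matrix stands in, and all parameters are illustrative.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner=3):
    """Smoothed-L0 sparse recovery sketch (Mohimani-style SL0).

    Each inner step moves against the gradient of the smoothed-L0 surrogate
    (delta_i = x_i * exp(-x_i^2 / 2 sigma^2)), then projects back onto the
    affine set {x : Ax = y} via the pseudo-inverse.  Generic sketch, not the
    paper's PAI-specific implementation."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                          # minimum-L2-norm initial solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            x = x - mu * delta              # steepest descent on smoothed L0
            x = x - A_pinv @ (A @ x - y)    # project back onto Ax = y
        sigma *= sigma_decrease
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))           # underdetermined measurement matrix
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [1.0, -2.0, 0.5]      # sparse ground truth
x_hat = sl0(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))       # small for sparse-enough signals
```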

  4. Climate reconstruction from borehole temperatures influenced by groundwater flow

    Science.gov (United States)

    Kurylyk, B.; Irvine, D. J.; Tang, W.; Carey, S. K.; Ferguson, G. A. G.; Beltrami, H.; Bense, V.; McKenzie, J. M.; Taniguchi, M.

    2017-12-01

    Borehole climatology offers advantages over other climate reconstruction methods because further calibration steps are not required and heat is a ubiquitous subsurface property that can be measured from terrestrial boreholes. The basic theory underlying borehole climatology is that past surface air temperature signals are reflected in the ground surface temperature history and archived in subsurface temperature-depth profiles. High frequency surface temperature signals are attenuated in the shallow subsurface, whereas low frequency signals can be propagated to great depths. A limitation of analytical techniques to reconstruct climate signals from temperature profiles is that they generally require that heat flow be limited to conduction. Advection due to groundwater flow can thermally 'contaminate' boreholes and result in temperature profiles being rejected for regional climate reconstructions. Although groundwater flow and climate change can result in contrasting or superimposed thermal disturbances, groundwater flow will not typically remove climate change signals in a subsurface thermal profile. Thus, climate reconstruction is still possible in the presence of groundwater flow if heat advection is accommodated in the conceptual and mathematical models. In this study, we derive a new analytical solution for reconstructing surface temperature history from borehole thermal profiles influenced by vertical groundwater flow. The boundary condition for the solution is composed of any number of sequential 'ramps', i.e. periods with linear warming or cooling rates, during the instrumented and pre-observational periods. The boundary condition generation and analytical temperature modeling is conducted in a simple computer program. The method is applied to reconstruct climate in Winnipeg, Canada and Tokyo, Japan using temperature profiles recorded in hydrogeologically active environments.
The results demonstrate that thermal disturbances due to groundwater flow and climate
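
The conduction-only building block behind such 'ramp' boundary conditions is a classic heat-conduction result: the subsurface anomaly left by a linear surface-warming ramp. The paper's new solution additionally includes vertical groundwater advection, which this sketch omits, and the warming rate and diffusivity below are illustrative.

```python
import math

def ramp_anomaly(z: float, rate: float, duration: float, kappa: float) -> float:
    """Subsurface temperature anomaly (conduction only) at depth z produced by
    a linear surface-warming ramp of slope `rate` that has lasted `duration`;
    kappa is the thermal diffusivity.  With u = z / (2 sqrt(kappa * duration)),
        T(z) = rate * duration * [(1 + 2u^2) erfc(u) - (2u / sqrt(pi)) e^{-u^2}].
    Classic conduction result used as the building block for multi-ramp
    boundary conditions in borehole climatology; advection is omitted here."""
    u = z / (2.0 * math.sqrt(kappa * duration))
    return rate * duration * ((1.0 + 2.0 * u * u) * math.erfc(u)
                              - 2.0 * u * math.exp(-u * u) / math.sqrt(math.pi))

# 1 K/century warming sustained for 100 yr, kappa = 31.5 m^2/yr (~1e-6 m^2/s):
for depth in (0.0, 20.0, 50.0, 100.0):
    print(depth, ramp_anomaly(depth, rate=0.01, duration=100.0, kappa=31.5))
```

At the surface the anomaly equals the total warming (rate × duration), and it decays smoothly with depth, which is exactly the signature borehole inversions exploit.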

  5. Evaluation of evaporation coefficient for micro-droplets exposed to low pressure: A semi-analytical approach

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Prodyut R., E-mail: pchakraborty@iitj.ac.in [Department of Mechanical Engineering, Indian Institute of Technology Jodhpur, 342011 (India); Hiremath, Kirankumar R., E-mail: k.r.hiremath@iitj.ac.in [Department of Mathematics, Indian Institute of Technology Jodhpur, 342011 (India); Sharma, Manvendra, E-mail: PG201283003@iitj.ac.in [Defence Laboratory Jodhpur, Defence Research & Development Organisation, 342011 (India)

    2017-02-05

    The evaporation rate of water is strongly limited by energy barriers due to molecular collisions and heat transfer. The evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to its maximum possible theoretical limit, has reported values that conflict over three orders of magnitude. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that accounts for the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The reduction in droplet size due to evaporation is confirmed to have a negligible effect on the cooling rate. However, the evaporation coefficient is found to approach the theoretical limit of unity when the droplet radius is smaller than the mean free path of vapor molecules at the droplet surface, contrary to previously reported theoretical predictions. The evaporation coefficient decreases rapidly when the droplet radius is larger than the mean free path of the evaporating molecules, confirming that molecular collisions act as a barrier to the evaporation rate. The predicted trend of the evaporation coefficient with increasing droplet size will facilitate obtaining a functional relation between the evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.
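
The kinetic-theory limit against which the evaporation coefficient is defined is the Hertz-Knudsen flux; a minimal sketch follows, with illustrative numbers rather than values from the paper.

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_H2O = 2.99e-26        # mass of one water molecule, kg

def hertz_knudsen_flux(alpha_e: float, p_sat: float, temp: float) -> float:
    """Evaporation mass flux (kg m^-2 s^-1) from the Hertz-Knudsen expression,
        J = alpha_e * p_sat * sqrt(m / (2 pi k_B T)),
    where alpha_e is the evaporation coefficient (alpha_e = 1 is the
    kinetic-theory maximum that the abstract's coefficient is measured
    against).  Illustrative sketch; p_sat in Pa, temp in K."""
    return alpha_e * p_sat * math.sqrt(M_H2O / (2.0 * math.pi * K_B * temp))

# Water at 20 °C (p_sat ≈ 2339 Pa), ideal coefficient alpha_e = 1:
print(hertz_knudsen_flux(1.0, 2339.0, 293.15))   # kg m^-2 s^-1
```

Measured fluxes divided by this theoretical maximum give the coefficient whose size-dependence the paper models.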

  6. Evaluation of evaporation coefficient for micro-droplets exposed to low pressure: A semi-analytical approach

    International Nuclear Information System (INIS)

    Chakraborty, Prodyut R.; Hiremath, Kirankumar R.; Sharma, Manvendra

    2017-01-01

    The evaporation rate of water is strongly limited by energy barriers due to molecular collisions and heat transfer. The evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to its maximum possible theoretical limit, has reported values that conflict over three orders of magnitude. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that accounts for the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The reduction in droplet size due to evaporation is confirmed to have a negligible effect on the cooling rate. However, the evaporation coefficient is found to approach the theoretical limit of unity when the droplet radius is smaller than the mean free path of vapor molecules at the droplet surface, contrary to previously reported theoretical predictions. The evaporation coefficient decreases rapidly when the droplet radius is larger than the mean free path of the evaporating molecules, confirming that molecular collisions act as a barrier to the evaporation rate. The predicted trend of the evaporation coefficient with increasing droplet size will facilitate obtaining a functional relation between the evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.

  7. An analytical model accounting for tip shape evolution during atom probe analysis of heterogeneous materials.

    Science.gov (United States)

    Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D

    2015-12-01

    An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented in this paper. The difference in evaporation field between the materials is taken into account in this approach, in which the tip shape is modeled at a mesoscopic scale. It was found that the non-existence of a sharp edge on the surface is a sufficient condition to derive the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite difference simulation results, and a specific regime of evaporation was highlighted when the substrate is a low evaporation field phase. In addition, the model makes it possible to calculate the analyzed volume of the tip theoretically, potentially opening up new horizons for atom probe tomographic reconstruction. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  9. Model study of the compact gravity reconstruction; Juryoku inversion `CGR` no model kento

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, Y; Muraoka, A [Sogo Geophysical Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    Gravity inversion using a compact gravity reconstruction (CGR) method was examined for gravity tomography analysis. In a model analysis, a 100 m × 50 m region was divided into 10 m × 10 m cells, assuming two anomalous bodies with a density contrast of 1.0 g/cm³, one shallow and one deep. The analysis showed the following: a linear inversion using a generalized inverse matrix produced considerable blurring and smearing, tending to attribute the gravity anomaly to a shallow density distribution; CGR was highly effective at sharpening the contrast of the anomalous regions; where both shallow and deep density anomalies existed, CGR was poorer at restoring the deep structure, with larger errors; the analytical precision at depth improved when the gravity traverse was long compared with the depth of the density anomalies; and convergence was better when the density-contrast constraint was imposed on the large side rather than the small side. 3 refs., 10 figs.
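
A compactness-seeking inversion in the spirit of Last and Kubik (1983), the family on which CGR-type methods build, can be sketched as iterative reweighting: start from the minimum-norm generalized-inverse solution (the "linear analysis" above), then reweight each cell by its current estimate so mass focuses into few cells. Toy kernel and parameters, not the paper's CGR implementation.

```python
import numpy as np

def compact_inversion(A, g, n_iter=20, eps=1e-6, lam=1e-10):
    """Compactness-seeking linear inversion via iterative reweighting.

    A maps cell densities m to station gravity values g.  Each iteration
    solves a weighted minimum-norm problem with cell weights m_j^2 + eps,
    so cells that already carry density are favored and the solution
    sharpens.  Illustrative sketch only (toy random kernel below)."""
    m = np.linalg.pinv(A) @ g                   # plain generalized-inverse start
    for _ in range(n_iter):
        W = np.diag(m ** 2 + eps)               # favor already-dense cells
        m = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(len(g)), g)
    return m

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 30))               # 10 stations, 30 density cells
m_true = np.zeros(30)
m_true[12] = 1.0                                # one compact anomalous cell
m_hat = compact_inversion(A, A @ m_true)
print(int(np.argmax(np.abs(m_hat))))            # index of the focused anomaly
```

The contrast with the smeared minimum-norm start mirrors the blurring-versus-sharpening comparison reported in the record.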

  10. Semi-bounded partial differential operators

    CERN Document Server

    Cialdea, Alberto

    2014-01-01

    This book examines the conditions for the semi-boundedness of partial differential operators, which are interpreted in different ways. For example, today we know a great deal about L2-semibounded differential and pseudodifferential operators, although their complete characterization in analytic terms still poses difficulties, even for fairly simple operators. In contrast, until recently almost nothing was known about analytic characterizations of semi-boundedness for differential operators in other Hilbert function spaces and in Banach function spaces. This book works to address that gap. As such, various types of semi-boundedness are considered and a number of relevant conditions which are either necessary and sufficient or best possible in a certain sense are presented. The majority of the results reported on are the authors’ own contributions.

  11. Semi-doubled sigma models for five-branes

    International Nuclear Information System (INIS)

    Kimura, Tetsuji

    2016-01-01

    We study two-dimensional N=(2,2) gauge theory and its dualized system in terms of complex (linear) superfields and their alternatives. Although this technique itself is not new, we can obtain a new model, the so-called “semi-doubled” GLSM. Similar to the doubled sigma model, this involves both the original and dual degrees of freedom simultaneously, whilst the latter contribute to the system only via topological interactions. Applying this to the N=(4,4) GLSM for H-monopoles, i.e., smeared NS5-branes, we obtain its T-dualized systems in quite an easy way. As a bonus, we also obtain the semi-doubled GLSM for an exotic 5_2^3-brane whose background is locally nongeometric. In the low energy limit, we construct the semi-doubled NLSM which also generates the conventional string worldsheet sigma models. In the case of the NLSM for the 5_2^3-brane, however, we find that the Dirac monopole equation no longer makes sense because the physical information is absorbed into the divergent part via the smearing procedure. This is nothing but the signal which indicates that the nongeometric feature emerges in the model considered.

  12. Study on Applicability of Conceptual Hydrological Models for Flood Forecasting in Humid, Semi-Humid Semi-Arid and Arid Basins in China

    Directory of Open Access Journals (Sweden)

    Guangyuan Kan

    2017-09-01

    Full Text Available Flood simulation and forecasting in various types of watersheds is a hot issue in hydrology. Conceptual hydrological models have been widely applied to flood forecasting for decades. With economic development, modern China faces severe flood disasters in all types of watersheds, including humid, semi-humid semi-arid and arid watersheds. However, conceptual model-based flood forecasting in semi-humid semi-arid and arid regions is still challenging. To investigate the applicability of conceptual hydrological models for flood forecasting in these regions, three typical conceptual models, the Xinanjiang (XAJ), mix runoff generation (MIX) and northern Shaanxi (NS) models, are applied to 3 humid, 3 semi-humid semi-arid, and 3 arid watersheds. The rainfall-runoff data of the 9 watersheds are analyzed using statistical analysis and information theory, and the model performances are compared and analyzed based on boxplots and scatter plots. It is observed that the data of drier watersheds are more complex than those of wetter watersheds, indicating that flood forecasting is harder in drier watersheds. Simulation results indicate that all models perform satisfactorily in humid watersheds and that only the NS model is applicable in arid watersheds. Models that account for saturation-excess runoff generation (XAJ and MIX) perform better than the infiltration-excess-based NS model in semi-humid semi-arid watersheds. It is concluded that a more accurate mixed runoff generation theory, more stable and efficient numerical solutions of the infiltration equation, and rainfall data with higher spatial-temporal resolution are the main obstacles for conceptual model-based flood simulation and forecasting.

  13. Semi-analytical quasi-normal mode theory for the local density of states in coupled photonic crystal cavity-waveguide structures

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2015-01-01

    We present and validate a semi-analytical quasi-normal mode (QNM) theory for the local density of states (LDOS) in coupled photonic crystal (PhC) cavity-waveguide structures. By means of an expansion of the Green's function on one or a few QNMs, a closed-form expression for the LDOS is obtained, and for two types of two-dimensional PhCs, with one and two cavities side-coupled to an extended waveguide, the theory is validated against numerically exact computations. For the single cavity, a slightly asymmetric spectrum is found, which the QNM theory reproduces, and for two cavities a non-trivial spectrum with a peak and a dip is found, which is reproduced only when both of the two relevant QNMs are included in the theory. In both cases, we find relative errors below 1% in the bandwidth of interest.
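
A toy illustration of the mode-expansion idea: each QNM contributes a complex pole term to the Green's function, and the LDOS follows from its imaginary part, so two overlapping QNMs with different complex amplitudes can interfere into a peak-and-dip lineshape. Normalizations and parameters below are schematic, not the paper's closed-form expression.

```python
import numpy as np

def qnm_ldos(omega, modes):
    """Toy single-point LDOS from a few quasi-normal modes.

    Each mode enters through a pole term a_mu / (omega - omega_mu), with
    complex resonance frequency omega_mu (Im < 0 sets the linewidth) and
    complex amplitude a_mu ~ psi_mu(r)^2 from the Green's-function
    expansion; LDOS ~ -Im[sum of terms].  Schematic normalization."""
    g = sum(a / (omega - om) for om, a in modes)
    return -np.imag(g)

w = np.linspace(0.95, 1.05, 1001)
one_mode = qnm_ldos(w, [(1.00 - 0.005j, 1.0)])           # single Lorentzian peak
two_modes = qnm_ldos(w, [(0.99 - 0.004j, 1.0),           # two overlapping QNMs:
                         (1.01 - 0.004j, -0.8 + 0.3j)])  # interference lineshape
print(w[np.argmax(one_mode)])                             # resonance near 1.00
```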

  14. Vortices, semi-local vortices in gauged linear sigma model

    International Nuclear Information System (INIS)

    Kim, Namkwon

    1998-11-01

    We consider the static (2+1)D gauged linear sigma model. By analyzing the governing system of partial differential equations, we investigate various aspects of the model. We show the existence of finite-energy vortices under a partially broken symmetry on R^2 with the necessary condition suggested by Y. Yang. We also introduce generalized semi-local vortices and show the existence of finite-energy semi-local vortices under a certain condition. The vacuum manifold for the semi-local vortices turns out to be graded. Besides, with a special choice of representation, we show that the O(3) sigma model, whose target space is nonlinear, is a singular limit of the gauged linear sigma model, whose target space is linear. (author)

  15. MAGNETO-FRICTIONAL MODELING OF CORONAL NONLINEAR FORCE-FREE FIELDS. I. TESTING WITH ANALYTIC SOLUTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keppens, R. [School of Astronomy and Space Science, Nanjing University, Nanjing 210023 (China); Xia, C. [Centre for mathematical Plasma-Astrophysics, Department of Mathematics, KU Leuven, B-3001 Leuven (Belgium); Valori, G., E-mail: guoyang@nju.edu.cn [University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey RH5 6NT (United Kingdom)

    2016-09-10

    We report our implementation of the magneto-frictional method in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC). The method aims at applications where local adaptive mesh refinement (AMR) is essential to make follow-up dynamical modeling affordable. We quantify its performance in both domain-decomposed uniform grids and block-adaptive AMR computations, using all frequently employed force-free, divergence-free, and other vector comparison metrics. As test cases, we revisit the semi-analytic solution of Low and Lou in both Cartesian and spherical geometries, along with the topologically challenging Titov–Démoulin model. We compare different combinations of spatial and temporal discretizations, and find that the fourth-order central difference with a local Lax–Friedrichs dissipation term in a single-step marching scheme is an optimal combination. The initial condition is provided by the potential field, which is the potential field source surface model in spherical geometry. Various boundary conditions are adopted, ranging from fully prescribed cases where all boundaries are assigned with the semi-analytic models, to solar-like cases where only the magnetic field at the bottom is known. Our results demonstrate that all the metrics compare favorably to previous works in both Cartesian and spherical coordinates. Cases with several AMR levels perform in accordance with their effective resolutions. The magneto-frictional method in MPI-AMRVAC allows us to model a region of interest with high spatial resolution and large field of view simultaneously, as required by observation-constrained extrapolations using vector data provided with modern instruments. The applications of the magneto-frictional method to observations are shown in an accompanying paper.

  16. A semi-analytical method to evaluate the dielectric response of a tokamak plasma accounting for drift orbit effects

    Science.gov (United States)

    Van Eester, Dirk

    2005-03-01

    A semi-analytical method is proposed to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies in a D-shaped but axisymmetric toroidal geometry. The actual drift orbit of the particles is accounted for. The method hinges on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it relies on the local book-keeping of the relation between the toroidal angular momentum and the poloidal flux function. Depending on which variables are chosen, the method allows computation of elementary building blocks for either the wave or the Fokker-Planck equation, but the accent is mainly on the latter. Two types of tangent resonance are distinguished.

  17. A Semi-Discrete Landweber-Kaczmarz Method for Cone Beam Tomography and Laminography Exploiting Geometric Prior Information

    Science.gov (United States)

    Vogelgesang, Jonas; Schorr, Christian

    2016-12-01

    We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, like flat detectors in computerized tomography as used in non-destructive testing applications as well as non-regular scanning curves e.g. appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to a significantly increased image quality and superior reconstructions compared to standard iterative methods.
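A hedged, fully discrete toy sketch of the underlying Landweber-Kaczmarz idea (cycling relaxed Landweber steps over blocks of equations); this is not the paper's semi-discrete, basis-function-weighted operator, and the system below is hypothetical:

```python
import numpy as np

def landweber_kaczmarz(blocks, x0, relax=1.0, sweeps=200):
    """Cycle over blocks (A_i, b_i), applying one relaxed Landweber step per block."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for A, b in blocks:
            step = relax / np.linalg.norm(A, 2) ** 2  # keep the step inside the convergence bound
            x += step * A.T @ (b - A @ x)
    return x

# Toy 2x2 system split into two single-row blocks (classical Kaczmarz as a special case)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
blocks = [(A[0:1], b[0:1]), (A[1:2], b[1:2])]
x = landweber_kaczmarz(blocks, np.zeros(2))
print(np.round(x, 3))
```

With single-row blocks and `relax=1.0`, each step is an exact projection onto one hyperplane, so the iterates converge to the intersection point (1, 1).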

  18. A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs

    International Nuclear Information System (INIS)

    Garg, Uma; Chang, Philip

    2017-01-01

    Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman–Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon–oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.

  19. A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Garg, Uma; Chang, Philip, E-mail: umagarg@uwm.edu, E-mail: chang65@uwm.edu [Department of Physics, University of Wisconsin-Milwaukee, 3135 North Maryland Avenue, Milwaukee, WI 53211 (United States)

    2017-02-20

    Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman–Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon–oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.

  20. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    Energy Technology Data Exchange (ETDEWEB)

    Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es

    2010-04-07

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
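The ordered-subsets reconstruction described above can be sketched with the standard multiplicative OSEM update; the tiny dense system matrix below is a hypothetical stand-in for the pre-calculated Monte Carlo model:

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=50):
    """Ordered-subsets EM: x <- x * A_s^T(y_s / (A_s x)) / A_s^T 1,
    cycling over interleaved subsets of projection bins."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)
    for _ in range(n_iter):
        for s in range(n_subsets):
            As, ys = A[s::n_subsets], y[s::n_subsets]
            ratio = ys / np.maximum(As @ x, 1e-12)  # measured / estimated projections
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

# Toy problem: 2 voxels observed through 4 projection bins
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.25, 0.75]])
x_true = np.array([2.0, 4.0])
y = A @ x_true  # noiseless measurements
x_rec = osem(A, y)
print(np.round(x_rec, 2))
```

For consistent, noiseless data the subset updates share the same fixed point, so the iterates converge to the true activity.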

  1. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    International Nuclear Information System (INIS)

    Gao, H

    2016-01-01

    Purpose: This work develops a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and is then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a certain AR into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR converges quickly. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with FDK as the AR and total-variation sparsity regularization, and improves image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
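A minimal sketch of proximal forward-backward splitting with an L1 (sparsity) prox, assuming a generic linear data-fidelity term rather than the paper's filtered fidelity and FDK operator; the problem instance is hypothetical:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, i.e. the sparsity 'denoising' step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pfbs(A, b, lam, step, iters=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting:
    a gradient (data-fidelity) step followed by the prox (regularization) step."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]  # sparse ground truth
b = A @ x_true
x_hat = pfbs(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.round(x_hat, 2))
```

The step size is kept at 1 over the squared spectral norm of A, the standard bound for convergence of this two-step scheme.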

  2. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  3. Investigation of Schottky-Barrier carbon nanotube field-effect transistor by an efficient semi-classical numerical modeling

    International Nuclear Information System (INIS)

    Chen Changxin; Zhang Wei; Zhao Bo; Zhang Yafei

    2009-01-01

    An efficient semi-classical numerical modeling approach has been developed to simulate the coaxial Schottky-barrier carbon nanotube field-effect transistor (SB-CNTFET). In the modeling, the electrostatic potential of the CNT is obtained by self-consistently solving the analytic expression for the CNT carrier distribution and the cylindrical Poisson equation, which significantly enhances the computational efficiency and simultaneously yields results in good agreement with those obtained from the non-equilibrium Green's function (NEGF) formalism based on first principles. With this method, the effects of the CNT diameter, the power supply voltage, and the thickness and dielectric constant of the gate insulator on the device performance are investigated.

  4. Analytical and grid-free solutions to the Lighthill-Whitham-Richards traffic flow model

    KAUST Repository

    Mazaré, Pierre Emmanuel; Dehwah, Ahmad H.; Claudel, Christian G.; Bayen, Alexandre M.

    2011-01-01

    In this article, we propose a computational method for solving the Lighthill-Whitham-Richards (LWR) partial differential equation (PDE) semi-analytically for arbitrary piecewise-constant initial and boundary conditions, and for arbitrary concave fundamental diagrams. With these assumptions, we show that the solution to the LWR PDE at any location and time can be computed exactly and semi-analytically for a very low computational cost using the cumulative number of vehicles formulation of the problem. We implement the proposed computational method on a representative traffic flow scenario to illustrate the exactness of the analytical solution. We also show that the proposed scheme can handle more complex scenarios including traffic lights or moving bottlenecks. The computational cost of the method is very favorable, and is compared with existing algorithms. A toolbox implementation available for public download is briefly described, and posted at http://traffic.berkeley.edu/project/downloads/lwrsolver. © 2011 Elsevier Ltd.
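The cumulative-vehicles idea can be illustrated with Newell's minimum principle for a triangular fundamental diagram, a special case of the semi-analytical Lax-Hopf machinery the paper builds on; all parameters and boundary curves below are hypothetical:

```python
def newell_lwr(x, t, n_up, n_down, L=300.0, vf=30.0, w=5.0, kj=0.15):
    """Moskowitz function N(x, t) as the lower envelope of waves from the two
    boundaries (triangular fundamental diagram: free-flow speed vf, backward
    wave speed w, jam density kj)."""
    free = n_up(t - x / vf)                        # wave from the upstream boundary
    cong = n_down(t - (L - x) / w) + kj * (L - x)  # wave from downstream + stored vehicles
    return min(free, cong)

n_up = lambda t: 0.4 * max(t, 0.0)  # constant inflow of 0.4 veh/s starting at t = 0
n_down = lambda t: 0.0              # downstream boundary blocked (red light)

n_free = newell_lwr(150.0, 20.0, n_up, n_down)   # queue has not reached x = 150 m yet
n_cong = newell_lwr(150.0, 100.0, n_up, n_down)  # x = 150 m is inside the queue
print(n_free, n_cong)
```

Whichever boundary wave yields the smaller cumulative count governs the solution, which is why the evaluation is exact and grid-free for piecewise-constant data.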

  5. Analytical and grid-free solutions to the Lighthill-Whitham-Richards traffic flow model

    KAUST Repository

    Mazaré, Pierre Emmanuel

    2011-12-01

    In this article, we propose a computational method for solving the Lighthill-Whitham-Richards (LWR) partial differential equation (PDE) semi-analytically for arbitrary piecewise-constant initial and boundary conditions, and for arbitrary concave fundamental diagrams. With these assumptions, we show that the solution to the LWR PDE at any location and time can be computed exactly and semi-analytically for a very low computational cost using the cumulative number of vehicles formulation of the problem. We implement the proposed computational method on a representative traffic flow scenario to illustrate the exactness of the analytical solution. We also show that the proposed scheme can handle more complex scenarios including traffic lights or moving bottlenecks. The computational cost of the method is very favorable, and is compared with existing algorithms. A toolbox implementation available for public download is briefly described, and posted at http://traffic.berkeley.edu/project/downloads/lwrsolver. © 2011 Elsevier Ltd.

  6. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    Science.gov (United States)

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  7. Validation of a 2-D semi-coupled numerical model for fluid-structure-seabed interaction

    Science.gov (United States)

    Ye, Jianhong; Jeng, Dongsheng; Wang, Ren; Zhu, Changqi

    2013-10-01

    A 2-D semi-coupled model PORO-WSSI 2D (also referred to as FSSI-CAS 2D) for Fluid-Structure-Seabed Interaction (FSSI) has been developed by employing the RANS equations for wave motion in the fluid domain and the VARANS equations for porous flow in porous structures, and by taking the dynamic Biot's equations (known as the "u - p" approximation) for the soil as the governing equations. The finite difference two-step projection method and the forward time difference method are adopted to solve the RANS and VARANS equations, and the finite element method is adopted to solve the "u - p" approximation. A data exchange port is developed to couple the RANS and VARANS equations with the dynamic Biot's equations. The analytical solution proposed by Hsu and Jeng (1994) and some experiments conducted in wave flumes or geotechnical centrifuges involving various waves are used to validate the developed semi-coupled numerical model. The sandy bed involved in these experiments is poro-elastic or poro-elastoplastic. The inclusion of the interaction between fluid, marine structures and a poro-elastoplastic seabed foundation is a highlight of this paper and is essentially different from previous coupled models. The excellent agreement between the numerical results and the experimental data indicates that the developed coupled model is highly reliable for the FSSI problem.

  8. Semi-empirical corrosion model for Zircaloy-4 cladding

    International Nuclear Information System (INIS)

    Nadeem Elahi, Waseem; Atif Rana, Muhammad

    2015-01-01

    The Zircaloy-4 cladding tube in Pressurized Water Reactors (PWRs) undergoes corrosion due to fast neutron flux, coolant temperature, and water chemistry. The thickness of the Zircaloy-4 cladding tube may decrease as corrosion penetration increases, which may affect the integrity of the fuel rod. The tin content and the size of inter-metallic particles have been found to significantly affect the magnitude of the oxide thickness. In the present study we have developed a semi-empirical corrosion model by modifying the Arrhenius equation for corrosion as a function of acceleration factors for tin content and accumulative annealing. This model has been incorporated into a fuel performance computer code. The cladding oxide thickness data obtained from the semi-empirical corrosion model have been compared with experimental results, i.e., numerous cases of measured cladding oxide thickness from UO2 fuel rods irradiated in various PWRs. The results of both studies lie within an error band of 20 μm, which confirms the validity of the developed semi-empirical corrosion model. Key words: corrosion, Zircaloy-4, tin content, accumulative annealing factor, semi-empirical, PWR. (author)
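A hedged sketch of such an Arrhenius-type rate with multiplicative acceleration factors; the constants, factors and units below are illustrative placeholders, not the paper's fitted values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def oxide_growth_rate(T, A, Q, f_sn=1.0, f_anneal=1.0):
    """Arrhenius-type oxidation rate scaled by multiplicative acceleration
    factors for tin content (f_sn) and accumulative annealing (f_anneal).
    A, Q and the factors here are illustrative, not fitted values."""
    return A * math.exp(-Q / (R * T)) * f_sn * f_anneal

# Accumulate oxide over one year of daily steps at a fixed cladding temperature
T = 600.0        # K, hypothetical
dt = 24.0        # hours per step
thickness = 0.0  # arbitrary units
for _ in range(365):
    thickness += oxide_growth_rate(T, A=1.0e6, Q=1.2e5, f_sn=1.1) * dt
print(round(thickness, 3))
```

In a fuel performance code the temperature and factors would vary per time step with the local irradiation history rather than stay constant as in this toy loop.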

  9. A 2D semi-analytical model for Faraday shield in ICP source

    International Nuclear Information System (INIS)

    Zhang, L.G.; Chen, D.Z.; Li, D.; Liu, K.F.; Li, X.F.; Pan, R.M.; Fan, M.W.

    2016-01-01

    Highlights: • In this paper, a 2D model of an ICP source with a Faraday shield is proposed, considering the complex structure of the Faraday shield. • An analytical solution is found to evaluate the electromagnetic field in the ICP source with a Faraday shield. • The collision-free motion of electrons in the source is investigated and the results show that the electrons will oscillate along the radial direction, which brings insight into how the RF power couples to the plasma. - Abstract: The Faraday shield is a thin copper structure with a large number of slits which is usually used in inductively coupled plasma (ICP) sources. RF power is coupled into the plasma through these slits, so the Faraday shield plays an important role in ICP discharge. However, due to the complex structure of the Faraday shield, the resulting electromagnetic field is quite hard to evaluate. In this paper, a 2D model is proposed on the assumptions that the Faraday shield is sufficiently long, that the RF coil is uniformly distributed, and that the copper is an ideal conductor. Under these conditions, the magnetic field inside the source is uniform with only the axial component, while the electric field can be decomposed into a vortex field generated by the changing magnetic field together with a gradient field generated by the electric charge accumulated on the Faraday shield surface, which can easily be found by solving Laplace's equation. The motion of the electrons in the electromagnetic field is investigated and the results show that the electrons will oscillate along the radial direction when collisions are neglected. This interesting result brings insight into how the RF power couples into the plasma.
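The Laplace step mentioned above can be sketched with a simple Jacobi relaxation on a small grid (an illustrative numerical solver, not the paper's semi-analytical solution; the geometry is a toy box, not a shield cross-section):

```python
import numpy as np

def solve_laplace(grid, mask, iters=5000):
    """Jacobi relaxation for Laplace's equation on a 2-D grid; `mask` marks
    fixed (Dirichlet) cells, e.g. a charged conductor surface."""
    phi = grid.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(mask, grid, avg)  # keep Dirichlet cells pinned to their values
    return phi

# 5x5 box: left edge held at 1 V, the other edges at 0 V
g = np.zeros((5, 5)); g[:, 0] = 1.0
m = np.zeros((5, 5), bool); m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = True
phi = solve_laplace(g, m)
print(round(float(phi[2, 2]), 3))
```

By superposition of the four rotated boundary problems, the center potential is exactly one quarter of the driven-edge voltage, a handy sanity check for the solver.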

  10. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    Science.gov (United States)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several application examples are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from a periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information when identifying the dispersion coefficients. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
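For reference, the classical Ogata-Banks special case (constant-concentration inlet, no zero-order production) that such generalized solutions reduce to can be written directly; the parameter values are hypothetical:

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D advection-dispersion equation with a
    constant-concentration inlet c0 on a semi-infinite domain."""
    denom = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / denom)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / denom))

# Relative concentration at x = 10 m after 10 days (v = 1 m/d, D = 0.5 m^2/d)
c = ogata_banks(10.0, 10.0, 1.0, 0.5)
print(round(c, 3))
```

Note that for large v*x/D the exponential term can overflow before the complementary error function damps it, so production implementations usually evaluate the product in log space.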

  11. Precision 3d Surface Reconstruction from Lro Nac Images Using Semi-Global Matching with Coupled Epipolar Rectification

    Science.gov (United States)

    Hu, H.; Wu, B.

    2017-07-01

    The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context rather than individual pixels to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasizes a global matching strategy. Aiming at using SGM for image matching of LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space so that only one stereo model needs to be estimated. For a stereo pair (four) of NAC images, the method starts with boresight calibration by finding correspondences in the small overlapping stripe between each pair of NAC images and bundle adjustment of the stereo pair, in order to clean the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images are warped onto the object space with a given ground resolution; in the meantime, a mask is produced indicating the owner of each pixel. SGM is then used to generate a disparity
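The SGM path-aggregation recurrence at the heart of the matcher can be sketched for a single left-to-right scanline (toy cost volume; real SGM sums aggregated costs over several path directions):

```python
import numpy as np

def sgm_scanline(cost, P1=1.0, P2=4.0):
    """Aggregate a matching-cost volume along one scanline (left to right):
    the core SGM recurrence with small-step penalty P1 and jump penalty P2."""
    n_pix, n_disp = cost.shape
    L = np.zeros_like(cost, dtype=float)
    L[0] = cost[0]
    for p in range(1, n_pix):
        prev = L[p - 1]
        best_prev = prev.min()
        for d in range(n_disp):
            same = prev[d]
            step = min(prev[d - 1] if d > 0 else np.inf,
                       prev[d + 1] if d < n_disp - 1 else np.inf) + P1
            jump = best_prev + P2
            # subtracting best_prev keeps the aggregated cost bounded
            L[p, d] = cost[p, d] + min(same, step, jump) - best_prev
    return L

# Toy cost volume: the true disparity switches from 0 to 1 halfway along the line
cost = np.array([[0.0, 2.0], [0.0, 2.0], [2.0, 0.0], [2.0, 0.0]])
L = sgm_scanline(cost)
print(L.argmin(axis=1))
```

The penalties P1 and P2 trade smoothness against the ability to follow genuine disparity discontinuities; here the winner-takes-all disparity still tracks the jump.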

  12. PRECISION 3D SURFACE RECONSTRUCTION FROM LRO NAC IMAGES USING SEMI-GLOBAL MATCHING WITH COUPLED EPIPOLAR RECTIFICATION

    Directory of Open Access Journals (Sweden)

    H. Hu

    2017-07-01

    Full Text Available The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context rather than individual pixels to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasizes a global matching strategy. Aiming at using SGM for image matching of LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space so that only one stereo model needs to be estimated. For a stereo pair (four) of NAC images, the method starts with boresight calibration by finding correspondences in the small overlapping stripe between each pair of NAC images and bundle adjustment of the stereo pair, in order to clean the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images are warped onto the object space with a given ground resolution; in the meantime, a mask is produced indicating the owner of each pixel. SGM is then used to

  13. Analytical modeling and numerical optimization of the biosurfactants production in solid-state fermentation by Aspergillus fumigatus - doi: 10.4025/actascitechnol.v36i1.17818

    Directory of Open Access Journals (Sweden)

    Gabriel Castiglioni

    2014-01-01

    Full Text Available This is an experimental, analytical and numerical study to optimize biosurfactant production in solid-state fermentation of a medium containing rice straw and minced rice bran inoculated with Aspergillus fumigatus. The goal of this work was to analytically model biosurfactant production in solid-state fermentation in a fixed-bed column bioreactor. The least-squares method was used to fit the experimental emulsification activity values to a semi-empirical quadratic model. The control variables were the nutritional conditions, the fermentation time and the aeration. The mathematical model is validated against experimental results and then used to predict the maximum emulsification activity for different nutritional conditions and aerations. Based on the semi-empirical model, the maximum emulsification activity with no additional hydrocarbon sources was 8.16 UE·g-1 at 112 hours. When diesel oil was used, the predicted maximum emulsification activity was 8.10 UE·g-1 at 108 hours.
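A minimal least-squares quadratic fit of emulsification activity versus time, with hypothetical data standing in for the paper's measurements; the vertex of the fitted parabola gives the predicted optimum:

```python
import numpy as np

# Hypothetical (time [h], emulsification activity [UE/g]) pairs; the paper's
# measured data are not reproduced here
t = np.array([48.0, 72.0, 96.0, 120.0, 144.0])
ea = np.array([4.1, 6.5, 7.9, 8.1, 6.8])

a, b, c = np.polyfit(t, ea, 2)  # least-squares quadratic: ea ~ a*t^2 + b*t + c
t_opt = -b / (2.0 * a)          # vertex of the fitted parabola = predicted optimum time
print(round(t_opt, 1))
```

A negative leading coefficient confirms the parabola is concave, so the vertex is indeed a maximum of the fitted response.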

  14. LitPathExplorer: a confidence-based visual text analytics tool for exploring literature-enriched pathway models.

    Science.gov (United States)

    Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia

    2018-04-15

    Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

  15. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    Science.gov (United States)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults by a bank of Kalman filters and reconstruct the signal by a real-time on-board adaptive model combining a simplified real-time model with an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of the semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
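The diagnose-then-reconstruct loop described above can be sketched with a single scalar Kalman filter whose covariance-normalized innovation serves as the fault test, substituting the model prediction for a reading flagged as faulty. All models, parameters and thresholds below are illustrative; the paper uses a bank of filters driven by an on-board engine model:

```python
def kalman_fault_monitor(measurements, q=1e-4, r=0.01, threshold=9.0):
    """Scalar random-walk Kalman filter. A sample is flagged faulty when
    its squared innovation, normalized by the innovation covariance,
    exceeds `threshold` (a chi-square-like test); faulty readings are
    replaced by the filter's own prediction (signal reconstruction)."""
    x, p = measurements[0], 1.0
    flags, reconstructed = [], []
    for z in measurements:
        p_pred = p + q              # predict (state model: x_k = x_{k-1})
        s = p_pred + r              # innovation covariance
        nu = z - x                  # innovation
        faulty = (nu * nu) / s > threshold
        if faulty:
            z = x                   # reconstruct from the model prediction
        k = p_pred / s
        x = x + k * (z - x)         # update
        p = (1.0 - k) * p_pred
        flags.append(faulty)
        reconstructed.append(x)
    return flags, reconstructed

# healthy sensor reads 10.0; a +5 bias fault appears at sample 50
flags, recon = kalman_fault_monitor([10.0] * 50 + [15.0] * 50)
```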

  16. Development of a Semi-Analytical Algorithm for the Retrieval of Suspended Particulate Matter from Remote Sensing over Clear to Very Turbid Waters

    Directory of Open Access Journals (Sweden)

    Bing Han

    2016-03-01

    Full Text Available Remote sensing of suspended particulate matter, SPM, from space has long been used to assess its spatio-temporal variability in various coastal areas. The associated algorithms were generally site-specific or developed over a relatively narrow range of concentration, which makes them inappropriate for global applications (or at least over a broad SPM range). In the frame of the GlobCoast project, a large in situ data set of SPM and remote sensing reflectance, Rrs(λ), has been built, gathering together measurements from various coastal areas around Europe, French Guiana, North Canada, Vietnam, and China. This data set covers various contrasting coastal environments diversely affected by different biogeochemical and physical processes such as sediment resuspension, phytoplankton bloom events, and river discharges (Amazon, Mekong, Yellow River, MacKenzie, etc.). The SPM concentration spans about four orders of magnitude, from 0.15 to 2626 g·m−3. Different empirical and semi-analytical approaches developed to assess SPM from Rrs(λ) were tested over this in situ data set. As none of them provides satisfactory results over the whole SPM range, a generic semi-analytical approach has been developed. This algorithm is based on two standard semi-analytical equations calibrated for low-to-medium and highly turbid waters, respectively. A mixing law has also been developed for intermediate environments. Sources of uncertainties in SPM retrieval such as the bio-optical variability, atmospheric correction errors, and spectral bandwidth have been evaluated. The coefficients involved in these different algorithms have been calculated for ocean color (SeaWiFS, MODIS-A/T, MERIS/OLCI, VIIRS) and high spatial resolution (Landsat8-OLI and Sentinel2-MSI) sensors. The performance of the proposed algorithm varies only slightly from one sensor to another, demonstrating the great potential applicability of the proposed approach over global and contrasting coastal waters.
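The two-branch-plus-mixing-law structure described above can be sketched as follows. The branch formulas, coefficients and reflectance thresholds here are invented placeholders, not the calibrated GlobCoast values; only the blending scheme (pure low branch below one threshold, pure high branch above another, a linear weight in between) reflects the abstract:

```python
def spm_low(rrs_665):
    # hypothetical low-to-medium turbidity branch driven by the red band
    return 130.0 * rrs_665

def spm_high(rrs_865):
    # hypothetical highly turbid branch driven by the near-infrared band,
    # saturating as reflectance approaches an asymptote of 0.2 sr^-1
    return 1000.0 * rrs_865 / (1.0 - rrs_865 / 0.2)

def spm_blended(rrs_665, rrs_865, lo=0.007, hi=0.016):
    """Mixing law: weight the two branches linearly between the
    thresholds `lo` and `hi` on the red-band reflectance."""
    if rrs_665 <= lo:
        return spm_low(rrs_665)
    if rrs_665 >= hi:
        return spm_high(rrs_865)
    w = (rrs_665 - lo) / (hi - lo)
    return (1.0 - w) * spm_low(rrs_665) + w * spm_high(rrs_865)
```

The linear weight keeps the retrieval continuous at both thresholds, avoiding jumps in SPM maps where a scene crosses from clear to turbid water.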

  17. Image reconstruction in computerized tomography using the convolution method

    International Nuclear Information System (INIS)

    Oliveira Rebelo, A.M. de.

    1984-03-01

    In the present work an algorithm was derived, using the analytical convolution method (filtered back-projection), for two-dimensional or three-dimensional image reconstruction in computerized tomography applied to non-destructive testing and to medical use. This mathematical model is based on the analytical Fourier transform method for image reconstruction. The model consists of a discrete system formed by an NxN array of cells (pixels). The attenuation in the object under study of a collimated gamma-ray beam has been determined for various positions and incidence angles (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function W_ij, which was used for simulated tests. Simulated tests using standard objects with attenuation coefficients in the range of 0.2 to 0.7 cm⁻¹ were carried out using cell arrays of up to 25x25. One application was carried out in the medical area, simulating image reconstruction of an arm phantom with attenuation coefficients in the range of 0.2 to 0.5 cm⁻¹ using cell arrays of 41x41. The simulated results show that, in objects with a great number of interfaces and great variations of attenuation coefficients at these interfaces, a good reconstruction is obtained when the number of projections equals the reconstruction matrix dimension. A good reconstruction is otherwise obtained with fewer projections. (author) [pt
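As a reference for the convolution step of filtered back-projection, here is the standard discrete Ram-Lak (ramp) kernel and a direct convolution of one projection with it. This is a generic textbook sketch, not the author's code; `tau` is the detector sampling interval:

```python
import math

def ramlak_kernel(n, tau=1.0):
    """Discrete Ram-Lak convolution kernel h[k], k = -n..n
    (Ramachandran-Lakshminarayanan form): 1/(4*tau^2) at k = 0,
    zero at even k, and -1/(pi*k*tau)^2 at odd k."""
    h = []
    for k in range(-n, n + 1):
        if k == 0:
            h.append(1.0 / (4.0 * tau * tau))
        elif k % 2 == 0:
            h.append(0.0)
        else:
            h.append(-1.0 / (math.pi * k * tau) ** 2)
    return h

def filter_projection(proj, kernel):
    """Convolve one projection with the ramp kernel -- the 'convolution
    method' step applied before back-projecting each projection."""
    n = len(kernel) // 2
    out = []
    for i in range(len(proj)):
        s = 0.0
        for k in range(-n, n + 1):
            j = i - k
            if 0 <= j < len(proj):
                s += proj[j] * kernel[k + n]
        out.append(s)
    return out

# a delta-like projection reproduces the kernel around its peak
filtered = filter_projection([0.0, 0.0, 1.0, 0.0, 0.0], ramlak_kernel(2))
```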

  18. Semi-analytical solution for electro-magneto-thermoelastic creep response of functionally graded piezoelectric rotating disk

    International Nuclear Information System (INIS)

    Loghman, A.; Abdollahian, M.; Jafarzadeh Jazi, A.; Ghorbanpour Arani, A.

    2013-01-01

    Time-dependent electro-magneto-thermoelastic creep response of a rotating disk made of functionally graded piezoelectric materials (FGPM) is studied. The disk is placed in a uniform magnetic field and a distributed temperature field, and is subjected to an induced electric potential and a centrifugal body force. The material thermal, mechanical, magnetic and electric properties are represented by power-law distributions in the radial direction. The creep constitutive model is Norton's law, in which the creep parameters are also power functions of radius. Using the equations of equilibrium and the strain-displacement and stress-strain relations in conjunction with the potential-displacement equation, a non-homogeneous differential equation for displacement containing time-dependent creep strains is derived. A semi-analytical solution followed by a numerical procedure has been developed to obtain histories of stresses, strains, electric potential and creep-strain rates by using the Prandtl-Reuss relations. Histories of the electric potential; of the radial, circumferential and effective stresses and strains; and of the creep stress rates and effective creep strain rate are presented. It has been found that the tensile radial stress distribution decreases during the life of the FGPM rotating disk, which is associated with major electric potential redistributions that can be used as a sensor for condition monitoring of the FGPM rotating disk. (authors)

  19. Major Mergers in CANDELS up to z=3: Calibrating the Close-Pair Method Using Semi-Analytic Models and Baryonic Mass Ratio Estimates

    Science.gov (United States)

    Mantha, Kameswara; McIntosh, Daniel H.; Conselice, Christopher; Cook, Joshua S.; Croton, Darren J.; Dekel, Avishai; Ferguson, Henry C.; Hathi, Nimish; Kodra, Dritan; Koo, David C.; Lotz, Jennifer M.; Newman, Jeffrey A.; Popping, Gergo; Rafelski, Marc; Rodriguez-Gomez, Vicente; Simmons, Brooke D.; Somerville, Rachel; Straughn, Amber N.; Snyder, Gregory; Wuyts, Stijn; Yu, Lu; Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) Team

    2018-01-01

    Cosmological simulations predict that the rate of merging between similar-mass massive galaxies should increase towards early cosmic time. We study the incidence of major (stellar mass ratio SMR < 4) close pairs among massive (log M > 10.3) galaxies spanning 0 < z < 3, and find pair-based merger rates at z > 1.5 in strong disagreement with theoretical merger rate predictions. On the other hand, if we compare to a simulation-tuned, evolving timescale prescription from Snyder et al., 2017, we find that the merger rate evolution agrees with theory out to z=3. These results highlight the need for robust calibrations on the complex and presumably redshift-dependent pair-to-merger-rate conversion factors to improve constraints of the empirical merger history. To address this, we use a unique compilation of mock datasets produced by three independent state-of-the-art Semi-Analytic Models (SAMs). We present preliminary calibrations of the close-pair observability timescale and outlier fraction as a function of redshift, stellar mass, mass ratio, and local over-density. Furthermore, to verify the hypothesis by previous empirical studies that SMR selection of major pairs may be biased, we present a new analysis of the baryonic (gas+stars) mass ratios of a subset of close pairs in our sample. For the first time, our preliminary analysis highlights that a noticeable fraction of SMR-selected minor pairs (SMR > 4) have major baryonic mass ratios (BMR < 4), which indicates that merger rates based on SMR selection may be under-estimated.

  20. Analysis of chemical warfare using a transient semi-Markov formulation.

    OpenAIRE

    Kierzewski, Michael O.

    1988-01-01

    Approved for public release; distribution is unlimited. This thesis proposes an analytical model to test various assumptions about conventional/chemical warfare. A unit's status in conventional/chemical combat is modeled as states in a semi-Markov chain with transient and absorbing states. The effects of differing chemical threat levels, availability of decontamination assets and assumed personnel degradation rates on expected unit life and capabilities are tested. The ...
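For a chain of this type, expected unit life is the expected time to absorption: with transient-to-transient jump probabilities Q and mean sojourn (holding) times h, it solves (I − Q)t = h. A minimal sketch for two transient states, solved by Cramer's rule; all states, probabilities and times below are invented for illustration and are not from the thesis:

```python
def expected_lifetime(Q, hold):
    """Expected time to absorption from each transient state of a
    semi-Markov chain: solve (I - Q) t = hold for the 2-state case,
    where Q holds transient-to-transient jump probabilities and
    `hold` the mean sojourn time in each transient state."""
    a, b = 1.0 - Q[0][0], -Q[0][1]
    c, d = -Q[1][0], 1.0 - Q[1][1]
    det = a * d - b * c
    t0 = (hold[0] * d - b * hold[1]) / det
    t1 = (a * hold[1] - hold[0] * c) / det
    return t0, t1

# illustrative unit: state 0 = fully capable, state 1 = chemically
# degraded; the absorbing "combat-ineffective" state is implicit in
# the row sums being below 1.
Q = [[0.6, 0.3],   # from state 0: stay 0.6, degrade 0.3, absorbed 0.1
     [0.0, 0.7]]   # from state 1: stay 0.7, absorbed 0.3
hold = [2.0, 1.0]  # mean sojourn time (days) in each state
t0, t1 = expected_lifetime(Q, hold)
```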

  1. Comparing semi-analytic particle tagging and hydrodynamical simulations of the Milky Way's stellar halo

    Science.gov (United States)

    Cooper, Andrew P.; Cole, Shaun; Frenk, Carlos S.; Le Bret, Theo; Pontzen, Andrew

    2017-08-01

    Particle tagging is an efficient, but approximate, technique for using cosmological N-body simulations to model the phase-space evolution of the stellar populations predicted, for example, by a semi-analytic model of galaxy formation. We test the technique developed by Cooper et al. (which we call STINGS here) by comparing particle tags with stars in a smoothed particle hydrodynamics (SPH) simulation. We focus on the spherically averaged density profile of stars accreted from satellite galaxies in a Milky Way (MW)-like system. The stellar profile in the SPH simulation can be recovered accurately by tagging dark matter (DM) particles in the same simulation according to a prescription based on the rank order of particle binding energy. Applying the same prescription to an N-body version of this simulation produces a density profile differing from that of the SPH simulation by ≲10 per cent on average between 1 and 200 kpc. This confirms that particle tagging can provide a faithful and robust approximation to a self-consistent hydrodynamical simulation in this regime (in contradiction to previous claims in the literature). We find only one systematic effect, likely due to the collisionless approximation, namely that massive satellites in the SPH simulation are disrupted somewhat earlier than their collisionless counterparts. In most cases, this makes remarkably little difference to the spherically averaged distribution of their stellar debris. We conclude that, for galaxy formation models that do not predict strong baryonic effects on the present-day DM distribution of MW-like galaxies or their satellites, differences in stellar halo predictions associated with the treatment of star formation and feedback are much more important than those associated with the dynamical limitations of collisionless particle tagging.
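The binding-energy rank-order prescription can be sketched as below: select the most-bound fraction of a satellite's DM particles as stellar tracers. The fraction value and the data are illustrative only; the actual prescription also assigns a stellar mass to each tagged particle from the semi-analytic model, which is omitted here:

```python
def tag_particles(binding_energy, f_mb=0.05):
    """Return the indices of the most-bound fraction `f_mb` of a
    satellite's DM particles (binding energies are negative; lower
    value = more bound), selected by rank order of binding energy."""
    order = sorted(range(len(binding_energy)),
                   key=lambda i: binding_energy[i])  # most bound first
    n_tag = max(1, int(round(f_mb * len(binding_energy))))
    return set(order[:n_tag])

# 20 particles with energies 0, -1, ..., -19: indices 19 and 18 are
# the two most bound, so a 10 per cent tag selects exactly those.
tags = tag_particles([-float(i) for i in range(20)], f_mb=0.10)
```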

  2. Rapid Late Holocene glacier fluctuations reconstructed from South Georgia lake sediments using novel analytical and numerical techniques

    Science.gov (United States)

    van der Bilt, Willem; Bakke, Jostein; Werner, Johannes; Paasche, Øyvind; Rosqvist, Gunhild

    2016-04-01

    The collapse of ice shelves, rapidly retreating glaciers and a dramatic recent temperature increase show that Southern Ocean climate is rapidly shifting. Also, instrumental and modelling data demonstrate transient interactions between oceanic and atmospheric forcings as well as climatic teleconnections with lower-latitude regions. Yet beyond the instrumental period, a lack of proxy climate time series impedes our understanding of Southern Ocean climate. Also, available records often lack the resolution and chronological control required to resolve rapid climate shifts like those observed at present. Alpine glaciers are found on most Southern Ocean islands and quickly respond to shifts in climate through changes in mass balance. Attendant changes in glacier size drive variations in the production of rock flour, the suspended product of glacial erosion. This climate response may be captured by downstream distal glacier-fed lakes, continuously recording glacier history. Sediment records from such lakes are considered prime sources for paleoclimate reconstructions. Here, we present the first reconstruction of Late Holocene glacier variability from the island of South Georgia. Using a toolbox of advanced physical, geochemical (XRF) and magnetic proxies, in combination with state-of-the-art numerical techniques, we fingerprinted a glacier signal from glacier-fed lake sediments. This lacustrine sediment signal was subsequently calibrated against mapped glacier extent with the help of geomorphological moraine evidence and remote sensing techniques. The outlined approach enabled us to robustly resolve variations of a complex glacier at sub-centennial timescales, while constraining the sedimentological imprint of other geomorphic catchment processes. From a paleoclimate perspective, our reconstruction reveals a dynamic Late Holocene climate, modulated by long-term shifts in regional circulation patterns. We also find evidence for rapid medieval glacier retreat as well as a

  3. Analytical solution for the transient wave propagation of a buried cylindrical P-wave line source in a semi-infinite elastic medium with a fluid surface layer

    Science.gov (United States)

    Shan, Zhendong; Ling, Daosheng

    2018-02-01

    This article develops an analytical solution for the transient wave propagation of a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined. The Scholte wave is the wave that propagates along the interface between the fluid and solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each item of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, the interface and the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.

  4. Comparative aseismic response study of different analytical models of nuclear power plant

    International Nuclear Information System (INIS)

    Takemori, T.; Ogiwara, Y.; Kawakatsu, T.; Abe, Y.; Kitade, K.

    1977-01-01

    This study consists of two major sections: one is a comparison of the magnification factor of input acceleration between a finite element model and a spring-mass model; the other is an evaluation of modified spring-mass models for the aseismic design of nuclear power plants. The structure model used in this study is a P.W.R. reactor containment building composed of the outer shield wall, the steel containment and the internal structure. The rigidity of the foundation rock is represented by the shear wave velocity Vs. The magnifications of bedrock acceleration at the structure foundation, main floors and free surface of the foundation rock are calculated using an axisymmetric finite element analytical model with various rock rigidities. The outer shield wall and the steel containment are represented by shell elements, and the internal structure and foundation rock are represented by quadrilateral elements. Each nodal point has four degrees of freedom in the shell elements and three in the quadrilateral elements. The total number of degrees of freedom of this analytical model is large, so the eigenvalues are calculated by the subspace iteration method. The responses are calculated from the time history of acceleration and the response spectrum based on the mode superposition method. The spring-mass model is most used in aseismic design for its simplicity. However, if the foundation rock spring is calculated assuming a semi-infinite elastic solid, the analysis of the magnification of acceleration in the foundation rock is limited. From the calculated results of the F.E.M. model, a modification of the spring-mass model is estimated, considering the magnification ratio of the foundation rock beneath the structure.

  5. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements

    Directory of Open Access Journals (Sweden)

    Miguel A. Franesqui

    2017-08-01

    Full Text Available This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled “Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves” (Franesqui et al., 2017) [1].

  6. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    Science.gov (United States)

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
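The geometric core of the transit-time-ratio method can be sketched as below: for an idealized homogeneous medium, a surface pulse diffracted around a crack tip midway between two transducers travels a longer path than the direct surface pulse, and the ratio of the two transit times is self-calibrating (the wave speed cancels). This is only the textbook relation with invented symbol names; the article instead fits regression parameters of its analytical model to notched laboratory samples:

```python
import math

def crack_depth(t_ratio, half_spacing):
    """Crack depth d from the transit-time ratio t_c/t_0, assuming the
    crack sits midway between transducers separated by 2*half_spacing:
    t_c/t_0 = sqrt(d**2 + a**2) / a, where a = half_spacing, so
    d = a * sqrt((t_c/t_0)**2 - 1)."""
    return half_spacing * math.sqrt(t_ratio ** 2 - 1.0)

# a time ratio of sqrt(2) corresponds to a crack as deep as the
# half-spacing; a ratio of 1 means no crack
d = crack_depth(math.sqrt(2.0), 100.0)
```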

  7. A novel region-growing based semi-automatic segmentation protocol for three-dimensional condylar reconstruction using cone beam computed tomography (CBCT).

    Directory of Open Access Journals (Sweden)

    Tong Xi

    Full Text Available OBJECTIVE: To present and validate a semi-automatic segmentation protocol to enable an accurate 3D reconstruction of the mandibular condyles using cone beam computed tomography (CBCT). MATERIALS AND METHODS: Approval from the regional medical ethics review board was obtained for this study. Bilateral mandibular condyles in ten CBCT datasets of patients were segmented using the currently proposed semi-automatic segmentation protocol. This segmentation protocol combined 3D region-growing and local thresholding algorithms. The segmentation of a total of twenty condyles was performed by two observers. The Dice coefficient and distance map calculations were used to evaluate the accuracy and reproducibility of the segmented and 3D rendered condyles. RESULTS: The mean inter-observer Dice coefficient was 0.98 (range [0.95-0.99]). An average 90th percentile distance of 0.32 mm was found, indicating an excellent inter-observer similarity of the segmented and 3D rendered condyles. No systematic errors were observed in the currently proposed segmentation protocol. CONCLUSION: The novel semi-automated segmentation protocol is an accurate and reproducible tool to segment and render condyles in 3D. The implementation of this protocol in clinical practice allows the CBCT to be used as an imaging modality for the quantitative analysis of condylar morphology.
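The inter-observer agreement metric used above is the Dice similarity coefficient; for reference, with segmentations represented as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice similarity coefficient between two segmentations given as
    sets of voxel coordinates: 2*|A ∩ B| / (|A| + |B|), 1.0 meaning
    perfect overlap."""
    if not a and not b:
        return 1.0  # two empty segmentations agree by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# observer A marked three voxels, observer B two of the same three
score = dice({(0, 0), (0, 1), (1, 0)}, {(0, 0), (0, 1)})
```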

  8. Novel 3D porous semi-IPN hydrogel scaffolds of silk sericin and poly(N-hydroxyethyl acrylamide) for dermal reconstruction

    Directory of Open Access Journals (Sweden)

    S. Ross

    2017-09-01

    Full Text Available In this work, a novel semi-interpenetrating polymer network (semi-IPN) hydrogel scaffold based on silk sericin (SS) and poly(N-hydroxyethyl acrylamide) (PHEA) was successfully fabricated via conventional free-radical polymerization. The porous structure of the scaffolds was introduced using a lyophilization technique, and the effect of the cross-linker (XL) on the morphology, gelation time and physical properties of the hydrogel scaffold was studied first. The results show that low cross-linker contents (0.125, 0.25 and 0.5 wt% XL) produced flexible scaffolds and appropriate gelation times for fabricating the scaffold. Therefore, the polymerization system with a constant percentage of XL at 0.5 wt% was chosen to further study the effect of SS on the physical properties and cell culture behaviour of the scaffolds. It was observed that the hydrogel scaffold of PHEA without SS (PHEA/SS-0) showed no cell proliferation, whereas hydrogel scaffolds with SS enhanced cell viability when compared to the positive control. The PHEA/SS sample with 1.25 wt% of SS and 0.5 wt% of cross-linker was the most suitable for HFF-1 cell migration and proliferation, owing to its connected porous structure along with the silk sericin. The results proved that this novel porous semi-IPN hydrogel has the potential to be used as a dermal reconstruction scaffold.

  9. Semi-Markov models: control of restorable systems with latent failures

    CERN Document Server

    Obzherin, Yuriy E

    2015-01-01

    Featuring previously unpublished results, Semi-Markov Models: Control of Restorable Systems with Latent Failures describes valuable methodology which can be used by readers to build mathematical models of a wide class of systems for various applications. In particular, this information can be applied to build models of reliability, queuing systems, and technical control. Beginning with a brief introduction to the area, the book covers semi-Markov models for different control strategies in one-component systems, defining their stationary characteristics of reliability and efficiency, and uti

  10. Quasi-normal frequencies: Semi-analytic results for highly damped modes

    International Nuclear Information System (INIS)

    Skakala, Jozef; Visser, Matt

    2011-01-01

    Black hole highly-damped quasi-normal frequencies (QNFs) are very often of the form ω_n = (offset) + i n (gap). We have investigated the genericity of this phenomenon for the Schwarzschild-de Sitter (SdS) black hole by considering a model potential that is piecewise Eckart (piecewise Pöschl-Teller), and developing an analytic 'quantization condition' for the highly-damped quasi-normal frequencies. We find that the ω_n = (offset) + i n (gap) behaviour is common but not universal, with the controlling feature being whether or not the ratio of the surface gravities is a rational number. We furthermore observed that the relation between rational ratios of surface gravities and periodicity of QNFs is very generic, and also occurs within different analytic approaches applied to various types of black hole spacetimes. These observations are of direct relevance to any physical situation where highly-damped quasi-normal modes are important.
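For orientation, the asymptotic form quoted above can be written out as follows; the identification of the gap for the single-horizon Schwarzschild case is standard background (from the literature on highly damped quasi-normal modes), not a result stated in this abstract:

```latex
\omega_n \;\simeq\; (\text{offset}) \;+\; i\,n\,(\text{gap}),
\qquad n \to \infty,
\qquad \text{gap} \;=\; 2\pi T_H \;=\; \kappa
\quad \text{(Schwarzschild, geometric units)},
```

where $\kappa$ is the horizon surface gravity and $T_H$ the Hawking temperature. In the SdS case two surface gravities enter, and the spectrum is periodic in this sense only when their ratio is rational.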

  11. Improved quantitative 90Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns for unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy-window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. There was convergence of the scatter estimate after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, with MC scatter modeling there was substantial improvement in activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%) with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling, contrast improved substantially both visually and in
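The role of the additive scatter estimate in the reconstruction model can be seen in a bare ML-EM sketch: the scatter term enters the forward model (and hence the denominator of the measured/expected ratio) but is not itself updated. This toy uses a dense system matrix and no ordered subsets, collimator-detector response, or Monte Carlo simulator; all data are invented:

```python
def mlem_update(image, sysmat, proj, scatter, n_iter=10):
    """ML-EM with an additive scatter estimate s in the forward model:
    expected counts y_i = sum_j a_ij * x_j + s_i. Each iteration
    back-projects the ratio of measured to expected counts."""
    x = list(image)
    n_pix, n_bins = len(x), len(proj)
    for _ in range(n_iter):
        # forward project the current estimate and add the scatter term
        y = [sum(sysmat[i][j] * x[j] for j in range(n_pix)) + scatter[i]
             for i in range(n_bins)]
        # multiplicative update per pixel
        for j in range(n_pix):
            num = sum(sysmat[i][j] * proj[i] / y[i] for i in range(n_bins))
            den = sum(sysmat[i][j] for i in range(n_bins))
            x[j] *= num / den
    return x

# identity system matrix, uniform scatter of 0.5 counts per bin;
# measured projections 2.5 and 3.5 then imply activities 2.0 and 3.0
recon = mlem_update([1.0, 1.0],
                    [[1.0, 0.0], [0.0, 1.0]],
                    [2.5, 3.5],
                    [0.5, 0.5])
```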

  12. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details whose quality is degraded due to the insufficient projection number, which consequently degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ's fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model’s accuracy in regions containing small fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  13. Semi-analytical approach for guided mode resonance in high-index-contrast photonic crystal slab: TE polarization.

    Science.gov (United States)

    Yang, Yi; Peng, Chao; Li, Zhengbin

    2013-09-09

    In high-contrast (HC) photonic crystal (PC) slabs, the high-order coupling is so intense that it must be accounted for when analyzing the guided mode resonance (GMR) effect. In this paper, a semi-analytical approach is proposed for analyzing GMR in HC PC slabs with TE-like polarization. The intense high-order coupling is included by using a convergent recursive procedure. The reflection of radiative waves at high-index-contrast interfaces is also considered by adopting a strict Green's function for multi-layer structures. Modal properties of interest, such as the band structure, radiation constant and field profile, are calculated, agreeing well with numerical finite-difference time-domain simulations. This analysis is promising for the design and optimization of various HC PC devices.

  14. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  15. Reconstruction of the sediment flow regime in a semi-arid Mediterranean catchment using check dam sediment information.

    Science.gov (United States)

    Bussi, G.; Rodríguez, X.; Francés, F.; Benito, G.; Sánchez-Moya, Y.; Sopeña, A.

    2012-04-01

    When using hydrological and sedimentological models, lack of historical records is often one of the main problems to face, since observed data are essential for model validation. If gauged data are poor or absent, a source of additional proxy data may be the slack-water deposits accumulated in check dams. The aim of this work is to present the result of the reconstruction of the recent hydrological and sediment yield regime of a semi-arid Mediterranean catchment (Rambla del Poyo, Spain, 184 square km) by coupling palaeoflood techniques with a distributed hydrological and sediment cycle model, using as proxy data the sandy slack-water deposits accumulated upstream of a small check dam (reservoir volume 2,500 cubic m) located in the headwater basin (drainage area 13 square km). The solid volume trapped in the reservoir has been estimated using differential GPS data and an interpolation technique. Afterwards, the total solid volume has been disaggregated into various layers (flood units) by means of a stratigraphical description of a depositional sequence in a 3.5 m trench made across the reservoir sediment deposit, taking care to identify all flood units; the separation between flood units is indicated by a break in deposition. The sedimentary sequence shows evidence of 15 flood events that occurred after the dam construction (early 1990s). Not all events until the present are included; for the last ones, the stream velocity and energy conditions for generating slack-water deposits were not fulfilled due to the reservoir filling. The volume of each flood unit has been estimated under the hypothesis that layers have a simple pyramidal (or wedge) shape; every volume represents an estimate of the sediments trapped in the reservoir by the corresponding flood event. The obtained results have been compared with the results of modeling a 20 year time series (1990 - 2009) with the distributed conceptual hydrological and sediment yield model TETIS-SED, in order to

  16. Oblique reconstructions in tomosynthesis. II. Super-resolution

    International Nuclear Information System (INIS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. The dependency of the highest
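The aliasing argument in this abstract — a frequency above the detector alias (Nyquist) limit is unresolvable from a single uniformly sampled view, but becomes resolvable once subpixel-shifted samplings are combined — can be illustrated with a toy 1D sketch. The frequencies and the half-pixel shift below are illustrative, not the paper's acquisition geometry:

```python
import math

F_HIGH, F_ALIAS = 0.7, 0.3   # cycles/pixel; 0.7 aliases to 0.3 at unit sampling
N = 16                       # number of detector pixels

def sample(freq, offsets):
    """Sample cos(2*pi*f*x) on integer pixel grids shifted by each offset."""
    return [math.cos(2 * math.pi * freq * (n + d))
            for d in offsets for n in range(N)]

# One view (single grid): the two frequencies are indistinguishable.
max_diff_single = max(abs(a - b) for a, b in
                      zip(sample(F_HIGH, [0.0]), sample(F_ALIAS, [0.0])))

# Two grids shifted by half a pixel (a stand-in for the subpixel detector
# shifts across projections): the frequencies now separate.
max_diff_multi = max(abs(a - b) for a, b in
                     zip(sample(F_HIGH, [0.0, 0.5]), sample(F_ALIAS, [0.0, 0.5])))
```

On the single grid the difference is at floating-point noise level, while the shifted grids give an order-one separation, which is the mechanism behind subpixel (super-)resolution.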

  17. Oblique reconstructions in tomosynthesis. II. Super-resolution

    Science.gov (United States)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. The dependency of the highest

  18. Free vibration of thin axisymmetric structures by a semi-analytical finite element scheme and isoparametric solid elements

    International Nuclear Information System (INIS)

    Akeju, T.A.I.; Kelly, D.W.; Zienkiewicz, O.C.; Kanaka Raju, K.

    1981-01-01

    The eigenvalue equations governing the free vibration of axisymmetric solids are derived by means of a semi-analytical finite element scheme. In particular, we investigate the use of an 8-node solid element in structures which exhibit 'shell-like' behaviour. The Bathe-Wilson subspace iteration algorithm is employed for the solution of the equations. The element is shown to give good results for beam and shell vibration problems. It is also utilised to solve a complex solid in the form of an internal component of a modern jet engine. This particular application is of considerable practical importance, as the dynamics of such components form a dominant design constraint. (orig./HP)

  19. Semi-empirical crack tip analysis

    Science.gov (United States)

    Chudnovsky, A.; Ben Ouezdon, M.

    1988-01-01

    Experimentally observed crack opening displacements are employed as the solution of the multiple crack interaction problem. The near and far fields are then reconstructed analytically by means of the double layer potential technique. Evaluation of the effective stress intensity factor resulting from the interaction of the main crack and its surrounding crazes, in addition to the remotely applied load, is presented as an illustrative example. It is shown that crazing (as well as microcracking) may constitute an alternative mechanism to Dugdale-Barenblatt models responsible for the cancellation of the singularity at the crack tip.

  20. Breast reconstruction following mastectomy: current status in Australia.

    Science.gov (United States)

    Sandelin, Kerstin; King, Elizabeth; Redman, Sally

    2003-09-01

    Although breast reconstruction provides some advantages for women following mastectomy, few Australian breast cancer patients currently receive reconstruction. In Australia, the routine provision of breast reconstruction will require the development of specific health service delivery models. The present paper reports an analysis of the provision of breast reconstruction in eight sites in Australia. A semi-structured telephone interview was conducted with 10 surgeons offering breast reconstruction as part of their practice, including nine breast or general surgeons and one plastic surgeon. Surgeons reported offering breast reconstruction to all women facing mastectomy; the proportion of women deciding to have breast reconstruction varied between sites with up to 50% of women having a reconstruction at some sites. Most sites offered three types of reconstruction. Two pathways emerged: either the breast surgeon performed the breast surgery in a team with the plastic surgeon who undertook the breast reconstruction or the breast surgeon provided both the breast surgery and the reconstruction. Considerable waiting times for breast reconstruction were reported in the public sector particularly for delayed reconstruction. Surgeons reported receiving training in breast reconstruction from plastic surgeons or from a breast surgery team that performed reconstructions; a number had been trained overseas. No audits of breast reconstruction were being undertaken. Breast reconstruction can be offered on a routine basis in Australia in both the private and public sectors. Women may be more readily able to access breast reconstruction when it is provided by a breast surgeon alone, but the range of reconstruction options may be more limited. If access to breast reconstruction is to be increased, there will be a need to: (i) develop effective models for the rural sector taking account of the lack of plastic surgeons; (ii) address waiting times for reconstruction surgery in the

  1. Study on Semi-Parametric Statistical Model of Safety Monitoring of Cracks in Concrete Dams

    Directory of Open Access Journals (Sweden)

    Chongshi Gu

    2013-01-01

    Full Text Available Cracks are one of the hidden dangers in concrete dams, and building safety monitoring models for concrete dam cracks has always been difficult. Starting from the parametric statistical model for safety monitoring of cracks in concrete dams, drawing on semi-parametric statistical theory, and accounting for the abnormal behaviors of these cracks, a semi-parametric statistical model for safety monitoring of concrete dam cracks is established to overcome the parametric model's limited ability to represent the observed behavior. Applications to past projects show that the semi-parametric statistical model fits the data more closely and explains cracks in concrete dams better than the parametric statistical model, although its forecasting capability is equivalent to that of the parametric statistical model. The semi-parametric statistical model is simple to build, rests on a sound principle, and is highly practical, with good prospects for application in actual projects.

  2. Tornadoes and related damage costs: statistical modelling with a semi-Markov approach

    Directory of Open Access Journals (Sweden)

    Guglielmo D’Amico

    2016-09-01

    Full Text Available We propose a statistical modelling approach for predicting and simulating tornado occurrences and accumulated cost distributions over a time interval. This is achieved by modelling the tornado intensity, measured on the Fujita scale, as a stochastic process. Since the Fujita scale divides tornado intensity into six states, the intensity can be modelled using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reproduce the duration effect detected in tornado occurrence. The superiority of the semi-Markov model over the Markov chain model is also confirmed by means of a statistical hypothesis test. As an application, we compute the expected value and the variance of the costs generated by tornadoes over a given time interval in a given area. The paper contributes to the literature by demonstrating that semi-Markov models are an effective tool for the physical analysis of tornadoes as well as for the estimation of the economic damage they cause.
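A minimal sketch of the kind of cost accumulation the abstract describes follows. The Fujita-state weights, Weibull holding times, and per-event costs are invented for illustration; the paper estimates its own ingredients from tornado records:

```python
import random

random.seed(42)
STATES = ["F0", "F1", "F2", "F3", "F4", "F5"]
# Hypothetical intensity weights and per-event costs in $M (illustrative only).
WEIGHTS = [0.55, 0.25, 0.10, 0.06, 0.03, 0.01]
COST = {"F0": 0.01, "F1": 0.1, "F2": 1.0, "F3": 5.0, "F4": 20.0, "F5": 100.0}

def simulate(horizon_days=365.0):
    """Accumulate tornado damage cost over one year.

    A Weibull holding time with shape != 1 is what makes the process
    semi-Markov rather than Markov: the sojourn time is not memoryless,
    which is how the model reproduces the duration effect.
    """
    t, cost = 0.0, 0.0
    while True:
        t += random.weibullvariate(20.0, 1.5)  # days to next tornado
        if t > horizon_days:
            return cost
        state = random.choices(STATES, weights=WEIGHTS)[0]
        cost += COST[state]

costs = [simulate() for _ in range(2000)]
mean_cost = sum(costs) / len(costs)
variance = sum((c - mean_cost) ** 2 for c in costs) / len(costs)
```

`mean_cost` and `variance` correspond to the two moments of the accumulated cost distribution computed in the paper, here obtained by Monte Carlo rather than analytically.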

  3. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model including the tissue conductivity distribution, the cortical surface, and ele...

  4. Filter assessment applied to analytical reconstruction for industrial third-generation tomography

    Energy Technology Data Exchange (ETDEWEB)

    Velo, Alexandre F.; Martins, Joao F.T.; Oliveira, Adriano S.; Carvalho, Diego V.S.; Faria, Fernando S.; Hamada, Margarida M.; Mesquita, Carlos H., E-mail: afvelo@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Multiphase systems are structures that contain a mixture of solids, liquids and gases inside a chemical reactor or pipes in a dynamic process. These systems are found in the chemical, food, pharmaceutical and petrochemical industries. Gamma ray computed tomography (CT) has been applied to visualize the distribution of multiphase systems without interrupting production, and CT systems have been used to improve the design, operation and troubleshooting of industrial processes. Computed tomography for multiphase processes is being developed at several laboratories. It is well known that scanning systems demand high processing time and provide a limited set of data projections and views to obtain an image; consequently, image quality depends on the number of projections, the number of detectors, the acquisition time and the reconstruction time. A phantom containing air, iron and aluminum was scanned on a third-generation industrial tomograph with a 662 keV ({sup 137}Cs) radioactive source, and the Filtered Back Projection algorithm was applied to reconstruct the images. Since efficient tomography depends on image quality, the objective of this research was to apply different types of filters in the analytical algorithm and compare them using the figure of merit denominated root mean squared error (RMSE); the filter that presents the lowest RMSE has the best quality. In this research, five types of filters were used: the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hann filters. All filters presented low RMSE values, meaning a low standard deviation compared to the mass absorption coefficient; however, the Hann filter presented better RMSE and CNR than the others. (author)
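The five apodized ramp filters compared in the abstract have standard textbook frequency responses, sketched below together with the RMSE figure of merit. This is a generic formulation for illustration, not the authors' implementation:

```python
import math

def filter_response(name, f, f_max):
    """Magnitude response of common FBP filters: a ramp |f| times a window."""
    ramp = abs(f)
    x = math.pi * f / (2 * f_max)  # window argument, zero at DC
    if name == "ram-lak":
        return ramp                               # pure ramp, no apodization
    if name == "shepp-logan":
        return ramp * (math.sin(x) / x if x else 1.0)
    if name == "cosine":
        return ramp * math.cos(x)
    if name == "hamming":
        return ramp * (0.54 + 0.46 * math.cos(2 * x))
    if name == "hann":
        return ramp * 0.5 * (1 + math.cos(2 * x))  # rolls off to zero at f_max
    raise ValueError(name)

def rmse(reference, reconstruction):
    """Root mean squared error, the figure of merit used to rank the filters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(reference, reconstruction))
                     / len(reference))
```

The stronger high-frequency roll-off of the Hann window (zero response at `f_max`) is what trades resolution for noise suppression relative to the plain Ram-Lak ramp.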

  5. Filter assessment applied to analytical reconstruction for industrial third-generation tomography

    International Nuclear Information System (INIS)

    Velo, Alexandre F.; Martins, Joao F.T.; Oliveira, Adriano S.; Carvalho, Diego V.S.; Faria, Fernando S.; Hamada, Margarida M.; Mesquita, Carlos H.

    2015-01-01

    Multiphase systems are structures that contain a mixture of solids, liquids and gases inside a chemical reactor or pipes in a dynamic process. These systems are found in the chemical, food, pharmaceutical and petrochemical industries. Gamma ray computed tomography (CT) has been applied to visualize the distribution of multiphase systems without interrupting production, and CT systems have been used to improve the design, operation and troubleshooting of industrial processes. Computed tomography for multiphase processes is being developed at several laboratories. It is well known that scanning systems demand high processing time and provide a limited set of data projections and views to obtain an image; consequently, image quality depends on the number of projections, the number of detectors, the acquisition time and the reconstruction time. A phantom containing air, iron and aluminum was scanned on a third-generation industrial tomograph with a 662 keV (137Cs) radioactive source, and the Filtered Back Projection algorithm was applied to reconstruct the images. Since efficient tomography depends on image quality, the objective of this research was to apply different types of filters in the analytical algorithm and compare them using the figure of merit denominated root mean squared error (RMSE); the filter that presents the lowest RMSE has the best quality. In this research, five types of filters were used: the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hann filters. All filters presented low RMSE values, meaning a low standard deviation compared to the mass absorption coefficient; however, the Hann filter presented better RMSE and CNR than the others. (author)

  6. Dose reconstruction modeling for medical radiation workers

    International Nuclear Information System (INIS)

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin

    2017-01-01

    Exposure information is a crucial element for the assessment of health risk due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured past doses using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual doses received before 1996. Using these estimates, organ doses can be calculated which, in turn, will be used to explore a wide range of health risks of occupational medical radiation exposure.

  7. Dose reconstruction modeling for medical radiation workers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin [Dept. of Preventive Medicine, Korea University, Seoul (Korea, Republic of)

    2017-04-15

    Exposure information is a crucial element for the assessment of health risk due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured past doses using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual doses received before 1996. Using these estimates, organ doses can be calculated which, in turn, will be used to explore a wide range of health risks of occupational medical radiation exposure.

  8. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with the quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the conditional variance of the process is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator
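A minimal sketch of what Gaussian quasi-likelihood estimation of an ARCH(1) model involves: simulate the process and minimize the quasi-likelihood objective, which requires only the first two conditional moments rather than a full distributional assumption. The simulated series and the crude grid search below stand in for the paper's estimators and the exchange-rate data:

```python
import math
import random

random.seed(0)

def simulate_arch1(a0, a1, n):
    """Simulate ARCH(1): y_t = sqrt(h_t)*e_t with h_t = a0 + a1*y_{t-1}^2."""
    y = [0.0]
    for _ in range(n):
        h = a0 + a1 * y[-1] ** 2
        y.append(math.sqrt(h) * random.gauss(0.0, 1.0))
    return y[1:]

def neg_quasi_loglik(a0, a1, y):
    """Negative Gaussian quasi-log-likelihood (up to constants)."""
    nll, prev = 0.0, 0.0
    for obs in y:
        h = a0 + a1 * prev ** 2
        nll += math.log(h) + obs ** 2 / h
        prev = obs
    return nll

y = simulate_arch1(0.5, 0.4, 4000)
# Crude grid search over a1 (a0 held at its true value) as a stand-in
# for a proper numerical optimizer.
grid = [i / 100 for i in range(5, 95)]
a1_hat = min(grid, key=lambda a1: neg_quasi_loglik(0.5, a1, y))
```

With 4000 observations the QL estimate lands close to the true value 0.4 even though no likelihood beyond the conditional mean and variance was specified.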

  9. Hyperbolic and semi-parametric models in finance

    Science.gov (United States)

    Bingham, N. H.; Kiesel, Rüdiger

    2001-02-01

    The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.

  10. Semi-automated De-identification of German Content Sensitive Reports for Big Data Analytics.

    Science.gov (United States)

    Seuss, Hannes; Dankerl, Peter; Ihle, Matthias; Grandjean, Andrea; Hammon, Rebecca; Kaestle, Nicola; Fasching, Peter A; Maier, Christian; Christoph, Jan; Sedlmayr, Martin; Uder, Michael; Cavallaro, Alexander; Hammon, Matthias

    2017-07-01

    Purpose  Projects involving collaborations between different institutions require data security via selective de-identification of words or phrases. A semi-automated de-identification tool was developed and evaluated on different types of medical reports, both natively and after adapting the algorithm to the text structure. Materials and Methods  A semi-automated de-identification tool was developed and evaluated for its sensitivity and specificity in detecting sensitive content in written reports. Data from 4671 pathology reports (4105 + 566 in two different formats), 2804 medical reports, 1008 operation reports, and 6223 radiology reports of 1167 patients suffering from breast cancer were de-identified. The content was itemized into four categories: direct identifiers (name, address), indirect identifiers (date of birth/operation, medical ID, etc.), medical terms, and filler words. The software was tested natively (without training) in order to establish a baseline. The reports were manually edited and the model re-trained for the next test set. After manually editing 25, 50, 100, 250, 500 and, if applicable, 1000 reports of each type, re-training was applied. Results  In the native test, 61.3 % of direct and 80.8 % of the indirect identifiers were detected. The performance (P) increased to 91.4 % (P25), 96.7 % (P50), 99.5 % (P100), 99.6 % (P250), 99.7 % (P500) and 100 % (P1000) for direct identifiers and to 93.2 % (P25), 97.9 % (P50), 97.2 % (P100), 98.9 % (P250), 99.0 % (P500) and 99.3 % (P1000) for indirect identifiers. Without training, 5.3 % of medical terms were falsely flagged as critical data. After training, this rate decreased to 4.0 % (P25), 3.6 % (P50), 4.0 % (P100), 3.7 % (P250), 4.3 % (P500), and 3.1 % (P1000). Roughly 0.1 % of filler words were falsely flagged. Conclusion  Training of the developed de-identification tool continuously improved its performance. Training with roughly 100 edited
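The detection and false-flag rates reported above reduce to simple set arithmetic over flagged tokens. A sketch, with invented token positions and counts (not data from the study):

```python
def deid_performance(n_tokens, true_ids, flagged):
    """Per-token detection metrics for one de-identification run.

    n_tokens: total token count in the report corpus;
    true_ids / flagged: sets of token positions that are identifiers /
    that the tool flagged.
    """
    detected = len(true_ids & flagged) / len(true_ids) if true_ids else 1.0
    others = n_tokens - len(true_ids)
    false_flags = len(flagged - true_ids) / others if others else 0.0
    return detected, false_flags

# Toy report with 100 tokens, 8 of which are identifiers; the tool flags
# 7 of them correctly plus 2 harmless medical terms by mistake.
detected, false_flags = deid_performance(
    100, set(range(8)), set(range(7)) | {40, 41})
```

`detected` corresponds to the sensitivity figures quoted per training level, and `false_flags` to the rate of medical terms or filler words wrongly marked as critical.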

  11. Automated comparison of Bayesian reconstructions of experimental profiles with physical models

    International Nuclear Information System (INIS)

    Irishkin, Maxim

    2014-01-01

    In this work we developed an expert system that carries out in an integrated and fully automated way i) a reconstruction of plasma profiles from the measurements, using Bayesian analysis, ii) a prediction of the reconstructed quantities according to some models, and iii) an intelligent comparison of the first two steps. This system includes systematic checking of the internal consistency of the reconstructed quantities, enables automated model validation and, if a well-validated model is used, can be applied to help detect interesting new physics in an experiment. The work shows three applications of this quite general system. The expert system can successfully detect failures in the automated plasma reconstruction and provide (on successful reconstruction cases) statistics of agreement of the models with the experimental data, i.e. information on the model validity. (author) [fr

  12. A semi-analytical study on helical springs made of shape memory polymer

    International Nuclear Information System (INIS)

    Baghani, M; Naghdabadi, R; Arghavani, J

    2012-01-01

    In this paper, the responses of shape memory polymer (SMP) helical springs under axial force are studied both analytically and numerically. In the analytical solution, we first derive the response of a cylindrical tube under torsional loadings. This solution can be used for helical springs in which both the curvature and pitch effects are negligible. This is the case for helical springs with large ratios of the mean coil radius to the cross sectional radius (spring index) and also small pitch angles. Making use of this solution simplifies the analysis of the helical springs to that of the torsion of a straight bar with circular cross section. The 3D phenomenological constitutive model recently proposed for SMPs is also reduced to the 1D shear case. Thus, an analytical solution for the torsional response of SMP tubes in a full cycle of stress-free strain recovery is derived. In addition, the curvature effect is added to the formulation and the SMP helical spring is analyzed using the exact solution presented for torsion of curved SMP tubes. In this modified solution, the effect of the direct shear force is also considered. In the numerical analysis, the 3D constitutive equations are implemented in a finite element program and a full cycle of stress-free strain recovery of an SMP (extension or compression) helical spring is simulated. Analytical and numerical results are compared and it is shown that the analytical solution gives accurate stress distributions in the cross section of the helical SMP spring besides the global load–deflection response. Some case studies are presented to show the validity of the presented analytical method. (paper)

  13. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Hydrological modeling in semi-arid region using HEC-HMS model ...

    African Journals Online (AJOL)

    The purpose of this study is to simulate rainfall-runoff in the semi-arid region of ... the frequency storm is used for the meteorological model, the SCS curve number is ... SCS unit hydrograph method have been applied to simulate the runoff rate.

  15. A simulation-based analytic model of radio galaxies

    Science.gov (United States)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  16. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
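A row-action sweep of the ART (Kaczmarz) kind that the package implements can be sketched in a few lines. This toy version with a fixed relaxation parameter is an illustration, not the AIR Tools code:

```python
def kaczmarz(A, b, sweeps=200, relaxation=1.0):
    """ART (Kaczmarz) iteration: cycle through the rows of A, projecting the
    current iterate onto each hyperplane <a_i, x> = b_i, scaled by the
    relaxation parameter."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xj for a, xj in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            step = relaxation * (b_i - dot) / norm2
            x = [xj + step * a for xj, a in zip(x, a_i)]
    return x

# Tiny consistent 2x2 system standing in for a discretized tomography problem.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = kaczmarz(A, b)  # converges to the exact solution [1, 3]
```

For noisy, inconsistent systems the iterates first approach the true solution and then drift toward the noisy least-squares solution; stopping early is the semi-convergence regularization the abstract refers to, and the relaxation parameter controls how fast that happens.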

  17. Field-driven chiral bubble dynamics analysed by a semi-analytical approach

    Science.gov (United States)

    Vandermeulen, J.; Leliaert, J.; Dupré, L.; Van Waeyenberge, B.

    2017-12-01

    Nowadays, field-driven chiral bubble dynamics in the presence of the Dzyaloshinskii-Moriya interaction are a topic of thorough investigation. In this paper, a semi-analytical approach is used to derive equations of motion that express the bubble wall (BW) velocity and the change in in-plane magnetization angle as function of the micromagnetic parameters of the involved interactions, thereby taking into account the two-dimensional nature of the bubble wall. It is demonstrated that the equations of motion enable an accurate description of the expanding and shrinking convex bubble dynamics and an expression for the transition field between shrinkage and expansion is derived. In addition, these equations of motion show that the BW velocity is not only dependent on the driving force, but also on the BW curvature. The absolute BW velocity increases for both a shrinking and an expanding bubble, but for different reasons: for expanding bubbles, it is due to the increasing importance of the driving force, while for shrinking bubbles, it is due to the increasing importance of contributions related to the BW curvature. Finally, using this approach we show how the recently proposed magnetic bubblecade memory can operate in the flow regime in the presence of a tilted sinusoidal magnetic field and at greatly reduced bubble sizes compared to the original device prototype.

  18. Weak form implementation of the semi-analytical finite element (SAFE) method for a variety of elastodynamic waveguides

    Science.gov (United States)

    Hakoda, Christopher; Lissenden, Clifford; Rose, Joseph L.

    2018-04-01

    Dispersion curves are essential to any guided wave NDE project. The Semi-Analytical Finite Element (SAFE) method has significantly increased the ease with which these curves can be calculated. However, due to misconceptions regarding theory and fragmentation based on different finite-element software, the theory has stagnated, and adoption by researchers who are new to the field has been slow. This paper focuses on the relationship between the SAFE formulation and finite element theory, and the implementation of the SAFE method in a weak form is shown for plates, pipes, layered waveguides/composites, curved waveguides, and arbitrary cross-sections. The benefits of the weak form are briefly described, as is implementation in open-source and commercial finite element software.

  19. Analytic solutions of QCD motivated Hamiltonians at low energy

    International Nuclear Information System (INIS)

    Yepez, T.; Amor, A.; Hess, P.O.; Szczepaniak, A.; Civitarese, O.

    2011-01-01

    A model Hamiltonian, motivated by QCD, is investigated in order to study first the quark sector alone, then the gluon sector alone, and finally both together. Restricting to the pure quark sector and setting the mass of the quarks to zero, we find analytic solutions involving two to three orbitals. Allowing the mass of the quarks to be different from zero, we find semi-analytic solutions involving an arbitrary number of orbitals. Afterwards, we indicate how to incorporate gluons. (author)

  20. Tool Efficiency Analysis model research in SEMI industry

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2018-01-01

    One of the key goals in the SEMI industry is to improve equipment throughput and ensure that equipment production efficiency is maximized. This paper, based on SEMI standards for semiconductor equipment control, defines the transaction rules between different tool states and presents a TEA system model that analyses tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified successfully; it yielded the parameter values used to measure equipment performance, along with advice for improvement.

  1. GWSCREEN: A Semi-analytical Model for Assessment of the Groundwater Pathway from Surface or Buried Contamination, Theory and User's Manual, Version 2.5

    Energy Technology Data Exchange (ETDEWEB)

    Rood, Arthur South

    1998-08-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of Comprehensive Environmental Response, Compensation, and Liability Act sites identified as low probability hazard at the Idaho National Engineering Laboratory. The code calculates 1) the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded, 2) the peak aquifer concentration and associated human health impacts, and 3) aquifer concentrations and associated human health impacts as a function of time and space. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, vertical contaminant transport in the unsaturated zone, and 2D or 3D contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. In Version 2.5, transport in the unsaturated zone is described by a plug flow or dispersive solution model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation in groundwater. Three source models are included: leaching from a surface or buried source, an infiltration pond, or a user-defined arbitrary release. Dispersion in the aquifer may be described by fixed dispersivity values or by three spatially variable dispersivity functions. Version 2.5 also includes a Monte Carlo sampling routine for uncertainty/sensitivity analysis and a preprocessor that allows multiple input files and multiple contaminants to be run in a single simulation. GWSCREEN has been validated against other codes using similar algorithms and techniques. The code was originally designed for assessment and screening of the groundwater pathway when field data are limited. It was intended to simulate relatively simple

  2. Detecting New Words from Chinese Text Using Latent Semi-CRF Models

    Science.gov (United States)

    Sun, Xiao; Huang, Degen; Ren, Fuji

    Chinese new words and their part-of-speech (POS) tags are particularly problematic in Chinese natural language processing. With the fast development of the internet and information technology, it is impossible to build a complete system dictionary for Chinese natural language processing, as new words outside the basic system dictionary are always being created. A latent semi-CRF model, which combines the strengths of the LDCRF (Latent-Dynamic Conditional Random Field) and the semi-CRF, is proposed to detect new words and their POS tags synchronously, regardless of new-word type, from Chinese text that has not been pre-segmented. Unlike the original semi-CRF, the LDCRF is applied to generate the candidate entities for training and testing the latent semi-CRF, which accelerates training and decreases the computational cost. The complexity of the latent semi-CRF can be further adjusted by tuning the number of hidden variables in the LDCRF and the number of candidate entities taken from the N-best outputs of the LDCRF. A new-words-generating framework is proposed for model training and testing, under which the definitions and distributions of the new words conform to those found in real text. Specific features called “Global Fragment Information” for new word detection and POS tagging are adopted in model training and testing. The experimental results show that the proposed method is capable of detecting even low-frequency new words together with their POS tags. The proposed model is found to perform competitively with the state-of-the-art models presented.

  3. Semi-empirical modelling of radiation exposure of humans to naturally occurring radioactive materials in a goldmine in Ghana

    International Nuclear Information System (INIS)

    Darko, E. O.; Tetteh, G.K.; Akaho, E.H.K.

    2005-01-01

    A semi-empirical analytical model has been developed and used to assess the radiation doses to workers in a gold mine in Ghana. The gamma dose rates from naturally occurring radioactive materials (uranium-thorium series, potassium-40 and radon concentrations) were related to the annual effective doses for surface and underground mining operations. The calculated effective doses were verified by comparison with field measurements and correlation ratios of 0.94 and 0.93 were obtained, respectively, between calculated and measured data of surface and underground mining. The results agreed with the approved international levels for normal radiation exposure in the mining environment. (au)

  4. Hydraulic modeling of riverbank filtration systems with curved boundaries using analytic elements and series solutions

    Science.gov (United States)

    Bakker, Mark

    2010-08-01

    A new analytic solution approach is presented for the modeling of steady flow to pumping wells near rivers in strip aquifers; all boundaries of the river and strip aquifer may be curved. The river penetrates the aquifer only partially and has a leaky stream bed. The water level in the river may vary spatially. Flow in the aquifer below the river is semi-confined while flow in the aquifer adjacent to the river is confined or unconfined and may be subject to areal recharge. Analytic solutions are obtained through superposition of analytic elements and Fourier series. Boundary conditions are specified at collocation points along the boundaries. The number of collocation points is larger than the number of coefficients in the Fourier series and a solution is obtained in the least squares sense. The solution is analytic while boundary conditions are met approximately. Very accurate solutions are obtained when enough terms are used in the series. Several examples are presented for domains with straight and curved boundaries, including a well pumping near a meandering river with a varying water level. The area of the river bottom where water infiltrates into the aquifer is delineated and the fraction of river water in the well water is computed for several cases.
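
    The least-squares collocation step (more collocation points than series coefficients) can be sketched in one dimension as follows. This is a generic illustration with a plain Fourier basis, not the analytic-element code used in the paper.

```python
import numpy as np

def fit_fourier_least_squares(theta_c, values, n_terms):
    """Least-squares collocation: evaluate the Fourier basis at more
    collocation points than there are coefficients, then solve the
    overdetermined system in the least-squares sense."""
    A = np.column_stack(
        [np.ones_like(theta_c)]
        + [np.cos(k * theta_c) for k in range(1, n_terms + 1)]
        + [np.sin(k * theta_c) for k in range(1, n_terms + 1)]
    )
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

# 40 collocation points, 7 coefficients: a known test function is recovered.
theta_c = np.linspace(0.0, 2.0 * np.pi, 40)
values = 2.0 + np.cos(theta_c) - 0.5 * np.sin(2.0 * theta_c)
coeffs = fit_fourier_least_squares(theta_c, values, n_terms=3)
```

    As in the paper, boundary conditions at the collocation points are then satisfied approximately, and the fit improves as more terms are retained in the series.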

  5. Discrete-time semi-Markov modeling of human papillomavirus persistence

    Science.gov (United States)

    Mitchell, C. E.; Hudgens, M. G.; King, C. C.; Cu-Uvin, S.; Lo, Y.; Rompalo, A.; Sobel, J.; Smith, J. S.

    2011-01-01

    Multi-state modeling is often employed to describe the progression of a disease process. In epidemiological studies of certain diseases, the disease state is typically only observed at periodic clinical visits, producing incomplete longitudinal data. In this paper we consider fitting semi-Markov models to estimate the persistence of human papillomavirus (HPV) type-specific infection in studies where the status of HPV type(s) is assessed periodically. Simulation study results are presented indicating the semi-Markov estimator is more accurate than an estimator currently used in the HPV literature. The methods are illustrated using data from the HIV Epidemiology Research Study (HERS). PMID:21538985

  6. An improved analytical model of 4H-SiC MESFET incorporating bulk and interface trapping effects

    Science.gov (United States)

    Hema Lata Rao, M.; Narasimha Murty, N. V. L.

    2015-01-01

    An improved analytical model for the current—voltage (I-V) characteristics of the 4H-SiC metal semiconductor field effect transistor (MESFET) on a high purity semi-insulating (HPSI) substrate with trapping and thermal effects is presented. The 4H-SiC MESFET structure includes a stack of HPSI substrates and a uniformly doped channel layer. The trapping effects include both the effect of multiple deep-level traps in the substrate and surface traps between the gate to source/drain. The self-heating effects are also incorporated to obtain the accurate and realistic nature of the analytical model. The importance of the proposed model is emphasised through the inclusion of the recent and exact nature of the traps in the 4H-SiC HPSI substrate responsible for substrate compensation. The analytical model is used to exhibit DC I-V characteristics of the device with and without trapping and thermal effects. From the results, the current degradation is observed due to the surface and substrate trapping effects and the negative conductance introduced by the self-heating effect at a high drain voltage. The calculated results are compared with reported experimental and two-dimensional simulations (Silvaco®-TCAD). The proposed model also illustrates the effectiveness of the gate—source distance scaling effect compared to the gate—drain scaling effect in optimizing 4H-SiC MESFET performance. Results demonstrate that the proposed I-V model of 4H-SiC MESFET is suitable for realizing SiC based monolithic circuits (MMICs) on HPSI substrates.

  7. An improved analytical model of 4H-SiC MESFET incorporating bulk and interface trapping effects

    International Nuclear Information System (INIS)

    Rao, M. Hema Lata; Murty, N. V. L. Narasimha

    2015-01-01

    An improved analytical model for the current—voltage (I–V) characteristics of the 4H-SiC metal semiconductor field effect transistor (MESFET) on a high purity semi-insulating (HPSI) substrate with trapping and thermal effects is presented. The 4H-SiC MESFET structure includes a stack of HPSI substrates and a uniformly doped channel layer. The trapping effects include both the effect of multiple deep-level traps in the substrate and surface traps between the gate to source/drain. The self-heating effects are also incorporated to obtain the accurate and realistic nature of the analytical model. The importance of the proposed model is emphasised through the inclusion of the recent and exact nature of the traps in the 4H-SiC HPSI substrate responsible for substrate compensation. The analytical model is used to exhibit DC I–V characteristics of the device with and without trapping and thermal effects. From the results, the current degradation is observed due to the surface and substrate trapping effects and the negative conductance introduced by the self-heating effect at a high drain voltage. The calculated results are compared with reported experimental and two-dimensional simulations (Silvaco®-TCAD). The proposed model also illustrates the effectiveness of the gate—source distance scaling effect compared to the gate—drain scaling effect in optimizing 4H-SiC MESFET performance. Results demonstrate that the proposed I–V model of 4H-SiC MESFET is suitable for realizing SiC based monolithic circuits (MMICs) on HPSI substrates. (semiconductor devices)

  8. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation

    Science.gov (United States)

    2018-01-01

    ARL-TR-8284 ● JAN 2018 ● US Army Research Laboratory ● Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation. ...although some minor changes may be needed. The program processes a GTRAJ output text file that contains results from 2 or more simulations, where each

  9. Reconstructing bidimensional scalar field theory models

    International Nuclear Information System (INIS)

    Flores, Gabriel H.; Svaiter, N.F.

    2001-07-01

    In this paper we review how to reconstruct scalar field theories in two-dimensional spacetime starting from solvable Schrödinger equations. Three different Schrödinger potentials are analyzed. We obtain two new models starting from the Morse and Scarf II hyperbolic potentials: the U(θ) = θ² ln²(θ²) model and the U(θ) = θ² cos²(ln(θ²)) model, respectively. (author)

  10. Efficient reconstruction of dispersive dielectric profiles using time domain reflectometry (TDR)

    Directory of Open Access Journals (Sweden)

    P. Leidenberger

    2006-01-01

    We present a numerical model for time domain reflectometry (TDR) signal propagation in dispersive dielectric materials. The numerical probe model is terminated with a parallel circuit consisting of an ohmic resistor and an ideal capacitance. We derive analytical approximations for the capacitance, the inductance and the conductance of three-wire probes. We couple the time domain model with global optimization in order to reconstruct water content profiles from TDR traces. For efficiently solving the inverse problem we use genetic algorithms combined with a hierarchical parameterization. We investigate the performance of the method by reconstructing synthetically generated profiles. The algorithm is then applied to retrieve dielectric profiles from TDR traces measured in the field. We succeed in reconstructing dielectric and ohmic profiles where conventional methods, based on travel time extraction, fail.

  11. Analytical algorithm for the generation of polygonal projection data for tomographic reconstruction

    International Nuclear Information System (INIS)

    Davis, G.R.

    1996-01-01

    Tomographic reconstruction algorithms and filters can be tested using a mathematical phantom, that is, a computer program which takes numerical data as its input and outputs derived projection data. The input data is usually in the form of pixel "densities" over a regular grid, or position and dimensions of simple, geometrical objects. The former technique allows a greater variety of objects to be simulated, but is less suitable in the case when very small (relative to the ray-spacing) features are to be simulated. The second technique is normally used to simulate biological specimens, typically a human skull, modelled as a number of ellipses. This is not suitable for simulating non-biological specimens with features such as straight edges and fine cracks. We have therefore devised an algorithm for simulating objects described as a series of polygons. These polygons, or parts of them, may be smaller than the ray-spacing and there is no limit, except that imposed by computing resources, on the complexity, number or superposition of polygons. A simple test of such a phantom, reconstructed using the filtered back-projection method, revealed reconstruction artefacts not normally seen with "biological" phantoms. (orig.)
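
    The geometric core of such a phantom is the exact contribution of each polygon to a ray sum. For a convex polygon this reduces to a chord-length computation by half-plane clipping; the sketch below makes that simplifying convexity assumption, whereas the algorithm in the paper handles arbitrary polygons.

```python
import math

def chord_length(poly_ccw, p, d):
    """Exact contribution of a convex polygon to one projection ray: the
    length of the intersection of the line p + t*d with the polygon, found
    by clipping t against the inward half-plane of every edge (vertices in
    counter-clockwise order assumed)."""
    t_lo, t_hi = -math.inf, math.inf
    n = len(poly_ccw)
    for i in range(n):
        (x1, y1), (x2, y2) = poly_ccw[i], poly_ccw[(i + 1) % n]
        nx, ny = -(y2 - y1), x2 - x1               # inward normal for CCW order
        num = nx * (x1 - p[0]) + ny * (y1 - p[1])  # n . (vertex - p)
        den = nx * d[0] + ny * d[1]                # n . d
        if abs(den) < 1e-12:
            if num > 0.0:                          # line outside this half-plane
                return 0.0
            continue
        t = num / den
        if den > 0.0:
            t_lo = max(t_lo, t)
        else:
            t_hi = min(t_hi, t)
    return max(0.0, t_hi - t_lo) * math.hypot(d[0], d[1])

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]    # CCW unit square
ray_sum = chord_length(square, p=(-1.0, 0.5), d=(1.0, 0.0))  # horizontal ray
```

    Because the chord length is exact, polygons (or parts of polygons) far smaller than the ray spacing still contribute correctly to the projection data.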

  12. Modelling water and sediment connectivity patterns in a semi-arid landscape

    International Nuclear Information System (INIS)

    Cammeraat, L. H.; Beek, L. P. H. van; Dooms, T.

    2009-01-01

    Desertification is a major threat in SE Spain and mitigation strategies are required to reduce the adverse effect of water-induced erosion on soil production potential. Severity of soil erosion depends on local runoff response and the connectivity of pathways of water and sediment at different spatial scales. We investigated the connectivity between sources and sinks on semi-natural slopes by means of a semi-distributed model that delineated hydrological response units (HRUs) on the basis of physiographic characteristics. The model was calibrated with information at the plot, hill slope and sub-catchment scale covering the period 1995-2008 and validated against larger events that connected the semi-natural sub-catchment with the underlying cultivated slopes. (Author) 12 refs.

  13. Modelling water and sediment connectivity patterns in a semi-arid landscape

    Energy Technology Data Exchange (ETDEWEB)

    Cammeraat, L. H.; Beek, L. P. H. van; Dooms, T.

    2009-07-01

    Desertification is a major threat in SE Spain and mitigation strategies are required to reduce the adverse effect of water-induced erosion on soil production potential. Severity of soil erosion depends on local runoff response and the connectivity of pathways of water and sediment at different spatial scales. We investigated the connectivity between sources and sinks on semi-natural slopes by means of a semi-distributed model that delineated hydrological response units (HRUs) on the basis of physiographic characteristics. The model was calibrated with information at the plot, hill slope and sub-catchment scale covering the period 1995-2008 and validated against larger events that connected the semi-natural sub-catchment with the underlying cultivated slopes. (Author) 12 refs.

  14. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

  15. Tornadoes and related damage costs: statistical modeling with a semi-Markov approach

    OpenAIRE

    Corini, Chiara; D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio; Manca, Raimondo

    2015-01-01

    We propose a statistical approach to tornadoes modeling for predicting and simulating occurrences of tornadoes and accumulated cost distributions over a time interval. This is achieved by modeling the tornadoes intensity, measured with the Fujita scale, as a stochastic process. Since the Fujita scale divides tornadoes intensity into six states, it is possible to model the tornadoes intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reprod...
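
    The modelling idea can be sketched as a generic discrete-state semi-Markov simulation: an embedded Markov chain chooses the next intensity state, while a state-dependent sojourn distribution governs how long the process stays there. The transition matrix and sojourn means below are purely illustrative, not fitted tornado data.

```python
import random

def simulate_semi_markov(P, mean_sojourn, start, n_jumps, seed=0):
    """Generic semi-Markov simulation: the next state is drawn from the
    embedded Markov chain P, while the holding time in the current state is
    drawn from a state-dependent (here geometric) sojourn distribution."""
    rng = random.Random(seed)
    state, t, path = start, 0, []
    for _ in range(n_jumps):
        hold = 1                                   # geometric holding time
        while rng.random() > 1.0 / mean_sojourn[state]:
            hold += 1
        t += hold
        path.append((t, state))
        r, cum = rng.random(), 0.0
        for nxt, p in enumerate(P[state]):         # embedded-chain transition
            cum += p
            if r < cum:
                state = nxt
                break
    return path

# Two illustrative intensity classes (e.g. weak/strong), not fitted values.
P = [[0.7, 0.3], [0.5, 0.5]]
path = simulate_semi_markov(P, mean_sojourn=[3.0, 1.5], start=0, n_jumps=100)
```

    Attaching a cost draw to each visited state would then yield the accumulated cost distribution over a time interval, in the spirit of the approach described above.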

  16. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  17. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
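
    The key updating step, equal prior probabilities over the age-model ensemble replaced by formally updated posterior weights, can be sketched as follows (the function name and log-likelihood values are illustrative):

```python
import numpy as np

def update_age_model_probs(log_liks):
    """Bayes' rule over an ensemble of layer-counted age models: start from a
    uniform prior and reweight each member by how well the climate field it
    implies agrees with the other proxy records (summarized here as one
    log-likelihood per member). The max shift keeps the exponentials stable."""
    log_liks = np.asarray(log_liks, dtype=float)
    w = np.exp(log_liks - log_liks.max())   # the uniform prior cancels out
    return w / w.sum()

# Three candidate age models; the second agrees best with the other records.
probs = update_age_model_probs([-12.0, -3.5, -9.0])
```

    Sampling age models with these posterior weights, instead of uniformly, is what reduces the final reconstruction uncertainty relative to the equal-weighting standard described above.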

  18. Semi-analytical model of laser resonance absorption in plasmas with a parabolic density profile

    International Nuclear Information System (INIS)

    Pestehe, S J; Mohammadnejad, M

    2010-01-01

    Analytical expressions for mode conversion and resonance absorption of electromagnetic waves in inhomogeneous, unmagnetized plasmas are required for laboratory and simulation studies. Although most of the analyses of this problem have concentrated on the linear plasma density profile, there are a few research works that deal with different plasma density profiles including the parabolic profile. Almost none of them could give clear analytical formulae for the electric and magnetic components of the electromagnetic field propagating through inhomogeneous plasmas. In this paper, we have considered the resonant absorption of laser light near the critical density of plasmas with parabolic electron density profiles followed by a uniform over-dense region and have obtained expressions for the electric and magnetic vectors of laser light propagating through the plasma. An estimation of the fractional absorption of laser energy has also been carried out. It has been shown that, in contrast to the linear density profile, the energy absorption depends explicitly on the value of collision frequency as well as on a new parameter, N, called the over-dense density order.

  19. Estimating Cloud optical thickness from SEVIRI, for air quality research, by implementing a semi-analytical cloud retrieval algorithm

    Science.gov (United States)

    Pandey, Praveen; De Ridder, Koen; van Looy, Stijn; van Lipzig, Nicole

    2010-05-01

    Clouds play an important role in Earth's climate system. Because they affect radiation, and hence photolysis rate coefficients (ozone formation), they also affect air quality at the surface of the earth. A satellite remote sensing technique is therefore used to retrieve cloud properties for air quality research. The geostationary satellite Meteosat Second Generation (MSG) carries the Spinning Enhanced Visible and Infrared Imager (SEVIRI). The channels at wavelengths of 0.6 µm and 1.64 µm are used to retrieve cloud optical thickness (COT). The study domain is over Europe, covering the region 35°N-70°N and 5°W-30°E, centred over Belgium. The steps involved in pre-processing the EUMETSAT level 1.5 images are described, including acquisition of the digital count number, radiometric conversion using offsets and slopes, estimation of radiance and calculation of reflectance. The Sun-Earth-satellite geometry also plays an important role. A semi-analytical cloud retrieval algorithm (Kokhanovsky et al., 2003) is implemented for the estimation of COT. This approach does not involve the conventional look-up table step, which makes the retrieval independent of numerical radiative transfer solutions. The semi-analytical algorithm is applied to a monthly dataset of SEVIRI level 1.5 images. The minimum reflectance in the visible channel at each pixel during the month is taken as the surface albedo of that pixel. From this, the monthly variation of COT over the study domain is derived. The result so obtained is compared with the COT products of the Satellite Application Facility on Climate Monitoring (CM SAF). An approach to assimilate the COT for air quality research is then presented. Address of corresponding author: Praveen Pandey, VITO - Flemish Institute for Technological Research, Boeretang 200, B 2400, Mol, Belgium. E-mail: praveen.pandey@vito.be
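
    The pre-processing chain described (digital counts to radiance via calibration slope and offset, then radiance to reflectance) can be sketched as below. The formula is the standard top-of-atmosphere reflectance expression, and all names and calibration values are illustrative rather than SEVIRI-specific.

```python
import math

def dn_to_reflectance(dn, slope, offset, e0, sun_zenith_deg, d_au=1.0):
    """Sketch of level 1.5 pre-processing: digital count -> radiance via the
    channel calibration slope/offset, then radiance -> top-of-atmosphere
    reflectance using the band solar irradiance e0, the solar zenith angle,
    and the Sun-Earth distance in astronomical units."""
    radiance = slope * dn + offset
    mu0 = math.cos(math.radians(sun_zenith_deg))
    return math.pi * radiance * d_au ** 2 / (e0 * mu0)

# A count whose radiance exactly matches the incoming flux -> reflectance 1.
r = dn_to_reflectance(dn=100.0, slope=0.2, offset=0.0,
                      e0=math.pi * 20.0, sun_zenith_deg=0.0)
```

    The resulting per-pixel reflectances in the two channels are what the semi-analytical retrieval then inverts for COT.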

  20. Hydroelastic slamming of flexible wedges: Modeling and experiments from water entry to exit

    Science.gov (United States)

    Shams, Adel; Zhao, Sam; Porfiri, Maurizio

    2017-03-01

    Fluid-structure interactions during hull slamming are of great interest for the design of aircraft and marine vessels. The main objective of this paper is to establish a semi-analytical model to investigate the entire hydroelastic slamming of a wedge, from the entry to the exit phase. The structural dynamics is described through Euler-Bernoulli beam theory and the hydrodynamic loading is estimated using potential flow theory. A Galerkin method is used to obtain a reduced order modal model in closed-form, and a Newmark-type integration scheme is utilized to find an approximate solution. To benchmark the proposed semi-analytical solution, we experimentally investigate fluid-structure interactions through particle image velocimetry (PIV). PIV is used to estimate the velocity field, and the pressure is reconstructed by solving the incompressible Navier-Stokes equations from PIV data. Experimental results confirm that the flow physics and free-surface elevation during water exit are different from water entry. While water entry is characterized by positive values of the pressure field, with respect to the atmospheric pressure, the pressure field during water exit may be less than atmospheric. Experimental observations indicate that the location where the maximum pressure in the fluid is attained moves from the pile-up region to the keel, as the wedge reverses its motion from the entry to the exit stage. Comparing experimental results with semi-analytical findings, we observe that the model is successful in predicting the free-surface elevation and the overall distribution of the hydrodynamic loading on the wedge. These integrated experimental and theoretical analyses of water exit problems are expected to aid in the design of lightweight structures, which experience repeated slamming events during their operation.
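
    The Newmark-type integration of the reduced modal model can be sketched for a single linear degree of freedom. This is a generic average-acceleration implementation under standard Newmark assumptions, not the authors' code.

```python
import math

def newmark_sdof(m, c, k, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration for m*x'' + c*x' + k*x = f(t). The choice
    beta=1/4, gamma=1/2 (average acceleration) is unconditionally stable for
    linear problems and preserves amplitude in the undamped case."""
    x, v = x0, v0
    a = (f(0.0) - c * v - k * x) / m
    xs = [x]
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    for i in range(1, n_steps + 1):
        # effective load from the standard Newmark displacement/velocity updates
        f_eff = (f(i * dt)
                 + m * (x / (beta * dt * dt) + v / (beta * dt)
                        + (1.0 / (2.0 * beta) - 1.0) * a)
                 + c * (gamma * x / (beta * dt) + (gamma / beta - 1.0) * v
                        + dt * (gamma / (2.0 * beta) - 1.0) * a))
        x_new = f_eff / k_eff
        a_new = ((x_new - x) / (beta * dt * dt) - v / (beta * dt)
                 - (1.0 / (2.0 * beta) - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
        xs.append(x)
    return xs

# Free vibration of an undamped oscillator with a 1 s natural period:
# after exactly one period the displacement returns to its initial value.
omega = 2.0 * math.pi
xs = newmark_sdof(m=1.0, c=0.0, k=omega ** 2, f=lambda t: 0.0,
                  x0=1.0, v0=0.0, dt=1e-3, n_steps=1000)
```

    In the modal setting, one such update runs per retained mode, with the hydrodynamic loading projected onto each mode supplying f(t).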

  1. Analytical model for screening potential CO2 repositories

    Science.gov (United States)

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  2. Model-based image reconstruction in X-ray computed tomography

    NARCIS (Netherlands)

    Zbijewski, Wojciech Bartosz

    2006-01-01

    The thesis investigates the applications of iterative, statistical reconstruction (SR) algorithms in X-ray Computed Tomography. Emphasis is put on various aspects of system modeling in statistical reconstruction. Fundamental issues such as effects of object discretization and algorithm

  3. Semi-analytical model for a slab one-dimensional photonic crystal

    Science.gov (United States)

    Libman, M.; Kondratyev, N. M.; Gorodetsky, M. L.

    2018-02-01

    In our work we justify the applicability of a dielectric mirror model to the description of a real photonic crystal. We demonstrate that a simple one-dimensional model of a multilayer mirror can be employed for modeling a slab waveguide with periodically changing width. It is shown that this width change can be recast as an effective refractive-index modulation. The applicability of the transfer-matrix method for calculating reflection properties was demonstrated. Finally, our 1-D model was employed to analyze the reflection properties of a 2-D structure: a slab photonic crystal with a number of elliptic holes.

  4. The costs and benefits of reconstruction options in Nepal using the CEDIM FDA modelled and empirical analysis following the 2015 earthquake

    Science.gov (United States)

    Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard

    2016-04-01

    Over the days following the 2015 Nepal earthquake, rapid loss estimates of deaths, economic loss and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This modelling relied on historic losses from other Nepal earthquakes as well as detailed socioeconomic data and earthquake loss information via CATDAT. The modelled results were very close to the final figures for the 2015 earthquake of around 9000 deaths and a direct building loss of ca. 3 billion (a). The process undertaken to produce these loss estimates is described, along with its potential for analysing reconstruction costs of future Nepal earthquakes in rapid time post-event. The reconstruction cost and death toll model is then used as the base model to examine the effect of spending money on earthquake retrofitting of buildings versus complete reconstruction of buildings. This is undertaken for future events using empirical statistics from past events along with further analytical modelling. The effect of investment timing relative to a future event is also explored. Preliminary low-cost options (b) along the lines of other country studies for retrofitting (ca. 100) are examined versus the option of different building typologies in Nepal, as well as investment in various sectors of construction. The effect of public vs. private capital expenditure post-earthquake is also explored as part of this analysis, as well as spending on components outside of earthquakes. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf

  5. A reconstruction of Maxwell model for effective thermal conductivity of composite materials

    International Nuclear Information System (INIS)

    Xu, J.Z.; Gao, B.Z.; Kang, F.Y.

    2016-01-01

    Highlights: • Deficiencies were found in the classical Maxwell model for effective thermal conductivity. • The Maxwell model was reconstructed based on a potential mean-field theory. • The reconstructed Maxwell model was extended with particle–particle contact resistance. • Predictions by the reconstructed Maxwell model agree excellently with experimental data. - Abstract: Composite materials consisting of highly thermally conductive fillers and a polymer matrix are often used as thermal interface materials to dissipate heat generated by mechanical and electronic devices. The prediction of the effective thermal conductivity of composites remains a critical issue due to its dependence on considerably many factors. Most models for prediction are based on the analogy between electric potential and temperature, both of which satisfy the Laplace equation under steady conditions. Maxwell was the first to derive the effective electric resistivity of composites, by examining the far-field spherical harmonic solution of the Laplace equation perturbed by a sphere of different resistivity, and his model is considered classical. However, a close review of Maxwell's derivation reveals several controversial issues (deficiencies) inherent in his model. In this study, we reconstruct the Maxwell model based on a potential mean-field theory to resolve these issues. For composites made of a continuum matrix and particle fillers, contact resistance among particles was introduced in the reconstruction of the Maxwell model. The newly reconstructed Maxwell model, with contact resistivity as a fitting parameter, is shown to fit experimental data excellently over wide ranges of particle concentration and mean particle diameter. The scope of applicability of the reconstructed Maxwell model is also discussed using the contact resistivity as a parameter.
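For reference, the classical Maxwell prediction that the record sets out to reconstruct can be written down directly. A small sketch of that classical formula only; the paper's reconstructed model and its contact-resistance extension are not reproduced here:

```python
def maxwell_keff(km, kp, phi):
    """Classical Maxwell effective thermal conductivity of a particle composite.
    km: matrix conductivity, kp: particle conductivity, phi: particle volume
    fraction (dilute limit). Units of km/kp are arbitrary but must match."""
    return km * (kp + 2.0 * km + 2.0 * phi * (kp - km)) / \
                (kp + 2.0 * km - phi * (kp - km))
```

At zero filler loading the formula recovers the matrix conductivity, and it increases monotonically with loading for conductive fillers.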

  6. Variational reconstruction using subdivision surfaces with continuous sharpness control

    Institute of Scientific and Technical Information of China (English)

    Xiaoqun Wu; Jianmin Zheng; Yiyu Cai; Haisheng Li

    2017-01-01

    We present a variational method for subdivision surface reconstruction from a noisy dense mesh. A new set of subdivision rules with continuous sharpness control is introduced into Loop subdivision for better modeling subdivision surface features such as semi-sharp creases, creases, and corners. The key idea is to assign a sharpness value to each edge of the control mesh to continuously control the surface features. Based on the new subdivision rules, a variational model with the L1 norm is formulated to find the control mesh and the corresponding sharpness values of the subdivision surface that best fits the input mesh. An iterative solver based on the augmented Lagrangian method and particle swarm optimization is used to solve the resulting non-linear, non-differentiable optimization problem. Our experimental results show that our method can handle noisy meshes with sharp/semi-sharp features well.

  7. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    International Nuclear Information System (INIS)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-01-01

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose⁴ and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose⁴ at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  8. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin, E-mail: khj.snuh@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Park, Chang Min, E-mail: cmpark@radiol.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Song, Yong Sub, E-mail: terasong@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Lee, Sang Min, E-mail: sangmin.lee.md@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Goo, Jin Mo, E-mail: jmgoo@plaza.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of)

    2014-05-15

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose⁴ and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose⁴ at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  9. Evaluation of two semi-analytical techniques in air quality applications

    Directory of Open Access Journals (Sweden)

    Jonas C. Carvalho

    2007-04-01

    Full Text Available In this article an evaluation of two semi-analytical techniques is carried out, considering the quality and accuracy of these techniques in reproducing the ground-level concentration values of a passive pollutant released from low and high sources. The first technique is an Eulerian model based on the solution of the advection-diffusion equation by the Laplace transform technique. The second is a Lagrangian model based on the solution of the Langevin equation through the Picard iterative method. Turbulence parameters are calculated according to a parameterization capable of generating continuous values in all stability conditions and at all heights of the planetary boundary layer. Numerical simulations and comparisons show good agreement between predicted and observed concentration values. Comparisons between the two proposed techniques reveal that the Lagrangian model generates more accurate results, but the Eulerian model requires less computational time.
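The Lagrangian technique above integrates the Langevin equation for particle velocities. A minimal Euler–Maruyama sketch for a single velocity component in stationary homogeneous turbulence; the Lagrangian time scale and velocity variance are illustrative assumptions, and this is a plain stochastic Euler scheme, not the paper's Picard iterative method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 5000, 0.5, 400
T_L, sigma_u = 100.0, 0.5          # Lagrangian time scale (s), velocity std (m/s); illustrative
u = rng.normal(0.0, sigma_u, n)    # initial velocities from the stationary distribution
x = np.zeros(n)                    # particle positions

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n)                       # Wiener increments
    u += -u / T_L * dt + np.sqrt(2.0 * sigma_u**2 / T_L) * dW  # Langevin velocity update
    x += u * dt                                                # advect particles
```

The velocity process stays statistically stationary while the particle cloud spreads, which is the dispersion behaviour used to build concentration fields.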

  10. Task Analytic Models to Guide Analysis and Design: Use of the Operator Function Model to Represent Pilot-Autoflight System Mode Problems

    Science.gov (United States)

    Degani, Asaf; Mitchell, Christine M.; Chappell, Alan R.; Shafto, Mike (Technical Monitor)

    1995-01-01

    Task-analytic models structure essential information about operator interaction with complex systems, in this case pilot interaction with the autoflight system. Such models serve two purposes: (1) they allow researchers and practitioners to understand pilots' actions; and (2) they provide a compact, computational representation needed to design 'intelligent' aids, e.g., displays, assistants, and training systems. This paper demonstrates the use of the operator function model to trace the process of mode engagements while a pilot is controlling an aircraft via the autoflight system. The operator function model is a normative and nondeterministic model of how a well-trained, well-motivated operator manages multiple concurrent activities for effective real-time control. For each function, the model links the pilot's actions with the required information. Using the operator function model, this paper describes several mode engagement scenarios. These scenarios were observed and documented during a field study that focused on mode engagements and mode transitions during normal line operations. Data including time, ATC clearances, altitude, system states, and engagement of modes and sub-modes were recorded during sixty-six flights. Using these data, seven prototypical mode engagement scenarios were extracted. One scenario details the decision of the crew to disengage a fully automatic mode in favor of a semi-automatic mode, and the consequences of this action. Another describes a mode error involving updating aircraft speed following the engagement of a speed submode. Other scenarios detail mode confusion at various phases of the flight. This analysis uses the operator function model to identify three aspects of mode engagement: (1) the progress of pilot-aircraft-autoflight system interaction; (2) control/display information required to perform mode management activities; and (3) the potential cause(s) of mode confusion.

  11. Improving Semi-Supervised Learning with Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    Deep generative models based upon continuous variational distributions parameterized by deep networks give state-of-the-art performance. In this paper we propose a framework for extending the latent representation with extra auxiliary variables in order to make the variational distribution more expressive for semi-supervised learning. By utilizing the stochasticity of the auxiliary variable we demonstrate how to train discriminative classifiers resulting in state-of-the-art performance within semi-supervised learning, exemplified by a 0.96% error on MNIST using 100 labeled data points. Furthermore...

  12. Homogenization of steady-state creep of porous metals using three-dimensional microstructural reconstructions

    DEFF Research Database (Denmark)

    Kwok, Kawai; Boccaccini, Dino; Persson, Åsa Helen

    2016-01-01

    The effective steady-state creep response of porous metals is studied by numerical homogenization and analytical modeling in this paper. The numerical homogenization is based on finite element models of three-dimensional microstructures directly reconstructed from tomographic images. The effects ... model, and closely matched by the Gibson-Ashby compression and the Ramakrishnan-Arunchalam creep models.

  13. BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction

    Science.gov (United States)

    Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert

    2017-04-01

    We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.

  14. Modeling of the anode side of a direct methanol fuel cell with analytical solutions

    International Nuclear Information System (INIS)

    Mosquera, Martin A.; Lizcano-Valbuena, William H.

    2009-01-01

    In this work, analytical solutions were derived (for any methanol oxidation reaction order) for the profiles of methanol concentration and proton current density, by assuming a diffusion mass transport mechanism, Tafel kinetics, and fast proton transport in the anodic catalyst layer of a direct methanol fuel cell. An expression for the Thiele modulus was obtained that allows the anodic overpotential to be expressed as a function of the cell current and kinetic and mass transfer parameters. For high cell current densities, it was found that the Thiele modulus (φ²) varies quadratically with cell current density, yielding a simple correlation between anodic overpotential and cell current density. Analytical solutions were derived for the profiles of both the local methanol concentration and the local anodic current density in the catalyst layer. Under the assumptions of the model presented here, in general, the local methanol concentration in the catalyst layer cannot be expressed as an explicit function of position in the layer. In spite of this, the equations presented here for the anodic overpotential allow the derivation of new semi-empirical equations.
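The Thiele-modulus analysis above follows the classical porous-electrode/catalyst-layer pattern. For a first-order reaction in a planar layer, the effectiveness factor takes a familiar closed form; this is a generic textbook sketch under that assumption, with illustrative symbols (layer thickness L, rate constant k, diffusivity D), not the paper's derivation:

```python
import numpy as np

def thiele_modulus(L, k, D):
    """phi = L * sqrt(k / D) for a planar layer of thickness L (m),
    first-order rate constant k (1/s) and diffusivity D (m^2/s)."""
    return L * np.sqrt(k / D)

def effectiveness(phi):
    """Effectiveness factor tanh(phi)/phi: the fraction of the layer's
    reaction capacity actually used (1 for thin/fast-diffusion layers,
    ~1/phi for thick/diffusion-limited layers)."""
    return np.tanh(phi) / phi
```

At small φ the whole layer reacts (η → 1); at large φ the reaction is confined to a thin front (η → 1/φ), which is the regime where overpotential correlations of the kind described above become simple.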

  15. Light Source Estimation with Analytical Path-tracing

    OpenAIRE

    Kasper, Mike; Keivan, Nima; Sibley, Gabe; Heckman, Christoffer

    2017-01-01

    We present a novel algorithm for light source estimation in scenes reconstructed with a RGB-D camera based on an analytically-derived formulation of path-tracing. Our algorithm traces the reconstructed scene with a custom path-tracer and computes the analytical derivatives of the light transport equation from principles in optics. These derivatives are then used to perform gradient descent, minimizing the photometric error between one or more captured reference images and renders of our curre...

  16. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and contrast enhancement degree on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. The expert and beginner independently reviewed computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 in expert and 93% versus 75%, p=0.002 in beginner, respectively). RAV visualisation confidence ratings were significantly greater with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  17. A semi-analytical expression for calculating finite element conductivity matrices in heat conduction problems

    Directory of Open Access Journals (Sweden)

    Hector Godoy

    2012-12-01

    Full Text Available The heat transfer equation for conduction is nothing more than a mathematical expression of the law of conservation of energy for a given solid. Solving the equation that models this problem analytically is generally very difficult or impossible, so it is necessary to make a discrete approximation of the continuous problem. In this paper, we present a methodology applied to quadrilateral finite elements in problems of heat transfer by conduction, where the components of the thermal conductivity matrix are obtained by a semi-analytical expression and simple algebraic manipulations. This technique has been used successfully in the integration of stiffness matrices of two- and three-dimensional finite elements, reporting substantial improvements in CPU times compared with Gaussian integration.
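The paper's semi-analytical expression is not reproduced in the record; for context, the Gaussian-integration baseline it is benchmarked against can be sketched for a 4-node bilinear quadrilateral with isotropic conductivity. Node ordering, conductivity value, and element geometry below are illustrative assumptions:

```python
import numpy as np

def quad4_conductivity(coords, k=1.0):
    """Conductivity matrix of a 4-node bilinear quad (isotropic k) using
    2x2 Gauss quadrature. coords: 4x2 node coordinates, counter-clockwise,
    matching reference nodes (-1,-1), (1,-1), (1,1), (-1,1)."""
    g = 1.0 / np.sqrt(3.0)                  # Gauss point coordinate, weight = 1
    K = np.zeros((4, 4))
    for xi in (-g, g):
        for eta in (-g, g):
            # derivatives of the bilinear shape functions w.r.t. (xi, eta)
            dN = 0.25 * np.array([[-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)],
                                  [-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)]])
            J = dN @ coords                 # 2x2 Jacobian of the mapping
            B = np.linalg.solve(J, dN)      # shape-function gradients w.r.t. (x, y)
            K += k * B.T @ B * np.linalg.det(J)
    return K

# unit-square element: the classical result has 2/3 on the diagonal
K = quad4_conductivity(np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]))
```

For affine elements the 2x2 rule integrates the bilinear conductance exactly; the paper's contribution is avoiding this quadrature loop via a closed-form expression.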

  18. Modelling Spatial Compositional Data: Reconstructions of past land cover and uncertainties

    DEFF Research Database (Denmark)

    Pirzamanbein, Behnaz; Lindström, Johan; Poska, Anneli

    2018-01-01

    In this paper, we construct a hierarchical model for spatial compositional data, which is used to reconstruct past land-cover compositions (in terms of coniferous forest, broadleaved forest, and unforested/open land) for five time periods during the past 6000 years over Europe. The model ... to a fast MCMC algorithm. Reconstructions are obtained by combining pollen-based estimates of vegetation cover at a limited number of locations with scenarios of past deforestation and output from a dynamic vegetation model. To evaluate uncertainties in the predictions a novel way of constructing joint confidence regions for the entire composition at each prediction location is proposed. The hierarchical model's ability to reconstruct past land cover is evaluated through cross validation for all time periods, and by comparing reconstructions for the recent past to a present day European forest map...
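Compositional data such as the three-part land-cover fractions above are commonly mapped to unconstrained space through log-ratio transforms before spatial modelling. A minimal sketch of the additive log-ratio (alr) transform and its inverse; the paper's actual link function may differ:

```python
import numpy as np

def alr(p):
    """Additive log-ratio transform of a composition (positive parts summing
    to 1), using the last part as the reference. Returns len(p)-1 values."""
    p = np.asarray(p, dtype=float)
    return np.log(p[:-1] / p[-1])

def alr_inv(z):
    """Inverse alr: map unconstrained values back to a composition."""
    e = np.append(np.exp(z), 1.0)
    return e / e.sum()
```

The transformed values live in ordinary Euclidean space, where Gaussian spatial priors (as in hierarchical models of this kind) apply naturally; the inverse maps predictions back onto the simplex.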

  19. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which can help to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
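The LIF response model used in the second step above has a simple forward simulation. A minimal Euler-stepped sketch with dimensionless, illustrative parameters (not the paper's fitted input parameters):

```python
def lif_spike_times(I, dt=0.1, tau=10.0, R=1.0, v_th=1.0, v_reset=0.0, t_end=200.0):
    """Leaky integrate-and-fire neuron: tau * dv/dt = -v + R*I.
    Integrates with forward Euler; emits a spike and resets v when the
    membrane potential reaches threshold v_th. Returns spike times."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt / tau * (-v + R * I)   # leaky integration of the input current
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes
```

Below threshold (R*I < v_th) the neuron never fires; above it, the firing rate grows with input strength, which is the monotone input-output relation the reconstruction inverts.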

  20. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which can help to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  1. ZFITTER - an analytical program for fermion-pair production

    International Nuclear Information System (INIS)

    Riemann, T.

    1992-10-01

    I discuss the semi-analytical codes which have been developed for the Z line-shape analysis at LEP I. They are applied for a model-independent and, when using a weak library, a Standard Model interpretation of the data. Some of them are applicable to New Physics searches. The package ZFITTER serves as an example, and comparisons of the codes are discussed. The degrees of freedom of the line shape and of asymmetries are made explicit. (orig.)

  2. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and
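As an illustration of the reliability quantities such models yield, an alternating two-state (up/down) semi-Markov process with a non-exponential sojourn time can be simulated directly. The distributions and parameters below are illustrative, not taken from the book:

```python
import random

random.seed(1)

def availability(t_end=1e5):
    """Estimate steady-state availability of an alternating up/down
    semi-Markov process: Weibull 'up' sojourns (scale 100, shape 1.5)
    and exponential 'down' sojourns (mean 10)."""
    t, up_time = 0.0, 0.0
    while t < t_end:
        up = random.weibullvariate(100.0, 1.5)   # time in the working state
        down = random.expovariate(1.0 / 10.0)    # time under repair
        up_time += up
        t += up + down
    return up_time / t
```

The estimate should approach E[up] / (E[up] + E[down]); with these parameters E[up] = 100 · Γ(1 + 1/1.5) ≈ 90.3, giving an availability near 0.90. A semi-Markov formulation is needed precisely because the Weibull sojourn makes the process non-Markovian in continuous time.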

  3. Active numerical model of human body for reconstruction of falls from height.

    Science.gov (United States)

    Milanowicz, Marcin; Kędzior, Krzysztof

    2017-01-01

    Falls from height constitute the largest group of incidents among the approximately 90,000 occupational accidents occurring each year in Poland. Reconstruction of the exact course of a fall from height is generally difficult due to lack of sufficient information from the accident scene. This usually results in several contradictory versions of an incident and impedes, for example, determination of liability in a judicial process. In similar situations, in many areas of human activity, researchers apply numerical simulation. They use it to model physical phenomena and reconstruct their real course over time; e.g., numerical human body models are frequently used for investigation and reconstruction of road accidents. However, these models are validated for specific road traffic accidents and are of limited use in the reconstruction of other types of accidents. The objective of the study was to develop an active numerical human body model for the reconstruction of accidents involving falls from height. Development of the model involved extension and adaptation of the existing Pedestrian human body model (available in the MADYMO package database) for the reconstruction of falls from height, by taking into account the human reaction to the loss of balance. The model was developed using the results of experimental tests of the initial phase of a fall from height. An active numerical human body model covering 28 sets of initial conditions related to various human reactions to the loss of balance was developed. The application of the model was illustrated by using it to reconstruct a real fall from height. From among the 28 sets of initial conditions, the set whose application made it possible to reconstruct the most probable version of the incident was selected. The selection was based on comparison of the reconstruction results with information contained in the accident report. Results in the form of estimated

  4. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involves image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which requires manual interaction. From the resulting high-resolution models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we propose two novel metrics, Gaussian curvature analysis and sphere distance deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results show that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball across different subjects and can potentially be used for eye disease diagnosis.
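    The record does not give formulas for the two new metrics. One plausible reading of "sphere distance deviation" (an assumption for illustration, not the authors' definition) is the RMS radial deviation of surface points from a least-squares best-fit sphere; expanding ||p - c||^2 = r^2 makes the fit linear in (c, r^2 - |c|^2):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: ||p - c||^2 = r^2 is linear in (c, r^2 - |c|^2)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

def sphere_distance_deviation(points):
    """RMS radial deviation of surface points from the best-fit sphere."""
    center, radius = fit_sphere(points)
    radial = np.linalg.norm(points - center, axis=1)
    return np.sqrt(np.mean((radial - radius) ** 2))

# synthetic "eyeball" surface: points on a 12 mm sphere centred at the origin
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
pts = 12.0 * v / np.linalg.norm(v, axis=1, keepdims=True)
print(sphere_distance_deviation(pts))  # ~0 for a perfect sphere
```

    On a real reconstruction the input would be the mesh vertices of the fitted eyeball surface rather than synthetic points.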

  5. Semi-analytical modelling of positive corona discharge in air

    Science.gov (United States)

    Pontiga, Francisco; Yanallah, Khelifa; Chen, Junhong

    2013-09-01

    Semi-analytical approximate solutions for the spatial distribution of the electric field and the electron and ion densities have been obtained by solving Poisson's equation and the continuity equations for the charged species along the Laplacian field lines. The need to iterate for the correct value of space charge on the corona electrode has been eliminated by using the corona current distribution over the grounded plane derived by Deutsch, which predicts a cos^m θ law similar to Warburg's law. Based on the results of the approximate model, a parametric study of the influence of gas pressure, corona wire radius, and inter-electrode wire-plate separation has been carried out. The approximate solution for the electron number density has also been combined with a simplified plasma chemistry model in order to compute the ozone density generated by the corona discharge in the presence of a gas flow. This work was supported by the Consejeria de Innovacion, Ciencia y Empresa (Junta de Andalucia) and by the Ministerio de Ciencia e Innovacion, Spain, within the European Regional Development Fund contracts FQM-4983 and FIS2011-25161.
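    As a rough illustration of the cos^m θ current distribution mentioned above (a sketch, not the paper's model; the exponent m ≈ 4.82 often quoted for positive corona in the Warburg-law literature is an assumption here), the on-axis current density can be normalized so the profile integrates to a prescribed total current over the grounded plane:

```python
import numpy as np

def _trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def warburg_profile(theta, m=4.82):
    """Warburg-type distribution J(theta)/J(0) = cos^m(theta)."""
    return np.cos(theta) ** m

def on_axis_density(I_total, gap_h, m=4.82, theta_max=np.deg2rad(60.0)):
    """On-axis current density J0 (A/m^2) such that the cos^m profile
    integrates to I_total over the plane; the area element at angle theta
    is dA = 2*pi*h^2*tan(theta)*sec^2(theta)*dtheta."""
    theta = np.linspace(0.0, theta_max, 4000)
    area_weight = 2.0 * np.pi * gap_h ** 2 * np.tan(theta) / np.cos(theta) ** 2
    return I_total / _trapezoid(warburg_profile(theta, m) * area_weight, theta)

# e.g. a 100 uA positive corona over a 20 mm wire-plane gap
print(on_axis_density(100e-6, 0.02))
```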

  6. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    Science.gov (United States)

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard adaptive statistical iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, blinded to the reconstruction algorithms, evaluated anonymized 2.5 mm multiplanar reformations with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ, using a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR, and MBIR images received significantly higher IQ ratings than ASiR images. As CCT is a frequently required examination, the use of MBIR may allow for substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
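    The SNR and CNR used here are standard ROI statistics; a minimal sketch, assuming SNR = mean/SD within one ROI and a pooled-SD noise estimate for CNR (the record does not state which noise definition the authors used):

```python
import numpy as np

def snr(roi_hu):
    """Signal-to-noise ratio of a homogeneous ROI: mean HU / SD HU."""
    roi = np.asarray(roi_hu, dtype=float)
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_a_hu, roi_b_hu):
    """Contrast-to-noise ratio between two tissues, using the pooled SD
    of the two ROIs as the noise estimate (one common convention)."""
    a = np.asarray(roi_a_hu, dtype=float)
    b = np.asarray(roi_b_hu, dtype=float)
    noise = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return abs(a.mean() - b.mean()) / noise

grey = [40.0, 42.0, 38.0, 40.0]   # e.g. caudate nucleus samples (HU)
white = [30.0, 32.0, 28.0, 30.0]  # e.g. frontal white matter samples (HU)
print(snr(grey))         # ≈ 24.49
print(cnr(grey, white))  # ≈ 6.12
```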

  7. A Performance Analytical Strategy for Network-on-Chip Router with Input Buffer Architecture

    Directory of Open Access Journals (Sweden)

    WANG, J.

    2012-11-01

    In this paper, a performance-analysis strategy is proposed for a Network-on-Chip router with an input-buffer architecture. First, an analytical model is developed based on a semi-Markov process. For a non-work-conserving router with a small buffer size, the model can be used to analyze the schedule delay and the average service time of each buffer, given the related parameters. The average packet delay in the router is then calculated using the model. Finally, we validate the effectiveness of the strategy by simulation: comparing analytical results with simulation results shows that the strategy successfully captures Network-on-Chip router performance and outperforms the state of the art. The strategy can therefore serve as an efficient performance-analysis tool for Network-on-Chip design.

  8. Application of semi-empirical modeling and non-linear regression to unfolding fast neutron spectra from integral reaction rate data

    International Nuclear Information System (INIS)

    Harker, Y.D.

    1976-01-01

    A semi-empirical analytical expression representing a fast-reactor neutron spectrum has been developed. This expression was used in a non-linear regression computer routine to obtain, from measured multiple-foil integral reaction data, the neutron spectrum inside the Coupled Fast Reactivity Measurement Facility. In this application, six parameters in the analytical expression for the neutron spectrum were adjusted in the non-linear fitting process to maximize consistency between calculated and measured integral reaction rates for a set of 15 dosimetry detector foils. In two-thirds of the observations the calculated integral agreed with its respective measured value to within the experimental standard deviation, and in all but one case agreement within two standard deviations was obtained. Based on this quality of fit, the estimated 70 to 75 percent confidence intervals for the derived spectrum are 10 to 20 percent for the energy range 100 eV to 1 MeV, 10 to 50 percent for 1 MeV to 10 MeV, and 50 to 90 percent for 10 MeV to 18 MeV. The analytical model has demonstrated the flexibility to describe the salient features of fast-reactor neutron spectra, and its use with regression analysis has produced a stable method for deriving neutron spectra from a limited amount of integral data.
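    The unfolding procedure, a parametric spectrum adjusted by non-linear least squares until calculated integral reaction rates match measured ones, can be sketched with a deliberately simplified two-parameter spectrum and hypothetical foil responses; the paper's six-parameter expression and real cross sections are not reproduced here:

```python
import numpy as np
from scipy.optimize import least_squares

def _trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

E = np.logspace(-4, 1.2, 400)  # energy grid, MeV

def phi(E, a, b):
    """Toy spectrum: a 1/E slowing-down term plus a Watt-like fission hump."""
    return a / E + b * np.sqrt(E) * np.exp(-E / 1.3)

# hypothetical foil responses: 1/v capture, a threshold reaction, a broad one
responses = [1.0 / np.sqrt(E),
             1.0 / (1.0 + np.exp(-(E - 1.0) / 0.2)),
             np.exp(-E) + 0.1]

def calc_rates(params):
    f = phi(E, *params)
    return np.array([_trapezoid(s * f, E) for s in responses])

measured = calc_rates((0.5, 2.0))  # synthetic "measurements"
fit = least_squares(lambda p: calc_rates(p) - measured, x0=(1.0, 1.0))
print(fit.x)  # recovers ≈ (0.5, 2.0)
```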

  9. Modeling and analytical simulation of a smouldering carbonaceous ...

    African Journals Online (AJOL)

    Modeling and analytical simulation of a smouldering carbonaceous rod. A.A. Mohammed, R.O. Olayiwola, M Eseyin, A.A. Wachin. Abstract. Modeling of pyrolysis and combustion in a smouldering fuel bed requires the solution of flow, heat and mass transfer through porous media. This paper presents an analytical method ...

  10. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    Science.gov (United States)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high
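    The elliptic solve described above, restarted GMRES with a preconditioner, can be illustrated on a 2-D five-point Laplacian standing in for the model's 3-D elliptic operator (the ILU preconditioner below is an assumption; the abstract does not say which preconditioner was used):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50  # grid points per dimension
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()  # 2-D five-point Laplacian
b = np.ones(n * n)

ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)   # incomplete-LU preconditioner
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.gmres(A, b, M=M, restart=30)  # restarted, preconditioned GMRES
print(info, np.linalg.norm(A @ x - b))       # info == 0 signals convergence
```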

  11. Improved steamflood analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Chandra, S.; Mamora, D.D. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas A and M Univ., TX (United States)

    2005-11-01

    Predicting the performance of steam flooding can help in the proper execution of enhanced oil recovery (EOR) processes. The Jones model is often used for analytical steamflood performance prediction, but it does not accurately predict oil production peaks. In this study, an improved steamflood model was developed by modifying 2 of the 3 components of the capture factor in the Jones model. The modifications were based on simulation results from a Society of Petroleum Engineers (SPE) comparative project case model. The production performance of a 5-spot steamflood pattern unit was simulated and compared with results obtained from the Jones model. Three reservoir types were simulated using 3-D Cartesian black-oil models. In order to match the simulation and Jones analytical model results for the start and height of the production peak, the dimensionless steam zone size was modified to account for the decrease in oil viscosity during steam flooding and its dependence on the steam injection rate. In addition, the dimensionless volume of displaced oil produced was modified from its square-root form to an exponential form. The modified model improved predictions of production performance for up to 20 years of simulated steam flooding compared to the Jones model, and its results agreed with simulation results for 13 different cases, including 3 different sets of reservoir and fluid properties. Reservoir engineers will benefit from the improved accuracy of the model. Oil displacement calculations were based on methods proposed in earlier research, in which the oil displacement rate is a function of the cumulative oil-steam ratio, itself a function of overall thermal efficiency. Capture factor component formulae are presented, as well as charts of oil production rates and cumulative oil-steam ratios for various reservoirs. 13 refs., 4 tabs., 29 figs.

  12. A semi-empirical two phase model for rocks

    International Nuclear Information System (INIS)

    Fogel, M.B.

    1993-01-01

    This article presents data from an experiment simulating a spherically symmetric tamped nuclear explosion. A semi-empirical two-phase model of the measured response in tuff is presented, and the computed peak stress and velocity versus scaled range are compared with those measured on several recent tuff events.

  13. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)

    2017-08-15

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs), unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and of their solid components were significantly higher using MBIR (p < 0.05), with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) for whole nodules, and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%) and 6.6% (-50.1 to 63.3%) for solid components, respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs thus differed significantly between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)
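    The Bland-Altman limits of agreement quoted above (mean percentage difference ± 1.96 SD) can be computed as follows; this is a generic sketch, not the authors' code, and the nodule volumes are hypothetical:

```python
import numpy as np

def bland_altman_pct(measure_fbp, measure_mbir):
    """Bland-Altman statistics on percentage differences: returns the mean
    relative difference and the 95% limits of agreement (mean ± 1.96 SD),
    each difference expressed as a percentage of the pairwise mean."""
    x = np.asarray(measure_fbp, dtype=float)
    y = np.asarray(measure_mbir, dtype=float)
    d = 100.0 * (y - x) / ((x + y) / 2.0)
    mean, sd = d.mean(), d.std(ddof=1)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# hypothetical nodule volumes (mm^3) measured on FBP and MBIR images
fbp = [120.0, 310.0, 95.0, 540.0]
mbir = [125.0, 318.0, 99.0, 551.0]
print(bland_altman_pct(fbp, mbir))
```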

  14. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for the simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed to the importance of online coupling strategies that represent feedbacks between floodplain inundation and vertical hydrology; across semi-arid regions in particular, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large-scale hydrological-hydrodynamic model (MGB) and tested different model structures in order to assess which processes are important to simulate in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate the benefits of this coupling on a validation case, the model was applied to the Upper Niger River basin, encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) > 0.6 for most gauges), as well as with flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation; evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands.
Finally, such coupled
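    The Nash-Sutcliffe Efficiency used to score the simulations above compares the model's squared error against the variance of the observations (NSE = 1 is a perfect fit; NSE = 0 is no better than predicting the observed mean); the discharge values below are hypothetical:

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe Efficiency: 1 - sum((sim-obs)^2) / sum((obs-mean)^2)."""
    s = np.asarray(simulated, dtype=float)
    o = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2)

# hypothetical daily discharges (m^3/s) at a gauge downstream of the Delta
obs = np.array([1200.0, 1500.0, 1900.0, 1650.0, 1300.0])
sim = np.array([1150.0, 1480.0, 1980.0, 1600.0, 1380.0])
print(round(nse(sim, obs), 3))  # → 0.942
```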

  15. Assessing Women's Preferences and Preference Modeling for Breast Reconstruction Decision-Making.

    Science.gov (United States)

    Sun, Clement S; Cantor, Scott B; Reece, Gregory P; Crosby, Melissa A; Fingeret, Michelle C; Markey, Mia K

    2014-03-01

    Women considering breast reconstruction must make challenging trade-offs among issues that often conflict. It may be useful to quantify possible outcomes with a single summary measure to aid a breast cancer patient in choosing a form of breast reconstruction. In this study, we used multiattribute utility theory to combine multiple objectives into a summary value, using nine different preference models. We elicited the preferences of 36 women, aged 32 or older with no history of breast cancer, for the patient-reported outcome measures of breast satisfaction, psychosocial well-being, chest well-being, abdominal well-being, and sexual well-being as measured by the BREAST-Q, in addition to time lost to reconstruction and out-of-pocket cost. Participants ranked hypothetical breast reconstruction outcomes. We examined each multiattribute utility preference model and assessed how often each agreed with participants' rankings. The median time required to assess preferences was 34 minutes. Agreement of the nine preference models with the participants ranged from 75.9% to 78.9%, and none performed significantly worse than the best-performing risk-averse multiplicative model, for which we hypothesize an average theoretical agreement of 94.6% once participant error is included. Agreement was also significantly positively correlated with a more unequal distribution of weight across the seven attributes. We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.
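    In the risk-averse multiplicative model, single-attribute utilities u_i and scaling weights k_i combine as 1 + K·U = ∏(1 + K·k_i·u_i), with the constant K solving the master equation 1 + K = ∏(1 + K·k_i). A sketch with hypothetical weights (the study's elicited weights for the seven attributes are not given in the record):

```python
import numpy as np
from scipy.optimize import brentq

def solve_master_K(k):
    """Solve 1 + K = prod(1 + K k_i) for the multiplicative model's scaling
    constant (K > 0 if sum(k) < 1, -1 < K < 0 if sum(k) > 1)."""
    k = np.asarray(k, dtype=float)
    if abs(k.sum() - 1.0) < 1e-12:
        return 0.0  # degenerates to the additive model
    f = lambda K: np.prod(1.0 + K * k) - (1.0 + K)
    lo, hi = (1e-9, 1e9) if k.sum() < 1.0 else (-1.0 + 1e-9, -1e-9)
    return brentq(f, lo, hi)

def multiplicative_utility(u, k, K):
    """U such that 1 + K U = prod(1 + K k_i u_i); additive when K == 0."""
    u, k = np.asarray(u, dtype=float), np.asarray(k, dtype=float)
    if K == 0.0:
        return float(np.sum(k * u))
    return (np.prod(1.0 + K * k * u) - 1.0) / K

# seven attributes as in the study, with hypothetical weights (sum = 0.95 < 1)
k = [0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05]
K = solve_master_K(k)
print(multiplicative_utility([1] * 7, k, K))  # ≈ 1.0 by construction
```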

  16. A CIRCULAR-CYLINDRICAL FLUX-ROPE ANALYTICAL MODEL FOR MAGNETIC CLOUDS

    International Nuclear Information System (INIS)

    Nieves-Chinchilla, T.; Linton, M. G.; Hidalgo, M. A.; Vourlidas, A.; Savani, N. P.; Szabo, A.; Farrugia, C.; Yu, W.

    2016-01-01

    We present an analytical model to describe magnetic flux-rope topologies. When these structures are observed embedded in Interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature, they are called Magnetic Clouds (MCs). Our model extends the circular-cylindrical concept of Hidalgo et al. by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of MC geometrical information and orientation. The generalized model provides flexibility for implementation in 3D MHD simulations. Here, we evaluate its performance in the reconstruction of MCs in in situ observations. Four Earth-directed ICME events, observed by the Wind spacecraft, are used to validate the technique. The events are selected from the ICME Wind list with the magnetic obstacle boundaries chosen consistently with the magnetic field and plasma in situ observations and with a new parameter (EPP, the Electron Pitch angle distribution Parameter) which quantifies the bidirectionality of the plasma electrons. The goodness of the fit is evaluated with a single correlation parameter to enable comparative analysis of the events. At first glance, the model fits the selected events very well. However, a detailed analysis of events with signatures of significant compression indicates the need to explore geometries other than the circular-cylindrical. An extension of our current modeling framework to account for such non-circular CMEs will be presented in a forthcoming publication.

  17. A CIRCULAR-CYLINDRICAL FLUX-ROPE ANALYTICAL MODEL FOR MAGNETIC CLOUDS

    Energy Technology Data Exchange (ETDEWEB)

    Nieves-Chinchilla, T. [Catholic University of America, Washington, DC (United States); Linton, M. G. [Space Science Division, Naval Research Laboratory, Washington, DC (United States); Hidalgo, M. A. [Dept. de Fisica, UAH, Alcala de Henares, Madrid (Spain); Vourlidas, A. [The Johns Hopkins University Applied Physics Laboratory, Laurel, MD (United States); Savani, N. P.; Szabo, A. [NASA Goddard Space Flight Center, Greenbelt, MD (United States); Farrugia, C.; Yu, W., E-mail: Teresa.Nieves@nasa.gov [Space Science Center and Department of Physics, University of New Hampshire, Durham, NH (United States)

    2016-05-20

    We present an analytical model to describe magnetic flux-rope topologies. When these structures are observed embedded in Interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature, they are called Magnetic Clouds (MCs). Our model extends the circular-cylindrical concept of Hidalgo et al. by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of MC geometrical information and orientation. The generalized model provides flexibility for implementation in 3D MHD simulations. Here, we evaluate its performance in the reconstruction of MCs in in situ observations. Four Earth-directed ICME events, observed by the Wind spacecraft, are used to validate the technique. The events are selected from the ICME Wind list with the magnetic obstacle boundaries chosen consistently with the magnetic field and plasma in situ observations and with a new parameter (EPP, the Electron Pitch angle distribution Parameter) which quantifies the bidirectionality of the plasma electrons. The goodness of the fit is evaluated with a single correlation parameter to enable comparative analysis of the events. At first glance, the model fits the selected events very well. However, a detailed analysis of events with signatures of significant compression indicates the need to explore geometries other than the circular-cylindrical. An extension of our current modeling framework to account for such non-circular CMEs will be presented in a forthcoming publication.

  18. A theoretical model of semi-elliptic surface crack growth

    Directory of Open Access Journals (Sweden)

    Shi Kaikai

    2014-06-01

    A theoretical model of semi-elliptic surface crack growth is developed, based on low-cycle strain damage accumulation near the crack tip along the cracking direction and the Newman-Raju formula. The crack is regarded as a sharp notch with a small curvature radius, and the process zone is assumed to be the size of the cyclic plastic zone. The modified Hutchinson, Rice and Rosengren (HRR) formulations are used in the present study. The shape of the surface crack front is assumed to be controlled by two critical points: the deepest point and the surface point. The theoretical model is applied to a semi-elliptic surface-cracked Al 7075-T6 alloy plate under cyclic loading, and five different initial crack shapes are discussed. Good agreement between experimental and theoretical results is obtained.
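    For reference, the Newman-Raju expression invoked by the model gives the mode-I stress intensity factor around a semi-elliptical surface crack in a plate under remote tension. The sketch below uses the commonly quoted coefficients from Newman and Raju (1981) with the finite-width correction set to 1 (wide plate); treat it as a transcription to be checked against the original paper before use:

```python
import math

def newman_raju_K(S, a, c, t, phi):
    """Newman-Raju (1981) stress intensity factor (MPa·sqrt(m)) for a
    semi-elliptical surface crack (depth a, half-length c, thickness t, m)
    under remote tension S (MPa), at parametric angle phi (phi = pi/2 is
    the deepest point). Valid for a/c <= 1; finite-width factor f_w = 1."""
    ac, at = a / c, a / t
    Q = 1.0 + 1.464 * ac ** 1.65  # crack-shape factor
    M1 = 1.13 - 0.09 * ac
    M2 = -0.54 + 0.89 / (0.2 + ac)
    M3 = 0.5 - 1.0 / (0.65 + ac) + 14.0 * (1.0 - ac) ** 24
    g = 1.0 + (0.1 + 0.35 * at ** 2) * (1.0 - math.sin(phi)) ** 2
    f_phi = (ac ** 2 * math.cos(phi) ** 2 + math.sin(phi) ** 2) ** 0.25
    F = (M1 + M2 * at ** 2 + M3 * at ** 4) * g * f_phi
    return S * math.sqrt(math.pi * a / Q) * F

# deepest point vs. surface point for a/c = 0.5, a/t = 0.2, S = 100 MPa
print(newman_raju_K(100.0, 0.002, 0.004, 0.010, math.pi / 2))
print(newman_raju_K(100.0, 0.002, 0.004, 0.010, 0.0))
```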

  19. Verifying three-dimensional skull model reconstruction using cranial index of symmetry.

    Science.gov (United States)

    Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi

    2013-01-01

    Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implants in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling, so it is important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of the skull defects were reconstructed using commercial software and checked for symmetry by CIS score. CIS scores of the CAD reconstructions were 99.24±0.004% (range 98.47-99.84). These scores were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (Wilcoxon matched-pairs signed-rank tests), evidencing the highly accurate symmetry of these CAD models with regular contours. CIS calculation is beneficial for assessing the aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables more accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.

  20. Multiphase Interface Tracking with Fast Semi-Lagrangian Contouring.

    Science.gov (United States)

    Li, Xiaosheng; He, Xiaowei; Liu, Xuehui; Zhang, Jian J; Liu, Baoquan; Wu, Enhua

    2016-08-01

    We propose a semi-Lagrangian method for multiphase interface tracking. In contrast to previous methods, our method maintains an explicit polygonal mesh, reconstructed from an unsigned distance function and an indicator function, to track the interface of an arbitrary number of phases. The surface mesh is reconstructed at each step using an efficient multiphase polygonization procedure with precomputed stencils, while the distance and indicator functions are updated with accurate semi-Lagrangian path tracing from the meshes of the previous step. Furthermore, we provide an adaptive data structure, the multiphase distance tree, to accelerate the updating of both the distance function and the indicator function. The adaptive structure also enables us to contour the distance tree accurately with simple bisection techniques. The major advantage of our method is that it can easily handle topological changes without ambiguities while preserving both sharp features and volume well. We evaluate its efficiency, accuracy and robustness with several examples in the results section.

  1. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries, commonly with the aim of estimating Equilibrium Line Altitudes (ELAs) and, from these, inferring palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface; more recently, ice-surface modelling has enabled an equilibrium-profile reconstruction tuned using the geomorphology. Both methods permit derivation of palaeoclimate, but no study has compared the two for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeoclimate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from cartographic and geometrical perspectives. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2), was conducted to examine the extent of Younger Dryas (YD; 12.9-11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD, covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution, requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look.
In some locations this results in an underestimation of the ice surface compared to the modelled surface most likely due to
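    Once a glacier surface has been reconstructed by either method, one standard way to estimate the ELA (one of several; the record does not state which method this study used) is the accumulation-area ratio (AAR) approach, which places the ELA at the elevation dividing the hypsometry into a fixed accumulation fraction, commonly about 0.6 at steady state:

```python
import numpy as np

def ela_from_aar(band_midpoints_m, band_areas_km2, aar=0.6):
    """Accumulation-area-ratio ELA: the elevation below which a fraction
    (1 - aar) of the glacier's area lies."""
    order = np.argsort(band_midpoints_m)
    z = np.asarray(band_midpoints_m, dtype=float)[order]
    a = np.asarray(band_areas_km2, dtype=float)[order]
    cum = np.cumsum(a) / a.sum()  # cumulative area fraction, bottom-up
    return float(np.interp(1.0 - aar, cum, z))

# ten 100 m elevation bands of equal area between 100 and 1000 m a.s.l.
print(ela_from_aar(range(100, 1001, 100), [1.0] * 10))  # → 400.0
```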

  2. Analytical models for low-power rectenna design

    NARCIS (Netherlands)

    Akkermans, J.A.G.; Beurden, van M.C.; Doodeman, G.J.N.; Visser, H.J.

    2005-01-01

    The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed.
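    The record does not reproduce the closed-form expressions, but the first analytical step in a low-power rectenna design is typically a Friis free-space link budget for the power arriving at the antenna; the 2.45 GHz frequency and the gains below are illustrative assumptions, not values from the paper:

```python
import math

def friis_received_power(p_tx_w, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Friis free-space link budget: P_r = P_t G_t G_r (lambda / (4 pi d))^2."""
    lam = 299_792_458.0 / freq_hz           # wavelength, m
    g_tx = 10.0 ** (g_tx_dbi / 10.0)        # dBi -> linear gain
    g_rx = 10.0 ** (g_rx_dbi / 10.0)
    return p_tx_w * g_tx * g_rx * (lam / (4.0 * math.pi * dist_m)) ** 2

# e.g. a 1 W, 0 dBi source at 2.45 GHz and a 2 dBi rectenna 1 m away
print(friis_received_power(1.0, 0.0, 2.0, 2.45e9, 1.0))  # ≈ 1.5e-4 W (0.15 mW)
```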

  3. Analytically solvable models of reaction-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Zemskov, E P; Kassner, K [Institut fuer Theoretische Physik, Otto-von-Guericke-Universitaet, Universitaetsplatz 2, 39106 Magdeburg (Germany)

    2004-05-01

    We consider a class of analytically solvable models of reaction-diffusion systems. An analytical treatment is possible because the nonlinear reaction term is approximated by a piecewise linear function. As particular examples we choose front and pulse solutions to illustrate the matching procedure in the one-dimensional case.

  4. Semi-automated curation of metabolic models via flux balance analysis: a case study with Mycoplasma gallisepticum.

    Directory of Open Access Journals (Sweden)

    Eddy J Bautista

    Primarily used for metabolic engineering and synthetic biology, genome-scale metabolic modeling shows tremendous potential as a tool for fundamental research and curation of metabolism. Through a novel integration of flux balance analysis and genetic algorithms, a strategy was developed to curate metabolic networks and facilitate identification of metabolic pathways that may not be directly inferable solely from genome annotation. Specifically, metabolites involved in unknown reactions can be determined, and potentially erroneous pathways can be identified. The procedure allows for new fundamental insight into metabolism, as well as acting as a semi-automated curation methodology for genome-scale metabolic modeling. To validate the methodology, a genome-scale metabolic model for the bacterium Mycoplasma gallisepticum was created. Several reactions not predicted by the genome annotation were postulated and validated via the literature. The model predicted an average growth rate of 0.358±0.12, closely matching the experimentally determined growth rate of M. gallisepticum of 0.244±0.03. This work presents a powerful algorithm for facilitating the identification and curation of previously known and new metabolic pathways, as well as the first genome-scale reconstruction of M. gallisepticum.
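    The flux balance analysis at the core of the curation strategy is a linear program: choose the flux vector v that maximizes a biomass objective subject to steady-state mass balance S·v = 0 and capacity bounds. A minimal sketch on a toy three-reaction network (not the M. gallisepticum reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass
# Columns: v_uptake, v_conv, v_biomass ; rows: metabolites A, B
S = np.array([[1.0, -1.0, 0.0],   # A
              [0.0, 1.0, -1.0]])  # B
bounds = [(0, 10), (0, 8), (0, None)]  # uptake capped at 10, conversion at 8

# FBA: maximize biomass flux (linprog minimizes, hence the -1 objective)
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal fluxes [8, 8, 8]: the A -> B conversion is the bottleneck
```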

  5. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    International Nuclear Information System (INIS)

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  6. Analytical simulation platform describing projections in computed tomography systems

    International Nuclear Information System (INIS)

    Youn, Hanbean; Kim, Ho Kyung

    2013-01-01

To reduce patient dose, several approaches, such as spectral imaging using photon-counting detectors and statistical image reconstruction, are being considered. Although image-reconstruction algorithms may significantly enhance image quality in low-dose reconstructions, the true signal-to-noise properties are mainly determined by the image quality of the projections. We are developing an analytical simulation platform describing projections to investigate how the quantum-interaction physics of each component of a CT system affects projection image quality. This simulator will be useful for the economical design and optimization of CT systems as well as for the development of novel image-reconstruction algorithms. In this study, we present the progress of the simulation platform, with emphasis on the theoretical framework describing the generation of projection data. We have prepared the analytical simulation platform describing projections in computed tomography systems. The remaining work before the meeting includes the following: each stage in the cascaded signal-transfer model for obtaining projections will be validated by Monte Carlo simulations; we will build energy-dependent scatter and pixel-crosstalk kernels and show their effects on image quality in projections and reconstructed images; and we will investigate how projections obtained under various imaging conditions and system (or detector) operating parameters affect reconstructed images. It is challenging to include the interaction physics of photon-counting detectors in the simulation platform. Detailed descriptions of the simulator will be presented, with discussions of its performance and limitations as well as Monte Carlo validations. Computational cost will also be addressed in detail. The proposed method is simple and can be used conveniently in a lab environment.
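The first stage of any such projection model is the ideal monoenergetic line integral (Beer–Lambert law). A minimal sketch, with axis-aligned rays, no scatter, and no noise (all simplifications relative to the full cascaded model; the attenuation values are hypothetical):

```python
import math

def project_rows(mu, pixel_size, i0=1.0e5):
    """Ideal monoenergetic projections along grid rows: I = I0 * exp(-sum(mu) * dx)."""
    return [i0 * math.exp(-sum(row) * pixel_size) for row in mu]

# Hypothetical 4x4 attenuation map (1/cm): water-like background, denser insert.
mu = [
    [0.2, 0.2, 0.2, 0.2],
    [0.2, 0.5, 0.5, 0.2],
    [0.2, 0.5, 0.5, 0.2],
    [0.2, 0.2, 0.2, 0.2],
]
proj = project_rows(mu, pixel_size=0.1)
# -log(I/I0) recovers the line integral sum(mu) * dx for each row.
line_integrals = [-math.log(p / 1.0e5) for p in proj]
print(line_integrals)
```

Later stages of a cascaded model (quantum noise, detector blur, crosstalk) would act on these ideal projections.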

  7. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compression and sampling of image signals simultaneously. In imaging procedures based on compressed sensing theory, not only is the required storage space reduced, but the demands on detector resolution are also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and largely determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and recover edge information well. To verify the performance of the TV-based reconstruction algorithm, we simulate and analyze its reconstruction results under different coding modes, confirming the stability of the algorithm. We also compare and analyze typical reconstruction algorithms under the same coding mode. Building on the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is found by the alternating direction method. Experimental results show that this reconstruction algorithm has great advantages over the traditional classical TV-based algorithms and can quickly and accurately recover the target image at low measurement rates.
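To illustrate why a TV prior favors piecewise-constant signals with sharp edges — this is not the paper's augmented-Lagrangian/alternating-direction solver, just plain gradient descent on a smoothed TV objective with toy parameters:

```python
import math

def tv_denoise(y, lam=0.5, eps=1e-2, step=0.05, iters=3000):
    """Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]      # gradient of the data term
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = d / math.sqrt(d * d + eps)       # derivative of smoothed |d|
            g[i] -= lam * w
            g[i + 1] += lam * w
        for i in range(n):
            x[i] -= step * g[i]
    return x

# Piecewise-constant signal with deterministic pseudo-noise (reproducible).
clean = [0.0] * 10 + [2.0] * 10
noise = [0.3 * math.sin(7.0 * i) for i in range(20)]
noisy = [c + e for c, e in zip(clean, noise)]
x = tv_denoise(noisy)
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_tv = sum((a - b) ** 2 for a, b in zip(x, clean))
print(err_noisy, err_tv)
```

The TV estimate flattens each plateau while keeping the jump, so the squared error drops well below that of the noisy input.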

  8. Analytic continuation of the rotating black hole state counting

    Energy Technology Data Exchange (ETDEWEB)

Achour, Jibril Ben [Department of Physics, Center for Field Theory and Particle Physics, Fudan University, 20433 Shanghai (China)]; Noui, Karim [Fédération Denis Poisson, Laboratoire de Mathématiques et Physique Théorique (UMR 7350), Université François Rabelais, Parc de Grandmont, 37200 Tours (France); Laboratoire APC - Astroparticule et Cosmologie, Université Paris Diderot Paris 7, 75013 Paris (France)]; Perez, Alejandro [Centre de Physique Théorique (UMR 7332), Aix Marseille Université and Université de Toulon, 13288 Marseille (France)]

    2016-08-24

    In loop quantum gravity, a spherical black hole can be described in terms of a Chern-Simons theory on a punctured 2-sphere. The sphere represents the horizon. The punctures are the edges of spin-networks in the bulk which cross the horizon and carry quanta of area. One can generalize this construction and model a rotating black hole by adding an extra puncture colored with the angular momentum J in the 2-sphere. We compute the entropy of rotating black holes in this model and study its semi-classical limit. After performing an analytic continuation which sends the Barbero-Immirzi parameter to γ=±i, we show that the leading order term in the semi-classical expansion of the entropy reproduces the Bekenstein-Hawking law independently of the value of J.

  9. Analytical Solution of the Hyperbolic Heat Conduction Equation for Moving Semi-Infinite Medium under the Effect of Time-Dependent Laser Heat Source

    Directory of Open Access Journals (Sweden)

    R. T. Al-Khairy

    2009-01-01

source with a time-dependent capacity, while the semi-infinite body has an insulated boundary. The solution is obtained by the Laplace transform method, and solutions for different time characteristics of the heat source capacity (constant, instantaneous, and exponential) are discussed. The effect of absorption coefficients on the temperature profiles is examined in detail. It is found that the closed-form solution derived in the present study reduces to the previously obtained analytical solution when the medium velocity is set to zero.
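For reference, the governing model in such studies is the hyperbolic (non-Fourier) heat conduction equation. A standard Cattaneo–Vernotte form with relaxation time τ, thermal diffusivity α, density ρ, specific heat c, and volumetric source g(x, t) is given below; this is the generic form, not necessarily the paper's exact equation, and for a medium moving with constant velocity u the time derivative is replaced by the material derivative ∂/∂t + u ∂/∂x:

```latex
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
  = \alpha \frac{\partial^2 T}{\partial x^2}
  + \frac{1}{\rho c}\left( g + \tau \frac{\partial g}{\partial t} \right)
```

Setting τ = 0 recovers the classical parabolic heat equation with the same source term.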

  10. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

Full Text Available In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation, and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D textures to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created textures without resampling. Experimental results show that our method effectively mitigates texture fragmentation and demonstrate that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
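The blending step can be sketched per texel: each image that sees the texel contributes its color, weighted here by the cosine of the viewing angle so head-on views dominate. This weighting scheme and the sample values are illustrative assumptions, not the paper's exact blending function:

```python
import math

def blend_texel(samples):
    """samples: list of (rgb_tuple, viewing_angle_rad). Weight = cos(angle), clipped at 0."""
    weights = [max(math.cos(a), 0.0) for _, a in samples]
    total = sum(weights)
    if total == 0.0:
        return None  # texel not visible in any image
    return tuple(
        sum(w * c[i] for (c, _), w in zip(samples, weights)) / total
        for i in range(3)
    )

# Hypothetical texel observed in three oblique images.
samples = [
    ((200, 100, 50), 0.0),           # head-on view, weight 1
    ((180, 110, 60), math.pi / 3),   # 60 degrees, weight 0.5
    ((100, 100, 100), math.pi / 2),  # grazing view, weight ~0
]
print(blend_texel(samples))
```

Smoothly varying weights across texels is what suppresses the visible seams between texture regions.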

  11. Semi adiabatic theory of seasonal Markov processes

    Energy Technology Data Exchange (ETDEWEB)

    Talkner, P [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    The dynamics of many natural and technical systems are essentially influenced by a periodic forcing. Analytic solutions of the equations of motion for periodically driven systems are generally not known. Simulations, numerical solutions or in some limiting cases approximate analytic solutions represent the known approaches to study the dynamics of such systems. Besides the regime of weak periodic forces where linear response theory works, the limit of a slow driving force can often be treated analytically using an adiabatic approximation. For this approximation to hold all intrinsic processes must be fast on the time-scale of a period of the external driving force. We developed a perturbation theory for periodically driven Markovian systems that covers the adiabatic regime but also works if the system has a single slow mode that may even be slower than the driving force. We call it the semi adiabatic approximation. Some results of this approximation for a system exhibiting stochastic resonance which usually takes place within the semi adiabatic regime are indicated. (author) 1 fig., 8 refs.

  12. biomvRhsmm: Genomic Segmentation with Hidden Semi-Markov Model

    Directory of Open Access Journals (Sweden)

    Yang Du

    2014-01-01

Full Text Available High-throughput technologies like tiling arrays and next-generation sequencing (NGS) generate continuous homogeneous segments or signal peaks in the genome that represent transcripts and transcript variants (transcript mapping and quantification), regions of deletion and amplification (copy number variation), or regions characterized by particular common features like chromatin state or DNA methylation ratio (epigenetic modifications). However, the volume and output of data produced by these technologies present challenges in analysis. Here, a hidden semi-Markov model (HSMM) is implemented and tailored to handle multiple genomic profiles, to better facilitate genome annotation by assisting in the detection of transcripts, regulatory regions, and copy number variation by holistic microarray or NGS analysis. With support for various data distributions, instead of limiting itself to one specific application, the proposed hidden semi-Markov model is designed to allow modeling options that accommodate different types of genomic data and to serve as a general segmentation engine. By incorporating genomic positions into the sojourn distribution of the HSMM, with optional prior learning using annotation or previous studies, the modeling output is more biologically sensible. The proposed model has been compared with several other state-of-the-art segmentation models through simulation benchmarking, which shows that our efficient implementation achieves comparable or better sensitivity and specificity in genomic segmentation.
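The distinguishing feature of an HSMM over an HMM is the explicit sojourn (duration) distribution. This stripped-down dynamic program in the same spirit — Gaussian emissions plus a Poisson sojourn prior, two hypothetical states, nothing like the full biomvRhsmm implementation — recovers segment boundaries by maximizing emission log-likelihood plus duration log-prior:

```python
import math

MEANS = [0.0, 3.0]   # hypothetical state emission means (unit variance)
MEAN_DUR = 10.0      # Poisson sojourn prior mean

def log_dur(d):
    # log Poisson(d; MEAN_DUR), the explicit sojourn term of the HSMM
    return d * math.log(MEAN_DUR) - MEAN_DUR - math.lgamma(d + 1)

def log_emit(seg, mu):
    return sum(-0.5 * (v - mu) ** 2 for v in seg)

def segment(y):
    n = len(y)
    best = [-math.inf] * (n + 1)
    back = [None] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            ld = log_dur(j - i)
            for s, mu in enumerate(MEANS):
                score = best[i] + ld + log_emit(y[i:j], mu)
                if score > best[j]:
                    best[j] = score
                    back[j] = (i, s)
    segs, j = [], n          # trace back the optimal segmentation
    while j > 0:
        i, s = back[j]
        segs.append((i, j, s))
        j = i
    return segs[::-1]

y = [0.1 * math.sin(3 * i) for i in range(10)] + \
    [3.0 + 0.1 * math.sin(3 * i) for i in range(10)]
print(segment(y))
```

Because the sojourn prior penalizes implausibly short segments, the DP resists fragmenting a plateau into many small same-state pieces.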

  13. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    Science.gov (United States)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that $\sim$60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few ${\lambda}/D$ separations ($\sim1-5{\lambda}/D$) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  14. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  15. A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction

    Science.gov (United States)

    Wahl, E. R.; Schoelzel, C.

    2010-12-01

Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that would be parsimonious in introducing parameters needing to be estimated within a Bayesian Hierarchical Modeling (BHM) context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with counts of pollen taxa c and d (r < 0) into the ratio (a+b)/(a+b+c+d), the relationship with climate can be represented by a logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than the inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: rnum ~ Bin(n, p), where E(r|T) = p = exp(η)/[1+exp(η)] and η = α + β(T); r is the pollen ratio formed as above, rnum is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n − rnum), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced-space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM equations
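A binomial-logit GLM of this form, rnum ~ Bin(n, p) with logit(p) = α + β·T, can be fit by iteratively reweighted least squares (Newton's method on the log-likelihood). The sketch below uses synthetic data generated from known parameters with expected (noise-free) counts, so the fit should recover them almost exactly; it stands in for, and is not, the paper's R-based estimation:

```python
import math

def fit_logit_glm(T, r_num, n, iters=25):
    """IRLS for r_num[i] ~ Bin(n[i], p_i), logit(p_i) = a + b*T[i]."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        # Score vector and 2x2 Fisher information of the binomial log-likelihood
        g0 = g1 = h00 = h01 = h11 = 0.0
        for t, r, m in zip(T, r_num, n):
            p = 1.0 / (1.0 + math.exp(-(a + b * t)))
            w = m * p * (1.0 - p)
            g0 += r - m * p
            g1 += (r - m * p) * t
            h00 += w
            h01 += w * t
            h11 += w * t * t
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det   # Newton step: theta += H^-1 g
        b += (h00 * g1 - h01 * g0) / det
    return a, b

# Synthetic pollen-ratio data from known alpha = -2, beta = 0.5 (expected counts,
# no sampling noise, so the score is exactly zero at the true parameters).
alpha, beta = -2.0, 0.5
T = [float(t) for t in range(-5, 15)]
n = [500] * len(T)
r_num = [m / (1.0 + math.exp(-(alpha + beta * t))) for t, m in zip(T, n)]
a_hat, b_hat = fit_logit_glm(T, r_num, n)
print(a_hat, b_hat)
```

Inverting the fitted curve at an observed fossil ratio then gives the reconstructed temperature, which is the forward-model advantage the abstract describes.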

  16. A reduced order model to analytically infer atmospheric CO2 concentration from stomatal and climate data

    Science.gov (United States)

    Konrad, Wilfried; Katul, Gabriel; Roth-Nebelsick, Anita; Grein, Michaela

    2017-06-01

    expression derived from water vapor gas diffusion that includes anatomical traits. When combined with isotopic measurements for long-term Ci/Ca, Ca can be analytically determined and is interpreted as the time-averaged Ca that existed over the life-span of the leaf. Key advantages of the proposed ROM are: 1) the usage of isotopic data provides constraints on the reconstructed atmospheric CO2 concentration from ν, 2) the analytical form of this approach permits direct links between parameter uncertainties and reconstructed Ca, and 3) the time-scale mismatch between the application of instantaneous leaf-gas exchange expressions constrained with longer-term isotopic data is reconciled through averaging rules and sensitivity analysis. The latter point was rarely considered in prior reconstruction studies that combined models of leaf-gas exchange and isotopic data to reconstruct Ca from ν. The proposed ROM is not without its limitations given the need to a priori assume a parameter related to the control on photosynthetic rate. The work here further explores immanent constraints for the aforementioned photosynthetic parameter.

  17. Application of reconstructive tomography to the measurement of density distribution in two-phase flow

    International Nuclear Information System (INIS)

    Fincke, J.R.; Berggren, M.J.; Johnson, S.A.

    1980-01-01

    The technique of reconstructive tomography has been applied to the measurement of average density and density distribution in multiphase flows. The technique of reconstructive tomography provides a model independent method of obtaining flow field density information. The unique features of interest in application of a practical tomographic densitometer system are the limited number of data values and the correspondingly coarse reconstruction grid (0.5 by 0.5 cm). These features were studied both experimentally, through the use of prototype hardware on a 3-in. pipe, and analytically, through computer generation of simulated data. Prototypical data were taken on phantoms constructed of Plexiglas and laminated Plexiglas, wood, and polyurethane foam. Reconstructions obtained from prototype data were compared with reconstructions from the simulated data
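Limited-data reconstruction of this kind is classically handled by the algebraic reconstruction technique (ART, i.e. Kaczmarz iteration). A deliberately tiny sketch on a 2×2 density grid with six ray sums — the geometry and density values are hypothetical, not the paper's 3-in. pipe setup:

```python
def kaczmarz(A, b, iters=500):
    """ART: cyclically project the estimate onto each measurement hyperplane."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        for row, meas in zip(A, b):
            dot = sum(r * v for r, v in zip(row, x))
            norm2 = sum(r * r for r in row)
            scale = (meas - dot) / norm2
            x = [v + scale * r for v, r in zip(x, row)]
    return x

# 2x2 density grid [a, b; c, d] probed by 6 ray sums: rows, columns, diagonals.
A = [
    [1, 1, 0, 0],  # top row
    [0, 0, 1, 1],  # bottom row
    [1, 0, 1, 0],  # left column
    [0, 1, 0, 1],  # right column
    [1, 0, 0, 1],  # main diagonal
    [0, 1, 1, 0],  # anti-diagonal
]
true_density = [1.0, 0.0, 2.0, 3.0]
b = [sum(r * v for r, v in zip(row, true_density)) for row in A]
x = kaczmarz(A, b)
print(x)
```

With row and column sums alone the system is rank-deficient; adding the two diagonal rays makes the solution unique, which mirrors the trade-off between the number of beam paths and the reconstruction grid size discussed in the record.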

  18. Directed walk models of adsorbing semi-flexible polymers subject to an elongational force

    Energy Technology Data Exchange (ETDEWEB)

    Iliev, G K [Department of Mathematics and Statistics, University of Melbourne, Parkville (Australia); Orlandini, E [Dipartimento di Fisica, CNISM, Universita di Padova, Via Marzolo 8, 35131 Padova (Italy); Whittington, S G [Department of Chemistry, University of Toronto, Toronto (Canada)

    2010-08-06

    We consider several directed path models of semi-flexible polymers. In each model we associate an energy parameter for every pair of adjacent collinear steps, allowing for a model of a polymer with tunable stiffness. We introduce weightings for vertices or edges in a distinguished plane to model the interaction of a semi-flexible polymer with an impenetrable surface. We also investigate the desorption of such a polymer under the influence of an elongational force and study the order of the associated phase transitions. Using a simple low-temperature theory, we approximate and study the ground state behaviour of the models.
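The stiffness bookkeeping can be sketched with a two-state transfer matrix: each ±1 step sequence carries weight c per pair of adjacent collinear steps, and the matrix [[c, 1], [1, c]] propagates the partition function. This is an illustrative minimal model, not the paper's full adsorption/pulling-force setup:

```python
from itertools import product

def z_bruteforce(n, c):
    """Sum over all +/-1 step sequences of c**(number of adjacent collinear pairs)."""
    total = 0.0
    for steps in product((1, -1), repeat=n):
        pairs = sum(1 for a, b in zip(steps, steps[1:]) if a == b)
        total += c ** pairs
    return total

def z_transfer(n, c):
    """Same partition function via the transfer matrix [[c, 1], [1, c]]."""
    vec = [1.0, 1.0]  # weight 1 for the first step, either direction
    for _ in range(n - 1):
        vec = [c * vec[0] + vec[1], vec[0] + c * vec[1]]
    return vec[0] + vec[1]

print(z_transfer(8, 1.5), z_bruteforce(8, 1.5))
```

At c = 1 (no stiffness) the partition function reduces to the free count 2^n, and c > 1 favors straight (collinear) configurations, i.e. a stiffer chain.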

  19. SIRIUS - A one-dimensional multigroup analytic nodal diffusion theory code

    Energy Technology Data Exchange (ETDEWEB)

    Forslund, P. [Westinghouse Atom AB, Vaesteraas (Sweden)

    2000-09-01

In order to evaluate the relative merits of some proposed intranodal cross-section models, a computer code called Sirius has been developed. Sirius is a one-dimensional, multigroup analytic nodal diffusion theory code with microscopic depletion capability. Sirius provides the possibility of performing spatial homogenization and energy collapsing of cross sections. In addition, a so-called pin power reconstruction method is available for the purpose of reconstructing 'heterogeneous' pin quantities. Consequently, Sirius has the capability of performing all the calculations (incl. depletion calculations) which are an integral part of the nodal calculation procedure. In this way, an unambiguous numerical analysis of intranodal cross-section models is made possible. In this report, the theory of the nodal models implemented in Sirius, as well as the verification of the most important features of these models, is addressed.

  20. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
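The first-order Compton term in such an estimator weights each scatter site by the Klein–Nishina differential cross section. A small sketch (CGS constants, values rounded; the path-integration and detector geometry of the full estimator are omitted):

```python
import math

RE2 = 7.9407e-26  # classical electron radius squared (cm^2)
MEC2 = 510.999    # electron rest energy (keV)

def klein_nishina(e_kev, theta):
    """Unpolarized Klein-Nishina differential cross section d(sigma)/d(Omega), cm^2/sr."""
    # Compton energy ratio E'/E for scattering angle theta
    ratio = 1.0 / (1.0 + (e_kev / MEC2) * (1.0 - math.cos(theta)))
    return 0.5 * RE2 * ratio * ratio * (ratio + 1.0 / ratio - math.sin(theta) ** 2)

# Forward scatter recovers the Thomson value r_e^2; backscatter at 60 keV is weaker.
print(klein_nishina(60.0, 0.0), klein_nishina(60.0, math.pi))
```

Summing such terms over scatter sites and solid angles, attenuated along the in- and out-going ray paths, is what the GPU implementation parallelizes.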

  1. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily.
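As an illustrative sketch (not the ICPP software), the core of such a model — bias as the mean deviation from the known control-standard values, precision as the standard deviation of those deviations, plus a simple in-control criterion — can be expressed as:

```python
import math

def model_measurement_system(known, measured):
    """Estimate systematic bias and precision from control-standard results."""
    diffs = [m - k for k, m in zip(known, measured)]
    n = len(diffs)
    bias = sum(diffs) / n
    precision = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    std_err = precision / math.sqrt(n)
    in_control = abs(bias) < 3.0 * std_err  # simple 3-sigma criterion (assumed here)
    return bias, precision, in_control

# Hypothetical control standards (known vs reported concentration, arbitrary units)
known = [10.0, 20.0, 10.0, 20.0, 10.0, 20.0]
measured = [10.2, 20.1, 10.3, 20.2, 10.1, 20.3]
print(model_measurement_system(known, measured))
```

Here the consistent positive offset is flagged: the estimated bias exceeds three standard errors, so the system would fail the in-control check.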

  2. Evaluating Portfolio Value-At-Risk Using Semi-Parametric GARCH Models

    NARCIS (Netherlands)

    J.V.K. Rombouts; M.J.C.M. Verbeek (Marno)

    2009-01-01

In this paper we examine the usefulness of multivariate semi-parametric GARCH models for evaluating the Value-at-Risk (VaR) of a portfolio with arbitrary weights. We specify and estimate several alternative multivariate GARCH models for daily returns on the S&P 500 and Nasdaq indexes.
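The semi-parametric idea can be sketched in one dimension: a parametric GARCH(1,1) recursion supplies the conditional volatility, while the VaR quantile comes from the empirical distribution of the standardized residuals rather than an assumed normal law. This is not the paper's estimated multivariate models; parameters and the return series below are assumed toy values:

```python
import math

def garch_filter(returns, omega, alpha, beta):
    """Conditional variances from a GARCH(1,1) recursion with given parameters."""
    var = sum(r * r for r in returns) / len(returns)  # initialize at sample variance
    sigma2 = []
    for r in returns:
        sigma2.append(var)
        var = omega + alpha * r * r + beta * var
    return sigma2

def semiparametric_var(returns, omega, alpha, beta, level=0.05):
    sigma2 = garch_filter(returns, omega, alpha, beta)
    z = sorted(r / s2 ** 0.5 for r, s2 in zip(returns, sigma2))
    q = z[int(level * len(z))]          # empirical quantile: no normality assumed
    next_var = omega + alpha * returns[-1] ** 2 + beta * sigma2[-1]
    return -q * next_var ** 0.5         # one-day-ahead VaR (positive number)

# Deterministic toy return series (stand-in for daily index returns)
returns = [0.01 * math.sin(2.7 * i) * (1 + 0.5 * math.cos(0.3 * i)) for i in range(500)]
print(semiparametric_var(returns, omega=1e-6, alpha=0.05, beta=0.90))
```

The multivariate versions in the paper replace the scalar variance recursion with a conditional covariance model and apply the portfolio weights before taking the empirical quantile.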

  3. Tomographic reconstruction of the time-averaged density distribution in two-phase flow

    International Nuclear Information System (INIS)

    Fincke, J.R.

    1982-01-01

    The technique of reconstructive tomography has been applied to the measurement of time-average density and density distribution in a two-phase flow field. The technique of reconstructive tomography provides a model-independent method of obtaining flow-field density information. A tomographic densitometer system for the measurement of two-phase flow has two unique problems: a limited number of data values and a correspondingly coarse reconstruction grid. These problems were studied both experimentally through the use of prototype hardware on a 3-in. pipe, and analytically through computer generation of simulated data. The prototype data were taken on phantoms constructed of all Plexiglas and Plexiglas laminated with wood and polyurethane foam. Reconstructions obtained from prototype data are compared with reconstructions from the simulated data. Also presented are some representative results in a horizontal air/water flow

  4. Fabrication of metal matrix composite by semi-solid powder processing

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yufeng [Iowa State Univ., Ames, IA (United States)

    2011-01-01

    Various metal matrix composites (MMCs) are widely used in the automotive, aerospace and electrical industries due to their capability and flexibility in improving the mechanical, thermal and electrical properties of a component. However, current manufacturing technologies may suffer from insufficient process stability and reliability and inadequate economic efficiency and may not be able to satisfy the increasing demands placed on MMCs. Semi-solid powder processing (SPP), a technology that combines traditional powder metallurgy and semi-solid forming methods, has potential to produce MMCs with low cost and high efficiency. In this work, the analytical study and experimental investigation of SPP on the fabrication of MMCs were explored. An analytical model was developed to understand the deformation mechanism of the powder compact in the semi-solid state. The densification behavior of the Al6061 and SiC powder mixtures was investigated with different liquid fractions and SiC volume fractions. The limits of SPP were analyzed in terms of reinforcement phase loading and its impact on the composite microstructure. To explore adoption of new materials, carbon nanotube (CNT) was investigated as a reinforcing material in aluminum matrix using SPP. The process was successfully modeled for the mono-phase powder (Al6061) compaction and the density and density distribution were predicted. The deformation mechanism at low and high liquid fractions was discussed. In addition, the compaction behavior of the ceramic-metal powder mixture was understood, and the SiC loading limit was identified by parametric study. For the fabrication of CNT reinforced Al6061 composite, the mechanical alloying of Al6061-CNT powders was first investigated. A mathematical model was developed to predict the CNT length change during the mechanical alloying process. The effects of mechanical alloying time and processing temperature during SPP were studied on the mechanical, microstructural and

  5. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    Science.gov (United States)

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
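The orthonormal parameterization of input trajectories can be sketched as projection onto a truncated Fourier basis, so an optimizer works on a handful of coefficients instead of the full trajectory. The basis choice and the heating profile below are illustrative assumptions, not the PDDF model itself:

```python
import math

def fourier_basis(n_points, n_terms):
    """Truncated Fourier basis, orthonormal under the (1/n)*sum inner product."""
    basis = [[1.0 for _ in range(n_points)]]
    for k in range(1, n_terms):
        basis.append([math.sqrt(2.0) * math.cos(2 * math.pi * k * j / n_points)
                      for j in range(n_points)])
    return basis

def to_coefficients(traj, basis):
    n = len(traj)
    return [sum(b[j] * traj[j] for j in range(n)) / n for b in basis]

def from_coefficients(coeffs, basis):
    n = len(basis[0])
    return [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(n)]

# A smooth hypothetical input profile is captured by a few coefficients.
n = 64
traj = [1.0 + 0.5 * math.cos(2 * math.pi * j / n) for j in range(n)]
basis = fourier_basis(n, 4)
coeffs = to_coefficients(traj, basis)
recon = from_coefficients(coeffs, basis)
err = max(abs(a - b) for a, b in zip(traj, recon))
print(coeffs, err)
```

Optimal control then searches over the low-dimensional coefficient vector, with the model mapping coefficients (plus initial states and parameters) to output trajectories.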

  6. Feeling like me again: a grounded theory of the role of breast reconstruction surgery in self-image.

    Science.gov (United States)

    McKean, L N; Newman, E F; Adair, P

    2013-07-01

    The present study aimed to develop a theoretical understanding of the role of breast reconstruction in women's self-image. Semi-structured interviews were conducted with 10 women from breast cancer support groups who had undergone breast reconstruction surgery. A grounded theory methodology was used to explore their experiences. The study generated a model of 'breast cancer, breast reconstruction and self-image', with a core category entitled 'feeling like me again' and two principal categories of 'normal appearance' and 'normal life'. A further two main categories, 'moving on' and 'image of sick person' were generated. The results indicated a role of breast reconstruction in several aspects of self-image including the restoration of pre-surgery persona, which further promoted adjustment. © 2013 John Wiley & Sons Ltd.

  7. Fully three-dimensional image reconstruction in radiology and nuclear medicine. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The proceedings of the meeting on "fully three-dimensional image reconstruction in radiology and nuclear medicine" cover contributions on the following topics: CT imaging; PET imaging; fidelity; iterative and few-view CT; analytical CT; analytical PET/SPECT and Compton methods; doses and spectral methods; phase contrast; compressed sensing and sparse reconstruction; special issues; and motion and cardiac imaging.

  8. Analytical model of impedance in elliptical beam pipes

    CERN Document Server

    Pesah, Arthur Chalom

    2017-01-01

    Beam instabilities are among the main limitations in building higher intensity accelerators. A good impedance model for every accelerator is necessary in order to build components that minimize the probability of instabilities caused by beam-environment interaction, and to understand which components to change when the intensity is increased. Most accelerator components have their impedance simulated with the finite element method (using software such as CST Studio), but simple components such as circular or flat pipes are modeled analytically, with lower computation time and higher precision than their simulated counterparts. Elliptical beam pipes, while being a simple component present in some accelerators, still lack a good analytical model valid over the whole range of velocities and frequencies. In this report, we present a general framework to study the impedance of elliptical pipes analytically. We developed a model for both longitudinal and transverse impedance, first in the case of...

  9. A Semi-Analytical Methodology for Multiwell Productivity Index of Well-Industry-Production-Scheme in Tight Oil Reservoirs

    Directory of Open Access Journals (Sweden)

    Guangfeng Liu

    2018-04-01

    Full Text Available Recently, the well-industry-production-scheme (WIPS) has attracted increasing attention as a means to improve tight oil recovery. However, multi-well pressure interference (MWPI) induced by WIPS strongly challenges traditional transient pressure analysis methods, which focus on single multi-fractured horizontal wells (SMFHWs) without MWPI. Therefore, a semi-analytical methodology for the multiwell productivity index (MPI) was proposed to study well performance under a WIPS scheme in a tight reservoir. To facilitate methodology development, the conceptual models of the tight formation and the WIPS scheme were first described. Secondly, seepage models of the tight reservoir and hydraulic fractures (HFs) were sequentially established and then dynamically coupled. Numerical simulation was utilized to validate our model. Finally, identification of flow regimes and sensitivity analysis were conducted. Our results showed good agreement between our proposed model and numerical simulation; moreover, our approach also offered promising calculation speed over numerical simulation. Some expected flow regimes were significantly distorted due to WIPS: the slope of the type curves that characterize the linear or bi-linear flow regime exceeds 0.5 or 0.25, respectively, and the horizontal line that characterizes the radial flow regime also lies above 0.5. The smaller the oil rate, the more severely the flow regimes were distorted. Well rate mainly determines the degree of distortion of the MPI curves, while fracture length, well spacing and fracture spacing mainly determine when the distortion of the MPI curves occurs. The bigger the well rate, the more severely the MPI curves are distorted, while as the well spacing decreases, fracture length increases, or fracture spacing increases, the onset of MWPI becomes earlier. The stress sensitivity coefficient mainly affects the MPI at the formation pseudo-radial flow stage and has almost no influence on the occurrence of MWPI. This work gains some

  10. 33 CFR 385.33 - Revisions to models and analytical tools.

    Science.gov (United States)

    2010-07-01

    ... on a case-by-case basis what documentation is appropriate for revisions to models and analytic tools... analytical tools. 385.33 Section 385.33 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE... Incorporating New Information Into the Plan § 385.33 Revisions to models and analytical tools. (a) In carrying...

  11. Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping

    Directory of Open Access Journals (Sweden)

    Suxing Liu

    2017-09-01

    Full Text Available Accurate high-resolution three-dimensional (3D) models are essential for non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that requires only a single camera and a rotation stand. Our method is based on the structure-from-motion method with a SIFT image feature descriptor. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deduced the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant could be successfully reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive plant phenotyping.

  12. Accurate Modelling of Surface Currents and Internal Tides in a Semi-enclosed Coastal Sea

    Science.gov (United States)

    Allen, S. E.; Soontiens, N. K.; Dunn, M. B. H.; Liu, J.; Olson, E.; Halverson, M. J.; Pawlowicz, R.

    2016-02-01

    The Strait of Georgia is a deep (400 m), strongly stratified, semi-enclosed coastal sea on the west coast of North America. We have configured a baroclinic model of the Strait of Georgia and surrounding coastal waters using the NEMO ocean community model. We run daily nowcasts and forecasts and publish our sea-surface results (including storm surge warnings) to the web (salishsea.eos.ubc.ca/storm-surge). Tides in the Strait of Georgia are mixed and large. The baroclinic model and previous barotropic models accurately represent tidal sea-level variations and depth mean currents. The baroclinic model reproduces accurately the diurnal but not the semi-diurnal baroclinic tidal currents. In the Southern Strait of Georgia, strong internal tidal currents at the semi-diurnal frequency are observed. Strong semi-diurnal tides are also produced in the model, but are almost 180 degrees out of phase with the observations. In the model, in the surface, the barotropic and baroclinic tides reinforce, whereas the observations show that at the surface the baroclinic tides oppose the barotropic. As such the surface currents are very poorly modelled. Here we will present evidence of the internal tidal field from observations. We will discuss the generation regions of the tides, the necessary modifications to the model required to correct the phase, the resulting baroclinic tides and the improvements in the surface currents.

  13. Stability of numerical method for semi-linear stochastic pantograph differential equations

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-01-01

    Full Text Available Abstract As a particular expression of stochastic delay differential equations, stochastic pantograph differential equations have been widely used in nonlinear dynamics, quantum mechanics, and electrodynamics. In this paper, we mainly study the stability of analytical solutions and numerical solutions of semi-linear stochastic pantograph differential equations. Some suitable conditions for the mean-square stability of an analytical solution are obtained. We then prove the general mean-square stability of the exponential Euler method for a numerical solution of semi-linear stochastic pantograph differential equations; that is, if an analytical solution is stable, then the exponential Euler method applied to the system is mean-square stable for arbitrary step size $h>0$. Numerical examples further illustrate the obtained theoretical results.
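    A minimal sketch of the exponential Euler scheme for a semi-linear stochastic pantograph equation of the form dX = (aX + f(X(qt))) dt + g(X) dW is given below. The coefficients a, f, g, the delay parameter q, and the step size are all illustrative assumptions, not the paper's test problems; the linear part is integrated exactly and the pantograph delay q·t is read off the simulation grid.

    ```python
    import numpy as np

    # Hedged sketch (illustrative coefficients, not from the paper):
    # exponential Euler for  dX = (a*X + f(X(q*t))) dt + g(X) dW.
    def exp_euler_pantograph(a=-2.0, q=0.5, h=0.01, T=10.0,
                             n_paths=2000, x0=1.0, seed=0):
        rng = np.random.default_rng(seed)
        n = int(T / h)
        X = np.empty((n_paths, n + 1))
        X[:, 0] = x0
        for k in range(n):
            jd = int(np.floor(q * k))      # grid index at or before q*t_k
            xd = X[:, jd]                  # delayed state X(q*t_k)
            x = X[:, k]
            f = 0.1 * np.sin(xd)           # nonlinear drift part (illustrative)
            g = 0.2 * x                    # diffusion term (illustrative)
            dW = rng.normal(0.0, np.sqrt(h), n_paths)
            # exponential Euler: the linear part a*X is integrated exactly
            X[:, k + 1] = np.exp(a * h) * (x + h * f + g * dW)
        return X

    X = exp_euler_pantograph()
    ms0 = np.mean(X[:, 0] ** 2)            # initial mean-square
    msT = np.mean(X[:, -1] ** 2)           # final mean-square
    ```

    With a stable linear part (a < 0 dominating the perturbations), the sample mean-square decays, which is the behavior the paper's stability result guarantees for arbitrary h > 0.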

  14. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Because such examination configurations are particularly well suited to analytical modeling, a physical model has been developed to run on portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect and the examination parameters. Moreover, the model makes it possible to associate real signals with simulated signals.

  15. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  16. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    In order to study further the promising free piston Stirling engine architecture, there is a need of an analytical thermodynamic model which could be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities on the engine. An analytical model which encompasses the critical flaws of the regenerator and furthermore the heat exchangers effectivenesses has been developed. This model has been validated using the whole range of the experimental data available from the General Motor GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  17. Food Reconstruction Using Isotopic Transferred Signals (FRUITS): A Bayesian Model for Diet Reconstruction

    Czech Academy of Sciences Publication Activity Database

    Fernandes, R.; Millard, A.R.; Brabec, Marek; Nadeau, M.J.; Grootes, P.

    2014-01-01

    Roč. 9, č. 2 (2014), Art. no. e87436 E-ISSN 1932-6203 Institutional support: RVO:67985807 Keywords: ancient diet reconstruction * stable isotope measurements * mixture model * Bayesian estimation * Dirichlet prior Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.234, year: 2014

  18. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Science.gov (United States)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
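    The maximum likelihood expectation maximization (MLEM) algorithm named above can be sketched as follows. The small dense system matrix and Poisson data here are synthetic stand-ins for the paper's tetrahedral-mesh and voxel projectors; the update rule is the standard MLEM iteration, not the authors' exact implementation.

    ```python
    import numpy as np

    # Hedged sketch of the standard MLEM update on a flat image vector x:
    #   x <- x / (A^T 1) * A^T ( y / (A x) )
    def mlem(A, y, n_iter=200):
        x = np.ones(A.shape[1])          # uniform initial estimate
        sens = A.sum(axis=0)             # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ x                 # forward projection
            ratio = y / np.maximum(proj, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    rng = np.random.default_rng(1)
    A = rng.uniform(0.0, 1.0, (40, 20))              # toy system matrix
    x_true = rng.uniform(0.5, 2.0, 20)               # toy activity
    y = rng.poisson(A @ x_true).astype(float)        # noisy projection data
    x_hat = mlem(A, y)
    ```

    A useful sanity check is that MLEM preserves total counts: after each iteration the forward-projected estimate sums to the same total as the data.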

  19. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    International Nuclear Information System (INIS)

    Pereira, N F; Sitek, A

    2010-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  20. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)

    2010-09-21

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  1. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of network model using pore space information, and computation of petrophysical and reservoir engineering properties from network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional up scaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation functions. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  2. A semi-automated 2D/3D marker-based registration algorithm modelling prostate shrinkage during radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Budiharto, Tom; Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Verstraete, Jan; Heuvel, Frank Van den; Depuydt, Tom; Oyen, Raymond; Haustermans, Karin

    2009-01-01

    Background and purpose: Currently, most available patient alignment tools based on implanted markers use manual marker matching and rigid registration transformations to measure the needed translational shifts. To quantify the particular effect of prostate gland shrinkage, implanted gold markers were tracked during a course of radiotherapy including an isotropic scaling factor to model prostate shrinkage. Materials and methods: Eight patients with prostate cancer had gold markers implanted transrectally and seven were treated with (neo) adjuvant androgen deprivation therapy. After patient alignment to skin tattoos, orthogonal electronic portal images (EPIs) were taken. A semi-automated 2D/3D marker-based registration was performed to calculate the necessary couch shifts. The registration consists of a rigid transformation combined with an isotropic scaling to model prostate shrinkage. Results: The inclusion of an isotropic shrinkage model in the registration algorithm cancelled the corresponding increase in registration error. The mean scaling factor was 0.89 ± 0.09. For all but two patients, a decrease of the isotropic scaling factor during treatment was observed. However, there was almost no difference in the translation offset between the manual matching of the EPIs to the digitally reconstructed radiographs and the semi-automated 2D/3D registration. A decrease in the intermarker distance was found correlating with prostate shrinkage rather than with random marker migration. Conclusions: Inclusion of shrinkage in the registration process reduces registration errors during a course of radiotherapy. Nevertheless, this did not lead to a clinically significant change in the proposed table translations when compared to translations obtained with manual marker matching without a scaling correction

  3. Improvements on Semi-Classical Distorted-Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sun Weili; Watanabe, Y.; Kuwata, R. [Kyushu Univ., Fukuoka (Japan); Kohno, M.; Ogata, K.; Kawai, M.

    1998-03-01

    A method of improving the Semi-Classical Distorted Wave (SCDW) model in terms of the Wigner transform of the one-body density matrix is presented. The finite size effect of atomic nuclei can be taken into account by using the single-particle wave functions for a harmonic oscillator or Woods-Saxon potential, instead of those based on the local Fermi-gas model which were incorporated into the previous SCDW model. We carried out a preliminary SCDW calculation of the 160 MeV (p,p′x) reaction on {sup 90}Zr with the Wigner transform of harmonic oscillator wave functions. It is shown that the calculated angular distributions increase markedly at backward angles relative to the previous ones, and the agreement with the experimental data is improved. (author)

  4. Reconstruction of hyperspectral image using matting model for classification

    Science.gov (United States)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which makes them unsuitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model assumes that each spectral band of an HSI can be decomposed into three components, i.e., the alpha channel, spectral foreground, and spectral background. First, one spectral band of the HSI with more refined information than most other bands is selected and referred to as the alpha channel of the HSI; it is then used to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on a first-order neighborhood system weighted classifier, are applied to the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
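    The band-wise decomposition described above can be illustrated with a toy recombination step. The alpha channel, foreground, and background arrays below are synthetic, and the blend rule band = alpha·F + (1 − alpha)·B is an assumed reading of the matting model, not the authors' exact combination operation.

    ```python
    import numpy as np

    # Hedged sketch: recombine an HSI from a shared alpha channel and
    # per-band spectral foreground/background (all synthetic data).
    rng = np.random.default_rng(0)
    h, w, bands = 8, 8, 5
    alpha = rng.uniform(0.0, 1.0, (h, w))          # shared alpha channel
    F = rng.uniform(0.5, 1.0, (h, w, bands))       # spectral foreground
    B = rng.uniform(0.0, 0.5, (h, w, bands))       # spectral background

    # each band is an alpha-blend of foreground and background
    hsi = alpha[..., None] * F + (1.0 - alpha[..., None]) * B
    ```

    Because each reconstructed value is a convex combination, every pixel of `hsi` lies between the corresponding foreground and background values, which is an easy property to verify.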

  5. Comparison of physical and semi-empirical hydraulic models for flood inundation mapping

    Science.gov (United States)

    Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.

    2016-12-01

    Various hydraulic/GIS-based tools can be used for illustrating the spatial extent of flooding for first responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off between accuracy, workability and computational demand in detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of the surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of spatial resolution in topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that in a flat, urbanized area with a controlled/managed river channel, semi-empirical models tend to underestimate the inundation extent by around 40% compared to the physical models, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat non-urbanized terrain.

  6. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  7. SEMI-COMPETING RISKS ON A TRIVARIATE WEIBULL SURVIVAL MODEL

    Directory of Open Access Journals (Sweden)

    Jenq-Daw Lee

    2008-07-01

    Full Text Available A setting of a trivariate survival function using the semi-competing risks concept is proposed, in which a terminal event can only occur after the other events. The Stanford Heart Transplant data are reanalyzed using a trivariate Weibull distribution model with the proposed survival function.

  8. Verifying three-dimensional skull model reconstruction using cranial index of symmetry.

    Directory of Open Access Journals (Sweden)

    Woon-Man Kung

    Full Text Available BACKGROUND: Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM implant in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using cranial index of symmetry (CIS. MATERIALS AND METHODS: From January 2011 to June 2012, decompressive craniectomy (DC was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. RESULTS: CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47-99.84. CIS scores of these CAD models were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (p<0.001, p = 0.064, p = 0.021 respectively, Wilcoxon matched pairs signed rank test. These data evidenced the highly accurate symmetry of these CAD models with regular contours. CONCLUSIONS: CIS calculation is beneficial to assess aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.
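    As a hedged illustration of a symmetry score in the spirit of the cranial index of symmetry (CIS), the sketch below measures the percent overlap between a binary model slice and its mirror image about the midline; the study's exact CIS definition may differ from this toy formulation.

    ```python
    import numpy as np

    # Hedged sketch (assumed definition, not the paper's exact CIS):
    # percent overlap between a binary mask and its left-right mirror.
    def symmetry_score(mask):
        mirrored = mask[:, ::-1]                       # mirror about midline
        inter = np.logical_and(mask, mirrored).sum()
        union = np.logical_or(mask, mirrored).sum()
        return 100.0 * inter / union

    mask = np.zeros((10, 10), dtype=bool)
    mask[2:8, 2:8] = True                              # perfectly symmetric shape
    score = symmetry_score(mask)                       # -> 100.0

    asym = mask.copy()
    asym[3, 0] = True                                  # break the symmetry
    score_asym = symmetry_score(asym)                  # drops below 100
    ```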

  9. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    Science.gov (United States)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  10. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
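The contrast-to-noise ratio used in comparisons like this one is straightforward to compute from ROI statistics. A minimal sketch with hypothetical Hounsfield-unit samples and one common CNR definition (the abstract does not state the study's exact formula):

```python
import numpy as np

def contrast_to_noise(roi, background):
    # One common CNR definition: (mean ROI - mean background) / SD(background).
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return (roi.mean() - background.mean()) / background.std(ddof=1)

liver_hu = [65.0, 62.0, 68.0, 64.0]        # hypothetical ROI samples (HU)
fat_hu = [-95.0, -102.0, -98.0, -100.0]    # hypothetical background samples (HU)
cnr = contrast_to_noise(liver_hu, fat_hu)
```

Iterative reconstructions raise CNR mainly by shrinking the denominator (background noise) at matched contrast.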

  11. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes excessive energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
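As a rough illustration of how a genetic algorithm can search over job orderings, here is a toy sketch: hypothetical job durations and a total-completion-time fitness stand in for the paper's cluster performance-estimation module, with order crossover and swap mutation. This is not the authors' model, only the generic technique it builds on:

```python
import random

def fitness(order, durations):
    # Total completion time of jobs executed in sequence (lower is better).
    t, total = 0, 0
    for j in order:
        t += durations[j]
        total += t
    return total

def crossover(a, b):
    # Order crossover (OX): keep a slice of parent a, fill the rest from b.
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [g for g in b if g not in child]
    for idx in range(len(a)):
        if child[idx] is None:
            child[idx] = rest.pop(0)
    return child

def mutate(order, rate=0.1):
    # Occasionally swap two positions in the job order.
    order = order[:]
    if random.random() < rate:
        i, k = random.sample(range(len(order)), 2)
        order[i], order[k] = order[k], order[i]
    return order

def ga_schedule(durations, pop_size=30, generations=100, seed=0):
    random.seed(seed)
    n = len(durations)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, durations))
        elite = pop[:pop_size // 2]                      # elitist selection
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda o: fitness(o, durations))

durations = [5, 2, 8, 1, 4]   # hypothetical job run times
best = ga_schedule(durations)
```

For this fitness the optimum is shortest-job-first; the GA finds it (or a near-optimal order) without knowing that rule, which is the point of the search-based formulation.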

  12. The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting

    International Nuclear Information System (INIS)

    Schuster, T; Schöpfer, F; Rieder, A

    2012-01-01

This article concerns the method of approximate inverse to solve semi-discrete, linear operator equations in Banach spaces. Semi-discrete means that we search for a solution in an infinite-dimensional Banach space having only a finite number of data available. In this sense the situation is applicable to a large variety of applications where a measurement process delivers a discretization of an infinite-dimensional data space. The method of approximate inverse computes scalar products of the data with pre-computed reconstruction kernels which are associated with mollifiers and the dual of the model operator. The convergence, approximation power and regularization property of this method when applied to semi-discrete operator equations in Hilbert spaces have been investigated in three prequels to this paper. Here we extend these results to a Banach space setting. We prove convergence and stability for general Banach spaces and reproduce the results specifically for the integration operator acting on the space of continuous functions. (paper)
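The core reconstruction formula behind the method can be stated compactly in generic approximate-inverse notation (a sketch; the symbols are the standard ones, not necessarily those of the paper): a mollified value of the solution is obtained by pairing the data with a pre-computed kernel that solves the dual equation.

```latex
% Mollified solution value at x, with mollifier e_\gamma:
f_\gamma(x) \;=\; \langle f,\, e_\gamma(x,\cdot) \rangle .
% If the reconstruction kernel \psi_\gamma(x) solves the dual (adjoint) equation
A^{*}\,\psi_\gamma(x) \;=\; e_\gamma(x,\cdot),
% then, for data g = A f, evaluation reduces to a scalar product with the data:
f_\gamma(x) \;=\; \langle f,\, A^{*}\psi_\gamma(x) \rangle
           \;=\; \langle A f,\, \psi_\gamma(x) \rangle
           \;=\; \langle g,\, \psi_\gamma(x) \rangle .
```

In the semi-discrete setting only finitely many functionals of g are available, so the scalar product becomes a finite weighted sum over the measurements.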

  13. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, S.; Brincker, Rune

An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside...... the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-linearly on local elongation corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model...... are compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load...

  14. Semi-analytical solutions for flow to a well in an unconfined-fractured aquifer system

    Science.gov (United States)

    Sedghi, Mohammad M.; Samani, Nozar

    2015-09-01

    Semi-analytical solutions of flow to a well in an unconfined single porosity aquifer underlain by a fractured double porosity aquifer, both of infinite radial extent, are obtained. The upper aquifer is pumped at a constant rate from a pumping well of infinitesimal radius. The solutions are obtained via Laplace and Hankel transforms and are then numerically inverted to time domain solutions using the de Hoog et al. algorithm and Gaussian quadrature. The results are presented in the form of dimensionless type curves. The solution takes into account the effects of pumping well partial penetration, water table with instantaneous drainage, leakage with storage in the lower aquifer into the upper aquifer, and storativity and hydraulic conductivity of both fractures and matrix blocks. Both spheres and slab-shaped matrix blocks are considered. The effects of the underlying fractured aquifer hydraulic parameters on the dimensionless drawdown produced by the pumping well in the overlying unconfined aquifer are examined. The presented solution can be used to estimate hydraulic parameters of the unconfined and the underlying fractured aquifer by type curve matching techniques or with automated optimization algorithms. Errors arising from ignoring the underlying fractured aquifer in the drawdown distribution in the unconfined aquifer are also investigated.
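The time-domain solutions here come from numerical inversion of Laplace-domain expressions; the paper uses the de Hoog et al. algorithm, but the idea can be illustrated with the simpler Gaver-Stehfest scheme (a sketch; the transform pair at the end is chosen only to check the inversion, it is not the well-flow solution):

```python
from math import exp, factorial, log

def stehfest_weights(N=12):
    # Gaver-Stehfest coefficients V_k (N must be even).
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    # Approximate f(t) from its Laplace transform F(p), sampled on the real axis
    # at p = k ln(2)/t:  f(t) ~ (ln 2 / t) * sum_k V_k F(k ln2 / t).
    a = log(2.0) / t
    return a * sum(Vk * F(k * a)
                   for k, Vk in enumerate(stehfest_weights(N), start=1))

# Sanity check on a known pair: F(p) = 1/(p+1)  <->  f(t) = exp(-t).
approx = invert_laplace(lambda p: 1.0 / (p + 1.0), t=1.0)
```

Gaver-Stehfest only needs real-axis samples of F, which is why it is popular for smooth drawdown solutions; de Hoog's method, used in the paper, is more robust for oscillatory or discontinuous f(t).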

  15. A Semi-Analytical Approach for the Response of Nonlinear Conservative Systems

    DEFF Research Database (Denmark)

    Kimiaeifar, Amin; Barari, Amin; Fooladi, M

    2011-01-01

This work applies the parameter expanding method (PEM), a powerful analytical technique, to obtain the exact solution of nonlinear problems in classical dynamics. The Lagrange method is employed to derive the governing equations. The nonlinear governing equations are solved analytically by ...

  16. Iterative reconstruction: how it works, how to apply it

    Energy Technology Data Exchange (ETDEWEB)

    Seibert, James Anthony [University of California Davis Medical Center, Department of Radiology, Sacramento, CA (United States)

    2014-10-15

Computed tomography acquires X-ray projection data from multiple angles through an object to generate a tomographic rendition of its attenuation characteristics. Filtered back projection is a fast, closed analytical solution to the reconstruction process, whereby all projections are equally weighted, but is prone to deliver inadequate image quality when the dose levels are reduced. Iterative reconstruction is an algorithmic method that uses statistical and geometric models to variably weight the image data in a process that can be solved iteratively to independently reduce noise and preserve resolution and image quality. Applications of this technology in a clinical setting can result in lower dose on the order of 20-40% compared to a standard filtered back projection reconstruction for most exams. A carefully planned implementation strategy and methodological approach is necessary to achieve the goals of lower dose with uncompromised image quality. (orig.)
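The contrast between one-shot filtered back projection and iterative weighting can be illustrated with the classic SIRT update, one of the simplest iterative reconstruction algorithms (a toy sketch on a 2x2 image probed by four ray sums; hypothetical geometry, not any vendor's algorithm):

```python
import numpy as np

def sirt(A, y, iters=200):
    # Simultaneous Iterative Reconstruction Technique (SIRT):
    #   x <- x + C * A^T (R * (y - A x)),
    # with R and C the inverse row- and column-sum weights. The estimate is
    # refined against the measured projections instead of being formed in one
    # analytic back-projection pass.
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += C * (A.T @ (R * (y - A @ x)))
    return x

# Toy 2x2 "image" (flattened to 4 pixels) and 4 ray sums: two rows, two columns.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
truth = np.array([1.0, 2.0, 3.0, 4.0])
x = sirt(A, A @ truth)
```

With noise-free, consistent data the iterates converge to the image; with noisy low-dose data the same framework accepts statistical weights per ray, which is the property the clinical algorithms exploit.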

  17. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)
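The classic real-valued MLEM update that the proposed complex-image method builds on can be sketched in a few lines (toy nonnegative system matrix; the paper's remodeling of EM for complex MRI data and coil sensitivities is not reproduced here):

```python
import numpy as np

def mlem(A, y, iters=500):
    # Maximum-likelihood EM for a nonnegative linear model y ~ Poisson(A x):
    #   x <- x * [A^T (y / (A x))] / (A^T 1)
    # The update is multiplicative, so iterates stay nonnegative.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity term A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy consistent, noise-free system.
A = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [1.0, 1.0]])
truth = np.array([2.0, 3.0])
x = mlem(A, A @ truth)
```

The monotone likelihood increase of this update is the convergence property the abstract contrasts with conjugate-gradient reconstruction; extending it to complex-valued MRI images is exactly the remodeling the paper contributes.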

  18. Analytical dynamic modeling of fast trilayer polypyrrole bending actuators

    International Nuclear Information System (INIS)

    Amiri Moghadam, Amir Ali; Moavenian, Majid; Tahani, Masoud; Torabi, Keivan

    2011-01-01

Analytical modeling of conjugated polymer actuators with complicated electro-chemo-mechanical dynamics is an interesting area for research, due to the wide range of applications including biomimetic robots and biomedical devices. Although there have been extensive reports on modeling the electrochemical dynamics of polypyrrole (PPy) bending actuators, mechanical dynamics modeling of the actuators remains unexplored. PPy actuators can operate with low voltage while producing large displacement in comparison to robotic joints; they do not have friction or backlash, but they suffer from some disadvantages such as creep and hysteresis. In this paper, a complete analytical dynamic model for fast trilayer polypyrrole bending actuators has been proposed and named the analytical multi-domain dynamic actuator (AMDDA) model. First an electrical admittance model of the actuator will be obtained based on a distributed RC line; subsequently a proper mechanical dynamic model will be derived, based on Hamilton's principle. The proposed modeling approach will be validated against recently published experimental results.

  19. Complex Empiricism and the Quantification of Uncertainty in Paleoclimate Reconstructions

    Science.gov (United States)

    Brumble, K. C.

    2014-12-01

Because the global climate cannot be observed directly, and because of vast and noisy data sets, climate science is a rich field to study how computational statistics informs what it means to do empirical science. Traditionally held virtues of empirical science and empirical methods like reproducibility, independence, and straightforward observation are complicated by representational choices involved in statistical modeling and data handling. Examining how climate reconstructions instantiate complicated empirical relationships between model, data, and predictions reveals that the path from data to prediction does not match traditional conceptions of empirical inference either. Rather, the empirical inferences involved are "complex" in that they require articulation of a good deal of statistical processing wherein assumptions are adopted and representational decisions made, often in the face of substantial uncertainties. Proxy reconstructions are both statistical and paleoclimate science activities aimed at using a variety of proxies to reconstruct past climate behavior. Paleoclimate proxy reconstructions also involve complex data handling and statistical refinement, leading to the current emphasis in the field on the quantification of uncertainty in reconstructions. In this presentation I explore how the processing needed for the correlation of diverse, large, and messy data sets necessitates the explicit quantification of the uncertainties stemming from wrangling proxies into manageable suites. I also address how semi-empirical pseudo-proxy methods allow for the exploration of signal detection in data sets, and serve as intermediary steps for statistical experimentation.

  20. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

Analytical local model potentials for modeling the interaction in an atom reduce the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr; the values of the four parameters are shell-independent and obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of electrons are obtained by solving the radial Schrödinger equation with the new form of potential function by Numerov's numerical method. The results show that the new form of potential function is suitable for high-, medium- and low-Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present potential function. (atomic and molecular physics)

  1. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    International Nuclear Information System (INIS)

    Young, S; Hoffman, J; McNitt-Gray, M; Noo, F

    2016-01-01

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low
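The per-voxel analytic update at the heart of ICD for a penalized-least-squares objective with a quadratic penalty can be sketched as follows (a tiny 1D "image" with chain neighbors; a random matrix stands in for the stored CT system matrix, and the penalty is summed once over each unordered neighbor pair):

```python
import numpy as np

def icd_pls(A, y, neighbors, beta=0.1, iters=2000):
    # Iterative coordinate descent (ICD) for
    #   f(x) = ||y - A x||^2 + beta * sum over neighbor pairs (x_j - x_k)^2,
    # minimizing analytically over one voxel at a time with the rest fixed:
    #   delta_j = (a_j . r - beta * sum_k (x_j - x_k)) / (||a_j||^2 + beta |N(j)|)
    n = A.shape[1]
    x = np.zeros(n)
    r = y.astype(float).copy()        # running residual y - A x
    col_sq = (A * A).sum(axis=0)      # ||a_j||^2 for each column of A
    for _ in range(iters):
        for j in range(n):
            nb = neighbors[j]
            pen = beta * sum(x[j] - x[k] for k in nb)
            delta = (A[:, j] @ r - pen) / (col_sq[j] + beta * len(nb))
            x[j] += delta
            r -= delta * A[:, j]      # keep residual consistent with updated x
    return x

# 4-voxel chain with symmetric adjacency; hypothetical 8x4 system matrix.
rng = np.random.default_rng(0)
A = rng.random((8, 4))
truth = np.array([1.0, 1.2, 0.8, 1.0])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = icd_pls(A, A @ truth, neighbors, beta=0.01)
```

Maintaining the running residual is what makes each 1D minimization cheap; the paper's method applies the same idea with 8 in-plane neighbors and the stored helical system matrix.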

  2. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    Energy Technology Data Exchange (ETDEWEB)

    Young, S; Hoffman, J; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States); Noo, F [University of Utah, Salt Lake City, UT (United States)

    2016-06-15

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low

  3. A Semi-implicit Numerical Scheme for a Two-dimensional, Three-field Thermo-Hydraulic Modeling

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Jeong, Jaejoon

    2007-07-01

The behavior of two-phase flow is modeled, depending on the purpose, by either the homogeneous model, the drift flux model, or the separated flow model. Among these models, in the separated flow model the behavior of each phase is described by its own governing equation, together with interphase models which describe the thermal and mechanical interactions between the phases involved. In this study, a semi-implicit numerical scheme for a two-dimensional, transient, two-fluid, three-field model is derived. The work is an extension of the previous study of the staggered, semi-implicit numerical scheme in one-dimensional geometry (KAERI/TR-3239/2006). The two-dimensional extension is performed by specifying the relevant governing equation set and applying the related finite differencing method. The procedure for employing the semi-implicit scheme is also described in detail. Verifications are performed on a 2-dimensional vertical plate for single-phase and two-phase flows. The calculations verify mass and energy conservation. The symmetric flow behavior of the verification problem also confirms the momentum conservation of the numerical scheme.

  4. Analytic nearest neighbour model for FCC metals

    International Nuclear Information System (INIS)

    Idiodi, J.O.A.; Garba, E.J.D.; Akinlade, O.

    1991-06-01

    A recently proposed analytic nearest-neighbour model for fcc metals is criticised and two alternative nearest-neighbour models derived from the separable potential method (SPM) are recommended. Results for copper and aluminium illustrate the utility of the recommended models. (author). 20 refs, 5 tabs

  5. Analytical eigenstates for the quantum Rabi model

    International Nuclear Information System (INIS)

    Zhong, Honghua; Xie, Qiongtao; Lee, Chaohong; Batchelor, Murray T

    2013-01-01

    We develop a method to find analytical solutions for the eigenstates of the quantum Rabi model. These include symmetric, anti-symmetric and asymmetric analytic solutions given in terms of the confluent Heun functions. Both regular and exceptional solutions are given in a unified form. In addition, the analytic conditions for determining the energy spectrum are obtained. Our results show that conditions proposed by Braak (2011 Phys. Rev. Lett. 107 100401) are a type of sufficiency condition for determining the regular solutions. The well-known Judd isolated exact solutions appear naturally as truncations of the confluent Heun functions. (paper)

  6. Advanced Semi-Implicit Method (ASIM) for hyperbolic two-fluid model

    International Nuclear Information System (INIS)

    Lee, Sung Jae; Chung, Moon Sun

    2003-01-01

    Introducing the interfacial pressure jump terms based on the surface tension into the momentum equations of two-phase two-fluid model, the system of governing equations is turned mathematically into the hyperbolic system. The eigenvalues of the equation system become always real representing the void wave and the pressure wave propagation speeds as shown in the previous manuscript. To solve the interfacial pressure jump terms with void fraction gradients implicitly, the conventional semi-implicit method should be modified as an intermediate iteration method for void fraction at fractional time step. This Advanced Semi-Implicit Method (ASIM) then becomes stable without conventional additive terms. As a consequence, including the interfacial pressure jump terms with the advanced semi-implicit method, the numerical solutions of typical two-phase problems can be more stable and sound than those calculated exclusively by using any other terms like virtual mass, or artificial viscosity

  7. Analytical Models Development of Compact Monopole Vortex Flows

    Directory of Open Access Journals (Sweden)

    Pavlo V. Lukianov

    2017-09-01

Conclusions. The article contains a series of the latest analytical models that describe both laminar and turbulent dynamics of monopole vortex flows, which have not been reflected in traditional publications up to the present. Further research should be directed to the search for analytical models of coherent vortical structures in flows of viscous fluids, particularly near curved surfaces, where the "wall law" known in hydromechanics is violated and heat and mass transfer anomalies take place.

  8. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows the model reconstruction directly from 3D shapes and takes the whole building into account.

  9. A Novel Hybrid Model for Drawing Trace Reconstruction from Multichannel Surface Electromyographic Activity.

    Science.gov (United States)

    Chen, Yumiao; Yang, Zhongliang

    2017-01-01

Recently, several researchers have considered the problem of reconstruction of handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinate data of drawing traces as time series while 7-channel EMG signals were recorded simultaneously. The widely used time-domain root mean square (RMS) feature was extracted over the analysis window. Preliminary reconstruction models were established by GEP, and the original drawing traces were then approximated by the constructed prediction models. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model can yield a mean accuracy of 74% in within-group design (one set of prediction models for all shapes) and 86% in between-group design (one separate set of prediction models for each shape), averaged for the reconstructed x and y coordinates. It can be concluded that the proposed three-step hybrid model is a feasible way to improve the reconstruction of drawing traces from sEMG.
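The windowed RMS feature extraction step can be sketched directly (hypothetical sampling rate and window/step sizes; the abstract does not state the study's window parameters):

```python
import numpy as np

def rms_feature(emg, win=64, step=32):
    # Root-mean-square of a 1D EMG channel over overlapping analysis windows.
    starts = range(0, len(emg) - win + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2)) for s in starts])

# Hypothetical 2 s single-channel recording at 1 kHz with rising activity.
rng = np.random.default_rng(1)
emg = rng.standard_normal(2000) * np.linspace(0.5, 2.0, 2000)
feat = rms_feature(emg)
```

For the 7-channel setup in the paper, the same computation runs per channel and the per-window RMS values form the regression inputs to the GEP prediction models.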

  10. An Analytical Diffusion–Expansion Model for Forbush Decreases Caused by Flux Ropes

    Science.gov (United States)

    Dumbović, Mateja; Heber, Bernd; Vršnak, Bojan; Temmer, Manuela; Kirin, Anamarija

    2018-06-01

    We present an analytical diffusion–expansion Forbush decrease (FD) model ForbMod, which is based on the widely used approach of an initially empty, closed magnetic structure (i.e., flux rope) that fills up slowly with particles by perpendicular diffusion. The model is restricted to explaining only the depression caused by the magnetic structure of the interplanetary coronal mass ejection (ICME). We use remote CME observations and a 3D reconstruction method (the graduated cylindrical shell method) to constrain initial boundary conditions of the FD model and take into account CME evolutionary properties by incorporating flux rope expansion. Several flux rope expansion modes are considered, which can lead to different FD characteristics. In general, the model is qualitatively in agreement with observations, whereas quantitative agreement depends on the diffusion coefficient and the expansion properties (interplay of the diffusion and expansion). A case study was performed to explain the FD observed on 2014 May 30. The observed FD was fitted quite well by ForbMod for all expansion modes using only the diffusion coefficient as a free parameter, where the diffusion parameter was found to correspond to an expected range of values. Our study shows that, in general, the model is able to explain the global properties of an FD caused by a flux rope and can thus be used to help understand the underlying physics in case studies.

  11. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune

    1995-01-01

    An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations...... are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods...... for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...

  12. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    Science.gov (United States)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetics parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated
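The spline-residue TAC model, a weighted sum of convolutions of the arterial input function with cubic B-spline basis functions, can be sketched discretely (hypothetical input function, knot centers and weights; a coarse discrete stand-in for the paper's continuous formulation):

```python
import numpy as np

def cubic_bspline(t, center, width):
    # Cardinal cubic B-spline, shifted to `center` and scaled by `width`.
    u = np.abs((t - center) / width)
    out = np.zeros_like(u)
    m1 = u < 1
    m2 = (u >= 1) & (u < 2)
    out[m1] = (4 - 6 * u[m1] ** 2 + 3 * u[m1] ** 3) / 6
    out[m2] = (2 - u[m2]) ** 3 / 6
    return out

def spline_residue_tac(aif, weights, centers, width, dt):
    # Time-activity curve modeled as a weighted sum of convolutions of the
    # arterial input function with cubic B-spline basis functions
    # (rectangle-rule discretization of the continuous convolution).
    t = np.arange(len(aif)) * dt
    tac = np.zeros(len(aif))
    for w, c in zip(weights, centers):
        basis = cubic_bspline(t, c, width)
        tac += w * np.convolve(aif, basis)[:len(aif)] * dt
    return tac

dt = 1.0
t = np.arange(60) * dt
aif = t * np.exp(-t / 5.0)   # hypothetical gamma-variate-like input function
tac = spline_residue_tac(aif, weights=[1.0, 0.5], centers=[0.0, 10.0],
                         width=5.0, dt=dt)
```

Because every basis term is convolved with the input function, the modeled TAC starts at zero and rises no faster than the input allows, which is the early-time constraint the abstract credits for the noise suppression in early frames.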

  13. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for the nodules in each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics were superior with IMR compared with FBP or iDose(4) at all radiation dose settings. Semi-automated volumetry can thus be applied to low- or ultralow-dose chest CT with use of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
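
The absolute percentage measurement error used in this study is simple arithmetic; the sketch below computes it for a hypothetical 10 mm spherical nodule, with the reported volume chosen purely for illustration.

```python
import math

def absolute_percentage_error(measured, reference):
    """Absolute percentage measurement error, as used for nodule volumetry."""
    return abs(measured - reference) / reference * 100.0

# Illustrative numbers only: a 10 mm sphere has a true volume of
# (4/3)*pi*5**3 ~ 523.6 mm^3; suppose the software reports 545.0 mm^3.
true_volume = 4.0 / 3.0 * math.pi * 5.0 ** 3
ape = absolute_percentage_error(545.0, true_volume)  # ~4.1%
```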

  14. A semi-local quasi-harmonic model to compute the thermodynamic and mechanical properties of silicon nanostructures

    International Nuclear Information System (INIS)

    Zhao, H; Aluru, N R

    2007-01-01

    This paper presents a semi-local quasi-harmonic model with local phonon density of states (LPDOS) to compute the thermodynamic and mechanical properties of silicon nanostructures at finite temperature. In contrast to an earlier approach (Tang and Aluru 2006 Phys. Rev. B 74 235441), where a quasi-harmonic model with LPDOS computed by a Green's function technique (QHMG) was developed considering many layers of atoms, the semi-local approach considers only two layers of atoms to compute the LPDOS. We show that the semi-local approach combines the accuracy of the QHMG approach and the computational efficiency of the local quasi-harmonic model. We present results for several silicon nanostructures to address the accuracy and efficiency of the semi-local approach.

  15. Steady-state groundwater recharge in trapezoidal-shaped aquifers: A semi-analytical approach based on variational calculus

    Science.gov (United States)

    Mahdavi, Ali; Seyyedian, Hamid

    2014-05-01

    This study presents a semi-analytical solution for steady groundwater flow in trapezoidal-shaped aquifers in response to an areal diffusive recharge. The aquifer is homogeneous, anisotropic and interacts with four surrounding streams of constant-head. Flow field in this laterally bounded aquifer-system is efficiently constructed by means of variational calculus. This is accomplished by minimizing a properly defined penalty function for the associated boundary value problem. Simple yet demonstrative scenarios are defined to investigate anisotropy effects on the water table variation. Qualitative examination of the resulting equipotential contour maps and velocity vector field illustrates the validity of the method, especially in the vicinity of boundary lines. Extension to the case of triangular-shaped aquifer with or without an impervious boundary line is also demonstrated through a hypothetical example problem. The present solution benefits from an extremely simple mathematical expression and exhibits strictly close agreement with the numerical results obtained from Modflow. Overall, the solution may be used to conduct sensitivity analysis on various hydrogeological parameters that affect water table variation in aquifers defined in trapezoidal or triangular-shaped domains.

  16. A priori motion models for four-dimensional reconstruction in gated cardiac SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Cui, Lin

    1996-01-01

    We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. For evaluation, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without a prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.

  17. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher

  18. Reliability of a semi-automated 3D-CT measuring method for tunnel diameters after anterior cruciate ligament reconstruction: A comparison between soft-tissue single-bundle allograft vs. autograft.

    Science.gov (United States)

    Robbrecht, Cedric; Claes, Steven; Cromheecke, Michiel; Mahieu, Peter; Kakavelakis, Kyriakos; Victor, Jan; Bellemans, Johan; Verdonk, Peter

    2014-10-01

    Post-operative widening of tibial and/or femoral bone tunnels is a common observation after ACL reconstruction, especially with soft-tissue grafts. There are no studies comparing tunnel widening in hamstring autografts versus tibialis anterior allografts. The goal of this study was to observe the difference in tunnel widening after the use of allograft vs. autograft for ACL reconstruction, by measuring it with a novel 3-D computed tomography based method. Thirty-five ACL-deficient subjects were included, underwent anatomic single-bundle ACL reconstruction and were evaluated at one year after surgery with the use of 3-D CT imaging. Three independent observers semi-automatically delineated femoral and tibial tunnel outlines, after which a best-fit cylinder was derived and the tunnel diameter was determined. Finally, intra- and inter-observer reliability of this novel measurement protocol was defined. In femoral tunnels, the intra-observer ICC was 0.973 (95% CI: 0.922-0.991) and the inter-observer ICC was 0.992 (95% CI: 0.982-0.996). In tibial tunnels, the intra-observer ICC was 0.955 (95% CI: 0.875-0.985). The combined inter-observer ICC was 0.970 (95% CI: 0.917-0.987). Tunnel widening was significantly higher in allografts compared to autografts, in the tibial tunnels (p=0.013) as well as in the femoral tunnels (p=0.007). To our knowledge, this novel, semi-automated 3D-computed tomography image processing method has been shown to yield highly reproducible results for the measurement of bone tunnel diameter and area. This series showed significantly greater tunnel widening in the allograft group at one-year follow-up. Level II, Prospective comparative study. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Analytical models for the rewetting of hot surfaces

    International Nuclear Information System (INIS)

    Olek, S.

    1988-10-01

    Some aspects concerning analytical models for the rewetting of hot surfaces are discussed. These include the problems of applying various forms of boundary conditions, the compatibility of boundary conditions with the physics of rewetting problems, recent analytical models, the use of the separation-of-variables method versus the Wiener-Hopf technique, and the use of transformations. The report includes an updated list of rewetting models as well as benchmark solutions in tabular form for several models. It should be emphasized that this report is not meant to cover the whole topic of rewetting models; it merely discusses some points which are less commonly referred to in the literature. 93 refs., 3 figs., 22 tabs.

  20. Medical image reconstruction. A conceptual tutorial

    International Nuclear Information System (INIS)

    Zeng, Gengsheng Lawrence

    2010-01-01

    ''Medical Image Reconstruction: A Conceptual Tutorial'' introduces the classical and modern image reconstruction technologies, such as two-dimensional (2D) parallel-beam and fan-beam imaging, three-dimensional (3D) parallel ray, parallel plane, and cone-beam imaging. This book presents both analytical and iterative methods of these technologies and their applications in X-ray CT (computed tomography), SPECT (single photon emission computed tomography), PET (positron emission tomography), and MRI (magnetic resonance imaging). Contemporary research results in exact region-of-interest (ROI) reconstruction with truncated projections, Katsevich's cone-beam filtered backprojection algorithm, and reconstruction with highly undersampled data with l0-minimization are also included. (orig.)

  1. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  2. Combined multi-analytical approach for study of pore system in bricks: How much porosity is there?

    Energy Technology Data Exchange (ETDEWEB)

    Coletti, Chiara, E-mail: chiara.coletti@studenti.unipd.it [Department of Geosciences, University of Padova, Via G. Gradenigo 6, 35131 Padova (Italy); Department of Mineralogy and Petrology, Faculty of Science, University of Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Cultrone, Giuseppe [Department of Mineralogy and Petrology, Faculty of Science, University of Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Maritan, Lara; Mazzoli, Claudio [Department of Geosciences, University of Padova, Via G. Gradenigo 6, 35131 Padova (Italy)

    2016-11-15

    During the firing of bricks, mineralogical and textural transformations produce an artificial aggregate characterised by significant porosity. Particularly as regards pore-size distribution and the interconnection model, porosity is an important parameter for evaluating and predicting the durability of bricks. The pore system is in fact the main element linking building materials to their environment (especially under aggressive weathering, e.g., salt crystallisation and freeze-thaw cycles) and determines their durability. Four industrial bricks with differing compositions and firing temperatures were analysed with “direct” and “indirect” techniques: traditional methods (mercury intrusion porosimetry, hydric tests, nitrogen adsorption) and new analytical approaches based on digital image reconstruction of 2D and 3D models (back-scattered electron imaging and computerised X-ray micro-tomography, respectively). Comparing the results from the different analytical methods in their “overlapping ranges” of porosity, and carefully reconstructing a cumulative curve, made it possible to overcome their specific limitations and to achieve a better knowledge of the pore system of bricks. - Highlights: •Pore-size distribution and structure of the pore system in four commercial bricks •A multi-analytical approach combining “direct” and “indirect” techniques •Traditional methods vs. new approaches based on 2D/3D digital image reconstruction •The use of “overlapping ranges” to overcome the limitations of various techniques.

  3. Modelling the effects of porous and semi-permeable layers on corrosion processes

    International Nuclear Information System (INIS)

    King, F.; Kolar, M.; Shoesmith, D.W.

    1996-09-01

    Porous and semi-permeable layers play a role in many corrosion processes. Porous layers may simply affect the rate of corrosion by affecting the rate of mass transport of reactants and products to and from the corroding surface. Semi-permeable layers can further affect the corrosion process by reacting with products and/or reactants. Reactions in semi-permeable layers include redox processes involving electron transfer, adsorption, ion-exchange and complexation reactions and precipitation/dissolution processes. Examples of porous and semi-permeable layers include non-reactive salt films, precipitate layers consisting of redox-active species in multiple oxidation states (e.g., Fe oxide films), clay and soil layers and biofilms. Examples of these various types of processes will be discussed and modelling techniques developed from studies for the disposal of high-level nuclear waste presented. (author). 48 refs., 1 tab., 12 figs

  4. A semi-analytical solution for viscothermal wave propagation in narrow gaps with arbitrary boundary conditions.

    NARCIS (Netherlands)

    Wijnant, Ysbrand H.; Spiering, R.M.E.J.; Blijderveen, M.; de Boer, Andries

    2006-01-01

    Previous research has shown that viscothermal wave propagation in narrow gaps can efficiently be described by means of the low reduced frequency model. For simple geometries and boundary conditions, analytical solutions are available. For example, Beltman [4] gives the acoustic pressure in the gap

  5. AIR Tools - A MATLAB Package of Algebraic Iterative Reconstruction Techniques

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    This collection of MATLAB software contains implementations of several Algebraic Iterative Reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
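
As a rough illustration of the ART class implemented in the package, the sketch below runs the classical Kaczmarz sweep, in which the relaxation parameter and the iteration count play the regularising role noted above. This is a generic textbook version, not AIR Tools code.

```python
import numpy as np

def kaczmarz(A, b, n_iter=100, relax=1.0, x0=None):
    """Basic ART (Kaczmarz) iteration: project x onto each row's hyperplane.

    Early stopping provides the regularisation (semi-convergence), and
    relax is the relaxation parameter the package offers strategies for.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_iter):
        for i in range(m):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny consistent system: Kaczmarz converges to the exact solution.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = A @ np.array([1.0, -1.0])
x = kaczmarz(A, b, n_iter=200)
```

On noisy tomographic data one would stop long before full convergence; here the consistent 2-by-2 system simply demonstrates that the sweep reaches the true solution.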

  6. Linking plate reconstructions with deforming lithosphere to geodynamic models

    Science.gov (United States)

    Müller, R. D.; Gurnis, M.; Flament, N.; Seton, M.; Spasojevic, S.; Williams, S.; Zahirovic, S.

    2011-12-01

    While global computational models are rapidly advancing in terms of their capabilities, there is an increasing need for assimilating observations into these models and/or ground-truthing model outputs. The open-source and platform independent GPlates software fills this gap. It was originally conceived as a tool to interactively visualize and manipulate classical rigid plate reconstructions and represent them as time-dependent topological networks of editable plate boundaries. The user can export time-dependent plate velocity meshes that can be used either to define initial surface boundary conditions for geodynamic models or alternatively impose plate motions throughout a geodynamic model run. However, tectonic plates are not rigid, and neglecting plate deformation, especially that of the edges of overriding plates, can result in significant misplacing of plate boundaries through time. A new, substantially re-engineered version of GPlates is now being developed that allows an embedding of deforming plates into topological plate boundary networks. We use geophysical and geological data to define the limit between rigid and deforming areas, and the deformation history of non-rigid blocks. The velocity field predicted by these reconstructions can then be used as a time-dependent surface boundary condition in regional or global 3-D geodynamic models, or alternatively as an initial boundary condition for a particular plate configuration at a given time. For time-dependent models with imposed plate motions (e.g. using CitcomS) we incorporate the continental lithosphere by embedding compositionally distinct crust and continental lithosphere within the thermal lithosphere. We define three isostatic columns of different thickness and buoyancy based on the tectonothermal age of the continents: Archean, Proterozoic and Phanerozoic. In the fourth isostatic column, the oceans, the thickness of the thermal lithosphere is assimilated using a half-space cooling model. We also

  7. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    Science.gov (United States)

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques; however, the FE technique involves a burdensome meshing task, while the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both: the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is also applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.
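
The regularized Gauss-Newton step used for inverse problems of this kind can be sketched generically as follows; the Jacobian and regularisation matrix here are toy stand-ins, not the EIT quantities computed by the hybrid FE-EFG forward solver.

```python
import numpy as np

def gauss_newton_step(J, residual, L, lam):
    """One Tikhonov-regularised Gauss-Newton update for an inverse problem.

    J        : Jacobian of simulated measurements w.r.t. the unknowns
    residual : measured minus simulated data
    L        : regularisation matrix (identity here for simplicity)
    lam      : regularisation parameter
    """
    H = J.T @ J + lam * (L.T @ L)
    return np.linalg.solve(H, J.T @ residual)

# Toy linear "forward model": with negligible regularisation and
# noiseless data, one step recovers the true parameter vector.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
residual = J @ x_true            # measured - simulated, starting from zero
step = gauss_newton_step(J, residual, np.eye(5), lam=1e-8)
```

In the EIT setting the problem is nonlinear and ill-posed, so the step is applied iteratively with a carefully chosen lam rather than once with a tiny one.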

  8. Modeling economic costs of disasters and recovery involving positive effects of reconstruction: analysis using a dynamic CGE model

    Science.gov (United States)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2013-11-01

    Disaster damages have negative effects on the economy, whereas reconstruction investments have positive effects. The aim of this study is to model the economic costs of disasters and recovery, taking into account the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and thereby avoid the double-counting problem. To factor both shocks into the CGE model, the direct loss is set as the amount of capital stock removed on the supply side of the economy; a portion of investment restores the capital stock in each period; an investment-driven dynamic model is formulated from the available reconstruction data, and the rest of the country's saving is set as an endogenous variable. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 was found to be closer to the real data than that from S2; S2 overestimates the economic loss by roughly two times that under S1. The gap in economic aggregate between S1 and S0 was reduced to 3% in 2011, a level that would take another four years to achieve under S2.
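
The scenario accounting described above reduces to simple differences against the baseline; the sketch below uses invented annual outputs merely to show how losses including and excluding reconstruction are computed.

```python
def economic_loss(output_baseline, output_scenario):
    """Loss relative to the no-disaster baseline (S0), summed over years."""
    return sum(b - s for b, s in zip(output_baseline, output_scenario))

# Illustrative annual outputs (arbitrary units), not the paper's data.
s0 = [100, 104, 108, 112]   # S0: no disaster
s1 = [100, 98, 105, 111]    # S1: disaster + reconstruction investment
s2 = [100, 96, 99, 103]     # S2: disaster, no reconstruction

loss_with_reconstruction = economic_loss(s0, s1)     # 10
loss_without_reconstruction = economic_loss(s0, s2)  # 26
```

With these made-up numbers the loss excluding reconstruction is roughly 2.6 times the loss including it, mirroring the overestimation pattern the study reports.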

  9. Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques

    NARCIS (Netherlands)

    Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.

    2015-01-01

    AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were

  10. A consensus yeast metabolic network reconstruction obtained from a community approach to systems biology

    NARCIS (Netherlands)

    Herrgård, Markus J.; Swainston, Neil; Dobson, Paul; Dunn, Warwick B.; Arga, K. Yalçin; Arvas, Mikko; Blüthgen, Nils; Borger, Simon; Costenoble, Roeland; Heinemann, Matthias; Hucka, Michael; Novère, Nicolas Le; Li, Peter; Liebermeister, Wolfram; Mo, Monica L.; Oliveira, Ana Paula; Petranovic, Dina; Pettifer, Stephen; Simeonidis, Evangelos; Smallbone, Kieran; Spasić, Irena; Weichart, Dieter; Brent, Roger; Broomhead, David S.; Westerhoff, Hans V.; Kırdar, Betül; Penttilä, Merja; Klipp, Edda; Palsson, Bernhard Ø.; Sauer, Uwe; Oliver, Stephen G.; Mendes, Pedro; Nielsen, Jens; Kell, Douglas B.

    2008-01-01

    Genomic data allow the large-scale manual or semi-automated assembly of metabolic network reconstructions, which provide highly curated organism-specific knowledge bases. Although several genome-scale network reconstructions describe Saccharomyces cerevisiae metabolism, they differ in scope and

  11. The implementation of a simplified spherical harmonics semi-analytic nodal method in PANTHER

    International Nuclear Information System (INIS)

    Hall, S.K.; Eaton, M.D.; Knight, M.P.

    2013-01-01

    Highlights: ► An SPN nodal method is proposed. ► Consistent CMFD derived and tested. ► Mark vacuum boundary conditions applied. ► Benchmarked against other diffusion and transport codes. - Abstract: In this paper an SPN nodal method is proposed which can utilise existing multi-group neutron diffusion solvers to obtain the solution. The semi-analytic nodal method is used in conjunction with a coarse mesh finite difference (CMFD) scheme to solve the resulting set of equations. This is compared against various nuclear benchmarks to show that the method is capable of computing an accurate solution for practical cases. A few different CMFD formulations are implemented and their performance compared. It is found that the effective diffusion coefficient (EDC) can provide additional stability and require fewer power iterations on a coarse mesh. A re-arrangement of the EDC is proposed that allows the iteration matrix to be computed at the beginning of a calculation. Successive nodal updates only modify the source term, unlike existing CMFD methods, which update the iteration matrix. A set of Mark vacuum boundary conditions is also derived which can be applied to the SPN nodal method, extending its validity. This is possible due to a similarity transformation of the angular coupling matrix, which is used when applying the nodal method. It is found that the Marshak vacuum condition can also be derived, but this would require significant modification of existing neutron diffusion codes to implement.

  12. On semi-classical questions related to signal analysis

    KAUST Repository

    Helffer, Bernard

    2011-12-01

    This study explores the reconstruction of a signal using spectral quantities associated with some self-adjoint realization of an h-dependent Schrödinger operator -h^2(d^2/dx^2) - y(x), h > 0, when the parameter h tends to 0. Theoretical results in semi-classical analysis are proved. Some numerical results are also presented. We first consider as a toy model the sech^2 function. Then we study a real signal given by arterial blood pressure measurements. This approach seems to be very promising in signal analysis. Indeed it provides new spectral quantities that can give relevant information on some signals, as is the case for the arterial blood pressure signal. © 2011 - IOS Press and the authors. All rights reserved.

  13. Heat transfer analytical models for the rapid determination of cooling time in crystalline thermoplastic injection molding and experimental validation

    Science.gov (United States)

    Didier, Delaunay; Baptiste, Pignon; Nicolas, Boyard; Vincent, Sobotka

    2018-05-01

    Heat transfer during the cooling of an injected thermoplastic part directly affects the solidification of the polymer and consequently the quality of the part in terms of mechanical properties, geometric tolerance and surface aspect. This paper proposes a methodology for mold designers, based on analytical models, to quickly provide the time to reach the ejection temperature as a function of the temperature and position of the cooling channels. The cooling time thus obtained is the first step of the thermal design of the mold. The presented methodology is dedicated to the determination of the solidification time of a semi-crystalline polymer slab. It allows the calculation of the crystallization time of the part and is based on the analytical solution of the Stefan problem in a semi-infinite medium. Crystallization is treated as a phase change with an effective crystallization temperature, which is obtained from Fast Scanning Calorimetry (FSC) results. The crystallization time is then corrected to take the finite thickness of the part into account. To check the accuracy of this approach, the solidification time is also calculated by solving the heat conduction equation coupled to the crystallization kinetics of the polymer. The impact of the nature of the contact between the polymer and the mold is evaluated: the thermal contact resistance (TCR) appears as a significant parameter that needs to be taken into account in the cooling time calculation. The results of the simplified model, with and without the TCR, are compared with experiments carried out with an instrumented mold for a polypropylene (PP). The methodology is then applied to a part made of PolyEtherEtherKetone (PEEK).
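
A sketch of the kind of Stefan-problem calculation underlying such a methodology: the classical one-phase similarity solution gives the front position s(t) = 2*lambda*sqrt(alpha*t), with lambda obtained from the transcendental equation lambda*exp(lambda^2)*erf(lambda) = Ste/sqrt(pi). The material values below are assumed, polypropylene-like numbers, not the paper's data.

```python
import math
from scipy.optimize import brentq

def stefan_lambda(stefan_number):
    """Solve lambda * exp(lambda^2) * erf(lambda) = Ste / sqrt(pi)."""
    f = lambda lam: (lam * math.exp(lam ** 2) * math.erf(lam)
                     - stefan_number / math.sqrt(math.pi))
    return brentq(f, 1e-6, 5.0)

def solidification_time(half_thickness, diffusivity, stefan_number):
    """Time for the front s(t) = 2*lambda*sqrt(alpha*t) to cross the half thickness."""
    lam = stefan_lambda(stefan_number)
    return half_thickness ** 2 / (4.0 * lam ** 2 * diffusivity)

# Assumed illustrative values: half thickness 1 mm,
# alpha = 1e-7 m^2/s, Stefan number Ste = 0.5.
t_cool = solidification_time(1e-3, 1e-7, 0.5)  # on the order of 10 s
```

A real mold-design calculation would additionally correct for the finite part thickness and the thermal contact resistance, as the abstract describes.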

  14. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Guentner, A

    2002-09-01

    Semi-arid areas are characterized by limited water resources. Increasing water demand due to population growth and economic development, as well as a possible decrease in water availability in the course of climate change, may aggravate water scarcity in these areas in the future. The quantitative assessment of the water resources is a prerequisite for the development of sustainable measures of water management, and for this task hydrological models within a dynamic integrated framework are indispensable tools. The main objective of this study is to develop a hydrological model for the quantification of water availability over a large geographic domain of semi-arid environments. The study area is the Federal State of Ceara in the semi-arid north-east of Brazil, where surface water from reservoirs provides the largest part of the water supply. The area has recurrently been affected by droughts which caused serious economic losses and social impacts such as migration from the rural regions. (orig.)

  15. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    Science.gov (United States)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation at the unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and the reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using a reduced lat-lon grid and the variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of the orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to the 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.

  16. SemiBoost: boosting for semi-supervised learning.

    Science.gov (United States)

    Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi

    2009-11-01

    Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploitation of both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.
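The full SemiBoost objective combines pairwise similarity with classifier confidence; as a rough illustration of the wrapper idea alone, the sketch below uses a much simpler self-training loop around a nearest-centroid base learner. All data and parameters are synthetic, and this is not the SemiBoost algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian clusters; only one point per class is labelled.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
               rng.normal(+2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
labelled = np.zeros(100, dtype=bool)
labelled[[0, 50]] = True

def centroids(X, labels, mask):
    return np.array([X[mask & (labels == c)].mean(axis=0) for c in (0, 1)])

# Wrapper loop: pseudo-label the most confident unlabelled points
# (largest margin between the two centroid distances) and retrain.
y_work = np.where(labelled, y, -1)
for _ in range(10):
    mask = y_work >= 0
    C = centroids(X, y_work, mask)
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    pred, margin = d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])
    cand = np.where(~mask)[0]
    if cand.size == 0:
        break
    take = cand[np.argsort(-margin[cand])[:10]]  # 10 most confident points
    y_work[take] = pred[take]

accuracy = (y_work[~labelled] == y[~labelled]).mean()
print(f"pseudo-label accuracy: {accuracy:.2f}")
```

The essential point the abstract makes survives even in this toy: the base learner is treated as a black box and improved only through the labels it is fed.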

  17. Semi-Supervised Generation with Cluster-aware Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Fraccaro, Marco; Winther, Ole

    2017-01-01

    Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real life data sets contain a small amount of labelled data points, that are typically disregarded when training generative models. We propose the Clust... a log-likelihood of −79.38 nats on permutation invariant MNIST, while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improve the log-likelihood performance with respect to related methods.

  18. Analytical modeling of worldwide medical radiation use

    International Nuclear Information System (INIS)

    Mettler, F.A. Jr.; Davis, M.; Kelsey, C.A.; Rosenberg, R.; Williams, A.

    1987-01-01

    An analytical model was developed to estimate the availability and frequency of medical radiation use on a worldwide basis. This model includes medical and dental x-ray, nuclear medicine, and radiation therapy. The development of an analytical model is necessary as the first step in estimating the radiation dose to the world's population from this source. Since there are no data on the frequency of medical radiation use in more than half the countries in the world, and only fragmentary data in an additional one-fourth of the world's countries, such a model can be used to predict the use of medical radiation in these countries. The model indicates that there are approximately 400,000 medical x-ray machines worldwide and that approximately 1.2 billion diagnostic medical x-ray examinations are performed annually. Dental x-ray examinations are estimated at 315 million annually, and in-vivo diagnostic nuclear medicine examinations at approximately 22 million. Approximately 4 million radiation therapy procedures or courses of treatment are undertaken annually.

  19. Analytic investigation of extended Heitler-Matthews model

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Stefan; Veberic, Darko; Engel, Ralph [KIT, IKP (Germany)

    2016-07-01

    Many features of extensive air showers are qualitatively well described by the Heitler cascade model and its extensions. The core of a shower is given by hadrons that interact with air nuclei. After each interaction some of these hadrons decay and feed the electromagnetic shower component. The most important parameters of such hadronic interactions are the inelasticity, the multiplicity, and the ratio of charged to neutral particles. However, analytic considerations require approximations to include the characteristics of hadron production. We discuss analytic extensions of the simple cascade model that also include the elasticity, and derive the number of produced muons. In a second step we apply this model to calculate the dependence of the shower center of gravity on the model parameters. The depth of the center of gravity is closely related to that of the shower maximum, which is a commonly used composition-sensitive observable.
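In the original Heitler-Matthews model (before the elasticity extension discussed here), the muon number follows the power law N_mu = (E0/xi_c)^beta with beta = ln(n_ch)/ln(n_tot). A sketch with textbook illustrative values, both of which are assumptions (total multiplicity 15 and pion critical energy 20 GeV):

```python
import math

n_tot = 15                  # assumed total multiplicity per interaction
n_ch = 2 * n_tot // 3       # charged pions: roughly 2/3 of the secondaries
beta = math.log(n_ch) / math.log(n_tot)

E0 = 1.0e8                  # primary energy in GeV (10^17 eV)
xi_c = 20.0                 # assumed pion critical energy in GeV
N_mu = (E0 / xi_c) ** beta  # Heitler-Matthews muon number
print(f"beta = {beta:.2f}, N_mu ~ {N_mu:.2e}")
```

Because beta < 1, the muon number grows slower than linearly with primary energy; the extension discussed in the abstract modifies beta through the elasticity, which this sketch omits.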

  20. Occupational radiation protection around medical linear accelerators: measurements and semi-analytical approaches

    International Nuclear Information System (INIS)

    Donadille, L.; Derreumaux, S.; Mantione, J.; Robbes, I.; Trompier, F.; Amgarou, K.; Asselineau, B.; Martin, A.

    2008-01-01

    Full text: X-rays produced by high-energy (larger than 6 MeV) medical electron linear accelerators create secondary neutron radiation fields mainly by photonuclear reactions inside the materials of the accelerator head, the patient and the walls of the therapy room. Numerous papers were devoted to the study of neutron production in medical linear accelerators and the resulting decay of activation products. However, data associated with doses delivered to workers in treatment conditions are scarce. In France, there are more than 350 external radiotherapy facilities representing almost all types of techniques and designs. IRSN carried out a measurement campaign in order to investigate the variation of the occupational dose according to the different situations encountered. Six installations were investigated, associated with the main manufacturers (Varian, Elekta, General Electric, Siemens), for several nominal energies, conventional and IMRT techniques, and bunker designs. Measurements were carried out separately for neutron and photon radiation fields, and for radiation associated with the decay of the activation products, by means of radiometers, tissue-equivalent proportional counters and spectrometers (neutron and photon spectrometry). They were performed at the positions occupied by the workers, i.e. outside the bunker during treatments, inside between treatments. Measurements have been compared to published data. In addition, semi-empirical analytical approaches recommended by international protocols were used to estimate doses inside and outside the bunkers. The results obtained by both approaches were compared and analysed. The annual occupational effective dose was estimated at about 1 mSv, including more than 50 % associated with the decay of activation products and less than 10 % due to direct exposure to leakage neutrons produced during treatments. (author)

  1. Three Dimensional Dynamic Model Based Wind Field Reconstruction from Lidar Data

    International Nuclear Information System (INIS)

    Raach, Steffen; Schlipf, David; Haizmann, Florian; Cheng, Po Wen

    2014-01-01

    Using the inflowing horizontal and vertical wind shears for individual pitch control is a promising method if blade bending measurements are not available. Due to the limited information provided by a lidar system, the reconstruction of shears in real time is a challenging task, especially for the horizontal shear in the presence of changing wind direction. The internal model principle has been shown to be a promising approach to estimate the shears and directions in 10 minute averages with real measurement data. The static model based wind vector field reconstruction is extended in this work by taking into account a dynamic reconstruction model based on Taylor's Frozen Turbulence Hypothesis. The presented method provides time series over several seconds of the wind speed, shears and direction, which can be directly used in advanced optimal preview control. Therefore, this work is an important step towards the application of preview individual blade pitch control under realistic wind conditions. The method is tested using a turbulent wind field and a detailed lidar simulator. For the simulation, the turbulent wind field structure flows towards the lidar system and is continuously misaligned with respect to the horizontal axis of the wind turbine. Taylor's Frozen Turbulence Hypothesis is taken into account to model the wind evolution. For the reconstruction, the structure is discretized into several stages, where each stage is reduced to an effective wind speed superposed with a linear horizontal and vertical wind shear. Previous lidar measurements are shifted, again using Taylor's Hypothesis. The wind field reconstruction problem is then formulated as a nonlinear optimization problem, which minimizes the residual between the assumed wind model and the lidar measurements to obtain the misalignment angle, the effective wind speed and the wind shears for each stage. This method shows good results in reconstructing the wind characteristics of a three
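Within one stage, the assumed wind model is linear in the unknowns (effective wind speed plus horizontal and vertical shear), so a single-stage fit reduces to linear least squares. The sketch below shows only that reduced problem, with invented numbers; the paper's full formulation is a nonlinear optimization that also estimates the misalignment angle:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "stage" parameters: effective speed plus linear shears.
u0_true, dh_true, dv_true = 12.0, 0.08, 0.05

# Scan points in the rotor plane (y horizontal, z vertical, in m)
# and the wind speeds a lidar would attribute to them.
y = rng.uniform(-60, 60, 30)
z = rng.uniform(-60, 60, 30)
u_meas = u0_true + dh_true * y + dv_true * z + rng.normal(0, 0.2, 30)

# Least-squares fit of the three stage parameters.
A = np.column_stack([np.ones_like(y), y, z])
(u0, dh, dv), *_ = np.linalg.lstsq(A, u_meas, rcond=None)
print(f"u0={u0:.2f} m/s, horizontal shear={dh:.3f}, vertical shear={dv:.3f}")
```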

  2. Practical considerations for image-based PSF and blobs reconstruction in PET

    International Nuclear Information System (INIS)

    Stute, Simon; Comtat, Claude

    2013-01-01

    Iterative reconstructions in positron emission tomography (PET) need a model relating the recorded data to the object/patient being imaged, called the system matrix (SM). The more realistic this model, the better the spatial resolution in the reconstructed images. However, a serious concern when using a SM that accurately models the resolution properties of the PET system is the undesirable edge artefact, visible through oscillations near sharp discontinuities in the reconstructed images. This artefact is a natural consequence of solving an ill-conditioned inverse problem, where the recorded data are band-limited. In this paper, we focus on practical aspects when considering image-based point-spread function (PSF) reconstructions. To remove the edge artefact, we propose to use a particular case of the method of sieves (Grenander 1981 Abstract Inference New York: Wiley), which simply consists in performing a standard PSF reconstruction, followed by a post-smoothing using the PSF as the convolution kernel. Using analytical simulations, we investigate the impact of different reconstruction and PSF modelling parameters on the edge artefact and its suppression, in the case of noise-free data and an exactly known PSF. Using Monte-Carlo simulations, we assess the proposed method of sieves with respect to the choice of the geometric projector and the PSF model used in the reconstruction. When the PSF model is accurately known, we show that the proposed method of sieves succeeds in completely suppressing the edge artefact, though after a number of iterations higher than typically used in practice. When applying the method to realistic data (i.e. unknown true SM and noisy data), we show that the choice of the geometric projector and the PSF model does not impact the results in terms of noise and contrast recovery, as long as the PSF has a width close to the true PSF one. Equivalent results were obtained using either blobs or voxels in the same conditions (i.e. the blob
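A 1D toy version of this particular method of sieves (not the authors' PET implementation): reconstruct with the PSF in the model via Richardson-Lucy, then post-smooth with the same PSF. Because the PSF kernel is non-negative and normalized, the post-smoothed image cannot exceed the peak of the unsmoothed reconstruction, which is what damps the edge oscillations:

```python
import numpy as np

def gaussian_psf(n=21, sigma=2.0):
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def conv(x, k):
    return np.convolve(x, k, mode="same")

psf = gaussian_psf()
truth = np.zeros(128)
truth[40:90] = 100.0              # object with sharp edges
data = conv(truth, psf)           # noise-free blurred "measurements"

# PSF-modelled Richardson-Lucy reconstruction (PSF is symmetric,
# so the transpose operation is the same convolution).
x = np.full_like(data, data.mean())
for _ in range(200):
    x *= conv(data / np.maximum(conv(x, psf), 1e-9), psf)

# Method of sieves: post-smooth the reconstruction with the PSF itself.
sieved = conv(x, psf)
```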

  3. An analytical simulation technique for cone-beam CT and pinhole SPECT

    International Nuclear Information System (INIS)

    Zhang Xuezhu; Qi Yujin

    2011-01-01

    This study was aimed at developing an efficient simulation technique with an ordinary PC. The work involved the derivation of mathematical operators, analytic phantom generation, and the development of effective analytical projectors for cone-beam CT and pinhole SPECT imaging. The computer simulations based on the analytical projectors were developed using a ray-tracing method for cone-beam CT and a voxel-driven method modelling the degrading blurring for pinhole SPECT. The 3D Shepp-Logan, Jaszczak and Defrise phantoms were used for simulation evaluations and image reconstructions. The reconstructed images agreed well with the phantoms. The results showed that the analytical simulation technique is an efficient tool for studying cone-beam CT and pinhole SPECT imaging. (authors)

  4. The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    Science.gov (United States)

    Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.

    2012-01-01

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267–1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of reconstructed voxel means, variances, and correlations resulting from the use of different coil and voxel covariance structures used in the reconstruction processes to be conducted. It is shown through both theoretical and experimental illustrations that the mis-specification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful
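The SENSE unfolding referred to above is a complex-valued weighted least-squares solve; under the identity coil-covariance simplification that this paper critiques, it reduces to an ordinary least-squares problem per aliased pixel. A minimal noise-free sketch with invented sensitivities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two voxels that alias onto one pixel at reduction factor R = 2,
# seen by four coils with invented complex sensitivities.
S = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
v_true = np.array([3.0 + 1.0j, 1.5 - 0.5j])   # true voxel values
a = S @ v_true                                # aliased coil pixel values

# SENSE unfolding: complex-valued least squares (identity coil
# covariance, i.e. the simplification discussed in the abstract).
v_hat = np.linalg.lstsq(S, a, rcond=None)[0]
```

With real data, a non-identity coil covariance would enter as a weighting matrix in this solve, which is exactly where the SENSE and SENSE-ITIVE models differ.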

  5. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative, model-based) methods and their implementation is presented. The focus is on the analytical model-based fault-detection and fault...

  6. Issues in the validation of CFD modelling of semi-solid metal forming

    International Nuclear Information System (INIS)

    Ward, P.J.; Atkinson, H.V.; Kirkwood, D.H.; Liu, T.Y.; Chin, S.B.

    2000-01-01

    Modelling of die filling during semi-solid metal processing (thixoforming) places particular demands on the CFD package being used. Not only are the velocities of the metal slurry in the die very high, but so is its viscosity. Furthermore, the viscosity changes with shear rate (i.e. with changes in the cross sectional area of the region the slurry travels through) and with time, as the injected material is thixotropic. The CFD software therefore requires good free surface tracking, accurate implicit solutions of the flow equations (as the CPU times for explicit solutions at high viscosities are impractical) and a model that adequately describes the slurry thixotropy. Finally, reliable, experimentally determined viscosity data are required. This paper describes experiments on tin-lead and aluminium alloy slurries using compressive tests and rotating cylinder viscometry, followed by modelling using FLOW-3D. This package is known for its ability to track free surfaces accurately. Compressive tests allow rapid changes in shear rate to be imparted to the slurry, without wall slip, while the simple geometry of the viscometer makes it possible to compare analytical and numerical solutions. It is shown that the implicit viscous solver in its original form can reproduce the general trends found in the compressive and viscometry tests. However, sharp changes in shear rate lead to overestimation of pressure gradients in the slurry, making it difficult to separate these effects from those due to thixotropic breakdown. In order to achieve this separation, it is necessary to implement a more accurate implicit solver, which is currently under development. (author)

  7. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed the source reconstruction of magnetocardiographic signals generated by the human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model has the better accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.

  8. Semi-local invariance in Ising models with multi-spin interaction

    International Nuclear Information System (INIS)

    Lipowski, A.

    1996-08-01

    We examine implications of semi-local invariance in Ising models with multispin interaction. In ergodic models all spin-spin correlation functions vanish and the local symmetry is the same as in locally gauge-invariant models. The d = 3 model with four-spin interaction is nonergodic at low temperature but the magnetic symmetry remains unbroken. The d = 3 model with eight-spin interaction is ergodic but undergoes the phase transition and most likely its low-temperature phase is characterized by a nonlocal order parameter. (author). 7 refs, 1 fig

  9. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.
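The defining feature of an HSMM is that each state visit draws an explicit duration rather than self-transitioning geometrically as in a plain HMM. A minimal generative sketch with invented states and Poisson durations (the paper's contribution, unconstrained duration densities and the modified learning/inference algorithms, is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented left-to-right degradation chain with explicit durations:
# each state visit draws a duration from a state-specific (here
# Poisson) distribution, then emits Gaussian observations.
means = {"healthy": 0.0, "degraded": 2.0, "failed": 5.0}
mean_duration = {"healthy": 20, "degraded": 10, "failed": 5}
order = ["healthy", "degraded", "failed"]

states, obs = [], []
for s in order:
    d = max(1, rng.poisson(mean_duration[s]))  # explicit state duration
    states += [s] * d
    obs += list(rng.normal(means[s], 0.5, d))
print(f"simulated {len(states)} steps, failure after "
      f"{states.index('failed')} steps")
```

Inference in such a model estimates the current hidden state from the observations and, from the remaining expected durations, a time-to-event of the kind used here for RUL prediction.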

  10. Modelling the physics in iterative reconstruction for transmission computed tomography

    Science.gov (United States)

    Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.

    2013-01-01

    There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose, it provides the flexibility to reconstruct images from arbitrary X-ray system geometries and it allows the inclusion of detailed models of photon transport and detection physics to accurately correct for a wide variety of image degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261

  11. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    Science.gov (United States)

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
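The CPP construction itself is short to state in code: a reconstructed tree on n tips is determined by n − 1 i.i.d. node depths. The exponential depth distribution below is purely illustrative; the paper derives the correct depth distribution from the speciation and extinction rates:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_cpp(n_tips, draw_depths):
    """Coalescent point process: the reconstructed tree on n_tips
    extant species is determined by n_tips - 1 i.i.d. node depths,
    listed here from the root (deepest) down."""
    return np.sort(draw_depths(n_tips - 1))[::-1]

# Illustrative exponential depth law standing in for the
# model-derived distribution of speciation times.
depths = sample_cpp(10, lambda k: rng.exponential(scale=2.0, size=k))
print(depths)
```

This is why CPP likelihoods and simulations are so fast: the whole tree is a vector of i.i.d. draws, and the ranked topology is uniform by construction.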

  12. Human semi-supervised learning.

    Science.gov (United States)

    Gibson, Bryan R; Rogers, Timothy T; Zhu, Xiaojin

    2013-01-01

    Most empirical work in human categorization has studied learning in either fully supervised or fully unsupervised scenarios. Most real-world learning scenarios, however, are semi-supervised: Learners receive a great deal of unlabeled information from the world, coupled with occasional experiences in which items are directly labeled by a knowledgeable source. A large body of work in machine learning has investigated how learning can exploit both labeled and unlabeled data provided to a learner. Using equivalences between models found in human categorization and machine learning research, we explain how these semi-supervised techniques can be applied to human learning. A series of experiments are described which show that semi-supervised learning models prove useful for explaining human behavior when exposed to both labeled and unlabeled data. We then discuss some machine learning models that do not have familiar human categorization counterparts. Finally, we discuss some challenges yet to be addressed in the use of semi-supervised models for modeling human categorization. Copyright © 2013 Cognitive Science Society, Inc.

  13. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  14. Analytical Model of Underground Train Induced Vibrations on Nearby Building Structures in Cameroon: Assessment and Prediction

    Directory of Open Access Journals (Sweden)

    Lezin Seba MINSILI

    2013-11-01

    Full Text Available The purpose of this research paper was to assess and predict the effect of vibrations induced by an underground railway on nearby existing buildings, prior to the construction of the projected new railway lines of the National Railway Master Plan of Cameroon and after the upgrading of the railway conceded to CAMRAIL linking the two most densely populated cities of Cameroon: Douala and Yaoundé. With the source-transmitter-receiver mathematical model as the train-soil-structure interaction model, taking into account sub-model parameters such as the type of the train-railway system, typical geotechnical conditions of the ground and the sensitivity of the nearby buildings, the analysis is carried out over the entire system using the dynamic finite element method in the time domain. This subdivision of the model is a powerful tool that allows different alternatives of sub-models with different characteristics to be considered, and thus any critical excessive vibration impact to be determined. Based on the semi-empirical analytical results obtained from the presented models, the present work assesses and predicts the characteristics of traffic-induced vibrations as a function of time duration, intensity and vehicle speed, as well as their influence on buildings at different levels.

  15. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Guoyan [Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014 Bern (Switzerland)

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark
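The statistical-instantiation stage described above can be illustrated in 2D: the shape is the SSM mean plus a weighted sum of variation modes, and the weights are fitted to the target contour by least squares. The circle-based model below is a toy stand-in (a real SSM comes from PCA over registered training shapes, and the paper additionally handles the 2D/3D projection and landmark-to-ray steps):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 2D statistical shape model: mean contour plus two variation
# modes (a real SSM is built by PCA over registered training shapes).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
mean = np.concatenate([np.cos(theta), np.sin(theta)])           # unit circle
mode1 = np.concatenate([np.cos(theta), -np.sin(theta)])         # vertical squash
mode2 = np.concatenate([np.cos(2 * theta), np.sin(2 * theta)])  # ellipticity
Phi = np.column_stack([mode1, mode2])

# "Patient" contour generated by the model, observed with noise.
b_true = np.array([0.15, -0.10])
target = mean + Phi @ b_true + rng.normal(0, 0.01, mean.size)

# Statistical instantiation: least-squares fit of the mode weights.
b_hat = np.linalg.lstsq(Phi, target - mean, rcond=None)[0]
recon = mean + Phi @ b_hat
```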

  16. Semi-convergence and relaxation parameters for a class of SIRT algorithms

    DEFF Research Database (Denmark)

    Elfving, Tommy; Nikazad, Touraj; Hansen, Per Christian

    2010-01-01

    This paper is concerned with the Simultaneous Iterative Reconstruction Technique (SIRT) class of iterative methods for solving inverse problems. Based on a careful analysis of the semi-convergence behavior of these methods, we propose two new techniques to specify the relaxation parameters...
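Semi-convergence is easy to reproduce on a toy problem: for noisy data, the iteration error of a SIRT-type method first decreases and then grows again as the iterates begin to fit the noise. The sketch uses Landweber's iteration (a member of the SIRT class) with a fixed relaxation parameter on an invented smoothing operator:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ill-posed toy problem: Gaussian smoothing operator A, noisy data b.
n = 50
i = np.arange(n)
A = np.exp(-((i[:, None] - i[None, :]) / 3.0) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(0, 0.1, n)

# Landweber iteration (a member of the SIRT class), fixed relaxation.
lam = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
errors = []
for _ in range(3000):
    x = x + lam * A.T @ (b - A @ x)
    errors.append(np.linalg.norm(x - x_true))

best = int(np.argmin(errors))   # semi-convergence: interior minimum
print(f"best iteration: {best + 1}, error then: {errors[best]:.3f}, "
      f"error at the end: {errors[-1]:.3f}")
```

The interior minimum is exactly why the choice of relaxation parameters and stopping index matters for this class of methods.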

  17. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well-known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose of combining the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
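A minimal version of the hybrid idea, with an invented plant: keep the analytical model and fit only the residual between measurements and its predictions with a data-driven model (here a plain polynomial, standing in for the paper's learned error models):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical plant: known viscous term plus unmodeled friction.
def analytical_torque(q_dot):
    return 2.0 * q_dot                               # analytical model

def true_torque(q_dot):
    return 2.0 * q_dot + 0.8 * np.tanh(5 * q_dot)    # + unmodeled friction

q_dot = rng.uniform(-1, 1, 200)
tau = true_torque(q_dot) + rng.normal(0, 0.02, 200)  # noisy measurements

# Learn only the residual between measurements and the analytical model.
residual = tau - analytical_torque(q_dot)
coeffs = np.polyfit(q_dot, residual, deg=7)          # data-driven error model

def hybrid_torque(q_dot):
    return analytical_torque(q_dot) + np.polyval(coeffs, q_dot)

q_test = np.linspace(-0.9, 0.9, 50)
err_analytical = np.abs(analytical_torque(q_test) - true_torque(q_test)).mean()
err_hybrid = np.abs(hybrid_torque(q_test) - true_torque(q_test)).mean()
print(f"analytical error: {err_analytical:.3f}, hybrid error: {err_hybrid:.3f}")
```

Because the learner only has to represent the (small, smooth) error term rather than the whole plant, it needs far less capacity and data than a fully data-based model.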

  18. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  19. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computed tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above the intact skull and above the skull defects, respectively, were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above the intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  20. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)]

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  1. Quantum decay model with exact explicit analytical solution

    Science.gov (United States)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  2. Coupled flow and salinity transport modelling in semi-arid environments

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Held, R.J.; Zimmermann, S.

    2006-01-01

    Numerical groundwater modelling is used as the base for sound aquifer system analysis and water resources assessment. In many cases, particularly in semi-arid and arid regions, groundwater flow is intricately linked to salinity transport. A case in point is the Shashe River Valley in Botswana. A ...

  3. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
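
    A CPU-sized sketch of the carving criterion (the paper's GPU version assigns one thread per voxel and integrates depth maps rather than the binary silhouettes assumed here): a voxel survives only if it projects inside the object's silhouette in every view.

```python
import numpy as np

# Minimal silhouette-based voxel carving (visual hull) on a toy sphere.
N = 32
idx = np.arange(N) - (N - 1) / 2.0
X, Y, Z = np.meshgrid(idx, idx, idx, indexing="ij")

# Ground-truth object: a sphere of radius 10 voxels.
true_obj = X**2 + Y**2 + Z**2 <= 10.0**2

# Orthographic silhouettes along the three coordinate axes.
sil_z = true_obj.any(axis=2)   # view down +z, indexed [x, y]
sil_y = true_obj.any(axis=1)   # view down +y, indexed [x, z]
sil_x = true_obj.any(axis=0)   # view down +x, indexed [y, z]

# Carve: keep voxels lying inside all three silhouettes.
hull = sil_z[:, :, None] & sil_y[:, None, :] & sil_x[None, :, :]

# The hull contains the object and is conservative (never smaller).
print(bool((hull & true_obj).sum() == true_obj.sum()),
      bool(hull.sum() >= true_obj.sum()))   # -> True True
```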

  4. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
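
    The classification step can be sketched with a fully parametric two-component mixture (the paper's model is semi-nonparametric; the normal densities and weights here are purely illustrative):

```python
import math

# Posterior consistency probabilities in a two-component mixture:
# a score z is drawn from pi0*f0 + pi1*f1, where f1 is the
# "functionally consistent" component.
def norm_pdf(z, mu, sd):
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_consistent(z, pi1=0.2, mu1=3.0):
    f0 = norm_pdf(z, 0.0, 1.0)   # null (inconsistent) component
    f1 = norm_pdf(z, mu1, 1.0)   # consistent component
    return pi1 * f1 / ((1 - pi1) * f0 + pi1 * f1)

# Declaring proteins with posterior > 0.9 bounds the Bayesian FDR on the
# declared set at roughly 1 - 0.9.
print(posterior_consistent(4.0) > 0.9, posterior_consistent(0.0) < 0.1)  # -> True True
```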

  5. A comparative study of the models dealing with localized and semi-localized transitions in thermally stimulated luminescence

    International Nuclear Information System (INIS)

    Kumar, Munish; Kher, R K; Bhatt, B C; Sunta, C M

    2007-01-01

    Different models dealing with localized and semi-localized transitions, namely Chen-Halperin, Mandowski and the model based on the Braunlich-Scharmann (BS) approach are compared. It has been found that for recombination dominant situations (r > 1), the three models differ. This implies that for localized transitions under recombination dominant situations, the Chen-Halperin model is the best representative of the thermally stimulated luminescence (TSL) process. It has also been found that for the TSL glow curves arising from delocalized recombination in Mandowski's semi-localized transitions model, the double peak structure of the TSL glow curve is a function of the radiation dose as well as of the heating rate. Further, the double peak structure of the TSL glow curves arising from delocalized recombination disappears at low doses as well as at higher heating rates. It has also been found that the TSL glow curves arising from delocalized recombination in the semi-localized transitions model based on the BS approach do not exhibit double peak structure as observed in the Mandowski semi-localized transitions model

  6. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly-embedded, the behavior of those structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI and various analytical models and approaches have been proposed. Based on the studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin-layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes

  7. Observation of Pt-{100}-p(2×2)-O reconstruction by an environmental TEM

    Directory of Open Access Journals (Sweden)

    Hengbo Li

    2016-06-01

    Full Text Available The surface structure of noble metal nanoparticles usually plays a crucial role during the catalytic process in the fields of energy and environment. It has been studied extensively by surface-analytic methods, such as scanning tunneling microscopy. However, it is still challenging to secure a direct observation of the structural evolution of the surfaces of nanocatalysts under reaction (gas and heating) conditions at the atomic scale. Here we report an in-situ observation of atomic reconstruction on Pt {100} surfaces exposed to oxygen in an environmental transmission electron microscope (TEM). Our high-resolution TEM images revealed that Pt-{100}-p(2×2)-O reconstruction occurs during the reaction between oxygen atoms and {100} facets. A reconstruction model was proposed, and TEM images simulated according to this model with different defocus values match the experimental results well.

  8. Comparison Between Laser Scanning and Automated 3d Modelling Techniques to Reconstruct Complex and Extensive Cultural Heritage Areas

    Science.gov (United States)

    Fassi, F.; Fregonese, L.; Ackermann, S.; De Troia, V.

    2013-02-01

    In the Cultural Heritage field, the need to survey objects quickly, with the ability to repeat the measurements several times for deformation or degradation monitoring, is increasing. In this paper, two significant cases, an architectural one and an archaeological one, are presented. For different reasons, including emergency situations, finding the optimal solution to enable a quick and well-timed survey for a complete digital reconstruction of the object is required. In both cases, two survey methods have been tested and used: a laser scanning approach that allows high-resolution and complete scans to be obtained within a short time, and a photogrammetric one that allows the three-dimensional reconstruction of the object from images. In recent months, several methodologies, including free or low-cost techniques, have emerged. This kind of software allows the fully automatic three-dimensional reconstruction of objects from images, returning a dense point cloud and, in some cases, a surface mesh model. In this paper some comparisons between the two methodologies mentioned above are presented, using real case studies as examples. The surveys have been performed by employing both photogrammetry and laser scanner techniques. The methodological and operational choices, depending on the required goal, the difficulties encountered during the survey with these methods, the execution time (that is the key parameter), and finally the obtained results, are fully described and examined. On the final 3D model, an analytical comparison has been made to analyse the differences, the tolerances, the possibilities for accuracy improvement, and future developments.
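
    A minimal sketch of the kind of analytical comparison described, computing per-point cloud-to-cloud distances between two models (brute-force nearest neighbors; real comparisons typically use spatial indexing and mesh-aware distances):

```python
import numpy as np

# For each point of the reference (laser-scan) cloud, the distance to the
# nearest point of the test (photogrammetric) cloud.
def cloud_to_cloud(reference, test, block=256):
    dists = np.empty(len(reference))
    for i in range(0, len(reference), block):      # block-wise to bound memory
        chunk = reference[i:i + block]
        d2 = ((chunk[:, None, :] - test[None, :, :]) ** 2).sum(axis=2)
        dists[i:i + block] = np.sqrt(d2.min(axis=1))
    return dists

rng = np.random.default_rng(2)
scan = rng.uniform(size=(500, 3))
photo = scan + np.array([0.001, 0.0, 0.0])   # toy systematic offset
d = cloud_to_cloud(scan, photo)
# No point can be farther from the test cloud than the imposed offset.
print(bool(d.max() <= 0.001 + 1e-12))   # -> True
```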

  9. An analytical discrete-ordinates solution for an improved one-dimensional model of three-dimensional transport in ducts

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    2015-01-01

    Highlights: • An improved 1-D model of 3-D particle transport in ducts is studied. • The cases of isotropic and directional incidence are treated with the ADO method. • Accurate numerical results are reported for ducts of circular cross section. • A comparison with results of other authors is included. • The ADO method is found to be very efficient. - Abstract: An analytical discrete-ordinates solution is developed for the problem of particle transport in ducts, as described by a one-dimensional model constructed with two basis functions. Two types of particle incidence are considered: isotropic incidence and incidence described by the Dirac delta distribution. Accurate numerical results are tabulated for the reflection probabilities of semi-infinite ducts and the reflection and transmission probabilities of finite ducts. It is concluded that the developed solution is more efficient than commonly used numerical implementations of the discrete-ordinates method.

  10. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; another prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  11. Design of homogeneous trench-assisted multi-core fibers based on analytical model

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2016-01-01

    We present a design method of homogeneous trench-assisted multicore fibers (TA-MCFs) based on an analytical model utilizing an analytical expression for the mode coupling coefficient between two adjacent cores. The analytical model can also be used for crosstalk (XT) properties analysis, such as ...

  12. A semi-analytical finite element process for nonlinear elastoplastic analysis of arbitrarily loaded shells of revolution

    International Nuclear Information System (INIS)

    Rensch, H.J.; Wunderlich, W.

    1981-01-01

    The governing partial differential equations used are valid for small strains and moderate rotations. Plasticity relations are based on J2 flow theory. In order to eliminate the circumferential coordinate, the loading as well as the unknown quantities are expanded in Fourier series in the circumferential direction. The nonlinear terms due to moderate rotations and plastic deformations are treated as pseudo-load quantities. In this way, the governing equations can be reduced to uncoupled systems of first-order ordinary differential equations in the meridional direction. They are then integrated over a shell segment via a matrix series expansion. The resulting element transfer matrices are transformed into stiffness matrices, and for the analysis of the total structure the finite element method is employed. Thus, arbitrary branching of the shell geometry is possible. Compared to two-dimensional approximations, the major advantage of the semi-analytical procedure is that the structural stiffness matrix usually has a small bandwidth, resulting in shorter computer run times. Moreover, its assemblage and triangularization have to be carried out only once because all nonlinear effects are treated as initial loads. (orig./HP)
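
    The decoupling step can be sketched numerically: expanding a circumferential load in a Fourier series yields harmonic amplitudes, each of which is then solved for independently in the meridional direction and superposed.

```python
import numpy as np

# Extract the harmonic amplitudes of a circumferential load p(theta)
# with a discrete Fourier transform (toy wind-type load).
M = 64
theta = 2 * np.pi * np.arange(M) / M
p = 1.0 + 0.5 * np.cos(theta) + 0.2 * np.cos(3 * theta)

coeffs = np.fft.rfft(p) / M
amp = 2 * np.abs(coeffs)
amp[0] /= 2              # the mean (n = 0) term is not doubled

# Each harmonic feeds its own uncoupled meridional problem.
print(round(amp[0], 6), round(amp[1], 6), round(amp[3], 6))   # -> 1.0 0.5 0.2
```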

  13. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  15. A three-dimensional semi-analytical solution for predicting drug release through the orifice of a spherical device.

    Science.gov (United States)

    Simon, Laurent; Ospina, Juan

    2016-07-25

    Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived using Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica(®). The proposed method allowed computation of the characteristic time. A larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content. Copyright © 2016 Elsevier B.V. All rights reserved.
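
    The closing observation is the standard first-order relaxation rule of thumb, easily checked: after four time constants a first-order process has released 1 - exp(-4), about 98.2%, of its content.

```python
import math

# First-order release profile R(t) = 1 - exp(-t/tau).
tau = 1.0                                  # effective time constant (arbitrary units)

def released(t):
    return 1.0 - math.exp(-t / tau)

print(round(100 * released(4 * tau), 1))   # -> 98.2
```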

  16. The Role of Surface Infiltration in Hydromechanical Coupling Effects in an Unsaturated Porous Medium of Semi-Infinite Extent

    Directory of Open Access Journals (Sweden)

    L. Z. Wu

    2017-01-01

    Full Text Available Rainfall infiltration into an unsaturated region of the earth’s surface is a pervasive natural phenomenon. During the rainfall-induced seepage process, the soil skeleton can deform and the permeability can change with the water content in the unsaturated porous medium. A coupled water infiltration and deformation formulation is used to examine a problem related to the mechanics of a two-dimensional region of semi-infinite extent. The van Genuchten model is used to represent the soil-water characteristic curve. The model, incorporating coupled infiltration and deformation, was developed to resolve the coupled problem in a semi-infinite domain based on numerical methods. The numerical solution is verified by the analytical solution when the coupled effects in an unsaturated medium of semi-infinite extent are considered. The computational results show that a numerical procedure can be employed to examine the semi-infinite unsaturated seepage incorporating coupled water infiltration and deformation. The analysis indicates that the coupling effect is significantly influenced by the boundary conditions of the problem and varies with the duration of water infiltration.
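
    The van Genuchten soil-water characteristic curve used in the coupled model can be sketched as follows (parameter values are illustrative, not those of the study):

```python
import math

# van Genuchten water retention: effective saturation vs. suction head h,
#   Se(h) = [1 + (alpha*h)^n]^(-m),  m = 1 - 1/n,
# mapped to volumetric water content theta(h).
def theta(h, theta_r=0.06, theta_s=0.42, alpha=0.5, n=1.6):
    if h <= 0.0:                       # saturated zone
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * se

# Water content equals theta_s at saturation and decreases monotonically
# toward theta_r as suction grows.
print(theta(0.0) == 0.42, theta(1.0) > theta(10.0) > theta(1000.0) > 0.06)  # -> True True
```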

  17. Labral reconstruction with iliotibial band autografts and semitendinosus allografts improves hip joint contact area and contact pressure: an in vitro analysis.

    Science.gov (United States)

    Lee, Simon; Wuerz, Thomas H; Shewman, Elizabeth; McCormick, Frank M; Salata, Michael J; Philippon, Marc J; Nho, Shane J

    2015-01-01

    Labral reconstruction using iliotibial band (ITB) autografts and semitendinosus (Semi-T) allografts has recently been described in cases of labral deficiency. To characterize the joint biomechanics with a labrum-intact, labrum-deficient, and labrum-reconstructed acetabulum in a hip cadaveric model. The hypothesis was that labral resection would decrease contact area, increase contact pressure, and increase peak force, while subsequent labral reconstruction with ITB autografts or Semi-T allografts would restore these values toward the native intact labral state. Controlled laboratory study. Ten fresh-frozen human cadaveric hips were analyzed utilizing thin-film piezoresistive load sensors to measure contact area, contact pressure, and peak force (1) with the native intact labrum, (2) after segmental labral resection, and (3) after graft labral reconstruction with either ITB autografts or Semi-T allografts. Each specimen was examined at 20° of extension and 60° of flexion. Statistical analysis was conducted through 1-way analysis of variance with post hoc Games-Howell tests. For the ITB group, labral resection significantly decreased contact area (at 20°: 73.2%±5.38%, P=.0010; at 60°: 78.5%±6.93%, P=.0063) and increased contact pressure (at 20°: 106.7%±4.15%, P=.0387; at 60°: 103.9%±1.15%, P=.0428). In addition, ITB reconstruction improved contact area (at 20°: 87.2%±12.3%, P=.0130; at 60°: 90.5%±8.81%, P=.0079) and contact pressure (at 20°: 98.5%±5.71%, P=.0476; at 60°: 96.6%±1.13%, P=.0056) from the resected state. Contact pressure at 60° of flexion was significantly lower compared with the native labrum (P=.0420). For the Semi-T group, labral resection significantly decreased contact area (at 20°: 68.1%±12.57%, P=.0002; at 60°: 67.5%±6.70%, P=.0002) and increased contact pressure (at 20°: 105.3%±3.73%, P=.0304; at 60°: 106.8%±4.04%, P=.0231). Semi-T reconstruction improved contact area (at 20°: 87.9%±7.95%, P=.0087; at 60°: 92.9%±13

  18. A Synthesis of Light Absorption Properties of the Arctic Ocean: Application to Semi-analytical Estimates of Dissolved Organic Carbon Concentrations from Space

    Science.gov (United States)

    Matsuoka, A.; Babin, M.; Doxaran, D.; Hooker, S. B.; Mitchell, B. G.; Belanger, S.; Bricaud, A.

    2014-01-01

    The light absorption coefficients of particulate and dissolved materials are the main factors determining the light propagation of the visible part of the spectrum and are, thus, important for developing ocean color algorithms. While these absorption properties have recently been documented by a few studies for the Arctic Ocean [e.g., Matsuoka et al., 2007, 2011; Ben Mustapha et al., 2012], the datasets used in the literature were sparse and individually insufficient to draw a general view of the basin-wide spatial and temporal variations in absorption. To achieve such a task, we built a large absorption database at the pan-Arctic scale by pooling the majority of published datasets and merging new datasets. Our results showed that the total non-water absorption coefficients measured in the Eastern Arctic Ocean (EAO; Siberian side) are significantly higher than in the Western Arctic Ocean (WAO; North American side). This higher absorption is explained by higher concentrations of colored dissolved organic matter (CDOM) in watersheds on the Siberian side, which contain a large amount of dissolved organic carbon (DOC) compared to waters off North America. In contrast, the relationship between phytoplankton absorption and chlorophyll a (chl a) concentration in the EAO was not significantly different from that in the WAO. Because our semi-analytical CDOM absorption algorithm is based on chl a-specific phytoplankton absorption values [Matsuoka et al., 2013], this result indirectly suggests that CDOM absorption can be appropriately derived not only for the WAO but also for the EAO using ocean color data. Derived CDOM absorption values were reasonable compared to in situ measurements. By combining this algorithm with empirical DOC versus CDOM relationships, a semi-analytical algorithm for estimating DOC concentrations in coastal waters at the pan-Arctic scale is presented and applied to satellite ocean color data.
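
    The semi-analytical chain can be sketched with the standard exponential CDOM spectral model plus a linear DOC-vs-CDOM relation (all coefficients below are illustrative placeholders, not the paper's fitted values):

```python
import math

# Exponential CDOM absorption spectrum and a hypothetical linear
# DOC-vs-CDOM relationship.
def a_cdom(wavelength, a_ref=0.5, ref=443.0, slope=0.018):
    """CDOM absorption (m^-1); spectral slope in nm^-1 (assumed value)."""
    return a_ref * math.exp(-slope * (wavelength - ref))

def doc_estimate(a443, k=120.0, c=20.0):
    """Illustrative linear DOC (umol/L) vs a_CDOM(443) (m^-1) relation."""
    return k * a443 + c

a443 = a_cdom(443.0)
# Absorption increases toward shorter wavelengths; DOC follows linearly.
print(a443 == 0.5, a_cdom(412.0) > a443, round(doc_estimate(a443), 1))  # -> True True 80.0
```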

  19. Reconstructing Climate Change: The Model-Data Ping-Pong

    Science.gov (United States)

    Stocker, T. F.

    2017-12-01

    When Cesare Emiliani, the father of paleoceanography, made the first attempts at a quantitative reconstruction of Pleistocene climate change in the early 1950s, climate models were not yet conceived. The understanding of paleoceanographic records was therefore limited, and scientists had to resort to plausibility arguments to interpret their data. With the advent of coupled climate models in the early 1970s, for the first time hypotheses about climate processes and climate change could be tested in a dynamically consistent framework. However, only a model hierarchy can cope with the long time scales and the multi-component physical-biogeochemical Earth System. There are many examples of how climate models have inspired the interpretation of paleoclimate data on the one hand, and conversely, how data have questioned long-held concepts and models. In this lecture I critically revisit a few examples of this model-data ping-pong, such as the bipolar seesaw, the mid-Holocene greenhouse gas increase, millennial and rapid CO2 changes reconstructed from polar ice cores, and the interpretation of novel paleoceanographic tracers. These examples also highlight many of the still unsolved questions and provide guidance for future research. The combination of high-resolution paleoceanographic data and modeling has never been more relevant than today. It will be the key for an appropriate risk assessment of impacts on the Earth System that are already underway in the Anthropocene.

  20. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions depend not only on the present state (Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
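
    The modeling distinction at the heart of the article can be shown in a few lines: exponential sojourn times are memoryless (standard Markov), while, for example, Weibull sojourns make the remaining time in a state depend on the time already spent there:

```python
import math

# Survival functions of two sojourn-time distributions.
def surv_exp(t, rate=1.0):
    return math.exp(-rate * t)

def surv_weibull(t, scale=1.0, shape=2.0):
    return math.exp(-((t / scale) ** shape))

def cond_surv(surv, extra, spent):
    """P(T > spent + extra | T > spent)."""
    return surv(spent + extra) / surv(spent)

# Exponential: conditioning on time already spent changes nothing.
# Weibull (shape > 1): the longer the stay so far, the likelier an exit.
print(round(cond_surv(surv_exp, 1.0, 2.0), 6) == round(surv_exp(1.0), 6),
      cond_surv(surv_weibull, 1.0, 2.0) < surv_weibull(1.0))   # -> True True
```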

  1. Stochastic methods of data modeling: application to the reconstruction of non-regular data

    International Nuclear Information System (INIS)

    Buslig, Leticia

    2014-01-01

This research thesis addresses two issues related to IRSN studies. The first one deals with the mapping of measurement data (the IRSN must regularly monitor the radioactivity level in France and, for this purpose, uses a network of sensors distributed across the French territory). The objective is then to predict, by means of a reconstruction model that uses the observations, maps which will be used to inform the population. The second application deals with taking uncertainties into account in complex computation codes (the IRSN must perform safety studies to assess the risks of loss of integrity of a nuclear reactor in case of hypothetical accidents, and for this purpose uses codes which simulate the physical phenomena occurring within an installation). Some input parameters are not precisely known, and the author therefore tries to assess the impact of these uncertainties on simulated values. She notably aims at determining whether variations of the input parameters may push the system towards a behaviour which is very different from that obtained with reference parameter values, or even towards a state in which safety conditions are not met. The precise objective of this second part is then to build a reconstruction model which is not costly (in terms of computation time) and to perform simulations in relevant areas (strong-gradient areas, threshold-overrun areas, and so on). Two issues are then important: the choice of the approximation model and the construction of the design of experiments. The model is based on a kriging-type stochastic approach, and an important part of the work addresses the development of new numerical techniques for the design of experiments. The first part proposes a generic criterion for adaptive design, and reports its analysis and implementation. In the second part, an alternative to error variance addition is developed.
Methodological developments are tested on analytic functions, and then applied to the cases of measurement mapping and
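As a rough illustration of the kriging-type approach mentioned in the abstract, the sketch below implements simple (zero-mean) kriging with a squared-exponential covariance; the hyperparameters are arbitrary assumptions rather than values from the thesis. The prediction variance it returns is the quantity that adaptive design-of-experiments criteria exploit when choosing where to run the next simulation:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance, a common kriging choice."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def krige(x_obs, y_obs, x_new, length=1.0, var=1.0, noise=1e-8):
    """Zero-mean kriging predictor with prediction standard deviation.
    `noise` is a small nugget for numerical stability."""
    K = rbf_kernel(x_obs, x_obs, length, var) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_new, x_obs, length, var)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = rbf_kernel(x_new, x_new, length, var) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

At an observed point the predictive standard deviation collapses to (almost) zero; far from the data it grows back toward the prior, flagging regions where new simulation runs would be most informative.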

  2. Evaluation of semi-generic PBTK modeling for emergency risk assessment after acute inhalation exposure to volatile hazardous chemicals.

    Science.gov (United States)

    Olie, J Daniël N; Bessems, Jos G; Clewell, Harvey J; Meulenbelt, Jan; Hunault, Claudine C

    2015-08-01

Physiologically based toxicokinetic (PBTK) models may facilitate emergency risk assessment after chemical incidents with inhalation exposure, but they are rarely used due to their relative complexity and skill requirements. We aimed to tackle this problem by evaluating a semi-generic PBTK model built in MS Excel for nine chemicals that are widely used and often released in chemical incidents. The semi-generic PBTK model was used to predict blood concentration-time curves using inhalation exposure scenarios from human volunteer studies, case reports, and hypothetical exposures at Emergency Response Planning Guideline, Level 3 (ERPG-3) levels. Predictions using this model were compared with measured blood concentrations from volunteer studies or case reports, as well as with blood concentrations predicted by chemical-specific models. The performance of the semi-generic model was evaluated on biological rationale, accuracy, ease of use, and range of application. Our results indicate that the semi-generic model can easily be used to predict blood levels for eight out of nine parent chemicals (dichloromethane, benzene, xylene, styrene, toluene, isopropanol, trichloroethylene and tetrachloroethylene). However, for methanol, 2-propanol and dichloromethane the semi-generic model could not cope with the endogenous production of methanol and of acetone (a metabolite of 2-propanol), nor could it simulate the formation of HbCO, which is one of the toxic end-points of dichloromethane. The model is easy and intuitive to use by people who are not familiar with toxicokinetic models. A semi-generic PBTK modeling approach can be used as a 'quick-and-dirty' method to get a crude estimate of the exposure dose. Copyright © 2015 Elsevier Ltd. All rights reserved.
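The Excel model itself is not reproduced here, but a minimal one-compartment inhalation toxicokinetic sketch conveys the kind of calculation involved. Every parameter below (ventilation rate, volume of distribution, elimination rate constant) is a hypothetical placeholder, not a value from the paper:

```python
def blood_concentration(c_air_mg_m3, t_end_h, dt=0.01,
                        vent=0.6, vd=40.0, ke=0.3):
    """Euler integration of a one-compartment inhalation model:
    intake = ventilation (m^3/h) * air concentration (mg/m^3),
    diluted into a volume of distribution vd (L); elimination is
    first-order with rate ke (1/h). Returns blood conc. in mg/L."""
    c = 0.0
    t = 0.0
    while t < t_end_h:
        uptake = vent * c_air_mg_m3 / vd   # mg/L per hour entering blood
        c += dt * (uptake - ke * c)        # forward-Euler step
        t += dt
    return c
```

With these placeholder values, exposure to 100 mg/m³ approaches a steady state of vent·c_air/(vd·ke) = 5 mg/L, which is the kind of crude 'quick-and-dirty' estimate the abstract describes.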

  3. Analytical Model for Sensor Placement on Microprocessors

    National Research Council Canada - National Science Library

    Lee, Kyeong-Jae; Skadron, Kevin; Huang, Wei

    2005-01-01

    .... In this paper, we present an analytical model that describes the maximum temperature differential between a hot spot and a region of interest based on their distance and processor packaging information...

  4. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

© 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering total variation penalty terms on both the image and an idealized sinogram to be reconstructed from a given Poisson-distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better in reconstructing images with thin structures.
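To give a flavour of total-variation regularization, the sketch below minimizes a smoothed 1D TV-denoising objective by plain gradient descent. This is an illustrative stand-in only: it is neither the split Bregman solver nor the joint image/sinogram model of the paper, and all parameter values are arbitrary:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.2, step=0.05, iters=300, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed total-variation objective favouring piecewise-constant
    signals (edges are kept, small oscillations are flattened)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps)   # derivative of smoothed |dx|
        div = np.zeros_like(x)
        div[:-1] -= w                     # -phi'(d_j) term
        div[1:] += w                      # +phi'(d_{j-1}) term
        x -= step * ((x - y) + lam * div)
    return x
```

Run on a noisy step signal, the result sits closer to the clean edges than the noisy input, which is exactly the behaviour that makes TV penalties attractive for images with sharp boundaries.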

  5. Two-dimensional analytical model of a proton exchange membrane fuel cell

    International Nuclear Information System (INIS)

    Liu, Jia Xing; Guo, Hang; Ye, Fang; Ma, Chong Fang

    2017-01-01

In this study, a two-dimensional full-cell analytical model of a proton exchange membrane fuel cell is developed. The analytical model describes the electrochemical reactions on the anode and cathode catalyst layers, reactant diffusion in the gas diffusion layer, gas flow in the gas channel, etc. The analytical solution is derived from the basic physical equations. The performance predicted by the model is in good agreement with the experimental data. The results show that the polarization mainly occurs on the cathode side of the proton exchange membrane fuel cell. The anodic overpotential cannot be neglected. The hydrogen and oxygen concentrations decrease along the channel flow direction. The hydrogen and oxygen concentrations in the catalyst layer decrease with the current density. As predicted by the model, concentration polarization mainly occurs on the cathode side. - Highlights: • A 2D full-cell analytical model of a proton exchange membrane fuel cell is developed. • The analytical solution is deduced from the basic equations. • The anode overpotential is not so small that it can be neglected. • Species concentration distributions in the fuel cell are obtained and analyzed.

  6. "Growing trees backwards": Description of a stand reconstruction model

    Science.gov (United States)

    Jonathan D. Bakker; Andrew J. Sanchez Meador; Peter Z. Fule; David W. Huffman; Margaret M. Moore

    2008-01-01

    We describe an individual-tree model that uses contemporary measurements to "grow trees backward" and reconstruct past tree diameters and stand structure in ponderosa pine dominated stands of the Southwest. Model inputs are contemporary structural measurements of all snags, logs, stumps, and living trees, and radial growth measurements, if available. Key...

  7. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the validity and accuracy of the proposed model.

  8. Analytical modeling of post-tensioned precast beam-to-column connections

    International Nuclear Information System (INIS)

    Kaya, Mustafa; Arslan, A. Samet

    2009-01-01

In this study, post-tensioned precast beam-to-column connections are tested experimentally at different stress levels and modelled analytically using a 3D nonlinear finite element method. The ANSYS finite element software is used for this purpose. Nonlinear static analysis is used to determine the connection strength, behavior and stiffness when subjected to cyclic inelastic loads simulating ground excitation during an earthquake. The results obtained from the analytical studies are compared with the test results. In terms of stiffness, the initial stiffness of the analytical models was lower than that of the tested specimens. Overall, modelling these types of connection with 3D FEM can provide crucial information beforehand, and overcome the disadvantages of time-consuming workmanship and the cost of experimental studies.

  9. A semi-analytical solution for elastic analysis of rotating thick cylindrical shells with variable thickness using disk form multilayers.

    Science.gov (United States)

    Zamani Nejad, Mohammad; Jabbari, Mehdi; Ghannad, Mehdi

    2014-01-01

Using disk-form multilayers, a semi-analytical solution has been derived for the determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk-form layers with their thickness corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations form a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and the continuity conditions between the layers, yields the displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found.

  10. A Semi-Analytical Solution for Elastic Analysis of Rotating Thick Cylindrical Shells with Variable Thickness Using Disk Form Multilayers

    Directory of Open Access Journals (Sweden)

    Mohammad Zamani Nejad

    2014-01-01

Full Text Available Using disk-form multilayers, a semi-analytical solution has been derived for the determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk-form layers with their thickness corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations form a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and the continuity conditions between the layers, yields the displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found.

  11. Micromechanical modeling of the elasto-viscoplastic behavior of semi-crystalline polymers

    NARCIS (Netherlands)

    Dommelen, van J.A.W.; Parks, D.M.; Boyce, M.C.; Brekelmans, W.A.M.; Baaijens, F.P.T.

    2003-01-01

A micromechanically based constitutive model for the elasto-viscoplastic deformation and texture evolution of semi-crystalline polymers is developed. The model idealizes the microstructure to consist of an aggregate of two-phase layered composite inclusions. A new framework for the composite inclusion

  12. Technical note: Representing glacier geometry changes in a semi-distributed hydrological model

    Directory of Open Access Journals (Sweden)

    J. Seibert

    2018-04-01

Full Text Available Glaciers play an important role in high-mountain hydrology. While changing glacier areas are considered of highest importance for understanding future changes in runoff, glaciers are often only poorly represented in hydrological models. Most importantly, the direct coupling between the simulated glacier mass balances and changing glacier areas needs feasible solutions. The use of a complex glacier model is often not possible due to data and computational limitations. The Δh parameterization is a simple approach to account for the spatial variation of glacier thickness and area changes. Here, we describe a conceptual implementation of the Δh parameterization in the semi-distributed hydrological model HBV-light, which also allows for the representation of glacier advance phases and for comparison between the different versions of the implementation. The coupled glacio-hydrological simulation approach, which could also be implemented in many other semi-distributed hydrological models, is illustrated with an example application.
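The core idea of the Δh parameterization, distributing a glacier-wide mass change unevenly over elevation bands so that the terminus thins fastest, can be sketched as follows. The linear weights are a made-up pattern for illustration, not the published Δh polynomials:

```python
def apply_delta_h(ice_thickness, mass_change):
    """Distribute a glacier-wide mass change (same units as thickness,
    summed over bands) across elevation bands. Band 0 is the terminus
    and receives the largest share; the top band the smallest."""
    n = len(ice_thickness)
    w = [n - i for i in range(n)]          # illustrative linear Δh weights
    s = float(sum(w))
    new = []
    for h, wi in zip(ice_thickness, w):
        dh = mass_change * wi / s          # band-specific thickness change
        new.append(max(h + dh, 0.0))       # thickness cannot go negative
    return new
```

As long as no band is emptied, the band-wise changes sum exactly to the prescribed glacier-wide mass change, which is the coupling property a glacio-hydrological model needs.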

  13. Conceptualising forensic science and forensic reconstruction. Part I: A conceptual model.

    Science.gov (United States)

    Morgan, R M

    2017-11-01

There has been a call for forensic science to actively return to the approach of scientific endeavour. The importance of incorporating an awareness of the requirements of the law in its broadest sense, and of embedding research into both practice and policy within forensic science, is arguably critical to achieving such an endeavour. This paper presents a conceptual model (FoRTE) that outlines the holistic nature of trace evidence in the 'endeavour' of forensic reconstruction. This model offers insights into the different components intrinsic to transparent, reproducible and robust reconstructions in forensic science. The importance of situating evidence within the whole forensic science process (from crime scene to court), of developing evidence bases to underpin each stage, of frameworks that offer insights into the interaction of different lines of evidence, and of the role of expertise in decision making are presented and their interactions identified. It is argued that such a conceptual model has value in identifying future steps for harnessing the value of trace evidence in forensic reconstruction. It also highlights the need to develop a nuanced approach to reconstructions that incorporates both empirical evidence bases and expertise. A conceptual understanding has the potential to ensure that the endeavour of forensic reconstruction has its roots in 'problem-solving' science, and can offer transparency and clarity in the conclusions and inferences drawn from trace evidence, thereby enabling the value of trace evidence to be realised in investigations and the courts. Copyright © 2017 The Author. Published by Elsevier B.V. All rights reserved.

  14. Application of dynamic model to predict some inside environment variables in a semi-solar greenhouse

    Directory of Open Access Journals (Sweden)

    Behzad Mohammadi

    2018-06-01

Full Text Available Greenhouses are one of the most effective cultivation methods, with a yield per cultivated area up to 10 times that of open-field cultivation, but their consumption of fossil fuels is very high. The greenhouse environment is an uncertain nonlinear system which classical modeling methods have difficulty describing. There are many control methods, such as adaptive, feedback and intelligent control, and they all require a precise model. Therefore, many modeling methods have been proposed for this purpose, including physical, transfer-function and black-box modeling. The objective of this paper is the modeling and experimental validation of some inside environment variables in an innovative greenhouse structure (a semi-solar greenhouse). For this purpose, a semi-solar greenhouse was designed and constructed in the north-west of Iran, in Azerbaijan Province (38°10′N, 46°18′E, elevation 1364 m above sea level). The main inside environment factors, the inside air temperature (Ta) and the inside soil temperature (Ts), were collected as the experimental data samples. A dynamic heat transfer model was used to estimate the temperature at two different points of the semi-solar greenhouse from initial values. The results showed that the dynamic model can predict the inside temperatures at the two points (Ta and Ts) with RMSE, MAPE and EF of about 5.3 °C, 10.2% and 0.78, and 3.45 °C, 7.7% and 0.86, respectively. Keywords: Semi-solar greenhouse, Dynamic model, Commercial greenhouse
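The fit statistics quoted above are standard model-evaluation metrics. Assuming EF denotes the Nash–Sutcliffe model efficiency (an assumption, since the abstract does not expand the acronym), they can be computed as:

```python
import math

def rmse(obs, sim):
    """Root mean square error, in the units of the observations."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mape(obs, sim):
    """Mean absolute percentage error (%); observations must be nonzero."""
    return 100.0 * sum(abs((o - s) / o) for o, s in zip(obs, sim)) / len(obs)

def efficiency(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the observed mean (dimensionless)."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

Note that EF is dimensionless, which is why values such as 0.78 and 0.86 carry no unit, unlike RMSE (°C) and MAPE (%).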

  15. Hidden Semi Markov Models for Multiple Observation Sequences: The mhsmm Package for R

    DEFF Research Database (Denmark)

    O'Connell, Jarad Michael; Højsgaard, Søren

    2011-01-01

Hidden Markov models only allow a geometrically distributed sojourn time in a given state, while hidden semi-Markov models extend this by allowing an arbitrary sojourn distribution. We demonstrate the software with simulation examples and an application involving the modelling of the ovarian cycle of dairy cows...
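A hidden semi-Markov chain differs from a hidden Markov chain exactly as described: sojourn lengths are drawn from an explicit, arbitrary distribution rather than being implicitly geometric. The mhsmm package itself is R; below is a minimal Python sketch of sampling a state path, with made-up sojourn distributions:

```python
import random

def sample_hsmm_path(n_steps, trans, sojourn, seed=0):
    """State path of a semi-Markov chain: on entering state s, draw an
    explicit sojourn length from sojourn[s], stay that long, then jump
    according to the transition matrix `trans` (zero diagonal, since
    self-transitions are absorbed into the sojourn). A plain Markov
    chain is the special case of geometric sojourns."""
    rng = random.Random(seed)
    path, state = [], 0
    while len(path) < n_steps:
        path.extend([state] * sojourn[state](rng))   # explicit dwell time
        u, cum = rng.random(), 0.0
        for nxt, p in enumerate(trans[state]):       # sample next state
            cum += p
            if u < cum:
                state = nxt
                break
    return path[:n_steps]

# Two states: state 0 lasts 2-4 steps, state 1 lasts 1-2 steps.
sojourn = {0: lambda r: r.randint(2, 4), 1: lambda r: r.randint(1, 2)}
trans = [[0.0, 1.0], [1.0, 0.0]]
```

In a *hidden* semi-Markov model, an emission distribution per state would sit on top of this path; the sketch shows only the sojourn mechanism that distinguishes the model class.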

  16. Development of a simulation model of semi-active suspension for monorail

    Science.gov (United States)

    Hasnan, K.; Didane, D. H.; Kamarudin, M. A.; Bakhsh, Qadir; Abdulmalik, R. E.

    2016-11-01

The new Kuala Lumpur Monorail Fleet Expansion Project (KLMFEP) uses semi-active technology in its suspension system. It is recognized that the suspension system influences the ride quality. Thus, one way to further improve the ride quality is by fine-tuning the semi-active suspension system on the new KL Monorail. The semi-active suspension of the monorail, specifically in terms of improving ride quality, could be exploited further. Hence, a simulation model is required to act as a platform for testing the design of a complete suspension system, particularly for investigating its ride comfort performance. The MSC Adams software was used to develop the simulation platform, with all parameters and data represented by mathematical equations and the new KL Monorail as the reference model. In the simulation, the model was subjected to a step disturbance on the guideway for stability and ride comfort analysis. The model showed positive results: the monorail remained stable in the stability analysis, and it achieved a Rating 1 classification (very comfortable) for ride comfort performance under ISO 2631. The model is also adjustable, flexible and understandable by engineers within the field for the purpose of further development.

  17. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water-bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and the observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone conjugate gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
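The minimization step above relies on conjugate gradients. The thesis uses a monotone CG with suboptimal descent steps on a non-quadratic cost; the standard linear CG below shows only the basic iteration for the quadratic case A x = b with A symmetric positive definite, which in a MAP reconstruction would typically be H^T H plus the prior curvature:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
    """Standard linear conjugate gradient for A x = b, with A symmetric
    positive definite. Stops when the relative residual drops below tol."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * b_norm:
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of direction
        rs = rs_new
    return x
```

Each iteration needs only matrix-vector products, which is what makes CG attractive for the very large reconstruction matrices the abstract mentions.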

  18. A Kalman Filter-Based Method for Reconstructing GMS-5 Global Solar Radiation by Introduction of In Situ Data

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2013-06-01

Full Text Available Solar radiation is an important input for various land-surface energy balance models. Global solar radiation data retrieved from the Japanese Geostationary Meteorological Satellite 5 (GMS-5)/Visible and Infrared Spin Scan Radiometer (VISSR) have been widely used in recent years. However, due to the impact of clouds, aerosols, solar elevation angle and bidirectional reflection, spatial or temporal gaps often exist in solar radiation datasets derived from satellite remote sensing, which can seriously affect the accuracy of land-surface energy balance models. The goal of reconstructing radiation data is to simulate the seasonal variation patterns of solar radiation, using various statistical and numerical analysis methods to interpolate the missing observations and optimize the whole time-series dataset. In the current study, a reconstruction method based on data assimilation is proposed. Using a Kalman filter as the assimilation algorithm, the retrieved radiation values are corrected through the continuous introduction of local in situ global solar radiation (GSR) measurements provided by the China Meteorological Data Sharing Service System (Daily radiation dataset, Version 3), collected from 122 radiation stations across China. A complete and optimal time-series dataset is ultimately obtained. This method is applied and verified in China's northern agricultural areas (humid, semi-humid and semi-arid regions in a warm temperate zone). The results show that the mean value and standard deviation of the reconstructed solar radiation series are significantly improved, with greater consistency with ground-based observations than the series before reconstruction. The method implemented in this study provides a new solution for the time-series reconstruction of surface energy parameters, which can provide more reliable data for scientific research and regional renewable-energy planning.
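The assimilation step can be sketched with a scalar Kalman filter that carries a random-walk forecast forward and corrects it first with the satellite retrieval and then, where a station reports, with the more trusted in situ value. All noise variances below are illustrative placeholders, not those of the GMS-5 study:

```python
def kalman_fuse(retrievals, in_situ, q=4.0, r_sat=9.0, r_ground=1.0):
    """Reconstruct a daily radiation series by fusing satellite
    retrievals with sparse in situ data. `in_situ[t]` is None where no
    station value exists; `retrievals[t]` may be None for retrieval
    gaps (the first retrieval is assumed present). q is the
    random-walk process variance; r_* are measurement variances."""
    x, p = retrievals[0], r_sat          # initial state and variance
    out = []
    for t, z in enumerate(retrievals):
        p += q                           # predict: random-walk forecast
        for meas, r in ((z, r_sat), (in_situ[t], r_ground)):
            if meas is None:
                continue                 # skip missing measurements
            k = p / (p + r)              # Kalman gain
            x += k * (meas - x)          # update state estimate
            p *= (1.0 - k)               # update estimate variance
        out.append(x)
    return out
```

Because the ground variance is smaller than the satellite variance, in situ values pull the estimate harder, which is the sense in which introducing station data "corrects" the retrieved series.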

  19. Crop Upgrading Strategies and Modelling for Rainfed Cereals in a Semi-Arid Climate—A Review

    Directory of Open Access Journals (Sweden)

    Festo Richard Silungwe

    2018-03-01

Full Text Available Spatiotemporal rainfall variability and low soil fertility are the primary crop production challenges facing poor farmers in semi-arid environments, yet few solutions exist for addressing them. The literature provides several crop upgrading strategies (UPS) for improving crop yields, and biophysical models are used to simulate these strategies. However, the suitability of UPS is limited by poor systematization of their areas of application and by the need to cope with the challenges faced by poor farmers. In this study, we reviewed 187 papers from peer-reviewed journals, conferences and reports that discuss UPS suitable for cereals and biophysical models used to assist in the selection of UPS in semi-arid areas. We found that four UPS were the most suitable, namely tied ridges, microdose fertilization, varying sowing dates, and field scattering. The DSSAT, APSIM and AquaCrop models adequately simulate these UPS. This work provides a systematization of crop UPS and models in semi-arid areas that can be applied by scientists and planners.

  20. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of the excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for the simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using a polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small-animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.
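The computational payoff of rotational symmetry is that the forward matrix becomes (block-)circulant in the view angle, so one stored block plus FFTs can replace the full matrix. The toy sketch below demonstrates the plain circulant case; the block-wise extension follows the same pattern:

```python
import numpy as np

def circulant_matvec(first_row, x):
    """Multiply by a circulant matrix C (C[i, j] = first_row[(j - i) % n])
    using only its first row: a circulant matvec is a circular
    convolution, so it costs O(n log n) time and O(n) storage via the
    FFT instead of O(n^2) with the dense matrix."""
    # The first column of C is first_row[0] followed by the rest reversed.
    first_col = np.concatenate(([first_row[0]], first_row[:0:-1]))
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))
```

For a rotationally symmetric acquisition, each angular block of the model matrix repeats under rotation, which is precisely the structure that yields the order-of-magnitude memory reduction reported above.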