WorldWideScience

Sample records for modeling cascade interpolation

  1. Fuzzy linguistic model for interpolation

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Adabitabar Firozja, M.

    2007-01-01

    In this paper, a fuzzy method for the interpolation of smooth curves is presented. We propose a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method

  2. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  3. Scientific data interpolation with low dimensional manifold model

    Science.gov (United States)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  4. Scientific data interpolation with low dimensional manifold model

    International Nuclear Information System (INIS)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

    2017-01-01

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  5. Interpolation solution of the single-impurity Anderson model

    International Nuclear Information System (INIS)

    Kuzemsky, A.L.

    1990-10-01

    The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong- and weak-correlation limits, is obtained. The unified concept of relevant mean-field renormalizations is indispensable in the strong-correlation limit. (author). 21 refs

  6. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    Science.gov (United States)

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
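The two-step scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the logistic and amount-regression coefficients are assumed to have been fitted beforehand, and the feature vector, coefficient names, and threshold are illustrative.

```python
import math

def two_step_estimate(features, occ_coef, amt_coef, threshold=0.5):
    """Two-step daily precipitation estimate at an ungauged point:
    (1) a logistic model decides wet/dry occurrence;
    (2) a separate (here linear) regression gives the amount on wet days only.
    occ_coef and amt_coef are (intercept, slope, ...) tuples, assumed pre-fitted."""
    logit = occ_coef[0] + sum(b * f for b, f in zip(occ_coef[1:], features))
    p_wet = 1.0 / (1.0 + math.exp(-logit))   # occurrence probability
    if p_wet < threshold:
        return 0.0                            # predicted dry day
    amount = amt_coef[0] + sum(b * f for b, f in zip(amt_coef[1:], features))
    return max(0.0, amount)                   # amounts cannot be negative
```

Splitting occurrence from amount avoids the smearing of many small non-zero estimates that a single regression produces on dry days.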

  7. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach to this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method that combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

  8. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.

    2014-06-11

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitation and standard errors on a 1 km² grid over the whole region. © 2014 Royal Meteorological Society.

  9. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.; Ugarte, M. D.; Goicoa, T.; Genton, Marc G.

    2014-01-01

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitation and standard errors on a 1 km² grid over the whole region. © 2014 Royal Meteorological Society.

  10. The Attention Cascade Model and Attentional Blink

    Science.gov (United States)

    Shih, Shui-I

    2008-01-01

    An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…

  11. A comparison of linear interpolation models for iterative CT reconstruction.

    Science.gov (United States)

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but the significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context in which they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects

  12. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
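The transfer matrix method mentioned above can be illustrated on the simplest textbook case, transmission through a single rectangular barrier. This is a hedged sketch in natural units (ħ = m = 1), not the multi-layer device solver the review discusses; the 2×2 matrix propagates (ψ, ψ') across the barrier and the result can be checked against the analytic tunneling formula.

```python
import cmath

def barrier_transmission(E, V, a):
    """Transmission probability through a rectangular barrier of height V and
    width a for a particle of energy E (units with hbar = m = 1).
    A 2x2 transfer matrix propagates (psi, psi') across the barrier."""
    k1 = cmath.sqrt(2 * E)        # wavevector outside the barrier
    k2 = cmath.sqrt(2 * (E - V))  # purely imaginary inside for E < V
    # propagation matrix for psi'' = -k2^2 psi over width a
    p00 = cmath.cos(k2 * a)
    p01 = cmath.sin(k2 * a) / k2
    p10 = -k2 * cmath.sin(k2 * a)
    p11 = p00
    e1 = cmath.exp(1j * k1 * a)
    # match psi = e^{ik1 x} + r e^{-ik1 x} on the left to t e^{ik1 x} on the right:
    # linear system A [r, t]^T = b
    a11, a12, b1 = p00 - 1j * k1 * p01, -e1, -(p00 + 1j * k1 * p01)
    a21, a22, b2 = p10 - 1j * k1 * p11, -1j * k1 * e1, -(p10 + 1j * k1 * p11)
    det = a11 * a22 - a12 * a21
    t = (a11 * b2 - a21 * b1) / det   # Cramer's rule for the transmitted amplitude
    return abs(t) ** 2
```

For E < V this reproduces the closed-form result T = [1 + V² sinh²(κa) / (4E(V − E))]⁻¹ with κ = √(2(V − E)).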

  13. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  14. Cascade model for fluvial geomorphology

    Science.gov (United States)

    Newman, W. I.; Turcotte, D. L.

    1990-01-01

    Erosional landscapes are generally scale invariant and fractal. Spectral studies provide quantitative confirmation of this statement. Linear theories of erosion will not generate scale-invariant topography. In order to explain the fractal behavior of landscapes, a modified Fourier series has been introduced that is the basis for a renormalization approach. A nonlinear dynamical model has been introduced for the decay of the modified Fourier series coefficients that yields a fractal spectrum. It is argued that a physical basis for this approach is that a fractal (or nearly fractal) distribution of storms (floods) continually renews erosional features on all scales.

  15. Cascading walks model for human mobility patterns.

    Science.gov (United States)

    Han, Xiao-Pu; Wang, Xiang-Wen; Yan, Xiao-Yong; Wang, Bing-Hong

    2015-01-01

    Uncovering the mechanism behind the scaling laws and series of anomalies in human trajectories is of fundamental significance in understanding many spatio-temporal phenomena. Recently, several models, e.g. the exploration-and-return model (Song et al., 2010) and the radiation model for intercity travel (Simini et al., 2012), have been proposed to study the origin of these anomalies and the prediction of human movements. However, an agent-based model that could reproduce most of the empirical observations without a priori assumptions is still lacking. In this paper, considering the empirical findings on the correlations of move lengths and staying times in human trips, we propose a simple model, based mainly on cascading processes, to capture human mobility patterns. In this model, each long-range movement activates a series of shorter movements that are organized by the law of localized exploration and preferential return in a prescribed region. Based on numerical simulations and analytical studies, we show more than five statistical characteristics that are well consistent with the empirical observations, including several types of scaling anomalies and the ultraslow diffusion properties, implying that cascading processes associated with localized exploration and preferential return are indeed key to understanding human mobility activities. Moreover, the model exhibits both the diverse individual mobility and the aggregated scaling of displacements, bridging the micro and macro patterns in human mobility. In summary, our model successfully explains most of the empirical findings and provides a deeper understanding of the emergence of human mobility patterns.

  16. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    … Projection operators are employed for the model reduction or condensation process. Interpolation is then introduced over a user-defined frequency window, which can have real and imaginary boundaries and be quite large. Hermitian…

  17. Advantage of Fast Fourier Interpolation for laser modeling

    International Nuclear Information System (INIS)

    Epatko, I.V.; Serov, R.V.

    2006-01-01

    The abilities of a new algorithm, the two-dimensional Fast Fourier Interpolation (FFI) with magnification factor (zoom) 2^n, whose purpose is to improve the spatial resolution when necessary, are analyzed in detail. The FFI procedure is useful when the diaphragm/aperture size is less than half of the current simulation scale. The computational noise due to the FFI procedure is less than 10^-6. The additional time for FFI is approximately equal to one Fast Fourier Transform execution time. For some applications using the FFI procedure, the execution time decreases by a factor of 10^4 compared with other laser simulation codes. (authors)
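The core idea behind Fourier interpolation, doubling the sampling density by inserting zeros in the middle of the spectrum, can be sketched in a few lines. A naive O(N²) DFT stands in for the FFT here; the sketch assumes an even-length, real, band-limited periodic signal and is not the FFI code itself.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fourier_zoom2(x):
    """Interpolate a real periodic signal onto a 2x finer grid by zero-padding
    the middle of its spectrum; the Nyquist bin is split in half."""
    N = len(x)
    X = dft(x)
    half = N // 2
    Xz = X[:half] + [X[half] / 2] + [0j] * (N - 1) + [X[half] / 2] + X[half + 1:]
    # the factor 2 compensates for the doubled transform length
    return [2 * v.real for v in idft(Xz)]
```

For a signal whose spectrum fits below the Nyquist frequency, the zoomed samples land exactly on the underlying continuous signal.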

  18. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

    Directory of Open Access Journals (Sweden)

    Daniel Arana

    The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys, and it therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids from which users need to interpolate. Yet little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of geoid undulations and consequently of the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of geoid models. The analysis consisted of comparing the interpolation of the MAPGEO2015 program with three interpolation methods: bilinear, cubic spline, and Radial Basis Function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error found in the MAPGEO2015 validation is caused by the use of interpolations in the 5'x5' grid.
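The bilinear interpolator tested in the paper is straightforward to sketch for a regular grid. The grid origin and spacing here are illustrative parameters, not MAPGEO2015's actual layout.

```python
def bilinear(grid, lat0, lon0, dlat, dlon, lat, lon):
    """Bilinear interpolation in a regular grid; grid[i][j] holds the geoid
    undulation at (lat0 + i*dlat, lon0 + j*dlon)."""
    fi = (lat - lat0) / dlat
    fj = (lon - lon0) / dlon
    i = min(int(fi), len(grid) - 2)      # clamp so i+1, j+1 stay in range
    j = min(int(fj), len(grid[0]) - 2)
    u, v = fi - i, fj - j                # fractional position inside the cell
    return ((1 - u) * (1 - v) * grid[i][j] + (1 - u) * v * grid[i][j + 1]
            + u * (1 - v) * grid[i + 1][j] + u * v * grid[i + 1][j + 1])
```

Bilinear interpolation is continuous across cell edges but has discontinuous slope there, which is one source of the residual error the paper quantifies.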

  19. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares the estimation of missing precipitation data through K-nearest-neighbor (KNN) regression with the five different kernel estimations, and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data than the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
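As an illustration of the kernel approach (Epanechnikov case), a Nadaraya-Watson-style weighted estimate from neighbouring gauges might look like the sketch below; the bandwidth and distances are illustrative, and the other four kernels would simply replace the weight function.

```python
def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| < 1, zero outside."""
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def kernel_estimate(values, distances, bandwidth):
    """Kernel-weighted average of neighbouring gauge values; gauges farther
    than the bandwidth receive zero weight."""
    w = [epanechnikov(d / bandwidth) for d in distances]
    s = sum(w)
    if s == 0.0:
        return None   # no gauge inside the bandwidth
    return sum(wi * v for wi, v in zip(w, values)) / s
```

Unlike KNN regression, which weights a fixed number of neighbours, the kernel assigns smoothly decaying weights to all gauges within the bandwidth.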

  20. Comparison of Spatial Interpolation Schemes for Rainfall Data and Application in Hydrological Modeling

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2017-05-01

    The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with the inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, the hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of the rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and the highest correlation with measured values at the daily time scale. The application of the PCRR method is promising because it accounts for multicollinearity among variables.
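A much-simplified sketch of the residual-correction idea follows: fit a regression of rainfall against an auxiliary variable (one predictor here instead of principal components), then interpolate the regression residuals spatially (IDW here) and add them back. The names and the single-predictor setup are illustrative assumptions, not the paper's PCRR.

```python
import math

def fit_line(zs, ps):
    """Least-squares fit p ~ a + b*z (e.g. rainfall against elevation)."""
    n = len(zs)
    mz, mp = sum(zs) / n, sum(ps) / n
    b = (sum((z - mz) * (p - mp) for z, p in zip(zs, ps))
         / sum((z - mz) ** 2 for z in zs))
    return mp - b * mz, b

def regress_residual_correct(stations, target, target_z, power=2.0):
    """Regression estimate at the target plus IDW-interpolated residuals:
    stations is a list of ((x, y), z, p) with position, predictor, rainfall."""
    zs = [z for (_, z, _) in stations]
    ps = [p for (_, _, p) in stations]
    a, b = fit_line(zs, ps)
    num = den = 0.0
    for (xy, z, p) in stations:
        d = math.hypot(xy[0] - target[0], xy[1] - target[1])
        if d == 0.0:
            return p                      # target coincides with a station
        w = d ** -power
        num += w * (p - (a + b * z))      # residual of the regression fit
        den += w
    return a + b * target_z + num / den
```

The residual field captures local effects (e.g. terrain) that the regression trend misses, which is the mechanism by which PCRR removes terrain-induced anomalies.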

  1. Comparison of data-driven and model-driven approaches to brightness temperature diurnal cycle interpolation

    CSIR Research Space (South Africa)

    Van den Bergh, F

    2006-01-01

    This paper presents two new schemes for interpolating missing samples in satellite diurnal temperature cycles (DTCs). The first scheme, referred to here as the cosine model, is an improvement of the model proposed in [2] and combines a cosine…

  2. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    Science.gov (United States)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. the creation of a Digital Terrain Model (DTM). DTMs have numerous applications in science, engineering, design, and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several methods for interpolation, whose results vary with environmental conditions and input data. In this study, the usual interpolation methods, polynomials and Inverse Distance Weighting (IDW), are optimised with Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods for the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller ones. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. The results show that AI methods have a high potential in the interpolation of elevations, and that interpolation based on the IDW method optimised with GA can estimate elevations with high precision.
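As a toy version of coupling GA with IDW, one can evolve the IDW distance exponent against leave-one-out cross-validation error. This is a hedged sketch (tiny population, elitist truncation selection, Gaussian mutation), not the paper's full GA/ANN pipeline.

```python
import math
import random

def idw(stations, target, power):
    """Inverse distance weighted average of station elevations at a target point."""
    num = den = 0.0
    for (x, y), z in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return z            # target coincides with a station
        w = d ** -power
        num += w * z
        den += w
    return num / den

def loo_rmse(stations, power):
    """Leave-one-out RMSE of IDW with a given distance exponent."""
    sq = [(idw(stations[:i] + stations[i + 1:], pt, power) - z) ** 2
          for i, (pt, z) in enumerate(stations)]
    return math.sqrt(sum(sq) / len(sq))

def ga_tune_power(stations, generations=20, pop_size=12, seed=1):
    """Toy GA over the IDW exponent: keep the best half, mutate to refill."""
    rng = random.Random(seed)
    pop = [0.5 * (i + 1) for i in range(pop_size)]   # deterministic start: 0.5 .. 6.0
    for _ in range(generations):
        pop.sort(key=lambda p: loo_rmse(stations, p))
        parents = pop[:pop_size // 2]                # elitist truncation selection
        pop = parents + [max(0.1, rng.choice(parents) + rng.gauss(0.0, 0.3))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=lambda p: loo_rmse(stations, p))
```

Because the best individual always survives, the tuned exponent can never cross-validate worse than the conventional default of 2.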

  3. A cascading failure model for analyzing railway accident causation

    Science.gov (United States)

    Liu, Jin-Tao; Li, Ke-Ping

    2018-01-01

    In this paper, a new cascading failure model is proposed for quantitatively analyzing railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of a railway accident more accurately than previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can help reveal the latent rules of accident causation and thereby reduce the occurrence of railway accidents.
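A minimal load-redistribution cascade of the kind described might be sketched as follows. The node capacities, causal edge weights, and proportional redistribution rule here are illustrative assumptions, not the paper's calibrated model.

```python
def simulate_cascade(loads, capacity, weights, start):
    """Propagate failures through a causal network: a failed node's load is
    redistributed to its not-yet-failed successors in proportion to the causal
    edge weights; any successor pushed over its capacity fails in turn.
    weights maps directed edges (n, m) to causal strengths."""
    failed = set()
    frontier = [start]
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        succ = [m for (n, m) in weights if n == node and m not in failed]
        total = sum(weights[(node, m)] for m in succ)
        for m in succ:
            loads[m] += loads[node] * weights[(node, m)] / total
            if loads[m] > capacity[m]:
                frontier.append(m)   # this cause is now triggered
    return failed
```

Raising a node's capacity models a prevention measure: with enough slack, the cascade is absorbed at that node instead of propagating.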

  4. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    Science.gov (United States)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated to the sampling interval of the observational data when they are used in precise point positioning. However, due to the white noise present in atomic clocks, a residual component of such noise inevitably resides within the observations when clock errors are interpolated, and this noise affects the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
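For context, the baseline operation, linearly interpolating a tabulated clock product to an observation epoch, is trivial to sketch (the epoch spacing and offset values below are illustrative). It is the white-noise residual left over by exactly this step that the proposed stochastic model weights.

```python
import bisect

def interp_clock(epochs, offsets, t):
    """Linearly interpolate satellite clock offsets (seconds), tabulated at the
    product's epochs (seconds, ascending), to an observation epoch t."""
    i = bisect.bisect_right(epochs, t) - 1
    i = max(0, min(i, len(epochs) - 2))        # clamp to a valid interval
    f = (t - epochs[i]) / (epochs[i + 1] - epochs[i])
    return (1.0 - f) * offsets[i] + f * offsets[i + 1]
```

With a 900 s product the interval is three times longer than with a 300 s product, so more unmodeled clock noise accumulates between tabulated epochs, consistent with the larger gains reported for the 900 s case.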

  5. Application of an enhanced cross-section interpolation model for highly poisoned LWR core calculations

    International Nuclear Information System (INIS)

    Palau, J.M.; Cathalau, S.; Hudelot, J.P.; Barran, F.; Bellanger, V.; Magnaud, C.; Moreau, F.

    2011-01-01

    Burnable poisons are extensively used by Light Water Reactor designers in order to preserve the fuel reactivity potential and increase the cycle length (without increasing the uranium enrichment). In the industrial two-step (2D assembly transport, 3D core diffusion) calculation schemes, these heterogeneities lead to strong flux and cross-section perturbations that have to be taken into account in the final 3D burn-up calculations. This paper presents the application of an enhanced cross-section interpolation model (implemented in the French CRONOS2 code) to highly poisoned LWR depleted-core calculations. The principle is to use the absorber (or actinide) concentrations as the new interpolation parameters instead of the standard local burnup/fluence parameters. By comparing the standard (burnup/fluence) and new (concentration) interpolation models, and using the lattice transport code APOLLO2 as a numerical reference, it is shown that the prediction of reactivity and local reaction rates for a 2x2 LWR assembly configuration (slab geometry) is significantly improved with the concentration interpolation model. Gains on reactivity and local power predictions (respectively more than 1000 pcm and a 20% discrepancy reduction compared to the reference APOLLO2 scheme) are obtained with this model. In particular, when epithermal absorbers are inserted close to a thermal poison, the 'shadowing' ('screening') spectral effects occurring during control operations are modeled much more correctly by concentration parameters. Through this illustrative example it is highlighted that attention has to be paid to the choice of cross-section interpolation parameters (burnup 'indicator') in core calculations with few energy groups and variable geometries along the irradiation cycle. Actually, this new model could be advantageously applied to steady-state and transient LWR heterogeneous core computational analysis dealing with strong spectral-history variations under

  6. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    Science.gov (United States)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the Earth's interior electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from the trench to the mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting seafloor stations in particular, due to the strong conductivity gradients nearby. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, of the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for the interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, in which the eight nearest electric-field estimates are combined in a weighted average with weights determined by the technique. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new
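As a rough illustration of the scheme described in this record, the sketch below computes tri-linear weights for the eight corners of a grid cell and optionally rescales them by corner conductivities. The function names and the sigma/sigma-mean rescaling are hypothetical stand-ins for exposition, not MOD3DEM's actual implementation.

```python
import numpy as np

def trilinear_weights(x, y, z):
    """Weights of the eight cell corners for a point (x, y, z)
    given in local cell coordinates within [0, 1]^3."""
    w = np.empty(8)
    corners = ((a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
    for i, (cx, cy, cz) in enumerate(corners):
        w[i] = ((x if cx else 1 - x)
                * (y if cy else 1 - y)
                * (z if cz else 1 - z))
    return w

def interpolate_field(corner_values, x, y, z, corner_sigma=None):
    """Weighted average of the eight corner field values.

    If corner conductivities are supplied, the weights are rescaled by
    sigma_corner / sigma_mean and renormalized -- a toy stand-in for the
    cross-boundary conductivity-ratio correction the abstract describes."""
    w = trilinear_weights(x, y, z)
    if corner_sigma is not None:
        w = w * (corner_sigma / corner_sigma.mean())
        w = w / w.sum()  # renormalize so the weights sum to one
    return float(w @ corner_values)
```

With uniform conductivities the correction reduces to plain tri-linear interpolation, which is the sanity check one would expect of any such reweighting.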

  7. Different methods for spatial interpolation of rainfall data for operational hydrology and hydrological modeling at watershed scale: a review

    Directory of Open Access Journals (Sweden)

    Ly, S.

    2013-01-01

    Full Text Available Watershed management and hydrological modeling require precipitation data, a quantity of central importance that is often measured using raingages or weather stations. Hydrological models often require a preliminary spatial interpolation as part of the modeling process. The success of spatial interpolation varies according to the type of model chosen, its mode of geographical management and the resolution used. The quality of a result is determined by the quality of the continuous spatial rainfall field, which follows from the interpolation method used. The objective of this article is to review the existing methods for interpolation of rainfall data that are usually required in hydrological modeling. We review the basis for the application of certain common methods and geostatistical approaches used in the interpolation of rainfall. Previous studies have highlighted the need for new research to investigate ways of improving the quality of rainfall data and, ultimately, the quality of hydrological modeling.
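As a minimal, self-contained example of one of the common deterministic methods such reviews cover, Inverse Distance Weighting (IDW) can be sketched as follows (2-D gauge coordinates assumed; names are illustrative):

```python
import numpy as np

def idw(xy_gauges, values, xy_target, power=2.0):
    """Inverse Distance Weighting: estimate rainfall at a target point
    from surrounding gauges, with weights proportional to 1/distance**power."""
    d = np.linalg.norm(xy_gauges - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(w @ values / w.sum())
```

By symmetry, a target midway between two gauges receives the average of their values; at a gauge location the method is an exact interpolator.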

  8. Digital elevation modeling via curvature interpolation for lidar data

    Science.gov (United States)

    Digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction for the scattered data, which is an ill-p...

  9. Energy Cascade in Fermi-Pasta Models

    Science.gov (United States)

    Ponno, A.; Bambusi, D.

    We show that, for long-wavelength initial conditions, the FPU dynamics is described, up to a certain time, by two KdV-like equations, which represent the resonant Hamiltonian normal form of the system. The energy cascade taking place in the system is then quantitatively characterized by arguments of dimensional analysis based on such equations.
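    For orientation, the Korteweg-de Vries equation in one standard normalization is shown below; the two resonant normal-form equations of the FPU system are "KdV-like" variants of it whose coefficients depend on the lattice parameters and are not given in the abstract.

```latex
% KdV in a standard normalization; the FPU normal-form coefficients differ
\partial_t u + 6\,u\,\partial_x u + \partial_x^3 u = 0
```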

  10. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Because of the value of 3D printing, 3D printing models are often attacked by hackers and distributed without the agreement of the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying, and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space, determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the spline curve are encrypted with a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. Experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is applicable to the various formats of 3D printing models, and its results are superior in both quality and security to those of previous methods.

  11. Modeling of cascade and sub-cascade formation at high pka energies in irradiated fusion structural materials

    International Nuclear Information System (INIS)

    Ryazanov, A.; Metelkin, E.V.; Semenov, E.A.

    2007-01-01

    Full text of publication follows: A new theoretical model is developed for the investigation of cascade and sub-cascade formation in fusion structural materials under fast neutron irradiation at high primary knock-on atom (PKA) energies. Under 14 MeV neutron irradiation, especially of light fusion structural materials such as Be, C and SiC, PKAs will have energies up to 1 MeV. At such high energies it is very difficult to use Monte Carlo or molecular dynamics simulations. The developed model is based on an analytical treatment of the elastic collisions between displaced moving atoms in the atomic cascades produced by PKAs carrying kinetic energy obtained from fast neutrons. The Thomas-Fermi interaction potential is used to describe the elastic collisions between moving atoms. The suggested model also takes into account the electronic losses of moving atoms between elastic collisions. A self-consistent criterion for sub-cascade formation is suggested, based on comparing the mean distance between two consecutive PKA collisions with the size of the sub-cascade produced by the PKA. Analytical relations for the most important characteristics of cascades and sub-cascades are determined, including the average number of sub-cascades per PKA, the distance between sub-cascades, and the average cascade and sub-cascade sizes, all as functions of PKA energy. The developed model allows determining the total numbers, the size distribution functions, and the generation rates of cascades and sub-cascades for different fusion neutron energy spectra. Based on the developed model, numerical calculations of the main characteristics of cascades and sub-cascades in different fusion structural materials are performed using the neutron flux and PKA energy spectra of the fusion reactors ITER and DEMO.
The main characteristics for cascade and sub-cascade formation are calculated here for the

  12. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    ... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

  13. Effect of raingage density, position and interpolation on rainfall-discharge modelling

    Science.gov (United States)

    Ly, S.; Sohier, C.; Charles, C.; Degré, A.

    2012-04-01

    Precipitation, traditionally observed using raingages or weather stations, is one of the main parameters that has a direct impact on runoff production. Precipitation data require a preliminary spatial interpolation prior to hydrological modeling. The accuracy of the modelling result depends on the accuracy of the interpolated spatial rainfall, which differs between interpolation methods and is usually determined by cross-validation. The objective of this study is to assess different interpolation methods for daily rainfall at the watershed scale through hydrological modelling, and to identify the methods that provide a good long-term simulation. Four versions of geostatistics: Ordinary Kriging (ORK), Universal Kriging (UNK), Kriging with External Drift (KED) and Ordinary Cokriging (OCK), and two types of deterministic methods: Thiessen polygons (THI) and Inverse Distance Weighting (IDW), are used to produce 30 years of daily rainfall inputs for a distributed physically-based hydrological model (EPIC-GRID). This work is conducted in the Ourthe and Ambleve nested catchments, located in the Ardennes hilly landscape in the Walloon region, Belgium. The total catchment area is 2908 km2, and elevation ranges from 67 to 693 m. The multivariate geostatistical methods (KED and OCK) incorporate elevation as external data to improve the rainfall prediction. This work also aims at analysing the effect of the raingage density and position used for interpolation on the modelled stream flow, to gain insight into the capabilities and limitations of the geostatistical methods. The number of raingages varies from 70, 60, 50, 40, 30, 20, 8 down to 4 stations located in and around the catchment area. In the latter case, we try different positions: around the catchment, and covering only a part of the catchment. The results show that a simple method like THI fails to capture the rainfall and to produce

  14. Multivariate interpolation

    Directory of Open Access Journals (Sweden)

    Pakhnutov I.A.

    2017-04-01

    Full Text Available The paper deals with iterative interpolation methods in the form of recursive procedures defined by simple basis functions (the interpolation basis), not necessarily real-valued. These basis functions are of essentially arbitrary type, chosen at the user's discretion. The studied interpolant construction is notably versatile: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis interpolation functions is as wide as possible, since it is subject only to inessential restrictions. In particular, the interpolation method considered coincides with traditional polynomial interpolation (mimicking the Lagrange method) in the real one-dimensional case, and with rational, exponential, etc. interpolation in other cases. The interpolation, as an iterative process, is fairly flexible and allows one procedure to change the type of interpolation depending on the node number in a given set. Linear interpolation basis options (and perhaps some nonlinear ones) allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices; the interpolated data can also be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.
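In the real one-dimensional case, an iterative interpolation of this kind reduces to classical polynomial interpolation. Neville's scheme below is one standard recursive formulation of that reduction, shown for orientation rather than as the paper's exact procedure:

```python
def neville(xs, ys, x):
    """Neville's recursive scheme: repeatedly combine neighbouring
    interpolants until only the value at x of the degree-(n-1)
    polynomial through all nodes remains.  In 1-D this reproduces
    Lagrange interpolation."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            # P_{i,level}(x) combines P_{i,level-1} and P_{i+1,level-1}
            p[i] = ((x - xs[i + level]) * p[i] - (x - xs[i]) * p[i + 1]) \
                   / (xs[i] - xs[i + level])
    return p[0]
```

For the nodes (0, 0), (1, 1), (2, 4), which lie on y = x^2, the scheme extrapolates x = 3 to 9, as the unique quadratic through the nodes requires.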

  15. Identification of cascade water tanks using a PWARX model

    Science.gov (United States)

    Mattsson, Per; Zachariah, Dave; Stoica, Petre

    2018-06-01

    In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
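A toy sketch of the piecewise-affine ARX idea follows. The two-regime threshold partition and the parameter names are purely illustrative; the paper estimates both the partition and the parameters from data with a likelihood-based method.

```python
import numpy as np

def pwarx_predict(theta_low, theta_high, threshold, phi):
    """One-step PWARX prediction.  The regressor phi = [y_{t-1}, u_{t-1}, 1]
    selects one of two affine models depending on the past output level,
    a crude two-regime partition capturing e.g. tank saturation."""
    theta = theta_low if phi[0] <= threshold else theta_high
    return float(theta @ phi)
```

With `theta_low = [0.9, 0.1, 0]` and `theta_high = [0.5, 0.1, 2]`, the predictor smoothly tracks low levels and switches to a saturating affine law above the threshold.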

  16. A thermal modelling of displacement cascades in uranium dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Martin, G., E-mail: guillaume.martin@cea.fr [CEA – DEN/DEC/SESC/LLCC, Bât. 352, 13108 Saint-Paul-Lez-Durance Cedex (France); Garcia, P.; Sabathier, C. [CEA – DEN/DEC/SESC/LLCC, Bât. 352, 13108 Saint-Paul-Lez-Durance Cedex (France); Devynck, F.; Krack, M. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Maillard, S. [CEA – DEN/DEC/SESC/LLCC, Bât. 352, 13108 Saint-Paul-Lez-Durance Cedex (France)

    2014-05-01

    The space- and time-dependent temperature distribution in uranium dioxide during displacement cascades was studied using classical molecular dynamics (MD). The energy of each simulated radiation event ranged between 0.2 keV and 20 keV, in cells at initial temperatures of 700 K or 1400 K. Spheres within which atomic velocities were rescaled (thermal spikes) were also simulated by MD to mimic the thermal excitation induced by displacement cascades. Equipartition of energy was shown to occur in displacement cascades, half of the kinetic energy of the primary knock-on atom being converted after a few tenths of a picosecond into potential energy. The kinetic and potential parts of the system energy, however, undergo little variation during dedicated thermal spike simulations. This is probably due to the velocity rescaling process, which in that case affects a large number of atoms and drives the system away from dynamical equilibrium. This result calls into question the MD simulations of thermal spikes carried out up to now (early 2014). The thermal history of cascades was compared to the heat-equation solution for a point thermal excitation in UO{sub 2}. The maximum volume brought to a temperature above the melting temperature during the simulated cascade events is well reproduced by this simple model. This volume ultimately constitutes a relevant estimate of the volume affected by a displacement cascade in UO{sub 2}. This definition of the cascade volume could also make sense in other materials, such as iron.

  17. Subcellular localization for Gram positive and Gram negative bacterial proteins using linear interpolation smoothing model.

    Science.gov (United States)

    Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2015-12-07

    Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and aids drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied to predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. The approach has been evaluated by predicting the subcellular localizations of Gram-positive and Gram-negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
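The underlying smoothing idea can be sketched as a Jelinek-Mercer style linear interpolation between maximum-likelihood dependency models of different orders. The sketch below works on generic character sequences; the paper's feature extraction is richer, and the fixed weight `lam` is a hypothetical choice.

```python
from collections import Counter

def interpolated_bigram_prob(seq_corpus, lam=0.7):
    """Build unigram and bigram maximum-likelihood profiles from training
    sequences and return a function P(b | a) smoothed by linear
    interpolation:  P = lam * P_ML(b | a) + (1 - lam) * P_ML(b)."""
    uni, bi = Counter(), Counter()
    total = 0
    for seq in seq_corpus:
        uni.update(seq)
        total += len(seq)
        bi.update(zip(seq, seq[1:]))   # adjacent-symbol dependency counts

    def prob(a, b):
        p_bi = bi[(a, b)] / uni[a] if uni[a] else 0.0
        p_uni = uni[b] / total
        return lam * p_bi + (1 - lam) * p_uni

    return prob
```

Because the unigram term never vanishes for symbols seen in training, the interpolated estimate stays nonzero even for unseen bigrams, which is exactly the smoothing benefit the abstract alludes to.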

  18. Damped trophic cascades driven by fishing in model marine ecosystems

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Pedersen, Martin

    2010-01-01

    The largest perturbation on upper trophic levels of many marine ecosystems stems from fishing. The reaction of the ecosystem goes beyond the trophic levels directly targeted by the fishery. This reaction has been described either as a change in slope of the overall size spectrum or as a trophic cascade triggered by the removal of top predators. Here we use a novel size- and trait-based model to explore how marine ecosystems might react to perturbations from different types of fishing pressure. The model explicitly resolves the whole life history of fish, from larvae to adults. The results show that fishing does not change the overall slope of the size spectrum, but depletes the largest individuals and induces trophic cascades. A trophic cascade can propagate both up and down in trophic levels driven by a combination of changes in predation mortality and food limitation. The cascade is damped...

  19. A local effect model-based interpolation framework for experimental nanoparticle radiosensitisation data

    OpenAIRE

    Brown, Jeremy M. C.; Currell, Fred J.

    2017-01-01

    A local effect model (LEM)-based framework capable of interpolating nanoparticle-enhanced photon-irradiated clonogenic cell survival fraction measurements as a function of nanoparticle concentration was developed and experimentally benchmarked for gold nanoparticle (AuNP)-doped bovine aortic endothelial cells (BAECs) under superficial kilovoltage X-ray irradiation. For three different superficial kilovoltage X-ray spectra, the BAEC survival fraction response was predicted for two different Au...

  20. Effect of the precipitation interpolation method on the performance of a snowmelt runoff model

    Science.gov (United States)

    Jacquin, Alexandra

    2014-05-01

    Uncertainties on the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater in the case of mountain catchments, because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual-type snowmelt runoff models. The HBV Light model, version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied to the Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment area is 2110 [km2] and elevation ranges from 950 [m.a.s.l.] to 5930 [m.a.s.l.]. The local meteorological network is sparse, with all precipitation gauges located below 3000 [m.a.s.l.]. Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. The interpolation methods applied include kriging with external drift (KED), the optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic function fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for the existence of a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimations than KED in this region.
Using input precipitation obtained by each method
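A crude stand-in for the elevation-aware methods (KED, OIM) is to fit a linear precipitation-elevation trend and interpolate the residuals, here with IDW; true KED solves the trend and the spatial weights jointly inside the kriging system. All names below are illustrative.

```python
import numpy as np

def drift_interpolate(xy, z, p, xy0, z0, power=2.0):
    """Simplified external-drift interpolation:
    1) fit a linear trend of precipitation p on elevation z,
    2) IDW-interpolate the detrended residuals to the target point,
    3) add back the trend evaluated at the target elevation z0."""
    a, b = np.polyfit(z, p, 1)            # trend: p ~ a*z + b
    resid = p - (a * z + b)
    d = np.linalg.norm(xy - xy0, axis=1)
    if np.any(d == 0):                    # target coincides with a gauge
        r0 = resid[np.argmin(d)]
    else:
        w = 1.0 / d**power
        r0 = w @ resid / w.sum()
    return float(a * z0 + b + r0)
```

When the gauge data follow the elevation trend exactly, the residuals vanish and the method extrapolates precipitation to ungauged high-elevation zones purely from the trend, which is why such methods reduce the bias that TP, MFF and IDW exhibit in mountain catchments.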

  1. An Online Method for Interpolating Linear Parametric Reduced-Order Models

    KAUST Repository

    Amsallem, David; Farhat, Charbel

    2011-01-01

    A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.

  2. Modeling of cascade and sub-cascade formation at high PKA energies in irradiated fusion structural materials

    International Nuclear Information System (INIS)

    Ryazanov, A.I.; Metelkin, E.V.; Semenov, E.V.

    2009-01-01

    A new theoretical model is developed for the investigation of cascade and sub-cascade formation in fusion structural materials under fast neutron irradiation at high primary knock-on atom energies. Light fusion structural materials such as Be, C and SiC under 14 MeV neutron irradiation in a fusion reactor will have primary knock-on atoms with energies up to 1 MeV. At such high energies it is very difficult to use Monte Carlo or molecular dynamics simulations [H.L. Heinisch, B.N. Singh, Philos. Mag. A67 (1993) 407; H.L. Heinisch, B.N. Singh, J. Nucl. Mater. 251 (1997) 77]. The developed model is based on an analytical treatment of the elastic collisions between crystal lattice atoms and the displaced moving atoms produced by primary knock-on atoms carrying kinetic energy obtained from fast neutrons. The Thomas-Fermi interaction potential is used for the description of these elastic atomic collisions. The suggested model also takes into account the electronic losses of moving atoms between elastic collisions. A self-consistent criterion for sub-cascade formation is suggested, based on comparing the mean distance between consecutive collisions of primary knock-on atoms with target atoms and the size of the sub-cascade produced by the secondary knock-on atoms created in such collisions. Analytical relations for the most important characteristics of cascades and sub-cascades are determined, including the average number of sub-cascades per primary knock-on atom as a function of its energy, the distance between sub-cascades, and the average cascade and sub-cascade sizes. The developed model allows determining the total numbers, the size distribution functions, and the generation rates of cascades and sub-cascades for different fusion neutron energy spectra.
On the basis of this developed model the numerical calculations for main characteristics of cascades and sub-cascades

  3. Interpolation Routines Assessment in ALS-Derived Digital Elevation Models for Forestry Applications

    Directory of Open Access Journals (Sweden)

    Antonio Luis Montealegre

    2015-07-01

    Full Text Available Airborne Laser Scanning (ALS) is capable of estimating a variety of forest parameters using different metrics extracted from the normalized heights of the point cloud using a Digital Elevation Model (DEM). In this study, six interpolation routines were tested over a range of land cover and terrain roughness in order to generate a collection of DEMs with spatial resolutions of 1 and 2 m. The accuracy of the DEMs was assessed twice: first using a test sample extracted from the ALS point cloud, and second using a set of 55 ground control points collected with a high-precision Global Positioning System (GPS). The effects of terrain slope, land cover, ground point density and pulse penetration on the interpolation error were examined by stratifying the study area with these variables. In addition, a Classification and Regression Tree (CART) analysis allowed the development of a prediction uncertainty map to identify areas in which DEMs and Airborne Light Detection and Ranging (LiDAR) derived products may be of low quality. The Triangulated Irregular Network (TIN) to raster interpolation method produced the best result in the validation process with the training data set, while the Inverse Distance Weighted (IDW) routine was the best in the validation with GPS (RMSE of 2.68 cm and RMSE of 37.10 cm, respectively).

  4. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40 km2 agricultural watershed of the Seine basin (France). The inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration set (55%), a training set (27%), and a test set (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each is then used to interpolate the hydraulic head distribution on a (50×50) m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid; it shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.

  5. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold...

  6. INCAS: an analytical model to describe displacement cascades

    Energy Technology Data Exchange (ETDEWEB)

    Jumel, Stephanie E-mail: stephanie.jumel@edf.fr; Claude Van-Duysen, Jean E-mail: jean-claude.van-duysen@edf.fr

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on dilute alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that the INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  9. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are presented.

  10. Investigation of Back-off Based Interpolation Between Recurrent Neural Network and N-gram Language Models (Author’s Manuscript)

    Science.gov (United States)

    2016-02-11

    experiments were then conducted on the same BABEL task. The acoustic models were trained on 46 hours of speech. Tandem and hybrid DNN systems were...interpolation gave a comparable WER score of 46.9%. A further linear interpolation using equation (11) between the back-off based interpolated LM and the

  11. Digital elevation model production from scanned topographic contour maps via thin plate spline interpolation

    International Nuclear Information System (INIS)

    Soycan, Arzu; Soycan, Metin

    2009-01-01

    GIS (Geographical Information System) is one of the most striking innovations supplied to mapping users by developing computer and software technology. GIS is a very effective tool that can visually combine geographical and non-geographical data, recording both to allow interpretation and analysis. A DEM (Digital Elevation Model) is an essential component of a GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM through a manual digitizing or vectorization process applied to the contour polylines. The aim of this study is to examine DEM accuracies obtained from TMs as a function of the number of sampling points and the grid size. For this purpose, the contours of several 1/1000 scaled scanned topographical maps were vectorized. Different DEMs of the relevant area were created using several datasets with different numbers of sampling points. We focus on DEM creation from contour lines by gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of TPS interpolation are given. In the test study, the results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement of high-accuracy DEMs (a few decimeters) in GIS, urban planning, surveying engineering and other applications. (author)
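
    The TPS surface-fitting step reviewed above can be sketched compactly. Below is a minimal thin plate spline interpolator (basis phi(r) = r^2 log r plus an affine term); the sample points stand in for digitized contour vertices and are not data from the study.

```python
import numpy as np

def _tps_basis(r):
    # phi(r) = r^2 * log(r), extended continuously with phi(0) = 0
    return r**2 * np.log(np.where(r > 0.0, r, 1.0))

def tps_fit(pts, z):
    """Solve [K P; P^T 0][w; a] = [z; 0] for TPS weights and affine part."""
    n = len(pts)
    K = _tps_basis(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, q):
    K = _tps_basis(np.linalg.norm(q[:, None] - pts[None, :], axis=-1))
    return K @ w + a[0] + q @ a[1:]

# made-up contour vertices (x, y) with elevations z
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, (25, 2))
z = 50.0 + 0.1 * pts[:, 0] + 5.0 * np.sin(pts[:, 1] / 2.0)
w, a = tps_fit(pts, z)
grid = tps_eval(pts, w, a, np.array([[2.5, 2.5], [7.5, 7.5]]))
```

    Thin plate splines interpolate exactly at the sample points, so DEM error in such a study comes from where the contour vertices are, not from the fit itself.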

  12. Cascading failures in interdependent systems under a flow redistribution model

    Science.gov (United States)

    Zhang, Yingrui; Arenas, Alex; Yağan, Osman

    2018-02-01

    Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_A,i, C_A,i} and {L_B,i, C_B,i} for i = 1, ..., n, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1 - a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction p1 of lines in A and a fraction p2 of lines in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness depends tightly on the coupling coefficients a and b, and robustness is in general maximized at non-trivial values of a and b; (iii) unlike in most existing models, interdependence has a multifaceted impact on system robustness, in that interdependency can lead to improved robustness for each individual network.
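
    A plain reading of this redistribution rule is easy to prototype. In the sketch below, loads, capacities and the attack set are invented; shed load is split equally among functional lines, with fraction a (or b) crossing between the systems as described.

```python
import numpy as np

def run_cascade(LA, CA, LB, CB, a, b, attack_A=(), attack_B=()):
    """Equal-split flow redistribution between coupled systems A and B (sketch)."""
    LA, LB = np.asarray(LA, float).copy(), np.asarray(LB, float).copy()
    CA, CB = np.asarray(CA, float), np.asarray(CB, float)
    aliveA, aliveB = np.ones(LA.size, bool), np.ones(LB.size, bool)
    newA, newB = list(attack_A), list(attack_B)
    while newA or newB:
        shedA, shedB = LA[newA].sum(), LB[newB].sum()  # load freed by failures
        aliveA[newA] = False
        aliveB[newB] = False
        # fraction a of A's shed load crosses to B, (1 - a) stays in A;
        # b plays the symmetric role for failures in B
        if aliveA.any():
            LA[aliveA] += ((1 - a) * shedA + b * shedB) / aliveA.sum()
        if aliveB.any():
            LB[aliveB] += (a * shedA + (1 - b) * shedB) / aliveB.sum()
        newA = list(np.flatnonzero(aliveA & (LA > CA)))  # newly overloaded lines
        newB = list(np.flatnonzero(aliveB & (LB > CB)))
    return aliveA, aliveB

LA = [1.0] * 4; CA = [1.6] * 4; LB = [1.0] * 4
aliveA1, aliveB1 = run_cascade(LA, CA, LB, [1.2] * 4, a=0.5, b=0.5, attack_A=[0])
aliveA2, aliveB2 = run_cascade(LA, CA, LB, [1.1] * 4, a=0.5, b=0.5, attack_A=[0])
```

    With these invented numbers, the same single-line attack is absorbed when B's capacity margin is 20% but collapses both systems when it is 10%, illustrating the abrupt, large-scale cascades the abstract describes.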

  13. Development of high fidelity soot aerosol dynamics models using method of moments with interpolative closure

    KAUST Repository

    Roy, Subrata P.

    2014-01-28

    The method of moments with interpolative closure (MOMIC) for soot formation and growth provides a detailed modeling framework maintaining a good balance in generality, accuracy, robustness, and computational efficiency. This study presents several computational issues in the development and implementation of the MOMIC-based soot modeling for direct numerical simulations (DNS). The issues of concern include a wide dynamic range of numbers, choice of normalization, high effective Schmidt number of soot particles, and realizability of the soot particle size distribution function (PSDF). These problems are not unique to DNS, but they are often exacerbated by the high-order numerical schemes used in DNS. Four specific issues are discussed in this article: the treatment of soot diffusion, choice of interpolation scheme for MOMIC, an approach to deal with strongly oxidizing environments, and realizability of the PSDF. General, robust, and stable approaches are sought to address these issues, minimizing the use of ad hoc treatments such as clipping. The solutions proposed and demonstrated here are being applied to generate new physical insight into complex turbulence-chemistry-soot-radiation interactions in turbulent reacting flows using DNS. © 2014 Copyright Taylor and Francis Group, LLC.
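
    The core of MOMIC is obtaining fractional-order moments by Lagrange interpolation among the logarithms of the whole-order moments. A minimal sketch, tested on a lognormal distribution whose log-moments are exactly quadratic in the order (so three whole moments interpolate it exactly):

```python
import numpy as np

def fractional_moment(whole_moments, p):
    """Lagrange-interpolate ln(M_r) at whole orders r = 0, 1, ... to order p."""
    r = np.arange(len(whole_moments), dtype=float)
    logm = np.log(whole_moments)
    logp = 0.0
    for i in range(r.size):
        w = 1.0
        for j in range(r.size):
            if j != i:
                w *= (p - r[j]) / (r[i] - r[j])
        logp += w * logm[i]
    return np.exp(logp)

# lognormal size distribution: ln M_r = r*mu + r^2*sig^2/2, quadratic in r
mu, sig = 1.0, 0.5
M = [np.exp(r * mu + r**2 * sig**2 / 2) for r in range(3)]
M_half = fractional_moment(M, 0.5)   # fractional moment, e.g. for rate closures
exact = np.exp(0.5 * mu + 0.25 * sig**2 / 2)
```

    Interpolating in log space keeps the result positive, one of the realizability concerns the article discusses.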

  14. Resistor mesh model of a spherical head: part 1: applications to scalp potential interpolation.

    Science.gov (United States)

    Chauveau, N; Morucci, J P; Franceries, X; Celsis, P; Rigaud, B

    2005-11-01

    A resistor mesh model (RMM) has been implemented to describe the electrical properties of the head and the configuration of the intracerebral current sources by simulation of forward and inverse problems in electroencephalogram/event related potential (EEG/ERP) studies. For this study, the RMM representing the three basic tissues of the human head (brain, skull and scalp) was superimposed on a spherical volume mimicking the head volume: it included 43 102 resistances and 14 123 nodes. The validation was performed with reference to the analytical model by consideration of a set of four dipoles close to the cortex. Using the RMM and the chosen dipoles, four distinct families of interpolation technique (nearest neighbour, polynomial, splines and lead fields) were tested and compared so that the scalp potentials could be recovered from the electrode potentials. The 3D spline interpolation and the inverse forward technique (IFT) gave the best results. The IFT is very easy to use when the lead-field matrix between scalp electrodes and cortex nodes has been calculated. By simple application of the Moore-Penrose pseudo inverse matrix to the electrode cap potentials, a set of current sources on the cortex is obtained. Then, the forward problem using these cortex sources renders all the scalp potentials.
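
    The IFT amounts to applying the Moore-Penrose pseudoinverse of the electrode lead-field matrix to the cap potentials and then solving the forward problem from the recovered cortex sources. A toy sketch, with random matrices standing in for the RMM-derived lead fields:

```python
import numpy as np

rng = np.random.default_rng(42)
n_electrodes, n_cortex, n_scalp = 32, 20, 500

G_cap = rng.normal(size=(n_electrodes, n_cortex))   # lead field: cortex -> electrodes
G_scalp = rng.normal(size=(n_scalp, n_cortex))      # lead field: cortex -> full scalp

s_true = rng.normal(size=n_cortex)                  # cortex source amplitudes
v_cap = G_cap @ s_true                              # measured electrode potentials

s_hat = np.linalg.pinv(G_cap) @ v_cap               # inverse step (pseudoinverse)
v_scalp = G_scalp @ s_hat                           # forward step: all scalp nodes
```

    With more electrodes than cortex sources and a full-rank lead field, the pseudoinverse recovers the sources exactly; in practice regularization is needed because real lead fields are ill-conditioned.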

  15. Shape determinative slice localization for patient-specific masseter modeling using shape-based interpolation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering (Singapore); Biomedical Imaging Lab., Agency for Science Technology and Research (Singapore); Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering (Singapore); Dept. of Preventive Dentistry, National Univ. of Singapore (Singapore); Ong, S.H. [Dept. of Electrical and Computer Engineering, National Univ. of Singapore (Singapore); Div. of Bioengineering, National Univ. of Singapore (Singapore); Liu, J.; Nowinski, W.L. [Biomedical Imaging Lab., Agency for Science Technology and Research (Singapore); Goh, P.S. [Dept. of Diagnostic Radiology, National Univ. of Singapore (Singapore)

    2007-06-15

    The masseter plays a critical role in the mastication system. A hybrid shape-based interpolation method is used to build the masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, on which clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate the candidates for determinative slices, and the fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work, and the average overlap index (κ) achieved is 85.2%. This indicates that there is good agreement between the models and the manual contour tracings. In addition, the time taken is significantly less than that needed to manually segment all the slices. (orig.)

  16. Shape determinative slice localization for patient-specific masseter modeling using shape-based interpolation

    International Nuclear Information System (INIS)

    Ng, H.P.; Foong, K.W.C.; Ong, S.H.; Liu, J.; Nowinski, W.L.; Goh, P.S.

    2007-01-01

    The masseter plays a critical role in the mastication system. A hybrid shape-based interpolation method is used to build the masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, on which clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate the candidates for determinative slices, and the fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work, and the average overlap index (κ) achieved is 85.2%. This indicates that there is good agreement between the models and the manual contour tracings. In addition, the time taken is significantly less than that needed to manually segment all the slices. (orig.)

  17. Period adding cascades: experiment and modeling in air bubbling.

    Science.gov (United States)

    Pereira, Felipe Augusto Cardoso; Colli, Eduardo; Sartorelli, José Carlos

    2012-03-01

    Period adding cascades have been observed experimentally/numerically in the dynamics of neurons and pancreatic cells, lasers, electric circuits, chemical reactions, oceanic internal waves, and also in air bubbling. We show that the period adding cascades appearing in bubbling from a nozzle submerged in a viscous liquid can be reproduced by a simple model, based on some hydrodynamical principles, dealing with the time evolution of two variables, bubble position and air-chamber pressure, through a system of differential equations with a detachment rule based on force balance. The model further reduces to an iterated one-dimensional map giving the pressures at the detachments, where the time between bubbles comes out as an observable of the dynamics. The model not only shows good agreement with experimental data, but is also able to predict the influence of the main parameters involved, such as the length of the hose connecting the air supplier with the needle, the needle radius and the needle length.

  18. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation time, either graphics processing unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that image deconvolution can be efficiently implemented with the continuous GRBF model, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
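
    The simplification rests on the fact that the convolution of two Gaussians is a Gaussian whose widths add in quadrature, so deconvolution reduces to refitting control-point weights against the broadened basis. A 1D sketch (centres, widths and weights are illustrative):

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# control points; image-basis width and PSF width (all illustrative)
centers = np.linspace(0.0, 10.0, 11)
s_img, s_psf = 0.5, 0.5
s_blur = np.hypot(s_img, s_psf)     # Gaussian * Gaussian: widths add in quadrature

x = np.linspace(0.0, 10.0, 201)
w_true = np.random.default_rng(0).uniform(0.5, 1.5, centers.size)
A_blur = gauss(x[:, None], centers[None, :], s_blur)
blurred = A_blur @ w_true           # degraded image, exactly in the GRBF model

# deconvolution: fit the control-point weights with the broadened basis,
# then re-render with the narrow (unblurred) basis
w_est, *_ = np.linalg.lstsq(A_blur, blurred, rcond=None)
restored = gauss(x[:, None], centers[None, :], s_img) @ w_est
```

    In 2D the least-squares system grows with the number of control points, which is why the authors resort to GPU multithreading or coarser control-point spacing.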

  19. Grade Distribution Modeling within the Bauxite Seams of the Wachangping Mine, China, Using a Multi-Step Interpolation Algorithm

    Directory of Open Access Journals (Sweden)

    Shaofeng Wang

    2017-05-01

    Full Text Available Mineral reserve estimation and mining design depend on precise modeling of the mineralized deposit. A multi-step interpolation algorithm was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. It comprises a 1D biharmonic spline estimator for interpolating floor altitudes; 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for the grade distribution on the two vertical sections at the roadways; and 3D linear interpolation for the grade distribution between sections. Compared to field data from exploratory boreholes, this multi-step interpolation using the natural neighbor method shows optimal stability and a minimal difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors and relative standard deviations of the errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa, and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used to characterize the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
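
    One of the 2D estimators listed, inverse distance weighting, can be stated concretely in a few lines; the borehole locations and grades below are invented, and a full workflow would compare this against the neighbor and kriging estimators as the paper does.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at each query location."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    near = d.argmin(axis=1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    est = (w * values).sum(axis=1) / w.sum(axis=1)
    exact = d.min(axis=1) < 1e-12      # return the sample value at a sampled spot
    est[exact] = values[near[exact]]
    return est

# invented borehole locations (x, y) and Al2O3 mass fractions (%)
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
wa = np.array([58.0, 60.0, 62.0, 64.0])
est = idw(pts, wa, np.array([[5.0, 5.0], [0.0, 0.0]]))
```

    IDW honors the data exactly at sample points; the paper's cross-validation at boreholes is the same idea applied to held-out locations.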

  20. A weakened cascade model for turbulence in astrophysical plasmas

    International Nuclear Information System (INIS)

    Howes, G. G.; TenBarge, J. M.; Dorland, W.

    2011-01-01

    A refined cascade model for kinetic turbulence in weakly collisional astrophysical plasmas is presented that includes both the transition between weak and strong turbulence and the effect of nonlocal interactions on the nonlinear transfer of energy. The model describes the transition between weak and strong MHD turbulence and the complementary transition from strong kinetic Alfven wave (KAW) turbulence to weak dissipating KAW turbulence, a new regime of weak turbulence in which the effects of shearing by large scale motions and kinetic dissipation play an important role. The inclusion of the effect of nonlocal motions on the nonlinear energy cascade rate in the dissipation range, specifically the shearing by large-scale motions, is proposed to explain the nearly power-law energy spectra observed in the dissipation range of both kinetic numerical simulations and solar wind observations.

  1. Comparison of inverse modeling results with measured and interpolated hydraulic head data

    International Nuclear Information System (INIS)

    Jacobson, E.A.

    1986-12-01

    Inverse modeling of aquifers involves identification of effective parameters, such as transmissivities, based on hydraulic head data. The result of inverse modeling is a calibrated ground water flow model that reproduces the measured hydraulic head data as closely as is statistically possible. An inverse method that includes prior information about the parameters (i.e., kriged log transmissivity) was applied to the Avra Valley aquifer of southern Arizona using hydraulic heads obtained in three ways: measured at well locations, estimated at nodes by hand contouring, and estimated at nodes by kriging. Hand contouring yields only estimates of hydraulic head at node points, whereas kriging yields hydraulic head estimates at node points and their corresponding estimation errors. A comparison of the three inverse applications indicates the variations in the ground water flow model caused by the different treatments of the hydraulic head data. Estimates of hydraulic head computed by all three inverse models were more representative of the measured or interpolated hydraulic heads than those computed using the kriged estimates of log transmissivity. The large-scale trends in the estimates of log transmissivity determined by the three inverse models were generally similar except in the southern portion of the study area. The hydraulic head values and gradients produced by the three inverse models were similar in the interior of the study area, while the major differences between the inverse models occurred along the boundaries. 17 refs., 18 figs., 1 tab
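
    Kriging, used here both to interpolate heads and to supply the prior log-transmissivity field, computes weights from a variogram model. A compact ordinary-kriging sketch; the exponential variogram and the sample data are assumptions for illustration:

```python
import numpy as np

def ordinary_krige(xy, z, q, sill=1.0, rang=2.0):
    """Ordinary kriging with an (assumed) exponential variogram model."""
    def gamma(h):
        return sill * (1.0 - np.exp(-h / rang))
    n = len(xy)
    # kriging system: [Gamma 1; 1^T 0][lam; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1))
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - q, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    return lam @ z, lam @ b[:n] + mu     # estimate and kriging variance

# invented well locations and hydraulic heads
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.5]])
z = np.array([100.0, 101.0, 99.0, 100.5, 98.5])
est, var = ordinary_krige(xy, z, np.array([0.5, 0.5]))
est0, var0 = ordinary_krige(xy, z, xy[0])   # at a data point: exact, zero variance
```

    The estimation variance is what distinguishes kriged heads from hand-contoured ones in this study: kriging yields node estimates together with their errors.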

  2. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preservation of the data trend.
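
    The MA-filter pre-processing and the partition-and-interpolate step can be sketched as follows; this is a plain reading of the idea, not the paper's exact procedure.

```python
import numpy as np

def ma_filter(x, k=5):
    """Centered moving average; the window shrinks near the edges."""
    x = np.asarray(x, float)
    pad = k // 2
    csum = np.concatenate([[0.0], np.cumsum(x)])
    out = np.empty_like(x)
    for i in range(x.size):
        lo, hi = max(0, i - pad), min(x.size, i + pad + 1)
        out[i] = (csum[hi] - csum[lo]) / (hi - lo)
    return out

def partition_interpolate(x, n_parts=4):
    """Replace each partition by its mean, then linearly interpolate back."""
    x = np.asarray(x, float)
    chunks = np.array_split(np.arange(x.size), n_parts)
    centres = np.array([c.mean() for c in chunks])
    means = np.array([x[c].mean() for c in chunks])
    return np.interp(np.arange(x.size), centres, means)

smooth = ma_filter(np.sin(np.linspace(0.0, 6.0, 60)) + 0.1, k=5)
trend = partition_interpolate(smooth, n_parts=6)
```

    Smoothing before fitting ARIMA-GARCH reduces volatility-driven noise, while the piecewise trend preserves the long-horizon shape the abstract emphasizes.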

  3. Modeling defect production in high energy collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.

    1993-01-01

    A multi-model approach (MMA) to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis of the MMA is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, similar to the information obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations.

  4. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    Science.gov (United States)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
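
    An exact interpolating RBF network over irregularly distributed stellar parameters fits one weight vector per wavelength bin. A sketch with a Gaussian kernel (the parameter points and "spectra" are synthetic placeholders; a real implementation would normalize the parameters and choose the kernel and its width with care):

```python
import numpy as np

def rbf_interp_spectra(params, spectra, query, eps=30.0):
    """Exact Gaussian-RBF interpolation: one weight vector per wavelength bin."""
    d2 = ((params[:, None, :] - params[None, :, :]) ** 2).sum(-1)
    W = np.linalg.solve(np.exp(-eps * d2), spectra)   # (n_stars, n_wave) weights
    q2 = ((query[None, :] - params) ** 2).sum(-1)
    return np.exp(-eps * q2) @ W

rng = np.random.default_rng(3)
params = rng.uniform(0.0, 1.0, (20, 2))    # e.g. normalized (log Teff, log g)
wave = np.linspace(0.0, 1.0, 50)
spectra = np.exp(-np.abs(wave[None, :] - params[:, :1]))  # synthetic smooth spectra
interp = rbf_interp_spectra(params, spectra, params[0])
```

    The attraction for empirical libraries is exactly what the abstract states: the same few lines work for any irregular point set, with no triangulation or per-library special cases.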

  5. Feature displacement interpolation

    DEFF Research Database (Denmark)

    Nielsen, Mads; Andresen, Per Rønsholt

    1998-01-01

    Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that maximum likelihood estimation based on the covariance properties (kriging) has properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.

  6. Series-NonUniform Rational B-Spline (S-NURBS) model: a geometrical interpolation framework for chaotic data.

    Science.gov (United States)

    Shao, Chenxi; Liu, Qingqing; Wang, Tingting; Yin, Peifeng; Wang, Binghong

    2013-09-01

    Time series are widely exploited to study the innate character of complex chaotic systems. Existing chaotic models are weak in modeling accuracy because they adopt either an error-minimization strategy or an acceptable error to end the modeling process. Interpolation, by contrast, can be very useful for solving differential equations with a small modeling error, but it is difficult to apply to arbitrary-dimensional series. In this paper, geometric theory is used to reduce the modeling error, and a high-precision framework called the Series-NonUniform Rational B-Spline (S-NURBS) model is developed to deal with arbitrary-dimensional series. The capability of the interpolation framework is proved in the validation part, and we verify its reliability by interpolating the Musa dataset. The main improvement of the proposed framework is that the interpolation error can be reduced by properly adjusting the weight series step by step as more information is given. These experiments also demonstrate that studying a physical system from a geometric perspective is feasible.

  7. Boolean Models of Biological Processes Explain Cascade-Like Behavior.

    Science.gov (United States)

    Chen, Hao; Wang, Guanyu; Simha, Rahul; Du, Chenghang; Zeng, Chen

    2016-01-29

    Biological networks play a key role in determining biological function, and therefore an understanding of their structure and dynamics is of central interest in systems biology. In Boolean models of such networks, the status of each molecule is either "on" or "off"; as the molecules interact with each other, their individual statuses change from "on" to "off" or vice versa, and the system of molecules in the network collectively goes through a sequence of changes in state. This sequence of changes is termed a biological process. In this paper, we examine the common perception that events in biomolecular networks occur sequentially, in a cascade-like manner, and ask whether this is likely to be an inherent property. In further investigations of the budding and fission yeast cell-cycle, we identify two generic dynamical rules. A Boolean system that complies with these rules will automatically have a certain robustness. By considering the biological requirements in robustness and designability, we show that those Boolean dynamical systems, compared to an arbitrary dynamical system, statistically present the characteristics of cascadeness and sequentiality, as observed in the budding and fission yeast cell-cycle. These results suggest that cascade-like behavior might be an intrinsic property of biological processes.
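
    A toy synchronous Boolean network makes the cascade-like, sequential picture concrete. The three-node rules below are hypothetical and are not the yeast cell-cycle network:

```python
def step(state):
    """One synchronous update of a toy 3-node Boolean network."""
    a, b, c = state
    return (a, a, b)   # a sustains itself and drives b; b drives c

def trajectory(state, n_steps=6):
    states = [state]
    for _ in range(n_steps):
        state = step(state)
        states.append(state)
    return states

traj = trajectory((1, 0, 0))   # switch a on, watch the signal cascade downstream
```

    Starting from (1, 0, 0), the nodes switch on one per step, (1, 1, 0) then (1, 1, 1), and the system sits at a fixed point: a minimal example of the sequential, cascade-like trajectories the paper analyzes.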

  8. Unified model of secondary electron cascades in diamond

    International Nuclear Information System (INIS)

    Ziaja, Beata; London, Richard A.; Hajdu, Janos

    2005-01-01

    In this article we present a detailed and unified theoretical treatment of secondary electron cascades that follow the absorption of x-ray photons. A Monte Carlo model has been constructed that treats in detail the evolution of electron cascades induced by photoelectrons and by Auger electrons following inner shell ionizations. Detailed calculations are presented for cascades initiated by electron energies between 0.1 and 10 keV. The present article expands our earlier work [B. Ziaja, D. van der Spoel, A. Szoeke, and J. Hajdu, Phys. Rev. B 64, 214104 (2001), Phys. Rev. B 66, 024116 (2002)] by extending the primary energy range, by improving the treatment of secondary electrons, especially at low electron energies, by including ionization by holes, and by taking into account their coupling to the crystal lattice. The calculations describe the three-dimensional evolution of the electron cloud, and monitor the equivalent instantaneous temperature of the free electron gas as the system cools. The dissipation of the impact energy proceeds predominantly through the production of secondary electrons whose energies are comparable to the binding energies of the valence (40-50 eV) and of the core electrons (300 eV). The electron cloud generated by a 10 keV electron is strongly anisotropic in the early phases of the cascade (t≤1 fs). At later times, the sample is dominated by low energy electrons, and these are scattered more isotropically by atoms in the sample. Our results for the total number of secondary electrons agree with available experimental data, and show that the emission of secondary electrons approaches saturation within about 100 fs following the primary impact

  9. Connected word recognition using a cascaded neuro-computational model

    Science.gov (United States)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  10. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  11. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    This report describes the creation of a digital elevation model (DEM) of the Olkiluoto area incorporating a large area of seabed. The modeled area covers 960 square kilometers, and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, like contour lines and irregular elevation measurements, were used as source data in the process. The precision and reliability of the available source data varied largely. A DEM comprises a representation of the elevation of the earth's surface in a particular area in digital format. It is an essential component of geographic information systems designed for the analysis and visualization of location-related data, and is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model: it gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required, and the Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 values (20 000 in the first version) were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape in the time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
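
    The Monte Carlo step can be illustrated on a 1D transect: give each source point an error distribution, draw many realizations, interpolate each one, and read confidence bounds off the ensemble. All numbers below are invented, and np.interp stands in for the report's 2D thin plate spline:

```python
import numpy as np

rng = np.random.default_rng(1)
x_obs = np.array([0.0, 2.0, 5.0, 9.0])       # transect positions (invented)
z_obs = np.array([10.0, 12.0, 9.0, 14.0])    # elevations (invented)
sigma = np.array([0.1, 0.5, 0.2, 1.0])       # per-source measurement std (assumed)

grid = np.linspace(0.0, 9.0, 91)
# each realization: perturb the source points, then interpolate the grid
real = np.array([np.interp(grid, x_obs, rng.normal(z_obs, sigma))
                 for _ in range(1000)])
dem = real.mean(axis=0)                      # the DEM surface
lo, hi = np.percentile(real, [2.5, 97.5], axis=0)   # 95% confidence band
```

    Grid nodes near precise measurements get narrow bands, nodes near poor measurements get wide ones, which is exactly the per-point confidence interval the report requires.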

  12. Study on Meshfree Hermite Radial Point Interpolation Method for Flexural Wave Propagation Modeling and Damage Quantification

    Directory of Open Access Journals (Sweden)

    Hosein Ghaffarzadeh

    Full Text Available Abstract This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under a damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF in assessing the reflection ratio was evaluated. HRPIM signals were compared with theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation, although the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high error. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but adding more points in the damage region does not necessarily lead to more accurate responses. It is concluded that pure HRPIM, without any polynomial terms, is acceptable, and considering a few terms improves the accuracy, even though more terms make the problem unstable and inaccurate.

  13. Phase transition and information cascade in a voting model

    Energy Technology Data Exchange (ETDEWEB)

    Hisakado, M [Standard and Poor' s, Marunouchi 1-6-5, Chiyoda ku, Tokyo 100-0005 (Japan); Mori, S, E-mail: masato_hisakado@standardandpoors.co, E-mail: mori@sci.kitasato-u.ac.j [Department of Physics, School of Science, Kitasato University, Kitasato 1-15-1, Sagamihara, Kanagawa 228-8555 (Japan)

    2010-08-06

    In this paper, we introduce a voting model that is similar to a Keynesian beauty contest and analyse it from a mathematical point of view. There are two types of voters, copycat and independent, and two candidates. Our voting model is a binomial distribution (independent voters) doped in a beta binomial distribution (copycat voters). We find that the phase transition in this system is at the upper limit of t, where t is the time (or the number of the votes). Our model contains three phases. If copycats constitute a majority or even half of the total voters, the voting rate converges more slowly than it would in a binomial distribution. If independents constitute the majority of voters, the voting rate converges at the same rate as it would in a binomial distribution. We also study why it is difficult to estimate the conclusion of a Keynesian beauty contest when there is an information cascade.

  14. Phase transition and information cascade in a voting model

    International Nuclear Information System (INIS)

    Hisakado, M; Mori, S

    2010-01-01

    In this paper, we introduce a voting model that is similar to a Keynesian beauty contest and analyse it from a mathematical point of view. There are two types of voters, copycat and independent, and two candidates. Our voting model is a binomial distribution (independent voters) doped in a beta binomial distribution (copycat voters). We find that the phase transition in this system is at the upper limit of t, where t is the time (or the number of the votes). Our model contains three phases. If copycats constitute a majority or even half of the total voters, the voting rate converges more slowly than it would in a binomial distribution. If independents constitute the majority of voters, the voting rate converges at the same rate as it would in a binomial distribution. We also study why it is difficult to estimate the conclusion of a Keynesian beauty contest when there is an information cascade.

  15. Phase transition and information cascade in a voting model

    Science.gov (United States)

    Hisakado, M.; Mori, S.

    2010-08-01

    In this paper, we introduce a voting model that is similar to a Keynesian beauty contest and analyse it from a mathematical point of view. There are two types of voters—copycat and independent—and two candidates. Our voting model is a binomial distribution (independent voters) doped in a beta binomial distribution (copycat voters). We find that the phase transition in this system is at the upper limit of t, where t is the time (or the number of the votes). Our model contains three phases. If copycats constitute a majority or even half of the total voters, the voting rate converges more slowly than it would in a binomial distribution. If independents constitute the majority of voters, the voting rate converges at the same rate as it would in a binomial distribution. We also study why it is difficult to estimate the conclusion of a Keynesian beauty contest when there is an information cascade.
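    The copycat/independent mechanism described in the abstract can be illustrated with a toy simulation. The update rule below (copycats imitate a uniformly chosen past voter, a Polya-urn-like rule) and all parameter values are illustrative assumptions, not the paper's exact specification:

```python
import random

def vote_share(n_voters, frac_copycat, p=0.5, seed=1):
    """Toy sequential voting: independents vote for candidate A with fixed
    probability p; copycats copy a previous voter chosen uniformly at random.
    Returns the final share of votes for A."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_voters):
        if votes and rng.random() < frac_copycat:
            votes.append(rng.choice(votes))             # copycat: imitate a past vote
        else:
            votes.append(1 if rng.random() < p else 0)  # independent voter
    return sum(votes) / len(votes)

# With many copycats the final share varies widely between runs (slow
# convergence); with mostly independents it concentrates near p, as in
# a plain binomial distribution.
shares_copy = [vote_share(2000, 0.9, seed=s) for s in range(50)]
shares_ind = [vote_share(2000, 0.1, seed=s) for s in range(50)]
spread = max(shares_copy) - min(shares_copy)
spread_ind = max(shares_ind) - min(shares_ind)
```

    The wide run-to-run spread in the copycat-dominated regime is the practical face of the information cascade: the early votes largely determine the outcome.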

  16. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    Directory of Open Access Journals (Sweden)

    Orlov Alexey

    2016-01-01

    Full Text Available This article presents the results of developing a mathematical model of the nonstationary separation processes occurring in gas centrifuge cascades for the separation of multicomponent isotope mixtures. The model was used to calculate the parameters of a gas centrifuge cascade for the separation of germanium isotopes. Comparison of the obtained values with the results of other authors showed that the developed mathematical model adequately describes nonstationary separation processes in gas centrifuge cascades for the separation of multicomponent isotope mixtures.

  17. Mathematical model of nonstationary hydraulic processes in gas centrifuge cascade for separation of multicomponent isotope mixtures

    OpenAIRE

    Orlov, Aleksey Alekseevich; Ushakov, Anton; Sovach, Victor

    2017-01-01

    The article presents the results of developing a mathematical model of nonstationary hydraulic processes in a gas centrifuge cascade for the separation of multicomponent isotope mixtures. The model was used to calculate the parameters of a gas centrifuge cascade for the separation of silicon isotopes. Comparison of the obtained values with the results of other authors showed that the developed mathematical model adequately describes nonstationary hydraulic processes in gas centrifuge cascades for separation...

  18. Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G

    2000-01-01

    An accelerator-driven subcritical cascade reactor composed of a main thermal-neutron reactor, constructed analogously to the core of the VVER-1000 reactor, and a booster reactor, constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting the radioactive wastes produced: at k_{eff}=0.98 and a proton accelerator beam current of I=5.3 mA, the neutron flux density in the thermal zone is PHI^{max}(r,z)=10^{14} n/(cm^{2} s) and in the fast zone PHI^{max}(r,z)=2.25 cdot 10^{15} n/(cm^{2} s). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.

  19. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    Science.gov (United States)

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
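    The core "Interpolated Markov Model" idea, blending k-mer statistics of several orders and backing off to shorter contexts, can be sketched as follows. The fixed interpolation weight and the tiny training set are illustrative assumptions; the paper's actual scoring (like GLIMMER's) weights each order by how well its context is supported by counts:

```python
from collections import defaultdict

def train_imm(seqs, max_order=3):
    """Count, for each order k = 0..max_order, how often each character
    follows each length-k context in the training sequences."""
    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(max_order + 1)]
    for s in seqs:
        for i in range(len(s)):
            for k in range(max_order + 1):
                if i >= k:
                    counts[k][s[i - k:i]][s[i]] += 1
    return counts

def imm_prob(counts, context, ch, lam=0.6, alphabet="ACGT"):
    """Interpolated probability of ch after context: start from a uniform
    backstop, then blend in longer and longer contexts with fixed weight lam
    (a simplification of count-dependent weighting)."""
    prob = 1.0 / len(alphabet)
    for k in range(len(counts)):
        ctx = context[-k:] if k else ""
        tot = sum(counts[k][ctx].values())
        if tot:
            prob = (1 - lam) * prob + lam * counts[k][ctx][ch] / tot
    return prob

# Hypothetical training CRMs; T almost always follows the context ACG here.
counts = train_imm(["ACGTACGTACGT", "ACGTACGA"])
p = imm_prob(counts, "ACG", "T")
```

    Scoring a candidate window is then the product (or log-sum) of these per-position probabilities, and windows whose score matches the known-CRM profile become CRM candidates.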

  20. A simple model of global cascades on random networks

    Science.gov (United States)

    Watts, Duncan J.

    2002-04-01

    The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades, herein called global cascades, that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
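    The threshold rule at the heart of the model is compact enough to simulate directly. The sketch below runs it on an Erdos-Renyi random graph; the graph size, mean degree and uniform threshold are illustrative choices, not values from the paper:

```python
import random

def watts_cascade(n=1000, avg_degree=4, threshold=0.18, seed=7):
    """Minimal sketch of the Watts threshold model: build a sparse random
    graph, activate one random seed node, then repeatedly activate any node
    whose active-neighbor fraction reaches its threshold, until stable.
    Returns the final active fraction (the cascade size)."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):                       # Erdos-Renyi edge generation
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    active = [False] * n
    active[rng.randrange(n)] = True          # small initial shock
    changed = True
    while changed:                           # synchronous updates until stable
        changed = False
        for i in range(n):
            if not active[i] and nbrs[i]:
                frac = sum(active[j] for j in nbrs[i]) / len(nbrs[i])
                if frac >= threshold:
                    active[i] = True
                    changed = True
    return sum(active) / n

size = watts_cascade()
```

    Sweeping `avg_degree` while holding the threshold fixed reproduces the paper's two regimes: global cascades vanish both when the graph is too sparse to propagate and when nodes have so many neighbors that one active neighbor no longer tips them.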

  1. Interpolating and Estimating Horizontal Diffuse Solar Irradiation to Provide UK-Wide Coverage: Selection of the Best Performing Models

    Directory of Open Access Journals (Sweden)

    Diane Palmer

    2017-02-01

    Full Text Available Plane-of-array (PoA) irradiation data are required to simulate the energetic performance of photovoltaic devices (PVs). Normally, solar data are only available as global horizontal irradiation, for a limited number of locations, and typically at hourly time resolution. One approach to handling this restricted data is first to interpolate it to the location of interest; it must then be translated to PoA data by separately considering the diffuse and beam components. There are many methods of interpolation. This research selects ordinary kriging as the best performing technique by studying mathematical properties, experimentation and leave-one-out cross-validation. Likewise, a number of different translation models have been developed, most of them parameterised for specific measurement setups and locations. The work presented identifies the optimum approach for the UK on a national scale. The global horizontal irradiation is split into its constituent parts, and diverse separation models were tried; the results of each separation algorithm were checked against measured data distributed across the UK. It became apparent that while there is little difference between procedures (14 Wh/m2 mean bias error (MBE), 12 Wh/m2 root mean square error (RMSE)), the Ridley, Boland and Lauret equation (a universal split algorithm) consistently performed well. The combined interpolation/separation RMSE is 86 Wh/m2.

  2. Developmental Cascade Model for Adolescent Substance Use from Infancy to Late Adolescence

    Science.gov (United States)

    Eiden, Rina D.; Lessard, Jared; Colder, Craig R.; Livingston, Jennifer; Casey, Meghan; Leonard, Kenneth E.

    2016-01-01

    A developmental cascade model for adolescent substance use beginning in infancy was examined in a sample of children with alcoholic and nonalcoholic parents. The model examined the role of parents' alcohol diagnoses, depression and antisocial behavior in a cascading process of risk via 3 major hypothesized pathways: first, via parental…

  3. Modeling cascading failures in interdependent infrastructures under terrorist attacks

    International Nuclear Information System (INIS)

    Wu, Baichao; Tang, Aiping; Wu, Jie

    2016-01-01

    An attack strength degradation model has been introduced to further capture the interdependencies among infrastructures and to model cascading failures across infrastructures when terrorist attacks occur. A medium-sized energy system comprising an oil network and a power network is selected for exploring the vulnerabilities from independent networks to interdependent networks, considering both structural vulnerability and functional vulnerability. Two types of interdependencies among critical infrastructures are involved in this paper: physical interdependencies and geographical interdependencies, represented by tunable parameters based on the probabilities of failures of nodes in the networks. A tolerance parameter α is used to evaluate the overloads of substations, based on power flow redistribution in power transmission systems under attack. The simulation results show that independent or interdependent networks collapse when only a small fraction of nodes is attacked under the attack strength degradation model, especially the interdependent networks. The methodology introduced in this paper, with both physical and geographical interdependencies involved, can be applied to further analyze the vulnerability of interdependent infrastructures, and provides insights into that vulnerability for mitigation actions in critical infrastructure protection. - Highlights: • An attack strength degradation model based on the specified locations has been introduced. • Interdependencies, both physical and geographical, have been analyzed. • The structural vulnerability and the functional vulnerability have been considered.

  4. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary, and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by introducing gradation into this model. If this is accomplished using conventional fuzzy logics, the generalized model cannot preserve the Boolean frame, and consequently the validity of the model's dynamics is not secured. The aim of this paper is to present a Boolean-consistent generalization of Boolean networks: interpolative Boolean networks. The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and offers greater descriptive power than traditional models. For illustrative purposes, the IBN is compared to models based on existing real-valued approaches. Given the complexity of most systems to be analyzed and the characteristics of interpolative Boolean algebra, software support was developed to provide graphical and numerical tools for complex system modeling and analysis.

  5. Interpolation functors and interpolation spaces

    CERN Document Server

    Brudnyi, Yu A

    1991-01-01

    The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...

  6. Account of the effect of nuclear collision cascades in model of radiation damage of RPV steels

    International Nuclear Information System (INIS)

    Kevorkyan, Yu.R.; Nikolaev, Yu.A.

    1997-01-01

    A kinetic model is proposed for describing the effect of collision cascades in a model of radiation damage of reactor pressure vessel steels. It is a closed system of equations which, in the general case, can be solved only by numerical methods.

  7. A minimal rupture cascade model for living cell plasticity

    Science.gov (United States)

    Polizzi, Stefano; Laperrousaz, Bastien; Perez-Reche, Francisco J.; Nicolini, Franck E.; Maguer Satta, Véronique; Arneodo, Alain; Argoul, Françoise

    2018-05-01

    Under physiological and pathological conditions, cells experience large forces and deformations that often exceed the linear viscoelastic regime. Here we drive CD34+ cells isolated from healthy and leukemic bone marrows in the highly nonlinear elasto-plastic regime, by poking their perinuclear region with a sharp AFM cantilever tip. We use the wavelet transform mathematical microscope to identify singular events in the force-indentation curves induced by local rupture events in the cytoskeleton (CSK). We distinguish two types of rupture events, brittle failures likely corresponding to irreversible ruptures in a stiff and highly cross-linked CSK and ductile failures resulting from dynamic cross-linker unbindings during plastic deformation without loss of CSK integrity. We propose a stochastic multiplicative cascade model of mechanical ruptures that reproduces quantitatively the experimental distributions of the energy released during these events, and provides some mathematical and mechanistic understanding of the robustness of the log-normal statistics observed in both brittle and ductile situations. We also show that brittle failures are relatively more prominent in leukemia than in healthy cells suggesting their greater fragility.

  8. Modelling of the Blood Coagulation Cascade in an In Vitro Flow System

    DEFF Research Database (Denmark)

    Andersen, Nina Marianne; Sørensen, Mads Peter; Efendiev, Messoud A.

    2010-01-01

    We derive a mathematical model of a part of the blood coagulation cascade set up in a perfusion experiment. Our purpose is to simulate the influence of blood flow and diffusion on the blood coagulation pathway. The resulting model consists of a system of partial differential equations taking...... and flow equations, which guarantee non-negative concentrations at all times. The criterion is applied to the model of the blood coagulation cascade.

  9. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    OpenAIRE

    Orlov Alexey; Ushakov Anton; Sovach Victor

    2016-01-01

    This article presents results of development of the mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used for the calculation parameters of gas centrifuge cascade for separation of germanium isotopes. Comparison of obtained values with results of other authors revealed that developed mathematical model is adequate to describe nonstationary separation processes in gas centrifuge casca...

  10. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    G. Garnero

    2014-01-01

    In the present study different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, together with test sections realized at higher resolutions and independent digital models of the same territory, allows a series of analyses to be set up, with consequent determination of the best interpolation methodologies. The analysis of the residuals on the test sites allows the descriptive statistics of the computed values to be calculated. All the algorithms furnished interesting results; notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gives the best results in this study case. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may influence the quality reduction of the final output in the interpolation phase.
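    The IDW scheme the study found best for dense models is simple to state: each query point receives a weighted average of the known values, with weights proportional to an inverse power of distance. A minimal sketch (no search radius or neighbor limit, which real GIS implementations usually add):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting interpolation of scattered values
    z_known at locations xy_known, evaluated at xy_query points."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = np.empty(len(xy_query))
    for k, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                  # query coincides with a sample
            out[k] = z_known[np.argmin(d)]
        else:
            w = 1.0 / d**power              # weights fall off with distance
            out[k] = np.sum(w * z_known) / np.sum(w)
    return out

# Hypothetical elevation samples at the corners of a unit square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
est = idw(pts, z, [[0.5, 0.5], [0.0, 0.0]])  # center and an exact sample point
```

    Note that IDW estimates never leave the range of the input values, which is one reason it behaves well on dense, smoothly varying elevation data.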

  11. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
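    The separable structure the article emphasizes means an N-dimensional interpolation reduces to repeated one-dimensional ones. A minimal bilinear example on a synthetic 2x2 image (interpolate along x, then along y):

```python
import numpy as np

def bilinear(img, x, y):
    """Separable bilinear interpolation of img at fractional coordinates
    (x, y), with x along columns and y along rows; edge pixels clamp."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # 1-D interpolation along x
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot                 # then 1-D along y

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
center = bilinear(img, 0.5, 0.5)  # blend of all four pixels
```

    Bicubic, spline and sinc interpolation follow the same separable pattern with wider 1-D kernels in place of the two-tap linear one.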

  12. Model-Free Autotuning Testing on a Model of a Three-Tank Cascade

    Directory of Open Access Journals (Sweden)

    Stanislav VRÁNA

    2009-06-01

    Full Text Available A newly developed model-free autotuning method based on frequency response analysis has been tested on a laboratory set-up representing a physical model of a three-tank cascade. This laboratory model was chosen for the following reasons: (a) the laboratory model was ready for computer control; (b) computer simulation could simultaneously be utilized effectively, because a mathematical description of the cascade based on quite exactly valid relations was available; (c) the set-up provided the necessary degree of nonlinearity and changeable properties. The improvement of the laboratory set-up instrumentation presented here was necessary because the results obtained from the first experimental identification did not correspond to the results provided by the simulation. The data were evidently imprecise, because the available sensors and the conditions for process settling were inadequate.

  13. Modeling of filling gas centrifuge cascade for nickel isotope separation by feed flow input to different stages

    Directory of Open Access Journals (Sweden)

    Orlov Alexey A.

    2017-01-01

    Full Text Available The article presents the results of research on filling a gas centrifuge cascade with process gas fed into different stages. The modeling of cascade filling was done for nickel isotope separation. Analysis of the results shows that the nickel isotope concentrations of the light and heavy fraction flows after filling the cascade depend on the feed stage number.

  14. CASCADE: An Agent Based Framework For Modeling The Dynamics Of Smart Electricity Systems

    OpenAIRE

    Rylatt, R. M.; Gammon, Rupert; Boait, Peter John; Varga, L.; Allen, P.; Savill, M.; Snape, J. Richard; Lemon, Mark; Ardestani, B. M.; Pakka, V. H.; Fletcher, G.; Smith, S.; Fan, D.; Strathern, M.

    2013-01-01

    Collaborative project with Cranfield University. The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent Based Modelling (ABM). The CASCADE Framework can be used both to gain policy- and industry-relevant insights into the smart grid concept itself and as a platform to design and test distributed ICT solutions for smart grid based business entities. ABM is used to capture the behaviors of different socia...

  15. Mathematical modeling of filling of gas centrifuge cascade for nickel isotope separation by various feed flow rate

    Science.gov (United States)

    Ushakov, Anton; Orlov, Alexey; Sovach, Victor P.

    2018-03-01

    This article presents the results of research on filling a gas centrifuge cascade for the separation of a multicomponent isotope mixture with process gas at various feed flow rates. A mathematical model of the nonstationary hydraulic and separation processes occurring in the gas centrifuge cascade was used. The object of the research is to determine the regularities of the transient behavior of nickel isotopes in the cascade during its filling. It is shown that the isotope concentrations in the cascade stages after filling depend on the variable parameters and are not equal to the concentrations in the initial isotope mixture (or the cascade feed flow); this assumption was used earlier by other researchers when modeling such nonstationary processes as the establishment of steady-state isotope concentrations in the cascade. The article presents the physical laws of isotope distribution in the cascade stages after filling, and shows that by varying each cascade parameter (feed flow rate, feed stage number or number of cascade stages) it is possible to change the isotope concentrations in the cascade output flows (light or heavy fraction) so as to shorten the subsequent transient to steady-state isotope concentrations in the cascade.

  16. Nonword Reading: Comparing Dual-Route Cascaded and Connectionist Dual-Process Models with Human Data

    Science.gov (United States)

    Pritchard, Stephen C.; Coltheart, Max; Palethorpe, Sallyanne; Castles, Anne

    2012-01-01

    Two prominent dual-route computational models of reading aloud are the dual-route cascaded (DRC) model, and the connectionist dual-process plus (CDP+) model. While sharing similarly designed lexical routes, the two models differ greatly in their respective nonlexical route architecture, such that they often differ on nonword pronunciation. Neither…

  17. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.

  18. Universal resilience patterns in cascading load model: More capacity is not always better

    Science.gov (United States)

    Wang, Jianwei; Wang, Xue; Cai, Lin; Ni, Chengzhang; Xie, Wei; Xu, Bo

    We study the problem of universal resilience patterns in complex networks against cascading failures. We revise the classical betweenness method and overcome its limitation in quantifying the load in a cascading model. Considering that the load generated by all nodes should equal the load transported by all edges in the whole network, we propose a new method to quantify the load on an edge and construct a simple cascading model. By attacking the edge with the highest load, we show that, if the flow between two nodes is transported along the shortest paths between them, then the resilience of some networks against cascading failures decreases as the capacity of every edge is enhanced, i.e. more capacity is not always better. We also observe an abnormal fluctuation of the additional load that exceeds the capacity of each edge. Using a simple graph, we analyze the propagation of cascading failures step by step and give a reasonable explanation of the abnormal fluctuation of the cascading dynamics.
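    The capacity/overload mechanics common to this family of models can be illustrated with a toy. The sketch below replaces the paper's shortest-path flow model with a much simpler global-redistribution rule (a failed edge's load is split equally among all surviving edges), so it only shows how the tolerance parameter gates cascade propagation, not the paper's counterintuitive resilience result; every detail is an illustrative assumption:

```python
def edge_cascade(loads, alpha):
    """Toy edge-overload cascade: each edge gets capacity (1+alpha) times
    its initial load; the most loaded edge fails first, and each failed
    edge's load is split equally among surviving edges. Returns the
    fraction of edges that survive."""
    caps = [(1 + alpha) * l for l in loads]      # capacities before any failure
    alive = [True] * len(loads)
    start = max(range(len(loads)), key=lambda i: loads[i])
    failed = [start]                             # attack the most loaded edge
    while failed:
        i = failed.pop()
        if not alive[i]:
            continue
        alive[i] = False
        survivors = [j for j in range(len(loads)) if alive[j]]
        if not survivors:
            break
        share = loads[i] / len(survivors)        # redistribute the lost load
        for j in survivors:
            loads[j] += share
            if loads[j] > caps[j]:               # overload triggers new failure
                failed.append(j)
    return sum(alive) / len(alive)

frac_small = edge_cascade([1.0] * 20, alpha=0.02)  # tight tolerance
frac_big = edge_cascade([1.0] * 20, alpha=0.5)     # generous tolerance
```

    In this simplified rule a larger alpha always helps; the paper's point is that with shortest-path-routed flows the relationship can invert, which is why the routing model matters.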

  19. Childhood Leukaemia Incidence in Hungary, 1973-2002. Interpolation Model for Analysing the Possible Effects of the Chernobyl Accident

    International Nuclear Information System (INIS)

    Toeroek, Szabolcs; Borgulya, Gabor; Lobmayer, Peter; Jakab, Zsuzsanna; Schuler, Dezsoe; Fekete, Gyoergy

    2005-01-01

    The incidence of childhood leukaemia in Hungary has yet to be reported, although data have been available since the early 1970s. The Hungarian data therefore cover the time before and after the Chernobyl nuclear accident (1986). The aim of this study was to assess the effects of the Chernobyl accident on childhood leukaemia incidence in Hungary. A population-based study was carried out using data of the National Paediatric Cancer Registry of Hungary from 1973 to 2002. The total number of cases was 2204. To test the effect of the Chernobyl accident the authors applied a new approach called the 'Hypothesized Impact Period Interpolation' model, which takes into account the increasing trend of childhood leukaemia incidence and the hypothesized exposure and latency times. The incidence of leukaemia in the age group 0-14 varied between 33.2 and 39.4 per million person-years over the observed 30-year period, and the incidence of childhood leukaemia showed a moderate increase of 0.71% annually (p=0.0105). In the period of the hypothesized impact of the Chernobyl accident the incidence rate was elevated by 2.5% (95% CI: -8.1%; +14.3%), but this change was not statistically significant (p=0.663). The age-standardised incidence, the age distribution, the gender ratio, and the magnitude of the increasing trend of childhood leukaemia incidence in Hungary were similar to those in other European countries. Applying the presented interpolation method, the authors did not find a statistically significant increase in leukaemia incidence in the period of the hypothesized impact of the Chernobyl accident

  20. A model of disordered zone formation in Cu3Au under cascade-producing irradiation

    International Nuclear Information System (INIS)

    Kapinos, V.G.; Bacon, D.J.

    1995-01-01

    A model to describe the disordering of ordered Cu3Au under irradiation is proposed. For the thermal spike phase of a displacement cascade, the processes of heat evolution and conduction in the cascade region are modelled by solving the thermal conduction equation by a discretization method for a medium that can melt and solidify under appropriate conditions. The model considers disordering to result from cascade core melting, with the final disordered zone corresponding to the largest molten zone achieved. The initial conditions for this treatment are obtained by simulating cascades with the MARLOWE code. The contrast of disordered zones imaged in a superlattice dark-field reflection and projected on the plane parallel to the surface of a thin foil was calculated. The average size of images from hundreds of cascades created by incident Cu+ ions was calculated for different ion energies and compared with experimental transmission electron microscopy data. The model is in reasonable quantitative agreement with the experimentally observed trends. (author)

  1. An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.

    Directory of Open Access Journals (Sweden)

    P Martin Sander

    Full Text Available Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.

  2. An Evolutionary Cascade Model for Sauropod Dinosaur Gigantism - Overview, Update and Tests

    Science.gov (United States)

    Sander, P. Martin

    2013-01-01

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades (“Reproduction”, “Feeding”, “Head and neck”, “Avian-style lung”, and “Metabolism”). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait “Very high body mass”. Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size. PMID:24205267

  3. An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.

    Science.gov (United States)

    Sander, P Martin

    2013-01-01

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.

  4. Modeling of fissile material diversion in solvent extraction cascades

    International Nuclear Information System (INIS)

    Schneider, A.; Carlson, R.W.

    1980-01-01

    Changes were calculated for measurable parameters of a solvent extraction section of a reprocessing plant resulting from postulated fissile material diversion actions. The computer program SEPHIS was modified to calculate the time-dependent concentrations of uranium and plutonium in each stage of a cascade. The calculation of the inventories of uranium and plutonium in each contactor was also included. The concentration and inventory histories were computed for a group of four sequential columns during start-up and for postulated diversion conditions within this group of columns. Monitoring of column exit streams or of integrated column inventories for fissile materials could provide qualitative indications of attempted diversions. However, the time delays and resulting changes are complex and do not correlate quantitatively with the magnitude of the initiating event

  5. Statistical modeling in phenomenological description of electromagnetic cascade processes produced by high-energy gamma quanta

    International Nuclear Information System (INIS)

    Slowinski, B.

    1987-01-01

    A description of a simple phenomenological model of electromagnetic cascade process (ECP) initiated by high-energy gamma quanta in heavy absorbents is given. Within this model spatial structure and fluctuations of ionization losses of shower electrons and positrons are described. Concrete formulae have been obtained as a result of statistical analysis of experimental data from the xenon bubble chamber of ITEP (Moscow)

  6. ARRA: Reconfiguring Power Systems to Minimize Cascading Failures - Models and Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Dobson, Ian [Iowa State University; Hiskens, Ian [University of Michigan; Linderoth, Jeffrey [University of Wisconsin-Madison; Wright, Stephen [University of Wisconsin-Madison

    2013-12-16

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines

  7. Local-metrics error-based Shepard interpolation as surrogate for highly non-linear material models in high dimensions

    Science.gov (United States)

    Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian

    2017-10-01

    Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first-principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weight different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
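The core idea behind modified Shepard interpolation can be illustrated in one dimension: blend local first-order (Taylor-like) models with inverse-distance weights so the interpolant is exact at every sample. This is only a minimal sketch of the classic modified Shepard scheme, not the local-metric, error-weighted method of the record; the function name and argument layout are illustrative.

```python
def modified_shepard(x, nodes, values, slopes, p=2):
    """Modified Shepard interpolation at x (1D sketch).

    Each sample contributes a local linear 'nodal function'
    values[i] + slopes[i] * (x - nodes[i]); contributions are blended
    with inverse-distance weights 1 / |x - nodes[i]|**p, so the
    interpolant is exact at every node."""
    num = den = 0.0
    for xi, fi, gi in zip(nodes, values, slopes):
        d = abs(x - xi)
        if d == 0.0:
            return fi                     # interpolation condition
        w = 1.0 / d**p
        num += w * (fi + gi * (x - xi))   # local first-order model
        den += w
    return num / den
```

Because each nodal function is a local linear model, the sketch reproduces any linear function exactly when the supplied slopes match it.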

  8. Experimental Validation of Surrogate Models for Predicting the Draping of Physical Interpolating Surfaces

    DEFF Research Database (Denmark)

    Christensen, Esben Toke; Lund, Erik; Lindgaard, Esben

    2018-01-01

    This paper concerns the experimental validation of two surrogate models through a benchmark study involving two different variable shape mould prototype systems. The surrogate models in question are different methods based on kriging and proper orthogonal decomposition (POD), which were developed...... to the performance of the studied surrogate models. By comparing surrogate model performance for the two variable shape mould systems, and through a numerical study involving simple finite element models, the underlying cause of this effect is explained. It is concluded that for a variable shape mould prototype...... hypercube approach. This sampling method allows for generating a space filling and high-quality sample plan that respects mechanical constraints of the variable shape mould systems. Through the benchmark study, it is found that mechanical freeplay in the modeled system is severely detrimental...

  9. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    OpenAIRE

    Giao N. Pham; Suk-Hwan Lee; Ki-Ryong Kwon

    2018-01-01

    With the development of 3D printing technology, 3D printing has recently been applied to many areas of life including healthcare and the automotive industry. Due to the benefit of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure...

  10. Bayesian interpolation in a dynamic sinusoidal model with application to packet-loss concealment

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Cemgil, Ali Taylan

    2010-01-01

    a Bayesian inference scheme for the missing observations, hidden states and model parameters of the dynamic model. The inference scheme is based on a Markov chain Monte Carlo method known as Gibbs sampler. We illustrate the performance of the inference scheme to the application of packet-loss concealment...

  11. Modelling, interpolation and stochastic simulation in space and time of global solar radiation

    NARCIS (Netherlands)

    Bechini, L.; Ducco, G.; Donatelli, M.; Stein, A.

    2000-01-01

    Global solar radiation data used as daily inputs for most cropping systems and water budget models are frequently available from only a few weather stations and over short periods of time. To overcome this limitation, the Campbell–Donatelli model relates daily maximum and minimum air temperatures to

  12. A developmental cascade perspective of paediatric obesity: a conceptual model and scoping review.

    Science.gov (United States)

    Smith, Justin D; Egan, Kaitlyn N; Montaño, Zorash; Dawson-McClure, Spring; Jake-Schoffman, Danielle E; Larson, Madeline; St George, Sara M

    2018-04-05

    Considering the immense challenge of preventing obesity, the time has come to reconceptualise the way we study the development of obesity in childhood. The developmental cascade model offers a longitudinal framework to elucidate the way cumulative consequences and spreading effects of risk and protective factors, across and within biopsychosocial spheres and phases of development, can propel individuals towards obesity. In this article, we use a theory-driven model-building approach and a scoping review that included 310 published studies to propose a developmental cascade model of paediatric obesity. The proposed model provides a basis for testing hypothesised cascades with multiple intervening variables and complex longitudinal processes. Moreover, the model informs future research by resolving seemingly contradictory findings on pathways to obesity previously thought to be distinct (low self-esteem, consuming sugary foods, and poor sleep each cause obesity) that are actually processes working together over time (low self-esteem causes consumption of sugary foods, which disrupts sleep quality and contributes to obesity). The findings of such inquiries can aid in identifying the timing and specific targets of preventive interventions across and within developmental phases. The implications of such a cascade model of paediatric obesity for health psychology and developmental and prevention sciences are discussed.

  13. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
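The kind of smooth interpolant the record describes can be sketched with a natural cubic spline, whose second derivatives come from a standard tridiagonal system solved with the Thomas algorithm. This is a generic textbook construction under the natural boundary condition, not the ASPLERQ/SPLQ code itself; all names are illustrative.

```python
def natural_cubic_spline(xs, ys):
    """Build a natural cubic spline through (xs, ys); return a callable.

    Second derivatives M_i solve the usual tridiagonal system with
    natural boundaries M_0 = M_{n-1} = 0."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Assemble the tridiagonal rows (a, b, c | d); rows 0 and n-1 pin M to 0.
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    M = [0.0] * n
    M[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def s(x):
        i = n - 2
        for j in range(n - 1):            # locate the containing interval
            if x <= xs[j + 1]:
                i = j
                break
        t, u = xs[i + 1] - x, x - xs[i]
        return ((M[i] * t**3 + M[i + 1] * u**3) / (6.0 * h[i])
                + (ys[i] / h[i] - M[i] * h[i] / 6.0) * t
                + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6.0) * u)

    return s
```

The resulting interpolant passes through the given points with continuous first and second derivatives, the Q = 1 case of the (2Q-1)-continuity mentioned above.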

  14. Using ‘snapshot’ measurements of CH4 fluxes from an ombrotrophic peatland to estimate annual budgets: interpolation versus modelling

    Directory of Open Access Journals (Sweden)

    S.M. Green

    2017-03-01

    Full Text Available Flux-chamber measurements of greenhouse gas exchanges between the soil and the atmosphere represent a snapshot of the conditions on a particular site and need to be combined or used in some way to provide integrated fluxes for the longer time periods that are often of interest. In contrast to carbon dioxide (CO2), most studies that have estimated the time-integrated flux of CH4 on ombrotrophic peatlands have not used models. Typically, linear interpolation is used to estimate CH4 fluxes during the time periods between flux-chamber measurements. CH4 fluxes generally show a rise followed by a fall through the growing season that may be captured reasonably well by interpolation, provided there are sufficiently frequent measurements. However, day-to-day and week-to-week variability is also often evident in CH4 flux data, and will not necessarily be properly represented by interpolation. Using flux chamber data from a UK blanket peatland, we compared annualised CH4 fluxes estimated by interpolation with those estimated using linear models and found that the former tended to be higher than the latter. We consider the implications of these results for the calculation of the radiative forcing effect of ombrotrophic peatlands.
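The interpolation approach described above amounts to trapezoidal integration of the snapshot fluxes between sampling dates. A minimal sketch (function name and units are illustrative, not from the record):

```python
def flux_budget_by_interpolation(days, fluxes):
    """Integrate snapshot flux measurements into a period budget by
    linear interpolation between sampling dates (trapezoidal rule).

    days   -- day-of-year of each chamber measurement (sorted)
    fluxes -- flux at each date, e.g. in mg CH4 m-2 d-1
    Returns the total over the measured period per unit area."""
    total = 0.0
    for i in range(len(days) - 1):
        dt = days[i + 1] - days[i]
        total += 0.5 * (fluxes[i] + fluxes[i + 1]) * dt
    return total
```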

  15. CASCADER: An M-chain gas-phase radionuclide transport and fate model

    International Nuclear Information System (INIS)

    Cawlfield, D.E.; Emer, D.F.; Lindstrom, F.T.; Shott, G.J.

    1993-09-01

    Chemicals and radionuclides move either in the gas-phase, liquid-phase, or both phases in soils. They may be acted upon by either biological or abiotic processes through advection and/or dispersion. Additionally during the transport of parent and daughter radionuclides in soil, radionuclide decay may occur. This version of CASCADER called CASCADR9 starts with the concepts presented in volumes one and three of this series. For a proper understanding of how the model works, the reader should read volume one first. Also presented in this volume is a set of realistic scenarios for buried sources of radon gas, and the input and output file structure for CASCADER9

  16. Joint state and parameter estimation for a class of cascade systems: Application to a hemodynamic model

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form that allows the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.

  17. Baryon stopping and strangeness baryon production in a parton cascade model

    International Nuclear Information System (INIS)

    Nara, Yasushi

    1999-01-01

    A parton cascade model which is based on pQCD, incorporating hard partonic scattering and a dynamical hadronization scheme, describes the space-time evolution of the parton/hadron system produced by ultra-relativistic nuclear collisions. Hadron yield, baryon stopping and transverse momentum distribution are calculated and compared with experimental data at SPS energies. Using a new version of the parton cascade code VNI, in which baryonic cluster formation is implemented, we calculate the net baryon number distributions and the Λ yield. It is found that baryon stopping behavior at SPS energies is well accounted for within the parton cascade picture. As a consequence of the production of baryon (u and d quark) rich parton matter, parton coalescence naturally explains the enhanced yield of the Λ particle which has been observed in experiment. (author)

  18. An algorithm for treating flat areas and depressions in digital elevation models using linear interpolation

    Science.gov (United States)

    F. Pan; M. Stieglitz; R.B. McKane

    2012-01-01

    Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Except for slope, none of the other topographic characteristics can be calculated until the flow...

  19. Cumulative Risk Disparities in Children's Neurocognitive Functioning: A Developmental Cascade Model

    Science.gov (United States)

    Wade, Mark; Browne, Dillon T.; Plamondon, Andre; Daniel, Ella; Jenkins, Jennifer M.

    2016-01-01

    The current longitudinal study examined the role of cumulative social risk on children's theory of mind (ToM) and executive functioning (EF) across early development. Further, we also tested a cascade model of development in which children's social cognition at 18 months was hypothesized to predict ToM and EF at age 4.5 through intermediary…

  20. Using the Cascade Model to Improve Antenatal Screening for the Hemoglobin Disorders

    Science.gov (United States)

    Gould, Dinah; Papadopoulos, Irena; Kelly, Daniel

    2012-01-01

    Introduction: The inherited hemoglobin disorders constitute a major public health problem. Facilitators (experienced hemoglobin counselors) were trained to deliver knowledge and skills to "frontline" practitioners to enable them to support parents during antenatal screening via a cascade (train-the-trainer) model. Objectives of…

  1. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

  2. CMB anisotropies interpolation

    NARCIS (Netherlands)

    Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

    2010-01-01

    We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging

  3. Modeling cascading failures with the crisis of trust in social networks

    Science.gov (United States)

    Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo

    2015-10-01

    In social networks, some friends often post or disseminate malicious information, such as advertising messages, informal overseas purchasing messages, illegal messages, or rumors. Too much malicious information may cause a feeling of intense annoyance. When the feeling exceeds a certain threshold, it will lead social network users to distrust these friends, which we call the crisis of trust. The crisis of trust in social networks has already become a universal concern and an urgent unsolved problem. As a result of the crisis of trust, users will cut off their relationships with some of their untrustworthy friends. Once a few of these relationships are made unavailable, it is likely that trust in other friends will decline as well, and a large portion of the social network will be influenced. The phenomenon in which the unavailability of a few relationships triggers the failure of successive relationships is known as cascading failure dynamics. To the best of our knowledge, no one has formally proposed cascading failure dynamics with the crisis of trust in social networks. In this paper, we address this potential issue, quantify the trust between two users based on user similarity, and model the minimum tolerance with a nonlinear equation. Furthermore, we construct the processes of cascading failure dynamics by considering the unique features of social networks. Based on real social network datasets (Sina Weibo, Facebook and Twitter), we adopt two attack strategies (the highest trust attack (HT) and the lowest trust attack (LT)) to evaluate the proposed dynamics and to further analyze the changes of the topology, connectivity, cascading time and cascade effect under the above attacks. We numerically find that the sparse and inhomogeneous network structure in our cascading model can better improve the robustness of social networks than the dense and homogeneous structure. However, the network structure that seems like ripples is more vulnerable than the other two network
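The failure process described above can be sketched as a toy threshold cascade: edges carry trust, each node tolerates only so much lost trust, and a node whose losses exceed its tolerance cuts all remaining ties. This is a simplified stand-in for the paper's dynamics (which quantify trust from user similarity and use a nonlinear minimum-tolerance equation); all names and the linear threshold rule are our own assumptions.

```python
def trust_cascade(adj, trust, tolerance, seed_edges):
    """Toy cascading-failure dynamics driven by a crisis of trust.

    adj        -- dict: node -> set of neighbouring nodes
    trust      -- dict: frozenset({u, v}) -> trust carried by that edge
    tolerance  -- dict: node -> total lost trust a node tolerates
                  before it cuts ALL of its remaining relationships
    seed_edges -- (u, v) pairs made unavailable by the initial attack
    Returns the set of failed edges once the cascade stops."""
    failed = set()
    frontier = {frozenset(e) for e in seed_edges}
    while frontier:
        failed |= frontier
        for e in frontier:                 # remove failed edges
            u, v = tuple(e)
            adj[u].discard(v)
            adj[v].discard(u)
        frontier = set()
        for node, nbrs in adj.items():
            lost = sum(t for e, t in trust.items()
                       if node in e and e in failed)
            if lost > tolerance[node]:     # crisis of trust: cut all ties
                frontier |= {frozenset((node, m)) for m in nbrs}
        frontier -= failed
    return failed
```

On a small path graph a-b-c, attacking edge (a, b) makes node b exceed its tolerance and drop (b, c) as well, so both edges fail.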

  4. NLP model based thermoeconomic optimization of vapor compression–absorption cascaded refrigeration system

    International Nuclear Information System (INIS)

    Jain, Vaibhav; Sachdeva, Gulshan; Kachhwaha, S.S.

    2015-01-01

    Highlights: • It addresses the size and cost estimation of a cascaded refrigeration system. • The cascaded system is a promising decarbonizing and energy-efficient technology. • Second law analysis is carried out with the modified Gouy-Stodola equation. • The total annual cost of plant operation is optimized in the present work. - Abstract: This paper addresses the size and cost estimation of a vapor compression–absorption cascaded refrigeration system (VCACRS) for water chilling application, taking R410a and water–LiBr as refrigerants in the compression and absorption sections respectively, which can help design engineers in manufacturing and experimenting on such kinds of systems. The main limitation in the practical implementation of VCACRS is its size and cost, which are optimized in the present work by implementing the Direct Search Method in a non-linear programming (NLP) mathematical model of VCACRS. The main objective of optimization is to minimize the total annual cost of the system, which comprises the costs of exergy input and capital costs in monetary units. The appropriate set of decision variables (temperature of evaporator, condenser, generator, absorber, cascade condenser, degree of overlap and effectiveness of solution heat exchanger) minimizes the total annual cost of VCACRS by 11.9% with a 22.4% reduction in investment cost at the base case, whereas the same is reduced by 7.5% with an 11.7% reduction in investment cost for a reduced rate of interest and increased life span and period of operation. Optimization results show that the higher investment cost in the latter case is well compensated through the performance and operational cost of the system. In the present analysis, the optimum cascade condensing temperature is a strong function of the period of operation and the capital recovery factor. The cascading of compression and absorption systems becomes attractive for a lower rate of interest and increased life span and operational period
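The record applies a Direct Search Method to an NLP model; compass (coordinate) search is one of the simplest members of that derivative-free family. A generic sketch on a toy cost function, not the thermoeconomic model itself; the function name and parameters are illustrative.

```python
def compass_search(cost, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Derivative-free direct-search minimisation (compass-search sketch).

    Probes +/- step along each coordinate; moves to any improving point,
    otherwise shrinks the step, until the step falls below tol."""
    x = list(x0)
    fx = cost(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                fy = cost(y)
                if fy < fx:                # accept any improving probe
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink                 # refine the mesh
    return x, fx
```

In the paper's setting, `x` would collect the decision variables (evaporator, condenser, generator, absorber and cascade-condenser temperatures, degree of overlap, heat-exchanger effectiveness) and `cost` the total annual cost.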

  5. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
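In the univariate case that this paper extends, monotonicity is enforced by limiting the tangents so each cubic piece stays inside the Fritsch-Carlson monotonicity region; a common simplification projects the scaled slopes onto a circle of radius 3. The sketch below shows that 1D limiting step only, not the bicubic conditions of the record, and its names are our own.

```python
def monotone_slopes(xs, ys):
    """Tangent choices for a monotone piecewise cubic interpolant
    (univariate Fritsch-Carlson-style limiting, sketched)."""
    n = len(xs)
    delta = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = delta[0], delta[-1]
    for i in range(1, n - 1):
        if delta[i - 1] * delta[i] <= 0.0:
            m[i] = 0.0                     # local extremum: flat tangent
        else:
            m[i] = 0.5 * (delta[i - 1] + delta[i])
    # Limit tangents so each cubic piece stays monotone.
    for i in range(n - 1):
        if delta[i] == 0.0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / delta[i], m[i + 1] / delta[i]
            r = (a * a + b * b) ** 0.5
            if r > 3.0:                    # outside the monotonicity region
                m[i] = 3.0 * a / r * delta[i]
                m[i + 1] = 3.0 * b / r * delta[i]
    return m
```

With these tangents, a standard cubic Hermite interpolant through the data is monotone on every interval of monotone data.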

  6. Modeling Spatial Distribution of Some Contamination within the Lower Reaches of Diyala River Using IDW Interpolation

    Directory of Open Access Journals (Sweden)

    Huda M. Madhloom

    2017-12-01

    Full Text Available The aim of this research was to simulate the water quality along the lower course of the Diyala River using Geographic Information Systems (GIS) techniques. For this purpose, samples were taken at 24 sites along the study area. The parameters total dissolved solids (T.D.S.), total suspended solids (T.S.S.), iron (Fe), copper (Cu), chromium (Cr), and manganese (Mn) were considered. Water samples were collected on a monthly basis for a duration of five years. The adopted analyzing approach was tested by calculating the mean absolute error (MAE) and the correlation coefficient (R) between observed water samples and predicted results. The results showed a percentage error of less than 10% and significant correlation at R > 89% for all pollutant indicators. It was concluded that the accuracy of the applied model to simulate the river pollutants can decrease the number of monitoring stations by 50%. Additionally, a distribution map of the concentration results indicated that many of the major pollution indicators did not satisfy the river water quality standards.
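Inverse Distance Weighting (IDW), the interpolation used in this record, estimates a value at an unsampled location as a weighted average of nearby stations, with weights decaying as a power of distance. A minimal sketch (the station layout and power choice are illustrative, not the Diyala study's configuration):

```python
def idw(x, y, stations, p=2):
    """Inverse Distance Weighting estimate at (x, y).

    stations -- list of (sx, sy, value) monitoring-site triples;
    nearer sites dominate, with weight 1/d**p (p = 2 is a common choice)."""
    num = den = 0.0
    for sx, sy, v in stations:
        d2 = (x - sx)**2 + (y - sy)**2
        if d2 == 0.0:
            return v                       # exactly on a station
        w = 1.0 / d2**(p / 2.0)
        num += w * v
        den += w
    return num / den
```

Mapping a concentration field is then a matter of evaluating `idw` on a grid of river-reach coordinates.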

  7. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
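Bilinear interpolation, one of the separable linear methods discussed above, reduces to two linear interpolations along x followed by one along y. A minimal sketch on a list-of-rows greyscale image (clamped at the border; names are illustrative):

```python
def bilinear(img, x, y):
    """Bilinear interpolation of a greyscale image at fractional (x, y).

    img is a list of rows; x is the column coordinate, y the row
    coordinate.  Separable: linear in x on two rows, then linear in y."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)      # clamp at the image border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
    bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
    return (1 - fy) * top + fy * bot
```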

  8. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
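The dynamic-programming core of the approach above can be sketched with a generic Viterbi decoder over "interpolation function" states. This is a minimal sketch of Viterbi decoding on a toy cost/transition setup, not the paper's parameter-free probabilistic model; the cost and transition tables here are assumptions.

```python
def viterbi_interp(n_missing, n_states, cost, trans):
    """Viterbi decoding of a best sequence of interpolation functions.

    n_states    -- number of candidate interpolators (e.g. directional
                   filters), modelled as states of a Markov chain
    cost[t][s]  -- local mismatch of interpolator s at missing pixel t
    trans[r][s] -- penalty for switching from interpolator r to s
    Returns the minimum-cost (MAP) state sequence."""
    best = [cost[0][s] for s in range(n_states)]
    back = []
    for t in range(1, n_missing):
        prev, ptr = best[:], [0] * n_states
        for s in range(n_states):
            c, ptr[s] = min((prev[r] + trans[r][s], r) for r in range(n_states))
            best[s] = c + cost[t][s]
        back.append(ptr)
    s = min(range(n_states), key=lambda r: best[r])
    path = [s]
    for ptr in reversed(back):             # trace back the optimal decisions
        s = ptr[s]
        path.append(s)
    return path[::-1]
```

With a large switching penalty, the decoder prefers a soft, spatially coherent choice of interpolator rather than a hard per-pixel decision, which is the point of the MAP formulation.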

  9. The on-line coupled atmospheric chemistry model system MECO(n) - Part 5: Expanding the Multi-Model-Driver (MMD v2.0) for 2-way data exchange including data interpolation via GRID (v1.0)

    Science.gov (United States)

    Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor

    2018-03-01

    As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. 
Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model instances. In MMD (v1

  10. The on-line coupled atmospheric chemistry model system MECO(n) – Part 5: Expanding the Multi-Model-Driver (MMD v2.0) for 2-way data exchange including data interpolation via GRID (v1.0)

    Directory of Open Access Journals (Sweden)

    A. Kerkweg

    2018-03-01

    Full Text Available As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry–climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry–climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model

  11. Lumley's energy cascade dissipation rate model for boundary-free turbulent shear flows

    Science.gov (United States)

    Duncan, B. S.

    1992-01-01

    True dissipation occurs mainly at the highest wavenumbers where the eddy sizes are comparatively small. These high wavenumbers receive their energy through the spectral cascade of energy starting with the largest eddies spilling energy into the smaller eddies, passing through each wavenumber until it is dissipated at the microscopic scale. However, a small percentage of the energy does not spill continuously through the cascade but is instantly passed to the higher wavenumbers. Consequently, the smallest eddies receive a certain amount of energy almost immediately. As the spectral energy cascade continues, the highest wavenumber needs a certain time to receive all the energy which has been transferred from the largest eddies. As such, there is a time delay, of the order of tau, between the generation of energy by the largest eddies and the eventual dissipation of this energy. For equilibrium turbulence at high Reynolds numbers, there is a wide range where energy is neither produced by the large eddies nor dissipated by viscosity, but is conserved and passed from wavenumber to higher wavenumbers. The rate at which energy cascades from one wavenumber to another is proportional to the energy contained within that wavenumber. This rate is constant and has been used in the past as a dissipation rate of turbulent kinetic energy. However, this is true only in steady, equilibrium turbulence. Most dissipation models contend that the production of dissipation is proportional to the production of energy and that the destruction of dissipation is proportional to the destruction of energy. In essence, these models state that the change in the dissipation rate is proportional to the change in the kinetic energy. This assumption is obviously incorrect for the case where there is no production of turbulent energy, yet energy continues to cascade from large to small eddies. 
If the time lag between the onset of the energy cascade and the destruction of energy at the microscale can be

  12. Experimental calibration of the mathematical model of Air Torque Position dampers with non-cascading blades

    Directory of Open Access Journals (Sweden)

    Bikić Siniša M.

    2016-01-01

    Full Text Available This paper is focused on the mathematical model of the Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on the laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated by using one set of data, while the verification of the mathematical model was conducted by using the second set of data. The mathematical model was successfully validated and it can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, no. TR31058]

  13. Selection of a Geostatistical Method to Interpolate Soil Properties of the State Crop Testing Fields using Attributes of a Digital Terrain Model

    Science.gov (United States)

    Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.

    2018-03-01

    The three most common techniques for interpolating soil properties at a field scale—ordinary kriging (OK), regression kriging with a multiple linear regression drift model (RK + MLR), and regression kriging with a principal component regression drift model (RK + PCR)—were examined. The results of the study were compiled into an algorithm for choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When the spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of the explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes impractical. In that case, multiple linear regression should be used instead.

  14. Cascade annealing: an overview

    International Nuclear Information System (INIS)

    Doran, D.G.; Schiffgens, J.O.

    1976-04-01

    Concepts and an overview of radiation displacement damage modeling and annealing kinetics are presented. Short-term annealing methodology is described and results of annealing simulations performed on damage cascades generated using the Marlowe and Cascade programs are included. Observations concerning the inconsistencies and inadequacies of current methods are presented along with simulation of high energy cascades and simulation of longer-term annealing

  15. Assimilation of the AVISO Altimetry Data into the Ocean Dynamics Model with a High Spatial Resolution Using Ensemble Optimal Interpolation (EnOI)

    Science.gov (United States)

    Kaurkin, M. N.; Ibrayev, R. A.; Belyaev, K. P.

    2018-01-01

    A parallel realization of the Ensemble Optimal Interpolation (EnOI) data assimilation (DA) method in conjunction with the eddy-resolving global circulation model is implemented. The results of DA experiments in the North Atlantic with the assimilation of the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) data from the Jason-1 satellite are analyzed. The results of simulation are compared with the independent temperature and salinity data from the ARGO drifters.

  16. Numerical study of corner separation in a linear compressor cascade using various turbulence models

    Directory of Open Access Journals (Sweden)

    Liu Yangwei

    2016-06-01

    Full Text Available Three-dimensional corner separation is a common phenomenon that significantly affects compressor performance. The turbulence model is still a weakness of the RANS method for predicting corner separation flow accurately. In the present study, corner separation in a linear highly loaded prescribed velocity distribution (PVD) compressor cascade has been investigated numerically using seven frequently used turbulence models: the Spalart–Allmaras model, standard k–ɛ model, realizable k–ɛ model, standard k–ω model, shear stress transport k–ω model, v2–f model and Reynolds stress model. The results of these turbulence models have been compared and analyzed in detail against available experimental data. It is found that the standard k–ɛ model, realizable k–ɛ model, v2–f model and Reynolds stress model can provide reasonable results for predicting three-dimensional corner separation in the compressor cascade. The Spalart–Allmaras model, standard k–ω model and shear stress transport k–ω model overestimate the corner separation region at an incidence of 0°. The turbulence characteristics are discussed, and turbulence anisotropy is observed to be stronger in the corner separation region.

  17. E-model modification for case of cascade codecs arrangement

    OpenAIRE

    Vozňák, Miroslav

    2011-01-01

    Speech quality assessment is one of the key matters of voice services and every provider should ensure adequate connection quality to end users. Speech quality has to be measured by a trusted method and results have to correlate with intelligibility and clarity of the speech, as perceived by the listener. It can be achieved by subjective methods but in real life we must rely on objective measurements based on reliable models. One of them is E-model that we can consider as...

  18. Threshold model of cascades in empirical temporal networks

    Science.gov (United States)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence such as the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework for social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only the contacts but also the times of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past; that is, the individuals are affected by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
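The windowed threshold rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the contact-list format, the function name and the parameter defaults are all assumptions.

```python
def run_threshold_model(contacts, n_nodes, seeds, threshold=0.5, window=5):
    """Watts-style threshold model on a temporal network (sketch).

    contacts: time-ordered list of (t, i, j) contact events.
    A susceptible node adopts when, among its contacts inside the
    trailing time window, the fraction involving adopted partners
    reaches the threshold.
    """
    state = [k in seeds for k in range(n_nodes)]
    recent = {k: [] for k in range(n_nodes)}  # (time, partner_adopted) per node
    for t, i, j in contacts:
        for a, b in ((i, j), (j, i)):
            recent[a].append((t, state[b]))
            # discard contacts that have fallen out of the time window
            recent[a] = [(s, x) for s, x in recent[a] if t - s <= window]
            if not state[a]:
                frac = sum(x for _, x in recent[a]) / len(recent[a])
                if frac >= threshold:
                    state[a] = True
    return sum(state)  # number of adopters
```

Replacing the fraction `frac` by the raw count `sum(x for _, x in recent[a])` gives the number-of-contacts variant of influence that the abstract also mentions.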

  19. An Improved Rotary Interpolation Based on FPGA

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2014-08-01

    Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. It was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic terms are conducive to the interpolation operation; secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
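The two-module split (standard-curve interpolation followed by a rotary transform) can be illustrated in software. This is a hedged sketch of the idea only, not the FPGA implementation; the function name and parameters are invented.

```python
import math

def rotary_ellipse_points(a, b, angle, n=8):
    """Two-stage rotary interpolation sketch: the standard-curve module
    generates points on an axis-aligned ellipse with semi-axes a and b,
    and the rotary module rotates each point by a fixed angle."""
    cos_t, sin_t = math.cos(angle), math.sin(angle)
    points = []
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        x, y = a * math.cos(phi), b * math.sin(phi)  # standard curve module
        points.append((x * cos_t - y * sin_t,        # rotary process module
                       x * sin_t + y * cos_t))
    return points
```

Keeping the rotation as a separate stage is what makes the approach cheap in hardware: the curve generator never changes, and the rotary stage is two multiply-accumulate pairs per point.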

  20. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    Science.gov (United States)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
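The core OI analysis step, nudging the background field toward an observation in proportion to the assumed error statistics, can be sketched for a single observation. This is a toy illustration under an assumed uniform background error variance; the function and parameter names are invented and are not CMAQ code.

```python
def oi_update(background, obs, obs_idx, b_var, obs_var, corr):
    """Single-observation optimal interpolation update (sketch).

    background: list of model grid values
    obs:        observed value at grid index obs_idx
    b_var:      background error variance (assumed uniform over the grid)
    obs_var:    observation error variance
    corr:       corr[k] = background error correlation between grid
                point k and the observation point (corr[obs_idx] = 1)
    """
    innovation = obs - background[obs_idx]
    denom = b_var + obs_var
    # gain at point k: K_k = b_var * corr[k] / (b_var + obs_var)
    return [xb + b_var * corr[k] * innovation / denom
            for k, xb in enumerate(background)]
```

With equal background and observation error variances, the analysis at the observation point lands halfway between model and measurement, and the correction decays away from it with the prescribed correlation.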

  1. Extension Of Lagrange Interpolation

    Directory of Open Access Journals (Sweden)

    Mousa Makey Krady

    2015-01-01

    Full Text Available Abstract In this paper we present a generalization of Lagrange interpolation polynomials to higher dimensions by using Cramer's formula. The aim is to construct polynomials in space whose interpolation error tends to zero.
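For reference, the classical one-dimensional Lagrange form that the paper generalizes can be evaluated directly. This is a standard textbook sketch, not the paper's higher-dimensional construction.

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolation polynomial through the
    given (x_i, y_i) points at x, using the basis-polynomial form:
    L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```

Because the interpolant through n + 1 points reproduces any polynomial of degree at most n exactly, three samples of x² recover x² everywhere.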

  2. Modeling elephant-mediated cascading effects of water point closure.

    Science.gov (United States)

    Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F

    2015-03-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in the availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were shown to be less affected by the closure of WPs than those of most of the other herbivore species. Our study contributes to ecologically

  3. Molecular dynamics and binary collisions modeling of the primary damage state of collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.

    1992-01-01

    The objective of this work is to determine the spectral dependence of defect production and microstructure evolution for the development of fission-fusion correlations. Quantitative information on defect production in cascades in copper obtained from recent molecular dynamics (MD) simulations is compared to defect production information determined earlier with a model based on the binary collision approximation (BCA). The total numbers of residual defects, the fractions of them that are mobile, and the sizes of immobile clusters compare favorably, especially when the termination conditions of the two simulations are taken into account. A strategy is laid out for integrating the details of the cascade quenching phase determined by MD into a BCA-based model that is practical for simulating much higher energies and longer times than MD alone can achieve. The extraction of collisional phase information from MD simulations and the correspondence of the MD and BCA versions of the collisional phase is demonstrated at low energy.

  4. Molecular dynamics and binary collision modeling of the primary damage state of collision cascades

    DEFF Research Database (Denmark)

    Heinisch, H.L.; Singh, B.N.

    1992-01-01

    Quantitative information on defect production in cascades in copper obtained from recent molecular dynamics simulations is compared to defect production information determined earlier with a model based on the binary collision approximation (BCA). The total numbers of residual defects..., the fractions of them that are mobile, and the sizes of immobile clusters compare favorably, especially when the termination conditions of the two simulations are taken into account. A strategy is laid out for integrating the details of the cascade quenching phase determined by MD into a BCA-based model... that is practical for simulating much higher energies and longer times than MD alone can achieve. The extraction of collisional phase information from MD simulations and the correspondence of MD and BCA versions of the collisional phase is demonstrated at low energy...

  5. Representation of the radiative strength functions in the practical model of cascade gamma decay

    International Nuclear Information System (INIS)

    Vu, D.C.; Sukhovoj, A.M.; Mitsyna, L.V.; Zeinalov, Sh.; Jovancevic, N.; Knezevic, D.; Krmar, M.; Dragic, A.

    2016-01-01

    The practical model of the cascade gamma decay of a neutron resonance, developed in Dubna, allows one to obtain, from the fitted intensities of the two-step cascades, parameters both of the level density and of the partial widths of emission of nuclear reaction products. In the presented variant of the model the use of phenomenological representations is minimized. Analysis of new results confirms the previous finding that the dynamics of the interaction between Fermi- and Bose-nuclear states depends on the shape of the nucleus. It also follows from the ratios of the densities of vibrational and quasi-particle levels that this interaction exists at least up to the neutron binding energy and probably differs for nuclei with varied parities of nucleons.

  6. A Novel Load Capacity Model with a Tunable Proportion of Load Redistribution against Cascading Failures

    Directory of Open Access Journals (Sweden)

    Zhen-Hao Zhang

    2018-01-01

    Full Text Available Defence against cascading failures is of great theoretical and practical significance. A novel load capacity model with a tunable proportion is proposed. We take degree and clustering coefficient into account to redistribute the loads of broken nodes. The redistribution is local, where the loads of broken nodes are allocated to their nearest neighbours. Our model has been applied on artificial networks as well as two real networks. Simulation results show that networks get more vulnerable and sensitive to intentional attacks along with the decrease of average degree. In addition, the critical threshold from collapse to intact states is affected by the tunable parameter. We can adjust the tunable parameter to get the optimal critical threshold and make the systems more robust against cascading failures.
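A minimal sketch of local, degree-weighted load redistribution follows. The paper's rule also folds the clustering coefficient into the weights; here the exponent alpha stands in for the tunable parameter, and all names are assumptions.

```python
def redistribute(load, adj, broken, alpha=1.0):
    """Send the load of a broken node to its nearest neighbours,
    each receiving a share proportional to degree**alpha.

    load: mutable list of node loads; adj: adjacency lists."""
    neighbours = list(adj[broken])
    weights = [len(adj[n]) ** alpha for n in neighbours]
    total = sum(weights)
    for n, w in zip(neighbours, weights):
        load[n] += load[broken] * w / total
    load[broken] = 0.0
    return load
```

In a full cascading-failure simulation this step would be repeated: any neighbour whose new load exceeds its capacity breaks in turn, and alpha is tuned to maximize the critical threshold.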

  7. Multi-scale interactions of geological processes during mineralization: cascade dynamics model and multifractal simulation

    Directory of Open Access Journals (Sweden)

    L. Yao

    2011-03-01

    Full Text Available Relations between mineralization and certain geological processes are established mostly by geologist's knowledge of field observations. However, these relations are descriptive and a quantitative model of how certain geological processes strengthen or hinder mineralization is not clear, that is to say, the mechanism of the interactions between mineralization and the geological framework has not been thoroughly studied. The dynamics behind these interactions are key in the understanding of fractal or multifractal formations caused by mineralization, among which singularities arise due to anomalous concentration of metals in narrow space. From a statistical point of view, we think that cascade dynamics play an important role in mineralization and studying them can reveal the nature of the various interactions throughout the process. We have constructed a multiplicative cascade model to simulate these dynamics. The probabilities of mineral deposit occurrences are used to represent direct results of mineralization. Multifractal simulation of probabilities of mineral potential based on our model is exemplified by a case study dealing with hydrothermal gold deposits in southern Nova Scotia, Canada. The extent of the impacts of certain geological processes on gold mineralization is related to the scale of the cascade process, especially to the maximum cascade division number nmax. Our research helps to understand how the singularity occurs during mineralization, which remains unanswered up to now, and the simulation may provide a more accurate distribution of mineral deposit occurrences that can be used to improve the results of the weights of evidence model in mapping mineral potential.
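The multiplicative cascade underlying the simulation can be sketched in its simplest deterministic (binomial) form, where each of n_max divisions splits every cell's measure into fractions p and 1 − p. This is an illustrative toy, not the paper's model of mineral-potential probabilities.

```python
def multiplicative_cascade(n_max, p=0.7):
    """Deterministic binomial multiplicative cascade: after n_max
    divisions the unit measure is spread over 2**n_max cells, each
    holding a product of n_max factors drawn from (p, 1 - p)."""
    measure = [1.0]
    for _ in range(n_max):
        measure = [m * f for m in measure for f in (p, 1 - p)]
    return measure
```

As n_max grows, the measure concentrates on ever-narrower cells, producing the multifractal singularities the abstract refers to; randomizing the factors per split gives the stochastic variant.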

  8. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

    We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both the recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  9. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which also gives good scaling properties for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)

  10. Dependence of radiation damage accumulation in iron on underlying models of displacement cascades and subsequent defect migration

    International Nuclear Information System (INIS)

    Souidi, A.; Becquart, C.S.; Domain, C.; Terentyev, D.; Malerba, L.; Calder, A.F.; Bacon, D.J.; Stoller, R.E.; Osetsky, Yu. N.; Hou, M.

    2006-01-01

    Groups of displacement cascades calculated independently with different simulation models and computer codes are compared on a statistical basis. The parameters used for this comparison are the number of Frenkel pairs (FP) produced, the percentages of vacancies and self-interstitial atoms (SIAs) in clusters, the spatial extent and the aspect ratio of the vacancies and the SIAs formed in each cascade. One group of cascades was generated in the binary collision approximation (BCA) and all others by full molecular dynamics (MD). The MD results differ primarily due to the empirical interatomic potentials used and, to some extent, in code strategies. Cascades were generated in simulation boxes at different initial equilibrium temperatures. Only modest differences in the predicted numbers of FP are observed, but the other cascade parameters may differ by more than 100%. The consequences of these differences on long-term cluster growth in a radiation environment are examined by means of object kinetic Monte Carlo (OKMC) simulations. These were repeated with three different parameterizations of SIA and SIA cluster mobility. The differences encompassed low to high mobility, one- and three-dimensional migration of clusters, and complete immobility of large clusters. The OKMC evolution was followed until 0.1 dpa was reached. With the range of OKMC parameters used, cluster populations after 0.1 dpa differ by orders of magnitude. Using the groups of cascades from different sources induced no difference larger than a factor of 2 in the OKMC results. No correlation could be identified between the cascade parameters considered and the number densities of vacancies and SIAs predicted by OKMC to cluster in the long term. However, use of random point defect distributions instead of those obtained for displacement cascades as input for the OKMC modeling led to significantly different results. 
It is therefore suggested that although the displacement cascade characteristics considered

  11. Representation of radiative strength functions within a practical model of cascade gamma decay

    Energy Technology Data Exchange (ETDEWEB)

    Vu, D. C., E-mail: vuconghnue@gmail.com; Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Zeinalov, Sh., E-mail: zeinal@nf.jinr.ru [Joint Institute for Nuclear Research (Russian Federation); Jovancevic, N., E-mail: nikola.jovancevic@df.uns.ac.rs; Knezevic, D., E-mail: david.knezevic@df.uns.ac.rs; Krmar, M., E-mail: krmar@df.uns.ac.rs [University of Novi Sad, Department of Physics, Faculty of Sciences (Serbia); Dragic, A., E-mail: dragic@ipb.ac.rs [Institute of Physics Belgrade (Serbia)

    2017-03-15

    A practical model developed at the Joint Institute for Nuclear Research (JINR, Dubna) in order to describe the cascade gamma decay of neutron resonances makes it possible to determine simultaneously, from an approximation of the intensities of two-step cascades, parameters of nuclear level densities and partial widths with respect to the emission of nuclear-reaction products. The number of phenomenological ideas used is minimized in the model version considered in the present study. An analysis of new results confirms what was obtained earlier for the dependence of the dynamics of the interaction of fermion and boson nuclear states on the nuclear shape. From the ratio of the level densities for excitations of the vibrational and quasiparticle types, it also follows that this interaction manifests itself in the region around the neutron binding energy and is probably different in nuclei that have different parities of nucleons.

  12. Calibrating a multi-model approach to defect production in high energy collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.; Diaz de la Rubia, T.

    1994-01-01

    A multi-model approach to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, thus providing information similar to that obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations. (orig.)

  13. Digital time-interpolator

    International Nuclear Information System (INIS)

    Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report presents a description of the design of a digital time meter. This time meter should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated on a computer with a Pascal code. On this basis the best method was chosen and used in the design. In order to test the basic operation of the circuit, a part of it was constructed with which the interpolation could be tested. The remainder of the circuit was simulated with a computer, so no data are available about the operation of the complete circuit in practice. The interpolation part, however, is the most critical part; the remainder of the circuit is more or less simple logic. This report also gives a description of the principle of interpolation and the design of the circuit. Finally, the measurement results obtained with the prototype are presented. (author). 3 refs.; 37 figs.; 2 tabs

  14. Multivariate Birkhoff interpolation

    CERN Document Server

    Lorentz, Rudolph A

    1992-01-01

    The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

  15. Low and intermediate energy pion-nucleus interactions in the cascade-exciton model

    International Nuclear Information System (INIS)

    Mashnik, S.G.

    1993-01-01

    A large variety of experimental data on pion-nucleus interactions in the bombarding energy range of 0-3000 MeV, on nucleon-induced pion production and on cumulative nucleon production, when a two-step process of pion production followed by absorption on nucleon pairs within a target is taken into account, are analyzed with the Cascade-Exciton Model of nuclear reactions. Comparison is made with other up-to-date models of these processes. The contributions of different pion absorption mechanisms and the relative role of different particle production mechanisms in these reactions are discussed

  16. Mathematical modeling of the static and dynamic behavior of the operational parameters of isotopic separation cascades composed of ultracentrifuges

    International Nuclear Information System (INIS)

    Portoghese, Celia Christiani Paschoa

    2002-01-01

    Several different mathematical models that make it possible to plan, design and follow the operation of uranium isotopic separation cascades using the gaseous ultracentrifugation process are presented, discussed and tested. Models to be used in the planning and conception phases rely on theoretical hypotheses, making it possible to calculate approximate values for the flow rate and isotopic composition of the cascade internal streams. Twelve theoretical models developed to perform this task are discussed and compared, and the theoretical models with the greatest applicability are identified. Models to be used for the complete dimensioning of a cascade before its construction, called semi-empirical models, combine experimental results obtained in individual ultracentrifuge tests with theoretical equations, allowing accurate values for the flow rate, pressure and isotopic composition of the cascade internal streams to be calculated. Thirteen semi-empirical models developed to perform this task are presented; five of them are discussed at length and one is validated through comparison with experimental results. In order to follow the operation of a cascade, it is necessary to develop models that simulate its behavior in operational conditions other than the nominal ones defined in the project. Three semi-empirical models for this kind of simulation are presented and one of them is validated through comparison with experimental results. Finally, tools are needed to simulate the cascade behavior during transients. Two dynamic models developed to perform this task are presented and compared. The dynamic model capable of simulating results closest to the real behaviour of a cascade during three different kinds of transients is identified through comparison between simulated and experimental results. (author)

  17. A cascade model of information processing and encoding for retinal prosthesis.

    Science.gov (United States)

    Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian

    2016-04-01

    Retinal prostheses offer a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Several retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. Most of these, however, focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and Poisson coding of neurons, a cascade model of the retina, comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding, is put forward, which integrates both the anatomical connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to a stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and finally Poisson spike generation. The simulation results suggest that such a cascade model can recreate the visual information processing and encoding functionalities of the retina, which is helpful in developing an artificial retina for the retinally blind.
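    The four-step pipeline described above (linear spatiotemporal filtering, static nonlinear rectification, radial sampling, Poisson spike generation) can be sketched in a few lines of NumPy/SciPy. The difference-of-Gaussians filter, sigmoid gain and 2x decimation below are illustrative stand-ins for the paper's MATLAB implementation, and every parameter value is hypothetical:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    def retina_cascade(image, dt=0.001, duration=0.1, gain=40.0):
        """Four-step sketch: spatial filtering, static nonlinear
        rectification, (decimating) sampling, Poisson spiking."""
        # 1. Linear filtering: difference-of-Gaussians as a stand-in for the
        #    outer-plexiform-layer spatiotemporal filter (temporal part omitted).
        center = gaussian_filter(image.astype(float), sigma=1.0)
        surround = gaussian_filter(image.astype(float), sigma=3.0)
        filtered = center - surround
        # 2. Static nonlinear rectification (sigmoid), giving firing rates in Hz.
        rate = gain / (1.0 + np.exp(-filtered))
        # 3. Sampling: crude 2x decimation in place of true radial sampling.
        sampled = rate[::2, ::2]
        # 4. Poisson spike generation: independent counts per time bin.
        n_bins = int(duration / dt)
        spikes = rng.poisson(sampled * dt, size=(n_bins,) + sampled.shape)
        return spikes

    img = rng.uniform(0.0, 1.0, size=(16, 16))
    spikes = retina_cascade(img)
    print(spikes.shape)  # (100, 8, 8): time bins x sampled pixel grid
    ```

    Each output unit then carries a spike train whose count statistics follow the rectified filter response, which is the sense in which the encoding stage is "Poisson".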

  18. The cascade model of teachers’ continuing professional development in Kenya: A time for change?

    Directory of Open Access Journals (Sweden)

    Harry Kipkemoi Bett

    2016-12-01

    Kenya is one of the countries whose teachers the UNESCO (2015) report cited as lacking curriculum support in the classroom. As is the case in many African countries, a large portion of teachers in Kenya enter the teaching profession inadequately prepared, while those already in the field receive insufficient support in their professional lives. The cascade model has often been utilized in the country whenever the need for teachers' continuing professional development (TCPD) has arisen, especially on a large scale. The preference for the model is due to, among other things, its cost-effectiveness and its ability to reach many teachers within a short period of time. Many researchers have, however, criticized this model for its glaring shortcomings. By contrast, TCPD programmes that are collaborative in nature and based on teachers' contexts have been found to be more effective than those that are not. This paper briefly examines cases of the cascade model in Kenya and the challenges associated with it, and proposes the adoption of collaborative and institution-based models to mitigate these challenges. The education sectors in many nations in Africa and the developing world will find the discussions here relevant.

  19. Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.

    Science.gov (United States)

    Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K

    2017-07-27

    Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation using a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.
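    The alternating renewal process underlying CARP can be illustrated with a single, uncoupled component: a system that alternates between a normal and a failed state with random sojourn times. The exponential rates below are hypothetical, and the cascading coupling between many such processes, which gives CARP its name, is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def alternating_renewal(t_end, rate_fail=0.1, rate_repair=1.0):
        """Alternate between state 0 (normal) and 1 (failed) with
        exponential sojourn times; returns (transition time, new state)."""
        t, state, events = 0.0, 0, []
        while t < t_end:
            rate = rate_fail if state == 0 else rate_repair
            t += rng.exponential(1.0 / rate)
            state ^= 1
            events.append((t, state))
        return events

    events = alternating_renewal(1000.0)
    # Empirical fraction of time in the failed state; the long-run value
    # for these rates is (1/rate_repair) / (1/rate_fail + 1/rate_repair) = 1/11.
    down = 0.0
    for (t0, s0), (t1, _) in zip(events, events[1:]):
        if s0 == 1:
            down += min(t1, 1000.0) - t0
    frac = down / 1000.0
    print(frac)
    ```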

  20. Information Theory Analysis of Cascading Process in a Synthetic Model of Fluid Turbulence

    Directory of Open Access Journals (Sweden)

    Massimo Materassi

    2014-02-01

    The use of transfer entropy has proven helpful in detecting the direction of dynamical driving in the interaction of two processes, X and Y. In this paper, we present a different normalization for the transfer entropy which is capable of better detecting the direction of information transfer. This new normalized transfer entropy is applied to detecting the direction of energy flux transfer in a synthetic model of fluid turbulence, namely the Gledzer-Ohkitana-Yamada shell model. This well-known model reproduces fully developed turbulence in Fourier space, characterized by an energy cascade towards the small scales (large wavenumbers k), so applying the information-theoretic analysis to its output tests the reliability of the analysis tool rather than exploring the model physics. As a result, the presence of a direct cascade along the scales in the shell model and the locality of the interactions in wavenumber space come out as expected, indicating the validity of this data analysis tool. In this context, the normalized version of transfer entropy, which accounts for the difference in the intrinsic randomness of the interacting processes, performs better, avoiding the wrong conclusions to which the "traditional" transfer entropy would lead.
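    For reference, the quantity being normalized in the paper is the ordinary transfer entropy T(Y→X) = H(X_{t+1}|X_t) − H(X_{t+1}|X_t, Y_t). A minimal plug-in estimator on quantile-binned data might look like the sketch below; the paper's normalization and the shell-model data are not reproduced, and the coupled test series are synthetic:

    ```python
    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y, bins=4):
        """Plug-in estimate of T(Y->X) = H(X1,X0) - H(X0) - H(X1,X0,Y0) + H(X0,Y0)
        on quantile-binned series (no bias correction, no normalization)."""
        edges = np.linspace(0, 1, bins + 1)[1:-1]
        xd = np.digitize(x, np.quantile(x, edges))
        yd = np.digitize(y, np.quantile(y, edges))

        def H(*cols):
            # Joint Shannon entropy (in bits) of the binned columns.
            counts = Counter(zip(*cols))
            p = np.array(list(counts.values()), float) / len(cols[0])
            return float(-(p * np.log2(p)).sum())

        x1, x0, y0 = xd[1:], xd[:-1], yd[:-1]
        return H(x1, x0) - H(x0) - H(x1, x0, y0) + H(x0, y0)

    rng = np.random.default_rng(1)
    y = rng.normal(size=5000)
    x = np.roll(y, 1) + 0.1 * rng.normal(size=5000)  # x is driven by the past of y
    print(transfer_entropy(x, y) > transfer_entropy(y, x))  # driving direction Y -> X
    ```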

  1. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which an improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.
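    A minimal sketch of the idea, using a linear-kernel SVR from scikit-learn on a synthetic image: the four axial neighbours of each pixel form a training sample, mirroring the "pixels around the pixel to be interpolated" construction. Hyperparameters are fixed here, whereas the paper tunes them with improved PSO:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Synthetic "low-resolution" image: a smooth ramp plus noise (stand-in data).
    img = np.add.outer(np.arange(32.0), np.arange(32.0))
    img += rng.normal(scale=0.5, size=img.shape)

    # Training samples: the four axial neighbours predict the centre pixel.
    X, y = [], []
    for i in range(1, 31):
        for j in range(1, 31):
            X.append([img[i - 1, j], img[i + 1, j], img[i, j - 1], img[i, j + 1]])
            y.append(img[i, j])
    X, y = np.array(X), np.array(y)

    # Fixed hyperparameters; the paper's scheme would tune these with PSO.
    model = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, y)
    mae = float(np.abs(model.predict(X) - y).mean())
    print(mae < 2.0)  # the learned model reproduces known pixels closely
    ```

    The trained regressor is then queried with the neighbourhood of each unknown pixel to fill in the interpolated value.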

  2. Propagation of hydro-meteorological uncertainty in a model cascade framework to inundation prediction

    Science.gov (United States)

    Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.

    2015-07-01

    This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprises a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originate in the rainfall prediction interact at the catchment level and propagate to the estimated inundation area and depth. For this, a hindcast scenario is utilised, removing non-behavioural ensemble members at each stage based on their fit to observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was set up following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges, discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors arising from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input to a distributed hydrological model, and the resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than on estimated flood inundation extents.

  3. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    Science.gov (United States)

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code was then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
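    The circuit-model idea can be illustrated with a much-simplified single-stage RL analogue: a prescribed, linearly collapsing inductance L(t), and a flux conservation coefficient lam applied to the dL/dt drive term, as a crude stand-in for the paper's loss treatment. All numerical values are hypothetical and the paper's zero-dimensional inductance model is not reproduced:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameters: initial/final inductance (H), resistance (ohm),
    # compression time (s), and flux conservation coefficient lam
    # (lam = 1 would mean no intrinsic flux loss beyond the resistance).
    L0, Lf, R, T, lam = 10e-6, 0.1e-6, 1e-3, 100e-6, 0.9

    def L(t):
        return L0 + (Lf - L0) * min(t / T, 1.0)  # linearly collapsing inductance

    def dLdt(t):
        return (Lf - L0) / T if t < T else 0.0

    def rhs(t, y):
        # Flux equation with the loss coefficient on the dL/dt drive term:
        # L di/dt = -(R + lam * dL/dt) i
        (i,) = y
        return [-(R + lam * dLdt(t)) * i / L(t)]

    sol = solve_ivp(rhs, (0.0, T), [1000.0], rtol=1e-8, max_step=1e-6)
    print(sol.y[0, -1] > sol.y[0, 0])  # current is amplified as L collapses
    ```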

  4. The transverse momentum dependence of quark fragmentation functions from cascade models

    International Nuclear Information System (INIS)

    Groot, E.H. de; Engels, J.

    1979-01-01

    A covariant generalization of the one-dimensional cascade model for quark fragmentation functions is presented, so as to include the transverse momentum behaviour and the possibility of producing different particles at different vertices along the chain. In the scaling limit the exact solution is given if the primordial function is of the type αz^(α-1)·T(p_T). For the more general case of factorizing primordial functions an analytic expression for the seagull effect is derived, which turns out to be independent of the function T(p_T). (orig.)

  5. CASCADER: An m-chain gas-phase radionuclide transport and fate model

    International Nuclear Information System (INIS)

    Cawlfield, D.E.; Been, K.B.; Emer, D.F.; Lindstrom, F.T.; Shott, G.J.

    1993-06-01

    Chemicals and radionuclides move in the gas phase, the liquid phase, or both phases in soils. They may be acted upon by either biological or abiotic processes through advection and/or diffusion. Furthermore, parent and daughter radionuclides may decay as they are transported in the soil. This is volume two of the CASCADER series, titled CASCADR8. It embodies the concepts presented in volume one of this series; to properly understand how the CASCADR8 model works, the reader should read volume one first. This volume presents the input and output file structure for CASCADR8, and a set of realistic scenarios for buried sources of radon gas

  6. Fluid-structure coupling in the guide vanes cascade of a pump-turbine scale model

    International Nuclear Information System (INIS)

    Roth, S; Hasmatuchi, V; Botero, F; Farhat, M; Avellan, F

    2010-01-01

    The present study concerns fluid-structure coupling phenomena occurring in a guide vane cascade of a pump-turbine scale model placed in the EPFL PF3 test rig. An advanced instrument set is used to monitor both vibrating structures and the surrounding flow. The paper highlights the interaction between vibrating guide vanes and the flow behavior. The pressure fluctuations in the stay vanes region are found to be strongly influenced by the amplitude of the vibrating guide vanes. Moreover, the flow induces different hydrodynamic damping on the vibrating guide vanes depending on the operating point of the pump-turbine.

  7. Cascading Gravity Extending the Dvali-Gabadadze-Porrati Model to Higher Dimension

    CERN Document Server

    de Rham, Claudia; Hofmann, Stefan; Khoury, Justin; Pujolas, Oriol; Redi, Michele; Tolley, Andrew J

    2008-01-01

    We present a higher codimension generalization of the DGP scenario which, unlike previous attempts, is free of ghost instabilities. The 4D propagator is made regular by embedding our visible 3-brane within a 4-brane, each with their own induced gravity terms, in a flat 6D bulk. The model is ghost-free if the tension on the 3-brane is larger than a certain critical value, while the induced metric remains flat. The gravitational force law 'cascades' from a 6D behavior at the largest distances, followed by a 5D and finally a 4D regime at the shortest scales.

  8. Cascading Gravity: Extending the Dvali-Gabadadze-Porrati Model to Higher Dimension

    International Nuclear Information System (INIS)

    Rham, Claudia de; Dvali, Gia; Hofmann, Stefan; Khoury, Justin; Tolley, Andrew J.; Pujolas, Oriol; Redi, Michele

    2008-01-01

    We present a generalization of the Dvali-Gabadadze-Porrati scenario to higher codimensions which, unlike previous attempts, is free of ghost instabilities. The 4D propagator is made regular by embedding our visible 3-brane within a 4-brane, each with their own induced gravity terms, in a flat 6D bulk. The model is ghost-free if the tension on the 3-brane is larger than a certain critical value, while the induced metric remains flat. The gravitational force law 'cascades' from a 6D behavior at the largest distances, followed by a 5D and finally a 4D regime at the shortest scales

  9. Fluid-structure coupling in the guide vanes cascade of a pump-turbine scale model

    Science.gov (United States)

    Roth, S.; Hasmatuchi, V.; Botero, F.; Farhat, M.; Avellan, F.

    2010-08-01

    The present study concerns fluid-structure coupling phenomena occurring in a guide vane cascade of a pump-turbine scale model placed in the EPFL PF3 test rig. An advanced instrument set is used to monitor both vibrating structures and the surrounding flow. The paper highlights the interaction between vibrating guide vanes and the flow behavior. The pressure fluctuations in the stay vanes region are found to be strongly influenced by the amplitude of the vibrating guide vanes. Moreover, the flow induces different hydrodynamic damping on the vibrating guide vanes depending on the operating point of the pump-turbine.

  10. Potential problems with interpolating fields

    Energy Technology Data Exchange (ETDEWEB)

    Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-11-15

    A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)

  11. Establishment and evaluation of operation function model for cascade hydropower station

    OpenAIRE

    Chang-ming Ji; Ting Zhou; Hai-tao Huang

    2010-01-01

    Toward solving the actual operation problems of cascade hydropower stations under hydrologic uncertainty, this paper presents a process for extracting statistical characteristics from long-term optimal cascade operation, and proposes a monthly operation function algorithm for the actual operation of cascade hydropower stations through the identification, processing and screening of the information available during long-term optimal operation. Applying the operation function to the cascade hy...

  12. CASCADER: An M-chain gas-phase radionuclide transport and fate model

    International Nuclear Information System (INIS)

    Lindstrom, F.T.; Cawlfield, D.E.; Emer, D.F.; Shott, G.J.; Donahue, M.E.

    1993-02-01

    Chemicals and radionuclides move either in the gas-phase, liquid-phase, or both phases in soils. They may be acted upon by either biological or abiotic processes through advection and diffusion. Furthermore, parent and daughter radionuclides may decay as they are transported in the soil. CASCADER is a gas-phase, one-space dimensional transport and fate model for M-chain radionuclides in very dry homogeneous or heterogeneous soil. This model contains barometric pressure-induced advection and diffusion together with linear irreversible and linear reversible sorption for each radionuclide. The advection velocity is derived from an embedded air-pumping submodel. The air-pumping submodel is based on an assumption of isothermal conditions, which is driven by barometric pressure. CASCADER allows the concentration of source radionuclides to decay via the classical Bateman chain of simple, first-order kinetic processes. The transported radionuclides also decay via first-order processes while in the soil. A mass conserving, flux-type inlet and exit set of boundary conditions are used. The user must supply the initial distribution for the parent radionuclide in the soil. The initial daughter distribution is found using equilibrium rules. The model is user friendly as it uses a prompt-driven, free-form input. The code is ANSI standard Fortran 77
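    The Bateman chain referred to in the abstract is a linear system of first-order decay equations. For a three-member chain A → B → C (C stable) it can be solved with a matrix exponential; the decay constants below are purely hypothetical:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Decay constants (1/s) for A -> B -> C with C stable; values hypothetical.
    lam = np.array([1e-3, 5e-4, 0.0])
    A = np.diag(-lam) + np.diag(lam[:-1], k=-1)  # first-order kinetics matrix
    N0 = np.array([1e6, 0.0, 0.0])               # initial atoms: parent only

    N = expm(A * 3600.0) @ N0                    # populations after one hour
    print(round(N.sum()))  # 1000000: total atoms conserved (C is stable)
    ```

    Because each column of the rate matrix sums to zero, the total number of atoms is conserved, which is a convenient sanity check on any Bateman solver.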

  13. CASCADER: An m-chain gas-phase radionuclide transport and fate model

    International Nuclear Information System (INIS)

    Lindstrom, F.T.; Cawlfield, D.E.; Emer, D.F.; Shott, G.J.; Donahue, M.E.

    1992-06-01

    Chemicals and radionuclides move in the gas phase, the liquid phase, or both phases in soils. They may be acted upon by either biological or abiotic processes as they are advected and/or dispersed. Furthermore, parent and daughter radionuclides may decay as they are transported in the soil. CASCADER is a gas-phase, one-space-dimensional transport and fate model for an m-chain of radionuclides in very dry soil. This model contains barometric pressure-induced advection and diffusion together with linear irreversible and linear reversible sorption for each radionuclide. The advection velocity is derived from an embedded air-pumping submodel, which is based on an assumption of isothermal conditions and is driven by barometric pressure. CASCADER allows the concentration of source radionuclides to decay via the classical Bateman chain of simple, first-order kinetic processes. The transported radionuclides also decay via first-order processes while in the soil. A mass-conserving, flux-type inlet and exit set of boundary conditions is used. The user must supply the initial distribution of the parent radionuclide in the soil; the initial daughter distribution is found using equilibrium rules. The model is user friendly, as it uses prompt-driven, free-form input. The code is ANSI standard Fortran 77

  14. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    Science.gov (United States)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation, and (b) continuous sigmoid functions of the multiplicative weights to consider the scale dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For this, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed pronouncedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also
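    The core of a microcanonical random cascade is a mass-conserving binary split repeated over several branching levels. The sketch below uses a simple weight distribution with atoms at 0 and 1 (the probability p01 is hypothetical) and omits the paper's event-duration constraints and sigmoid scale dependence:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mrc_disaggregate(series, levels=3, p01=0.2):
        """Microcanonical random cascade: each wet interval is split in two,
        conserving mass exactly at every branching level."""
        out = np.asarray(series, dtype=float)
        for _ in range(levels):
            finer = np.empty(2 * out.size)
            for k, v in enumerate(out):
                if v == 0.0:                     # dry intervals stay dry
                    finer[2 * k], finer[2 * k + 1] = 0.0, 0.0
                    continue
                u = rng.random()
                if u < p01:                      # all mass to the left half
                    w = 1.0
                elif u < 2 * p01:                # all mass to the right half
                    w = 0.0
                else:                            # partition by a random weight
                    w = rng.random()
                finer[2 * k], finer[2 * k + 1] = w * v, (1.0 - w) * v
            out = finer
        return out

    daily = np.array([12.0, 0.0, 3.5])           # coarse totals, e.g. mm/day
    fine = mrc_disaggregate(daily)               # 3 values -> 24 values
    print(fine.size, round(float(fine.sum()), 6))  # 24 15.5 (mass conserved)
    ```

    The "constrained cascade" modification would additionally force the first and last fine interval of each event to stay wet, preserving the event durations handed over by the Poisson rectangular pulse model.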

  15. Modulation transfer function cascade model for a sampled IR imaging system.

    Science.gov (United States)

    de Luca, L; Cardone, G

    1991-05-01

    The performance of the infrared scanning radiometer (IRSR) is strongly stressed in convective heat transfer applications where high spatial frequencies in the signal that describes the thermal image are present. The need to characterize more deeply the system spatial resolution has led to the formulation of a cascade model for the evaluation of the actual modulation transfer function of a sampled IR imaging system. The model can yield both the aliasing band and the averaged modulation response for a general sampling subsystem. For a line scan imaging system, which is the case of a typical IRSR, a rule of thumb that states whether the combined sampling-imaging system is either imaging-dependent or sampling-dependent is proposed. The model is tested by comparing it with other noncascade models as well as by ad hoc measurements performed on a commercial digitized IRSR.

  16. Composite and Cascaded Generalized-K Fading Channel Modeling and Their Diversity and Performance Analysis

    KAUST Repository

    Ansari, Imran Shafique

    2010-12-01

    The introduction of new schemes that are based on communication among nodes has motivated the use of composite fading models, due to the fact that the nodes experience different multipath fading and shadowing statistics, which subsequently determines the statistics required for the performance analysis of different transceivers. The end-to-end signal-to-noise ratio (SNR) statistics play an essential role in determining the performance of cascaded digital communication systems. In this thesis, a closed-form expression for the probability density function (PDF) of the end-to-end SNR for independent but not necessarily identically distributed (i.n.i.d.) cascaded generalized-K (GK) composite fading channels is derived. The developed PDF expression, in terms of the Meijer G-function, allows the derivation of subsequent performance metrics, applicable to different modulation schemes, including outage probability, bit error rate for coherent as well as non-coherent systems, and average channel capacity, providing insights into the performance of a digital communication system operating in an N-cascaded GK composite fading environment. Another line of research motivated by the introduction of composite fading channels is the error performance, one of the main performance measures, whose closed-form derivation has proved quite involved for certain systems. Hence, in this thesis, a unified closed-form expression, applicable to different binary modulation schemes, for the bit error rate of dual-branch selection-diversity-based systems undergoing i.n.i.d. GK fading is derived in terms of the extended generalized bivariate Meijer G-function.
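    The generalized-K model is commonly written as a gamma-gamma product (multipath times shadowing), so the end-to-end SNR of N cascaded i.n.i.d. GK channels can be sampled by Monte Carlo as a product of unit-mean gamma variates. The hop parameters below are hypothetical, and the thesis's closed-form Meijer-G results are not reproduced, only the underlying statistic:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def cascaded_gk_snr(n_samples, hops):
        """Sample the end-to-end SNR of N cascaded generalized-K channels as a
        product of unit-mean gamma pairs (multipath m, shadowing k) per hop."""
        snr = np.ones(n_samples)
        for m, k in hops:
            snr *= rng.gamma(m, 1.0 / m, n_samples)   # multipath, mean 1
            snr *= rng.gamma(k, 1.0 / k, n_samples)   # shadowing, mean 1
        return snr

    # Two i.n.i.d. hops with hypothetical (m, k) parameters.
    snr = cascaded_gk_snr(200_000, [(2.0, 1.5), (1.0, 2.5)])
    p_out = float((snr < 0.1).mean())   # outage probability at threshold 0.1
    print(0.0 < p_out < 1.0, abs(float(snr.mean()) - 1.0) < 0.05)
    ```

    Such samples can serve as a numerical cross-check against the closed-form outage probability and capacity expressions.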

  17. Time-interpolator

    International Nuclear Information System (INIS)

    Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It has a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty accompanying the use of ECL logic is keeping the interconnections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into start and stop signals. The analog part consists of a Time to Amplitude Converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs

  18. Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era

    CERN Document Server

    Heikkinen, Aatos

    This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...

  19. Enhanced modeling of band nonparabolicity with application to a mid-IR quantum cascade laser structure

    International Nuclear Information System (INIS)

    Vukovic, N; Radovanovic, J; Milanovic, V

    2014-01-01

    We analyze the influence of conduction-band nonparabolicity on bound electronic states in the active region of a quantum cascade laser (QCL). Our model assumes expansion of the conduction-band dispersion relation up to fourth order in the wavevector and use of a suitable second boundary condition at the interface of two III-V semiconductor layers. Numerical results, obtained by the transfer matrix method, are presented for two mid-infrared GaAs/Al0.33Ga0.67As QCL active regions, and they are in very good agreement with experimental data found in the literature. Comparison with a different nonparabolicity model is presented for the example of a GaAs/Al0.38Ga0.62As-based mid-IR QCL. Calculations have also been carried out for one THz QCL structure to illustrate the possible application of the model in the terahertz part of the spectrum. (paper)

  20. Precipitation interpolation in mountainous areas

    Science.gov (United States)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques and external drift covariates are tested and compared in a 26 000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumed 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by (a) fitting a precipitation lapse rate as an external drift, and (b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one station per 433 km2, higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
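    The cross-validation comparison can be sketched on synthetic gauge data: leave-one-out inverse distance weighting against the "no interpolation" benchmark of averaging all remaining stations. The coordinates and rainfall values below are synthetic stand-ins, not the Norwegian data, so no conclusion about which method wins is implied:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic gauge network: 60 stations in a 160 km square with a weak
    # west-east precipitation gradient (all values hypothetical).
    xy = rng.uniform(0.0, 160.0, size=(60, 2))
    rain = np.clip(5.0 + 0.05 * xy[:, 0] + rng.normal(scale=2.0, size=60), 0.0, None)

    def idw(points, values, target, power=2.0):
        """Inverse distance weighted estimate at `target`."""
        d = np.hypot(*(points - target).T)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        return float((w * values).sum() / w.sum())

    # Leave-one-out cross-validation: IDW vs. the all-station-mean benchmark.
    idw_err, mean_err = [], []
    for i in range(len(rain)):
        mask = np.arange(len(rain)) != i
        idw_err.append(idw(xy[mask], rain[mask], xy[i]) - rain[i])
        mean_err.append(rain[mask].mean() - rain[i])

    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    print(rmse(idw_err), rmse(mean_err))
    ```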

  1. Comparison of many-body and binary collision cascade models up to 1 keV

    International Nuclear Information System (INIS)

    Schwartz, D.M.; Schiffgens, J.D.; Doran, D.G.; Odette, G.R.; Ariyasu, R.G.

    1976-01-01

A quasi-dynamical code, ADDES, has been developed to model displacement cascades in copper for primary knock-on atom energies up to several keV. ADDES is like a dynamical code in that it employs a many-body treatment, yet similar to a binary collision code in that it incorporates the basic assumption that energy transfers below several eV can be ignored in describing cascade evolution. This paper is primarily concerned with (1) a continuing effort to validate the assumptions and specific parameters in the code by comparing ADDES results with experiment and with results from a dynamical code, and (2) comparisons of ADDES results with those from a binary collision code. The directional dependence of the displacement threshold is in reasonable agreement with the measurements of Jung et al. The behavior of focused replacement sequences is very similar to that obtained with the dynamical codes GRAPE and COMENT. Qualitative agreement was found between ADDES and COMENT for a higher-energy (500 eV) defocused event, while differences, still under study, are apparent in a 250 eV high-index event. Comparisons of ADDES with the binary collision code MARLOWE show surprisingly good agreement in the 250 to 1000 eV range for both the number and separation of Frenkel pairs. A preliminary observation, perhaps significant to displacement calculations utilizing the concept of a mean displacement energy, is the dissipation of 300 to 400 eV in a replacement sequence producing a single interstitial.

  2. Cascaded analysis of signal and noise propagation through a heterogeneous breast model

    International Nuclear Information System (INIS)

    Mainprize, James G.; Yaffe, Martin J.

    2010-01-01

Purpose: The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term and used it in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the 'power law' filter used to generate the texture of the tissue distribution. Methods: A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power-law structured backgrounds. Results: As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise scales with the square of the x-ray signal. There was good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Conclusions: Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
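The stated scaling, quantum noise power linear in the x-ray signal and anatomic noise power quadratic in it, can be checked with a toy Monte Carlo. The transmission map and fluence values here are arbitrary stand-ins, not the paper's power-law backgrounds:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed "anatomic" transmission map (structured background). The map and
# fluence values are illustrative assumptions.
t = np.clip(0.5 + 0.1 * rng.standard_normal(10000), 0.1, 1.0)

def noise_components(q, n_real=200):
    """Split image variance into quantum and anatomic parts at fluence q."""
    imgs = rng.poisson(q * t, size=(n_real, t.size)).astype(float)
    quantum = imgs.var(axis=0).mean()    # per-pixel variance over realizations
    anatomic = imgs.mean(axis=0).var()   # pixel-to-pixel variance of the mean
    return quantum, anatomic

qv1, av1 = noise_components(100.0)
qv2, av2 = noise_components(400.0)      # 4x the fluence
print(f"quantum noise ratio:  {qv2 / qv1:.1f}  (linear: expect ~4)")
print(f"anatomic noise ratio: {av2 / av1:.1f}  (quadratic: expect ~16)")
```

Quadrupling the fluence roughly quadruples the Poisson (quantum) variance but multiplies the structured-background variance by about sixteen, the square-law behavior the abstract describes.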

  3. Simulation model of harmonics reduction technique using shunt active filter by cascade multilevel inverter method

    Science.gov (United States)

    Andreh, Angga Muhamad; Subiyanto, Sunardiyo, Said

    2017-01-01

With the growth of non-linear loads in industrial applications and distribution systems, harmonic compensation has become important. Harmonic pollution is an urgent problem in improving power quality. The main contribution of the study is the modeling approach used to design a shunt active filter and the application of the cascade multilevel inverter topology to improve the power quality of electrical energy. In this study, the shunt active filter was aimed at eliminating the dominant harmonic components by injecting currents opposite to the harmonic components of the system. The active filter was designed in a shunt configuration with the cascaded multilevel inverter method, controlled by a PID controller and SPWM. With this shunt active filter, the harmonic current can be reduced so that the current waveform of the source is approximately sinusoidal. Design and simulation were conducted using Power Simulator (PSIM) software. The shunt active filter's performance was evaluated on the IEEE four-bus test system, where its installation reduced the current THD from 28.68% to 3.09%. With this result, the active filter can be applied as an effective method to reduce harmonics.

  4. Optical feedback effects on terahertz quantum cascade lasers: modelling and applications

    Science.gov (United States)

    Rakić, Aleksandar D.; Lim, Yah Leng; Taimre, Thomas; Agnew, Gary; Qi, Xiaoqiong; Bertling, Karl; Han, She; Wilson, Stephen J.; Kundu, Iman; Grier, Andrew; Ikonić, Zoran; Valavanis, Alexander; Demić, Aleksandar; Keeley, James; Li, Lianhe H.; Linfield, Edmund H.; Davies, A. Giles; Harrison, Paul; Ferguson, Blake; Walker, Graeme; Prow, Tarl; Indjin, Dragan; Soyer, H. Peter

    2016-11-01

Terahertz (THz) quantum cascade lasers (QCLs) are compact sources of radiation in the 1-5 THz range with significant potential for applications in sensing and imaging. Laser feedback interferometry (LFI) with THz QCLs is a technique that exploits the sensitivity of the QCL to radiation reflected back into the laser cavity from an external target. We discuss modelling techniques, explore the applications of LFI in biological tissue imaging, and show that the confocal nature of the QCL in LFI systems, with their innate capacity for depth sectioning, makes them suitable for skin diagnostics with the well-known advantages of more conventional confocal microscopes. Discrimination of neoplastic from healthy tissue with a THz LFI-based system is demonstrated in a transgenic mouse model of melanoma.

  5. Stopped pion absorption by medium and heavy nuclei in the cascade-exciton model

    International Nuclear Information System (INIS)

    Mashnik, S.G.

    1992-03-01

A large variety of experimental data on stopped negative pion absorption by nuclei from C to Bi (energy spectra and multiplicities of n, p, d, t, 3He and 4He; angular correlations of two secondary particles; spectra of the energy release in the 'live' 28Si target on recording protons, deuterons and tritons in the energy ranges 40-70 MeV, 30-60 MeV and 30-50 MeV, respectively; isotope yields; momentum and angular momentum distributions of residual nuclei) are analyzed within the framework of the cascade-exciton model of nuclear reactions. Comparison is made with other up-to-date models of the process. The contributions of different pion absorption mechanisms and the relative role of different particle production mechanisms in these reactions are discussed. (author). 59 refs, 13 figs, 4 tabs

  6. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    Science.gov (United States)

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R2) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease of farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
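The nugget effect discussed above is read off the empirical semivariogram; a minimal sketch of its computation, with hypothetical well locations and heads rather than the Beijing data, is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observation wells: 30 locations (km) and groundwater
# levels (m) with a smooth spatial trend plus local noise.
xy = rng.uniform(0, 50, size=(30, 2))
head = 40.0 - 0.3 * xy[:, 0] + rng.normal(0, 0.5, 30)

def empirical_semivariogram(xy, z, bins):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs at distance ~h."""
    i, j = np.triu_indices(len(z), k=1)
    d = np.linalg.norm(xy[i] - xy[j], axis=1)
    sq = 0.5 * (z[i] - z[j]) ** 2
    centers, gamma = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (d >= lo) & (d < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(sq[sel].mean())
    return np.array(centers), np.array(gamma)

h, g = empirical_semivariogram(xy, head, np.linspace(0, 40, 9))
for hc, gc in zip(h, g):
    print(f"lag {hc:5.1f} km  gamma {gc:7.2f}")
```

The semivariance extrapolated to zero lag approximates the nugget; a nugget growing over successive years, as reported in the abstract, signals weakening spatial correlation.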

  7. Separation of a multicomponent mixture by gaseous diffusion: modelization of the enrichment in a capillary - application to a pilot cascade

    International Nuclear Information System (INIS)

    Doneddu, F.

    1982-01-01

Starting from the modelization of gaseous flow in a porous medium (flow in a capillary), we generalize the law of enrichment in an infinite cylindrical capillary, established for a binary isotopic mixture, to a multicomponent mixture. A generalization is given of the notions of separation yield and characteristic pressure classically used for separations of binary isotopic mixtures. We present formulas for diagonalizing the diffusion operator, the modelization of a multistage gaseous diffusion cascade, and a comparison with the experimental results of a drain cascade (N2-SF6-UF6 mixture). [fr]

  8. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. 
In this step, the constellation is divided into two groups by assigning, to information symbols, phase values
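The core idea, replacing the stepped PSK phase waveform with a smooth trajectory so the envelope stays constant, can be illustrated with a generic raised-cosine phase interpolator. This is a simplified stand-in, not the actual SPIK phase function:

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic sketch (not the actual SPIK algorithm): interpolate the stepped
# QPSK phase waveform with a raised-cosine transition, so the envelope
# stays constant while the phase discontinuities disappear.
symbols = rng.integers(0, 4, size=16)            # QPSK symbol indices
phases = np.pi / 4 + symbols * np.pi / 2         # constellation phases

def smooth_phase(phases, sps=32):
    """Raised-cosine interpolation between consecutive symbol phases."""
    s = 0.5 * (1 - np.cos(np.pi * np.arange(sps) / sps))  # 0 -> 1, smooth
    out = []
    for a, b in zip(phases[:-1], phases[1:]):
        d = np.angle(np.exp(1j * (b - a)))       # shortest-path phase step
        out.append(a + d * s)
    return np.concatenate(out)

phi = smooth_phase(phases)
signal = np.exp(1j * phi)                        # constant-envelope signal
print("envelope spread:", float(np.ptp(np.abs(signal))))
```

Because all the shaping happens in the phase, the envelope is exactly constant, so a nonlinear power amplifier introduces no spectral regrowth; this is the property that pulse-shaping the I/Q waveform directly (e.g., root-raised-cosine filtering) gives up.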

  9. Interpolating string field theories

    International Nuclear Information System (INIS)

    Zwiebach, B.

    1992-01-01

    This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles

  10. Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades

    Science.gov (United States)

    Kenyon, Scott J.; Bromley, Benjamin C.

    2017-04-01

We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, r_max, the size of the largest object, changes with time as r_max ∝ t^(-γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(-(γ+1)) and L_d ∝ t^(-(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
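Rendered in clean notation, the scaling laws quoted above read:

```latex
\begin{aligned}
r_{\max}(t) &\propto t^{-\gamma}, \qquad \gamma \approx 0.1\text{--}0.2,\\
M_d(t) &\propto t^{-(\gamma + 1)}, \qquad
L_d(t) \propto t^{-(\gamma/2 + 1)}.
\end{aligned}
```

Setting γ = 0 (a fixed largest body) recovers the classical M_d ∝ t^(-1) decline, so the extra factor t^(-γ) from the shrinking largest object is exactly what steepens the mass and luminosity evolution.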

  11. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Energy Technology Data Exchange (ETDEWEB)

    Heikkinen, Aatos; Hektor, Andi; Karimaki, Veikko; Linden, Tomas [Helsinki Univ., Institute of Physics (Finland)]

    2003-07-01

We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation tool-kit Geant4 with a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  12. Power scaling and experimentally fitted model for broad area quantum cascade lasers in continuous wave operation

    Science.gov (United States)

    Suttinger, Matthew; Go, Rowel; Figueiredo, Pedro; Todi, Ankesh; Shu, Hong; Leshin, Jason; Lyakh, Arkadiy

    2018-01-01

Experimental and model results for 15-stage broad area quantum cascade lasers (QCLs) are presented. Continuous wave (CW) power scaling from 1.62 to 2.34 W has been experimentally demonstrated for 3.15-mm long, high reflection-coated QCLs for an active region width increased from 10 to 20 μm. A semiempirical model for broad area devices operating in CW mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and sublinearity of the pulsed power versus current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall-plug efficiency can be achieved from 3.15 mm×25 μm devices with 21 stages of the same design, but half the doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, pulsed rollover current density of 6 kA/cm2, and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that the power level can be increased to ~4.5 W from 3.15 mm×31 μm devices with the baseline configuration with T0 increased from 140 K for the present design to 250 K.

  13. Model for transport and reaction of defects and carriers within displacement cascades in gallium arsenide

    International Nuclear Information System (INIS)

    Wampler, William R.; Myers, Samuel M.

    2015-01-01

A model is presented for recombination of charge carriers at evolving displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with the details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers, and defects within a representative spherically symmetric cluster of defects. The initial radial defect profiles within the cluster were determined through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to displacement damage from energetic particle irradiation.

  14. Image Interpolation with Contour Stencils

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.
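As a concrete baseline for the linear-method artifacts mentioned above, here is a plain bilinear upscaler, a generic textbook method rather than the contour-stencil scheme of the paper:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Plain bilinear interpolation -- the linear baseline whose blurring
    and jagged-edge artifacts motivate edge-adaptive methods."""
    h, w = img.shape
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[np.ix_(y0, x0)]          # four neighbors of each output pixel
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
            + bl * fy * (1 - fx) + br * fy * fx)

# A horizontal step edge: the upscale turns it into a smooth ramp,
# i.e., the edge blurring that edge-adaptive interpolation tries to avoid.
img = np.array([[0.0, 0.0], [1.0, 1.0]])
print(bilinear_upscale(img, 2)[:, 0])
```

The printed column shows the sharp 0-to-1 step replaced by intermediate values; an edge-adaptive method would instead interpolate along the estimated contour to keep the edge sharp.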

  15. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  16. Pixel Interpolation Methods

    OpenAIRE

    Mintěl, Tomáš

    2009-01-01

This master's thesis deals with the acceleration of pixel interpolation methods using the GPU and the NVIDIA® CUDA™ architecture. The graphical output is represented by a demonstration application for transforming images or video using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video processing.

  17. Micro-angiography for neuro-vascular imaging. II. Cascade model analysis

    International Nuclear Information System (INIS)

    Ganguly, Arundhuti; Rudin, Stephen; Bednarek, Daniel R.; Hoffmann, Kenneth R.

    2003-01-01

A micro-angiographic detector was designed and its performance was previously tested to evaluate its feasibility as an improvement over current x-ray detectors for neuro-interventional imaging. The detector was shown to have a modulation transfer function value of about 2% at the Nyquist frequency of 10 cycles/mm and a zero-frequency detective quantum efficiency [DQE(0)] value of about 55%. An assessment of the system was required to evaluate whether the current system was performing at its full potential and to determine if any of its components could be optimized to further improve the output. For this purpose, the parallel cascade theory was used to analyze the performance of the detector under neuro-angiographic conditions by studying the output at the various stages in the imaging chain. A simple model for the spread of light in the CsI(Tl) entrance phosphor was developed and the resolution degradation due to K-fluorescence absorption was calculated. The total gain of the system was found to result in 21 electrons (rms) detected at the charge-coupled device per absorbed x-ray photon. The gain and the spread of quanta in the imaging chain were used to calculate the DQE theoretically using the parallel cascade model. The results of the model-based calculations matched fairly well with the experimental data previously obtained. This model was then used to optimize the phosphor thickness for the detector. The results showed that the area under the DQE curve had a maximum value at 150 μm of CsI(Tl), though when weighted by the squared signal in frequency space of a 100-μm-diameter iodinated vessel, the integral DQE reached a maximum at 250 μm of CsI(Tl). Further, possible locations for gain increase in the imaging chain were determined, and the output of the improved system was simulated. Thus a theoretical analysis of the micro-angiographic detector was performed to better assess its potential.

  18. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by

  19. Market disruption, cascading effects, and economic recovery: a life-cycle hypothesis model.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.

    2004-11-01

    This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.

  20. Influence maximization in social networks under an independent cascade-based model

    Science.gov (United States)

    Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan

    2016-02-01

The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. As more users became involved in the discussions, they balanced their own opinions against those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. The corresponding influential users with maximum positive influence were then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, indicating better performance than the baseline methods.
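A minimal independent cascade simulation conveys the diffusion process underlying such models. The graph, the edge activation probability p, and the Monte Carlo spread estimate are illustrative assumptions, and the positive/negative opinion balancing of IMIC-OC is omitted:

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """Simulate one independent cascade; return the set of activated nodes.
    Each newly activated node gets one chance to activate each neighbor
    with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(graph, seeds, p, runs=1000):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(42)
    return sum(len(independent_cascade(graph, seeds, p, rng))
               for _ in range(runs)) / runs

# Tiny example graph as adjacency lists (hypothetical).
g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print("expected spread from seed 0:", expected_spread(g, {0}, p=0.5))
```

Influence maximization then amounts to choosing the seed set that maximizes this expected spread, typically with a greedy algorithm over repeated simulations.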

  1. Technique for image interpolation using polynomial transforms

    NARCIS (Netherlands)

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

    1993-01-01

    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

  2. A disposition of interpolation techniques

    NARCIS (Netherlands)

    Knotters, M.; Heuvelink, G.B.M.

    2010-01-01

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

  3. Experiment-based modelling of hardening and localized plasticity in metals irradiated under cascade damage conditions

    International Nuclear Information System (INIS)

    Singh, B.N.; Ghoniem, N.M.; Trinkaus, H.

    2002-01-01

    The analysis of the available experimental observations shows that the occurrence of a sudden yield drop and the associated plastic flow localization are the major concerns regarding the performance and lifetime of materials exposed to fission or fusion neutrons. In the light of the known mechanical properties and microstructures of the as-irradiated and irradiated and deformed materials, it has been argued that the increase in the upper yield stress, the sudden yield drop and the initiation of plastic flow localization, can be rationalized in terms of the cascade induced source hardening (CISH) model. Various aspects of the model (main assumptions and predictions) have been investigated using analytical calculations, 3-D dislocation dynamics and molecular dynamics simulations. The main results and conclusions are briefly summarized. Finally, it is pointed out that even though the formation of cleared channels may be rationalized in terms of climb-controlled glide of the source dislocation, a number of problems regarding the initiation and the evolution of these channels remain unsolved

  5. Open standards for cascade models for RHIC: Volume 1. Proceedings of RIKEN BNL Research Center workshop

    International Nuclear Information System (INIS)

    1997-01-01

It is widely recognized that cascade models are potentially effective and powerful tools for interpreting and predicting multi-particle observables in heavy ion physics. However, the lack of common standards, documentation, version control, and accessibility has made it difficult to apply objective scientific criteria for evaluating the many physical and algorithmic assumptions, or even to reproduce some published results. The first RIKEN Research Center workshop was proposed by Yang Pang to address this problem by establishing open standards for original codes for applications to nuclear collisions at RHIC energies. The aims of this first workshop are: (1) to prepare a WWW depository site for original source codes and detailed documentation with examples; (2) to develop and perform standardized tests for the models, such as Lorentz invariance, kinetic theory comparisons, and thermodynamic simulations; (3) to publish a compilation of results of the above work in a journal, e.g., "Heavy Ion Physics"; and (4) to establish a policy statement on a set of minimal requirements for inclusion in the OSCAR-WWW depository.

  6. Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model

    Science.gov (United States)

    Batac, Rene; Longjas, Anthony; Monterola, Christopher

    2012-02-01

    Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly-driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterizations reveal that avalanche size distributions evolve from a power-law p(S) ∝ S^(-1.3) for the first layer to gamma distributions p(S) ∝ S^α exp(-S/S_0) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics are found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
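
    The layered-sandpile idea above is easy to prototype. Below is a minimal sketch (not the authors' exact model) of a two-layer Bak-Tang-Wiesenfeld cascade in which grains toppling off the open boundary of the uniformly driven first layer drive the second layer; the grid size, toppling threshold of 4, and drive schedule are all illustrative assumptions.

```python
import random

def topple(grid, n):
    """Relax an n x n BTW sandpile in place.

    Returns (avalanche_size, grains_lost): the number of topplings and
    the number of grains that fell off the open boundary.
    """
    size, lost = 0, 0
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:          # may have been relaxed already
            continue
        grid[i][j] -= 4
        size += 1
        if grid[i][j] >= 4:         # tall sites topple more than once
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
            else:
                lost += 1            # boundary outflow drives the next layer
    return size, lost

def cascade(steps=2000, n=12, seed=1):
    """Drive layer 1 uniformly; feed its boundary outflow into layer 2."""
    rng = random.Random(seed)
    layer1 = [[0] * n for _ in range(n)]
    layer2 = [[0] * n for _ in range(n)]
    sizes1, sizes2 = [], []
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        layer1[i][j] += 1                    # uniform single-grain drive
        s1, out = topple(layer1, n)
        sizes1.append(s1)
        for _ in range(out):                 # layer 2 driven only by outflow
            i, j = rng.randrange(n), rng.randrange(n)
            layer2[i][j] += 1
        s2, _ = topple(layer2, n)
        sizes2.append(s2)
    return sizes1, sizes2
```

    Histogramming `sizes1` versus `sizes2` is then enough to compare the first-layer and second-layer avalanche statistics.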

  7. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Analysis of Factors Affecting Its Performance

    Science.gov (United States)

    Perry, Bruce A.; Anderson, Molly S.

    2015-01-01

    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station Water Processor Assembly to form a complete water recovery system for future missions. A preliminary chemical process simulation was previously developed using Aspen Custom Modeler® (ACM), but it could not simulate thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. This paper describes modifications to the ACM simulation of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version can be used to model thermal startup and predicts the total energy consumption of the CDS. The simulation has been validated for both NaCl solution and pretreated urine feeds and no longer requires retuning when operating parameters change. The simulation was also used to predict how internal processes and operating conditions of the CDS affect its performance. In particular, it is shown that the coefficient of performance of the thermoelectric heat pump used to provide heating and cooling for the CDS is the largest factor in determining CDS efficiency. Intrastage heat transfer affects CDS performance indirectly through effects on the coefficient of performance.
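
    As a back-of-the-envelope illustration of why the heat pump's coefficient of performance (COP) dominates distiller efficiency: if the heat pump must supply roughly the latent heat of the evaporated water, the electrical energy per kilogram of distillate scales as h_fg/COP. The sketch below is a toy estimate, not the ACM model; the latent-heat value and the neglect of heat recovery are simplifying assumptions.

```python
H_FG = 2.257e6  # J/kg, approximate latent heat of vaporization of water

def distillate_specific_energy_wh_per_kg(cop):
    """Toy estimate: electrical energy per kg of distillate if the heat
    pump must move the latent heat of the evaporated water (all other
    terms neglected). Energy scales inversely with the COP."""
    joules_per_kg = H_FG / cop
    return joules_per_kg / 3600.0   # convert J/kg to Wh/kg
```

    Doubling the COP halves this estimate, which is the intuition behind the abstract's conclusion that the COP is the largest factor in CDS efficiency.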

  8. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
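
    The two-stage, direction-guided filtering described above can be illustrated with a much-simplified sketch. Instead of diffused CDMs, the toy 2x upscaler below picks, for each missing diagonal pixel, the diagonal with the smaller directional variation (stage 1), falling back to isotropic averaging when no direction dominates; row and column pixels are then filled from their four neighbours (stage 2). The direct comparison of directional variations and the simple averaging filters are illustrative simplifications, not the paper's method.

```python
def upscale2x_edge_guided(img):
    """Toy edge-guided 2x upscaling of a grayscale image (list of lists).

    Known pixels go to even coordinates of the output. Stage 1 fills the
    diagonal (odd, odd) pixels along the lower-variation diagonal; stage 2
    fills the remaining row/column pixels isotropically.
    """
    h, w = len(img), len(img[0])
    H, W = 2 * h - 1, 2 * w - 1
    out = [[0.0] * W for _ in range(H)]
    for i in range(h):
        for j in range(w):
            out[2 * i][2 * j] = float(img[i][j])
    # stage 1: diagonal pixels, 1-D filtering along the weaker-variation axis
    for i in range(1, H, 2):
        for j in range(1, W, 2):
            nw, se = out[i - 1][j - 1], out[i + 1][j + 1]
            ne, sw = out[i - 1][j + 1], out[i + 1][j - 1]
            dv45, dv135 = abs(ne - sw), abs(nw - se)
            if dv45 < dv135:              # edge along 135°: filter along 45°
                out[i][j] = (ne + sw) / 2
            elif dv135 < dv45:            # edge along 45°: filter along 135°
                out[i][j] = (nw + se) / 2
            else:                          # no dominant direction: isotropic
                out[i][j] = (nw + ne + sw + se) / 4
    # stage 2: row/column pixels from their (already known) 4-neighbours
    for i in range(H):
        for j in range(W):
            if (i + j) % 2 == 1:
                nbrs = [out[i + di][j + dj]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < H and 0 <= j + dj < W]
                out[i][j] = sum(nbrs) / len(nbrs)
    return out
```

    On a patch containing a diagonal edge, the directional branch keeps the edge sharp where plain bilinear averaging would blur it.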

  9. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to its high-resolution counterpart. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains results similar to or better than NARM while requiring only 0.3% of its computation time.

  10. Improved Finite-Control-Set Model Predictive Control for Cascaded H-Bridge Inverters

    Directory of Open Access Journals (Sweden)

    Roh Chan

    2018-02-01

    Full Text Available In multilevel cascaded H-bridge (CHB) inverters, the number of voltage vectors generated by the inverter quickly increases with increasing voltage level. However, because the sampling period is short, it is difficult to consider all the vectors as the voltage level increases. This paper proposes a model predictive control algorithm with reduced computational complexity and fast dynamic response for CHB inverters. The proposed method presents a robust approach to classify the next step as a steady or transient state by comparing the optimal voltage vector at the present step and the reference voltage vector at the next step. During steady state, only the optimal vector at the present step and its adjacent vectors are considered as a candidate-vector subset. On the other hand, this paper defines a new candidate-vector subset for the transient state, which consists of more vectors than the subset used for the steady state, for fast dynamic response; however, it contains fewer than all the possible vectors generated by the CHB inverter, for calculation simplicity. In conclusion, the proposed method can reduce the computational complexity without significantly deteriorating the dynamic response.
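
    The steady/transient candidate-subset idea can be sketched in a few lines for a toy single-phase inverter whose voltage vectors are just its output levels. The level set, the adjacency radius, and the transient-detection margin below are hypothetical parameters, not values from the paper.

```python
def fcs_mpc_step(v_levels, v_opt_prev, v_ref_next, adj=1, transient_margin=1.5):
    """One step of a reduced-set finite-control-set MPC (toy sketch).

    In steady state only the previous optimum and its adjacent levels are
    evaluated; a detected transient enlarges the candidate set to all
    levels for fast dynamic response.
    """
    transient = abs(v_ref_next - v_opt_prev) > transient_margin
    if transient:
        candidates = v_levels                           # wider subset
    else:
        candidates = [v for v in v_levels
                      if abs(v - v_opt_prev) <= adj]    # optimum + adjacent
    # one-step cost: distance of each candidate to the reference
    best = min(candidates, key=lambda v: abs(v_ref_next - v))
    return best, len(candidates), transient
```

    In steady state only 3 of the 5 levels of this toy inverter are evaluated; a large reference step triggers the full set, trading computation for response speed.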

  11. Downscale cascades in tracer transport test cases: an intercomparison of the dynamical cores in the Community Atmosphere Model CAM5

    Directory of Open Access Journals (Sweden)

    J. Kent

    2012-12-01

    Full Text Available The accurate modeling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper aims to investigate the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse-resolution grids to ensure the downscale cascade of tracer variance. These results are compared with a high-resolution reference solution, which is simulated at a resolution fine enough to resolve the tracer during the test. The fourth test has two separate flow cells, and is designed so that any tracer in the western hemisphere should not pass into the eastern hemisphere. This tests whether the diffusion in transport schemes, whether explicit in the form of hyper-diffusion terms or implicit through monotonic limiters, introduces unphysical mixing.

    An intercomparison of three of the dynamical cores of the National Center for Atmospheric Research's Community Atmosphere Model version 5 is performed. The results show that the finite-volume (CAM-FV) and spectral element (CAM-SE) dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL). Each scheme tested produces unphysical mass in the eastern hemisphere of the separate-cells test.

  12. Ericksen number and Deborah number cascade predictions of a model for liquid crystalline polymers for simple shear flow

    Science.gov (United States)

    Klein, D. Harley; Leal, L. Gary; García-Cervera, Carlos J.; Ceniceros, Hector D.

    2007-02-01

    We consider the behavior of the Doi-Marrucci-Greco (DMG) model for nematic liquid crystalline polymers in planar shear flow. We found the DMG model to exhibit dynamics in both qualitative and quantitative agreement with experimental observations reported by Larson and Mead [Liq. Cryst. 15, 151 (1993)] for the Ericksen number and Deborah number cascades. For increasing shear rates within the Ericksen number cascade, the DMG model displays three distinct regimes: stable simple shear, stable roll cells, and irregular structure accompanied by disclination formation. In accordance with experimental observations, the model predicts both ±1 and ±1/2 disclinations. Although ±1 defects form via the ridge-splitting mechanism first identified by Feng, Tao, and Leal [J. Fluid Mech. 449, 179 (2001)], a new mechanism is identified for the formation of ±1/2 defects. Within the Deborah number cascade, with increasing Deborah number, the DMG model exhibits a streamwise banded texture, in the absence of disclinations and roll cells, followed by a monodomain wherein the mean orientation lies within the shear plane throughout the domain.

  13. Interpolation for de-Dopplerisation

    Science.gov (United States)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
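
    The kernel-spectrum metric advocated above is easy to compute for the simplest cases. For uniform sampling, the triangle (linear-interpolation) kernel has Fourier transform sinc²(f) and the cubic B-spline kernel sinc⁴(f), with sinc(f) = sin(πf)/(πf), so the B-spline's alias lobes fall off much faster. The sketch below compares the two at the first alias lobe; note that the raw B-spline kernel only interpolates exactly after the pre-filtering step mentioned above.

```python
import math

def sinc(f):
    """Normalized sinc: sin(pi f) / (pi f)."""
    return 1.0 if f == 0 else math.sin(math.pi * f) / (math.pi * f)

def linear_kernel_ft(f):
    """Fourier transform of the triangle (linear-interpolation) kernel."""
    return sinc(f) ** 2

def bspline3_kernel_ft(f):
    """Fourier transform of the cubic B-spline kernel (before the
    pre-filter that makes B-spline interpolation exact at the samples)."""
    return sinc(f) ** 4

# An ideal interpolator has FT = 1 for |f| < 0.5 and 0 outside. The cubic
# B-spline suppresses the first alias lobe (near f = 1.25) far more
# strongly than linear interpolation:
alias_linear = linear_kernel_ft(1.25)
alias_bspline = bspline3_kernel_ft(1.25)
```

    Evaluating the two functions over a grid of frequencies reproduces, in miniature, the kind of kernel-spectrum comparison the abstract describes.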

  14. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Janusz Konrad

    2009-01-01

    Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, on both synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
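
    The three-step structure (visibility classification, pair selection, adaptive blending) can be caricatured in a few lines. The sketch below is a drastic simplification under assumed inputs: each output pixel already carries the intensities of one left/right correspondence plus a visibility label, and occluded pixels simply fall back to the single view in which they remain visible.

```python
def interpolate_view(pairs, visibility, t=0.5):
    """Visibility-label-driven pixel interpolation (toy sketch).

    pairs      : list of (left_intensity, right_intensity) correspondences
    visibility : per-pixel label: "both", "left-only", or "right-only"
    t          : position of the virtual view between left (0) and right (1)
    """
    out = []
    for (left, right), vis in zip(pairs, visibility):
        if vis == "both":                       # visible in both: blend
            out.append((1 - t) * left + t * right)
        elif vis == "left-only":                # occluded on the right
            out.append(float(left))
        else:                                   # "right-only"
            out.append(float(right))
    return out
```

    The fallback branches are what prevent wrong intensities from bleeding into disoccluded regions, which is where occlusion-unaware blending fails.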

  15. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Ince Serdar

    2008-01-01

    Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, on both synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  16. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Integration with Models of Other Water Recovery Subsystems

    Science.gov (United States)

    Perry, Bruce; Anderson, Molly

    2015-01-01

    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.

  17. The Case for Using the Spherical Model to Calculate the Interpolated Points in the Connectivity Software Deployment Module

    National Research Council Canada - National Science Library

    Still, G. W; Nealon, James F

    2008-01-01

    .... The developers must decide which model of the earth to use as the basis of the calculations. Thus, a comparison was made between the National Geodetic Survey-provided computer programs Forward and Inverse based on the WGS84 Oblate Spheroid (OS...

  18. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data.
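
    The monotonicity-preserving idea behind BIMOND is easiest to see in one dimension. The sketch below implements 1-D monotone piecewise-cubic Hermite interpolation with a common slope limiter (harmonic-mean interior slopes, zeroed at local extrema); it is an illustrative analogue of the bivariate scheme, not the BIMOND code itself.

```python
def monotone_hermite(x, y, xq):
    """1-D monotone piecewise-cubic Hermite interpolation.

    Interior slopes use a harmonic mean of adjacent secant slopes, zeroed
    at local extrema, so monotone data produce a monotone interpolant
    (no overshoot on flat segments).
    """
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    d = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0:             # local extremum: flat slope
            m[i] = 0.0
        else:
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
    out = []
    for q in xq:
        i = n - 2                            # locate the containing interval
        for k in range(n - 1):
            if q <= x[k + 1]:
                i = k
                break
        t = (q - x[i]) / h[i]
        h00 = (1 + 2 * t) * (1 - t) ** 2     # cubic Hermite basis functions
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        out.append(h00 * y[i] + h10 * h[i] * m[i]
                   + h01 * y[i + 1] + h11 * h[i] * m[i + 1])
    return out
```

    BIMOND applies the same limiting idea in both directions of a rectangular mesh, which is why it can only guarantee monotonicity in one independent variable at a time.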

  19. Spectral Cascade-Transport Turbulence Model Development for Two-Phase Flows

    Science.gov (United States)

    Brown, Cameron Scott

    Turbulence modeling remains a challenging problem in nuclear reactor applications, particularly for the turbulent multiphase flow conditions in nuclear reactor subchannels. Understanding the fundamental physics of turbulent multiphase flows is crucial for the improvement and further development of multiphase flow models used in reactor operation and safety calculations. Reactor calculations with the Reynolds-averaged Navier-Stokes (RANS) approach continue to become viable tools for reactor analysis. The on-going increase in available computational resources allows for turbulence models that are more complex than the traditional two-equation models to become practical choices for nuclear reactor computational fluid dynamics (CFD) and multiphase computational fluid dynamics (M-CFD) simulations. Similarly, increased computational capabilities continue to allow for higher Reynolds numbers and more complex geometries to be evaluated using direct numerical simulation (DNS), thus providing more validation and verification data for turbulence model development. Spectral turbulence models are a promising approach to M-CFD simulations. These models resolve mean flow parameters as well as the turbulent kinetic energy spectrum, reproducing more physical details of the turbulence than traditional two-equation type models. Previously, work performed by other researchers on a spectral cascade-transport model has shown that the model behaves well for single-phase and bubbly two-phase decay of isotropic turbulence, single- and two-phase uniform shear flow, and single-phase flow in a channel without resolving the near-wall boundary layer for relatively low Reynolds number. Spectral models are great candidates for multiphase RANS modeling since bubble source terms can be modeled as contributions to specific turbulence scales. This work focuses on the improvement and further development of the spectral cascade-transport model (SCTM) to become a three-dimensional (3D) turbulence model for use in M

  20. The research on NURBS adaptive interpolation technology

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

    2017-04-01

    To address the problems of NURBS interpolation technology, such as long interpolation times, complicated calculation, and the difficulty of controlling NURBS curve step error, this paper proposes and simulates an adaptive interpolation algorithm for NURBS curves. The adaptive interpolator computes successive curve points (xi, yi, zi). Simulation results show that the algorithm is correct and that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
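
    A common way to make the parameter step adaptive is to bound the chord error of each interpolated segment. The sketch below does this for a rational quadratic Bezier arc (a one-segment NURBS): the step du is halved until the curve's parametric midpoint lies within tolerance of the chord midpoint. The tolerance, feed step, and circular-arc test case are illustrative choices, not values from the paper.

```python
import math

def rational_bezier(u, pts, w):
    """Point on a rational quadratic Bezier curve (a one-segment NURBS)."""
    b = [(1 - u) ** 2, 2 * u * (1 - u), u ** 2]
    den = sum(b[k] * w[k] for k in range(3))
    return tuple(sum(b[k] * w[k] * pts[k][d] for k in range(3)) / den
                 for d in (0, 1))

def adaptive_interpolate(pts, w, tol=1e-4, feed=0.05):
    """Adaptive parameter stepping with a simple chord-error bound.

    The step du is halved until the chord midpoint is within tol of the
    curve's parametric midpoint, so flat regions take large steps and
    curved regions take small ones.
    """
    u, out = 0.0, [rational_bezier(0.0, pts, w)]
    while u < 1.0:
        du = min(feed, 1.0 - u)
        while du > 1e-6:
            p0 = rational_bezier(u, pts, w)
            p1 = rational_bezier(u + du, pts, w)
            pm = rational_bezier(u + du / 2.0, pts, w)
            cm = ((p0[0] + p1[0]) / 2.0, (p0[1] + p1[1]) / 2.0)
            if math.hypot(pm[0] - cm[0], pm[1] - cm[1]) <= tol:
                break                       # chord error acceptable
            du /= 2.0                       # shrink step and retry
        u += du
        out.append(rational_bezier(u, pts, w))
    return out
```

    With control points (1,0), (1,1), (0,1) and weights (1, cos 45°, 1), the curve is an exact quarter circle, so every interpolated point should lie on the unit circle.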

  1. Testing the generality of a trophic-cascade model for plague

    Science.gov (United States)

    Collinge, S.K.; Johnson, W.C.; Ray, C.; Matchett, R.; Grensten, J.; Cully, J.F.; Gage, K.L.; Kosoy, M.Y.; Loye, J.E.; Martin, A.P.

    2005-01-01

    Climate may affect the dynamics of infectious diseases by shifting pathogen, vector, or host species abundance, population dynamics, or community interactions. Black-tailed prairie dogs (Cynomys ludovicianus) are highly susceptible to plague, yet little is known about factors that influence the dynamics of plague epizootics in prairie dogs. We investigated temporal patterns of plague occurrence in black-tailed prairie dogs to assess the generality of links between climate and plague occurrence found in previous analyses of human plague cases. We examined long-term data on climate and plague occurrence in prairie dog colonies within two study areas. Multiple regression analyses revealed that plague occurrence in prairie dogs was not associated with climatic variables in our Colorado study area. In contrast, plague occurrence was strongly associated with climatic variables in our Montana study area. The models with most support included a positive association with precipitation in April-July of the previous year, in addition to a positive association with the number of "warm" days and a negative association with the number of "hot" days in the same year as reported plague events. We conclude that the timing and magnitude of precipitation and temperature may affect plague occurrence in some geographic areas. The best climatic predictors of plague occurrence in prairie dogs within our Montana study area are quite similar to the best climatic predictors of human plague cases in the southwestern United States. This correspondence across regions and species suggests support for a (temperature-modulated) trophic-cascade model for plague, including climatic effects on rodent abundance, flea abundance, and pathogen transmission, at least in regions that experience strong climatic signals. © 2005 EcoHealth Journal Consortium.

  2. Intranuclear cascade+percolation+evaporation model applied to the {sup 12}C+{sup 197}Au system at 1 GeV/nucleon

    Energy Technology Data Exchange (ETDEWEB)

    Volant, C.; Turzo, K.; Trautmann, W.; Auger, G.; Begemann-Blaich, M.-L.; Bittiger, R.; Borderie, B.; Botvina, A.S.; Bougault, R.; Bouriquet, B.; Charvet, J.-L.; Chbihi, A.; Dayras, R.; Dore, D.; Durand, D.; Frankland, J.D.; Galichet, E.; Gourio, D.; Guinet, D.; Hudan, S.; Imme, G.; Lautesse, Ph.; Lavaud, F.; Le Fevre, A.; Lopez, O.; Lukasik, J.; Lynen, U.; Mueller, W.F.J.; Nalpas, L.; Orth, H.; Plagnol, E.; Raciti, G.; Rosato, E.; Saija, A.; Schwarz, C.; Seidel, W.; Sfienti, C.; Steckmeyer, J.C.; Tamain, B.; Trzcinski, A.; Vient, E.; Vigilante, M.; Zwieglinski, B

    2004-04-05

    The nucleus-nucleus Liege intranuclear-cascade+percolation+evaporation model has been applied to the {sup 12}C+{sup 197}Au data measured by the INDRA-ALADIN collaboration at GSI. After the intranuclear cascade stage, the data are better reproduced when using the Statistical Multifragmentation Model as an afterburner. Further checks of the model are performed on data from the EOS and KAOS collaborations.

  3. Interpolation in Spaces of Functions

    Directory of Open Access Journals (Sweden)

    K. Mosaleheh

    2006-03-01

    Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces.

  4. A cascade model of mentorship for frontline health workers in rural health facilities in Eastern Uganda: processes, achievements and lessons.

    Science.gov (United States)

    Ajeani, Judith; Mangwi Ayiasi, Richard; Tetui, Moses; Ekirapa-Kiracho, Elizabeth; Namazzi, Gertrude; Muhumuza Kananura, Ronald; Namusoke Kiwanuka, Suzanne; Beyeza-Kashesya, Jolly

    2017-08-01

    There is increasing demand for trainers to shift from traditional didactic training to innovative approaches that are more results-oriented. Mentorship is one such approach that could bridge the clinical knowledge gap among health workers. This paper describes the experiences of an attempt to improve health-worker performance in maternal and newborn health in three rural districts through a mentoring process using the cascade model. The paper further highlights achievements and lessons learnt during implementation of the cascade model. The cascade model started with initial training of health workers from three districts of Pallisa, Kibuku and Kamuli, from where potential local mentors were selected for further training and mentorship by central mentors. These local mentors then went on to conduct mentorship visits supported by the external mentors. The mentorship process concentrated on partograph use, newborn resuscitation, prevention and management of Post-Partum Haemorrhage (PPH), including active management of the third stage of labour, preeclampsia management and management of the sick newborn. Data for this paper were obtained from key informant interviews with district-level managers and local mentors. Mentorship improved several aspects of health-care delivery, ranging from improved competencies and responsiveness to emergencies and health-worker professionalism. In addition, due to better district leadership for Maternal and Newborn Health (MNH), there were improved supplies/medicine availability, team work and innovative local problem-solving approaches. Health workers were ultimately empowered to perform better. The study demonstrated that it is possible to improve the competencies of frontline health workers through performance enhancement for MNH services using locally built capacity in clinical mentorship for Emergency Obstetric and Newborn Care (EmONC). The cascade mentoring process needed strong external mentorship support at the start to ensure improved

  5. Cascading water underneath Wilkes Land, East Antarctic ice sheet, observed using altimetry and digital elevation models

    Science.gov (United States)

    Flament, T.; Berthier, E.; Rémy, F.

    2014-04-01

    We describe a major subglacial lake drainage close to the ice divide in Wilkes Land, East Antarctica, and the subsequent cascading of water underneath the ice sheet toward the coast. To analyse the event, we combined altimetry data from several sources and subglacial topography. We estimated the total volume of water that drained from Lake CookE2 by differencing digital elevation models (DEM) derived from ASTER and SPOT5 stereo imagery acquired in January 2006 and February 2012. At 5.2 ± 1.5 km³, this is the largest single subglacial drainage event reported so far in Antarctica. Elevation differences between ICESat laser altimetry spanning 2003-2009 and the SPOT5 DEM indicate that the discharge started in November 2006 and lasted approximately 2 years. A 13 m uplift of the surface, corresponding to a refilling of about 0.6 ± 0.3 km³, was observed between the end of the discharge in October 2008 and February 2012. Using the 35-day temporal resolution of Envisat radar altimetry, we monitored the subsequent filling and drainage of connected subglacial lakes located downstream of CookE2. The total volume of water traveling within the theoretical 500-km-long flow paths computed with the BEDMAP2 data set is similar to the volume that drained from Lake CookE2, and our observations suggest that most of the water released from Lake CookE2 did not reach the coast but remained trapped underneath the ice sheet. Our study illustrates how combining multiple remote sensing techniques allows monitoring of the timing and magnitude of subglacial water flow beneath the East Antarctic ice sheet.
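
    The DEM-differencing volume estimate reported above amounts to summing elevation losses over the drawdown area times the pixel footprint. The sketch below shows the arithmetic on a hypothetical 2 x 2 grid; real processing would also require DEM co-registration and error propagation, which are omitted here.

```python
def drained_volume(dem_before, dem_after, pixel_size_m):
    """Volume loss (m^3) from two co-registered DEMs (lists of rows):
    sum the elevation lowering over the drawdown area multiplied by the
    pixel footprint. Only surface lowering is counted."""
    cell_area = pixel_size_m * pixel_size_m
    volume = 0.0
    for row_b, row_a in zip(dem_before, dem_after):
        for hb, ha in zip(row_b, row_a):
            dh = hb - ha
            if dh > 0:               # pixel subsided between the two dates
                volume += dh * cell_area
    return volume
```

    Summing the positive differences instead (uplift) gives the refilling estimate quoted in the abstract.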

  6. Correcting atmospheric effects on InSAR with MERIS water vapour data and elevation-dependent interpolation model

    KAUST Repository

    Li, Z. W.; Xu, Wenbin; Feng, G. C.; Hu, J.; Wang, C. C.; Ding, X. L.; Zhu, J. J.

    2012-01-01

    The propagation delay when radar signals travel through the troposphere has been one of the major limitations for the applications of high precision repeat-pass Interferometric Synthetic Aperture Radar (InSAR). In this paper, we first present an elevation-dependent atmospheric correction model for Advanced Synthetic Aperture Radar (ASAR, the instrument aboard the ENVISAT satellite) interferograms with Medium Resolution Imaging Spectrometer (MERIS) integrated water vapour (IWV) data. Then, using four ASAR interferometric pairs over Southern California as examples, we conduct the atmospheric correction experiments with cloud-free MERIS IWV data. The results show that after the correction the rms differences between InSAR and GPS have been reduced by 69.6 per cent, 29 per cent, 31.8 per cent and 23.3 per cent, respectively, for the four selected interferograms, with an average improvement of 38.4 per cent. Most importantly, after the correction, six distinct deformation areas have been identified, that is, Long Beach–Santa Ana Basin, Pomona–Ontario, San Bernardino and Elsinore basin, with the deformation velocities along the radar line-of-sight (LOS) direction ranging from −20 mm yr−1 to −30 mm yr−1 and on average around −25 mm yr−1, and Santa Fe Springs and Wilmington, with a slightly lower deformation rate of about −10 mm yr−1 along LOS. Finally, through the method of stacking, we generate a mean deformation velocity map of Los Angeles over a period of 5 yr. The deformation is quite consistent with the historical deformation of the area. Thus, using cloud-free MERIS IWV data to correct synchronized ASAR interferograms can significantly reduce the atmospheric effects in the interferograms and better capture the ground deformation and other geophysical signals.
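
    The elevation-dependent part of such a correction can be illustrated with a toy linear model: fit delay = a + b * elevation to water-vapour-derived delay samples by least squares, then subtract the modelled delay pixel by pixel. This is a sketch of the general idea only; the paper's model and the MERIS processing chain are considerably more involved.

```python
def fit_elevation_model(elev, delay):
    """Least-squares fit of a toy elevation-dependent delay model
    delay = a + b * elevation, from paired samples."""
    n = len(elev)
    mean_e = sum(elev) / n
    mean_d = sum(delay) / n
    b = (sum((e - mean_e) * (d - mean_d) for e, d in zip(elev, delay))
         / sum((e - mean_e) ** 2 for e in elev))
    a = mean_d - b * mean_e
    return a, b

def correct(interferogram, elev, a, b):
    """Remove the modelled tropospheric delay pixel by pixel."""
    return [p - (a + b * e) for p, e in zip(interferogram, elev)]
```

    On synthetic data generated by a purely linear delay-elevation relation, the fit recovers the coefficients and the corrected residuals vanish; real interferograms retain the deformation signal plus unmodelled turbulence.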

  8. BCDForest: a boosting cascade deep forest model towards the classification of cancer subtypes based on gene expression data.

    Science.gov (United States)

    Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn

    2018-04-11

The classification of cancer subtypes is of great importance to cancer diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks that learns hyper-representations using cascaded ensembles of decision trees, and it has been shown to achieve competitive or even better performance than deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble-diversity challenges when dealing with small-sample-size, high-dimensional biological data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biological datasets; it can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main ways. First, a multi-class-grained scanning method is proposed that trains multiple binary classifiers to encourage ensemble diversity, while the fitting quality of each classifier is taken into account in representation learning. Second, we propose a boosting strategy that emphasizes the more important features in the cascade forests, thereby propagating the benefits of discriminative features across cascade layers to improve classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms state-of-the-art methods in cancer subtype classification. The multi-class-grained scanning and boosting strategies in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model on small-scale data. Our model thus provides a useful approach to the classification of cancer subtypes.

  9. Modeling and Analysis of the Common Mode Voltage in a Cascaded H-Bridge Electronic Power Transformer

    Directory of Open Access Journals (Sweden)

    Yun Yang

    2017-09-01

Full Text Available Electronic power transformers (EPTs) have been identified as emerging intelligent electronic devices in the future smart grid, e.g., the Energy Internet, especially in the application of renewable energy conversion and management. Considering that the EPT is directly connected to the medium-voltage grid, e.g., a 10 kV distribution system, and has a cascaded H-bridge structure, the common mode voltage (CMV) issue will be more complex and severe. The CMV threatens the insulation of the entire EPT device and can even produce common mode current. This paper investigates the generation mechanism and characteristics of the CMV in a cascaded H-bridge EPT (CHB-EPT) under both balanced and fault grid conditions. First, the CHB-EPT system is introduced. Then, a three-phase simplified circuit model of the high-voltage side of the EPT system is presented. Combined with a unipolar modulation strategy and carrier phase-shifting technology, and through rigorous mathematical analysis and derivation, the EPT internal CMV and its characteristics are obtained. Moreover, the influence of the sinusoidal pulse width modulation dead time is considered and discussed based on analytical calculation. Finally, simulation results are provided to verify the validity of the aforementioned model and the analysis results. The proposed theoretical analysis method is also suitable for other similar cascaded converters and can provide useful theoretical guidance for structural design and power density optimization.

  10. Contribution to the energy calibration of GLAST-LAT's calorimeter and validation of hadronic cascade models available with GEANT4

    International Nuclear Information System (INIS)

    Bregeon, J.

    2005-09-01

GLAST is the new generation of gamma-ray telescope and should dramatically improve our knowledge of the gamma-ray sky when it is launched on September 7, 2007. Data from the beam test that was held at GANIL with low energy ions were analyzed in order to measure the light quenching factor of CsI for all kinds of ions from proton to krypton with energies between 0 and 73 MeV per nucleon. These results have been very useful for understanding the light quenching for relativistic ions that was measured during the GSI beam test. Knowledge of light quenching in GLAST CsI detectors for high energy ions is required for the on-orbit calibration with cosmic rays to succeed. Hadronic background rejection is another major issue for GLAST, and all the rejection algorithms rely on the GLAST official Monte-Carlo simulation, GlastRelease. Hadronic cascade data from the GSI beam test and from another beam test held at CERN on the SPS have been used to benchmark hadronic cascade simulation within the framework of GEANT4, on which GlastRelease is based. Testing the reproduction of simple parameters in GLAST-like calorimeters for hadronic cascades generated by 1.7 GeV, 3.4 GeV, 10 GeV and 20 GeV protons or pions led us to the conclusion that at high energy the default LHEP model is good enough, whereas at low energy the Bertini intra-nuclear cascade model should be used. (author)

  11. The Impact of the Topology on Cascading Failures in a Power Grid Model

    NARCIS (Netherlands)

    Koç, Y.; Warnier, M.; Mieghem, P. van; Kooij, R.E.; Brazier, F.M.T.

    2014-01-01

    Cascading failures are one of the main reasons for large scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly

  12. Modeling the radar scatter off of high-energy neutrino-induced particle cascades in ice

    NARCIS (Netherlands)

    de Vries, Krijn D.; van Eijndhoven, Nick; O'Murchadha, Aongus; Toscano, Simona; Scholten, Olaf

    2017-01-01

We discuss the radar detection method as a probe for high-energy neutrino induced particle cascades in ice. In a previous work we showed that the radar detection technique is a promising method to probe the high-energy cosmic neutrino flux above PeV energies. This was done by considering a simplified

  13. Ariadne version 4 - a program for simulation of QCD cascades implementing the colour dipole model

    International Nuclear Information System (INIS)

    Loennblad, L.

    1992-01-01

    The fourth version of the Ariadne program for generating QCD cascades in the colour dipole approximation is presented. The underlying physics issues are discussed and a manual for using the program is given together with a few sample programs. The major changes from previous versions are the introduction of photon radiation from quarks and inclusion of interfaces to the LEPTO and PYTHIA programs. (orig.)

  14. Numerical modeling of energy-separation in cascaded Leontiev tubes with a central body

    Directory of Open Access Journals (Sweden)

    Makarov Maksim

    2017-01-01

    Full Text Available Designs of two- and three-cascaded Leontiev tubes are proposed in the paper. The results of numerical simulation of the energy separation in such tubes are presented. The efficiency parameters are determined in direct flows of helium-xenon coolant with low Prandtl number.

  15. Cascade and intermittency model for turbulent compressible self-gravitating matter and self-binding phase-space density fluctuations

    International Nuclear Information System (INIS)

    Biglari, H.; Diamond, P.H.

    1988-01-01

    A simple physical model which describes the dynamics of turbulence and the spectrum of density fluctuations in compressible, self-gravitating matter and self-binding, phase-space density fluctuations is presented. The two systems are analogous to each other in that each tends to self-organize into hierarchical structures via the mechanism of Jeans collapse. The model, the essential physical ingredient of which is a cascade constrained by the physical requirement of quasivirialization, is shown to exhibit interesting geometric properties such as intrinsic intermittency and anisotropy

  16. Learning Cascading

    CERN Document Server

    Covert, Michael

    2015-01-01

    This book is intended for software developers, system architects and analysts, big data project managers, and data scientists who wish to deploy big data solutions using the Cascading framework. You must have a basic understanding of the big data paradigm and should be familiar with Java development techniques.

  17. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal… As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical… interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data.

  18. Research of Cubic Bezier Curve NC Interpolation Signal Generator

    Directory of Open Access Journals (Sweden)

    Shijun Ji

    2014-08-01

Full Text Available Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear or parabolic interpolation. For the numerical control (NC) machining of parts with complicated surfaces, however, one must establish a mathematical model, generate the curve and surface outlines of the parts, and then discretize the generated outlines into a large number of straight-line or arc segments for processing, which creates complex programs and a large amount of code and inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness and machining efficiency. The stepless interpolation of cubic Bezier curves controlled by an analog signal is studied in this paper: the tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed into the control motor, which completes the precise feeding of the Bezier curve. This method extends the trajectory-control ability of CNC from simple lines and circular arcs to complex engineering curves, and provides a new, economical way of machining curved-surface parts with high quality and high efficiency.
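The curve the interpolator steps through can be evaluated with De Casteljau's algorithm, the standard way to compute points on a cubic Bezier from its four control points. The control points below are illustrative assumptions; a real NC interpolator would also regulate feed rate along the parameter.

```python
# Hedged sketch: evaluate a cubic Bezier curve with De Casteljau's algorithm,
# the kind of point generation a Bezier interpolator steps through.

def de_casteljau(p0, p1, p2, p3, t):
    """Point on the cubic Bezier at parameter t in [0, 1]."""
    lerp = lambda a, b, t: tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    return lerp(r0, r1, t)

# Sample the curve at a few parameter steps, as a stepless interpolator might.
pts = [de_casteljau((0, 0), (1, 2), (3, 2), (4, 0), t / 4) for t in range(5)]
print(pts[0], pts[-1])   # endpoints coincide with the end control points
```

Adjusting the intermediate control points (1, 2) and (3, 2) reshapes the trajectory without changing the endpoints, which is the planning knob the abstract describes.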

  19. Response of Renal Podocytes to Excessive Hydrostatic Pressure: a Pathophysiologic Cascade in a Malignant Hypertension Model

    Directory of Open Access Journals (Sweden)

    Ramzia Abu Hamad

    2017-12-01

Full Text Available Background/Aims: Renal injuries induced by increased intra-glomerular pressure coincide with podocyte detachment from the glomerular basement membrane (GBM). In previous studies, it was demonstrated that mesangial cells have a crucial role in the pathogenesis of malignant hypertension. However, the exact pathophysiological cascade responsible for podocyte detachment and its relationship with mesangial cells has not yet been fully elucidated; this was the aim of the current study. Methods: Rat renal mesangial cells or podocytes were exposed to high hydrostatic pressure in an in-vitro model of malignant hypertension. The resulting effects on podocyte detachment, apoptosis and expression of podocin and integrinβ1, in addition to Angiotensin-II and TGF-β1 generation, were evaluated. To simulate the paracrine effect, podocytes were placed in mesangial cell media pre-exposed to pressure, or in media enriched with Angiotensin-II, TGF-β1 or receptor blockers. Results: High pressure resulted in increased Angiotensin-II levels in mesangial and podocyte cells. Angiotensin-II via the AT1 receptors reduced podocin and integrinβ1 expression, culminating in detachment of both viable and apoptotic podocytes. Mesangial cells exposed to pressure had a greater increase in Angiotensin-II than pressure-exposed podocytes. The massively increased concentration of Angiotensin-II by mesangial cells, together with increased TGF-β1 production, resulted in increased apoptosis and detachment of non-viable apoptotic podocytes. Unlike the direct effect of pressure on podocytes, the mesangial mediated effects were not related to changes in adhesion protein expression. Conclusions: Hypertension induces podocyte detachment by autocrine and paracrine effects. In a direct response to pressure, podocytes increase Angiotensin-II levels. This leads, via AT1 receptors, to structural changes in adhesion proteins, culminating in viable podocyte detachment. Paracrine effects of

  20. Cascade Model for Online Discussion Boards in an E-Learning Environment

    Directory of Open Access Journals (Sweden)

    Vibha Kumar

    2010-03-01

Full Text Available This report is the outcome of five years of teaching and managing groups of students in an online learning environment. Some course management software allows the user to create groups and add different links within each group. Distinct platforms, with various sections, can be formed within those links for any given project. Students, as well as instructors, can manage the project for 6 to 8 weeks, cascading one discussion board into one or multiple platforms. This provides better understanding of the project material due to the step-by-step layout of the given exercise, leading to improved group management and greater communication among the student group members. This report provides the step-by-step procedure for cascading one discussion board into platforms, to manage online projects and provide a more controlled online environment for students in higher education.

  1. Hadronic cascade processes

    International Nuclear Information System (INIS)

    Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.

    1977-01-01

    The analytical treatment of hadronic decay cascades within the framework of the statistical bootstrap model is demonstrated on the basis of a simple variant. Selected problems for a more comprehensive formulation of the model such as angular momentum conservation, quantum statistical effects, and the immediate applicability to particle production processes at high energies are discussed in detail

  2. Modeling DNA-damage-induced pneumopathy in mice: insight from danger signaling cascades

    OpenAIRE

    Wirsdörfer, Florian; Jendrossek, Verena

    2017-01-01

    Radiation-induced pneumonitis and fibrosis represent severe and dose-limiting side effects in the radiotherapy of thorax-associated neoplasms leading to decreased quality of life or - as a consequence of treatment with suboptimal radiation doses - to fatal outcomes by local recurrence or metastatic disease. It is assumed that the initial radiation-induced damage to the resident cells triggers a multifaceted damage-signalling cascade in irradiated normal tissues including a multifactorial secr...

  3. Permanently calibrated interpolating time counter

    International Nuclear Information System (INIS)

    Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

    2015-01-01

We propose a new architecture for an integrated time interval counter that provides permanent calibration in the background. Time interval measurement and the calibration procedure are based on a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by doubling the two-stage interpolators in the measurement channels of the counter, and by an appropriate extension of the control logic. This modification allows the transfer characteristics of the interpolators to be updated without breaking a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and its influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of a counter with a traditional calibration procedure. (paper)
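The transfer-characteristic update described above is commonly obtained with a statistical code-density test: random hits spread uniformly over one clock period estimate each interpolator bin's width. A minimal sketch, where the histogram counts and the 10 ns clock period are illustrative assumptions, not the paper's parameters:

```python
# Hedged sketch: code-density calibration of a time interpolator. Each code's
# share of random hits estimates its bin width; cumulative widths give the
# bin-centre times that form the transfer characteristic.

def calibrate(hist, period_ns=10.0):
    """Map each interpolator code to its bin-centre time within one period."""
    total = sum(hist)
    widths = [period_ns * h / total for h in hist]   # estimated bin widths
    centres, edge = [], 0.0
    for w in widths:
        centres.append(edge + w / 2.0)               # centre of this bin
        edge += w
    return centres

# Uniform histogram -> uniform 2.5 ns bins across a 10 ns clock period.
print(calibrate([25, 25, 25, 25]))   # [1.25, 3.75, 6.25, 8.75]
```

Running this continuously on calibration hits, in parallel with measurement data, is what lets a counter of this kind track temperature-induced drift without interrupting the measurement session.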

  4. Model approach for stress induced steroidal hormone cascade changes in severe mental diseases.

    Science.gov (United States)

    Volko, Claus D; Regidor, Pedro A; Rohr, Uwe D

    2016-03-01

Stress was described by Cushing and Selye as an adaptation to a foreign stressor by the anterior pituitary increasing ACTH, which stimulates the release of glucocorticoid and mineralocorticoid hormones. The question is raised whether stress can induce additional steroidal hormone cascade changes in severe mental diseases (SMD), since stress is the common denominator. A systematic literature review was conducted in PubMed, in which the steroidal hormone cascade of patients with SMD was compared to the impact of increasing stress on the steroidal hormone cascade (a) in healthy amateur marathon runners with no overtraining; (b) in healthy well-trained elite soldiers of a ranger training unit in North Norway, who were under extreme physical and mental stress, sleep deprivation, and insufficient calories for 1 week; and (c) in soldiers suffering from post traumatic stress disorder (PTSD), schizophrenia (SI), and bipolar disorders (BD). (a) When healthy men and women are exposed to moderate physical stress for 3-5 days, as in the case of amateur marathon runners, only a few steroidal hormones are altered. A mild reduction in testosterone, cholesterol and triglycerides is detected in blood and in saliva, but there is no decrease in estradiol. Conversely, there is an increase of the glucocorticoids, aldosterone and cortisol. Cellular immunity, but not specific immunity, is reduced for a short time in these subjects. (b) These changes are also seen in healthy elite soldiers exposed to extreme physical and mental stress, but to a somewhat greater extent. For instance, aldosterone is increased by a factor of three. (c) In SMD, an irreversible effect on the entire steroidal hormone cascade is detected. Hormones at the top of the cascade, such as cholesterol, dehydroepiandrosterone (DHEA), aldosterone and other glucocorticoids, are increased. However, testosterone and estradiol and their metabolites, and other hormones at the lower end of the cascade, seem to be reduced.

  5. Spatial interpolation of point velocities in stream cross-section

    Directory of Open Access Journals (Sweden)

    Hasníková Eliška

    2015-03-01

Full Text Available The most frequently used instrument for measuring velocity distribution in the cross-section of small rivers is the propeller-type current meter. The output of such measurements is a small set of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of interpolation models.
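One simple way to densify such sparse point velocities is inverse-distance weighting (IDW). The abstract does not name the interpolation method used, so both the method choice and the sample values below are assumptions for illustration only.

```python
# Hedged sketch: inverse-distance weighting over (position, value) pairs,
# estimating the velocity at an unmeasured point of the cross-section.

def idw(points, target, power=2.0):
    """Interpolate a value at `target` from (position, value) pairs."""
    num = den = 0.0
    for (y, z), v in points:
        d2 = (y - target[0]) ** 2 + (z - target[1]) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a measured point
        w = d2 ** (-power / 2.0)          # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Point velocities (m/s) at (width, depth) positions in the cross-section.
samples = [((0.0, 0.0), 0.2), ((1.0, 0.0), 0.4), ((0.0, 1.0), 0.3)]
print(round(idw(samples, (0.5, 0.5)), 2))   # 0.3: all samples equidistant here
```

Kriging or spline surfaces would be natural alternatives when the flow field's smoothness matters; IDW is shown only because it is the shortest self-contained example.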

  6. A Study of Residual Amplitude Modulation Suppression in Injection Locked Quantum Cascade Lasers Based on a Simplified Rate Equation Model

    International Nuclear Information System (INIS)

    Webb, J F; Yong, K S C; Haldar, M K

    2015-01-01

    Using results that come out of a simplified rate equation model, the suppression of residual amplitude modulation in injection locked quantum cascade lasers with the master laser modulated by its drive current is investigated. Quasi-static and dynamic expressions for intensity modulation are used. The suppression peaks at a specific value of the injection ratio for a given detuning and linewidth enhancement factor. The intensity modulation suppression remains constant over a range of frequencies. The effects of injection ratio, detuning, coupling efficiency and linewidth enhancement factor are considered. (paper)

  7. Meson and baryon production in K⁺ and π⁺ beam jets and quark-diquark cascade model

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Kisei [Kagoshima Univ. (Japan). Faculty of Education; Noda, Hujio; Tashiro, Tsutomu

    1982-11-01

A quark-diquark cascade model which includes flavor dependence and resonance effects is studied. The inclusive distributions of vector and pseudoscalar mesons and octet baryons and antibaryons in K⁺ and π⁺ beam jets are analyzed. The contribution of decuplet baryons to the octet baryon spectra is very important in meson beam jets. The effects of the asymmetric u- and anti-s-quark distributions in K⁺ and the SU(6)-symmetry breaking for the produced octet baryons are discussed in connection with the π⁺/K⁺ beam ratio and other data.

  8. From Cascade to Bottom-Up Ecosystem Services Model: How Does Social Cohesion Emerge from Urban Agriculture?

    Directory of Open Access Journals (Sweden)

    Anna Petit-Boix

    2018-03-01

Full Text Available Given the expansion of urban agriculture (UA), we need to understand how this system provides ecosystem services, including foundational societal needs such as social cohesion, i.e., people’s willingness to cooperate with one another. Although social cohesion in UA has been documented, there is no framework for its emergence and how it can be modeled within a sustainability framework. In this study, we address this literature gap by showing how the popular cascade ecosystem services model can be modified to include social structures. We then transform the cascade model into a bottom-up causal framework for UA. In this bottom-up framework, basic biophysical (e.g., land availability) and social (e.g., leadership) ecosystem structures and processes lead to human activities (e.g., learning) that can foster specific human attitudes and feelings (e.g., trust). These attitudes and feelings, when aggregated (e.g., into a social network), generate an ecosystem value of social cohesion. These cause-effect relationships can support the development of causality pathways in social life cycle assessment (S-LCA) and further our understanding of the mechanisms behind social impacts and benefits. The framework also supports UA studies by showing the sustainability of UA as an emergent food supplier in cities.

  9. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
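The statistical ingredient of the algorithm above, Gaussian process regression, can be sketched in one dimension: predict a missing sample from two known neighbours under an RBF kernel. The kernel width, noise level, and pixel values are illustrative assumptions; the paper's energy term is omitted entirely.

```python
# Hedged sketch: 1-D Gaussian process regression with an RBF kernel, solving
# the 2x2 kernel system by hand to get the posterior mean at a query point.
import math

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between two sample positions."""
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def gp_predict(xs, ys, x_star, length=1.0, noise=1e-6):
    """Posterior mean at x_star for a two-point training set."""
    k11 = rbf(xs[0], xs[0], length) + noise
    k22 = rbf(xs[1], xs[1], length) + noise
    k12 = rbf(xs[0], xs[1], length)
    det = k11 * k22 - k12 * k12
    # Solve K w = y by hand for the 2x2 kernel matrix K.
    w0 = (k22 * ys[0] - k12 * ys[1]) / det
    w1 = (k11 * ys[1] - k12 * ys[0]) / det
    return rbf(x_star, xs[0], length) * w0 + rbf(x_star, xs[1], length) * w1

# Predict the pixel value midway between two known pixel intensities.
print(round(gp_predict([0.0, 2.0], [10.0, 20.0], 1.0), 1))
```

With negligible noise the posterior mean passes through the training samples, which is the interpolation property the paper relies on; a full image method would use many neighbours and a linear solver rather than this hand-solved 2x2 case.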

  10. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
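The simplest of the four techniques, MBR-Interpolation, can be sketched as a corner-wise linear blend of an event's minimum bounding rectangle between two reported times. The rectangle coordinates and timestamps below are illustrative assumptions, not solar event data.

```python
# Hedged sketch: linearly interpolate the minimum bounding rectangle (MBR)
# of an evolving region between two observation times.

def interpolate_mbr(mbr0, mbr1, t0, t1, t):
    """Blend two (xmin, ymin, xmax, ymax) rectangles at time t in [t0, t1]."""
    f = (t - t0) / (t1 - t0)              # fractional position in the gap
    return tuple((1 - f) * a + f * b for a, b in zip(mbr0, mbr1))

# An event region drifting and growing between two observations 60 s apart.
mid = interpolate_mbr((0, 0, 10, 10), (4, 2, 18, 14), 0.0, 60.0, 30.0)
print(mid)   # (2.0, 1.0, 14.0, 12.0)
```

The paper's CP-, FI-, and Areal-Interpolation methods replace this rectangle blend with full polygon boundaries, shape signatures, and dynamic time warping, but the time-parameterised blending idea is the same.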

  11. Modeling and analyzing cascading dynamics of the Internet based on local congestion information

    Science.gov (United States)

    Zhu, Qian; Nie, Jianlong; Zhu, Zhiliang; Yu, Hai; Xue, Yang

    2018-06-01

    Cascading failure has already become one of the vital issues in network science. By considering realistic network operational settings, we propose the congestion function to represent the congested extent of node and construct a local congestion-aware routing strategy with a tunable parameter. We investigate the cascading failures on the Internet triggered by deliberate attacks. Simulation results show that the tunable parameter has an optimal value that makes the network achieve a maximum level of robustness. The robustness of the network has a positive correlation with tolerance parameter, but it has a negative correlation with the packets generation rate. In addition, there exists a threshold of the attacking proportion of nodes that makes the network achieve the lowest robustness. Moreover, by introducing the concept of time delay for information transmission on the Internet, we found that an increase of the time delay will decrease the robustness of the network rapidly. The findings of the paper will be useful for enhancing the robustness of the Internet in the future.
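The tolerance-parameter behaviour described above can be illustrated with a generic load-redistribution cascade: each node's capacity is (1 + tolerance) times its initial load, and a failed node's load is shared among surviving neighbours. The redistribution rule and the toy ring graph are assumptions for illustration, far simpler than the paper's congestion-aware routing model.

```python
# Hedged sketch: attack one node, redistribute its load to live neighbours,
# and iterate until no node exceeds its capacity.

def cascade(adj, load, tolerance, attacked):
    """Return the set of failed nodes after attacking one node."""
    capacity = {n: (1 + tolerance) * load[n] for n in adj}
    failed = {attacked}
    queue = [attacked]
    while queue:
        n = queue.pop()
        alive = [m for m in adj[n] if m not in failed]
        for m in alive:
            load[m] += load[n] / len(alive)   # share the failed node's load
        for m in alive:
            if load[m] > capacity[m]:
                failed.add(m)
                queue.append(m)
    return failed

# A small ring network with uniform initial load.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
load = {n: 1.0 for n in adj}
print(len(cascade(adj, dict(load), tolerance=0.4, attacked=0)))   # 4: full collapse
```

Raising the tolerance to 1.0 in this toy confines the failure to the attacked node, mirroring the positive correlation between tolerance and robustness reported in the abstract.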

  12. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
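One of the two well-known schemes the note discusses is the classical cubic convolution kernel with parameter a = −1/2 (commonly attributed to Keys); a sample at a non-integer position is reconstructed from its four nearest grid neighbours. The sample values below are illustrative.

```python
# Hedged sketch: cubic convolution interpolation with the a = -1/2 kernel.

def keys_kernel(s, a=-0.5):
    """Piecewise-cubic convolution kernel with support [-2, 2]."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def cubic_interp(samples, x):
    """Interpolate at non-integer x from samples on an integer grid."""
    i = int(x)
    return sum(samples[i + k] * keys_kernel(x - (i + k)) for k in range(-1, 3))

data = [0.0, 1.0, 2.0, 3.0, 4.0]
print(cubic_interp(data, 1.5))   # 1.5: the scheme reproduces linear ramps
```

The kernel interpolates exactly (it is 1 at s = 0 and 0 at other integers), which is the osculatory property the note traces back to the actuarial literature.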

  13. Node insertion in Coalescence Fractal Interpolation Function

    International Nuclear Information System (INIS)

    Prasad, Srijanani Anurag

    2013-01-01

The Iterated Function System (IFS) used in the construction of a Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point into a given set of interpolation data is called the problem of node insertion. In this paper, the effect of inserting a new point on the related IFS and the Coalescence Fractal Interpolation Function is studied. Smoothness and fractal dimension of a CHFIF obtained after node insertion are also discussed

  14. Comparing interpolation schemes in dynamic receive ultrasound beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

    2005-01-01

In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as the simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design…

  15. Topics in multivariate approximation and interpolation

    CERN Document Server

    Jetter, Kurt

    2005-01-01

This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as Computer Aided Geometric Design, Mathematical Modelling, Signal and Image Processing and Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr

  16. Bayer Demosaicking with Polynomial Interpolation.

    Science.gov (United States)

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process used to reconstruct full color digital images from the incomplete color samples output by an image sensor. It is an unavoidable step for the many devices that incorporate a camera sensor (e.g., mobile phones, tablets). In this paper, we introduce a new polynomial interpolation-based demosaicking (PID) algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves on existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
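
    The bilinear predictor that PID is positioned against is easy to state: a missing colour sample is estimated as the average of its nearest neighbours of that colour. A minimal sketch for the green channel of an RGGB mosaic follows; the function name and layout assumption are illustrative, not from the paper.

```python
def demosaic_green(raw, y, x):
    """Bilinear estimate of green at a red/blue site of an RGGB Bayer mosaic:
    average the 4-connected neighbours, which all carry green there."""
    h, w = len(raw), len(raw[0])
    nbrs = [raw[j][i]
            for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= j < h and 0 <= i < w]
    return sum(nbrs) / len(nbrs)
```

    PID's polynomial predictors play the same role as this average but fit a higher-order model along the edge direction chosen by the classifier.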

  17. Inferring network structure from cascades

    Science.gov (United States)

    Ghonge, Sushrut; Vural, Dervis Can

    2017-07-01

    Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.

  18. Eco-hydrologic model cascades: Simulating land use and climate change impacts on hydrology, hydraulics and habitats for fish and macroinvertebrates.

    Science.gov (United States)

    Guse, Björn; Kail, Jochem; Radinger, Johannes; Schröder, Maria; Kiesel, Jens; Hering, Daniel; Wolter, Christian; Fohrer, Nicola

    2015-11-15

    Climate and land use changes affect the hydro- and biosphere at different spatial scales. These changes alter hydrological processes at the catchment scale, which in turn impact hydrodynamics and habitat conditions for biota at the river reach scale. In order to investigate the impact of large-scale changes on biota, a cascade of models at different scales is required. Using scenario simulations, the impacts of climate and land use change can be compared along the model cascade. Such a cascade of consecutively coupled models was applied in this study. Discharge and water quality are predicted with a hydrological model at the catchment scale. The hydraulic flow conditions are predicted by hydrodynamic models. The habitat suitability under these hydraulic and water quality conditions is assessed with habitat models for fish and macroinvertebrates. This modelling cascade was applied to predict and compare the impacts of climate and land use changes at different scales and finally to assess their effects on fish and macroinvertebrates. Model simulations revealed that the magnitude and direction of change differed along the modelling cascade. Whilst the hydrological model predicted a relevant decrease in discharge due to climate change, the hydraulic conditions changed less. Generally, the habitat suitability for fish decreased, but this was strongly species-specific and suitability even increased for some species. In contrast to climate change, the effect of land use change on discharge was negligible. However, land use change had a stronger impact on the modelled nitrate concentrations, affecting the abundances of macroinvertebrates. The scenario simulations for the two organism groups illustrated that the direction and intensity of changes in habitat suitability are highly species-dependent. Thus, a joint model analysis of different organism groups, combined with the results of hydrological and hydrodynamic models, is recommended to assess the impact of climate and land use changes on

  19. Information cascade on networks

    Science.gov (United States)

    Hisakado, Masato; Mori, Shintaro

    2016-05-01

    In this paper, we discuss a voting model on three different kinds of networks: a random graph, the Barabási-Albert (BA) model, and a fitness model. A voting model represents the way in which public perceptions are conveyed to voters. Our voting model is constructed using two types of voters, herders and independents, and two candidates. Independents vote based on their fundamental values; herders, on the other hand, base their voting on the number of previous votes. Hence, herders vote for the majority candidate and obtain information about previous votes from their networks. We discuss how the phases differ depending on the network. Two kinds of phase transitions were identified: an information cascade transition and a super-normal transition. The first is a transition between a state in which most voters make the correct choice and a state in which most of them are wrong. The second is a transition of convergence speed. The information cascade transition prevails when herder effects are stronger than the super-normal transition. In the BA and fitness models, the critical point of the information cascade transition is the same as that of the random network model. However, the critical point of the super-normal transition disappears in these two models. In conclusion, the influence of networks is shown to affect only the convergence speed and not the information cascade transition. We therefore conclude that the influence of hubs on voters' perceptions is limited.
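
    A fully mixed (network-free) simplification of the herder/independent dynamics can be simulated directly. In this sketch herders follow the current global majority instead of their network neighbours, and all names and parameter values are illustrative assumptions, not the paper's model.

```python
import random

def vote_sequence(n_voters, p_herder, p_correct, seed=0):
    """Sequential voting: an independent picks candidate 0 ('correct') with
    probability p_correct; a herder copies the current majority (tie: random)."""
    rng = random.Random(seed)
    votes = [0, 0]
    for _ in range(n_voters):
        if rng.random() < p_herder:                 # herder
            if votes[0] != votes[1]:
                c = 0 if votes[0] > votes[1] else 1
            else:
                c = rng.randrange(2)
        else:                                       # independent
            c = 0 if rng.random() < p_correct else 1
        votes[c] += 1
    return votes
```

    Sweeping p_herder over repeated runs exposes the cascade-like regime in which an early random majority locks in, the phenomenon behind the information cascade transition.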

  20. A case study of aerosol data assimilation with the Community Multi-scale Air Quality Model over the contiguous United States using 3D-Var and optimal interpolation methods

    Science.gov (United States)

    Tang, Youhua; Pagowski, Mariusz; Chai, Tianfeng; Pan, Li; Lee, Pius; Baker, Barry; Kumar, Rajesh; Delle Monache, Luca; Tong, Daniel; Kim, Hyun-Cheol

    2017-12-01

    This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS domain. Both GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. The assimilation of observations using both GSI and OI generally helps reduce the prediction biases and improves the correlation between model predictions and observations. The GSI experiments, which assimilate surface PM2.5 (particulate matter with aerodynamic diameter less than 2.5 µm), and the OI experiments show big differences beyond the data assimilation schemes themselves. For instance, the OI uses relatively big model uncertainties, which helps yield smaller mean biases, but sometimes causes the RMSE to increase. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.
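
    In its scalar form, the OI analysis used as a baseline here is just a variance-weighted blend of the model background and the observation. The sketch below is the textbook scalar update, not the CMAQ implementation.

```python
def oi_update(background, obs, var_b, var_o):
    """Scalar optimal-interpolation analysis: the gain weights the innovation
    (obs - background) by the relative size of the background error variance."""
    k = var_b / (var_b + var_o)            # Kalman-style gain in [0, 1]
    analysis = background + k * (obs - background)
    var_a = (1.0 - k) * var_b              # reduced analysis error variance
    return analysis, var_a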

  1. Experimental study on cascaded attitude angle control of a multi-rotor unmanned aerial vehicle with the simple internal model control method

    International Nuclear Information System (INIS)

    Song, Jun Beom; Byun, Young Seop; Jeong, Jin Seok; Kim, Jeong; Kang, Beom Soo

    2016-01-01

    This paper proposes a cascaded control structure and a method for its practical application to attitude control of a multi-rotor unmanned aerial vehicle (UAV). Cascade control, which has tighter control capability than single-loop control, is rarely used in attitude control of a multi-rotor UAV because of the input-output relation: it is no longer simply a set-point to Euler-angle response transfer function as in single-loop PID control, but involves multiple measured signals and interacting control loops that increase the complexity of evaluation in the conventional way of design. However, this research proposes a method that can optimize a cascade control with primary and secondary loops and a PID controller for each loop. An investigation of currently available PID-tuning methods led to the selection of the Simple internal model control (SIMC) method, which is based on the Internal model control (IMC) and direct-synthesis methods. Through analysis and experiments, this research proposes a systematic procedure to implement a cascaded attitude controller, including flight test, system identification and SIMC-based PID tuning. The proposed method was validated successfully in multiple applications: the application to the roll axis led to a PID-PID cascade control, while the application to the yaw axis led to a PID-PI cascade control.
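
    For reference, the SIMC rule selected above has a closed form when the loop is approximated by a first-order-plus-delay model K·e^(-θs)/(τ1·s + 1). The sketch below states the PI version of Skogestad's published rules; the function name and the default τc = θ are illustrative choices, not this paper's exact procedure.

```python
def simc_pi(K, tau1, theta, tau_c=None):
    """SIMC PI tuning for the model K*exp(-theta*s)/(tau1*s + 1).
    tau_c is the desired closed-loop time constant (default: tau_c = theta)."""
    if tau_c is None:
        tau_c = theta
    Kc = tau1 / (K * (tau_c + theta))         # proportional gain
    tau_i = min(tau1, 4.0 * (tau_c + theta))  # integral time
    return Kc, tau_i
```

    A smaller τc gives a tighter but less robust loop, which is the single tuning knob that makes SIMC attractive for the secondary (inner) loop of a cascade.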

  2. Experimental study on cascaded attitude angle control of a multi-rotor unmanned aerial vehicle with the simple internal model control method

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jun Beom [Dept. of Aviation Maintenance, Dongwon Institute of Science and Technology, Yangsan (Korea, Republic of); Byun, Young Seop; Jeong, Jin Seok; Kim, Jeong; Kang, Beom Soo [Dept. of Aerospace Engineering, Pusan National University, Busan (Korea, Republic of)

    2016-11-15

    This paper proposes a cascaded control structure and a method for its practical application to attitude control of a multi-rotor unmanned aerial vehicle (UAV). Cascade control, which has tighter control capability than single-loop control, is rarely used in attitude control of a multi-rotor UAV because of the input-output relation: it is no longer simply a set-point to Euler-angle response transfer function as in single-loop PID control, but involves multiple measured signals and interacting control loops that increase the complexity of evaluation in the conventional way of design. However, this research proposes a method that can optimize a cascade control with primary and secondary loops and a PID controller for each loop. An investigation of currently available PID-tuning methods led to the selection of the Simple internal model control (SIMC) method, which is based on the Internal model control (IMC) and direct-synthesis methods. Through analysis and experiments, this research proposes a systematic procedure to implement a cascaded attitude controller, including flight test, system identification and SIMC-based PID tuning. The proposed method was validated successfully in multiple applications: the application to the roll axis led to a PID-PID cascade control, while the application to the yaw axis led to a PID-PI cascade control.

  3. Period doubling cascades of limit cycles in cardiac action potential models as precursors to chaotic early Afterdepolarizations.

    Science.gov (United States)

    Kügler, Philipp; Bulelzai, M A K; Erhardt, André H

    2017-04-04

    Early afterdepolarizations (EADs) are pathological voltage oscillations during the repolarization phase of cardiac action potentials (APs). EADs are caused by drugs, oxidative stress or ion channel disease, and they are considered as potential precursors to cardiac arrhythmias in recent attempts to redefine the cardiac drug safety paradigm. The irregular behaviour of EADs observed in experiments has been previously attributed to chaotic EAD dynamics under periodic pacing, made possible by a homoclinic bifurcation in the fast subsystem of the deterministic AP system of differential equations. In this article we demonstrate that a homoclinic bifurcation in the fast subsystem of the action potential model is neither a necessary nor a sufficient condition for the genesis of chaotic EADs. We rather argue that a cascade of period doubling (PD) bifurcations of limit cycles in the full AP system paves the way to chaotic EAD dynamics across a variety of models including a) periodically paced and spontaneously active cardiomyocytes, b) periodically paced and non-active cardiomyocytes as well as c) unpaced and spontaneously active cardiomyocytes. Furthermore, our bifurcation analysis reveals that chaotic EAD dynamics may coexist in a stable manner with fully regular AP dynamics, where only the initial conditions decide which type of dynamics is displayed. EADs are a potential source of cardiac arrhythmias and hence are of relevance both from the viewpoint of drug cardiotoxicity testing and the treatment of cardiomyopathies. The model-independent association of chaotic EADs with period doubling cascades of limit cycles introduced in this article opens novel opportunities to study chaotic EADs by means of bifurcation control theory and inverse bifurcation analysis. Furthermore, our results may shed new light on the synchronization and propagation of chaotic EADs in homogeneous and heterogeneous multicellular and cardiac tissue preparations.
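
    The cardiac AP models in question are high-dimensional, but the period-doubling route to chaos the authors invoke is conveniently illustrated on the classic logistic map. This is purely illustrative and is not the authors' model.

```python
def attractor_period(r, n_transient=2000, n_sample=256, tol=1e-6):
    """Period of the logistic-map attractor x -> r*x*(1-x), up to period 16."""
    x = 0.5
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_sample):          # record the attractor
        x = r * x * (1 - x)
        orbit.append(x)
    ref = orbit[-1]
    for p in (1, 2, 4, 8, 16):         # smallest recurrence time wins
        if abs(orbit[-1 - p] - ref) < tol:
            return p
    return None                        # longer period or chaos
```

    Increasing r walks through the cascade (period 1 → 2 → 4 → ...) before chaos sets in near r ≈ 3.57, just as increasing a bifurcation parameter in the AP models paves the way to chaotic EADs.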

  4. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    Science.gov (United States)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-time scale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters calculated based on a Bayesian inversion scheme show comparable ranges from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.

  5. Monte-Carlo Modeling of Parameters of a Subcritical Cascade Reactor Based on MSBR and LMFBR Technologies

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H

    2001-01-01

    Parameters are investigated of a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder (MSBR) reactor core, and a booster-reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}·s^{-1} and in the fast booster zone is 5.12·10^{15} cm^{-2}·s^{-1} at k_{eff}=0.98 and proton beam current I=2.1 mA.

  6. Aggression, Sibling Antagonism, and Theory-of-Mind During the First Year of Siblinghood: A Developmental Cascade Model

    Science.gov (United States)

    Song, Ju-Hyun; Volling, Brenda L.; Lane, Jonathan D.; Wellman, Henry M.

    2016-01-01

    A developmental cascade model was tested to examine longitudinal associations among firstborn children's aggression, Theory-of-Mind, and antagonism toward their younger sibling during the first year of siblinghood. Aggression and Theory-of-Mind were assessed before the birth of a sibling, and 4 and 12 months after the birth, and antagonism was examined at 4 and 12 months in a sample of 208 firstborn children (initial M age = 30 months, 56% girls) from primarily European American, middle-class families. Firstborns' aggression consistently predicted high sibling antagonism both directly and through poorer Theory-of-Mind. Results highlight the importance of examining longitudinal influences across behavioral, social-cognitive, and relational factors that are closely intertwined even from the early years of life. PMID:27096923

  7. Monte-Carlo modeling of parameters of a subcritical cascade reactor based on MSBR and LMFBR technologies

    International Nuclear Information System (INIS)

    Bznuni, S.A.; Zhamkochyan, V.M.; Khudaverdyan, A.G.; Barashenkov, V.S.; Sosnin, A.N.; Polanski, A.

    2001-01-01

    Parameters are investigated of a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder (MSBR) reactor core, and a booster-reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_eff = 0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^14 cm^-2·s^-1 and in the fast booster zone is 5.12·10^15 cm^-2·s^-1 at k_eff = 0.98 and proton beam current I = 2.1 mA. (author)

  8. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
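
    Reduced to two of the four parameters, the core of such an interpolation methodology is ordinary bilinear interpolation over the tabulated solution grid. This is a simplified sketch under that assumption; the paper's scheme spans all four variables and its exact weighting is not reproduced here.

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation of table[i][j], tabulated at ascending grid
    points xs[i] and ys[j], evaluated at (x, y) inside the grid."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # local coordinates in the cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

    Looking up a crack geometry then costs four table reads instead of a fresh nonlinear finite element run.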

  9. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    Science.gov (United States)

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
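
    Of the compared families, IDW is the most compact to state: the estimate is a weighted mean of the samples with weights decaying as an inverse power of distance. A minimal sketch; the data and parameter values in the test are illustrative, not the study's soil samples.

```python
def idw(points, values, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from scattered samples."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample location
        w = d2 ** (-power / 2)            # weight = distance**(-power)
        num += w * v
        den += w
    return num / den
```

    Raising `power` localises the estimate, which is why the study treats power = 1, 2, 3 as separate candidate methods.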

  10. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point based interpolation to estimate the temperature at unallocated meteorology stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for estimating the temperature for the rest of the months.
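
    An RBF interpolant of the kind compared here is obtained by solving a small linear system for the kernel weights. A 1-D multiquadric sketch in pure Python (the 2-D station case only changes the distance computation; all names and the shape parameter eps are illustrative):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolator(xs, ys, eps=1.0):
    """Multiquadric RBF interpolant through the points (xs[i], ys[i])."""
    phi = lambda r: math.sqrt(1.0 + (eps * r) ** 2)
    A = [[phi(abs(a - b)) for b in xs] for a in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))
```

    Swapping `phi` for a thin plate spline kernel gives the other variant the study recommends for January and December.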

  11. Developmental cascade models linking peer victimization, depression, and academic achievement in Chinese children.

    Science.gov (United States)

    Liu, Junsheng; Bullock, Amanda; Coplan, Robert J; Chen, Xinyin; Li, Dan; Zhou, Ying

    2018-03-01

    This study explored the longitudinal relations among peer victimization, depression, and academic achievement in Chinese primary school students. Participants were N = 945 fourth-grade students (485 boys, 460 girls; M age  = 10.16 years, SD = 2 months) attending elementary schools in Shanghai, People's Republic of China. Three waves of data on peer victimization, depression, and academic achievement were collected from peer nominations, self-reports, and school records, respectively. The results indicated that peer victimization had both direct and indirect effects on later depression and academic achievement. Depression also had both direct and indirect negative effects on later academic achievement, but demonstrated only an indirect effect on later peer victimization. Finally, academic achievement had both direct and indirect negative effects on later peer victimization and depression. The findings show that there are cross-cultural similarities and differences in the various transactions that exist among peer victimization, depression, and academic achievement. Statement of contribution What is already known on this subject? Peer victimization directly and indirectly relates to depression and academic achievement. Depression directly and indirectly relates to academic achievement. Academic achievement directly and indirectly relates to depression. What the present study adds? A developmental cascade approach was used to assess the interrelations among peer victimization, depression, and academic achievement. Academic achievement mediates the relation between peer victimization and depression. Depression is related to peer victimization through academic achievement. Academic achievement directly and indirectly relates to peer victimization. Academic achievement is related to depression through peer victimization. © 2017 The British Psychological Society.

  12. Equivalent circuit-level model of quantum cascade lasers with integrated hot-electron and hot-phonon effects

    Science.gov (United States)

    Yousefvand, H. R.

    2017-12-01

    We report a study of the effects of hot-electron and hot-phonon dynamics on the output characteristics of quantum cascade lasers (QCLs) using an equivalent circuit-level model. The model is developed from the energy balance equation to adopt the electron temperature in the active region levels, the heat transfer equation to include the lattice temperature, the nonequilibrium phonon rate to account for the hot phonon dynamics and simplified two-level rate equations to incorporate the carrier and photon dynamics in the active region. This technique simplifies the description of the electron-phonon interaction in QCLs far from the equilibrium condition. Using the presented model, the steady and transient responses of the QCLs for a wide range of sink temperatures (80 to 320 K) are investigated and analysed. The model enables us to explain the operating characteristics found in QCLs. This predictive model is expected to be applicable to all QCL material systems operating in pulsed and cw regimes.

  13. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation cascaded integrator-comb (CIC) filter and propose an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we also compensated it: the compensated CIC filter's passband is flatter, the transition band becomes steeper, and the stopband attenuation increases. Finally, we verified the feasibility of this algorithm on a Field-Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
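
    For orientation, a plain (non-parallel) CIC interpolator is short to write: comb stages at the input rate, zero-stuffing by R, then integrator stages at the output rate. This sketch omits the paper's parallel decomposition and compensation filter; R = 8 and N = 3 are illustrative defaults.

```python
def cic_interpolate(samples, R=8, N=3):
    """N-stage CIC interpolator with rate increase R (differential delay 1):
    comb (differencing) stages at the low rate, zero-stuffing, then
    integrator (accumulator) stages at the high rate. DC gain is R**(N-1)."""
    x = list(samples)
    for _ in range(N):                       # comb stages
        prev, y = 0, []
        for s in x:
            y.append(s - prev)
            prev = s
        x = y
    up = []
    for s in x:                              # zero-stuff by R
        up.append(s)
        up.extend([0] * (R - 1))
    for _ in range(N):                       # integrator stages
        acc, y = 0, []
        for s in up:
            acc += s
            y.append(acc)
        up = y
    return up
```

    Because the combs run at the low rate and the integrators need only additions, the structure maps naturally onto FPGA fabric, which is the setting of the paper.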

  14. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    Directory of Open Access Journals (Sweden)

    Peilu Liu

    2017-10-01

    Full Text Available In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation cascaded integrator-comb (CIC) filter and propose an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we also compensated it: the compensated CIC filter's passband is flatter, the transition band becomes steeper, and the stopband attenuation increases. Finally, we verified the feasibility of this algorithm on a Field-Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  15. Hadron cascades produced by electromagnetic cascades

    International Nuclear Information System (INIS)

    Nelson, W.R.; Jenkins, T.M.; Ranft, J.

    1986-12-01

    A method for calculating high energy hadron cascades induced by multi-GeV electron and photon beams is described. Using the EGS4 computer program, high energy photons in the EM shower are allowed to interact hadronically according to the vector meson dominance (VMD) model, facilitated by a Monte Carlo version of the dual multistring fragmentation model used in the hadron cascade code FLUKA. The results of this calculation compare very favorably with experimental data on hadron production in photon-proton collisions and on hadron production by electron beams on targets (i.e., yields in secondary particle beam lines). Electron beam induced hadron star density contours are also presented and compared with those produced by proton beams. This FLUKA-EGS4 coupling technique could find use in the design of secondary beams, in the determination of high energy hadron source terms for shielding purposes, and in the estimation of induced radioactivity in targets, collimators and beam dumps.

  16. Multifractality, imperfect scaling and hydrological properties of rainfall time series simulated by continuous universal multifractal and discrete random cascade models

    Directory of Open Access Journals (Sweden)

    F. Serinaldi

    2010-12-01

    Full Text Available Discrete multiplicative random cascade (MRC) models have been extensively studied and applied to disaggregate rainfall data, thanks to their formal simplicity and the small number of parameters involved. Focusing on temporal disaggregation, the rationale of these models is based on multiplying the value assumed by a physical attribute (e.g., rainfall intensity) at a given time scale L by a suitable number b of random weights, to obtain b attribute values corresponding to statistically plausible observations at a smaller L/b time resolution. In the original formulation of the MRC models, the random weights were assumed to be independent and identically distributed. However, in several studies this hypothesis did not appear realistic for the observed rainfall series, as the distribution of the weights was shown to depend on the space-time scale and rainfall intensity. Since these findings contrast with the scale invariance assumption behind the MRC models and affect the applicability of these models, their nature is worth studying. This study explores the possible dependence of the parameters of two discrete MRC models on rainfall intensity and time scale, by analyzing point rainfall series with 5-min time resolution. Taking into account a discrete microcanonical (MC) model based on the beta distribution and a discrete canonical beta-logstable (BLS) model, the analysis points out that the relations between the parameters and rainfall intensity across the time scales are detectable and can be modeled by a set of simple functions accounting for the parameter-rainfall intensity relationship, and another set describing the link between the parameters and the time scale. Therefore, the MC and BLS models were modified to explicitly account for these relationships and compared with the continuous in scale universal multifractal (CUM) model, which is used as a physically based benchmark model. Monte Carlo simulations point out
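
    The microcanonical branching-number-2 cascade described above can be sketched in a few lines: each interval's rainfall mass is split by a random weight w and its complement 1 - w, so mass is conserved exactly at every level. The Beta(2, 2) weight and the other values here are illustrative assumptions, not fitted parameters from the study.

```python
import random

def mrc_disaggregate(total, levels, seed=1):
    """Microcanonical multiplicative random cascade with branching number 2:
    each mass is split into w*mass and (1-w)*mass, with w ~ Beta(2, 2)."""
    rng = random.Random(seed)
    series = [total]
    for _ in range(levels):
        finer = []
        for mass in series:
            w = rng.betavariate(2.0, 2.0)          # random cascade weight
            finer.extend([mass * w, mass * (1.0 - w)])
        series = finer
    return series
```

    Making the weight distribution a function of the current intensity and of the cascade level is exactly the intensity- and scale-dependence that the study builds into the modified MC and BLS models.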

  17. Interpolation of rational matrix functions

    CERN Document Server

    Ball, Joseph A; Rodman, Leiba

    1990-01-01

    This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...

  18. Stochastic background of atmospheric cascades

    International Nuclear Information System (INIS)

    Wilk, G.; Wlodarczyk, Z.

    1993-01-01

    Fluctuations in the atmospheric cascades developing during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from the fluctuations in the cascades themselves and is insensitive to the details of elementary interactions

  19. Conjunction of radial basis function interpolator and artificial intelligence models for time-space modeling of contaminant transport in porous media

    Science.gov (United States)

    Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin

    2017-05-01

    As an innovation, both black-box and physically based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by the wavelet-based de-noising approach. The effect of de-noised data on the performance of the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) was evaluated. Wavelet transform coherence was employed for spatial clustering of piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of piezometers as interior conditions, the radial basis function, as a meshless method which solves the partial differential equations of GFCT, was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the efficiency of the ANFIS-based spatiotemporal model exceeded that of the ANN-based model by up to 13%.

  20. Mathematical modeling of a continuous alcoholic fermentation process in a two-stage tower reactor cascade with flocculating yeast recycle.

    Science.gov (United States)

    de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo

    2015-03-01

    Experiments of continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D1 = D2 = 0.27-0.95 h⁻¹), constant recycle ratio (α = FR/F = 4.0) and a sugar concentration in the feed stream (S0) around 150 g/L. The data obtained in these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor and the kinetics of cell growth and product formation takes into account the limitation by substrate and the inhibition by ethanol and biomass, as well as the substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, for the inhibition constants of cell growth and for the specific rate of substrate consumption for cell maintenance. Mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.

  1. Disaggregating radar-derived rainfall measurements in East Azarbaijan, Iran, using a spatial random-cascade model

    Science.gov (United States)

    Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed

    2017-07-01

    The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data based on ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, with the help of three adjacent rainfall stations, is obtained. The new climatological Z-R relationship with a power-law form shows acceptable statistical performance, making it suitable for radar-rainfall estimation by the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with coverage area of 512 × 512 km² to a resolution of 32 × 32 km². Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvement in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.

  2. Evaluation of various interpolants available in DICE

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
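As a hedged illustration of what such an interpolant does (a generic sketch, not DICe's actual implementation), bilinear interpolation recovers an intensity value at a non-pixel location from the four surrounding pixels:

```python
def bilinear(img, x, y):
    """Bilinear interpolation of pixel intensities at the non-pixel
    location (x, y); img is a list of rows, indexed img[row][col]."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x0 + 1] * dx
    bottom = img[y0 + 1][x0] * (1 - dx) + img[y0 + 1][x0 + 1] * dx
    return top * (1 - dy) + bottom * dy

img = [[0.0, 10.0],
       [20.0, 30.0]]
center = bilinear(img, 0.5, 0.5)   # the average of the four neighbours
edge = bilinear(img, 0.5, 0.0)     # halfway along the top row
```

Higher-order interpolants (cubic splines, keys functions) follow the same pattern with wider support, trading computation time for smoother intensity fields and gradients.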

  3. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative imaging technique to image biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the time reversal algorithms based on nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation, and that it can provide higher imaging quality using significantly fewer measurement positions or scanning times.

  4. Atom-atom collision cascades localization

    International Nuclear Information System (INIS)

    Kirsanov, V.V.

    1980-01-01

    The presence of an impurity and the influence of thermal vibrations on atom-atom collision cascade development are analysed by the computer simulation method (a modified dynamic model). It is found that relatively low-energy cascades become localized as the temperature of the irradiated crystal increases. On the basis of this effect, a mechanism for the splitting of high-energy cascades into subcascades is proposed. It accounts for two factors: the primary knock-on atom energy and the irradiated crystal temperature. Introduction of an impurity also localizes the cascades, independently of the impurity atom mass. Cascade localization leads to intensification of the annealing process in the cascades and a reduction of post-cascade vacancy cluster sizes. (author)

  5. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be realized using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of the electronic transformer are computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
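The trade-off this record analyses can be reproduced in miniature: on a smooth sampled signal, a quadratic (Lagrange) interpolant costs more arithmetic per point than piecewise linear interpolation but cuts the reconstruction error substantially (a generic sketch, not the paper's IEC 60044-8 test setup; all sample values are illustrative):

```python
import math

def linear_interp(xs, ys, x):
    # piecewise linear interpolation (xs sorted, x within range)
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] * (1 - t) + ys[i + 1] * t
    raise ValueError("x outside sample range")

def quadratic_interp(xs, ys, x):
    # Lagrange quadratic through the three samples nearest to x
    i = min(range(len(xs)), key=lambda k: abs(xs[k] - x))
    i = max(1, min(i, len(xs) - 2))
    x0, x1, x2 = xs[i - 1], xs[i], xs[i + 1]
    y0, y1, y2 = ys[i - 1], ys[i], ys[i + 1]
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

signal = lambda x: math.sin(math.pi * x)     # one period over [0, 2]
xs = [i * 0.25 for i in range(9)]            # 4 samples per half-period
ys = [signal(x) for x in xs]
x = 0.6                                      # a point between samples
err_lin = abs(linear_interp(xs, ys, x) - signal(x))
err_quad = abs(quadratic_interp(xs, ys, x) - signal(x))
```

Cubic splines push the error down further still, at the cost of solving a global tridiagonal system, which is the complexity/precision/reliability trade-off the abstract compares.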

  6. Effective screening strategy using ensembled pharmacophore models combined with cascade docking: application to p53-MDM2 interaction inhibitors.

    Science.gov (United States)

    Xue, Xin; Wei, Jin-Lian; Xu, Li-Li; Xi, Mei-Yang; Xu, Xiao-Li; Liu, Fang; Guo, Xiao-Ke; Wang, Lei; Zhang, Xiao-Jin; Zhang, Ming-Ye; Lu, Meng-Chen; Sun, Hao-Peng; You, Qi-Dong

    2013-10-28

    Protein-protein interactions (PPIs) play a crucial role in cellular function and form the backbone of almost all biochemical processes. In recent years, protein-protein interaction inhibitors (PPIIs) have represented a treasure trove of potential new drug targets. Unfortunately, few PPII drugs have reached the market. Structure-based pharmacophore (SBP) modeling combined with docking has been demonstrated as a useful Virtual Screening (VS) strategy in drug development projects. However, the combination of target complexity and poor binding affinity prediction has thwarted the application of this strategy in the discovery of PPIIs. Here we report an effective VS strategy for the p53-MDM2 PPI. First, we built an SBP model based on p53-MDM2 cocrystal structures. The model was then simplified by using a receptor-ligand complex-based pharmacophore model considering the critical binding features between MDM2 and its small-molecule inhibitors. Cascade docking was subsequently applied to improve the hit rate. Based on this strategy, we performed VS on the NCI and SPECS databases and successfully discovered 6 novel compounds from 15 hits, the best being compound 1 (NSC 5359) with Ki = 180 ± 50 nM. These compounds can serve as lead compounds for further optimization.

  7. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.

  8. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
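The record's central quantity, the Gaussian process posterior variance at a resampled point, can be computed directly in 1-D (a minimal sketch with a squared-exponential kernel; the kernel choice, length scale and noise level are assumptions, not the paper's settings):

```python
import math

def sq_exp(x1, x2, ell=1.0):
    # squared-exponential covariance kernel (length scale ell)
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def posterior_variance(grid, x, noise=1e-6):
    """GP posterior variance at x given samples on `grid`:
    var(x) = k(x, x) - k*^T (K + noise*I)^(-1) k*."""
    K = [[sq_exp(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(grid)] for i, xi in enumerate(grid)]
    ks = [sq_exp(xi, x) for xi in grid]
    alpha = solve(K, ks)
    return sq_exp(x, x) - sum(k * a for k, a in zip(ks, alpha))

grid = [0.0, 1.0, 2.0, 3.0]
var_on = posterior_variance(grid, 1.0)    # on a base-grid point
var_mid = posterior_variance(grid, 1.5)   # midway between grid points
```

As the abstract notes, the interpolation uncertainty depends on where the resampled point falls relative to the base grid: the posterior variance is near zero on a grid node and grows midway between nodes.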

  9. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  10. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced, but their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic kernel interpolations demonstrate a strong advantage on the whole, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide a certain superiority, but considering processing efficiency the asymmetrical interpolations are better.

  11. An adaptive interpolation scheme for molecular potential energy surfaces

    Science.gov (United States)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
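The key idea, refining only where a local error estimate exceeds a tolerance, can be sketched in 1-D (the abstract's method uses polyharmonic splines with a partition of unity in up to 4 dimensions; this simplified sketch uses linear interpolants and is purely illustrative):

```python
def adaptive_sample(f, a, b, tol=1e-3, min_width=1e-6):
    """Adaptively refine a 1-D node set: split any interval whose
    midpoint value deviates from the linear interpolant of its
    endpoints by more than `tol` (a local error estimate)."""
    pts = {a: f(a), b: f(b)}
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid - 0.5 * (pts[lo] + pts[hi])) > tol and hi - lo > min_width:
            pts[mid] = fmid
            stack.extend([(lo, mid), (mid, hi)])
    return sorted(pts)

flat = adaptive_sample(lambda x: 0.1 * x, 0.0, 1.0)     # linear: nothing to refine
curved = adaptive_sample(lambda x: x ** 4, 0.0, 1.0)    # refined where curvature is
```

Expensive electronic-structure evaluations (here stood in for by `f`) are thus spent only where the surface is hard to interpolate, which is the source of the reported savings.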

  12. Influence of spatial temperature estimation method in ecohydrologic modeling in the western Oregon Cascades

    Science.gov (United States)

    E. Garcia; C.L. Tague; J. Choate

    2013-01-01

    Most spatially explicit hydrologic models require estimates of air temperature patterns. For these models, empirical relationships between elevation and air temperature are frequently used to upscale point measurements or downscale regional and global climate model estimates of air temperature. Mountainous environments are particularly sensitive to air temperature...

  13. A case study of aerosol data assimilation with the Community Multi-scale Air Quality Model over the contiguous United States using 3D-Var and optimal interpolation methods

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2017-12-01

    Full Text Available This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS domain. Both the GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. The assimilation of observations using both GSI and OI generally helps reduce the prediction biases and improve the correlation between model predictions and observations. In the GSI experiments, assimilation of surface PM2.5 (particulate matter with diameter < 2.5 µm) leads to stronger increments in surface PM2.5 compared to the assimilation of MODIS AOD at the 550 nm wavelength. In contrast, we find a stronger OI impact of the MODIS AOD on surface aerosols at 18:00 UTC compared to the surface PM2.5 OI method. GSI produces smoother results and yields an overall better correlation coefficient and root mean squared error (RMSE). It should be noted that the 3D-Var and OI methods used here have several major differences besides the data assimilation schemes. For instance, the OI uses relatively large model uncertainties, which helps yield smaller mean biases, but sometimes causes the RMSE to increase. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.

  14. The Cascading Effects of Multiple Dimensions of Implementation on Program Outcomes: a Test of a Theoretical Model.

    Science.gov (United States)

    Berkel, Cady; Mauricio, Anne M; Sandler, Irwin N; Wolchik, Sharlene A; Gallo, Carlos G; Brown, C Hendricks

    2017-12-14

    This study tests a theoretical cascade model in which multiple dimensions of facilitator delivery predict indicators of participant responsiveness, which in turn lead to improvements in targeted program outcomes. An effectiveness trial of the 10-session New Beginnings Program for divorcing families was implemented in partnership with four county-level family courts. This study included 366 families assigned to the intervention condition who attended at least one session. Independent observers provided ratings of program delivery (i.e., fidelity to the curriculum and process quality). Facilitators reported on parent attendance and parents' competence in home practice of program skills. At pretest and posttest, children reported on parenting and parents reported child mental health. We hypothesized effects of quality on attendance, fidelity and attendance on home practice, and home practice on improvements in parenting and child mental health. Structural Equation Modeling with mediation and moderation analyses were used to test these associations. Results indicated quality was significantly associated with attendance, and attendance moderated the effect of fidelity on home practice. Home practice was a significant mediator of the links between fidelity and improvements in parent-child relationship quality and child externalizing and internalizing problems. Findings provide support for fidelity to the curriculum, process quality, attendance, and home practice as valid predictors of program outcomes for mothers and fathers. Future directions for assessing implementation in community settings are discussed.

  15. Pharmacodynamic/Pharmacogenomic Modeling of Insulin Resistance Genes in Rat Muscle After Methylprednisolone Treatment: Exploring Regulatory Signaling Cascades

    Directory of Open Access Journals (Sweden)

    Zhenling Yao

    2008-01-01

    Full Text Available Corticosteroid (CS) effects on insulin resistance-related genes in rat skeletal muscle were studied. In our acute study, adrenalectomized (ADX) rats were given single doses of 50 mg/kg methylprednisolone (MPL) intravenously. In our chronic study, ADX rats were implanted with Alzet mini-pumps giving zero-order release rates of 0.3 mg/kg/h MPL and sacrificed at various times up to 7 days. Total RNA was extracted from gastrocnemius muscles and hybridized to Affymetrix GeneChips. Data mining and literature searches identified 6 insulin resistance-related genes which exhibited complex regulatory pathways. Insulin receptor substrate-1 (IRS-1), uncoupling protein 3 (UCP3), pyruvate dehydrogenase kinase isoenzyme 4 (PDK4), fatty acid translocase (FAT) and glycerol-3-phosphate acyltransferase (GPAT) dynamic profiles were modeled with mutual effects by the calculated nuclear drug-receptor complex (DR(N)) and transcription factors. The oscillatory feature of endothelin-1 (ET-1) expression was depicted by a negative feedback loop. These integrated models provide testable quantitative hypotheses for these regulatory cascades.

  16. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
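Stripped of the CUDA-specific optimizations, the underlying GRBF interpolation solves one dense linear system for the weights and then evaluates a weighted sum of Gaussians (a 1-D CPU sketch; the shape parameter and node layout are arbitrary illustrative choices):

```python
import math

def gauss_rbf(r, eps=1.0):
    # Gaussian radial basis function phi(r) = exp(-(eps * r)**2)
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    # solve the dense collocation system A w = y, A_ij = phi(|x_i - x_j|)
    A = [[gauss_rbf(abs(xi - xj), eps) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x, eps=1.0):
    # interpolant s(x) = sum_i w_i * phi(|x - x_i|)
    return sum(wi * gauss_rbf(abs(x - xi), eps) for wi, xi in zip(w, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
w = rbf_fit(xs, ys)
err_node = abs(rbf_eval(xs, w, 1.0) - math.sin(1.0))   # exact at a node
err_mid = abs(rbf_eval(xs, w, 0.75) - math.sin(0.75))  # small between nodes
```

The evaluation step is an independent weighted sum per output point, which is what makes the method map so naturally onto the SIMT execution model the record describes.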

  17. Differential Interpolation Effects in Free Recall

    Science.gov (United States)

    Petrusic, William M.; Jamieson, Donald G.

    1978-01-01

    Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.

  18. Transfinite C2 interpolant over triangles

    International Nuclear Information System (INIS)

    Alfeld, P.; Barnhill, R.E.

    1984-01-01

    A transfinite C2 interpolant on a general triangle is created. The required data are essentially C2, no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C2 interpolants in an appendix

  19. Composite and Cascaded Generalized-K Fading Channel Modeling and Their Diversity and Performance Analysis

    KAUST Repository

    Ansari, Imran Shafique

    2010-01-01

    The introduction of new schemes that are based on the communication among nodes has motivated the use of composite fading models due to the fact that the nodes experience different multipath fading and shadowing statistics, which subsequently

  20. MODELING COLLISIONAL CASCADES IN DEBRIS DISKS: STEEP DUST-SIZE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Gáspár, András; Psaltis, Dimitrios; Rieke, George H.; Özel, Feryal

    2012-01-01

    We explore the evolution of the mass distribution of dust in collision-dominated debris disks, using the collisional code introduced in our previous paper. We analyze the equilibrium distribution and its dependence on model parameters by evolving over 100 models to 10 Gyr. With our numerical models, we confirm that systems reach collisional equilibrium with a mass distribution that is steeper than the traditional solution by Dohnanyi. Our model yields a quasi-steady-state slope of n(m) ∼ m^(−1.88) [n(a) ∼ a^(−3.65)] as a robust solution for a wide range of possible model parameters. We also show that a simple power-law function can be an appropriate approximation for the mass distribution of particles in certain regimes. The steeper solution has observable effects in the submillimeter and millimeter wavelength regimes of the electromagnetic spectrum. We assemble data for nine debris disks that have been observed at these wavelengths and, using a simplified absorption efficiency model, show that the predicted slope of the particle-mass distribution generates spectral energy distributions that are in agreement with the observed ones.
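The quoted size-distribution slope follows from the mass-distribution slope by a change of variables: for spheres of constant density m ∝ a³, so n(m) ∝ m^(−q) transforms to n(a) ∝ a^(−(3q−2)). A two-line check (illustrative arithmetic only):

```python
def size_slope_from_mass_slope(q_mass):
    """For n(m) ∝ m**(-q_mass) and m ∝ a**3 (constant density),
    substituting m(a) and multiplying by dm/da ∝ a**2 gives
    n(a) ∝ a**(-(3*q_mass - 2))."""
    return 3.0 * q_mass - 2.0

slope = size_slope_from_mass_slope(1.88)            # 3.64, cf. the quoted 3.65
dohnanyi = size_slope_from_mass_slope(11.0 / 6.0)   # 3.5, the Dohnanyi size slope
```

The small difference between 3.64 and the quoted 3.65 reflects rounding of the mass-slope value in the abstract.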

  1. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity-planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the parameter of the NURBS curve representation. Then, the velocity-planning algorithm is combined with NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
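The second-order Taylor parameter update mentioned in the abstract is standard in CNC interpolators: du/dt = v/|C′(u)| is expanded to second order so that each interpolation period T advances the parameter by nearly constant arc length. A sketch on a simple parabola (a stand-in parametric curve, not a NURBS; all values are illustrative):

```python
import math

def curve(u):     # stand-in parametric curve C(u) = (u, u**2)
    return (u, u * u)

def d1(u):        # first derivative C'(u)
    return (1.0, 2.0 * u)

def d2(u):        # second derivative C''(u)
    return (0.0, 2.0)

def taylor2_step(u, v, T):
    """Second-order Taylor update for constant-feedrate interpolation:
    u_next = u + v*T/|C'| - (v*T)**2 * (C'.C'') / (2*|C'|**4)."""
    cx, cy = d1(u)
    ax, ay = d2(u)
    s = math.hypot(cx, cy)        # |C'(u)|
    dot = cx * ax + cy * ay       # C'(u) . C''(u)
    return u + v * T / s - (v * T) ** 2 * dot / (2.0 * s ** 4)

u = 0.5
v, T = 2.0, 0.001                 # feedrate and interpolation period
u_next = taylor2_step(u, v, T)
x0, y0 = curve(u)
x1, y1 = curve(u_next)
chord = math.hypot(x1 - x0, y1 - y0)   # distance covered in one period
```

The chord traversed in one period stays very close to the commanded v·T, which is what keeps the feedrate fluctuation (and hence the interpolation error) small.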

  2. Development of a Higher Fidelity Model for the Cascade Distillation Subsystem (CDS)

    Science.gov (United States)

    Perry, Bruce; Anderson, Molly

    2014-01-01

    Significant improvements have been made to the ACM model of the CDS, enabling accurate predictions of dynamic operations with fewer assumptions. The model has been utilized to predict how CDS performance would be impacted by changing operating parameters, revealing performance trade-offs and possibilities for improvement. CDS efficiency is driven by the THP coefficient of performance, which in turn is dependent on heat transfer within the system. Based on the remaining limitations of the simulation, priorities for further model development include: • relaxing the assumption of total condensation • incorporating dynamic simulation capability for the buildup of dissolved inert gases in condensers • examining CDS operation with more complex feeds • extending heat transfer analysis to all surfaces

  3. A Scalable Approach to Modeling Cascading Risk in the MDAP Network

    Science.gov (United States)

    2014-05-01

    Populate Decision Process Model. • Identify challenges to data acquisition. Legend: ATIE_MOD Automated Text & Image Extraction Module; IID_MOD... DAES, PE docs, SARS – Topic models built from MDAP hub data seem to be relevant to neighbors. – Challenges: formatting and content inconsistencies

  4. Laos Organization Name Using Cascaded Model Based on SVM and CRF

    Directory of Open Access Journals (Sweden)

    Duan Shaopeng

    2017-01-01

    Full Text Available According to the characteristics of Laos organization names, this paper proposes a two-layer model based on conditional random fields (CRF) and support vector machines (SVM) for Laos organization name recognition. The first layer uses CRF to recognize simple organization names, and its result is used to support the decisions of the second layer. Based on the driving method, the second layer uses SVM and CRF to recognize complicated organization names. Finally, the results of the two layers are combined, and a subsequent treatment corrects low-confidence recognition results. The results of an open test on real corpora show that this approach based on SVM and CRF is efficient in recognizing organization names, with a recall of 80.83% and a precision of 82.75%.

  5. Application of ordinary kriging for interpolation of micro-structured technical surfaces

    International Nuclear Information System (INIS)

    Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg

    2013-01-01

    Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of different effects. These data points need to be interpolated to obtain a complete area for further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of the different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model and provides an unbiased, linear estimation with minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface, which comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adapts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
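    The ordinary kriging estimator described above can be sketched in a few lines. Everything below is an illustrative assumption (a 1-D setting and a fixed exponential variogram model; the paper works on 3-D topographies and fits the variogram to the measured surface): the weights solve the usual kriging system with a Lagrange multiplier enforcing that they sum to one (unbiasedness).

```python
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new,
                     variogram=lambda h: 1.0 - np.exp(-h / 0.5)):
    """Estimate values at x_new from scattered 1-D observations by
    ordinary kriging.  `variogram` is an assumed exponential model; in
    practice it is fitted to the empirical variogram of the data."""
    n = len(x_obs)
    # Kriging system: gamma(x_i, x_j) bordered by a row/column of ones
    # (Lagrange multiplier enforcing sum of weights == 1).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(np.abs(x_obs[:, None] - x_obs[None, :]))
    A[n, n] = 0.0
    est = []
    for x0 in np.atleast_1d(x_new):
        b = np.ones(n + 1)
        b[:n] = variogram(np.abs(x_obs - x0))
        w = np.linalg.solve(A, b)[:n]   # kriging weights
        est.append(w @ y_obs)
    return np.array(est)
```

Because the variogram vanishes at zero lag, the estimator reproduces the observed values exactly at the measured points, which is the behavior one wants when only invalid pixels are filled in.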

  6. Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms

    Directory of Open Access Journals (Sweden)

    Wassim M. Haddad

    2014-07-01

    Full Text Available Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a

  7. Abnormal cascading failure spreading on complex networks

    International Nuclear Information System (INIS)

    Wang, Jianwei; Sun, Enhui; Xu, Bo; Li, Peng; Ni, Chengzhang

    2016-01-01

    Applying the mechanism of preferential selection of the flow destination, we develop a new method to quantify the initial load on an edge, whose flow is transported along the path with the smallest total edge weight between two nodes. Considering the node weight, we propose a cascading model on the edge and investigate the cascading dynamics induced by the removal of the edge with the largest load. We perform simulated attacks on four types of constructed networks and two actual networks and observe an interesting and counterintuitive phenomenon of cascading spreading: gradually improving the capacity of nodes does not lead to a monotonic increase in the robustness of these networks against cascading failures. This non-monotonic behavior of the cascading dynamics is well explained by analysis on a simple graph. We additionally study the effect of the node-weight parameter on the cascading dynamics and evaluate the network robustness by a new metric.
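    As a toy illustration of such load-capacity cascades (a generic sketch, not the authors' edge-weight formulation, which routes flow along smallest-weight paths), the following removes the most heavily loaded node and redistributes its load equally among surviving neighbours; a node fails whenever its load exceeds a capacity set proportional to its initial load. All names and the equal-redistribution rule are assumptions for illustration.

```python
def cascade(adj, loads, tolerance):
    """Simulate a load-redistribution cascade on an undirected graph.

    adj: dict node -> set of neighbours; loads: dict node -> initial load.
    Capacity of each node is (1 + tolerance) * initial load.  The most
    loaded node is removed first; its load is shared equally among its
    surviving neighbours, which fail in turn if capacity is exceeded.
    Returns the set of failed nodes."""
    capacity = {v: (1 + tolerance) * loads[v] for v in loads}
    load = dict(loads)
    failed = set()
    queue = [max(load, key=load.get)]          # attack the most loaded node
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        nbrs = [u for u in adj[v] if u not in failed]
        if nbrs:
            share = load[v] / len(nbrs)        # equal redistribution
            for u in nbrs:
                load[u] += share
                if load[u] > capacity[u]:
                    queue.append(u)
    return failed
```

On a star graph, for instance, a small tolerance lets the hub's load overwhelm every leaf, while a large tolerance confines the failure to the hub alone.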

  8. Precursors of adolescent substance use from early childhood and early adolescence: testing a developmental cascade model.

    Science.gov (United States)

    Sitnick, Stephanie L; Shaw, Daniel S; Hyde, Luke W

    2014-02-01

    This study examined developmentally salient risk and protective factors of adolescent substance use assessed during early childhood and early adolescence using a sample of 310 low-income boys. Child problem behavior and proximal family risk and protective factors (i.e., parenting and maternal depression) during early childhood, as well as child and family factors and peer deviant behavior during adolescence, were explored as potential precursors to later substance use during adolescence using structural equation modeling. Results revealed that early childhood risk and protective factors (i.e., child externalizing problems, mothers' depressive symptomatology, and nurturant parenting) were indirectly related to substance use at the age of 17 via risk and protective factors during early and middle adolescence (i.e., parental knowledge and externalizing problems). The implications of these findings for early prevention and intervention are discussed.

  9. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many of the desirable properties of Rasch and Williamson (1990) and Machenhauer et al. (2008) as possible. The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with the modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling; cascade interpolation is computationally more efficient; the modified interpolation weights assure mass conservation; and the locally mass conserving monotonic filter imposes monotonicity.
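    A minimal 1-D sketch of the classical (non-conserving) SL step with cubic Lagrange interpolation may help fix ideas. It assumes constant velocity and a periodic grid, which are simplifications for illustration; the cascade, LMCSL, and monotonic-filter variants discussed above modify only the interpolation stage.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1-D grid.

    Each grid point traces back to its departure point x - u*dt, where the
    field is evaluated by cubic Lagrange interpolation from the four
    surrounding grid values."""
    n = len(q)
    s = u * dt / dx                  # displacement in grid units
    g = int(np.floor(-s))            # integer part of the back-trajectory
    a = -s - g                       # fractional part in [0, 1)
    # cubic Lagrange weights at stencil offsets -1, 0, 1, 2 from the base
    w = (-a * (a - 1) * (a - 2) / 6,
         (a + 1) * (a - 1) * (a - 2) / 2,
         -(a + 1) * a * (a - 2) / 2,
         (a + 1) * a * (a - 1) / 6)
    idx = np.arange(n)
    out = np.zeros_like(q, dtype=float)
    for wk, off in zip(w, (-1, 0, 1, 2)):
        out += wk * q[(idx + g + off) % n]
    return out
```

For an integer Courant number the step reduces to an exact grid shift; for fractional displacements the cubic interpolation introduces the mild damping and dispersion that the comparison with ASD probes.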

    All schemes are tested with advection alone or with advection and chemistry together under both typical rural and urban conditions using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes were, in almost every case, unable to outperform the current ASD scheme

  10. Study of fission cross sections induced by nucleons and pions using the cascade-exciton model CEM95

    International Nuclear Information System (INIS)

    Yasin, Z.; Shahzad, M. I.

    2007-01-01

    Nucleon- and pion-induced fission cross sections at intermediate and higher energies are important in current nuclear applications such as accelerator-driven systems (ADS), in medicine, and for radiation effects on electronics. In the present work, microscopic fission cross sections induced by nucleons and pions are calculated using the cascade-exciton model code CEM95 for different projectile-target combinations at various energies, and the computed cross sections are compared with the experimental data found in the literature. A new approach is used to compute the fission cross sections, in which the ratio of the level density parameter in the fission channel to that in the neutron emission channel is allowed to change with the incident energy of the projectile. Without this new approach we are unable to describe the fission cross sections well. Proton-induced fission cross sections are calculated for targets 197Au, 208Pb, 209Bi, 238U and 239Pu in the energy range from 20 MeV to 2000 MeV. Neutron-induced fission cross sections are computed for 238U and 239Pu in the energy range from 20 MeV to 200 MeV. Negative-pion-induced fission cross sections are calculated for targets 197Au and 208Pb in the 50 MeV to 2500 MeV energy range. The calculated cross sections are essential for building a data library file for accelerator-driven systems, just as was done for conventional nuclear reactors. The computed values exhibit reasonable agreement with the experimental values found in the literature across a wide range of beam energies

  11. Local measurement and numerical modeling of mass/heat transfer from a turbine blade in a linear cascade with tip clearance

    Science.gov (United States)

    Jin, Peitong

    2000-11-01

    Local mass/heat transfer measurements from the turbine blade near-tip and tip surfaces are performed using the naphthalene sublimation technique. The experiments are conducted in a linear cascade consisting of five high-pressure blades with a central test-blade configuration. The incoming flow conditions are close to those of the gas turbine engine environment (boundary layer displacement thickness is about 0.01 of chord) with an exit Reynolds number of 6.2 × 10^5. The effects of tip clearance level (0.86%–6.90% of chord), mainstream Reynolds number and turbulence intensity (0.2% and 12.0%) are investigated. Two methods of flow visualization (oil and lampblack, and laser light sheet with smoke wire), as well as static pressure measurement on the blade surface, are used to study the tip leakage flow and vortex in the cascade. In addition, numerical modeling of the flow and heat transfer processes in the linear cascade with different tip clearances is conducted using commercial software incorporating advanced turbulence models. The present study confirms many important results on the tip leakage flow and vortex from the literature, contributes to the current understanding of the effects of tip leakage flow and vortex on local heat transfer from the blade near-tip and tip surfaces, and provides detailed local and average heat/mass transfer data applicable to turbine blade tip cooling design.

  12. Developmental cascade models of a parenting-focused program for divorced families on mental health problems and substance use in emerging adulthood.

    Science.gov (United States)

    Wolchik, Sharlene A; Tein, Jenn-Yun; Sandler, Irwin N; Kim, Han-Joe

    2016-08-01

    A developmental cascade model from functioning in adolescence to emerging adulthood was tested using data from a 15-year longitudinal follow-up of 240 emerging adults whose families participated in a randomized, experimental trial of a preventive program for divorced families. Families participated in the program or literature control condition when the offspring were ages 9-12. Short-term follow-ups were conducted 3 months and 6 months following completion of the program when the offspring were in late childhood/early adolescence. Long-term follow-ups were conducted 6 years and 15 years after program completion when the offspring were in middle to late adolescence and emerging adulthood, respectively. It was hypothesized that the impact of the program on mental health and substance use outcomes in emerging adulthood would be explained by developmental cascade effects of program effects in adolescence. The results provided support for a cascade effects model. Specifically, academic competence in adolescence had cross-domain effects on internalizing problems and externalizing problems in emerging adulthood. In addition, adaptive coping in adolescence was significantly, negatively related to binge drinking. It was unexpected that internalizing symptoms in adolescence were significantly negatively related to marijuana use and alcohol use. Gender differences occurred in the links between mental health problems and substance use in adolescence and mental health problems and substance use in emerging adulthood.

  13. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  14. Record of the month: Interpol "Antics". Records from the Lasering store

    Index Scriptorium Estoniae

    2005-01-01

    On the records: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"

  15. Revisiting Veerman’s interpolation method

    DEFF Research Database (Denmark)

    Christiansen, Peter; Bay, Niels Oluf

    2016-01-01

    This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE-simulations is reasonable.

  16. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  17. Including local rainfall dynamics and uncertain boundary conditions into a 2-D regional-local flood modelling cascade

    Science.gov (United States)

    Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo

    2016-04-01

    Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. Improvements in model performance when incorporating these processes are quantified using observed

  18. Contingency Analysis of Cascading Line Outage Events

    Energy Technology Data Exchange (ETDEWEB)

    Thomas L Baldwin; Magdy S Tawfik; Miles McQueen

    2011-03-01

    As the US power systems continue to increase in size and complexity, including the growth of smart grids, larger blackouts due to cascading outages become more likely. Grid congestion is often associated with a cascading collapse leading to a major blackout. Such a collapse is characterized by a self-sustaining sequence of line outages followed by a topology breakup of the network. This paper addresses the implementation and testing of a process for N-k contingency analysis and sequential cascading outage simulation in order to identify potential cascading modes. A modeling approach described in this paper offers a unique capability to identify initiating events that may lead to cascading outages. It predicts the development of cascading events by identifying and visualizing potential cascading tiers. The proposed approach was implemented using a 328-bus simplified SERC power system network. The results of the study indicate that initiating events and possible cascading chains may be identified, ranked and visualized. This approach may be used to improve the reliability of a transmission grid and reduce its vulnerability to cascading outages.

  19. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed.
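    The frequency-domain integration idea is simple to sketch: divide each Fourier coefficient by iω and zero the DC term (so the integration constant and any mean offset are lost and must be supplied separately). This is a generic illustration of the technique, not Stearns' exact procedure; it assumes the record is effectively periodic.

```python
import numpy as np

def integrate_fft(x, dt):
    """Integrate a sampled waveform in the frequency domain.

    The spectrum is divided by (i * omega); the zero-frequency term is
    set to zero, discarding the constant of integration."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    Y = np.zeros_like(X)
    nz = omega != 0
    Y[nz] = X[nz] / (1j * omega[nz])
    return np.fft.ifft(Y).real
```

For a pure harmonic that falls exactly on an FFT bin the result is exact to machine precision, e.g. integrating cos(2πt) recovers sin(2πt)/(2π).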

  20. Wideband DOA Estimation through Projection Matrix Interpolation

    OpenAIRE

    Selva, J.

    2017-01-01

    This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

  1. Interpolation for a subclass of H

    Indian Academy of Sciences (India)

    |g(z_m)| ≤ c |z_m − z_m^*|, ∀m ∈ ℕ. Thus it is natural to pose the following interpolation problem for H^∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H^∞ if given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z_n^*) |z_n − z_n^*|, ∀n ∈ ℕ, (4) there exists a product fg ∈ H^∞

  2. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

    Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
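    The shape-preservation point can be illustrated with a simplified 2-D analogue (an assumption for illustration only; the published LI method treats full 3-D tensors and a complete set of orthogonal invariants): eigenvalues and the major-eigenvector angle are interpolated linearly and the tensor is rebuilt, whereas Euclidean interpolation averages components and can swell the shape.

```python
import numpy as np

def eu_interp(A, B, t):
    """Euclidean interpolation: component-wise average (may distort shape)."""
    return (1 - t) * A + t * B

def li_interp2d(A, B, t):
    """Toy 2-D 'linear invariant' interpolation: eigenvalues (shape) and
    the major-eigenvector angle are each interpolated linearly, then the
    symmetric tensor is rebuilt from them (illustrative sketch only)."""
    wa, Va = np.linalg.eigh(A)            # ascending eigenvalues
    wb, Vb = np.linalg.eigh(B)
    w = (1 - t) * wa + t * wb             # linear in the invariants
    tha = np.arctan2(Va[1, 1], Va[0, 1])  # major-eigenvector angle
    thb = np.arctan2(Vb[1, 1], Vb[0, 1])
    th = (1 - t) * tha + t * thb
    v = np.array([np.cos(th), np.sin(th)])    # major axis
    u = np.array([-np.sin(th), np.cos(th)])   # minor axis
    return w[1] * np.outer(v, v) + w[0] * np.outer(u, u)
```

Midway between two equally anisotropic tensors rotated 90° apart, the invariant scheme keeps the eigenvalues fixed, while the Euclidean average collapses to an isotropic tensor, which is exactly the microstructural bias the paper measures.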

  3. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material of electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameter of mixed media. In order to accurately predict the electromagnetic parameter of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixture of paraffin base, this paper studied two different interpolation methods: Lagrange interpolation and Hermite interpolation of electromagnetic parameters. The results showed that Hermite interpolation is more accurate than the Lagrange interpolation, and the reflectance calculated with the electromagnetic parameter obtained by interpolation is consistent with that obtained through experiment on the whole. - Highlights: • We use interpolation algorithm on calculation of EM-parameter with limited samples. • Interpolation method can predict EM-parameter well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
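    To make the Lagrange-versus-Hermite comparison concrete, here is a generic sketch of the two interpolants (not the paper's implementation; all names are illustrative). The key difference is that Hermite interpolation additionally uses slope information at the nodes, which is what lets it track the underlying curve more closely.

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial
        total += yi * li
    return total

def hermite_eval(xs, ys, dys, x):
    """Evaluate a piecewise cubic Hermite interpolant (values + slopes)."""
    k = np.searchsorted(xs, x) - 1
    k = min(max(k, 0), len(xs) - 2)          # clamp to a valid interval
    h = xs[k + 1] - xs[k]
    t = (x - xs[k]) / h
    # standard cubic Hermite basis functions
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * ys[k] + h * h10 * dys[k]
            + h01 * ys[k + 1] + h * h11 * dys[k + 1])
```

Both reproduce the measured values at the sample points; between samples they differ, and the extra derivative data is the reason a Hermite fit can be the more accurate predictor of a smoothly varying electromagnetic parameter.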

  4. Rescuing Ecosystems from Extinction Cascades

    Science.gov (United States)

    Sahasrabudhe, Sagar; Motter, Adilson

    2010-03-01

    Food web perturbations stemming from climate change, overexploitation, invasive species, and natural disasters often cause an initial loss of species that results in a cascade of secondary extinctions. Using a predictive modeling framework, here we will present a systematic network-based approach to reduce the number of secondary extinctions. We will show that the extinction of one species can often be compensated by the concurrent removal of a second specific species, which is a counter-intuitive effect not previously tested in complex food webs. These compensatory perturbations frequently involve long-range interactions that are not a priori evident from local predator-prey relationships. Strikingly, in numerous cases even the early removal of a species that would eventually be extinct by the cascade is found to significantly reduce the number of cascading extinctions. Other nondestructive interventions based on partial removals and growth suppression and/or mortality increase are shown to sometimes prevent all secondary extinctions.

  5. Modelo digital do terreno através de diferentes interpolações do programa Surfer 12 | Digital terrain model through different interpolations in the surfer 12 software

    Directory of Open Access Journals (Sweden)

    José Machado

    2016-04-01

    interpolation of the measured points is required to build the DTM. The use of DTMs, 3D surfaces and contour lines in fast computer programs can raise some problems, such as the choice of the interpolation method. This work analyzes the interpolation methods applied to spot heights of an irregular geometric figure generated in the Surfer program. The 12 available interpolators were used (Data Metrics, Inverse Distance, Kriging, Local Polynomial, Minimum Curvature, Modified Shepard's Method, Moving Average, Natural Neighbor, Nearest Neighbor, Polynomial Regression, Radial Function and Triangulation with Linear Interpolation) and the generated topographic maps were analyzed. The graphical representation of the relief was generated via the DTM. The relief representations were rated as bad, regular, average, good, great or excellent according to their fidelity to the reference geometric image: Data Metrics, Polynomial Regression, Moving Average and Local Polynomial (bad); Modified Shepard's Method (regular); Nearest Neighbor (average); Inverse Distance (good); Kriging and Radial Function (great); and Triangulation with Linear Interpolation and Natural Neighbor (excellent).

  6. Human resource assignment and role representation mechanism with the "cascading staff-group authoring" and "relation/situation" model.

    Science.gov (United States)

    Hirose, Y; Sasaki, Y; Kinoshita, A

    2001-01-01

    We have previously reported the access control mechanism and audit strategy of the "patient-doctor relation and clinical situation at the point-of-care" model with a multi-axial access control matrix (ACM). This mechanism overcomes the deficit of the ACM in the aspect of data accessibility, but does not resolve the representation of a staff member's affiliation and/or plural memberships in the complex real world. Care groups inside a department or inter-departmental clinical teams play significant clinical roles and also account for a great amount of time and money in the hospital. The impact of human resource assignment and the cost of such stakeholders on hospital management are therefore huge, so they should be accurately treated in the hospital information system. However, the multi-axial ACM has problems representing staff groups through static parameters such as department/license, because staff members may belong to a group only temporarily and/or may belong to plural groups. As a solution, we have designed and implemented a "cascading staff-group authoring" method with the "relation and situation" model and multi-axial ACM. In this mechanism, (i) a system administrator certifies a "group chief certifying person" according to the request and authorization of the department director, (ii) the "group chief certifying person" certifies "group chief(s)", and (iii) the "group chief" recruits members from the medical staff, and at the same time decides the profit distribution policy of the group. This enables medical staff to access the EMR according to the role he/she plays, whether as a department staff member or as a group member. This solution has worked successfully over the past few years. It provides end-users with a flexible, on-demand staff-group authoring environment using a simple human-interfaced tool, without security breach and without system administration cost. In addition, profit and cost distribution is clarified among departments and

  7. Dynamic Stability Analysis Using High-Order Interpolation

    Directory of Open Access Journals (Sweden)

    Juarez-Toledo C.

    2012-10-01

    Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The high-order interpolation technique developed can be used for evaluating the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the high-order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the high-order technique to isolate and extract temporal modal behavior.
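    A generic sketch of the divided-difference machinery such a formulation relies on (illustrative only; the paper applies it to trajectories of the 45-machine system): coefficients are built column by column from the difference table, and the Newton-form polynomial is evaluated with a Horner-like nested scheme.

```python
import numpy as np

def divided_differences(xs, ys):
    """Newton divided-difference coefficients c[k] = f[x_0, ..., x_k]."""
    c = np.array(ys, dtype=float)
    n = len(xs)
    for k in range(1, n):
        # overwrite in place, one column of the difference table at a time
        c[k:] = (c[k:] - c[k - 1:-1]) / (xs[k:] - xs[:n - k])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton-form polynomial with nested multiplication."""
    r = c[-1]
    for k in range(len(c) - 2, -1, -1):
        r = r * (x - xs[k]) + c[k]
    return r
```

A practical advantage of the Newton form over a direct Lagrange evaluation is that adding one more sample point extends the coefficient table without recomputing the earlier coefficients.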

  8. Phenotypic and evolutionary implications of modulating the ERK-MAPK cascade using the dentition as a model

    Science.gov (United States)

    Marangoni, Pauline; Charles, Cyril; Tafforeau, Paul; Laugel-Haushalter, Virginie; Joo, Adriane; Bloch-Zupan, Agnès; Klein, Ophir D.; Viriot, Laurent

    2015-06-01

    The question of phenotypic convergence across a signalling pathway has important implications for both developmental and evolutionary biology. The ERK-MAPK cascade is known to play a central role in dental development, but the relative roles of its components remain unknown. Here we investigate the diversity of dental phenotypes in Spry2-/-, Spry4-/-, and Rsk2-/Y mice, including the incidence of extra teeth, which were lost in the mouse lineage 45 million years ago (Ma). In addition, Sprouty-specific anomalies mimic a phenotype that is absent in extant mice but present in mouse ancestors prior to 9 Ma. Although the mutant lines studied display convergent phenotypes, each gene has a specific role in tooth number determination and crown patterning. The similarities found between teeth in fossils and mutants highlight the pivotal role of the ERK-MAPK cascade during the evolution of the dentition in rodents.

  9. Toward a parallel and cascading model of the writing system: A review of research on writing processes coordination

    OpenAIRE

    Thierry Olive

    2014-01-01

    Efficient coordination of the different writing processes is central to producing good-quality texts, and is a fundamental component of writing skill. In this article, I propose a general theoretical framework for considering how writing processes are coordinated, in which writing processes are concurrently activated with more or less overlap between processes depending on their working memory demands, and with the flow of information cascading from central to peripheral levels of processing....

  10. Investigation of cascade regions of damage in alpha iron by a computer simulation method (crystal model). Issledovaniye kaskadnykh oblastey povrezhdeniya v α-zheleze metodom mashinnogo modelirovaniya (kristallicheskaya model')

    Energy Technology Data Exchange (ETDEWEB)

    Kevorkyan, Yu R

    1974-01-01

    A SPIKE program is used to study regions of structural damage that arise as a result of cascades of atomic collisions in single-crystal alpha iron. The model of the cascade process realized in the program uses a pair collision approximation and accounts for the influence of the crystal structure of the material. The following characteristics of regions of damage are found as a function of the energy of the primary knock-on atom: volume of the region, displacement effectiveness, size distribution of complexes of vacancies and injections. The results are compared with data in the literature. An appendix gives the text of the SPIKE program in FORTRAN.

  11. Modeling the Vakhsh Cascade in the Amu Darya River Basin - Implementing Future Storage Facilities in a Hydrological Model for Impact Assessment

    Science.gov (United States)

    Steiner, J. F.; Siegfried, T.; Yakovlev, A.

    2014-12-01

    In the Amu Darya River Basin in Central Asia, the Vakhsh catchment in Tajikistan is a major source of hydropower energy for the country. With a number of large dams already constructed, upstream Tajikistan is interested in the construction of one more large dam and a number of smaller storage facilities, with the prospect of supplying its neighboring states with hydropower through a newly planned power grid. The impact of new storage facilities along the river is difficult to estimate and causes considerable concern among the downstream users. Today, it is one of the vexing poster-child studies in international water conflict that awaits resolution. With a lack of meteorological data and a complex topography that makes the application of remotely sensed data difficult, it is a challenge to model runoff correctly. Large parts of the catchment are glacierized, and elevations range from just 500 m asl to peaks above 7000 m asl. Based on in-situ time series for temperature and precipitation, we find local correction factors for remotely sensed products. Using these data we employ a model based on the Budyko framework with an extension for snow and ice in the higher altitude bands. The model furthermore accounts for groundwater and soil storage. Runoff data from a number of stations are used for the calibration of the model parameters. With an accurate representation of the existing and planned reservoirs in the Vakhsh cascade, we study the potential impacts of the construction of the new large reservoir in the river. Impacts are measured in terms of a) the timing and availability of new hydropower energy, also in light of its potential for export to South Asia, b) shifting challenges with regard to river sediment loads and siltation of reservoirs, and c) impacts on downstream runoff and the timely availability of irrigation water there. 
With our coupled hydro-climatological approach, the challenges of optimal cascade management can be addressed so as to minimize detrimental

  12. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of direct digital periapical images by applying edge-detect interpolation. Image processing was performed on 20 digital periapical images using pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect on edges. 3. Edge-sensitive interpolation overcame the smoothing effect and produced better images.
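
    As a hedged illustration of why an edge-sensitive scheme wins (a generic sketch, not the study's actual algorithm): when a new pixel is estimated at the centre of a 2×2 neighbourhood, plain averaging mixes values across an edge, while an edge-sensitive rule averages only along the diagonal with the smaller intensity difference, keeping the edge sharp.

```python
def upsample_block(nw, ne, sw, se):
    """Estimate the new centre pixel of a 2x2 neighbourhood two ways."""
    # Plain (bilinear-style) averaging: mixes values across any edge.
    bilinear = (nw + ne + sw + se) / 4.0
    # Edge-sensitive rule: average along the diagonal with the smaller
    # intensity difference, i.e. along the edge rather than across it.
    if abs(nw - se) <= abs(ne - sw):
        edge = (nw + se) / 2.0
    else:
        edge = (ne + sw) / 2.0
    return bilinear, edge

# A bright diagonal line (nw/se) on a dark background (ne/sw):
# averaging dims the line, the edge-sensitive estimate preserves it.
b, e = upsample_block(100.0, 0.0, 10.0, 100.0)
```

Pixel replication, by contrast, simply copies the nearest sample, which is what produces the blocking artifacts noted above.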

  13. Predicting the occurrence of channelized debris flow by an integrated cascading model: A case study of a small debris flow-prone catchment in Zhejiang Province, China

    Science.gov (United States)

    Wei, Zhen-lei; Xu, Yue-Ping; Sun, Hong-yue; Xie, Wei; Wu, Gang

    2018-05-01

    Excessive water in a channel is an important factor that triggers channelized debris flows. Floods and debris flows often occur in a cascading manner, and thus, calculating the amount of runoff accurately is important for predicting the occurrence of debris flows. In order to explore the runoff-rainfall relationship, we placed two measuring facilities at the outlet of a small, debris flow-prone headwater catchment to explore its hydrological response. The runoff responses generally consisted of a rapid increase in runoff followed by a slower decrease, and the peak runoff often occurred after the rainfall ended. The runoff discharge data were simulated by two different modeling approaches, i.e., the NAM model and the Hydrologic Engineering Center-Hydrologic Modeling System (HEC-HMS) model. The NAM model performed better, providing acceptable simulations where the HEC-HMS model did not. We then coupled the calculated results of the NAM model with an empirically based debris flow initiation model to obtain a new integrated cascading disaster modeling system for improved disaster preparedness and hazard management. In this case study, we found that the coupled model could correctly predict the occurrence of debris flows. Furthermore, we evaluated the effect of the range of input parameter values on the shape of the runoff hydrograph, and used grey relational analysis to conduct a sensitivity analysis of the model parameters. This study highlighted the important connections between rainfall, hydrological processes, and debris flow, and it provides a useful prototype model system for operational forecasting of debris flows.

  14. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    Energy Technology Data Exchange (ETDEWEB)

    Miranda-Quintana, Ramón Alain [Laboratory of Computational and Theoretical Chemistry, Faculty of Chemistry, University of Havana, Havana (Cuba); Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models) in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.

  15. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st- and 2nd-order interpolation, and we present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation capabilities of multilayer perceptrons, using various network configurations for 1st- and 2nd-order interpolation. The results are compared by means of tables.

  16. Regional subsidence modelling in Murcia city (SE Spain) using 1-D vertical finite element analysis and 2-D interpolation of ground surface displacements

    Directory of Open Access Journals (Sweden)

    S. Tessitore

    2015-11-01

    Full Text Available Subsidence is a hazard that may have natural or anthropogenic origin and causes important economic losses. The area of Murcia city (SE Spain) has been affected by subsidence due to groundwater overexploitation since the year 1992. The main observed historical piezometric level declines occurred in the periods 1982–1984, 1992–1995 and 2004–2008 and showed a close correlation with the temporal evolution of ground displacements. Since 2008, the pressure recovery in the aquifer has led to an uplift of the ground surface that has been detected by the extensometers. In the present work an elastic hydro-mechanical finite element code has been used to compute the subsidence time series for 24 geotechnical boreholes, prescribing the measured groundwater table evolution. The achieved results have been compared with the displacements estimated through an advanced DInSAR technique and measured by the extensometers. These spatio-temporal comparisons have shown that, in spite of the limited geomechanical data available, the model satisfactorily reproduces the subsidence phenomenon affecting Murcia city. The model will allow the prediction of future induced deformations and the consequences of any piezometric level variation in the study area.

  17. Trophic cascades of bottom-up and top-down forcing on nutrients and plankton in the Kattegat, evaluated by modelling

    DEFF Research Database (Denmark)

    Petersen, Marcell Elo; Maar, Marie; Larsen, Janus

    2017-01-01

    The aim of the study was to investigate the relative importance of bottom-up and top-down forcing on trophic cascades in the pelagic food-web and the implications for water quality indicators (summer phytoplankton biomass and winter nutrients) in relation to management. The 3D ecological model.... On an annual basis, the system was more bottom-up than top-down controlled. Microzooplankton was found to play an important role in the pelagic food web as a mediator of nutrient and energy fluxes. This study demonstrated that the best scenario for improved water quality was a combined reduction in nutrient...

  18. Broadband external cavity quantum cascade laser based sensor for gasoline detection

    Science.gov (United States)

    Ding, Junya; He, Tianbo; Zhou, Sheng; Li, Jinsong

    2018-02-01

    A new type of tunable laser spectroscopy sensor based on an external cavity quantum cascade laser (ECQCL) and a quartz crystal tuning fork (QCTF) was used for quantitative analysis of volatile organic compounds. In this work, the sensor system was tested on the analysis of different gasoline samples. For signal processing, a self-established interpolation algorithm and a multiple linear regression model were used for quantitative analysis of the major volatile organic compounds in the gasoline samples. The results were very consistent with the standard spectra taken from the Pacific Northwest National Laboratory (PNNL) database. In the future, the ECQCL sensor will be used for trace explosive, chemical warfare agent, and toxic industrial chemical detection and spectroscopic analysis.

  19. Image interpolation via graph-based Bayesian label propagation.

    Science.gov (United States)

    Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

    2014-03-01

    In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the interpolation problem then becomes one of effectively propagating the label information from known points to unknown ones. This process can be posed as Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term is minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function that combines the global loss of the locally linear regression, the squared error of the prediction bias on the available LR samples, and the manifold regularization term. As a convex optimization problem, it can be solved with a closed-form solution. Experimental results demonstrate that the proposed method achieves competitive performance with state-of-the-art image interpolation algorithms.
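
    The closed-form step can be sketched on a toy graph. The code below implements only the generic graph-Laplacian part — minimising ||f - y||^2 + lam * f'Lf, which gives (I + lam*L)f = y — while the paper's local regression models and loss weighting are omitted, and the graph weights are invented for illustration.

```python
def laplacian(W):
    """Graph Laplacian L = D - W from a symmetric weight matrix."""
    n = len(W)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def propagate(W, y, lam=1.0):
    """Labels f minimising ||f - y||^2 + lam * f'Lf, i.e. (I + lam*L)f = y."""
    L = laplacian(W)
    n = len(W)
    A = [[(1.0 if i == j else 0.0) + lam * L[i][j] for j in range(n)]
         for i in range(n)]
    return solve(A, y)

# Three pixels in a chain with observed labels 0, 0.5, 1:
# propagation smooths the end labels toward their neighbour.
f = propagate([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]],
              [0.0, 0.5, 1.0])
```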

  20. New families of interpolating type IIB backgrounds

    Science.gov (United States)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T^2 fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N=2 or N=1 supersymmetry. In the N=2 case it can be shown that the solutions are regular.

  1. Interpolation of quasi-Banach spaces

    International Nuclear Information System (INIS)

    Tabacco Vignati, A.M.

    1986-01-01

    This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces.

  2. Quadratic Interpolation and Linear Lifting Design

    Directory of Open Access Journals (Sweden)

    Joel Solé

    2007-03-01

    Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR, and coding results are also given for the new update lifting steps.

  3. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  4. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine-scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully implicit time-marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.

  5. Novel interpolative perturbation theory for quantum anharmonic oscillators in one and three dimensions and the ground state of a lattice model of the phi4 field theory

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1977-01-01

    A new method for approximating the eigenfunctions and eigenvalues of anharmonic oscillators is presented. An attempt was made to develop an analytic method which provides simple formulae for all values of the parameters, as the W.K.B. approximation and perturbation theory do for certain limiting cases, and which has the convergence properties associated with computer methods. The procedure is based upon combining knowledge of the asymptotic behavior of the wave function for large and small values of the coordinate(s) to obtain approximations valid for all values of the coordinate(s) and all strengths of the anharmonicity. A systematic procedure for improving these approximations is developed. Finally, the method is applied to the ground state of a lattice model of the phi^4 field theory, which consists of an infinite number of coupled anharmonic oscillators. A first-order calculation yields a covariant expression for the ground-state eigenvalue with the physical mass, m, given by a characteristic polynomial which involves the bare mass, μ, the lattice spacing, l, and the coupling constant, lambda. For l > 0, μ can be adjusted (a mass renormalization) so that 0 < m < infinity. As l → 0, lambda(l) (a charge renormalization) is adjusted so that lambda^(1/3)/l → eta, a constant. Then eta can be chosen so that m can take any experimental value.

  6. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    Full Text Available This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been carried out in detail. In terms of Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  7. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections, and a weighted overlap of signals transformed step-by-step into the spectral domain, improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  8. Cascade quantum teleportation

    Institute of Scientific and Technical Information of China (English)

    ZHOU Nan-run; GONG Li-hua; LIU Ye

    2006-01-01

    In this letter a cascade quantum teleportation scheme is proposed. The proposed scheme needs fewer local quantum operations than quantum multi-teleportation. A quantum teleportation scheme based on entanglement swapping is also presented and compared with the cascade scheme. Both schemes can effectively teleport quantum information and extend the distance of quantum communication.

  9. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time, and it remains challenging to model the heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in the spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in the spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in the spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods: spatio-temporal kriging, spatio-temporal inverse distance weighting, and the point estimation model of biased hospitals-based area disease estimation method.
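
    One of the baseline methods, spatio-temporal inverse distance weighting, is simple enough to sketch. This is a generic version with an assumed time-scaling constant, not the heterogeneity-aware method proposed in the paper:

```python
def st_idw(samples, target, power=2.0, time_scale=1.0, eps=1e-12):
    """Spatio-temporal inverse distance weighting.

    samples: list of ((x, y, t), value) observations.
    target:  (x, y, t) point to estimate.
    time_scale converts time offsets into pseudo-distances
    (an assumed tuning constant, not from the paper).
    """
    num = den = 0.0
    for (x, y, t), v in samples:
        d2 = ((x - target[0]) ** 2 + (y - target[1]) ** 2
              + (time_scale * (t - target[2])) ** 2)
        w = 1.0 / (d2 ** (power / 2.0) + eps)  # closer samples weigh more
        num += w * v
        den += w
    return num / den

# A target midway between two stations gets the average of their values.
est = st_idw([((0.0, 0.0, 0.0), 0.0), ((2.0, 0.0, 0.0), 10.0)],
             (1.0, 0.0, 0.0))
```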

  10. Mechanisms of cascade collapse

    International Nuclear Information System (INIS)

    Diaz de la Rubia, T.; Smalinskas, K.; Averback, R.S.; Robertson, I.M.; Hseih, H.; Benedek, R.

    1988-12-01

    The spontaneous collapse of energetic displacement cascades in metals into vacancy dislocation loops has been investigated by molecular dynamics (MD) computer simulation and transmission electron microscopy (TEM). Simulations of 5 keV recoil events in Cu and Ni provide the following scenario of cascade collapse: atoms are ejected from the central region of the cascade by replacement collision sequences; the central region subsequently melts; vacancies are driven to the center of the cascade during resolidification where they may collapse into loops. Whether or not collapse occurs depends critically on the melting temperature of the metal and the energy density and total energy in the cascade. Results of TEM are presented in support of this mechanism. 14 refs., 4 figs., 1 tab

  11. Prevention of mother-to-child transmission of HIV Option B+ cascade in rural Tanzania: The One Stop Clinic model.

    Directory of Open Access Journals (Sweden)

    Anna Gamell

    Full Text Available Strategies to improve the uptake of Prevention of Mother-To-Child Transmission of HIV (PMTCT) are needed. We integrated HIV and maternal, newborn and child health services in a One Stop Clinic to improve the PMTCT cascade in a rural Tanzanian setting. The One Stop Clinic of Ifakara offers integral care to HIV-infected pregnant women and their families at one single place and time. All pregnant women and HIV-exposed infants attended during the first year of Option B+ implementation (04/2014-03/2015) were included. PMTCT was assessed at the antenatal clinic (ANC), HIV care and labour ward, and compared with the pre-B+ period. We also characterised HIV-infected pregnant women and evaluated the MTCT rate. 1,579 women attended the ANC. Seven (0.4%) were known to be HIV-infected. Of the remainder, 98.5% (1,548/1,572) were offered an HIV test, 94% (1,456/1,548) accepted and 38 (2.6%) tested HIV-positive. 51 were re-screened for HIV during late pregnancy and one had seroconverted. The HIV prevalence at the ANC was 3.1% (46/1,463). Of the 39 newly diagnosed women, 35 (90%) were linked to care. An HIV test was offered to >98% of ANC clients during both the pre- and post-B+ periods. During the post-B+ period, test acceptance (94% versus 90.5%, p<0.0001) and linkage to care (90% versus 26%, p<0.0001) increased. Ten additional women diagnosed outside the ANC were linked to care. 82% (37/45) of these newly-enrolled women started antiretroviral treatment (ART). After a median time of 17 months, 27% (12/45) were lost to follow-up. 79 women under HIV care became pregnant and all received ART. After a median follow-up time of 19 months, 6% (5/79) had been lost. 5,727 women delivered at the hospital, 20% (1,155/5,727) of whom had unknown HIV serostatus. Of these, 30% (345/1,155) were tested for HIV, and 18/345 (5.2%) were HIV-positive. Compared to the pre-B+ period more women were tested during labour (30% versus 2.4%, p<0.0001). During the study, the MTCT rate was 2.2%. The implementation of

  12. Geodetic observations and modeling of magmatic inflation at the Three Sisters volcanic center, central Oregon Cascade Range, USA

    Science.gov (United States)

    Dzurisin, Daniel; Lisowski, Michael; Wicks, Charles W.; Poland, Michael P.; Endo, Elliot T.

    2006-02-01

    Tumescence at the Three Sisters volcanic center began sometime between summer 1996 and summer 1998 and was discovered in April 2001 using interferometric synthetic aperture radar (InSAR). Swelling is centered about 5 km west of the summit of South Sister, a composite basaltic-andesite to rhyolite volcano that last erupted between 2200 and 2000 yr ago, and it affects an area ~20 km in diameter within the Three Sisters Wilderness. Yearly InSAR observations show that the average maximum displacement rate was 3-5 cm/yr through summer 2001, and the velocity of a continuous GPS station within the deforming area was essentially constant from June 2001 to June 2004. The background level of seismic activity has been low, suggesting that temperatures in the source region are high enough or the strain rate has been low enough to favor plastic deformation over brittle failure. A swarm of about 300 small earthquakes (Mmax = 1.9) in the northeast quadrant of the deforming area on March 23-26, 2004, was the first notable seismicity in the area for at least two decades. The U.S. Geological Survey (USGS) established tilt-leveling and EDM networks at South Sister in 1985-1986, resurveyed them in 2001, the latter with GPS, and extended them to cover more of the deforming area. The 2001 tilt-leveling results are consistent with the inference drawn from InSAR that the current deformation episode did not start before 1996, i.e., the amount of deformation during 1995-2001 from InSAR fully accounts for the net tilt at South Sister during 1985-2001 from tilt-leveling. Subsequent InSAR, GPS, and leveling observations constrain the source location, geometry, and inflation rate as a function of time. A best-fit source model derived from simultaneous inversion of all three datasets is a dipping sill located 6.5 ± 2.5 km below the surface with a volume increase of 5.0 × 10^6 ± 1.5 × 10^6 m^3/yr (95% confidence limits). The most likely cause of tumescence is a pulse of basaltic magma

  13. Bankruptcy cascades in interbank markets.

    Directory of Open Access Journals (Sweden)

    Gabriele Tedeschi

    Full Text Available We study a credit network and, in particular, an interbank system with an agent-based model. To understand the relationship between business cycles and cascades of bankruptcies, we model a three-sector economy with goods, credit and interbank markets. In the interbank market, the participating banks share the risk of bad debts, which may potentially spread a bank's liquidity problems through the network of banks. Our agent-based model sheds light on the correlation between bankruptcy cascades and the endogenous economic cycle of booms and recessions. It also demonstrates the serious trade-off between, on the one hand, reducing the risks of individual banks by sharing them and, on the other hand, creating systemic risk through the credit-related interlinkages of banks. As a result of our study, the dynamics underlying the meltdown of financial markets in 2008 becomes much easier to understand.

  14. Importance of interpolation and coincidence errors in data fusion

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2018-02-01

    Full Text Available The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
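
    In the scalar case the core of such data fusion reduces to inverse-variance weighting, sketched below. This is only a toy analogue: the CDF method of the paper operates on full profiles with covariance matrices and averaging kernels, plus the additional interpolation and coincidence error terms discussed above.

```python
def fuse(values, variances):
    """Inverse-variance weighted mean of independent scalar measurements."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # fused error is smaller than either input
    return fused, fused_var

# Two equally uncertain measurements fuse to their midpoint,
# with half the variance of a single measurement.
value, var = fuse([10.0, 14.0], [1.0, 1.0])
```

Note how an unequal-variance pair pulls the fused value toward the more precise measurement, the scalar counterpart of weighting retrievals by their error covariances.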

  15. Reconstruction of reflectance data using an interpolation technique.

    Science.gov (United States)

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for the reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and maximum color difference values under other sets of standard light sources. The mean and maximum values of the root mean square (RMS) error between the reconstructed and actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent on interpolation of the spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as the source space is preferable to the CIELAB color space. Besides, the colorimetric position of a desired sample is a key factor in the success of the approach: because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and reconstructed reflectance spectra, as well as CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.
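
    The basic LUT operation can be sketched in one dimension. The actual tables map three-dimensional colorimetric coordinates to full spectra, and the knot values below are invented for illustration:

```python
import bisect

def lut_interp(x, xs, ys):
    """Piecewise-linear table lookup: xs sorted ascending, clamped at ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x)          # first knot strictly right of x
    frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Hypothetical reflectance knots at 400/500/600 nm, queried at 450 nm.
r = lut_interp(450.0, [400.0, 500.0, 600.0], [0.2, 0.4, 0.1])
```

As the abstract notes, the query point must lie inside the table's range (here, the gamut of available samples); outside it, the sketch simply clamps to the nearest entry.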

  16. A two-level model of rise time in quantum cascade laser materials applied to 5 micron, 9 micron and terahertz-range wavelengths

    International Nuclear Information System (INIS)

    Webb, J F; Yong, K S C; Haldar, M K

    2014-01-01

An equivalent circuit simulation of a two-level rate equation model for quantum cascade laser (QCL) materials is used to study the turn-on delay and rise time for three QCLs with 5 micron, 9 micron and terahertz-range wavelengths. This requires a model that can handle large-signal responses rather than being restricted to small-signal ones; the model used here is capable of this. The effect of varying some of the characteristic times in the model is also investigated. The comparison of the terahertz-wave QCL with the others is particularly important given the increased interest in terahertz sources, which have a wide range of important applications, such as medical imaging.
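
The two-level rate-equation picture can be sketched with a generic carrier/photon pair of ODEs. The normalized parameters below are invented for illustration (not the material values of the paper's three QCLs); the turn-on delay and 10-90% rise time are simply read off the photon-number transient after a step pump.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-level laser rate equations in normalized units (illustrative
# parameters, not fitted to any of the paper's QCLs):
#   dn/dt = P - n/tau_n - g*(n - n_tr)*s           (upper-level population)
#   ds/dt = g*(n - n_tr)*s - s/tau_p + beta*n/tau_n (photon number)
tau_n, tau_p, g, n_tr, beta = 1.0, 0.1, 1.0, 1.0, 1e-4
P = 20.0                                    # step pump, above threshold

def rhs(t, y):
    n, s = y
    dn = P - n / tau_n - g * (n - n_tr) * s
    ds = g * (n - n_tr) * s - s / tau_p + beta * n / tau_n
    return [dn, ds]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], dense_output=True,
                rtol=1e-8, atol=1e-12)
t = np.linspace(0.0, 20.0, 20001)
s = sol.sol(t)[1]

s_final = s[-1]
t10 = t[np.argmax(s >= 0.1 * s_final)]      # turn-on delay proxy
t90 = t[np.argmax(s >= 0.9 * s_final)]
rise = t90 - t10                            # 10-90% rise time
```

The delay comes from the carrier population having to climb past threshold before stimulated emission takes over; shortening tau_p or raising the pump shrinks both figures, which is the kind of parameter study the abstract describes.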

  17. Defect production in simulated cascades: Cascade quenching and short-term annealing

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1983-01-01

    Defect production in displacement cascades in copper has been modeled using the MARLOWE code to generate cascades and the stochastic annealing code ALSOME to simulate cascade quenching and short-term annealing of isolated cascades. Quenching is accomplished by using exaggerated values for defect mobilities and for critical reaction distances in ALSOME for a very short time. The quenched cascades are then short-term annealed with normal parameter values. The quenching parameter values were empirically determined by comparison with results of resistivity measurements. Throughout the collisional, quenching and short-term annealing phases of cascade development, the high energy cascades continue to behave as a collection of independent lower energy lobes. For recoils above about 30 keV the total number of defects and the numbers of free defects scale with the damage energy. As the energy decreases from 30 keV, defect production varies with the changing nature of the cascade configuration, resulting in more defects per unit damage energy. The simulated annealing of a low fluence of interacting cascades revealed an interstitial shielding effect on depleted zones during Stage I recovery. (orig.)

  18. The Diffraction Response Interpolation Method

    DEFF Research Database (Denmark)

    Jespersen, Søren Kragh; Wilhjelm, Jens Erik; Pedersen, Peder C.

    1998-01-01

Computer modeling of the output voltage in a pulse-echo system is computationally very demanding, particularly when considering reflector surfaces of arbitrary geometry. A new, efficient computational tool, the diffraction response interpolation method (DRIM), for modeling of reflectors in a fluid medium, is presented. The DRIM is based on the velocity potential impulse response method, adapted to pulse-echo applications by the use of acoustical reciprocity. Specifically, the DRIM operates by dividing the reflector surface into planar elements, finding the diffraction response at the corners…

  19. Cascades in model steels: The effect of cementite (Fe3C) and Cr23C6 particles on short-term crystal damage

    Science.gov (United States)

    Henriksson, K. O. E.

    2015-06-01

Ferritic stainless steel can be modeled as an iron matrix containing precipitates of cementite (Fe3C) and Cr23C6. When used in nuclear power production, the steels in the vicinity of the core start to accumulate damage due to neutrons. The role of the aforementioned carbides in this process is not well understood. To clarify the situation, bulk cascades created by primary recoils in model steels have been simulated in the present work. The investigated configurations consisted of bulk ferrite containing spherical particles (diameter of 4 nm) of either (1) Fe3C or (2) Cr23C6. Primary recoils were initiated at different distances from the inclusions, with recoil energies varying between 100 eV and 1 keV. Results for the number of point defects such as vacancies and antisites are presented. These findings indicate that defects also remain when cascades are started outside the carbide inclusions. The work uses a recently developed Abell-Brenner-Tersoff potential for the Fe-Cr-C system.

  20. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: (1) constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; (2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it
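
A minimal numerical sketch of this setting: a three-stage linear cascade driven by a pulse, where perturbing a fast upstream rate barely changes the output time course, while perturbing the rate-limiting stage does. The rates, pulse shape, and duration measure below are invented for illustration, not taken from the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cascade_output(k, t_end=120.0):
    """Response x3(t) of a linear three-stage cascade to a unit pulse input."""
    def rhs(t, x):
        u = 1.0 if t < 5.0 else 0.0          # pulse stimulus
        return [u - k[0] * x[0],
                x[0] - k[1] * x[1],
                x[1] - k[2] * x[2]]
    t = np.linspace(0.0, t_end, 2001)
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0, 0.0], t_eval=t,
                    max_step=0.5, rtol=1e-6, atol=1e-9)
    return t, sol.y[2]

def duration(t, y):
    """Time the output stays above half of its own peak (response duration)."""
    above = y >= 0.5 * y.max()
    return t[above][-1] - t[above][0]

k_base = [10.0, 10.0, 0.1]                     # stage 3 is rate-limiting
t, y0 = cascade_output(k_base)
_, y_fast = cascade_output([20.0, 10.0, 0.1])  # perturb a fast upstream stage
_, y_slow = cascade_output([10.0, 10.0, 0.2])  # perturb the rate-limiting stage

d0 = duration(t, y0)
d_fast = duration(t, y_fast)
d_slow = duration(t, y_slow)
```

Normalizing by each response's own peak makes the duration measure amplitude-independent, so the upstream perturbation (which only rescales the output) leaves it essentially unchanged, while changing the slow rate roughly halves the decay time.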

  1. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  2. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  3. SU-F-J-41: Experimental Validation of a Cascaded Linear System Model for MVCBCT with a Multi-Layer EPID

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y; Rottmann, J; Myronakis, M; Berbeco, R [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana Farber Cancer Institute and Harvard Medical School, Boston, MA. (United States); Fueglistaller, R; Morf, D [Varian Medical Systems, Dattwil, Aargau (Switzerland); Wang, A; Shedlock, D; Star-Lack, J [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, offering the potential to optimize detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as input, the modeled and measured NPS in the axial plane exhibit good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a subsequent loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves the SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate whether image binning is applied to the two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. 
R01CA188446-01 from the National Cancer Institute.
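
The noise side of the binning result can be reproduced on a toy flat field: for uncorrelated (white) noise, 2×2 average binning halves the pixel noise; the smaller ~20% gain measured above is presumably because real detector noise is spatially correlated. The flat field and single-realization NPS estimate below are synthetic and illustrative, not MLI data.

```python
import numpy as np

rng = np.random.default_rng(1)
flat = 100.0 + rng.normal(0.0, 5.0, size=(512, 512))   # synthetic flat field

def bin2x2(img):
    """Average 2x2 pixel blocks (detector binning)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

binned = bin2x2(flat)
ratio = binned.std() / flat.std()     # ~0.5 for white noise

# Noise power spectrum of the mean-subtracted flat field (one 2D realization);
# by Parseval's theorem its mean equals the pixel variance.
nps = np.abs(np.fft.fft2(flat - flat.mean())) ** 2 / flat.size
```

In practice the NPS is averaged over many flat-field realizations (or non-overlapping ROIs), and its shape, not just its integral, reveals the noise correlations that limit the binning gain.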

  4. Direction-controlled DTI interpolation

    NARCIS (Netherlands)

    Florack, L.M.J.; Dela Haije, T.C.J.; Fuster, A.; Hotz, I.; Schultz, T.

    2015-01-01

    Diffusion Tensor Imaging (DTI) is a popular model for representing diffusion weighted magnetic resonance images due to its simplicity and the fact that it strikes a good balance between signal fit and robustness. Nevertheless, problematic issues remain. One of these concerns the problem of

  5. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    Science.gov (United States)

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, in which a covariate such as elevation (from a Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key question that little research has addressed is which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surfaces. Specifically, Thin Plate Spline (TPS) was used as the interpolation method, since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including the DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while the DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra
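
The covariate-selection question can be illustrated with a small synthetic experiment: thin-plate-spline interpolation with and without an auxiliary covariate, scored by ten-fold cross-validation. The "stations" below are invented, with temperature deliberately driven by an elevation field that is uncorrelated with horizontal position, so a purely spatial interpolant cannot recover it; this is a sketch of the methodology, not the study's data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stations: temperature = lapse-rate term + weak spatial trend + noise
rng = np.random.default_rng(2)
n = 200
xy = rng.uniform(0.0, 10.0, size=(n, 2))
elev = rng.uniform(0.0, 3.0, size=n)                    # km
temp = (20.0 - 6.5 * elev
        + np.sin(xy[:, 0]) * np.cos(xy[:, 1])
        + rng.normal(0.0, 0.2, size=n))

def cv_rmse(X, y, folds=10):
    """Ten-fold cross-validated RMSE of a thin-plate-spline interpolant."""
    idx = rng.permutation(len(y))
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        tps = RBFInterpolator(X[train], y[train], kernel='thin_plate_spline')
        errs.append((tps(X[test]) - y[test]) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(errs))))

rmse_xy = cv_rmse(xy, temp)                             # no covariate
rmse_cov = cv_rmse(np.column_stack([xy, elev]), temp)   # elevation as covariate
```

Adding the informative covariate as an extra interpolation dimension cuts the cross-validated error dramatically here; the study's comparison across DEM, radar, NDVI, etc. is the same experiment with real candidate covariates.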

  6. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    Science.gov (United States)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R²) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R², and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolated data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration based

  7. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still shows large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks, as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and
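
Of the deterministic baselines mentioned, inverse distance weighting is simple enough to state in a few lines. The sketch below is generic IDW with power parameter p = 2 on invented points, not the study's configuration; it shows the two defining properties: weights decay as 1/d^p, and the estimate is (numerically) exact at a data point.

```python
import numpy as np

def idw(points, values, targets, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights ~ 1/d^p, exact at data points."""
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power      # eps guards the d = 0 case
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u = np.array([1.0, 2.0, 3.0, 4.0])
est = idw(pts, u, np.array([[0.5, 0.5], [0.0, 0.0]]))
# est[0]: equidistant from all four points -> plain average 2.5
# est[1]: coincides with a data point -> recovers its value 1.0
```

Unlike kriging, IDW uses no model of spatial covariance and no auxiliary variables, which is why the hybrid methods above can outperform it when informative covariates exist.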

  8. On calculating of squared-off cascades for multicomponent isotope separation

    International Nuclear Information System (INIS)

    Potapov, D.V.; Soulaberidze, G.A.; Chuzhinov, V.A.; Filipppov, I.G.

    1996-01-01

Calculation of a cascade of specified configuration (specified number of stages and flows in the enriching and stripping sections of the cascade) is performed with two approaches. The first one, which is advisable for calculation of so-called 'long' cascades (for example, squared-off cascades of distillation columns), is based either on analytical transitions enabling the problem to be reduced to algebraic transcendental equations, or on the direct integration of the equations describing the cascade separation process, with subsequent iteration on the boundary conditions and the balance equations. This approach also involves the orthogonal-collocation technique, which consists in approximating the solution of the differential equations by a Lagrangian interpolation polynomial

  9. Cascade of links in complex networks

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Yeqian; Sun, Bihui [Department of Management Science, School of Government, Beijing Normal University, 100875 Beijing (China); Zeng, An, E-mail: anzeng@bnu.edu.cn [School of Systems Science, Beijing Normal University, 100875 Beijing (China)

    2017-01-30

    Cascading failure is an important process which has been widely used to model catastrophic events such as blackouts and financial crisis in real systems. However, so far most of the studies in the literature focus on the cascading process on nodes, leaving the possibility of link cascade overlooked. In many real cases, the catastrophic events are actually formed by the successive disappearance of links. Examples exist in the financial systems where the firms and banks (i.e. nodes) still exist but many financial trades (i.e. links) are gone during the crisis, and the air transportation systems where the airports (i.e. nodes) are still functional but many airlines (i.e. links) stop operating during bad weather. In this letter, we develop a link cascade model in complex networks. With this model, we find that both artificial and real networks tend to collapse even if a few links are initially attacked. However, the link cascading process can be effectively terminated by setting a few strong nodes in the network which do not respond to any link reduction. Finally, a simulated annealing algorithm is used to optimize the location of these strong nodes, which significantly improves the robustness of the networks against the link cascade. - Highlights: • We propose a link cascade model in complex networks. • Both artificial and real networks tend to collapse even if a few links are initially attacked. • The link cascading process can be effectively terminated by setting a few strong nodes. • A simulated annealing algorithm is used to optimize the location of these strong nodes.
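
The flavor of a link cascade can be captured with a deliberately simplified local load-redistribution rule (a generic sketch, not the letter's model): every link carries unit load with capacity (1+α)·load, and when a link fails its load is split among surviving links sharing an endpoint. On a ring, a small margin α lets one attacked link bring down the whole network, while a larger margin stops the cascade immediately; "strong nodes" in the letter play an analogous cascade-terminating role.

```python
from collections import deque

def link_cascade(n_nodes, alpha, attacked=0):
    """Load-redistribution link cascade on a ring of n_nodes links.

    Returns the number of failed links after attacking one link."""
    edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]
    incident = {v: set() for v in range(n_nodes)}
    for e, (a, b) in enumerate(edges):
        incident[a].add(e)
        incident[b].add(e)
    load = [1.0] * len(edges)
    cap = [(1.0 + alpha) * x for x in load]      # capacity = (1+alpha)*load
    failed = {attacked}
    queue = deque([attacked])
    while queue:
        e = queue.popleft()
        a, b = edges[e]
        nbrs = [k for k in (incident[a] | incident[b]) - {e} if k not in failed]
        for k in nbrs:
            load[k] += load[e] / len(nbrs)       # redistribute the failed load
            if load[k] > cap[k]:
                failed.add(k)
                queue.append(k)
    return len(failed)

fragile = link_cascade(20, alpha=0.2)   # small margin: global collapse
robust = link_cascade(20, alpha=0.6)    # larger margin: cascade stops at once
```

The abrupt transition between "one failed link" and "everything fails" as α crosses a threshold is the signature behavior that makes such models useful for blackout- and crisis-style events.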

  11. Phenomenological scattering-rate model for the simulation of the current density and emission power in mid-infrared quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Kurlov, S. S. [Department of Physics, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin (Germany); Institute of Semiconductor Physics, National Academy of Sciences, pr. Nauki 45, Kiev-03028 (Ukraine); Flores, Y. V.; Elagin, M.; Semtsiv, M. P.; Masselink, W. T. [Department of Physics, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin (Germany); Schrottke, L.; Grahn, H. T. [Paul-Drude-Institut für Festkörperelektronik, Hausvogteiplatz 5–7, 10117 Berlin (Germany); Tarasov, G. G. [Institute of Semiconductor Physics, National Academy of Sciences, pr. Nauki 45, Kiev-03028 (Ukraine)

    2016-04-07

    A phenomenological scattering-rate model introduced for terahertz quantum cascade lasers (QCLs) [Schrottke et al., Semicond. Sci. Technol. 25, 045025 (2010)] is extended to mid-infrared (MIR) QCLs by including the energy dependence of the intersubband scattering rates for energies higher than the longitudinal optical phonon energy. This energy dependence is obtained from a phenomenological fit of the intersubband scattering rates based on published lifetimes of a number of MIR QCLs. In our approach, the total intersubband scattering rate is written as the product of the exchange integral for the squared moduli of the envelope functions and a phenomenological factor that depends only on the transition energy. Using the model to calculate scattering rates and imposing periodical boundary conditions on the current density, we find a good agreement with low-temperature data for current-voltage, power-current, and energy-photon flux characteristics for a QCL emitting at 5.2 μm.

  12. Conjugation of cascades

    International Nuclear Information System (INIS)

    San Martin, Jesus; Rodriguez-Perez, Daniel

    2009-01-01

Presented in this work are some results relative to sequences found in the bifurcation diagram of the logistic equation, which is the prototype of unimodal quadratic maps. All of the different saddle-node bifurcation cascades, associated with every last-appearance p-periodic orbit (p=3,4,5,...), can also be generated from the Feigenbaum cascade itself; in this way the relationship between the two cascades is made evident. The orbits of every saddle-node bifurcation cascade mentioned above are located in different chaotic bands, and this determines a sequence of orbits converging to every band-merging Misiurewicz point. In turn, these accumulation points form a sequence whose accumulation point is the Myrberg-Feigenbaum point. It is also proven that the first-appearance orbits in the n-chaotic band converge to the same point as the last-appearance orbits of the (n + 1)-chaotic band. The symbolic sequences of band-merging Misiurewicz points are computed for any window.
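
The underlying object is the logistic map x_{n+1} = r·x_n·(1 − x_n). A quick numerical check of the period-doubling (Feigenbaum) cascade is to detect the attractor period at parameter values chosen inside successive periodic windows; the specific r values below are standard textbook choices, and the detection code is an illustration, not the paper's symbolic-dynamics machinery.

```python
import numpy as np

def attractor_period(r, transient=100000, window=128, tol=1e-6, max_p=64):
    """Detect the period of the logistic-map attractor at parameter r."""
    x = 0.5
    for _ in range(transient):                 # discard the transient
        x = r * x * (1.0 - x)
    xs = np.empty(window)
    for i in range(window):                    # sample the settled orbit
        x = r * x * (1.0 - x)
        xs[i] = x
    for p in range(1, max_p + 1):              # smallest p with x[n+p] = x[n]
        if np.max(np.abs(xs[p:] - xs[:-p])) < tol:
            return p
    return None

# Successive period doublings 1 -> 2 -> 4 -> 8 along the Feigenbaum cascade
periods = [attractor_period(r) for r in (2.8, 3.2, 3.5, 3.55)]
```

Beyond the accumulation (Myrberg-Feigenbaum) point near r ≈ 3.5699 the same routine returns no period, which is where the chaotic bands and Misiurewicz points discussed above take over.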

  13. Performance of an Interpolated Stochastic Weather Generator in Czechia and Nebraska

    Science.gov (United States)

    Dubrovsky, M.; Trnka, M.; Hayes, M. J.; Svoboda, M. D.; Semeradova, D.; Metelka, L.; Hlavinka, P.

    2008-12-01

Met&Roll is a WGEN-like parametric four-variate daily weather generator (WG), with an optional extension allowing the user to generate additional variables (i.e. wind and water vapor pressure). It is designed to produce synthetic weather series representing present and/or future climate conditions to be used as an input into various models (e.g. crop growth and rainfall runoff models). The present contribution summarizes recent experiments in which we tested the performance of the interpolated WG, with the aim of examining whether the WG may be used to produce synthetic weather series even for sites having no meteorological observations. The experiments being discussed include: (1) the comparison of various interpolation methods, where the performance of the candidate methods is compared in terms of the accuracy of the interpolation for selected WG parameters; (2) assessing the ability of the interpolated WG in the territories of Czechia and Nebraska to reproduce extreme temperature and precipitation characteristics; (3) indirect validation of the interpolated WG in terms of the modeled crop yields simulated by the STICS crop growth model (in Czechia); and (4) indirect validation of the interpolated WG in terms of soil climate regime characteristics simulated by the SoilClim model (Czechia and Nebraska). The experiments are based on observed daily weather series from two regions: Czechia (area = 78864 km2, 125 stations available) and Nebraska (area = 200520 km2, 28 stations available). Even though Nebraska exhibits a much lower density of stations, this is offset by the state's relatively flat topography, which is an advantage in using the interpolated WG. Acknowledgements: The present study is supported by the AMVIS-KONTAKT project (ME 844) and the GAAV Grant Agency (project IAA300420806).

  14. Cascading failure in the wireless sensor scale-free networks

    Science.gov (United States)

    Liu, Hao-Ran; Dong, Ming-Ru; Yin, Rong-Rong; Han, Li

    2015-05-01

In practical wireless sensor networks (WSNs), the cascading failure caused by a failed node has a serious impact on network performance. In this paper, we study in depth the cascading failure of scale-free topologies in WSNs. Firstly, a cascading failure model for scale-free topology in WSNs is developed. By analyzing the influence of the node load on cascading failure, the critical load triggering large-scale cascading failure is obtained. Then, based on the critical load, a control method for cascading failure is presented. In addition, simulation experiments are performed to validate the effectiveness of the control method. The results show that the control method can effectively prevent cascading failure. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. F2014203239), the Autonomous Research Fund of Young Teacher in Yanshan University (Grant No. 14LGB017) and the Yanshan University Doctoral Foundation, China (Grant No. B867).

  15. SAR image formation with azimuth interpolation after azimuth transform

    Science.gov (United States)

Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  16. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters, and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations, and filters. We develop general expressions for high-order discretizations, interpolations, and filters in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one- and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence as the effective resolution of the mesh is increased by successively adding levels of refinement with different orders of discretization, interpolation, or filtering.

  17. Comparison of spatial interpolation techniques to predict soil properties in the colombian piedmont eastern plains

    Directory of Open Access Journals (Sweden)

    Mauricio Castro Franco

    2017-07-01

Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly variable and complex nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information on these effects, soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (OK), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indices calculated from a digital elevation model (DEM). The random forest algorithm was used for selecting the most important terrain index for each soil property. Error metrics were used to validate the interpolations via cross-validation. Results: The results support the underlying assumption that the conditioned Latin hypercube adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in the prediction of most of the evaluated soil properties. Conclusions: Hybrid interpolation techniques incorporating auxiliary soil information and terrain indices provided a significant improvement in the prediction of soil properties in comparison with the other techniques.

  18. Learning optimal embedded cascades.

    Science.gov (United States)

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
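
    The accuracy-versus-speed tradeoff of a rejection cascade can be sketched with simple per-stage thresholds chosen to meet a detection-rate constraint. This is a toy stand-in, not RCBoost or ECBoost: the score distributions, stage count, and constraint value are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic detection scores: 3 increasingly expensive "features" (hypothetical)
n = 4000
labels = rng.random(n) < 0.1                      # ~10% positives
feats = np.where(labels[:, None], rng.normal(2.0, 1.0, (n, 3)),
                 rng.normal(0.0, 1.0, (n, 3)))

target_rate = 0.99   # per-stage detection-rate constraint
thresholds = []
for s in range(3):
    pos = feats[labels, s]
    # threshold keeping >= 99% of positives at this stage
    thresholds.append(np.quantile(pos, 1.0 - target_rate))

def cascade_detect(x):
    """Returns (accepted, stages_evaluated): early rejection saves work."""
    for s, t in enumerate(thresholds):
        if x[s] < t:
            return False, s + 1
    return True, len(thresholds)

results = [cascade_detect(x) for x in feats]
accepted = np.array([r[0] for r in results])
cost = float(np.mean([r[1] for r in results]))
det = accepted[labels].mean()          # detection rate over positives
fpr = accepted[~labels].mean()         # false-positive rate
print(f"detection {det:.3f}, FPR {fpr:.3f}, avg stages/window {cost:.2f}")
```

Because most windows are negatives rejected at the first stage, the average number of stages evaluated stays well below the total, which is the speed side of the tradeoff the paper optimizes.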

  19. Simulation of concentration spikes in cascades

    International Nuclear Information System (INIS)

    Wood, H.G.

    2006-01-01

    Research has been conducted to investigate the maximum possible enrichment that might be temporarily achieved in a facility that is producing enriched uranium fuel for nuclear power reactors. The purpose is to provide information to evaluate whether uranium enrichment facilities are producing 235U enriched within the declared limits appropriate for power reactors or are actually producing more highly enriched uranium. The correlation between feed rate and separation factor in a gas centrifuge cascade shows that as flow decreases, the separation factor increases, thereby creating small amounts of more highly enriched uranium than would be found under optimum design operating conditions. The research uses a number of cascade enrichment programs to model the phenomenon and determine the maximum enrichment possible during the time transient of a gas centrifuge cascade. During cascade start-up, the flow through the centrifuges begins at lower than the centrifuge design stage flow rates. Steady-state cascade models have been used to study the maximum 235U concentrations that would be predicted in the cascade. These calculations should produce an upper bound on the product concentrations expected during the transient phase of start-up. Because there are different ways to start a cascade, several methods are used to determine the maximum enrichment during the time transient. Model cascades were created for gas centrifuges with several product-to-feed assay separation factors. With this information, the models were defined and the equilibrium programs were used to determine the maximum enrichment level during the time transient. The calculations predict that in a cascade with separation factor 1.254, designed to produce enriched uranium for the purpose of supplying reactor fuel, it would not be unreasonable to see some 235U in the range of 12-15%. Higher assays produced during the start-up period might lead inspectors to believe the cascade is being
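
    The stage-by-stage growth of enrichment can be illustrated with the textbook abundance-ratio model, in which each enriching stage multiplies R = N/(1-N) by the separation factor. This is a steady-state idealization for intuition only, not the transient cascade codes used in the study.

```python
# Stage-by-stage abundance-ratio model (a textbook idealization): each
# enriching stage multiplies the 235U/238U abundance ratio R = N/(1-N)
# by the product-to-feed separation factor alpha.
alpha = 1.254          # separation factor quoted in the abstract
N_feed = 0.00711       # natural 235U feed concentration

def stages_to_reach(target, N, alpha):
    """Number of ideal enriching stages to reach a target concentration."""
    R, n = N / (1.0 - N), 0
    while R / (1.0 + R) < target:
        R *= alpha
        n += 1
    return n

print("stages to 5% (typical LEU):", stages_to_reach(0.05, N_feed, alpha))
print("stages to 90%:", stages_to_reach(0.90, N_feed, alpha))
```

With alpha = 1.254, only a handful of extra effective stages separates reactor-grade assays from the 12-15% transient values the abstract mentions, which is why start-up overshoot is worth modeling.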

  20. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

    Science.gov (United States)

    Allred, Joel

    2012-01-01

    Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.

  1. Defect production in simulated cascades: cascade quenching and short-term annealing

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1982-01-01

    Defect production in high energy displacement cascades has been modeled using the computer code MARLOWE to generate the cascades and the stochastic computer code ALSOME to simulate the cascade quenching and short-term annealing of isolated cascades. The quenching is accomplished by using ALSOME with exaggerated values for defect mobilities and critical reaction distances for recombination and clustering, which are in effect until the number of defect pairs is equal to the value determined from resistivity experiments at 4 K. Then normal mobilities and reaction distances are used during short-term annealing to a point representative of Stage III recovery. Effects of cascade interactions at low fluences are also being investigated. The quenching parameter values were empirically determined for 30 keV cascades. The results agree well with experimental information throughout the range from 1 keV to 100 keV. Even after quenching and short-term annealing, the high energy cascades behave as a collection of lower energy subcascades and lobes. Cascades generated in a crystal having thermal displacements were found to be in better agreement with experiments after quenching and annealing than those generated in a non-thermal crystal

  2. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

    African Journals Online (AJOL)

    Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...

  3. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  4. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  5. Trends in Continuity and Interpolation for Computer Graphics.

    Science.gov (United States)

    Gonzalez Garcia, Francisco

    2015-01-01

    In every computer-graphics-oriented application today, it is common practice to texture 3D models to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields. It presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).

  6. Cascade processes in kaonic and muonic atoms

    International Nuclear Information System (INIS)

    Faifman, M.P.; Men'shikov, L.I.

    2003-01-01

    Cascade processes in exotic (kaonic and muonic) hydrogen/deuterium have been studied with the quantum-classical Monte Carlo code (QCMC) developed for ab initio calculations. It has been shown that the majority of kaonic hydrogen atoms are accelerated during the cascade to high energies E ∼ 100 eV, which leads to a much lower value for the calculated x-ray yields Y than predicted by the 'standard cascade model'. The modified QCMC scheme has been applied to the study of the cascade in μp and μd muonic atoms. A comparison of the calculated yields for K-series x-rays with experimental data directly indicates that the molecular structure of the hydrogen target and new types of non-radiative transitions are essential for the light muonic atoms, while they are negligible for heavy (kaonic) atoms. These processes have been considered and estimates of their probabilities are presented. (author)

  7. Quadratic polynomial interpolation on triangular domain

    Science.gov (United States)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, sample points are not always mutually consistent in continuity, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. We therefore propose a new method for constructing a polynomial interpolation surface on a triangular domain. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying prescribed accuracy and continuity requirements, without excessive convexity. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and similar data. Experimental results for the new surface are given.
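
    The quadratic-on-a-triangle building block behind such surfaces can be sketched with the standard six-node Lagrange basis in barycentric coordinates. This is the generic finite-element construction, not the authors' specific C1 patch scheme.

```python
import numpy as np

# Six-node quadratic Lagrange interpolation on a single triangle:
# nodes are the 3 vertices plus the 3 edge midpoints, and the basis
# is expressed in barycentric coordinates (l1, l2, l3).
def barycentric(p, a, b, c):
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l2, l3 = np.linalg.solve(T, np.asarray(p, float) - np.asarray(a, float))
    return np.array([1.0 - l2 - l3, l2, l3])

def quad_interp(p, verts, node_vals):
    """node_vals: values at [v0, v1, v2, m01, m12, m20]."""
    l1, l2, l3 = barycentric(p, *verts)
    basis = np.array([l1 * (2 * l1 - 1), l2 * (2 * l2 - 1), l3 * (2 * l3 - 1),
                      4 * l1 * l2, 4 * l2 * l3, 4 * l3 * l1])
    return float(basis @ node_vals)

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
f = lambda x, y: 1 + 2 * x + 3 * y + x * x + x * y + y * y   # any quadratic
nodes = verts + [(0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]         # + edge midpoints
vals = np.array([f(*n) for n in nodes])
print(quad_interp((0.2, 0.3), verts, vals), f(0.2, 0.3))     # values agree
```

Since the six basis functions span all bivariate quadratics, the patch reproduces any quadratic exactly, which is the property a C1 piecewise-quadratic surface scheme builds on.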

  8. Trace interpolation by slant-stack migration

    International Nuclear Information System (INIS)

    Novotny, M.

    1990-01-01

    The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip-reject filtration. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period

  9. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We consider the image interpolation problem: given an image with uniformly sampled pixel values vm,n and point spread function h, the goal is to find a function u(x,y) satisfying vm,n = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

  10. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. It estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined by the distance between each centroid and the farthest point of occurrence of the species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units.
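
    The core of GIE, summing per-species kernels centered on range centroids with bandwidths set by centroid-to-farthest-occurrence distances, can be sketched as follows. The species layout, kernel shape, and grid here are invented for illustration and are not the authors' implementation.

```python
import numpy as np

# Toy sketch of the GIE idea: each species contributes a kernel centered on
# its range centroid, with bandwidth set by the distance from the centroid
# to its farthest occurrence; summed kernels give a smooth endemism surface
# whose peaks suggest candidate areas of endemism.
rng = np.random.default_rng(2)
n_species = 12
centroids = rng.uniform(0, 10, (n_species, 2))
radii = rng.uniform(0.5, 2.0, n_species)   # centroid-to-farthest-point distances

def endemism_surface(grid_x, grid_y):
    gx, gy = np.meshgrid(grid_x, grid_y)
    z = np.zeros_like(gx)
    for (cx, cy), r in zip(centroids, radii):
        d2 = (gx - cx) ** 2 + (gy - cy) ** 2
        z += np.exp(-d2 / (2.0 * r * r))   # Gaussian kernel per species
    return z

z = endemism_surface(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
iy, ix = np.unravel_index(z.argmax(), z.shape)
print(f"strongest overlap near ({ix * 0.1:.1f}, {iy * 0.1:.1f}), height {z.max():.2f}")
```

Because the surface is continuous rather than cell-based, candidate areas have the fuzzy edges the abstract describes, and no arbitrary grid resolution is imposed.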

  11. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. It estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined by the distance between each centroid and the farthest point of occurrence of the species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units.

  12. Flip-avoiding interpolating surface registration for skull reconstruction.

    Science.gov (United States)

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Annealing simulation of cascade damage using MARLOWE-DAIQUIRI codes

    International Nuclear Information System (INIS)

    Muroga, Takeo

    1984-01-01

    The localization effect of the defects generated by cascade damage on the properties of solids was studied using a computer code based on the binary collision approximation and the Monte Carlo method. The MARLOWE and DAIQUIRI codes were partly improved to fit the present calculation of the annealing of cascade damage. The purpose of this study is to investigate the behavior of defects under simulated reactor irradiation conditions. Calculations were made for alpha iron (BCC), with the threshold energy set at 40 eV. The temperature dependence of annealing and the growth of clusters were studied, as was the overlapping effect of cascades. At first the extreme case of overlapping was studied, and then practical cases were estimated by interpolation; the degree of cascade overlap corresponds to the irradiation rate. The interaction between cascades and dislocations was studied, and the annealing of primary knock-on atoms (PKA) in alpha iron was calculated. At low temperature the effect of dislocations was large, but vacancy growth was not seen; at high temperature the effect of dislocations was small. The simulation of various ion irradiations and the growth efficiency of defects were evaluated. (Kato, T.)

  14. Gleditsia Saponin C Induces A549 Cell Apoptosis via Caspase-Dependent Cascade and Suppresses Tumor Growth on Xenografts Tumor Animal Model

    Directory of Open Access Journals (Sweden)

    Ye Cheng

    2018-01-01

    Saponins are natural compounds with promising anti-cancer activity. Here, the saponin gleditsia saponin C (GSC), extracted from Gleditsiae fructus abnormalis, induced apoptosis of the lung tumor cell line A549 via a caspase-dependent cascade, and this effect could be prevented by caspase inhibitors. In addition, GSC-induced cell death was accompanied by an increased Bax:Bcl-2 ratio and inhibition of the ERK and Akt signaling pathways. Meanwhile, GSC suppressed TNFα-induced NF-κB activation and increased the susceptibility of lung cancer cells to TNFα-induced apoptosis. Furthermore, in a mouse xenograft model, GSC significantly suppressed tumor growth and induced cancer cell apoptosis, validating its anti-tumor effect. Based on these results, GSC may be a promising anti-lung-cancer drug candidate for potential clinical applications.

  15. A novel Drosophila injury model reveals severed axons are cleared through a Draper/MMP-1 signaling cascade

    Science.gov (United States)

    Purice, Maria D; Ray, Arpita; Münzel, Eva Jolanda; Pope, Bernard J; Park, Daniel J; Speese, Sean D; Logan, Mary A

    2017-01-01

    Neural injury triggers swift responses from glia, including glial migration and phagocytic clearance of damaged neurons. The transcriptional programs governing these complex innate glial immune responses are still unclear. Here, we describe a novel injury assay in adult Drosophila that elicits widespread glial responses in the ventral nerve cord (VNC). We profiled injury-induced changes in VNC gene expression by RNA sequencing (RNA-seq) and found that responsive genes fall into diverse signaling classes. One factor, matrix metalloproteinase-1 (MMP-1), is induced in Drosophila ensheathing glia responding to severed axons. Interestingly, glial induction of MMP-1 requires the highly conserved engulfment receptor Draper, as well as AP-1 and STAT92E. In MMP-1 depleted flies, glia do not properly infiltrate neuropil regions after axotomy and, as a consequence, fail to clear degenerating axonal debris. This work identifies Draper-dependent activation of MMP-1 as a novel cascade required for proper glial clearance of severed axons. DOI: http://dx.doi.org/10.7554/eLife.23611.001 PMID:28825401

  16. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.
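
    For intuition about the fractal half of the model, here is a minimal one-dimensional fractal interpolation function built from an affine iterated function system with vertical scaling factors. This is the classic scalar construction, not the paper's bivariate rational-fractal model; the data points and scaling factors are invented for illustration.

```python
import numpy as np

# 1-D fractal interpolation function (FIF): the IFS maps
# w_i(x, y) = (a_i x + e_i, d_i y + c_i x + f_i) each send the whole interval
# onto the i-th subinterval while passing through the data points.
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # interpolation nodes
Y = np.array([0.0, 0.3, 0.1, 0.4, 0.2])
d = 0.4 * np.ones(len(X) - 1)                  # vertical scaling factors, |d_i| < 1

def fif_points(n_iter=6):
    """Generate points on the FIF attractor by deterministic iteration."""
    x0, xN, y0, yN = X[0], X[-1], Y[0], Y[-1]
    pts = np.column_stack([X, Y])
    for _ in range(n_iter):
        new = []
        for i in range(len(X) - 1):
            a = (X[i + 1] - X[i]) / (xN - x0)
            e = (xN * X[i] - x0 * X[i + 1]) / (xN - x0)
            c = (Y[i + 1] - Y[i] - d[i] * (yN - y0)) / (xN - x0)
            f = (xN * Y[i] - x0 * Y[i + 1] - d[i] * (xN * y0 - x0 * yN)) / (xN - x0)
            new.append(np.column_stack([a * pts[:, 0] + e,
                                        d[i] * pts[:, 1] + c * pts[:, 0] + f]))
        pts = np.unique(np.vstack(new), axis=0)
    return pts

pts = fif_points(6)
# the attractor passes through the original data points
for xi, yi in zip(X, Y):
    j = int(np.argmin(np.abs(pts[:, 0] - xi)))
    print(f"node x={xi:.2f}: data {yi:.2f}, FIF {pts[j, 1]:.2f}")
```

The scaling factors d_i control the local roughness of the curve; the paper's contribution is, in effect, computing such factors from local fractal analysis of image texture rather than fixing them by hand.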

  17. Interpolation functions and the Lions-Peetre interpolation construction

    International Nuclear Information System (INIS)

    Ovchinnikov, V I

    2014-01-01

    The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generality is sufficient to encompass spaces that are most natural from the point of view of applications, such as the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X_0,X_1)_{p_0,p_1} considered here have three parameters: two positive numerical parameters p_0 and p_1 of equal standing, and a function parameter φ. For p_0 ≠ p_1 these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X_0,X_1)_{p_0,p_1}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted L_p spaces and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter of the interpolation. Bibliography: 43 titles
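
    For orientation, the classical real-method objects that this survey generalizes can be written down explicitly. The following are the standard K-functional definitions; in the function-parameter scale, φ(t) replaces the power t^θ (shown here for the p_0 = p_1 = p case, up to equivalence of norms):

```latex
K(t, x; X_0, X_1) = \inf_{x = x_0 + x_1} \bigl( \|x_0\|_{X_0} + t \,\|x_1\|_{X_1} \bigr),
\qquad
\|x\|_{(X_0, X_1)_{\theta, p}} = \left( \int_0^\infty \bigl( t^{-\theta} K(t, x) \bigr)^p \,\frac{dt}{t} \right)^{1/p},
\qquad
\|x\|_{\varphi(X_0, X_1)_{p, p}} \sim \left( \int_0^\infty \left( \frac{K(t, x)}{\varphi(t)} \right)^p \frac{dt}{t} \right)^{1/p}.
```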

  18. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  19. Cascade Organic Solar Cells

    KAUST Repository

    Schlenker, Cody W.

    2011-09-27

    We demonstrate planar organic solar cells consisting of a series of complementary donor materials with cascading exciton energies, incorporated in the following structure: glass/indium-tin-oxide/donor cascade/C60/bathocuproine/Al. Using a tetracene layer grown in a descending energy cascade on 5,6-diphenyl-tetracene and capped with 5,6,11,12-tetraphenyl-tetracene, where the accessibility of the π-system in each material is expected to influence the rate of parasitic carrier leakage and charge recombination at the donor/acceptor interface, we observe an increase in open circuit voltage (Voc) of approximately 40% (corresponding to a change of +200 mV) compared to that of a single tetracene donor. Little change is observed in other parameters such as fill factor and short circuit current density (FF = 0.50 ± 0.02 and Jsc = 2.55 ± 0.23 mA/cm2) compared to those of the control tetracene-C60 solar cells (FF = 0.54 ± 0.02 and Jsc = 2.86 ± 0.23 mA/cm2). We demonstrate that this cascade architecture is effective in reducing losses due to polaron pair recombination at donor-acceptor interfaces, while enhancing spectral coverage, resulting in a substantial increase in the power conversion efficiency for cascade organic photovoltaic cells compared to tetracene and pentacene based devices with a single donor layer. © 2011 American Chemical Society.

  20. Energy cascades in Canada

    Energy Technology Data Exchange (ETDEWEB)

    Hayden, A. C.; Brown, T. D.

    1979-03-15

    Combining energy uses in a cascade can result in significant overall reductions in fuel requirements. The simplest applications for a cascade are in the recovery of waste heat from existing processes using special boilers or turbines. Specific applications of more-complex energy cascades for Canada are discussed. A combined-cycle plant at a chemical refinery in Ontario is a world leader in energy efficiency. Total-energy systems for commercial buildings, such as one installed in a school in Western Canada, offer attractive energy and operating cost benefits. A cogeneration plant proposed for the National Capital Region, generating electricity as well as steam for district heating, allows the use of a low-grade fossil fuel (coal), greatly improves energy-transformation efficiency, and also utilizes an effectively renewable resource (municipal garbage). Despite the widespread availability of equipment and technology for energy cascades, the sale of steam and electricity across plant boundaries presents a barrier. More widespread use of cascades will require increased cooperation among industry, electric utilities and the various levels of government if Canada is to realize the high levels of energy efficiency potential available.

  1. Cascade Organic Solar Cells

    KAUST Repository

    Schlenker, Cody W.; Barlier, Vincent S.; Chin, Stephanie W.; Whited, Matthew T.; McAnally, R. Eric; Forrest, Stephen R.; Thompson, Mark E.

    2011-01-01

    We demonstrate planar organic solar cells consisting of a series of complementary donor materials with cascading exciton energies, incorporated in the following structure: glass/indium-tin-oxide/donor cascade/C60/bathocuproine/Al. Using a tetracene layer grown in a descending energy cascade on 5,6-diphenyl-tetracene and capped with 5,6,11,12-tetraphenyl-tetracene, where the accessibility of the π-system in each material is expected to influence the rate of parasitic carrier leakage and charge recombination at the donor/acceptor interface, we observe an increase in open circuit voltage (Voc) of approximately 40% (corresponding to a change of +200 mV) compared to that of a single tetracene donor. Little change is observed in other parameters such as fill factor and short circuit current density (FF = 0.50 ± 0.02 and Jsc = 2.55 ± 0.23 mA/cm2) compared to those of the control tetracene-C60 solar cells (FF = 0.54 ± 0.02 and Jsc = 2.86 ± 0.23 mA/cm2). We demonstrate that this cascade architecture is effective in reducing losses due to polaron pair recombination at donor-acceptor interfaces, while enhancing spectral coverage, resulting in a substantial increase in the power conversion efficiency for cascade organic photovoltaic cells compared to tetracene and pentacene based devices with a single donor layer. © 2011 American Chemical Society.

  2. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

    NARCIS (Netherlands)

    Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.

    2015-01-01

    A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific

  3. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    Directory of Open Access Journals (Sweden)

    Lixin Li

    2014-09-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate
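
    The extension approach with a k-d tree can be sketched directly: scale time by a factor so that space and time are commensurate, embed the samples in 3-D, and answer queries with k-nearest-neighbor IDW. Everything below (the synthetic PM2.5 field, the scale factor, the station layout) is invented for illustration and is not the study's data or code.

```python
import numpy as np
from scipy.spatial import cKDTree

# Space-time IDW via a k-d tree: time becomes a third coordinate scaled so
# that one year spans roughly the same extent as the spatial domain, then
# each query uses only the k nearest space-time neighbors instead of a
# full scan over all samples.
rng = np.random.default_rng(3)
n = 5000
xy = rng.uniform(0, 500, (n, 2))               # station locations, km
t = rng.uniform(0, 365, (n, 1))                # sample days
c = 10 + 0.01 * xy[:, 0] + 2 * np.sin(t[:, 0] / 58)   # synthetic PM2.5 values

time_scale = 500.0 / 365.0                     # maps 1 year onto the spatial extent
pts3 = np.hstack([xy, t * time_scale])
tree = cKDTree(pts3)

def st_idw(x, y, day, k=12, power=2.0):
    """Spatiotemporal IDW estimate at (x, y, day) from the k nearest samples."""
    q = np.array([x, y, day * time_scale])
    d, idx = tree.query(q, k=k)
    d = np.maximum(d, 1e-9)                    # guard against exact hits
    w = 1.0 / d ** power
    return float(np.sum(w * c[idx]) / np.sum(w))

print("PM2.5 estimate at (250, 250), day 180:", round(st_idw(250, 250, 180), 2))
```

The k-d tree makes each query O(log n) rather than O(n), which is the computational improvement the paper pursues (there combined with parallelism) for national-scale daily grids.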

  4. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    Science.gov (United States)

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation
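    The extension approach described in this record treats time as an additional coordinate scaled by a balancing factor, and a k-d tree speeds up neighbour search. A minimal sketch, assuming illustrative parameter values (the factor c, neighbour count k, and power exponent are my placeholders, not the paper's configuration):

```python
import numpy as np
from scipy.spatial import cKDTree  # k-d tree, the data structure adopted in the paper

def idw_spacetime(obs_xy, obs_t, obs_val, query_xy, query_t,
                  c=1.0, k=8, power=2.0):
    # Extension approach: time becomes a third coordinate scaled by a
    # factor c that balances the spatial and temporal dimensions.
    # c, k and power are illustrative choices, not the paper's settings.
    obs_val = np.asarray(obs_val, float)
    pts = np.column_stack([obs_xy, c * np.asarray(obs_t, float)[:, None]])
    q = np.column_stack([query_xy, c * np.asarray(query_t, float)[:, None]])
    dist, idx = cKDTree(pts).query(q, k=k)   # k nearest space-time neighbours
    dist = np.maximum(dist, 1e-12)           # guard exact hits against 1/0
    w = 1.0 / dist**power                    # inverse-distance weights
    return (w * obs_val[idx]).sum(axis=1) / w.sum(axis=1)
```

    A query that coincides with an observation recovers that observation's value, since its inverse-distance weight dominates the sum.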


  6. Research progress and hotspot analysis of spatial interpolation

    Science.gov (United States)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has passed through three stages: slow development, steady development, and rapid development. Eleven clustering groups interact with one another and converge on three main themes: the theory of spatial interpolation, its practical applications and case studies, and the accuracy and efficiency of spatial interpolation methods. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research-system framework; it is strongly interdisciplinary and is widely applied in various fields.

  7. Modeled and measured glacier change and related glaciological, hydrological, and meteorological conditions at South Cascade Glacier, Washington, balance and water years 2006 and 2007

    Science.gov (United States)

    Bidlake, William R.; Josberger, Edward G.; Savoca, Mark E.

    2010-01-01

    Winter snow accumulation and summer snow and ice ablation were measured at South Cascade Glacier, Washington, to estimate glacier mass balance quantities for balance years 2006 and 2007. Mass balances were computed with assistance from a new model that was based on the works of other glacier researchers. The model, which was developed for mass balance practitioners, coupled selected meteorological and glaciological data to systematically estimate daily mass balance at selected glacier sites. The North Cascade Range in the vicinity of South Cascade Glacier accumulated approximately average to above average winter snow packs during 2006 and 2007. Correspondingly, the balance years 2006 and 2007 maximum winter snow mass balances of South Cascade Glacier, 2.61 and 3.41 meters water equivalent, respectively, were approximately equal to or more positive (larger) than the average of such balances since 1959. The 2006 glacier summer balance, -4.20 meters water equivalent, was among the four most negative since 1959. The 2007 glacier summer balance, -3.63 meters water equivalent, was among the 14 most negative since 1959. The glacier continued to lose mass during 2006 and 2007, as it commonly has since 1953, but the loss was much smaller during 2007 than during 2006. The 2006 glacier net balance, -1.59 meters water equivalent, was 1.02 meters water equivalent more negative (smaller) than the average during 1953-2005. The 2007 glacier net balance, -0.22 meters water equivalent, was 0.37 meters water equivalent less negative (larger) than the average during 1953-2006. The 2006 accumulation area ratio was less than 0.10, owing to isolated patches of accumulated snow that endured the 2006 summer season. The 2006 equilibrium line altitude was higher than the glacier. The 2007 accumulation area ratio and equilibrium line altitude were 0.60 and 1,880 meters, respectively. Accompanying the glacier mass losses were retreat of the terminus and reduction of total glacier area. The

  8. Engaging the Entire Care Cascade in Western Kenya: A Model to Achieve the Cardiovascular Disease Secondary Prevention Roadmap Goals.

    Science.gov (United States)

    Vedanthan, Rajesh; Kamano, Jemima H; Bloomfield, Gerald S; Manji, Imran; Pastakia, Sonak; Kimaiyo, Sylvester N

    2015-12-01

    Cardiovascular disease (CVD) is the leading cause of death in the world, with a substantial health and economic burden confronted by low- and middle-income countries. In low-income countries such as Kenya, there exists a double burden of communicable and noncommunicable diseases, and the CVD profile includes many nonatherosclerotic entities. Socio-politico-economic realities present challenges to CVD prevention in Kenya, including poverty, low national spending on health, significant out-of-pocket health expenditures, and limited outpatient health insurance. In addition, the health infrastructure is characterized by insufficient human resources for health, medication stock-outs, and lack of facilities and equipment. Within this socio-politico-economic reality, contextually appropriate programs for CVD prevention need to be developed. We describe our experience from western Kenya, where we have engaged the entire care cascade across all levels of the health system, in order to improve access to high-quality, comprehensive, coordinated, and sustainable care for CVD and CVD risk factors. We report on several initiatives: 1) population-wide screening for hypertension and diabetes; 2) engagement of community resources and governance structures; 3) geographic decentralization of care services; 4) task redistribution to make more efficient use of available human resources for health; 5) ensuring a consistent supply of essential medicines; 6) improving physical infrastructure of rural health facilities; 7) developing an integrated health record; and 8) mobile health (mHealth) initiatives to provide clinical decision support and record-keeping functions. Although several challenges remain, there currently exists a critical window of opportunity to establish systems of care and prevention that can alter the trajectory of CVD in low-resource settings. Copyright © 2015 World Heart Federation (Geneva). Published by Elsevier B.V. All rights reserved.

  9. Optimal Interpolation scheme to generate reference crop evapotranspiration

    Science.gov (United States)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variance in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
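    The OI analysis step described above can be written compactly: the model background field is corrected toward the observations with a gain that weighs background against observation error covariances. A generic sketch of the textbook update (matrix names are illustrative; this is not the authors' gridded implementation):

```python
import numpy as np

def oi_analysis(xb, B, y, H, R):
    # Textbook optimal-interpolation update: blend background xb (error
    # covariance B) with observations y (error covariance R) seen through
    # the observation operator H.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ (y - H @ xb)                     # analysis field
    A = (np.eye(len(xb)) - K @ H) @ B              # analysis error covariance
    return xa, A
```

    Note that the returned analysis error covariance shrinks relative to the background wherever observations contribute, which is how such a scheme also yields the error-variance grids mentioned in the abstract.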

  10. Interpolation on the manifold of K component GMMs.

    Science.gov (United States)

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
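    The K-component closure requirement can be made concrete with a deliberately naive counterpoint: blending two component-matched GMMs in parameter space also returns a K-component GMM, but this Euclidean blend ignores the underlying geometry that the paper's algorithms are designed to respect. All names here are illustrative, and components are assumed pre-matched by index:

```python
import numpy as np

def interpolate_gmm(w1, mu1, s1, w2, mu2, s2, t):
    # Naive parameter-space interpolation between two K-component GMMs.
    # The result is again a K-component GMM (the closure property the paper
    # demands), but unlike the paper's method this blend is purely Euclidean.
    w = (1 - t) * w1 + t * w2
    w = w / w.sum()                 # keep mixture weights on the simplex
    mu = (1 - t) * mu1 + t * mu2    # blend component means
    s = (1 - t) * s1 + t * s2       # blend component covariances (convex
                                    # combinations of PSD matrices stay PSD)
    return w, mu, s
```
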

  11. On the exact interpolating function in ABJ theory

    Energy Technology Data Exchange (ETDEWEB)

    Cavaglià, Andrea [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); Gromov, Nikolay [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); St. Petersburg INP,Gatchina, 188 300, St.Petersburg (Russian Federation); Levkovich-Maslyuk, Fedor [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); Nordita, KTH Royal Institute of Technology and Stockholm University,Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)

    2016-12-16

    Based on the recent indications of integrability in the planar ABJ model, we conjecture an exact expression for the interpolating function h(λ₁,λ₂) in this theory. Our conjecture is based on the observation that the integrability structure of the ABJM theory given by its Quantum Spectral Curve is very rigid and does not allow for a simple consistent modification. Under this assumption, we revised the previous comparison of localization results and exact all loop integrability calculations done for the ABJM theory by one of the authors and Grigory Sizov, fixing h(λ₁,λ₂). We checked our conjecture against various weak coupling expansions, at strong coupling and also demonstrated its invariance under the Seiberg-like duality. This match also gives further support to the integrability of the model. If our conjecture is correct, it extends all the available integrability results in the ABJM model to the ABJ model.

  12. Generation of nuclear data banks through interpolation

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    1999-01-01

    Nuclear data bank generation is a process that requires a great amount of resources, both computational and human. Given that at times a great number of data banks must be created, it is convenient to have a reliable tool that generates them with fewer resources, in the least possible time, and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks employing bicubic polynomial interpolation, taking as independent variables the uranium and gadolinium percentages. Two approaches were developed, both applying the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, from it, the corresponding system of linear equations, which was solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG (created at the Instituto de Investigaciones Electricas for the same purpose) codes and data banks created through the conventional process, that is, with the nuclear codes normally used. It is concluded that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a great number of data banks must be created. (Author)
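    The 16-node element amounts to a tensor product of cubic Lagrange polynomials in the two composition variables. A hedged sketch of that idea (node positions and the data table below are made-up illustrations, not INTPOLBI's actual basis construction or solver):

```python
import numpy as np

def lagrange_basis(nodes, x):
    # Cubic Lagrange basis over 4 nodes, evaluated at x.
    nodes = np.asarray(nodes, float)
    L = np.ones(4)
    for i in range(4):
        for j in range(4):
            if i != j:
                L[i] *= (x - nodes[j]) / (nodes[i] - nodes[j])
    return L

def bicubic_16node(u_nodes, g_nodes, table, u, g):
    # One 16-node finite element: tensor product of cubic Lagrange
    # polynomials in the uranium (u) and gadolinium (g) percentages;
    # 'table' holds the tabulated nuclear data at the 4x4 node grid.
    return lagrange_basis(u_nodes, u) @ np.asarray(table, float) @ lagrange_basis(g_nodes, g)
```

    Because the basis is cubic in each variable, any tabulated quantity that is itself a polynomial of degree at most three in each composition variable is reproduced exactly at off-node points.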


  14. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
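    The core of GP interpolation with a sensor-noise term can be sketched in a few lines. This is a plain O(N³) version with an RBF kernel; the paper's fast O(N^(3/2)) grid exploitation is omitted, and the kernel choice and `noise` parameter are illustrative stand-ins for the paper's sensor-noise estimates:

```python
import numpy as np

def gp_interpolate(X, y, Xq, length=1.0, sig=1.0, noise=0.1):
    # Gaussian-process regression: posterior mean at query locations Xq
    # given noisy samples y at locations X. 'noise' models sensor noise.
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sig**2 * np.exp(-0.5 * d2 / length**2)   # RBF kernel
    K = kern(X, X) + noise**2 * np.eye(len(X))          # noisy Gram matrix
    alpha = np.linalg.solve(K, y)                       # (K + s^2 I)^-1 y
    return kern(Xq, X) @ alpha                          # posterior mean
```

    With the noise variance driven toward zero, the posterior mean interpolates the samples; a larger noise term trades interpolation for denoising, which is the behaviour the abstract highlights at low SNR.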

  15. Calculation of reactivity without Lagrange interpolation

    International Nuclear Information System (INIS)

    Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.

    2015-09-01

    A new method to numerically solve the inverse equation of point kinetics without using the Lagrange interpolating polynomial is formulated; this method uses a polynomial approximation with N points based on a recurrence process to simulate different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)
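    For context, the inverse point-kinetics relation that such reactivity meters evaluate can be sketched with a generic precursor recurrence. The six-group constants below are typical U-235 values used only for illustration, and this simple trapezoidal recurrence is not the recurrence proposed in the paper:

```python
import numpy as np

# Six-group delayed-neutron data (typical U-235 values, for illustration)
BETA_I = np.array([2.66e-4, 1.491e-3, 1.316e-3, 2.849e-3, 8.96e-4, 1.82e-4])
LAMB_I = np.array([0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87])
BETA = BETA_I.sum()
LAMBDA = 2e-5  # prompt neutron generation time (s), illustrative

def reactivity(t, P):
    # Inverse point kinetics: recover reactivity rho(t) from a power
    # history P(t), accumulating precursor concentrations with an
    # exponential recurrence (assumed equilibrium at t[0]).
    t, P = np.asarray(t, float), np.asarray(P, float)
    C = BETA_I / (LAMBDA * LAMB_I) * P[0]          # equilibrium precursors
    rho = np.zeros(len(t))
    for n in range(1, len(t)):
        h = t[n] - t[n - 1]
        e = np.exp(-LAMB_I * h)
        src = (BETA_I / LAMBDA) * 0.5 * (P[n] + P[n - 1] * e)
        C = C * e + src * h                        # precursor recurrence
        dPdt = (P[n] - P[n - 1]) / h
        rho[n] = BETA + LAMBDA * (dPdt - LAMB_I @ C) / P[n]
    return rho
```

    A constant power history yields zero reactivity, and a rising exponential yields a positive reactivity, as expected from the inhour relation.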

  16. Solving the Schroedinger equation using Smolyak interpolants

    International Nuclear Information System (INIS)

    Avila, Gustavo; Carrington, Tucker Jr.

    2013-01-01

    In this paper, we present a new collocation method for solving the Schroedinger equation. Collocation has the advantage that it obviates integrals. All previous collocation methods have, however, the crucial disadvantage that they require solving a generalized eigenvalue problem. By combining Lagrange-like functions with a Smolyak interpolant, we devise a collocation method that does not require solving a generalized eigenvalue problem. We exploit the structure of the grid to develop an efficient algorithm for evaluating the matrix-vector products required to compute energy levels and wavefunctions. Energies systematically converge as the number of points and basis functions are increased

  17. Energy cascading in the beat-wave accelerator

    International Nuclear Information System (INIS)

    McKinstrie, C.J.; Batha, S.H.

    1987-01-01

    A review is given of energy cascading in the beat-wave accelerator. The properties of the electromagnetic cascade and the corresponding plasma-wave evolution are well understood within the framework of an approximate analytic model. Based on this model, idealized laser-plasma coupling efficiencies of the order of 10% do not seem unreasonable. 28 refs

  18. The Geant4 Bertini Cascade

    Energy Technology Data Exchange (ETDEWEB)

    Wright, D.H.; Kelsey, M.H.

    2015-12-21

    One of the medium energy hadron–nucleus interaction models in the GEANT4 simulation toolkit is based partly on the Bertini intranuclear cascade model. Since its initial appearance in the toolkit, this model has been largely re-written in order to extend its physics capabilities and to reduce its memory footprint. Physics improvements include extensions in applicable energy range and incident particle types, and improved hadron–nucleon cross-sections and angular distributions. Interfaces have also been developed which allow the model to be coupled with other GEANT4 models at lower and higher energies. The inevitable speed reductions due to enhanced physics have been mitigated by memory and CPU efficiency improvements. Details of these improvements, along with selected comparisons of the model to data, are discussed.

  19. Cascade reactor: introduction

    International Nuclear Information System (INIS)

    Pitts, J.H.

    1985-01-01

    Cascade is a concept for an ultrasafe, highly efficient, easily built reactor to convert inertial-confinement fusion energy into electrical power. The Cascade design includes a rotating double-cone-shaped chamber in which a moving, 1-m-thick ceramic granular blanket is held against the reactor wall by centrifugal action. The granular material absorbs energy from the fusion reactions. Accomplishments this year associated with Cascade included improvements to simplify chamber design and lower activation. The authors switched from a steel chamber wall to one made from silicon-carbide (SiC) panels held in compression by SiC-fiber/Al-composite tendons that gird the chamber both circumferentially and axially. The authors studied a number of heat-exchanger designs and selected a gravity-flow cascade design with a vacuum on the primary side. This design allows granules leaving the chamber to be transported to the heat exchangers using their own peripheral speed. The granules transfer their thermal energy and return to the chamber gravitationally: no vacuum locks or conveyors are needed

  20. Stability of cascade search

    Energy Technology Data Exchange (ETDEWEB)

    Fomenko, Tatiana N [M. V. Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics, Moscow (Russian Federation)

    2010-10-22

    We find sufficient conditions on a searching multi-cascade for a modification of the set of limit points of the cascade that satisfy an assessing inequality for the distance from each of these points to the initial point to be small, provided that the modifications of the initial point and the initial set-valued functionals or maps used to construct the multi-cascade are small. Using this result, we prove the stability (in the above sense) of the cascade search for the set of common pre-images of a closed subspace under the action of n set-valued maps, n{>=}1 (in particular, for the set of common roots of these maps and for the set of their coincidences). For n=2 we obtain generalizations of some results of A. V. Arutyunov; the very statement of the problem comes from a recent paper of his devoted to the study of the stability of the subset of coincidences of a Lipschitz map and a covering map.

  1. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential element block in a typical digital image alteration. Fortunately, traces left from such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the same behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The involved process includes a minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that had undergone complex transformations. Obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to increase in population. Mumbai city in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to regulate air pollution control strategies to reduce the air pollution level. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information System (GIS) has been used to perform interpolation with the help of concentration data on air quality at three locations of Mumbai for the year 2008. The classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) have been focused on in this study. Results show that SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of SPM was low in the monsoon due to rainfall. The findings of this study will help to formulate control strategies for rational management of air pollution and can be used for many other regions.

  3. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
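    The reduction described above — building interpolative decompositions of matrices generated by randomized projection — can be illustrated on the matrix case alone. A hedged sketch in the spirit of (but much simpler than) the paper's CTD-ID algorithm; the greedy pivot selection and all names are my illustrative choices:

```python
import numpy as np

def matrix_id(A, k, oversample=8, rng=None):
    # Randomized interpolative decomposition: pick k 'skeleton' columns of A
    # and coefficients P so that A is approximated by A[:, idx] @ P.
    if rng is None:
        rng = np.random.default_rng(0)
    # Randomized projection: a small sketch preserving A's column geometry.
    Y = rng.standard_normal((k + oversample, A.shape[0])) @ A
    idx, W = [], Y.copy()
    for _ in range(k):                       # greedy pivoted Gram-Schmidt
        j = int(np.argmax((W * W).sum(axis=0)))
        idx.append(j)
        q = W[:, j] / np.linalg.norm(W[:, j])
        W -= np.outer(q, q @ W)              # deflate the chosen direction
    # Express every column of A in the span of the selected columns.
    P = np.linalg.lstsq(A[:, idx], A, rcond=None)[0]
    return np.array(idx), P
```

    For a matrix of exact rank k, the selected k columns reproduce the whole matrix to numerical precision, mirroring the user-defined accuracy ε that tensor ID targets for CTDs.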

  4. Size-Dictionary Interpolation for Robot's Adjustment

    Directory of Open Access Journals (Sweden)

    Morteza eDaneshmand

    2015-05-01

    Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room, where automatic activation of the chosen mannequin robot, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, is also considered so that it mimics body shapes and sizes instantly. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure seeks the set of positions of the biologically inspired actuators that makes the activated mannequin robot resemble the scanned person's body shape as closely as possible. It linearly maps the distances between subsequent size templates to the corresponding position sets of the bioengineered actuators, and then calculates the control measures that maintain the same distance proportions, with the mathematical description determined by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. In this research work, the experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completion of the whole realistic online fitting package are explained.

  5. Integrated Broadband Quantum Cascade Laser

    Science.gov (United States)

    Mansour, Kamjou (Inventor); Soibel, Alexander (Inventor)

    2016-01-01

    A broadband, integrated quantum cascade laser is disclosed, comprising ridge waveguide quantum cascade lasers formed by applying standard semiconductor process techniques to a monolithic structure of alternating layers of claddings and active region layers. The resulting ridge waveguide quantum cascade lasers may be individually controlled by independent voltage potentials, resulting in control of the overall spectrum of the integrated quantum cascade laser source. Other embodiments are described and claimed.

  6. Observed and modelled effects of auroral precipitation on the thermal ionospheric plasma: comparing the MICA and Cascades2 sounding rocket events

    Science.gov (United States)

    Lynch, K. A.; Gayetsky, L.; Fernandes, P. A.; Zettergren, M. D.; Lessard, M.; Cohen, I. J.; Hampton, D. L.; Ahrns, J.; Hysell, D. L.; Powell, S.; Miceli, R. J.; Moen, J. I.; Bekkeng, T.

    2012-12-01

    Auroral precipitation can modify the ionospheric thermal plasma through a variety of processes. We examine and compare the events seen by two recent auroral sounding rockets carrying in situ thermal plasma instrumentation. The Cascades2 sounding rocket (March 2009, Poker Flat Research Range) traversed a pre-midnight poleward boundary intensification (PBI) event distinguished by a stationary Alfvenic curtain of field-aligned precipitation. The MICA sounding rocket (February 2012, Poker Flat Research Range) traveled through irregular precipitation following the passage of a strong westward-travelling surge. Previous modelling of the ionospheric effects of auroral precipitation used a one-dimensional model, TRANSCAR, which had a simplified treatment of electric fields and did not have the benefit of in situ thermal plasma data. This new study uses a new two-dimensional model which self-consistently calculates electric fields to explore both spatial and temporal effects, and compares to thermal plasma observations. A rigorous understanding of the ambient thermal plasma parameters and their effects on the local spacecraft sheath and charging, is required for quantitative interpretation of in situ thermal plasma observations. To complement this TRANSCAR analysis we therefore require a reliable means of interpreting in situ thermal plasma observation. This interpretation depends upon a rigorous plasma sheath model since the ambient ion energy is on the order of the spacecraft's sheath energy. A self-consistent PIC model is used to model the spacecraft sheath, and a test-particle approach then predicts the detector response for a given plasma environment. The model parameters are then modified until agreement is found with the in situ data. We find that for some situations, the thermal plasma parameters are strongly driven by the precipitation at the observation time. For other situations, the previous history of the precipitation at that position can have a stronger

  7. Spatial interpolation of fine particulate matter concentrations using the shortest wind-field path distance.

    Directory of Open Access Journals (Sweden)

    Longxiang Li

    Full Text Available Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
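
The three-step recipe (interpolate the wind-field, build a cost surface, replace Euclidean distances in IDW with shortest-path distances) can be sketched on a toy grid. Here simple anisotropic edge costs stand in for the Gaussian-dispersion cost surface, and all values are illustrative:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Toy 5x5 grid. Edge costs are lower when moving with an assumed west-to-east
# wind, mimicking a dispersion-derived cost surface (values illustrative).
n = 5
N = n * n

def idx(r, c):
    return r * n + c

G = lil_matrix((N, N))
for r in range(n):
    for c in range(n):
        if c + 1 < n:
            G[idx(r, c), idx(r, c + 1)] = 0.5   # downwind: cheap
            G[idx(r, c + 1), idx(r, c)] = 2.0   # upwind: expensive
        if r + 1 < n:
            G[idx(r, c), idx(r + 1, c)] = 1.0   # crosswind
            G[idx(r + 1, c), idx(r, c)] = 1.0

stations = [idx(0, 0), idx(4, 4)]      # monitored cells
obs = np.array([80.0, 30.0])           # observed PM concentrations

# Shortest wind-field path distance from each station to every cell.
D = dijkstra(G.tocsr(), directed=True, indices=stations)

def idw(D, obs, target, p=2, eps=1e-9):
    """Inverse distance weighting with path distances replacing Euclidean."""
    w = 1.0 / (D[:, target] + eps) ** p
    return np.sum(w * obs) / np.sum(w)

est = idw(D, obs, idx(2, 2))
```

The downwind station is "closer" in path distance than its Euclidean position suggests, so it dominates the weighted estimate, which is the intended wind-flow effect.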

  8. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.
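
A single-level version of the underlying block-matching motion estimation can be sketched as follows; a multiresolution variant would run the same search on a coarse image pyramid first and refine the vectors at each finer level. The frame pair and the block/search sizes are illustrative:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each block of `curr`, find the offset
    into `prev` minimizing the sum of absolute differences (SAD)."""
    H, W = prev.shape
    vecs = {}
    for by in range(0, H, block):
        for bx in range(0, W, block):
            ref = curr[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= H and 0 <= x and x + block <= W:
                        sad = np.abs(prev[y:y + block, x:x + block] - ref).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vecs[(by, bx)] = best_v
    return vecs

# Synthetic pair: a bright square moving 3 pixels to the right between frames.
prev = np.zeros((24, 24)); prev[8:16, 6:14] = 1.0
curr = np.zeros((24, 24)); curr[8:16, 9:17] = 1.0
vecs = block_match(prev, curr)
# An interpolated mid-frame would place each block halfway along its vector.
# Note: uniform (featureless) blocks match everywhere, so their vectors are
# arbitrary; real interpolators regularize or discard them.
```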

  9. Improvement of one-nucleon removal and total reaction cross sections in the Liège intranuclear-cascade model using Hartree-Fock-Bogoliubov calculations

    Science.gov (United States)

    Rodríguez-Sánchez, Jose Luis; David, Jean-Christophe; Mancusi, Davide; Boudard, Alain; Cugnon, Joseph; Leray, Sylvie

    2017-11-01

    The prediction of one-nucleon-removal cross sections by the Liège intranuclear-cascade model has been improved using a refined description of the matter and energy densities in the nuclear surface. Hartree-Fock-Bogoliubov calculations with the Skyrme interaction are used to obtain a more realistic description of the radial-density distributions of protons and neutrons, as well as the excitation-energy uncorrelation at the nuclear surface due to quantum effects and short-range correlations. The results are compared with experimental data covering a large range of nuclei, from carbon to uranium, and projectile kinetic energies. We find that the new approach is in good agreement with experimental data on one-nucleon-removal cross sections covering a broad range of nuclei and energies. The new ingredients also improve the description of total reaction cross sections induced by protons at low energies, the production cross sections of the heaviest residues close to the projectile, and the triple-differential cross sections for one-proton removal. However, other observables, such as quadruple-differential cross sections of coincident protons, do not present any sizable sensitivity to the new approach. Finally, the model is also tested for light-ion-induced reactions. It is shown that the new parameters can give a reasonable description of the nucleus-nucleus total reaction cross sections at high energies.

  10. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  12. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and that in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers.

  13. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
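
Although the report's comparisons are not reproduced here, the role interpolation plays in time synchronous averaging is easy to sketch: each shaft revolution is resampled onto a fixed angular grid (linear interpolation below) and the revolutions are averaged, attenuating components that are not synchronous with the shaft. The vibration signal is synthetic and all parameters are illustrative:

```python
import numpy as np

# Simulated gear vibration: a shaft whose speed drifts, so revolutions cover
# unequal numbers of samples. Mesh tone at 12 cycles/rev plus broadband noise.
rng = np.random.default_rng(1)
fs = 5000.0
t = np.arange(0, 4.0, 1 / fs)
shaft_hz = 10.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)   # drifting shaft speed
angle = 2 * np.pi * np.cumsum(shaft_hz) / fs            # shaft angle (rad)
signal = np.sin(12 * angle) + 0.5 * rng.standard_normal(t.size)

# Time synchronous average: resample each revolution onto a fixed angular
# grid via linear interpolation, then average across revolutions.
n_per_rev = 256
revs = int(angle[-1] // (2 * np.pi))
grid = np.empty((revs, n_per_rev))
for r in range(revs):
    theta = 2 * np.pi * (r + np.arange(n_per_rev) / n_per_rev)
    grid[r] = np.interp(theta, angle, signal)
tsa = grid.mean(axis=0)
```

Averaging about 40 revolutions suppresses the noise by roughly a factor of six, leaving the shaft-synchronous mesh tone; the choice of interpolation kernel at the resampling step is exactly what the report above evaluates.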

  14. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    Science.gov (United States)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation in terms of 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expression driven by emotional condition. Facial skin colour, which is closely related to human emotion, is needed to enhance the effect of facial expression. This paper presents techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are close to genuine human expressions and also enhance the facial expression of the virtual human.
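
Linear and bilinear interpolation of skin colour can be sketched directly. The corner colours and the two emotion axes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Four key skin colours (RGB, 0-255) at the corners of a hypothetical
# emotion plane: u-axis = arousal, v-axis = valence (values illustrative).
neutral = np.array([224.0, 172.0, 105.0])
angry   = np.array([255.0, 120.0, 100.0])
sad     = np.array([200.0, 170.0, 140.0])
excited = np.array([255.0, 160.0, 90.0])

def bilinear_colour(u, v):
    """Bilinear interpolation: lerp along u on both edges, then lerp along v."""
    bottom = (1 - u) * neutral + u * angry
    top    = (1 - u) * sad + u * excited
    return (1 - v) * bottom + v * top

mid = bilinear_colour(0.5, 0.5)   # blend of all four emotional states
```

Setting v = 0 reduces the bilinear form to plain linear interpolation between two key colours, which is the one-axis case the paper also considers.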

  15. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    Science.gov (United States)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin has led to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has worsened the basin's prospects for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for interpolation analysis, in which approximately 80% of the data were used for interpolation modelling and the remainder for validation of the predicted results. The relationship between validation points and their corresponding estimated values at the same locations was examined by conducting linear regression analysis. 
Eight prediction maps generated from two different interpolation methods for soil organic matter, phosphorus, lime and boron parameters
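
The general workflow above, RBF interpolation with an 80/20 train/validation split, can be sketched on synthetic data. The field, kernel width and sample counts are illustrative; the study itself used measured soil samples and also evaluated LPI:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "soil property" surface over a 10 km x 10 km area (illustrative).
def true_field(xy):
    return 3.0 + 2.0 * np.sin(xy[:, 0] / 2.0) + 1.5 * np.cos(xy[:, 1] / 3.0)

pts = rng.uniform(0, 10, size=(120, 2))
vals = true_field(pts) + 0.05 * rng.standard_normal(len(pts))

# 80/20 split: fit on training samples, validate on the held-out rest.
n_train = int(0.8 * len(pts))
Xtr, Xte = pts[:n_train], pts[n_train:]
ytr, yte = vals[:n_train], vals[n_train:]

def rbf_fit_predict(Xtr, ytr, Xte, eps=1.0):
    """Gaussian radial basis function interpolation via a direct solve."""
    d2 = ((Xtr[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps ** 2) + 1e-8 * np.eye(len(Xtr))  # mild regularization
    w = np.linalg.solve(K, ytr)
    d2q = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2q / eps ** 2) @ w

pred = rbf_fit_predict(Xtr, ytr, Xte)
r = np.corrcoef(pred, yte)[0, 1]   # validation agreement, cf. the paper's
                                   # linear regression on held-out points
```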

  16. Signaling Cascades: Consequences of Varying Substrate and Phosphatase Levels

    DEFF Research Database (Denmark)

    Feliu, Elisenda; Knudsen, Michael; Wiuf, Carsten Henrik

    2012-01-01

    We study signaling cascades with an arbitrary number of layers of one-site phosphorylation cycles. Such cascades are abundant in nature and integrated parts of many pathways. Based on the Michaelis-Menten model of enzyme kinetics and the law of mass-action, we derive explicit analytic expressions...

  17. Participant intimacy: A cluster analysis of the intranuclear cascade

    International Nuclear Information System (INIS)

    Cugnon, J.; Knoll, J.; Randrup, J.

    1981-01-01

    The intranuclear cascade for relativistic nuclear collisions is analyzed in terms of clusters consisting of groups of nucleons which are dynamically linked to each other by violent interactions. The formation cross sections for the different cluster types as well as their intrinsic dynamics are studied and compared with the predictions of the linear cascade model ('rows-on-rows'). (orig.)

  18. Noise propagation in two-step series MAPK cascade.

    Directory of Open Access Journals (Sweden)

    Venkata Dhananjaneyulu

    Full Text Available Series MAPK enzymatic cascades, ubiquitously found in signaling networks, act as signal amplifiers and play a key role in processing information during signal transduction in cells. In activated cascades, cell-to-cell variability or noise is bound to occur and thereby strongly affects the cellular response. The commonly used linearization method (LM), applied to a Langevin-type stochastic model of the MAPK cascade, fails to accurately predict intrinsic noise propagation in the cascade. We prove this by using extensive stochastic simulations for various ranges of biochemical parameters. This failure is due to the fact that the LM ignores the nonlinear effects on the noise. However, the LM provides a good estimate of the extrinsic noise propagation. We show that the correct estimate of intrinsic noise propagation in signaling networks that contain at least one enzymatic step can be obtained only through stochastic simulations. Noise propagation in the cascade depends on the underlying biochemical parameters, which are often unavailable. Based on a combination of global sensitivity analysis (GSA) and stochastic simulations, we developed a systematic methodology to characterize noise propagation in the cascade. GSA predicts that noise propagation in the MAPK cascade is sensitive to the total number of upstream enzyme molecules and the total number of molecules of the two substrates involved in the cascade. We argue that the general systematic approach proposed and demonstrated on the MAPK cascade must accompany noise propagation studies in biological networks.
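
A stochastic simulation of a single phosphorylation cycle, the building block of such cascades, can be sketched with Gillespie's algorithm. The Michaelis-Menten propensities and all parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# One phosphorylation cycle, S <-> Sp, with Michaelis-Menten propensities
# (illustrative parameters). Total substrate is conserved at S_tot copies.
S_tot, Km = 100, 30.0
V_kin, V_ptase = 8.0, 6.0

def gillespie(T=2000.0, burn_in=200.0):
    """Exact stochastic simulation; returns Sp counts after the transient."""
    t, Sp = 0.0, 0
    samples = []
    while t < T:
        S = S_tot - Sp
        a1 = V_kin * S / (Km + S)         # phosphorylation propensity
        a2 = V_ptase * Sp / (Km + Sp)     # dephosphorylation propensity
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)    # time to next reaction
        if rng.random() < a1 / a0:
            Sp += 1
        else:
            Sp -= 1
        if t > burn_in:
            samples.append(Sp)
    return np.array(samples)

sp = gillespie()
mean, var = sp.mean(), sp.var()
fano = var / mean   # deviation from 1 indicates non-Poissonian noise
```

Because the propensities are nonlinear in the copy numbers, the fluctuation statistics produced this way are exactly what a linearized Langevin treatment can misestimate, which is the paper's central point.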

  19. Primary Radiation Damage in Materials. Review of Current Understanding and Proposed New Standard Displacement Damage Model to Incorporate in Cascade Defect Production Efficiency and Mixing Effects

    International Nuclear Information System (INIS)

    Nordlund, Kai; Sand, Andrea E.; Granberg, Fredric; Zinkle, Steven J.; Stoller, Roger; Averback, Robert S.; Suzudo, Tomoaki; Malerba, Lorenzo; Banhart, Florian; Weber, William J.; Willaime, Francois; Dudarev, Sergei; Simeone, David

    2015-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Multi-scale Modelling of Fuels and Structural Materials for Nuclear Systems (WPMM) was established in 2008 to assess the scientific and engineering aspects of fuels and structural materials, aiming at evaluating multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation, and related topics. It also provides member countries with up-to-date information, shared data, models and expertise. The WPMM Expert Group on Primary Radiation Damage (PRD) was established in 2009 to determine the limitations of the NRT-dpa standard, in the light of both atomistic simulations and known experimental discrepancies, to revisit the NRT-dpa standard and to examine the possibility of proposing a new improved standard of primary damage characteristics. This report reviews the current understanding of primary radiation damage from neutrons, ions and electrons (excluding photons, atomic clusters and more exotic particles), with emphasis on the range of validity of the 'displacement per atom' (dpa) concept in all major classes of materials with the exception of organics. The report also introduces an 'athermal recombination-corrected dpa' (arc-dpa) relation that uses a relatively simple functional to address the well-known issue that 'displacement per atom' (dpa) overestimates damage production in metals under energetic displacement cascade conditions, as well as a 'replacements-per-atom' (rpa) equation, also using a relatively simple functional, that accounts for the fact that dpa is understood to severely underestimate actual atom relocation (ion beam mixing) in metals. (authors)
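
For reference, the arc-dpa correction mentioned above has the following published functional form (quoted from the open literature on the arc-dpa rather than from this report; the material-specific constants b_arcdpa and c_arcdpa are fitted to cascade simulations):

```latex
N_d(T_d) =
\begin{cases}
0, & T_d < E_d,\\[2pt]
1, & E_d \le T_d < \dfrac{2E_d}{0.8},\\[6pt]
\dfrac{0.8\,T_d}{2E_d}\,\xi_{\mathrm{arcdpa}}(T_d), & T_d \ge \dfrac{2E_d}{0.8},
\end{cases}
\qquad
\xi_{\mathrm{arcdpa}}(T_d) = \frac{1-c_{\mathrm{arcdpa}}}{\left(2E_d/0.8\right)^{b_{\mathrm{arcdpa}}}}\,T_d^{\,b_{\mathrm{arcdpa}}} + c_{\mathrm{arcdpa}},
```

where T_d is the damage energy and E_d the threshold displacement energy. The prefactor is the NRT-dpa, and setting the efficiency function to 1 recovers the NRT standard exactly.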

  20. Cascade ICF power reactor

    International Nuclear Information System (INIS)

    Hogan, W.J.; Pitts, J.H.

    1986-01-01

    The double-cone-shaped Cascade reaction chamber rotates at 50 rpm to keep a blanket of ceramic granules in place against the wall as they slide from the poles to the exit slots at the equator. The 1 m-thick blanket consists of layers of carbon, beryllium oxide, and lithium aluminate granules about 1 mm in diameter. The x rays and debris are stopped in the carbon granules; the neutrons are multiplied and moderated in the BeO and breed tritium in the LiAlO2. The chamber wall is made up of SiO tiles held in compression by a network of composite SiC/Al tendons. Cascade operates at a 5 Hz pulse rate with 300 MJ in each pulse. The temperature in the blanket reaches 1600 K on the inner surface and 1350 K at the outer edge. The granules are automatically thrown into three separate vacuum heat exchangers where they give up their energy to high pressure helium. The helium is used in a Brayton cycle to obtain a thermal-to-electric conversion efficiency of 55%. Studies have been done on neutron activation, debris recovery, vaporization and recondensation of blanket material, tritium control and recovery, fire safety, and cost. These studies indicate that Cascade appears to be a promising ICF reactor candidate from all standpoints. At the 1000 MWe size, electricity could be made for about the same cost as in a future fission reactor

  1. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  2. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
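
The shape-based idea generalizes naturally to signed distance transforms: convert each binary slice to a signed distance map, interpolate the maps, and re-threshold. This is a related standard technique, shown here as a hedged sketch rather than the paper's polygon-vertex method:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

# Two binary slices: discs of radius 6 and 12 on a 40x40 grid.
yy, xx = np.mgrid[0:40, 0:40]
r = np.hypot(yy - 20, xx - 20)
slice_a = r <= 6
slice_b = r <= 12

# Interpolate the distance fields halfway, then threshold at zero to
# recover the intermediate shape (a disc of roughly radius 9).
mid = (0.5 * signed_distance(slice_a) + 0.5 * signed_distance(slice_b)) >= 0
```

Unlike grey-level interpolation, the intermediate slice is again a clean binary shape with a boundary between the two source boundaries, which is the property the shape-based family of methods is designed to preserve.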

  3. A spatial compression technique for head-related transfer function interpolation and complexity estimation

    DEFF Research Database (Denmark)

    Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John

    2015-01-01

    A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...
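
The compression idea can be sketched in the azimuth plane with NumPy's Legendre module: map azimuth onto [-1, 1], fit a truncated Legendre series, and use the number of coefficients needed for a target error as the spatial-complexity indicator. The toy directional response below is an illustrative stand-in for a measured HRTF:

```python
import numpy as np
from numpy.polynomial import legendre

# Toy azimuthal magnitude response at one frequency: a smooth directional
# gain sampled every 5 degrees (illustrative, not measured data).
az = np.linspace(-180, 175, 72)
mag = 1.0 + 0.6 * np.cos(np.radians(az)) + 0.2 * np.cos(2 * np.radians(az))

# Map azimuth to [-1, 1] and fit a truncated Legendre series.
x = az / 180.0
coef = legendre.legfit(x, mag, deg=12)

# Reconstruct (or interpolate at unmeasured azimuths) from the coefficients.
recon = legendre.legval(x, coef)
err = np.max(np.abs(recon - mag))
```

Evaluating `legendre.legval` at azimuths between the samples gives the interpolation, and 13 coefficients in place of 72 samples is the compression.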

  4. Ultraprecise parabolic interpolator for numerically controlled machine tools. [Digital differential analyzer circuit

    Energy Technology Data Exchange (ETDEWEB)

    Davenport, C. M.

    1977-02-01

    The mathematical basis for an ultraprecise digital differential analyzer circuit for use as a parabolic interpolator on numerically controlled machines has been established, and scaling and other error-reduction techniques have been developed. An exact computer model is included, along with typical results showing tracking to within an accuracy of one part per million.
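
The principle of a digital differential analyzer for parabolas follows from the fact that the second forward difference of y = a x^2 over a fixed step is constant, so each output point costs only two additions. Below is a minimal floating-point sketch; a production interpolator would use scaled integer registers for the error-reduction techniques described above:

```python
# DDA generation of the parabola y = a*x**2 using only additions:
# dy(x) = a*(2*x*h + h**2), so the second difference is the constant 2*a*h**2.
def parabola_dda(a, h, steps):
    y = 0.0
    d = a * h * h          # first difference at x = 0
    dd = 2 * a * h * h     # constant second difference
    pts = [(0.0, y)]
    x = 0.0
    for _ in range(steps):
        y += d             # advance the ordinate
        d += dd            # advance the first difference
        x += h
        pts.append((x, y))
    return pts

pts = parabola_dda(a=0.25, h=0.5, steps=8)
```

With exact arithmetic the recurrence tracks the parabola exactly; in a fixed-word-length machine the accumulated rounding in `d` is what scaling must keep below the part-per-million target.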

  5. INTAMAP: The design and implementation of an interoperable automated interpolation web service

    NARCIS (Netherlands)

    Pebesma, E.; Cornford, D.; Dubois, G.; Heuvelink, G.B.M.; Hristopulos, D.; Pilz, J.; Stohlker, U.; Morin, G.; Skoien, J.O.

    2011-01-01

    INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and

  6. GEANT4 hadronic cascade models analysis of proton and charged pion transverse momentum spectra from p plus Cu and Pb collisions at 3, 8, and 15 GeV/c

    CERN Document Server

    Abdel-Waged, Khaled; Uzhinskii, V V

    2011-01-01

    We describe how various hadronic cascade models, which are implemented in the GEANT4 toolkit, describe proton and charged pion transverse momentum spectra from p + Cu and Pb collisions at 3, 8, and 15 GeV/c, recently measured in the hadron production (HARP) experiment at CERN. The Binary, ultrarelativistic quantum molecular dynamics (UrQMD) and modified FRITIOF (FTF) hadronic cascade models are chosen for investigation. The first two models are based on limited (Binary) and branched (UrQMD) binary scattering between cascade particles which can be either a baryon or meson, in the three-dimensional space of the nucleus, while the latter (FTF) considers collective interactions between nucleons only, on the plane of impact parameter. It is found that the slow (p(T) < 0.3 GeV/c) proton spectra are not strongly affected by the differences between the FTF and UrQMD models. It is also shown that the UrQMD and FTF combined with Binary (FTFB) models could reproduce both proton and charged pion spectra from p + Cu and Pb...

  7. Diabat Interpolation for Polymorph Free-Energy Differences.

    Science.gov (United States)

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  8. A fast and accurate dihedral interpolation loop subdivision scheme

    Science.gov (United States)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
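
The refinement criterion described above, the dihedral angle between the normals of a triangular face and its neighbours, can be sketched as follows (geometry illustrative): a near-zero angle means the surface is locally flat there and the face need not be subdivided further.

```python
import numpy as np

def face_normal(tri):
    """Unit normal of a triangle given as three vertices (counter-clockwise)."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def dihedral_deg(tri1, tri2):
    """Angle in degrees between the normals of two adjacent faces."""
    cosang = np.clip(np.dot(face_normal(tri1), face_normal(tri2)), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

# Three triangles sharing the edge (0,0,0)-(1,0,0), consistently wound:
base = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])]
flat = [np.array([1., 0., 0.]), np.array([0., 0., 0.]), np.array([0., -1., 0.])]
fold = [np.array([1., 0., 0.]), np.array([0., 0., 0.]), np.array([0., 0., 1.])]

a0 = dihedral_deg(base, flat)   # coplanar neighbour: angle ~0, skip refining
a1 = dihedral_deg(base, fold)   # sharp crease: angle ~90, refine here
```

An adaptive scheme of the kind the paper describes would subdivide only the faces whose dihedral angle exceeds the chosen threshold, keeping the face count from growing exponentially in flat regions.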

  9. Period doubling cascades of prey-predator model with nonlinear harvesting and control of over exploitation through taxation

    Science.gov (United States)

    Gupta, R. P.; Banerjee, Malay; Chandra, Peeyush

    2014-07-01

    The present study investigates a prey predator type model for conservation of ecological resources through taxation with nonlinear harvesting. The model uses the harvesting function as proposed by Agnew (1979) [1] which accounts for the handling time of the catch and also the competition between standard vessels being utilized for harvesting of resources. In this paper we consider a three dimensional dynamic effort prey-predator model with Holling type-II functional response. The conditions for uniform persistence of the model have been derived. The existence and stability of bifurcating periodic solution through Hopf bifurcation have been examined for a particular set of parameter values. Using numerical examples it is shown that the system admits periodic, quasi-periodic and chaotic solutions. It is observed that the system exhibits a period-doubling route to chaos with respect to tax. Many forms of complexities such as chaotic bands (including periodic windows, period-doubling bifurcations, period-halving bifurcations and attractor crisis) and chaotic attractors have been observed. Sensitivity analysis is carried out and it is observed that the solutions are highly dependent on the initial conditions. Pontryagin's Maximum Principle has been used to obtain optimal tax policy to maximize the monetary social benefit as well as conservation of the ecosystem.
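
A two-species reduction with a Holling type-II functional response illustrates the kind of dynamics involved. This is a hedged sketch: the paper's model is three-dimensional with a dynamic harvesting-effort equation and taxation, and the parameters below are illustrative, chosen so the coexistence equilibrium is stable:

```python
import numpy as np

# Rosenzweig-MacArthur form with Holling type-II response (illustrative):
#   x' = r*x*(1 - x/K) - a*x*y/(1 + b*x)
#   y' = c*a*x*y/(1 + b*x) - m*y
r, K, a, b, c, m = 1.0, 3.0, 1.0, 0.5, 0.5, 0.3

def f(s):
    x, y = s
    pred = a * x * y / (1 + b * x)
    return np.array([r * x * (1 - x / K) - pred, c * pred - m * y])

def rk4(s, h, n):
    """Classical fourth-order Runge-Kutta integration."""
    traj = [s]
    for _ in range(n):
        k1 = f(s)
        k2 = f(s + h / 2 * k1)
        k3 = f(s + h / 2 * k2)
        k4 = f(s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s)
    return np.array(traj)

# Coexistence equilibrium for these parameters: x* = 6/7, y* = 50/49.
traj = rk4(np.array([1.0, 1.0]), h=0.05, n=4000)
```

Increasing K (enrichment) or adding the effort dynamics destabilizes this equilibrium through a Hopf bifurcation, which is where the periodic, quasi-periodic and chaotic regimes studied in the paper appear.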

  10. South Cascade (USA/North Cascades)

    Science.gov (United States)

    Bidlake, William R.

    2011-01-01

    The U.S. Geological Survey has closely monitored this temperate mountain glacier since the late 1950s. During 1958-2007, the glacier retreated about 0.7 km and shrank in area from 2.71 to 1.73 km2, although part of the area change was due to separation of contributing ice bodies from the main glacier. Maximum and average glacier thicknesses are about 170 and 80 m, respectively. Year-to-year variations of snow accumulation amounts on the glacier are largely attributable to the regional maritime climate and fluctuating climate conditions of the North Pacific Ocean. Long-term-average precipitation is about 4500 mm and most of that falls as snow during October through May. Average annual air temperature at 1,900 m altitude (the approximate ELA0) was estimated to be 1.6°C during 2000-2009. Mass balances are computed yearly by the direct glaciological method. Mass balances measured at selected locations are used in an interpolation and extrapolation procedure that computes the mass balance at each point in the glacier surface altitude grid. The resulting mass balance grid is averaged to obtain glacier mass balances. Additionally, the geodetic method has been applied to compute glacier net balances in 1970, 1975, 1977, 1979-80, and 1985-97. Winter snow accumulation on the glacier during 2007/08 and 2008/09 was larger than the long-term (1959-2009) average. The 2007/08 preliminary summer balance (-3510 mm w.e.) was slightly more negative than the long-term average and this yielded a preliminary 2007/08 net balance (-290 mm w.e.), which was less negative than the average for the period of record (-600 mm w.e.). Summer 2009 was uncommonly warm and the preliminary 2008/09 summer balance (-4980 mm w.e.) was more negative than any on record for the glacier. The 2008/09 glacier net balance (-1860 mm w.e.) was among the 10 most negative for the period of net balance record (1953-2009). Material presented here is preliminary in nature and presented prior to final review. These

  11. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    Image resampling is of particular interest in digital radiology. When an image is resampled to a new set of coordinates, blocking artifacts and image changes appear. Interpolation algorithms have been used to enhance image quality. Resampling is used to increase the number of points in an image to improve its appearance for display. Interpolation is the process of fitting a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained by Digora, CDR and scanning of Ektaspeed Plus periapical radiograms on a dry skull and a human subject. The subjects were exposed using an intraoral X-ray machine at 60 kVp and 70 kVp with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method would provide the better image, seven functions were compared: (1) nearest neighbor, (2) linear, (3) non-linear, (4) facet model, (5) cubic convolution, (6) cubic spline, and (7) gray segment expansion. The resampled images were compared in terms of SNR (signal-to-noise ratio) and MTF (modulation transfer function) coefficient values. The obtained results were as follows. 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora and film scan (P 0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. The better sharpness of edges was obtained with the gray segment expansion method.
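
Two of the compared resamplers, nearest neighbour and (bi)linear interpolation, are simple enough to sketch. The NumPy code below is an illustrative implementation, not the study's own; the cubic-convolution, facet-model and gray-segment variants are omitted.

```python
import numpy as np

def resample(img, factor, method="nearest"):
    """Upsample a 2-D image by an integer factor with nearest-neighbour or
    bilinear interpolation (two of the seven methods compared above)."""
    h, w = img.shape
    # coordinates of each output pixel in input space, clamped to the image
    ys = np.minimum(np.arange(h * factor) / factor, h - 1.0)
    xs = np.minimum(np.arange(w * factor) / factor, w - 1.0)
    if method == "nearest":
        return img[np.round(ys).astype(int)][:, np.round(xs).astype(int)]
    # bilinear: weighted average of the four surrounding input pixels
    y0 = np.minimum(np.floor(ys).astype(int), h - 2)
    x0 = np.minimum(np.floor(xs).astype(int), w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    tl, tr = img[y0][:, x0], img[y0][:, x0 + 1]
    bl, br = img[y0 + 1][:, x0], img[y0 + 1][:, x0 + 1]
    return (tl * (1 - wy) * (1 - wx) + tr * (1 - wy) * wx
            + bl * wy * (1 - wx) + br * wy * wx)

img = np.arange(9.0).reshape(3, 3)
up_nn = resample(img, 2, "nearest")
up_bl = resample(img, 2, "bilinear")
```

Both variants reproduce the original samples exactly at the original pixel positions; they differ only in how the in-between values are filled.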

  12. Cascade of chromosomal rearrangements caused by a heterogeneous T-DNA integration supports the double-stranded break repair model for T-DNA integration.

    Science.gov (United States)

    Hu, Yufei; Chen, Zhiyu; Zhuang, Chuxiong; Huang, Jilei

    2017-06-01

    Transferred DNA (T-DNA) from Agrobacterium tumefaciens can be integrated into the plant genome. The double-stranded break repair (DSBR) pathway is a major model for T-DNA integration. Under this model, the two ends of a T-DNA molecule are expected to invade either a single DNA double-stranded break (DSB) or independent DSBs in the plant genome. We call the latter phenomenon heterogeneous T-DNA integration; it had never been observed. In this work, we demonstrate it in an Arabidopsis T-DNA insertion mutant, seb19. To resolve the chromosomal structural changes caused by T-DNA integration at both the nucleotide and chromosome levels, we performed inverse PCR, genome resequencing, fluorescence in situ hybridization and linkage analysis. We found that, in seb19, a single T-DNA connected two different chromosomal loci and caused complex chromosomal rearrangements. The specific break-junction pattern in seb19 is consistent with heterogeneous T-DNA integration but not with recombination between two T-DNA insertions. We demonstrated that, in seb19, heterogeneous T-DNA integration evoked a cascade of incorrect repair of seven DSBs on chromosomes 4 and 5, producing translocation, inversion, duplication and deletion. Heterogeneous T-DNA integration supports the DSBR model and suggests that the two ends of a T-DNA molecule can be integrated into the plant genome independently. Our results also show a new origin of chromosomal abnormalities. © 2017 The Authors. The Plant Journal © 2017 John Wiley & Sons Ltd.

  13. Hybrid kriging methods for interpolating sparse river bathymetry point data

    Directory of Open Access Journals (Sweden)

    Pedro Velloso Gomes Batista

    Full Text Available ABSTRACT Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation varies abruptly transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to those obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
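
The regression-kriging idea above, a trend on the distance-to-centerline covariate plus ordinary kriging of the residuals, can be sketched as follows. The exponential semivariogram and its sill/range parameters are assumptions for illustration; a real application would fit the variogram to the residuals.

```python
import numpy as np

def regression_kriging(coords, z, covariate, pred_coords, pred_cov,
                       sill=1.0, rng=50.0):
    """Minimal regression-kriging sketch: a linear trend on the covariate
    plus ordinary kriging of the residuals with an assumed exponential
    semivariogram."""
    # 1. linear trend on the covariate
    A = np.c_[np.ones(len(z)), covariate]
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ beta

    def gamma(h):  # exponential semivariogram (illustrative parameters)
        return sill * (1.0 - np.exp(-h / rng))

    n = len(z)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    # 2. ordinary-kriging system with the Lagrange-multiplier row/column
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = gamma(d)
    K[n, :n] = K[:n, n] = 1.0
    preds = []
    for p, c in zip(pred_coords, pred_cov):
        g = np.append(gamma(np.linalg.norm(coords - p, axis=1)), 1.0)
        w = np.linalg.solve(K, g)[:n]
        preds.append(np.array([1.0, c]) @ beta + w @ resid)
    return np.array(preds)

coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
cov = np.array([0.0, 1.0, 1.0, 2.0, 0.5])     # e.g. distance to centerline
z = np.array([2.1, 4.8, 5.05, 8.0, 3.6])      # bed elevations (made up)
pred = regression_kriging(coords, z, cov, coords[:2], cov[:2])
```

With a zero-nugget variogram the predictor is exact at the data locations, which is a handy sanity check.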

  14. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...
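
The truncated abstract does not spell out the transfinite method evaluated; a classic Coons-style transfinite blend, which reproduces all four boundary curves of a grid cell exactly, is one generic formulation and looks like this (an assumed, illustrative version):

```python
import numpy as np

def transfinite(f_bottom, f_top, f_left, f_right, corners, u, v):
    """Coons-style transfinite interpolation on the unit square: blend the
    four boundary curves and subtract the doubly counted bilinear corner
    term, so every boundary value is reproduced exactly."""
    c00, c10, c01, c11 = corners  # f(0,0), f(1,0), f(0,1), f(1,1)
    return ((1 - v) * f_bottom(u) + v * f_top(u)
            + (1 - u) * f_left(v) + u * f_right(v)
            - ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
               + (1 - u) * v * c01 + u * v * c11))

# demo boundaries taken from an arbitrary smooth function
f = lambda u, v: np.sin(3 * u) + u * v + v ** 2
coons = lambda u, v: transfinite(
    lambda s: f(s, 0.0), lambda s: f(s, 1.0),
    lambda s: f(0.0, s), lambda s: f(1.0, s),
    (f(0.0, 0.0), f(1.0, 0.0), f(0.0, 1.0), f(1.0, 1.0)), u, v)
```

Applied cell by cell to a grid of orthogonal line scans, such a blend fills each cell while agreeing with the measured intensities along all bounding lines.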

  15. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea is extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. Sufficient conditions for positivity are derived on one parameter, γi, while the other two parameters, αi and βi, are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been made in detail. From all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared with some established rational cubic schemes. An error analysis for the case when the function to be interpolated is f(t)∈C3[t0,tn] is also investigated in detail.
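
The motivating failure, that an ordinary C2 cubic spline cannot preserve positivity, is easy to demonstrate. The sketch below hand-rolls a natural cubic spline on made-up strictly positive data with a spike; the spline undershoots below zero between the flat samples.

```python
import numpy as np

def natural_cubic_spline(x, y, t):
    """Evaluate the natural C2 cubic spline through (x_i, y_i) at points t
    (uniform knot spacing assumed for brevity)."""
    n = len(x)
    h = x[1] - x[0]
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # natural BC: S'' = 0 at the ends
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = (1.0, 4.0, 1.0)
        rhs[i] = 6.0 * (y[i - 1] - 2 * y[i] + y[i + 1]) / h ** 2
    M = np.linalg.solve(A, rhs)        # second derivatives at the knots
    i = np.clip(np.searchsorted(x, t) - 1, 0, n - 2)
    u, v = x[i + 1] - t, t - x[i]
    return ((M[i] * u ** 3 + M[i + 1] * v ** 3) / (6 * h)
            + (y[i] - M[i] * h ** 2 / 6) * u / h
            + (y[i + 1] - M[i + 1] * h ** 2 / 6) * v / h)

# strictly positive data with a spike (illustrative, not from the paper)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.05, 1.0, 0.05, 0.05, 0.05])
t = np.linspace(0.0, 4.0, 401)
s = natural_cubic_spline(x, y, t)      # dips below zero near x = 2.5
```

A shape-preserving rational scheme like the one above is designed to keep such curves non-negative while retaining C2 smoothness.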

  16. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  17. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed. It saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
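
For reference, fractional-sample interpolation in HEVC is built on fixed 8-tap DCT-based filters. The sketch below applies what are believed to be the standard's luma half-sample coefficients in floating point; hardware uses integer arithmetic with a 6-bit shift, since the taps sum to 64.

```python
import numpy as np

# HEVC 8-tap luma half-sample interpolation filter (taps sum to 64)
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

def interp_half_pel(row):
    """Interpolate the half-sample positions of a 1-D row of luma samples.
    Output k corresponds to the position midway between samples k+3 and
    k+4 of the input (border samples are consumed by the filter support)."""
    return np.convolve(row, HALF_PEL[::-1], mode="valid") / 64.0
```

On a linear ramp the filter lands exactly on the midpoints, and a constant signal passes through unchanged, two quick sanity checks on the normalization.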

  18. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition defining term. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter ...

  19. Convergence of trajectories in fractal interpolation of stochastic processes

    International Nuclear Information System (INIS)

    Małysz, Robert

    2006-01-01

    The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation.
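
A deterministic fractal interpolation function is straightforward to construct numerically: the data points and vertical scaling factors determine an affine iterated function system whose attractor is the graph of the FIF. The sketch below (illustrative, not from the paper) approximates that graph by iterating the maps on the data points.

```python
import numpy as np

def fif_attractor(x, y, d, n_iter=6):
    """Approximate the graph of a fractal interpolation function for data
    (x_i, y_i) with vertical scaling factors d_i (|d_i| < 1) by repeatedly
    applying the affine IFS maps to the data points."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    maps = []
    for i in range(1, len(x)):
        a = (x[i] - x[i - 1]) / (xN - x0)           # L_i maps [x0,xN] to [x_{i-1},x_i]
        e = (xN * x[i - 1] - x0 * x[i]) / (xN - x0)
        c = (y[i] - y[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
        f = y[i - 1] - c * x0 - d[i - 1] * y0
        maps.append((a, e, c, d[i - 1], f))
    pts = np.column_stack([x, y]).astype(float)
    for _ in range(n_iter):                          # point count grows geometrically
        pts = np.vstack([np.column_stack([a * pts[:, 0] + e,
                                          c * pts[:, 0] + dd * pts[:, 1] + f])
                         for a, e, c, dd, f in maps])
    return pts

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.5, 0.2, 0.8])
pts = fif_attractor(x, y, d=[0.3, 0.3, 0.3])
```

By construction the maps carry the endpoints of the data onto every interpolation point, so the attractor passes through all (x_i, y_i); more iterations densify the graph.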

  20. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

  1. Evaluation of Teeth and Supporting Structures on Digital Radiograms using Interpolation Methods

    International Nuclear Information System (INIS)

    Koh, Kwang Joon; Chang, Kee Wan

    1999-01-01

    To determine the effect of interpolation functions when processing digital periapical images, digital images were obtained with the Digora and CDR systems on a dry skull and a human subject. Three oral radiologists evaluated three portions of each image processed using seven interpolation methods, and ROC curves were obtained by the trapezoidal method. The highest Az value (0.96) was obtained with the cubic spline method and the lowest Az value (0.03) with the facet model method in the Digora system. The highest Az value (0.79) was obtained with the gray segment expansion method and the lowest Az value (0.07) with the facet model method in the CDR system. There was a significant difference in the Az value for the original image between the Digora and CDR systems at the alpha=0.05 level. There were significant differences in Az values between Digora and CDR images with the cubic spline, facet model, linear interpolation and non-linear interpolation methods at the alpha=0.1 level.

  2. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
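
The baseline that the architecture improves upon, linear inter-sample interpolation inside a delay-and-sum beamformer, can be sketched as follows. This is a toy illustration with made-up delay handling, not the proposed hardware design.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Toy delay-and-sum beamformer: shift each channel by a possibly
    fractional number of samples using linear inter-sample interpolation,
    then sum the channels. Samples shifted in from outside are zero."""
    n_ch, n_s = channels.shape
    t = np.arange(n_s, dtype=float)
    out = np.zeros(n_s)
    for ch, d in zip(channels, delays):
        # value of ch at time t - d, linearly interpolated between samples
        out += np.interp(t - d, t, ch, left=0.0, right=0.0)
    return out

sig = np.sin(2 * np.pi * 0.05 * np.arange(64))
focused = delay_and_sum(np.vstack([sig, sig]), [0.0, 2.0])
```

A band-pass interpolation filter replaces the two-tap linear kernel with a longer filter matched to the echo band, which is where the multiplication savings quoted above come in.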

  3. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

    Directory of Open Access Journals (Sweden)

    Qiang DU

    2018-04-01

    Full Text Available Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, based on Shannon sampling and utilizing Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with the Caratheodory representation. The samples of the differentiated signal are obtained by the Caratheodory representation from the samples of the original signal only, so only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
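
A minimal fractional delayer using 4-point Lagrange interpolation, a common simple alternative (the paper's Hermite/Caratheodory scheme is more elaborate), might look like this:

```python
import numpy as np

def frac_delay(x, d):
    """Delay x by a fractional number of samples d (0 < d < 1) using
    4-point (cubic) Lagrange interpolation. Samples near the ends wrap
    around, so only the interior of the output is meaningful."""
    u = 1.0 - d                       # position between x[i-1] and x[i]
    nodes = (-1.0, 0.0, 1.0, 2.0)
    w = [np.prod([(u - m) / (k - m) for m in nodes if m != k])
         for k in nodes]
    y = np.zeros_like(x)
    for wk, k in zip(w, (-1, 0, 1, 2)):
        y += wk * np.roll(x, 1 - k)   # contributes x[i - 1 + k]
    return y

t = np.arange(200, dtype=float)
x = np.sin(2 * np.pi * 0.02 * t)      # well below the Nyquist rate
y = frac_delay(x, 0.3)                # y[i] ≈ x(i - 0.3)
```

As the abstract notes, the weak spot of such polynomial interpolators is frequencies near the Nyquist rate; for the low-frequency sine used here the error is tiny.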

  4. Improvement of Hydrological Simulations by Applying Daily Precipitation Interpolation Schemes in Meso-Scale Catchments

    Directory of Open Access Journals (Sweden)

    Mateusz Szcześniak

    2015-02-01

    Full Text Available Ground-based precipitation data are still the dominant input type for hydrological models. Spatial variability in precipitation can be represented by spatially interpolating gauge data using various techniques. In this study, the effect of daily precipitation interpolation methods on discharge simulations using the semi-distributed SWAT (Soil and Water Assessment Tool) model over a 30-year period is examined. The study was carried out in 11 meso-scale (119–3935 km2) sub-catchments lying in the Sulejów reservoir catchment in central Poland. Four methods were tested: the default SWAT method (Def) based on the Nearest Neighbour technique, Thiessen Polygons (TP), Inverse Distance Weighted (IDW) and Ordinary Kriging (OK). The evaluation of methods was performed using a semi-automated calibration program, SUFI-2 (Sequential Uncertainty Fitting Procedure Version 2), with two objective functions: Nash-Sutcliffe Efficiency (NSE) and the adjusted R2 coefficient (bR2). The results show that: (1) the most complex OK method outperformed the other methods in terms of NSE; and (2) OK, IDW and TP outperformed Def in terms of bR2. The median difference in daily/monthly NSE between OK and Def/TP/IDW calculated across all catchments ranged between 0.05 and 0.15, while the median difference between TP/IDW/OK and Def ranged between 0.05 and 0.07. The differences between pairs of interpolation methods were, however, spatially variable, and part of this variability was attributed to catchment properties: catchments characterised by low station density and a low coefficient of variation of daily flows experienced more pronounced improvement from using interpolation methods. Methods providing higher precipitation estimates often resulted in better model performance. The implication from this study is that appropriate consideration of spatial precipitation variability (often neglected by model users), which can be achieved using relatively simple interpolation methods, can improve the quality of discharge simulations.
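
Two of the tested techniques, Thiessen polygons (nearest-gauge assignment) and IDW, are simple enough to sketch directly. This is illustrative NumPy, not the SWAT implementation; the power exponent is the conventional 2.

```python
import numpy as np

def idw(stations, values, grid_pts, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation of gauge values."""
    d = np.linalg.norm(grid_pts[:, None, :] - stations[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power
    z = (w * values).sum(axis=1) / w.sum(axis=1)
    hit = d.min(axis=1) < eps          # exact hits: take the gauge value
    z[hit] = values[d.argmin(axis=1)[hit]]
    return z

def thiessen(stations, values, grid_pts):
    """Thiessen-polygon interpolation: each point takes the nearest gauge."""
    d = np.linalg.norm(grid_pts[:, None, :] - stations[None, :, :], axis=-1)
    return values[d.argmin(axis=1)]

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
values = np.array([1.0, 2.0, 3.0])   # daily precipitation totals (made up)
```

Both interpolators honour the gauge values exactly at the gauge locations; IDW additionally blends neighbouring gauges in between, which is the kind of smoothing that improved the discharge simulations above.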

  5. Interrelation of structure and operational states in cascading failure of overloading lines in power grids

    Science.gov (United States)

    Xue, Fei; Bompard, Ettore; Huang, Tao; Jiang, Lin; Lu, Shaofeng; Zhu, Huaiying

    2017-09-01

    As the modern power system is expected to develop into a more intelligent and efficient version, i.e. the smart grid, or to become the central backbone of the energy internet for free energy interactions, security concerns related to cascading failures have been raised in view of their potentially catastrophic results. Topological analysis based on complex networks has made great contributions to revealing structural vulnerabilities of power grids, including cascading failure analysis. However, existing literature with inappropriate modeling assumptions still cannot distinguish the effects of structure from those of the operational state, and so cannot give meaningful guidance for system operation. This paper reveals the interrelation between network structure and operational states in cascading failure and gives a quantitative evaluation integrating both perspectives. For structural analysis, cascading paths are identified by extended betweenness and quantitatively described by cascading drop and cascading gradient. The operational state along cascading paths is described by loading level. The risk of cascading failure along a specific cascading path can then be quantitatively evaluated considering these two factors. The maximum cascading gradient over all possible cascading paths can be used as an overall metric to evaluate the entire power grid with respect to cascading failure. The proposed method is tested and verified on the IEEE 30-bus and IEEE 118-bus systems; simulation evidence presented in this paper suggests that the proposed model can identify the structural causes of cascading failure and is promising for giving meaningful guidance for the protection of system operation in the future.

  6. A cluster randomized trial of alcohol prevention in small businesses: a cascade model of help seeking and risk reduction.

    Science.gov (United States)

    Reynolds, G Shawn; Bennett, Joel B

    2015-01-01

    The current study adapted two workplace substance abuse prevention programs and tested a conceptual model of workplace training effects on help seeking and alcohol consumption. Questionnaires were collected 1 month before, 1 month after, and 6 months after training within a cluster randomized field experiment in Texas small businesses in the construction, transportation, and service industries. A total of 1510 employees from 45 businesses were randomly assigned to receive no training or one of the interventions. The interventions were 4-hour on-the-job classroom trainings that encouraged healthy lifestyles and seeking professional help (e.g., from the Employee Assistance Program [EAP]). The Team Awareness program focused on peer referral and team building. The Choices in Health Promotion program delivered various health topics based on a needs assessment. Questionnaires measured help-seeking attitudes and behavior, frequency of drinking alcohol, and job-related incidents. Mixed-model repeated-measures analyses of covariance were computed. Relative to the control group, training was associated with significantly greater reductions in drinking frequency, greater willingness to seek help, and more help seeking from the EAP. After including help-seeking attitudes as a covariate, the correlation between training and help seeking became nonsignificant. Help-seeking behavior was not correlated with drinking frequency. Training improved help-seeking attitudes and behaviors and decreased alcohol risks. The reductions in drinking alcohol were directly correlated with training and independent of help seeking.

  7. Cascading Corruption News

    DEFF Research Database (Denmark)

    Damgaard, Mads

    2018-01-01

    Through a content analysis of 8,800 news items and six months of front pages in three Brazilian newspapers, all dealing with corruption and political transgression, this article documents the remarkable skew of media attention toward corruption scandals. The bias is examined as an information phenomenon, arising from systemic and commercial factors of Brazil's news media: an information cascade of news on corruption formed, destabilizing the governing coalition and legitimizing the impeachment process of Dilma Rousseff. As this process gained momentum, questions of accountability were disregarded by the media, with harmful effects on democracy.

  9. The cascade-probabilistic method of particles-through-materials propagation modelling. Connection with the Markov’s chains, Boltzmann equations

    International Nuclear Information System (INIS)

    Kupchishin, A.I.

    2015-01-01

    The work was performed within the context of the cascade-probabilistic method, the essence of which is to obtain and then apply cascade-probability functions (CPF) for different particles. A CPF gives the probability that a particle generated at some depth h' reaches a certain depth h after the n-th collision. We consider the interaction of particles with solids and the relationship between radiation defect formation processes, Markov processes and Markov chains. It is shown how to obtain the recurrence relations for the simplest CPF from the Chapman-Kolmogorov equations. (authors)
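
For the illustrative special case of a constant mean free path λ, the simplest CPF reduces to a Poisson distribution in depth, and the Chapman-Kolmogorov-style recurrence can be checked numerically (the values of λ and h below are made up):

```python
import numpy as np
from math import exp, factorial

lam = 2.0                        # assumed constant mean free path
h = 5.0                          # target depth
s = np.linspace(0.0, h, 4001)    # candidate depths of the n-th collision

def cpf(n, depth):
    """Closed-form CPF for a constant mean free path: the probability of
    exactly n collisions over `depth` is Poisson with mean depth/lam."""
    return (depth / lam) ** n * exp(-depth / lam) / factorial(n)

def cpf_step(n):
    """One step of the Chapman-Kolmogorov-style recurrence: the n-th
    collision occurs at depth s' (density P_{n-1}(s')/lam) and the
    particle then travels from s' to h collision-free."""
    pm1 = np.array([cpf(n - 1, si) for si in s])
    integrand = pm1 * np.exp(-(h - s) / lam) / lam
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))
```

The recurrence reproduces the closed form, and the CPFs sum to one over the number of collisions, as probabilities must.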

  10. Computing Diffeomorphic Paths for Large Motion Interpolation.

    Science.gov (United States)

    Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C

    2013-06-01

    In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, mainly owing to the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(M) to the quotient space Diff(M)/Diff(M)μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)μ. This quotient space was recently identified in the mathematics literature as the unit sphere in a Hilbert space, a space with well-known geometric properties. Our framework leverages this result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms by solving a quadratic programming problem with bilinear constraints, using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms while, first, staying in the space of diffeomorphisms and, second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.
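
The first stage, a geodesic between two points on a unit sphere, is ordinary spherical linear interpolation. A minimal finite-dimensional sketch (the paper works on an infinite-dimensional Hilbert sphere, so this is only an analogy):

```python
import numpy as np

def slerp(p, q, t):
    """Geodesic (great-circle) interpolation between unit vectors p and q
    at parameter t in [0, 1]."""
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if theta < 1e-12:                 # nearly identical directions
        return p
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
mid = slerp(p, q, 0.5)
```

Every point on the path stays on the sphere, which is the analogue of the lifted path staying in the space of diffeomorphisms.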

  11. Functions with disconnected spectrum sampling, interpolation, translates

    CERN Document Server

    Olevskii, Alexander M

    2016-01-01

    The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

  12. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive-format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear-filtering approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher-quality but more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between, and weight, control-grid-interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state of the art based on peak signal-to-noise ratio, other visual-quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
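
A drastically simplified motion-adaptive deinterlacer illustrates the choose-and-weight idea. This is a toy stand-in for the spectral-residue/control-grid method; the motion measure and threshold here are invented for illustration.

```python
import numpy as np

def deinterlace(prev_frame, field, motion_thresh=10.0):
    """Fill the odd lines of a frame whose even lines come from the current
    field: temporally (copy from the previous frame) where the scene looks
    static, spatially (line averaging) where motion is detected."""
    h, w = prev_frame.shape           # h assumed even; field has h//2 rows
    out = np.empty((h, w))
    out[0::2] = field
    for r in range(1, h, 2):
        above = out[r - 1]
        below = out[r + 1] if r + 1 < h else out[r - 1]
        spatial = 0.5 * (above + below)          # intra-field estimate
        temporal = prev_frame[r]                 # inter-field estimate
        # crude per-pixel motion measure from the line above
        motion = np.abs(above - prev_frame[r - 1])
        wgt = np.clip(motion / motion_thresh, 0.0, 1.0)
        out[r] = wgt * spatial + (1.0 - wgt) * temporal
    return out

rng = np.random.default_rng(1)
prev = rng.uniform(0, 255, (8, 6))
recon = deinterlace(prev, prev[0::2])   # static scene: perfect recovery
```

On a static scene the motion measure vanishes and the temporal branch reconstructs the frame exactly, which is the behaviour the weighting is designed to preserve.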

  13. Forecasting Social Unrest Using Activity Cascades.

    Science.gov (United States)

    Cadena, Jose; Korkmaz, Gizem; Kuhlman, Chris J; Marathe, Achla; Ramakrishnan, Naren; Vullikanti, Anil

    2015-01-01

    Social unrest is endemic in many societies, and recent news has drawn attention to happenings in Latin America, the Middle East, and Eastern Europe. Civilian populations mobilize, sometimes spontaneously and sometimes in an organized manner, to raise awareness of key issues or to demand changes in governing or other organizational structures. It is of key interest to social scientists and policy makers to forecast civil unrest using indicators observed on media such as Twitter, news, and blogs. We present an event forecasting model using a notion of activity cascades in Twitter (proposed by Gonzalez-Bailon et al., 2011) to predict the occurrence of protests in three countries of Latin America: Brazil, Mexico, and Venezuela. The basic assumption is that the emergence of a suitably detected activity cascade is a precursor or a surrogate to a real protest event that will happen "on the ground." Our model supports the theoretical characterization of large cascades using spectral properties and uses properties of detected cascades to forecast events. Experimental results on many datasets, including the recent June 2013 protests in Brazil, demonstrate the effectiveness of our approach.
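
A toy version of activity-cascade detection, linking each post to recent posts by followed users and grouping linked posts into cascades, can be sketched as follows. This is illustrative only; the paper's detection and spectral characterization are considerably more involved.

```python
from collections import defaultdict

def detect_cascades(follows, posts, window):
    """Group time-ordered posts into cascades: post i joins the cascade of
    an earlier post j if i's author follows j's author and the two posts
    are within `window` time units. Returns cascade sizes, largest first.
    `posts` is a list of (time, user) sorted by time; `follows` maps a
    user to the set of users they follow."""
    parent = {}
    def find(i):                      # union-find with path shortening
        while parent.get(i, i) != i:
            parent[i] = parent.get(parent[i], parent[i])
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i, (ti, ui) in enumerate(posts):
        for j in range(i - 1, -1, -1):
            tj, uj = posts[j]
            if ti - tj > window:
                break
            if uj in follows.get(ui, set()):
                union(i, j)
    sizes = defaultdict(int)
    for i in range(len(posts)):
        sizes[find(i)] += 1
    return sorted(sizes.values(), reverse=True)

follows = {"b": {"a"}, "c": {"b"}, "e": {"d"}}
posts = [(0, "a"), (1, "b"), (2, "c"), (10, "d"), (30, "e")]
sizes = detect_cascades(follows, posts, window=5)
```

In the demo, a, b and c post in quick succession along follow edges and form one three-post cascade, while d and e are too far apart in time to link.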

  15. Parton-hadron cascade approach at SPS and RHIC

    Energy Technology Data Exchange (ETDEWEB)

    Nara, Yasushi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-07-01

    A parton-hadron cascade model, an extension of the hadronic cascade model incorporating hard partonic scattering based on HIJING, is presented to describe the space-time evolution of the parton/hadron system produced in ultra-relativistic nuclear collisions. Hadron yield, baryon stopping and transverse momentum distributions are calculated and compared with HIJING and VNI. Baryon density, energy density and temperature for RHIC are calculated within this model. (author)

  16. Availability Cascades & the Sharing Economy

    DEFF Research Database (Denmark)

    Netter, Sarah

    2014-01-01

In search of a new concept that will provide answers as to how modern societies should not only make sense of but also resolve the social and environmental problems linked with our modes of production and consumption, collaborative consumption and the sharing economy are increasingly attracting attention. This conceptual paper attempts to explain the emergent focus on the sharing economy and associated business and consumption models by applying cascade theory. Risks associated with this behavior will be especially examined with regard to the sustainability claim of collaborative consumption. With academics, practitioners, and civil society alike having a shared history of being rather quick to accept new concepts that provide not only business opportunities but also a good conscience, this study proposes a critical examination of the implications of collaborative consumption, before engaging...

  17. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed diffusion tensors, with the direction of each tensor represented by a quaternion. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on both simulated and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  18. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance-transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
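The distance-transform recipe described above is easy to sketch. Below is a minimal Python illustration of binary shape-based interpolation between two slices, assuming SciPy's `distance_transform_edt` is available; the circle masks are illustrative stand-ins for real CT slices, not data from the paper.

```python
# Sketch of binary shape-based interpolation between two object masks:
# signed distance transform -> linear interpolation -> threshold at zero.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Positive inside the object, negative outside (the paper's convention)."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def shape_based_interpolate(mask_a, mask_b, t):
    """Interpolate a binary slice at fraction t between mask_a and mask_b."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    return (1 - t) * da + t * db >= 0

# Two circles of different radii as stand-in slices.
yy, xx = np.mgrid[0:64, 0:64]
small = (xx - 32) ** 2 + (yy - 32) ** 2 <= 8 ** 2
large = (xx - 32) ** 2 + (yy - 32) ** 2 <= 16 ** 2
mid = shape_based_interpolate(small, large, 0.5)
# The interpolated object lies between the two input shapes in size.
print(small.sum() < mid.sum() < large.sum())
```

The midway slice is a circle of intermediate radius, which is exactly the behavior that makes the method "shape-based" rather than a grey-level blend.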

  19. Search for QGP signals at AGS with a TPC spectrometer, and comparison of our event generator predictions for plasma model and cascade interactions

    International Nuclear Information System (INIS)

    Lindenbaum, S.J.; Foley, K.J.; Eiseman, S.E.

    1988-01-01

    We have developed and successfully tested a TPC Magnetic Spectrometer to search for QGP signals produced by ion beams at AGS. We also developed a cascade and plasma event generator the predictions of which are used to illustrate how our technique can detect possible plasma signals. 4 refs., 6 figs., 1 tab

  20. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, showing that it is highly suited for real-time applications.
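The stepwise logic behind such CNC interpolators can be illustrated in software. The sketch below is a generic minimum-error-style unit-step line interpolator in Python, not the paper's FPGA design; the function name and the cross-product error metric are illustrative assumptions.

```python
# Illustrative software model of stepwise CNC interpolation for a line
# segment: at each tick the interpolator emits a unit step on one axis,
# choosing the step that minimizes deviation from the ideal curve.
def interpolate_line(x1, y1):
    """Generate unit steps from (0, 0) to (x1, y1), with x1 >= y1 >= 0."""
    x = y = 0
    steps = []
    while (x, y) != (x1, y1):
        # Candidate steps: +x or +y; pick the one closer to the ideal
        # line y*x1 = x*y1 (errors measured by the cross product).
        err_x = abs(y * x1 - (x + 1) * y1)
        err_y = abs((y + 1) * x1 - x * y1)
        if x < x1 and (err_x <= err_y or y >= y1):
            x += 1
        else:
            y += 1
        steps.append((x, y))
    return steps

path = interpolate_line(5, 3)
print(path[-1])  # the path ends exactly at the target point
```

Hardware interpolators implement the same decision rule with incremental adders so each step costs a fixed, small number of clock cycles.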

  1. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    Science.gov (United States)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changes of magnitude and the space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000; 27 years of monthly average precipitation data were obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides greater accuracy than the non-transformed hierarchical Bayesian method.
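The Box-Cox preprocessing step can be sketched in a few lines. Below, SciPy's `boxcox` is applied to synthetic skewed rainfall-like values (gamma-distributed, purely illustrative, not the Pakistan station data) to show how the transform reduces skewness before interpolation.

```python
# Box-Cox transform of a skewed, strictly positive variable, the
# preprocessing step used before hierarchical Bayesian interpolation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "monthly rainfall": gamma-distributed, right-skewed.
rain = rng.gamma(shape=2.0, scale=30.0, size=500) + 1e-6

transformed, lam = stats.boxcox(rain)  # lam is the fitted Box-Cox exponent
print(abs(stats.skew(transformed)) < abs(stats.skew(rain)))
```

Interpolation is then carried out on the transformed values, and predictions are mapped back through the inverse transform with the fitted exponent.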

  2. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    Science.gov (United States)

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE

  3. APPLICABILITY OF VARIOUS INTERPOLATION APPROACHES FOR HIGH RESOLUTION SPATIAL MAPPING OF CLIMATE DATA IN KOREA

    Directory of Open Access Journals (Sweden)

    A. Jo

    2018-04-01

The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing the results with forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data for 2017 obtained from KMA were used for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31st March, 21st June, 23rd September, and 24th December. Of the observation data, 80 percent of the total points (478) were used for interpolation and the remaining 120 points for validation. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.
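Of the methods compared, IDW is the simplest to sketch. The snippet below is a minimal NumPy implementation with an illustrative power parameter of 2; it is not the authors' ArcGIS/Python setup, and the station layout is invented for the example.

```python
# Minimal inverse distance weighting (IDW) interpolator for scattered data.
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered observations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer stations weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Four stations at the unit-square corners with temperature-like values.
obs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
temp = np.array([10.0, 12.0, 14.0, 16.0])
center = idw(obs, temp, np.array([[0.5, 0.5]]))
print(center)  # equidistant query point -> simple average, [13.]
```

Kriging and co-kriging replace these purely geometric weights with weights derived from a fitted variogram (and, for co-kriging, from covariates such as elevation), which is why they can outperform IDW for smoothly varying fields like temperature.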

  4. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work "New approach to q-Euler, Genocchi numbers and their interpolation functions" (Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009), Kim defined new generating functions of q-Genocchi and q-Euler polynomials, and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  5. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
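Zero fill itself is compact to demonstrate. The NumPy sketch below pads the spectrum of a band-limited signal with zeros and inverts the FFT, showing that the original samples reappear on the denser grid; the test signal and padding factor are illustrative, and the convolution alternative is not shown.

```python
# Spectral interpolation by zero fill: pad the FFT with zeros, then
# take a longer inverse FFT to sample the same signal more densely.
import numpy as np

n, factor = 32, 4
t = np.arange(n)
x = np.sin(2 * np.pi * 3 * t / n)          # band-limited test signal

X = np.fft.fft(x)
# Insert zeros between the positive- and negative-frequency halves.
X_padded = np.concatenate([X[: n // 2], np.zeros((factor - 1) * n), X[n // 2 :]])
x_dense = np.fft.ifft(X_padded).real * factor  # rescale for the longer IFFT

# Every factor-th sample of the interpolated signal matches the original.
print(np.allclose(x_dense[::factor], x))
```

The cost is a full length-`factor*n` inverse FFT and its memory; interpolation by repetitive convolution trades this for a short kernel applied only where interpolated values are needed, which is the efficiency argument the abstract makes.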

  6. Steady State Stokes Flow Interpolation for Fluid Control

    DEFF Research Database (Denmark)

    Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

    2012-01-01

Fluid control methods often require surface velocities interpolated throughout the interior of a shape, to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics (velocity extrapolation in the normal direction and potential flow) suffer from a common problem: they fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...

  7. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is shown to be O(h^3). Numerical experiments demonstrate that our method can construct nice shape-preserving interpolation curves efficiently.

  8. Simulation of short-term annealing of displacement cascades in FCC metals

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Doran, D.G.; Schwartz, D.M.

    1980-01-01

Computer models have been developed for the simulation of high energy displacement cascades. The objective is the generation of defect production functions for use in correlation analysis of radiation effects in fusion reactor materials. In particular, the stochastic cascade annealing simulation code SCAS has been developed and used to model the short-term annealing behavior of simulated cascades in FCC metals. The code is fast enough to make annealing of high energy cascades practical. Sets of cascades from 5 keV to 100 keV in copper were generated by the binary collision code MARLOWE.

  9. A comparison of different interpolation methods for wind data in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2017-04-01

For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high-resolution data are not available. This is particularly true for many regions in Central Asia, where the network of climatological stations is often sparse. Given this insufficient data basis, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial basis for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here compares different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method that can be applied equally to all pressure levels, or whether different interpolation methods have to be applied at each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, which consider additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machines, neural networks and ordinary kriging. Inverse distance weighting showed the worst results.

  10. Chiral properties of baryon interpolating fields

    International Nuclear Information System (INIS)

    Nagata, Keitaro; Hosaka, Atsushi; Dmitrasinovic, V.

    2008-01-01

We study the chiral transformation properties of all possible local (non-derivative) interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We derive and use the relations/identities among the baryon operators with identical quantum numbers that follow from the combined color, Dirac and isospin Fierz transformations. These relations reduce the number of independent baryon operators with any given spin and isospin. The Fierz identities also effectively restrict the allowed baryon chiral multiplets. It turns out that the non-derivative baryons' chiral multiplets have the same dimensionality as their Lorentz representations. For the two independent nucleon operators the only permissible chiral multiplet is the fundamental one, (1/2,0)+(0,1/2). For the Δ, admissible Lorentz representations are (1,1/2)+(1/2,1) and (3/2,0)+(0,3/2). In the case of the (1,1/2)+(1/2,1) chiral multiplet, the I(J) = 3/2(3/2) Δ field has one I(J) = 1/2(3/2) chiral partner; otherwise it has none. We also consider the Abelian (U_A(1)) chiral transformation properties of the fields and show that each baryon comes in two varieties: (1) with Abelian axial charge +3; and (2) with Abelian axial charge -1. In the case of the nucleon these are the two Ioffe fields; in the case of the Δ, the (1,1/2)+(1/2,1) multiplet has Abelian axial charge -1 and the (3/2,0)+(0,3/2) multiplet has Abelian axial charge +3. (orig.)

  11. MODIS Snow Cover Recovery Using Variational Interpolation

    Science.gov (United States)

    Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

Cloud obscuration is one of the major problems that limit the use of satellite images in general, and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational deficiency is a main drawback that impedes applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking up when applied to much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method; this ability is obtained by maintaining the distribution of the weights set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images have high accuracy in capturing the dynamical changes of snow, in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which provides an overview of snow trends over CONUS for nearly two decades. ACKNOWLEDGMENTS: We would like to acknowledge NASA, NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), Cooperative Institute for Climate and Satellites (CICS), Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.
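The MINRES substitution can be sketched with SciPy. The matrix below is a toy symmetric system standing in for the variational interpolation equations; it is purely illustrative of the iteration, not the VI system itself.

```python
# Solving a symmetric linear system with MINRES iteration, the solver
# the modified VI algorithm adopts for large-scale tractability.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

n = 200
# Symmetric, diagonally dominant test matrix (a smoothing-type operator).
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))
b = np.ones(n)

x, info = minres(A, b)          # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
print(info == 0 and residual < 1e-2)
```

Because MINRES only needs matrix-vector products with a sparse operator, memory stays linear in the number of unknowns, which is what makes continental-scale grids feasible.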

  12. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface which appears to have peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface which appears to have dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, or 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic, with prominent randomness, which is suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method...
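The first of the two compared methods, midpoint displacement, can be sketched in one dimension. The parameters below (Hurst exponent, recursion depth, seed) are illustrative, not those used in the study.

```python
# One-dimensional midpoint displacement: recursively set each midpoint
# to the neighbor average plus a random offset that shrinks per level.
import numpy as np

def midpoint_displacement(levels, hurst=0.5, seed=0):
    """Fractal curve on 2**levels + 1 points; roughness set by hurst in (0, 1)."""
    rng = np.random.default_rng(seed)
    n = 2 ** levels
    y = np.zeros(n + 1)
    scale = 1.0
    step = n
    while step > 1:
        half = step // 2
        mids = np.arange(half, n, step)
        # New midpoints: average of the two neighbors plus a random offset.
        y[mids] = 0.5 * (y[mids - half] + y[mids + half]) + rng.normal(0, scale, mids.size)
        scale *= 2 ** -hurst      # smaller hurst -> rougher curve
        step = half
    return y

curve = midpoint_displacement(8)
print(curve.size)  # 257 points
```

The per-level shrink factor ties the Hurst exponent to the fractal dimension (D = 2 - H for a curve), which is how the method targets a given roughness.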

  13. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  14. SCC: Semantic Context Cascade for Efficient Action Detection

    KAUST Repository

    Heilbron, Fabian Caba; Barrios, Wayner; Escorcia, Victor; Ghanem, Bernard

    2017-01-01

...in videos. Existing approaches have mitigated the computational cost, but these methods still lack the rich high-level semantics that would help them localize actions quickly. In this paper, we introduce a Semantic Context Cascade (SCC) model that aims...

  15. Cascading Generative Adversarial Networks for Targeted

    KAUST Repository

    Hamdi, Abdullah

    2018-01-01

Abundance of labelled data played a crucial role in the recent developments in computer vision, but that approach faces problems like scalability and transferability to the wild. One alternative is to utilize the data without labels, i.e. unsupervised learning, to learn valuable information and put it to use in tackling vision problems. Generative Adversarial Networks (GANs) have gained momentum for their ability to model image distributions in an unsupervised manner. They learn to emulate the training set, which enables sampling from that domain and using the learned knowledge for useful applications. Several methods have been proposed to enhance GANs, including regularizing the loss with some feature matching. We seek to push GANs beyond the data in the training set and try to explore unseen territory in the image manifold. We first propose a new regularizer for GANs based on K-Nearest Neighbor (K-NN) selective feature matching to a target set Y in high-level feature space, during the adversarial training of the GAN on the base set X, and we call this novel model K-GAN. We show that minimizing the added term follows from cross-entropy minimization between the distributions of the GAN and the set Y. Then, we introduce a cascaded framework for GANs that addresses the task of imagining a new distribution combining the base set X and the target set Y, by cascading sampling GANs with translation GANs, and we dub the cascade of such GANs the Imaginative Adversarial Network (IAN). Several cascades are trained on a collected dataset, Zoo-Faces, and generated innovative samples are shown, including from the K-GAN cascade. We conduct an objective and subjective evaluation of different IAN setups on the addressed task of generating innovative samples, and we show the effect of regularizing the GAN on the different scores. We conclude with some useful applications for these IANs, such as multi-domain manifold traversing.

  17. SU-E-I-11: Cascaded Linear System Model for Columnar CsI Flat Panel Imagers with Depth Dependent Gain and Blur

    International Nuclear Information System (INIS)

    Peng, B; Lubinsky, A; Zheng, H; Zhao, W; Teymurazyan, A

    2014-01-01

Purpose: To implement a depth-dependent gain and blur cascaded linear system model (CLSM) for optimizing columnar structured CsI indirect-conversion flat panel imagers (FPI) for advanced imaging applications. Methods: For experimental validation, the depth-dependent escape efficiency, e(z), was extracted from PHS measurements of different CsI scintillators (thickness, substrate and light output). The inherent MTF and DQE of CsI were measured using a high-resolution CMOS sensor. For the CLSM, e(z) and the depth-dependent MTF(f,z) were estimated using Monte Carlo simulation (Geant4) of optical photon transport through columnar CsI. Previous work showed that Monte Carlo simulation for CsI was hindered by the non-ideality of its columnar structure. In the present work we allowed variation in columnar width with depth, and assumed diffusive reflective backing and columns. Monte Carlo simulation was performed using an optical point source placed at different depths of the CsI layer, from which MTF(f,z) and e(z) were computed. The resulting e(z), in excellent agreement with experimental measurements, was then applied to the CLSM, and the Monte Carlo simulation was repeated until the modeled MTF and DQE(f) also matched experimental measurements. Results: For a 150 micron FOS HL type CsI, e(z) varies between 0.56 and 0.45, and the MTF at 14 cycles/mm varies between 62.1% and 3.9%, from the front to the back of the scintillator. The overall MTF and DQE(f) are in excellent agreement with experimental measurements at all frequencies. Conclusion: We have developed a CLSM for columnar CsI scintillators with depth-dependent gain and MTF, which were estimated from Monte Carlo simulation with novel optical simulation settings. Preliminary results showed excellent agreement between simulation results and experimental measurements. Future work is aimed at extending this approach to optimize CsI screen optic design and sensor structure for achieving higher DQE(f) in cone-beam CT, which uses...

  18. Interband cascade lasers

    International Nuclear Information System (INIS)

    Vurgaftman, I; Meyer, J R; Canedy, C L; Kim, C S; Bewley, W W; Merritt, C D; Abell, J; Weih, R; Kamp, M; Kim, M; Höfling, S

    2015-01-01

We review the current status of interband cascade lasers (ICLs) emitting in the midwave infrared (IR). The ICL may be considered the hybrid of a conventional diode laser that generates photons via electron–hole recombination, and an intersubband-based quantum cascade laser (QCL) that stacks multiple stages for enhanced current efficiency. Following a brief historical overview, we discuss theoretical aspects of the active region and core designs, growth by molecular beam epitaxy, and the processing of broad-area, narrow-ridge, and distributed feedback (DFB) devices. We then review the experimental performance of pulsed broad-area ICLs, as well as the continuous-wave (cw) characteristics of narrow ridges having good beam quality and DFBs producing output in a single spectral mode. Because the threshold drive powers are far lower than those of QCLs throughout the λ = 3–6 µm spectral band, ICLs are increasingly viewed as the laser of choice for mid-IR laser spectroscopy applications that do not require high output power but need to be hand-portable and/or battery operated. Demonstrated ICL performance characteristics to date include threshold current densities as low as 106 A cm−2 at room temperature (RT), cw threshold drive powers as low as 29 mW at RT, maximum cw operating temperatures as high as 118 °C, maximum cw output powers exceeding 400 mW at RT, maximum cw wallplug efficiencies as high as 18% at RT, maximum cw single-mode output powers as high as 55 mW at RT, and single-mode output at λ = 5.2 µm with a cw drive power of only 138 mW at RT. (topical review)

  19. Rhie-Chow interpolation in strong centrifugal fields

    Science.gov (United States)

    Bogovalov, S. V.; Tronin, I. V.

    2015-10-01

Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10^6 g) occurring in gas centrifuges.

  20. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

    Directory of Open Access Journals (Sweden)

    Xiaowei Niu

    2014-05-01

Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. Polynomial-based interpolation filters can be implemented efficiently using a modified Farrow structure with an arbitrary frequency response; the filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing the weighted mean squared error function, converting the task into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maximum (minimax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
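A Farrow structure evaluates each output sample as a polynomial in the fractional offset μ, with fixed sub-filter coefficients. The sketch below uses the classical cubic Lagrange coefficients as a stand-in basis; it illustrates the general Farrow idea, not the paper's optimized filters or their FPGA realization.

```python
# Farrow-structure fractional-delay interpolation with cubic Lagrange
# coefficients: taps are combined into one polynomial in mu, then the
# polynomial is evaluated (Horner style) at the desired offset.
import numpy as np

# Rows: taps x[n-1], x[n], x[n+1], x[n+2]; columns: coefficients of mu^0..mu^3.
C = np.array([
    [0.0, -1/3,  1/2, -1/6],
    [1.0, -1/2, -1.0,  1/2],
    [0.0,  1.0,  1/2, -1/2],
    [0.0, -1/6,  0.0,  1/6],
])

def farrow(window, mu):
    """Resample between window[1] and window[2] at fractional offset mu in [0, 1)."""
    poly = window @ C                  # combine taps into one cubic in mu
    return np.polyval(poly[::-1], mu)  # evaluate, highest power first

# On a straight line the cubic interpolator is exact.
samples = np.array([1.0, 2.0, 3.0, 4.0])   # x[n-1..n+2] on a line
print(np.isclose(farrow(samples, 0.25), 2.25))
```

In hardware only the four fixed sub-filters and a few multiply-accumulates per output are needed, with μ supplied at run time; the paper's contribution is optimizing the matrix entries per band rather than using the fixed Lagrange values shown here.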