Fuzzy linguistic model for interpolation
International Nuclear Information System (INIS)
Abbasbandy, S.; Adabitabar Firozja, M.
2007-01-01
In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method.
Spline interpolations besides wood model widely used in lactation
Korkmaz, Mehmet
2017-04-01
In this study, spline interpolations for the lactation curve, alternative models that pass exactly through all data points, were discussed in comparison with the widely used Wood model applied to lactation data. These models are the linear spline, the quadratic spline and the cubic spline. The observed and estimated values according to the spline interpolations and the Wood model were given with their error sums of squares, and the lactation curves of the spline interpolations and of the widely used Wood model were shown on the same graph, so that the differences could be observed. Estimates for some intermediate values were made using the spline interpolations and the Wood model. By using spline interpolations, the estimates of intermediate values could be made more precise. Furthermore, the values predicted by spline interpolations for missing or incorrect observations were very successful compared with those of the Wood model. Spline interpolations thus offer investigators new ideas and interpretations in addition to the information of the well-known classical analysis.
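The central contrast above, an interpolant that reproduces every observation exactly versus a fitted parametric curve, can be sketched in a few lines. The test-day records and Wood-model parameters below are invented for illustration, and a linear spline stands in for the three spline types discussed:

```python
# Sketch: linear-spline interpolation of a lactation curve vs. the Wood model.
# The milk-yield data and Wood parameters are illustrative assumptions,
# not values from the study.
import math

# (day, milk yield in kg) -- hypothetical test-day records
days = [5, 35, 65, 95, 125, 155]
yields = [18.0, 24.0, 22.5, 20.0, 17.5, 15.0]

def wood(t, a=16.0, b=0.25, c=0.004):
    """Wood model y = a * t^b * exp(-c t) (parameters assumed, not fitted)."""
    return a * t ** b * math.exp(-c * t)

def linear_spline(x, xs, ys):
    """Piecewise-linear interpolation passing through all data points."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - w) * ys[i] + w * ys[i + 1]
    raise ValueError("x outside data range")

# The spline reproduces every observation exactly, so its error sum of
# squares at the test days is zero; the Wood model generally has SSE > 0.
sse_spline = sum((linear_spline(t, days, yields) - y) ** 2
                 for t, y in zip(days, yields))
sse_wood = sum((wood(t) - y) ** 2 for t, y in zip(days, yields))

# Estimate for an intermediate (unobserved) day, e.g. day 50:
est_day50 = linear_spline(50, days, yields)
```

Because the spline passes through every data point, its error sum of squares at the observed days is exactly zero, whereas the parametric Wood curve retains a residual; the intermediate-value estimate shows how missing test days can be filled in.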
Clustering metagenomic sequences with interpolated Markov models
Directory of Open Access Journals (Sweden)
Kelley David R
2010-11-01
Background: Sequencing of environmental DNA (often called metagenomics) has shown tremendous potential to uncover the vast number of unknown microbes that cannot be cultured and sequenced by traditional methods. Because the output of metagenomic sequencing is a large set of reads of unknown origin, clustering together reads that were sequenced from the same species is a crucial analysis step. Many effective approaches to this task rely on sequenced genomes in public databases, but these genomes are a highly biased sample that is not necessarily representative of the environments interesting to many metagenomics projects. Results: We present SCIMM (Sequence Clustering with Interpolated Markov Models), an unsupervised sequence clustering method. SCIMM achieves greater clustering accuracy than previous unsupervised approaches. We examine the limitations of unsupervised learning on complex datasets, and suggest a hybrid of SCIMM and the supervised learning method Phymm, called PHYSCIMM, that performs better when evolutionarily close training genomes are available. Conclusions: SCIMM and PHYSCIMM are highly accurate methods for clustering metagenomic sequences. SCIMM operates entirely unsupervised, making it ideal for environments containing mostly novel microbes. PHYSCIMM uses supervised learning to improve clustering in environments containing microbial strains from well-characterized genera. SCIMM and PHYSCIMM are available open source from http://www.cbcb.umd.edu/software/scimm.
Scientific data interpolation with low dimensional manifold model
International Nuclear Information System (INIS)
Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.
2017-01-01
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Interpolation solution of the single-impurity Anderson model
International Nuclear Information System (INIS)
Kuzemsky, A.L.
1990-10-01
The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong and weak correlation limits, is obtained. The unified concept of relevant mean-field renormalizations is indispensable in the strong correlation limit. (author). 21 refs
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily-time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin, and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared with the directly interpolated inputs. © 2011 Springer-Verlag.
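As a minimal sketch of the first method in the list, inverse distance weighting estimates a grid-point value as a distance-weighted average of gauge observations. The gauge coordinates and rainfall amounts below are invented for illustration:

```python
# Sketch of inverse-distance-weighted (IDW) interpolation, the first of the
# four schemes compared above. Stations and amounts are hypothetical.
def idw(x, y, stations, power=2.0):
    """Estimate precipitation at (x, y) from (sx, sy, value) stations."""
    num = den = 0.0
    for sx, sy, val in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return val            # exactly at a gauge: return its observation
        w = d2 ** (-power / 2.0)  # weight = 1 / distance^power
        num += w * val
        den += w
    return num / den

gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
centre = idw(0.5, 0.5, gauges)   # equidistant from all gauges -> their mean
```

The estimate is always bounded by the minimum and maximum gauge values, which is one reason simple IDW cannot reproduce the full spatial variability of daily precipitation.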
Interpolation of daily rainfall using spatiotemporal models and clustering
Militino, A. F.
2014-06-11
Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitation and standard errors on a 1 km² grid for the whole region. © 2014 Royal Meteorological Society.
Time-dependent intranuclear cascade model
International Nuclear Information System (INIS)
Barashenkov, V.S.; Kostenko, B.F.; Zadorogny, A.M.
1980-01-01
An intranuclear cascade model with explicit consideration of the time coordinate in the Monte Carlo simulation of the development of a cascade particle shower has been considered. Calculations have been performed using a diffuse nuclear boundary without any step approximation of the density distribution. Changes in the properties of the target nucleus during the cascade development have been taken into account. The results of these calculations have been compared with experiment and with the data which had been obtained by means of a time-independent cascade model. The consideration of time improved agreement between experiment and theory particularly for high-energy shower particles; however, for low-energy cascade particles (with grey and black tracks in photoemulsion) a discrepancy remains at T >= 10 GeV. (orig.)
Stochastic interpolation model of the medial superior olive neural circuit
Czech Academy of Sciences Publication Activity Database
Šanda, Pavel; Maršálek, P.
2012-01-01
Vol. 1434, 24 Jan (2012), pp. 257-265, ISSN 0006-8993. [International Workshop on Neural Coding, Limassol, 29.10.2010-03.11.2010] R&D Projects: GA ČR (CZ) GAP103/11/0282. Grant - others: GA MPO (CZ) FR-TI3/869. Institutional research plan: CEZ:AV0Z50110509. Keywords: coincidence detection * directional hearing * interaural time delay * sound azimuth * interpolation model. Subject RIV: FH - Neurology. Impact factor: 2.879, year: 2012
Interpolation Algorithm and Mathematical Model in Automated Welding of Saddle-Shaped Weld
Directory of Open Access Journals (Sweden)
Lianghao Xue
2018-01-01
This paper presents a welding-torch pose model and an interpolation algorithm for trajectory control of the saddle-shaped weld formed by the intersection of two pipes; the working principle, interpolation algorithm, welding experiments, and simulation results of the automatic welding system for the saddle-shaped weld are described. A variable-angle interpolation method is used to control the trajectory and pose of the welding torch, which guarantees a constant terminal linear velocity. Mathematical models of the trajectory and pose of the welding torch are established. Simulations and experiments have been carried out to verify the effectiveness of the proposed algorithm and mathematical model. The results demonstrate that the interpolation algorithm meets the interpolation requirements of the saddle-shaped weld well, with ideal feed-rate stability.
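The seam geometry underlying the interpolation can be sketched as the intersection curve of two perpendicular cylinders. The radii and equal-angle stepping below are illustrative assumptions; the paper's variable-angle method instead adapts the angular step so the torch's linear speed stays constant:

```python
# Sketch: sampling the saddle-shaped weld seam, i.e. the intersection curve
# of a branch pipe (radius r, axis along z) with a main pipe (radius R, axis
# along x). Radii and step count are invented for illustration.
import math

def saddle_point(theta, r, R):
    """Point on the intersection curve for branch angle theta."""
    x = r * math.cos(theta)
    y = r * math.sin(theta)
    z = math.sqrt(R * R - y * y)   # height of the seam on the main pipe
    return (x, y, z)

def saddle_path(r=30.0, R=60.0, steps=360):
    """Equal-angle discretisation of one revolution of the seam.
    (A variable-angle interpolation, as in the paper, would adapt the
    angular step to keep the torch's linear feed rate constant.)"""
    return [saddle_point(2 * math.pi * k / steps, r, R)
            for k in range(steps + 1)]

path = saddle_path()
```

Every sampled point lies on both cylinder surfaces, and the seam height oscillates between R and sqrt(R² - r²), which is what gives the weld its saddle shape.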
National Research Council Canada - National Science Library
Ingel, R
1999-01-01
.... Projection operators are employed for the model reduction or condensation process. Interpolation is then introduced over a user defined frequency window, which can have real and imaginary boundaries and be quite large. Hermitian...
Bayesian interpolation in a dynamic sinusoidal model with application to packet-loss concealment
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Cemgil, Ali Taylan
2010-01-01
In this paper, we consider Bayesian interpolation and parameter estimation in a dynamic sinusoidal model. This model is more flexible than the static sinusoidal model since it enables the amplitudes and phases of the sinusoids to be time-varying. For the dynamic sinusoidal model, we derive...
Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling
Directory of Open Access Journals (Sweden)
Hyojin Lee
2015-01-01
Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares the estimation of missing precipitation data through Kth-nearest-neighborhood (KNN) regression with the five different kernel estimations, and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data than the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
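A minimal sketch of one of the five kernels above: an Epanechnikov-weighted mean of neighbouring gauges, in the spirit of a Nadaraya-Watson estimator. The distances, rainfall amounts and bandwidth are invented for illustration:

```python
# Sketch: kernel-weighted estimation of a missing daily precipitation value
# from neighbouring gauges, using the Epanechnikov kernel named above.
def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) for |u| <= 1, else 0."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_estimate(distances, values, bandwidth):
    """Nadaraya-Watson style weighted mean with kernel weights."""
    weights = [epanechnikov(d / bandwidth) for d in distances]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no gauge within the bandwidth")
    return sum(w * v for w, v in zip(weights, values)) / total

# Three neighbouring gauges at 2, 5 and 9 km; the 9 km gauge lies outside
# the 8 km bandwidth and therefore receives zero weight.
est = kernel_estimate([2.0, 5.0, 9.0], [12.0, 8.0, 40.0], bandwidth=8.0)
```

Unlike an unweighted KNN mean, the kernel discounts distant gauges smoothly and excludes those beyond the bandwidth entirely, which is the property the comparison above exploits.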
Hittmeir, Sabine; Philipp, Anne; Seibert, Petra
2017-04-01
In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of special preprocessing in the input-preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate, as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation applied between them in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions.
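The conservation criterion can be illustrated with a toy reconstruction. Interval-accumulated precipitation is rebuilt (a) naively, by linear interpolation between interval-midpoint values, and (b) by the simplest integral-preserving reconstruction, a piecewise-constant field; the remapping algorithm described above is more refined (piecewise linear between added subgrid points, continuous and non-negative). The hourly amounts are invented:

```python
# Toy illustration of the cell-wise conservation criterion: the naive
# reconstruction puts rain into a dry hour, the conservative one does not.
def integrate(f, a, b, n=10000):
    """Composite midpoint rule, accurate enough for this demo."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Hourly accumulated precipitation (mm) over intervals [0,1], [1,2], [2,3]
accum = [3.0, 0.0, 3.0]
mids = [0.5, 1.5, 2.5]

def naive(t):
    """Linear interpolation of interval means placed at interval midpoints."""
    if t <= mids[0]:
        return accum[0]
    if t >= mids[-1]:
        return accum[-1]
    for i in range(len(mids) - 1):
        if mids[i] <= t <= mids[i + 1]:
            w = (t - mids[i]) / (mids[i + 1] - mids[i])
            return (1 - w) * accum[i] + w * accum[i + 1]

def conservative(t):
    """Piecewise-constant reconstruction: each hour's integral is exact."""
    i = min(int(t), len(accum) - 1)
    return accum[i]

# The middle hour recorded 0 mm; the naive field still integrates to > 0
# there, violating cell-wise conservation, while the conservative one is 0.
naive_mid = integrate(naive, 1.0, 2.0)
cons_mid = integrate(conservative, 1.0, 2.0)
```

The piecewise-constant field conserves every interval integral but is discontinuous; the algorithm in the text achieves conservation, continuity and non-negativity simultaneously by adding subgrid support points.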
Directory of Open Access Journals (Sweden)
Tao Chen
2017-05-01
The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with the inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, the hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of the rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and the highest correlation with measured values at the daily time scale. The application of the PCRR method is promising because it considers multicollinearity among the variables.
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating a Digital Terrain Model (DTM). DTMs have numerous applications in the fields of science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface. There are several methods for interpolation, which give different results depending on the environmental conditions and input data. The usual interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, have been optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. The results show that AI methods have a high potential in the interpolation of elevations: using artificial neural network algorithms for the interpolation, and optimising the IDW method with GA, highly precise elevations could be estimated.
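A sketch of the tuning idea: choose the IDW distance exponent by minimising leave-one-out error over elevation samples. A plain grid search stands in for the genetic algorithm, and the coordinates and heights are invented for illustration:

```python
# Sketch: tuning the IDW distance exponent against leave-one-out error.
# A grid search stands in for the GA of the paper; data are hypothetical.
import math

points = [(0.0, 0.0, 100.0), (1.0, 0.0, 110.0), (0.0, 1.0, 120.0),
          (1.0, 1.0, 130.0), (0.5, 0.2, 106.0), (0.2, 0.8, 115.0)]

def idw_height(x, y, samples, p):
    """IDW elevation estimate with distance exponent p."""
    num = den = 0.0
    for sx, sy, h in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return h
        w = d ** (-p)
        num += w * h
        den += w
    return num / den

def loo_error(p):
    """Sum of squared leave-one-out prediction errors for exponent p."""
    err = 0.0
    for i, (x, y, h) in enumerate(points):
        rest = points[:i] + points[i + 1:]
        err += (idw_height(x, y, rest, p) - h) ** 2
    return err

# Search p in [0.5, 4.0]; a GA would explore this space stochastically.
best_p = min((round(0.5 + 0.1 * k, 1) for k in range(36)), key=loo_error)
```

A real GA would replace the grid with a population of candidate exponents under selection, crossover and mutation, but the fitness function (leave-one-out error) is the same.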
A new stochastic model considering satellite clock interpolation errors in precise point positioning
Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong
2018-03-01
Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
A cascading failure model for analyzing railway accident causation
Liu, Jin-Tao; Li, Ke-Ping
2018-01-01
In this paper, a new cascading failure model is proposed for quantitatively analyzing railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model describes the cascading process of a railway accident more accurately than previous models, and can quantitatively analyze the sensitivities and influence of the causes. In conclusion, this model can help reveal the latent rules of accident causation and thereby reduce the occurrence of railway accidents.
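The load-redistribution rule described above can be sketched as follows; the node names, loads, link strengths and critical threshold are invented for illustration and do not come from the paper:

```python
# Sketch of strength-proportional load redistribution: when a cause node
# fails, its load is shared among downstream nodes in proportion to the
# causal-link strength, and any node pushed past the threshold fails too.
def cascade(loads, links, threshold, start):
    """links[a] = list of (b, strength); returns the set of failed nodes."""
    failed = set()
    queue = [start]
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        out = [(b, s) for b, s in links.get(node, []) if b not in failed]
        total = sum(s for _, s in out)
        for b, s in out:
            loads[b] += loads[node] * s / total  # share by causal strength
            if loads[b] > threshold:
                queue.append(b)
        loads[node] = 0.0
    return failed

# Hypothetical three-node causal chain of a collision accident.
loads = {"signal fault": 0.8, "driver error": 0.5, "collision": 0.3}
links = {"signal fault": [("driver error", 2.0), ("collision", 1.0)],
         "driver error": [("collision", 1.0)]}
failed = cascade(loads, links, threshold=1.0, start="signal fault")
```

Raising the threshold (i.e. strengthening prevention measures) stops the propagation earlier, which is the mechanism behind the critical threshold identified in the paper.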
Partitioning and interpolation based hybrid ARIMA–ANN model for ...
Indian Academy of Sciences (India)
One such hybrid model, namely the auto regressive integrated moving average – artificial neural network (ARIMA–ANN) model, has been devised in many different ways in the literature. However, the prediction accuracy of the hybrid ARIMA–ANN model can be further improved by devising suitable processing techniques. In this paper, a hybrid ...
Digital elevation modeling via curvature interpolation for lidar data
Digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction for the scattered data, which is an ill-p...
Directory of Open Access Journals (Sweden)
Ly, S.
2013-01-01
Watershed management and hydrological modeling require precipitation data, often measured using raingages or weather stations. Hydrological models often require a preliminary spatial interpolation as part of the modeling process. The success of spatial interpolation varies according to the type of model chosen, its mode of geographical management and the resolution used. The quality of a result is determined by the quality of the continuous spatial rainfall that ensues from the interpolation method used. The objective of this article is to review the existing methods for interpolation of rainfall data that are usually required in hydrological modeling. We review the basis for the application of certain common methods and geostatistical approaches used in interpolation of rainfall. Previous studies have highlighted the need for new research to investigate ways of improving the quality of rainfall data and, ultimately, the quality of hydrological modeling.
Geostatistical interpolation for modelling SPT data in northern Izmir
Indian Academy of Sciences (India)
http://www.ias.ac.in/article/fulltext/sadh/038/06/1451-1468. Keywords. Kriging; SPT; site investigations; land-use planning; modelling; Northern Izmir. Abstract. In this study, it was aimed to map the corrected Standard Penetration Test (SPT) values in Karşıyaka city center by a kriging approach. Six maps were prepared by this ...
Geostatistical interpolation for modelling SPT data in northern Izmir
Indian Academy of Sciences (India)
to the data in hand. At various depths, prepared variograms and the kriging method were used together to model the variation of corrected SPT data in the region, which enabled ... backs and application of different tests on various types of soils requires an extensive study in decision ..... It should be added that, a big amount.
Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models
Directory of Open Access Journals (Sweden)
Giao N. Pham
2018-02-01
With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Because of the value of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying, and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for the secure storage and transmission of 3D printing models is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the degree-2 spline curve are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm works with the various formats of 3D printing models. The results of the perceptual encryption process are superior to those of previous methods, and the proposed algorithm provides a better method and more security than previous methods.
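A sketch of the degree-2 curve at the heart of the scheme: a quadratic through three control points in 3D. Here the curve is the quadratic Lagrange interpolant at parameters t = 0, 1/2, 1; the paper's exact parametrisation via curvature coefficients and an interpolating vector differs:

```python
# Sketch: a degree-2 space curve determined by three control points, in the
# spirit of the encryption scheme above. Control points are illustrative.
def quadratic_curve(p0, p1, p2):
    """Return c(t) passing through p0 (t=0), p1 (t=1/2), p2 (t=1)."""
    def c(t):
        b0 = 2.0 * (t - 0.5) * (t - 1.0)   # Lagrange basis at t0 = 0
        b1 = -4.0 * t * (t - 1.0)          # Lagrange basis at t1 = 1/2
        b2 = 2.0 * t * (t - 0.5)           # Lagrange basis at t2 = 1
        return tuple(b0 * u + b1 * v + b2 * w
                     for u, v, w in zip(p0, p1, p2))
    return c

curve = quadratic_curve((0.0, 0.0, 0.0), (1.0, 2.0, 1.0), (2.0, 0.0, 4.0))
```

Encrypting the three control points with a secret key and re-deriving the facet by inverse interpolation, as the paper describes, deforms the whole model while remaining exactly invertible for the key holder.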
National Research Council Canada - National Science Library
Ingel, R
1999-01-01
... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...
Effect of raingage density, position and interpolation on rainfall-discharge modelling
Ly, S.; Sohier, C.; Charles, C.; Degré, A.
2012-04-01
Precipitation, traditionally observed using raingages or weather stations, is one of the main parameters with a direct impact on runoff production. Precipitation data require a preliminary spatial interpolation prior to hydrological modeling. The accuracy of the modelling results depends on the accuracy of the interpolated spatial rainfall, which differs between interpolation methods and is usually determined by cross-validation. The objective of this study is to assess different interpolation methods for daily rainfall at the watershed scale through hydrological modelling and to identify the methods that provide a good long-term simulation. Four geostatistical methods: Ordinary Kriging (ORK), Universal Kriging (UNK), Kriging with External Drift (KED) and Ordinary Cokriging (OCK), and two deterministic methods: Thiessen polygons (THI) and Inverse Distance Weighting (IDW), are used to produce 30-year daily rainfall inputs for a distributed physically-based hydrological model (EPIC-GRID). This work is conducted in the Ourthe and Ambleve nested catchments, located in the Ardennes hilly landscape in the Walloon region, Belgium. The total catchment area is 2908 km², lying between 67 and 693 m in elevation. The multivariate geostatistical methods (KED and OCK) incorporate elevation as external data to improve the rainfall prediction. This work also analyses the effect of the raingage density and position used for interpolation on the modelled stream flow, to get insight into the capabilities and limitations of the geostatistical methods. The number of raingages varies from 70, 60, 50, 40, 30, 20, 8 to 4 stations located in and around the catchment area. In the latter case, we try different positions: around the catchment and covering only a part of the catchment. The result shows that a simple method like THI fails to capture the rainfall and to produce
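A minimal sketch of the simplest scheme in the comparison, the Thiessen polygon method: each point takes the value of its nearest gauge, so the interpolated field is piecewise constant. The gauge positions and rainfall amounts are invented for illustration:

```python
# Sketch of Thiessen-polygon (nearest-gauge) rainfall assignment.
import math

def thiessen(x, y, gauges):
    """gauges: list of (gx, gy, rain); return the rain at the nearest gauge."""
    return min(gauges, key=lambda g: math.hypot(x - g[0], y - g[1]))[2]

gauges = [(0.0, 0.0, 5.0), (10.0, 0.0, 15.0)]
# Every point left of x = 5 takes the first gauge's value, right of it the
# second; the field jumps at the polygon boundary instead of varying smoothly.
```

This piecewise-constant behaviour is why THI tends to perform worst of the six methods when the gauge network is sparse.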
International Nuclear Information System (INIS)
Ryazanov, A.; Metelkin, E.V.; Semenov, E.A.
2007-01-01
Full text of publication follows: A new theoretical model is developed for the investigation of cascade and sub-cascade formation in fusion structural materials under fast neutron irradiation at high primary knock-on atom (PKA) energies. Under 14 MeV neutron irradiation, especially of light fusion structural materials such as Be, C and SiC, PKAs will have energies up to 1 MeV. At such high energies it is very difficult to use Monte Carlo or molecular dynamics simulations. The developed model is based on an analytical consideration of elastic collisions between displaced moving atoms in atomic cascades produced by PKAs with kinetic energy obtained from fast neutrons. The Thomas-Fermi interaction potential is used for describing the elastic collisions between moving atoms. The suggested model also takes into account the electronic losses of moving atoms between elastic collisions. A self-consistent criterion for sub-cascade formation is suggested, based on the comparison of the mean distance between two consecutive PKA collisions and the size of the sub-cascade produced by the PKA. Analytical relations for the most important characteristics of cascades and sub-cascades are determined, including the average number of sub-cascades per PKA as a function of PKA energy, the distance between sub-cascades, and the average cascade and sub-cascade sizes as a function of PKA energy. The developed model allows determining the total numbers, the size distribution functions, and the generation rates of cascades and sub-cascades for different fusion neutron energy spectra. Based on the developed model, numerical calculations of the main characteristics of cascades and sub-cascades in different fusion structural materials are performed using the neutron flux and PKA energy spectra for the fusion reactors ITER and DEMO. The main characteristics of cascade and sub-cascade formation are calculated here for the
A thermal modelling of displacement cascades in uranium dioxide
Martin, G.; Garcia, P.; Sabathier, C.; Devynck, F.; Krack, M.; Maillard, S.
2014-05-01
The space- and time-dependent temperature distribution in uranium dioxide was studied during displacement cascades simulated by classical molecular dynamics (MD). The energy for each simulated radiation event ranged between 0.2 keV and 20 keV in cells at initial temperatures of 700 K or 1400 K. Spheres into which atomic velocities were rescaled (thermal spikes) were also simulated by MD to mimic the thermal excitation induced by displacement cascades. Equipartition of energy was shown to occur in displacement cascades, half of the kinetic energy of the primary knock-on atom being converted after a few tenths of a picosecond into potential energy. The kinetic and potential parts of the system energy are, however, subject to little variation during dedicated thermal spike simulations. This is probably due to the velocity-rescaling process, which impacts a large number of atoms in this case and would drive the system away from dynamical equilibrium. This result calls into question the MD simulations of thermal spikes carried out up to now (early 2014). The thermal history of cascades was compared with the heat-equation solution for a point thermal excitation in UO2. The maximum volume brought to a temperature above the melting temperature during the simulated cascade events is well reproduced by this simple model. This volume ultimately constitutes a relevant estimate of the volume affected by a displacement cascade in UO2. This definition of the cascade volume could also make sense in other materials, like iron.
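The point-source heat-equation solution referred to above can be sketched directly. The material constants below are rough UO2-like values chosen for illustration only, not the paper's parameters:

```python
# Sketch: temperature field of an instantaneous point heat source, the
# analytic model the cascade thermal history is compared against. All
# material numbers are assumed, illustrative UO2-like values.
import math

RHO = 10960.0            # density, kg/m^3 (assumed)
CP = 300.0               # specific heat, J/(kg K) (assumed)
K = 3.0                  # thermal conductivity, W/(m K) (assumed)
ALPHA = K / (RHO * CP)   # thermal diffusivity, m^2/s

def temperature(r, t, energy_j, t0=700.0):
    """T(r, t) after depositing energy_j at a point at t = 0, background t0:
    T = t0 + E / (rho c (4 pi alpha t)^{3/2}) * exp(-r^2 / (4 alpha t))."""
    spread = (4.0 * math.pi * ALPHA * t) ** 1.5
    return t0 + energy_j / (RHO * CP * spread) * math.exp(
        -r * r / (4.0 * ALPHA * t))

# A 10 keV cascade deposited as heat: 10e3 eV in joules.
E = 10e3 * 1.602e-19
peak_early = temperature(0.0, 1e-13, E)  # very hot, very localised
peak_late = temperature(0.0, 1e-11, E)   # spike has spread and cooled
```

The peak temperature decays as t^(-3/2) while the heated region spreads as sqrt(alpha t); thresholding this field at the melting temperature gives the molten-volume estimate discussed above.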
Damped trophic cascades driven by fishing in model marine ecosystems
DEFF Research Database (Denmark)
Andersen, Ken Haste; Pedersen, Martin
2010-01-01
The largest perturbation on upper trophic levels of many marine ecosystems stems from fishing. The reaction of the ecosystem goes beyond the trophic levels directly targeted by the fishery. This reaction has been described either as a change in slope of the overall size spectrum or as a trophic cascade triggered by the removal of top predators. Here we use a novel size- and trait-based model to explore how marine ecosystems might react to perturbations from different types of fishing pressure. The model explicitly resolves the whole life history of fish, from larvae to adults. The results show that fishing does not change the overall slope of the size spectrum, but depletes the largest individuals and induces trophic cascades. A trophic cascade can propagate both up and down in trophic levels, driven by a combination of changes in predation mortality and food limitation. The cascade is damped...
An Online Method for Interpolating Linear Parametric Reduced-Order Models
Amsallem, David
2011-01-01
A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.
Random Model Sampling: Making Craig Interpolation Work When It Should Not
Directory of Open Access Journals (Sweden)
Marat Akhin
2014-01-01
Full Text Available One of the most serious problems when doing program analyses is dealing with function calls. While function inlining is the traditional approach to this problem, it nonetheless suffers from increased analysis complexity due to state space explosion. Craig interpolation has been successfully used in recent years in the context of bounded model checking to do function summarization, which allows one to replace the complete function body with its succinct summary and, therefore, reduce the complexity. Unfortunately, this technique can be applied only to a pair of unsatisfiable formulae. In this work-in-progress paper we present an approach to function summarization based on Craig interpolation that overcomes this limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Preliminary experiments show the applicability of this approach; in our future work we plan to do a full evaluation on real-world examples.
Modeling of Bit Error Rate in Cascaded 2R Regenerators
DEFF Research Database (Denmark)
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments...
Interpolation Routines Assessment in ALS-Derived Digital Elevation Models for Forestry Applications
Directory of Open Access Journals (Sweden)
Antonio Luis Montealegre
2015-07-01
Full Text Available Airborne Laser Scanning (ALS) is capable of estimating a variety of forest parameters using different metrics extracted from the normalized heights of the point cloud using a Digital Elevation Model (DEM). In this study, six interpolation routines were tested over a range of land cover and terrain roughness in order to generate a collection of DEMs with spatial resolutions of 1 and 2 m. The accuracy of the DEMs was assessed twice: first using a test sample extracted from the ALS point cloud, and second using a set of 55 ground control points collected with a high-precision Global Positioning System (GPS). The effects of terrain slope, land cover, ground point density and pulse penetration on the interpolation error were examined by stratifying the study area with these variables. In addition, a Classification and Regression Tree (CART) analysis allowed the development of a prediction uncertainty map to identify the areas in which DEMs and Airborne Light Detection and Ranging (LiDAR) derived products may be of low quality. The Triangulated Irregular Network (TIN) to raster interpolation method produced the best result in the validation against the training data set, while the Inverse Distance Weighted (IDW) routine was the best in the validation against GPS (RMSE of 2.68 cm and RMSE of 37.10 cm, respectively).
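The IDW routine that performed best against the GPS control points follows a simple weighted-mean scheme. The sketch below is a minimal, generic IDW interpolator with an assumed power of 2 and invented sample points, not the authors' implementation:

```python
import math

def idw(points, query, power=2.0):
    """Inverse Distance Weighted interpolation.
    points: list of (x, y, z) samples; query: (x, y) location.
    Returns the mean of z weighted by 1/d**power; a query coinciding
    with a sample point returns that sample's value exactly."""
    num = den = 0.0
    qx, qy = query
    for x, y, z in points:
        d = math.hypot(x - qx, y - qy)
        if d == 0.0:
            return z
        w = d ** -power
        num += w * z
        den += w
    return num / den

# Invented elevation samples on a unit square (metres)
samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0), (1, 1, 40.0)]
center = idw(samples, (0.5, 0.5))   # equidistant samples -> plain average
```

Larger powers concentrate weight on the nearest samples, which is exactly the smoothing trade-off examined in IDW-based DEM studies.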
Numerical modelling of compressible viscous flow in turbine cascades
Louda, P.; Kozel, K.; Příhoda, J.
2014-03-01
The work deals with mathematical models of turbulent flow through a turbine cascade in 2D and 3D. It is based on the Favre-averaged Navier-Stokes equations with SST or EARSM turbulence models. A two-equation model of transition to turbulence is considered as well. The solution is obtained by an implicit AUSM finite volume method. 2D and 3D results are shown for flow through the SE1050 cascade, including simulation of a range of off-design angles of attack.
Stein, A.
1991-01-01
The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are
INCAS: an analytical model to describe displacement cascades
International Nuclear Information System (INIS)
Jumel, Stephanie; Claude Van-Duysen, Jean
2004-01-01
REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on dilute alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that the INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Isotope Reactor of Oak Ridge National Laboratory.
A Cascade-Based Emergency Model for Water Distribution Network
Directory of Open Access Journals (Sweden)
Qing Shuang
2015-01-01
Full Text Available The water distribution network is an important critical physical infrastructure system. This paper studies emergency resource strategies on water distribution networks using the approach of complex networks and cascading failures. A model of cascade-based emergency for water distribution networks is built. The cascade-based model considers both network topology analysis and hydraulic analysis to provide a more realistic result. A load redistribution function with emergency recovery mechanisms is established. From the aspects of uniform distribution, node betweenness and node pressure, six recovery strategies are given to reflect the network topology and the failure information, respectively. The recovery strategies are evaluated with complex network indicators describing the failure scale and failure velocity. The proposed method is applied to an illustrative example. The results show that the recovery strategy considering node pressure can enhance network robustness effectively. Moreover, this strategy reduces the number of failed nodes and generates the fewest failed nodes per unit time.
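The load-redistribution idea behind such cascade models can be sketched in a few lines. This is a deliberately simplified uniform-redistribution rule with invented loads and capacities; it omits the paper's topological/hydraulic analysis and recovery strategies:

```python
def cascade(loads, capacities, initial_failures):
    """Uniform load-redistribution cascade: each failed element's load is
    split equally among surviving elements; any element pushed over its
    capacity fails in the next round. Returns the set of failed indices."""
    failed = set(initial_failures)
    alive = set(range(len(loads))) - failed
    loads = list(loads)
    pending = list(failed)
    while pending and alive:
        extra = sum(loads[i] for i in pending) / len(alive)
        for i in alive:
            loads[i] += extra
        pending = [i for i in alive if loads[i] > capacities[i]]
        failed |= set(pending)
        alive -= set(pending)
    return failed

# An invented 4-node example where one failure brings down the whole system
failed = cascade([4, 4, 4, 4], [5, 5, 6, 9], [0])
```

With ample spare capacity the initial failure is absorbed; with tight capacities it propagates, which is the failure-scale/failure-velocity behavior the recovery strategies are evaluated against.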
Kirkpatrick, J. C.
1976-01-01
A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
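The cubic spline machinery described above can be sketched as follows. This is a generic natural cubic spline (a Thomas-algorithm solve for the knot second derivatives), not the FORTRAN subroutines of the atmosphere package, and the four-point table is invented for illustration:

```python
import bisect

def natural_cubic_spline(xs, ys):
    """Build a natural cubic spline interpolant through the knots (xs, ys).
    Knot second derivatives M solve the classical tridiagonal system
    (Thomas algorithm), with natural end conditions M[0] = M[n] = 0."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [h[i - 1] for i in range(1, n)]
    b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)]
    c = [h[i] for i in range(1, n)]
    d = [6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
         for i in range(1, n)]
    for i in range(1, n - 1):          # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    M = [0.0] * (n + 1)                # natural boundary conditions
    for i in range(n - 2, -1, -1):     # back substitution
        rhs = d[i] - (c[i] * M[i + 2] if i < n - 2 else 0.0)
        M[i + 1] = rhs / b[i]

    def evaluate(x):
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), n - 1)
        hi = h[i]
        u, v = xs[i + 1] - x, x - xs[i]
        return ((M[i] * u ** 3 + M[i + 1] * v ** 3) / (6 * hi)
                + (ys[i] / hi - M[i] * hi / 6) * u
                + (ys[i + 1] / hi - M[i + 1] * hi / 6) * v)
    return evaluate

# Invented altitude/parameter table; linear data is reproduced exactly
spline = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
mid = spline(1.5)
```

Precomputing the coefficients once and then evaluating many queries is the same preprocessor/processor split the atmosphere package uses.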
Mapping snow depth return levels: smooth spatial modeling versus station interpolation
Directory of Open Access Journals (Sweden)
J. Blanchet
2010-12-01
Full Text Available For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitude-dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
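Once GEV parameters have been estimated at a station (or from the smooth spatial model), the T-year return level is the quantile with annual exceedance probability 1/T. A minimal sketch with illustrative (assumed) snow-depth parameters in centimetres:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) fitted to annual maxima:
    the quantile z with F(z) = 1 - 1/T.
    z_T = mu + (sigma/xi) * (y**(-xi) - 1) with y = -ln(1 - 1/T),
    reducing to the Gumbel formula mu - sigma*ln(y) as xi -> 0."""
    y = -math.log(1.0 - 1.0 / T)       # reduced variate
    if abs(xi) < 1e-12:                # Gumbel limit
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** -xi - 1.0)

# Illustrative (assumed) GEV parameters for a snow-depth series, in cm
z50 = gev_return_level(120.0, 30.0, 0.1, 50.0)
```

Mapping return levels then amounts to evaluating this quantile formula at the spatially varying (mu, sigma, xi) fields.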
International Nuclear Information System (INIS)
Soycan, Arzu; Soycan, Metin
2009-01-01
GIS (Geographical Information System) is one of the most striking innovations for mapping applications made available to users by developing computer and software technology. GIS is a very effective tool which can visually combine geographical and non-geographical data, recording these to allow interpretation and analysis. The DEM (Digital Elevation Model) is an inalienable component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM through a manual digitizing or vectorization process applied to the contour polylines. The aim of this study is to examine DEM accuracies obtained from TMs, depending on the number of sampling points and the grid size. For these purposes, the contours of several 1/1000-scale scanned topographical maps were vectorized. Different DEMs of the relevant area were created using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of the TPS interpolation technique are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The primary objective of this research is to discuss the requirements for DEMs in GIS, urban planning, surveying engineering and other applications demanding high accuracy (a few decimeters). (author)
Learning from a role model: A cascade or whirlpool effect?
Jochemsen-van der Leeuw, H. G. A. Ria; Buwalda, Nienke; Wieringa-de Waard, Margreet; van Dijk, Nynke
2015-01-01
Continuing Professional Development (CPD) and Faculty Development (FD) courses have been designed in the expectation that a cascade effect will occur, consisting of a conveyance of information from the courses to clinical trainers to daily practice and/or to trainees by means of role modeling. The
A psychological cascade model for persisting voice problems in teachers.
Jong, F.I.C.R.S. de; Cornelis, B.E.; Wuyts, F.L.; Kooijman, P.G.C.; Schutte, H.K.; Oudes, M.J.; Graamans, K.
2003-01-01
In 76 teachers with persisting voice problems, the maintaining factors and coping strategies were examined. Physical, functional, psychological and socioeconomic factors were assessed. A parallel was drawn to a psychological cascade model designed for patients with chronic back pain. The majority of
Resistor mesh model of a spherical head: part 1: applications to scalp potential interpolation.
Chauveau, N; Morucci, J P; Franceries, X; Celsis, P; Rigaud, B
2005-11-01
A resistor mesh model (RMM) has been implemented to describe the electrical properties of the head and the configuration of the intracerebral current sources by simulation of forward and inverse problems in electroencephalogram/event related potential (EEG/ERP) studies. For this study, the RMM representing the three basic tissues of the human head (brain, skull and scalp) was superimposed on a spherical volume mimicking the head volume: it included 43 102 resistances and 14 123 nodes. The validation was performed with reference to the analytical model by consideration of a set of four dipoles close to the cortex. Using the RMM and the chosen dipoles, four distinct families of interpolation technique (nearest neighbour, polynomial, splines and lead fields) were tested and compared so that the scalp potentials could be recovered from the electrode potentials. The 3D spline interpolation and the inverse forward technique (IFT) gave the best results. The IFT is very easy to use when the lead-field matrix between scalp electrodes and cortex nodes has been calculated. By simple application of the Moore-Penrose pseudo inverse matrix to the electrode cap potentials, a set of current sources on the cortex is obtained. Then, the forward problem using these cortex sources renders all the scalp potentials.
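The inverse forward technique described above reduces, in essence, to applying a pseudo-inverse of the lead-field matrix to the electrode potentials and then running the forward problem from the recovered sources. A heavily simplified toy sketch, with tiny assumed lead-field matrices, noiseless potentials, and normal-equations least squares standing in for a full Moore-Penrose computation:

```python
def transpose(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Assumed toy lead fields: 3 electrodes x 2 cortical sources
K = [[1.0, 0.2],
     [0.5, 0.8],
     [0.1, 1.0]]
true_sources = [2.0, -1.0]
v = [sum(k * s for k, s in zip(row, true_sources)) for row in K]  # electrode potentials

# Least-squares source estimate: s = (K^T K)^{-1} K^T v (pseudo-inverse action)
Kt = transpose(K)
KtK = matmul(Kt, K)
Ktv = [sum(k * vi for k, vi in zip(row, v)) for row in Kt]
s_hat = solve2(KtK, Ktv)

# Forward step: potentials at additional scalp nodes from the recovered sources
K_scalp = [[0.7, 0.4], [0.3, 0.9]]
v_scalp = [sum(k * s for k, s in zip(row, s_hat)) for row in K_scalp]
```

With noiseless data and a full-rank lead field the sources are recovered exactly; real EEG data require the regularized pseudo-inverse the paper applies.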
Roy, Subrata P.
2014-01-28
The method of moments with interpolative closure (MOMIC) for soot formation and growth provides a detailed modeling framework maintaining a good balance in generality, accuracy, robustness, and computational efficiency. This study presents several computational issues in the development and implementation of the MOMIC-based soot modeling for direct numerical simulations (DNS). The issues of concern include a wide dynamic range of numbers, choice of normalization, high effective Schmidt number of soot particles, and realizability of the soot particle size distribution function (PSDF). These problems are not unique to DNS, but they are often exacerbated by the high-order numerical schemes used in DNS. Four specific issues are discussed in this article: the treatment of soot diffusion, choice of interpolation scheme for MOMIC, an approach to deal with strongly oxidizing environments, and realizability of the PSDF. General, robust, and stable approaches are sought to address these issues, minimizing the use of ad hoc treatments such as clipping. The solutions proposed and demonstrated here are being applied to generate new physical insight into complex turbulence-chemistry-soot-radiation interactions in turbulent reacting flows using DNS. © 2014 Copyright Taylor and Francis Group, LLC.
International Nuclear Information System (INIS)
Ng, H.P.; Foong, K.W.C.; Ong, S.H.; Liu, J.; Nowinski, W.L.; Goh, P.S.
2007-01-01
The masseter plays a critical role in the mastication system. A hybrid approach to shape-based interpolation is used to build a masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, on which clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate candidates for determinative slices, and the fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work, and the average overlap index (κ) achieved is 85.2%. This indicates that there is good agreement between the models and the manual contour tracings. In addition, the time taken, as compared to manually segmenting all the slices, is significantly less. (orig.)
Cascading failures in interdependent systems under a flow redistribution model
Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman
2018-02-01
Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {(L_{A,i}, C_{A,i})}_{i=1}^{n} and {(L_{B,i}, C_{B,i})}_{i=1}^{n}, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1 - a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction p1 of the lines in A and a fraction p2 in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.
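The redistribution rule of this model can be sketched directly. The following is a minimal synchronous implementation with invented loads and capacities; the paper's analysis concerns random attacks in the large-n limit, not this toy size:

```python
def interdependent_cascade(LA, CA, LB, CB, a, b, attacked_A, attacked_B):
    """Flow-redistribution cascade between coupled systems A and B:
    when a line in A fails, a fraction `a` of its load is split equally
    among the surviving lines of B and (1 - a) among the survivors of A;
    failures in B are handled symmetrically with coefficient `b`.
    Returns the sets of failed line indices in A and B."""
    LA, LB = list(LA), list(LB)
    fail_A, fail_B = set(attacked_A), set(attacked_B)
    alive_A = set(range(len(LA))) - fail_A
    alive_B = set(range(len(LB))) - fail_B
    new_A, new_B = list(fail_A), list(fail_B)
    while new_A or new_B:
        shed_to_A = sum((1 - a) * LA[i] for i in new_A) + sum(b * LB[j] for j in new_B)
        shed_to_B = sum(a * LA[i] for i in new_A) + sum((1 - b) * LB[j] for j in new_B)
        if alive_A:
            for i in alive_A:
                LA[i] += shed_to_A / len(alive_A)
        if alive_B:
            for j in alive_B:
                LB[j] += shed_to_B / len(alive_B)
        new_A = [i for i in alive_A if LA[i] > CA[i]]
        new_B = [j for j in alive_B if LB[j] > CB[j]]
        fail_A |= set(new_A)
        alive_A -= set(new_A)
        fail_B |= set(new_B)
        alive_B -= set(new_B)
    return fail_A, fail_B

# Tight capacities: a single attacked line in A collapses both systems
failA, failB = interdependent_cascade([4, 4], [5, 5], [4, 4], [5, 5], 0.5, 0.5, [0], [])
```

Varying a and b in such a simulation is the direct analogue of the coupling-coefficient robustness study in the paper.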
Digital elevation modeling via curvature interpolation for LiDAR data
Directory of Open Access Journals (Sweden)
Hwamog Kim
2016-03-01
Full Text Available A digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction from scattered data, which is an ill-posed problem, and most computational algorithms become overly expensive as the number of sample points increases. This article studies an effective partial differential equation (PDE)-based algorithm, called the curvature interpolation method (CIM). The new method iteratively utilizes curvature information, estimated from an intermediate surface, to construct a reliable image surface that contains all of the data points. The CIM is applied to DEM construction for point cloud data acquired by light detection and ranging (LiDAR) technology. It converges to a piecewise smooth image, requiring O(N) operations independently of the number of sample points, where N is the number of grid points.
Directory of Open Access Journals (Sweden)
Shaofeng Wang
2017-05-01
Full Text Available Mineral reserve estimation and mining design depend on a precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including a 1D biharmonic spline estimator for interpolating floor altitudes; 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for the grade distribution on the two vertical sections at the roadways; and 3D linear interpolation for the grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, this multi-step interpolation using the natural neighbor method shows optimal stability and a minimal difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors and relative standard deviations of the errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa; and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
A weakened cascade model for turbulence in astrophysical plasmas
International Nuclear Information System (INIS)
Howes, G. G.; TenBarge, J. M.; Dorland, W.
2011-01-01
A refined cascade model for kinetic turbulence in weakly collisional astrophysical plasmas is presented that includes both the transition between weak and strong turbulence and the effect of nonlocal interactions on the nonlinear transfer of energy. The model describes the transition between weak and strong MHD turbulence and the complementary transition from strong kinetic Alfven wave (KAW) turbulence to weak dissipating KAW turbulence, a new regime of weak turbulence in which the effects of shearing by large scale motions and kinetic dissipation play an important role. The inclusion of the effect of nonlocal motions on the nonlinear energy cascade rate in the dissipation range, specifically the shearing by large-scale motions, is proposed to explain the nearly power-law energy spectra observed in the dissipation range of both kinetic numerical simulations and solar wind observations.
Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model
Directory of Open Access Journals (Sweden)
C. Narendra Babu
2015-07-01
Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like artificial neural networks (ANNs) maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
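The moving-average pre-processing step can be sketched as a centered low-pass filter that splits a series into trend and residual components. The prices and window size below are illustrative assumptions, not the paper's data:

```python
def moving_average(series, window):
    """Centered moving-average filter (simple low-pass pre-processing):
    each point is replaced by the mean of the window around it, with the
    window clipped at the series boundaries."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Invented price series; a trend/residual split for downstream modeling
prices = [100, 102, 101, 105, 107, 106, 110, 112]
trend = moving_average(prices, 3)
residual = [p - t for p, t in zip(prices, trend)]
```

Hybrid schemes of this flavor fit one model to the smoothed trend and another to the residual, then recombine the forecasts.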
Directory of Open Access Journals (Sweden)
F. F. Asal
2012-07-01
Full Text Available Digital elevation data obtained from different engineering surveying techniques are utilized in generating a Digital Elevation Model (DEM), which is employed in many engineering and environmental applications. These data are usually in discrete point format, making it necessary to utilize an interpolation approach for the creation of the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods while neglecting visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers in order to examine its potential for assessing the effects of the variation of the IDW power on DEM quality. Real elevation data were collected in the field using a total station instrument in corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques. Visual analysis showed that smoothing of the DEM decreases with increasing power value up to a power of four; however, increasing the power beyond four does not leave noticeable changes in the 2D and 3D views of the DEM. The statistical analysis supported these results, where the Standard Deviation (SD) of the DEM increased with increasing power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four gave 60% and 75%, respectively. This indicates a decrease in DEM smoothing with increasing IDW power. The study also has shown that applying visual methods supported
Geer, F.C. van; Zuur, A.F.
1997-01-01
This paper advocates an approach to extend single-output Box-Jenkins transfer/noise models for several groundwater head series to a multiple-output transfer/noise model. The approach links several groundwater head series and enables a spatial interpolation in terms of time series analysis. Our
Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang
2018-05-01
In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
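The core of any RBF-based spectrum interpolation is solving a kernel system for weights and then evaluating the resulting expansion at new parameter values. A minimal 1D sketch with a Gaussian kernel and invented sample values; the paper's RBF network over irregular multi-parameter grids is more involved:

```python
import math

def rbf_interpolator(xs, ys, eps=1.0):
    """1-D radial basis function interpolation with a Gaussian kernel
    phi(r) = exp(-(eps*r)**2): solve Phi w = y for the weights, then
    evaluate s(x) = sum_j w_j * phi(|x - x_j|)."""
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] for i in range(n)]
    w = list(ys)
    for k in range(n):                 # Gaussian elimination, partial pivoting
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        w[k], w[p] = w[p], w[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            w[i] -= m * w[k]
    for k in range(n - 1, -1, -1):     # back substitution
        w[k] = (w[k] - sum(A[k][j] * w[j] for j in range(k + 1, n))) / A[k][k]
    return lambda x: sum(w[j] * phi(abs(x - xs[j])) for j in range(n))

# A toy "spectrum" sampled at five points (invented flux values)
s = rbf_interpolator([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 0.5, 1.5, 1.0])
```

The interpolant passes exactly through the samples, which is the property that makes RBF networks attractive for irregularly distributed stellar parameters.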
Feature displacement interpolation
DEFF Research Database (Denmark)
Nielsen, Mads; Andresen, Per Rønsholt
1998-01-01
Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, also including scale selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.
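A minimal one-dimensional sketch of the Kriging estimator referred to above can be written as follows. The exponential covariance model C(h) = exp(-|h|/L), the length scale `L` and the feature positions/values are all assumptions for illustration, not the mandible data from the paper.

```python
import numpy as np

# Simple-kriging sketch: the estimate at a new point is a weighted sum of
# observations, with weights derived from an assumed covariance model.
# The estimator is exact at observed feature locations.

def simple_krige(x_obs, y_obs, x_new, L=0.3):
    C = np.exp(-np.abs(x_obs[:, None] - x_obs[None, :]) / L)  # obs covariances
    c = np.exp(-np.abs(x_new[:, None] - x_obs[None, :]) / L)  # cross covariances
    weights = np.linalg.solve(C, c.T).T    # kriging weights per target point
    return weights @ y_obs

x_obs = np.array([0.0, 0.2, 0.5, 0.9])     # sparse feature positions
y_obs = np.array([1.0, 0.8, 0.3, 0.1])     # displacements at the features
est = simple_krige(x_obs, y_obs, np.array([0.2, 0.35]))
```

At an observed location the estimate reproduces the data exactly; between features the covariance model controls the smoothness, which is the point the abstract makes about interpolation model choice.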
Boolean Models of Biological Processes Explain Cascade-Like Behavior
Chen, Hao; Wang, Guanyu; Simha, Rahul; Du, Chenghang; Zeng, Chen
2016-01-01
Biological networks play a key role in determining biological function and, therefore, an understanding of their structure and dynamics is of central interest in systems biology. In Boolean models of such networks, the status of each molecule is either “on” or “off”; as the molecules interact with each other, their individual status changes from “on” to “off” or vice versa, and the system of molecules in the network collectively goes through a sequence of changes in state. This sequence of changes is termed a biological process. In this paper, we examine the common perception that events in biomolecular networks occur sequentially, in a cascade-like manner, and ask whether this is likely to be an inherent property. In further investigations of the budding and fission yeast cell cycle, we identify two generic dynamical rules. A Boolean system that complies with these rules will automatically have a certain robustness. By considering the biological requirements of robustness and designability, we show that such Boolean dynamical systems, compared to an arbitrary dynamical system, statistically present the characteristics of cascadeness and sequentiality, as observed in the budding and fission yeast cell cycle. These results suggest that cascade-like behavior might be an intrinsic property of biological processes. PMID:26821940
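The cascade-like state changes described above can be seen in a toy synchronous Boolean network. The three-node rules below are hypothetical, chosen purely for illustration; they are not the yeast cell-cycle network studied in the paper.

```python
# Toy synchronous Boolean network: each node's next state is a logical
# function of the current states. The rules are illustrative only.

def step(state):
    a, b, c = state
    return (not c,   # A is inhibited by C
            a,       # B is activated by A
            b)       # C is activated by B

state = (True, False, False)   # only A is "on" initially
trajectory = [state]
for _ in range(6):
    state = step(state)
    trajectory.append(state)
# the "on" signal propagates A -> B -> C and then shuts down in the same
# order, i.e. the state sequence is cascade-like and eventually cyclic
```

Iterating the update shows the activation travelling down the chain one node per step, which is exactly the sequential, cascade-like behavior the abstract discusses.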
A Unified Model of Secondary Electron Cascades in Diamond
Energy Technology Data Exchange (ETDEWEB)
Ziaja, B; London, R A; Hajdu, J
2004-10-13
In this paper we present a detailed and unified theoretical treatment of secondary electron cascades that follow the absorption of an X-ray photon. A Monte Carlo model has been constructed that treats in detail the evolution of electron cascades induced by photoelectrons and by Auger electrons following inner shell ionizations. Detailed calculations are presented for cascades initiated by electron energies between 0.1 and 10 keV. The present paper expands our earlier work by extending the primary energy range, by improving the treatment of secondary electrons, especially at low electron energies, by including ionization by holes, and by taking into account their coupling to the crystal lattice. The calculations describe the three-dimensional evolution of the electron cloud, and monitor the equivalent instantaneous temperature of the free-electron gas as the system cools. The dissipation of the impact energy proceeds predominantly through the production of secondary electrons whose energies are comparable to the binding energies of the valence (40-50 eV) and of the core electrons (300 eV). The electron cloud generated by a 10 keV electron is strongly anisotropic in the early phases of the cascade (t ≤ 1 fs). At later times, the sample is dominated by low energy electrons, and these are scattered more isotropically by atoms in the sample. Our results for the total late time number of secondary electrons agree with available experimental data, and show that the emission of secondary electrons approaches saturation within about 100 fs following the primary impact.
2016-02-11
INVESTIGATION OF BACK-OFF BASED INTERPOLATION BETWEEN RECURRENT NEURAL NETWORK AND N-GRAM LANGUAGE MODELS X. Chen, X. Liu, M. J. F. Gales, and P. C...weighting based linear interpolation in state-of-the-art ASR systems. However, previous work doesn't fully exploit the difference of modelling power of the...back-off based compact representation of n-gram dependent interpolation weights is proposed in this paper. This approach allows weight parameters to
International Nuclear Information System (INIS)
Pohjola, J.; Turunen, J.; Lipping, T.
2009-07-01
In this report the creation of a digital elevation model of the Olkiluoto area incorporating a large area of seabed is described. The modeled area covers 960 square kilometers, and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, like contour lines and irregular elevation measurements, were used as source data in the process. The precision and reliability of the available source data varied largely. A digital elevation model (DEM) comprises a representation of the elevation of the surface of the earth in a particular area in digital format. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data, and is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape over a time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
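The thin plate spline interpolation chosen above can be sketched in a few lines. This is an illustrative 2D implementation (kernel φ(r) = r² log r plus an affine trend) on invented toy points, not the Olkiluoto elevation data or the software actually used.

```python
import numpy as np

# Thin-plate-spline interpolation sketch: solve the standard augmented
# linear system [K P; P^T 0] [w; a] = [z; 0], then evaluate
# s(q) = sum_j w_j phi(|q - p_j|) + a0 + a1*x + a2*y.

def tps_phi(r):
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)  # phi(0) = 0

def tps_fit(pts, z):
    n = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), pts])              # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_phi(r)
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(q, pts, w, a):
    r = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
    return tps_phi(r) @ w + a[0] + q @ a[1:]

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
z = np.array([0., 1., 1., 2., 1.0])                    # toy elevations
w, a = tps_fit(pts, z)
surface = tps_eval(pts, pts, w, a)                      # reproduces the data
```

Like the DEM construction in the report, the spline passes exactly through the source points while staying maximally smooth (in the bending-energy sense) elsewhere.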
Directory of Open Access Journals (Sweden)
Hosein Ghaffarzadeh
This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under a damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF for the assessment of the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation; however, the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high errors. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but additional points in the damage region do not necessarily lead to more accurate responses. It is concluded that pure HRPIM, without any polynomial terms, is acceptable, and that considering a few terms will improve the accuracy, though too many terms make the problem unstable and inaccurate.
Pursiainen, S.; Vorwerk, J.; Wolters, C. H.
2016-12-01
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.
2015-03-16
with logical rules to simulate an archetype biochemical network, the human coagulation cascade. The model consisted of five differential equations...coagulation system. Coagulation is an archetype proteolytic cascade involving both positive and negative feedback [10–12]. Coagulation is mediated by a...purely ODE models in the literature. We estimated the model parameters from in vitro extrinsic coagulation data sets, in the presence of ATIII, with and
Emotional intelligence: an integrative meta-analysis and cascading model.
Joseph, Dana L; Newman, Daniel A
2010-01-01
Research and valid practice in emotional intelligence (EI) have been impeded by lack of theoretical clarity regarding (a) the relative roles of emotion perception, emotion understanding, and emotion regulation facets in explaining job performance; (b) conceptual redundancy of EI with cognitive intelligence and Big Five personality; and (c) application of the EI label to 2 distinct sets of constructs (i.e., ability-based EI and mixed-based EI). In the current article, the authors propose and then test a theoretical model that integrates these factors. They specify a progressive (cascading) pattern among ability-based EI facets, in which emotion perception must causally precede emotion understanding, which in turn precedes conscious emotion regulation and job performance. The sequential elements in this progressive model are believed to selectively reflect Conscientiousness, cognitive ability, and Neuroticism, respectively. "Mixed-based" measures of EI are expected to explain variance in job performance beyond cognitive ability and personality. The cascading model of EI is empirically confirmed via meta-analytic data, although relationships between ability-based EI and job performance are shown to be inconsistent (i.e., EI positively predicts performance for high emotional labor jobs and negatively predicts performance for low emotional labor jobs). Gender and race differences in EI are also meta-analyzed. Implications for linking the EI fad in personnel selection to established psychological theory are discussed. Copyright 2009 APA, all rights reserved.
Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor
Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G
2000-01-01
An accelerator-driven subcritical cascade reactor composed of a main thermal neutron reactor, constructed analogously to the core of the VVER-1000 reactor, and a booster reactor, constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff} = 0.94-0.98) and is capable of transmuting the produced radioactive wastes (the neutron flux density in the thermal zone is PHI^{max}(r,z) = 10^{14} n cm^{-2} s^{-1}, and the neutron flux in the fast zone is correspondingly PHI^{max}(r,z) = 2.25·10^{15} n cm^{-2} s^{-1}, for k_{eff} = 0.98 and a proton accelerator beam current of I = 5.3 mA). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.
Directory of Open Access Journals (Sweden)
Mengmeng Wang
2017-12-01
Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. In recent decades, many efforts have been made to develop various methods for obtaining spatially continuous NSAT from gauge or station observations. This study compared three spatial interpolation models (Kriging, Spline, and Inverse Distance Weighting (IDW)) and two regression analysis models (Multiple Linear Regression (MLR) and Geographically Weighted Regression (GWR)) for predicting monthly minimum, mean, and maximum NSAT in China, a domain with a large area, complex topography, and highly variable station density, for the 12 months of 2010. The accuracy of the GWR model is better than that of the MLR model, with an improvement of about 3 °C in the Root Mean Squared Error (RMSE), which indicates that the GWR model is more suitable for predicting monthly NSAT than the MLR model over a large scale. For the three spatial interpolation models, the RMSEs of the predicted monthly NSAT are greater in the warmer months, and the mean RMSEs of the predicted monthly mean NSAT for the 12 months of 2010 are 1.56 °C for the Kriging model, 1.74 °C for the IDW model, and 2.39 °C for the Spline model. The GWR model is better than the Kriging model in the warmer months, while the Kriging model is superior to the GWR model in the colder months; the overall precision of the GWR model is slightly higher than that of the Kriging model. The assessment indicated that a higher standard deviation and a lower mean of NSAT in the sample data are associated with better performance in predicting monthly NSAT using spatial interpolation models.
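Of the interpolation models compared above, IDW is the simplest to state. The following sketch uses the usual power p = 2 and invented station data; it is an illustration of the method, not the study's implementation.

```python
import numpy as np

# Inverse-distance-weighting sketch: a station's influence decays with
# distance to the power p; the estimator is exact at station locations.

def idw(xy_obs, t_obs, xy_new, p=2.0):
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** p        # clamp avoids division by zero
    w /= w.sum(axis=1, keepdims=True)          # normalize weights per target
    return w @ t_obs

stations = np.array([[0., 0.], [1., 0.], [0., 1.]])   # toy station coords
temps = np.array([10.0, 14.0, 12.0])                  # toy monthly NSAT (°C)
est = idw(stations, temps, np.array([[0., 0.], [0.5, 0.5]]))
```

At a station the estimate returns the observed value; at a point equidistant from all three stations the weights are equal and the estimate is the plain mean.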
Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh
2011-12-01
Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
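The core scoring idea of an interpolated Markov model can be sketched briefly: the probability of the next symbol is a weighted blend of fixed-order Markov estimates of increasing order. The fixed weights and toy sequence below are assumptions for illustration; real IMMs (as in the paper) learn context-dependent weights from training CRMs.

```python
from collections import defaultdict

# Interpolated-Markov-model sketch: blend order-0..order-k Markov
# estimates of P(next symbol | context) with interpolation weights.

def train(seq, max_order):
    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(max_order + 1)]
    for i in range(len(seq)):
        for k in range(max_order + 1):
            if i >= k:
                counts[k][seq[i - k:i]][seq[i]] += 1   # context of length k
    return counts

def imm_prob(counts, context, sym, weights):
    p = 0.0
    for k, lam in enumerate(weights):
        ctx = context[len(context) - k:]               # last k symbols
        total = sum(counts[k][ctx].values())
        if total:
            p += lam * counts[k][ctx][sym] / total
    return p

seq = "ACGACGACGACG"
counts = train(seq, 2)
p = imm_prob(counts, "AC", "G", weights=[0.2, 0.3, 0.5])
# in this toy text "G" always follows "AC" and "C", so the order-1 and
# order-2 terms contribute their full weights
```

Variable-length contexts let the model fall back to shorter, better-supported contexts when a long word is rare, which is what makes the approach effective for motif-blind CRM discovery.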
Directory of Open Access Journals (Sweden)
Diane Palmer
2017-02-01
Plane-of-array (PoA) irradiation data is a requirement to simulate the energetic performance of photovoltaic devices (PVs). Normally, solar data is only available as global horizontal irradiation, for a limited number of locations, and typically in hourly time resolution. One approach to handling this restricted data is to enhance it initially by interpolation to the location of interest; next, it must be translated to PoA data by separately considering the diffuse and the beam components. There are many methods of interpolation. This research selects ordinary kriging as the best performing technique by studying mathematical properties, experimentation and leave-one-out cross validation. Likewise, a number of different translation models have been developed, most of them parameterised for specific measurement setups and locations. The work presented identifies the optimum approach for the UK on a national scale. The global horizontal irradiation is split into its constituent parts. Diverse separation models were tried, and the results of each separation algorithm were checked against measured data distributed across the UK. It became apparent that, while there is little difference between procedures (14 Wh/m2 mean bias error (MBE), 12 Wh/m2 root mean square error (RMSE)), the Ridley, Boland and Lauret equation (a universal split algorithm) consistently performed well. The combined interpolation/separation RMSE is 86 Wh/m2.
Developmental Cascade Model for Adolescent Substance Use from Infancy to Late Adolescence
Eiden, Rina D.; Lessard, Jared; Colder, Craig R.; Livingston, Jennifer; Casey, Meghan; Leonard, Kenneth E.
2016-01-01
A developmental cascade model for adolescent substance use beginning in infancy was examined in a sample of children with alcoholic and nonalcoholic parents. The model examined the role of parents' alcohol diagnoses, depression and antisocial behavior in a cascading process of risk via 3 major hypothesized pathways: first, via parental…
A cascade modelling approach to flood extent estimation
Pedrozo-Acuña, Adrian; Rodríguez-Rincón, Juan Pablo; Breña-Naranjo, Agustin
2014-05-01
Recent efforts dedicated to the generation of new flood risk management strategies have pointed out that a possible way forward for improvement in this field relies on the reduction and quantification of the uncertainties associated with the prediction system. With the purpose of reducing these uncertainties, this investigation follows a cascade modelling approach (meteorological - hydrological - 2D hydrodynamic) in combination with high-quality data (LiDAR, satellite imagery, precipitation) to study an extreme event registered last year in Mexico. The presented approach is useful both for the characterisation of epistemic uncertainties and for the generation of flood management strategies through probabilistic flood maps. Uncertainty is considered in both the meteorological and hydrological models, and is propagated to a given flood extent as determined with a hydrodynamic model. Although the methodology does not consider all the uncertainties that may be involved in the determination of a flooded area, it enables a better understanding of the interaction between errors in the set-up of models and their propagation to a given result.
Müller, H.; Haberlandt, U.
2018-01-01
Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often used approach based on the assumption that time steps of Δt = 5.625 min, which result if a branching number of 2 is applied throughout, can be replaced with Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding the reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. Best
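The branching step of a multiplicative random cascade can be sketched as follows: each interval's rain total is split among b = 3 sub-intervals by random weights that sum to one, so the total is conserved at every level. The uniform-Dirichlet weight model used here is an illustrative assumption, not the calibrated generator of the paper.

```python
import numpy as np

# Multiplicative-cascade disaggregation sketch with branching number b = 3:
# at each level every value is multiplied by a random weight vector that
# sums to one, so rainfall mass is conserved and dry intervals stay dry.

rng = np.random.default_rng(42)

def disaggregate(totals, levels, b=3):
    series = np.asarray(totals, dtype=float)
    for _ in range(levels):
        w = rng.dirichlet(np.ones(b), size=series.size)   # rows sum to 1
        series = (series[:, None] * w).ravel()
    return series

daily = [12.0, 0.0, 3.0]                 # mm per day: wet, dry, wet
fine = disaggregate(daily, levels=3)     # 3 values -> 81 sub-interval values
```

Three levels with b = 3 refine each day into 27 sub-intervals; the dry day remains exactly zero throughout, mirroring how the cascade preserves dry spells.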
Interpolation functors and interpolation spaces
Brudnyi, Yu A
1991-01-01
The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...
Philipp, Anne; Hittmeir, Sabine; Seibert, Petra
2017-04-01
The distribution of wet deposition as calculated with Lagrangian particle transport models, e.g. FLEXPART (http://flexpart.eu), is governed by the intensity distribution of precipitation. Usually, meteorological input is taken from Eulerian weather forecast models, e.g. of ECMWF (European Centre for Medium-Range Weather Forecasts), providing precipitation data integrated over the time between two output times and over a grid cell. Simple linear interpolation would implicitly assume the integral value to be a point value valid at the grid centre and in the middle of the time interval, and would thus underestimate peaks and overestimate local minima. In FLEXPART, a separate pre-processor is used to extract the meteorological input data from the ECMWF archive and prepare them for use in the model. Currently, a relatively simple method prepares the precipitation fields in a way that is consistent with the linear interpolation as applied in FLEXPART. This method is designed to conserve the original amount of precipitation. However, it leads to undesired temporal smoothing of the precipitation time series, which even produces nonzero precipitation in dry intervals bordering a precipitation period. A new interpolation algorithm (currently in one dimension) was developed which introduces additional supporting grid points in each time interval (see the companion contribution by Hittmeir, Philipp and Seibert). The quality of the algorithm is first being tested by comparing 1-hourly values derived with the new algorithm from 3- (or 6-)hourly precipitation with the 1-hourly ECMWF model output. As ECMWF provides large-scale and convective precipitation data, the evaluation will be carried out separately for each, as well as for different seasons and climatic zones.
Directory of Open Access Journals (Sweden)
S. Ly
2011-07-01
Spatial interpolation of precipitation data is of great importance for hydrological modelling. Geostatistical methods (kriging) are widely applied in spatial interpolation from point measurements to continuous surfaces. The first step in kriging computation is the semi-variogram modelling, which usually uses only one variogram model for all of the data. The objective of this paper was to develop different algorithms of spatial interpolation for daily rainfall on 1 km^{2} regular grids in the catchment area and to compare the results of geostatistical and deterministic approaches. This study relied on 30 yr of daily rainfall data from 70 raingages in the hilly landscape of the Ourthe and Ambleve catchments in Belgium (2908 km^{2}). This area lies between 35 and 693 m in elevation and consists of river networks which are tributaries of the Meuse River. For the geostatistical algorithms, seven semi-variogram models (logarithmic, power, exponential, Gaussian, rational quadratic, spherical and penta-spherical) were fitted to the daily sample semi-variogram on a daily basis. These seven variogram models were also adopted to avoid negative interpolated rainfall. The elevation, extracted from a digital elevation model, was incorporated into multivariate geostatistics. Seven validation raingages and cross validation were used to compare the interpolation performance of these algorithms applied to different densities of raingages. We found that, among the seven variogram models used, the Gaussian model was most frequently the best fit. Using seven variogram models can avoid negative daily rainfall in ordinary kriging. Negative kriging estimates were observed for convective rain more than for stratiform rain. The performance of the different methods varied slightly according to the density of raingages, particularly between 8 and 70 raingages, but it was much different for interpolation using 4 raingages. Spatial interpolation with the geostatistical and
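The semi-variogram modelling step mentioned above (computing empirical semivariances and fitting a model) can be sketched as follows. The synthetic "rainfall" field, the bin edges, and the coarse grid search for the exponential model's sill and range are all assumptions for illustration.

```python
import numpy as np

# Semi-variogram sketch: empirical semivariances gamma(h) = 0.5 E[(z_i - z_j)^2]
# in distance bins, then a grid-search fit of an exponential variogram model.

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 80)                     # 1D "station" coordinates
z = np.sin(x) + 0.1 * rng.standard_normal(80)  # spatially correlated toy field

d = np.abs(x[:, None] - x[None, :])            # pairwise distances
g = 0.5 * (z[:, None] - z[None, :]) ** 2       # pairwise semivariances
bins = np.linspace(0.1, 4.0, 8)
emp = np.array([g[(d > lo) & (d <= hi)].mean()
                for lo, hi in zip(bins[:-1], bins[1:])])
mids = 0.5 * (bins[:-1] + bins[1:])

def expo(h, sill, a):                          # exponential variogram model
    return sill * (1.0 - np.exp(-h / a))

best = min(((sill, a) for sill in np.linspace(0.2, 1.5, 27)
            for a in np.linspace(0.2, 3.0, 29)),
           key=lambda p: np.sum((expo(mids, *p) - emp) ** 2))
```

In the paper this fit is repeated daily over seven candidate models; the one with the smallest misfit then parameterises the kriging system for that day.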
Technical note: Cascade of submerged reservoirs as a rainfall-runoff model
Kurnatowski, Jacek
2017-09-01
The rainfall-runoff conceptual model as a cascade of submerged linear reservoirs, with particular outflows depending on the storages of adjoining reservoirs, is developed. The model output contains different exponential functions with roots of Chebyshev polynomials of the first kind as exponents. The model is applied to instantaneous unit hydrograph (IUH) and recession curve problems and compared with the analogous results of the Nash cascade. A case study is performed on the basis of 46 recession periods. The obtained results show the usefulness of the model as an alternative concept to the Nash cascade.
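For reference, the classical Nash cascade against which the submerged-reservoir model is compared can be simulated in a few lines: n identical linear reservoirs in series, each draining at rate S/k. The reservoir count, storage constant, and explicit-Euler discretisation below are illustrative choices, and this is the uncoupled reference model, not the paper's submerged cascade.

```python
import numpy as np

# Nash-cascade sketch: dS_i/dt = q_{i-1} - S_i/k for n reservoirs in
# series, integrated with explicit Euler; the response to a unit impulse
# approximates the instantaneous unit hydrograph (IUH), a gamma density.

def nash_cascade(inflow, n=3, k=2.0, dt=0.01):
    S = np.zeros(n)
    out = []
    for q_in in inflow:
        q = q_in
        for i in range(n):
            q_out = S[i] / k          # linear-reservoir outflow
            S[i] += dt * (q - q_out)  # storage update
            q = q_out                 # outflow feeds the next reservoir
        out.append(q)
    return np.array(out)

dt, steps = 0.01, 4000
inflow = np.zeros(steps)
inflow[0] = 1.0 / dt                  # unit impulse of total volume 1
iuh = nash_cascade(inflow, dt=dt)     # outflow rate series
```

The simulated outflow integrates back to the input volume (mass balance) and peaks near t = (n-1)k, matching the analytical gamma-shaped IUH of the Nash cascade.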
Comparison of interpolation and approximation methods for optical freeform synthesis
Voznesenskaya, Anna; Krizskiy, Pavel
2017-06-01
Interpolation and approximation methods for freeform surface synthesis are analyzed using a developed software tool. A special computer tool was developed, and the results of freeform surface modeling with piecewise linear interpolation, piecewise quadratic interpolation, cubic spline interpolation and Lagrange polynomial interpolation are considered. The most accurate interpolation method is recommended. Surface profiles are approximated with the least squares method. The freeform systems are generated in optical design software.
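The distinction the abstract draws between interpolation and least-squares approximation can be shown on a toy profile. The sample points, the hypothetical sag function, and the polynomial degrees are assumptions for illustration, not the paper's freeform surfaces.

```python
import numpy as np

# Interpolation passes through every sample (degree = points - 1), while a
# low-degree least-squares fit minimizes the overall residual instead.

x = np.linspace(-1.0, 1.0, 9)
sag = 0.5 * x**2 + 0.05 * x**4                 # hypothetical surface profile

interp_coeffs = np.polyfit(x, sag, deg=8)      # 9 points, degree 8: interpolates
approx_coeffs = np.polyfit(x, sag, deg=2)      # degree-2 least-squares fit

interp_err = np.max(np.abs(np.polyval(interp_coeffs, x) - sag))
approx_err = np.max(np.abs(np.polyval(approx_coeffs, x) - sag))
```

The interpolant reproduces the samples to machine precision while the quadratic approximation leaves a small but nonzero residual; which behaviour is preferable depends on whether the samples are exact design points or noisy measurements.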
Modelling of the Blood Coagulation Cascade in an In Vitro Flow System
DEFF Research Database (Denmark)
Andersen, Nina Marianne; Sørensen, Mads Peter; Efendiev, Messoud A.
2010-01-01
We derive a mathematical model of a part of the blood coagulation cascade set up in a perfusion experiment. Our purpose is to simulate the influence of blood flow and diffusion on the blood coagulation pathway. The resulting model consists of a system of partial differential equations taking diffusion and flow into account. Criteria are given for the reaction, diffusion and flow equations which guarantee non-negative concentrations at all times; these criteria are applied to the model of the blood coagulation cascade.
Model and Study on Cascade Control System Based on IGBT Chopping Control
Niu, Yuxin; Chen, Liangqiao; Wang, Shuwen
2018-01-01
The thyristor cascade control system has a wide range of applications in the industrial field, but the traditional cascade control system has some shortcomings, such as a low power factor and serious harmonic pollution. In this paper, we not only analyze its system structure and working principle, but also discuss the two main factors affecting the power factor. The chopping-control cascade system adopts a new power switching device, the IGBT, which can efficiently overcome the two main drawbacks of the traditional cascade control system. The basic principle of this cascade control system is discussed in this paper, and a model of the speed control system is built using MATLAB/Simulink. Finally, the simulation results show that the system works efficiently. This system is worth promoting widely in engineering applications.
International Nuclear Information System (INIS)
Katoh, Yutai; Muroga, Takeo; Kohyama, Akira; Stoller, R.E.; Namba, Chusei; Motojima, Osamu.
1995-11-01
Recent computational and experimental studies have confirmed that high energy cascades produce clustered defects of both vacancy and interstitial types as well as isolated point defects. However, the production probability, configuration, stability and other characteristics of the cascade clusters are not well understood, in spite of the fact that clustered defect production would substantially affect the irradiation-induced microstructures and the consequent property changes in a certain range of temperatures and displacement rates. In this work, a model of point defect and cluster evolution in irradiated materials under cascade damage conditions was developed by combining conventional reaction rate theory with the results of the latest molecular dynamics simulation studies. This paper provides a description of the model and a model-based fundamental investigation of the influence of the configuration, production efficiency and initial size distribution of cascade-produced vacancy clusters. In addition, using the model, issues in characterizing cascade-induced defect production by microstructural analysis are discussed. In particular, the determination of the cascade vacancy cluster configuration, the surviving defect production efficiency and the cascade-interaction volume is attempted by analyzing the temperature dependence of the swelling rate and loop growth rate in austenitic steels and model alloys. (author)
Directory of Open Access Journals (Sweden)
Mei Hong
2017-01-01
Prediction in Ungauged Basins (PUB) is an important task for water resources planning and management and remains a fundamental challenge for the hydrological community. In recent years, geostatistical methods have proven valuable for estimating hydrological variables in ungauged catchments. However, four major problems restrict the development of geostatistical methods. We established a new information diffusion model based on a genetic algorithm (GIDM) for the spatial interpolation of runoff in ungauged basins. Genetic algorithms (GAs) are used to generate high-quality solutions to optimization and search problems; using a GA, the optimal window width parameter can be obtained. To test our new method, seven experiments for annual runoff interpolation based on the GIDM at 17 stations on the mainstream and tributaries of the Yellow River were carried out and compared with the inverse distance weighting (IDW) method, the cokriging (COK) method, and conventional IDMs using the same sparse observed data. All seven experiments show that the GIDM can solve, to some extent, the four problems of the previous geostatistical methods and obtains the best accuracy among the four models. The key problems of PUB research are the lack of observation data and the difficulties in information extraction, so the GIDM is a new and useful tool for solving the PUB problem and improving water management.
Giesemann, Jens; Greiner, Martin; Lipa, Peter
1997-02-01
The generators of binary multiplicative cascade models with a non-overlapping branching structure are given by the Haar wavelets. We construct specific generalizations of these models for which any given wavelet represents the generators of the local cascade branchings. Such “wavelet cascades”, for which we calculate spatial correlation functions, have spatially overlapping branches and are therefore useful for modeling recombination effects in hierarchical branching processes.
Sixtus, Frederick
2009-01-01
Contents: Interpol - a brief historical outline - Interpol today - structure - Interpol's core functions. Europol (or: the European Police Office) - a brief historical outline - Europol today - structure - the oversight of Europol - Europol's core tasks - How do the international police authorities actually work? - Harbingers of a world police force?
International Nuclear Information System (INIS)
Toeroek, Szabolcs; Borgulya, Gabor; Lobmayer, Peter; Jakab, Zsuzsanna; Schuler, Dezsoe; Fekete, Gyoergy
2005-01-01
The incidence of childhood leukaemia in Hungary has yet to be reported, although data have been available since the early 1970s. The Hungarian data therefore cover the time before and after the Chernobyl nuclear accident (1986). The aim of this study was to assess the effects of the Chernobyl accident on childhood leukaemia incidence in Hungary. A population-based study was carried out using data of the National Paediatric Cancer Registry of Hungary from 1973 to 2002. The total number of cases was 2204. To test the effect of the Chernobyl accident, the authors applied a new approach called the 'Hypothesized Impact Period Interpolation' model, which takes into account the increasing trend of childhood leukaemia incidence and the hypothesized exposure and latency times. The incidence of leukaemia in the age group 0-14 varied between 33.2 and 39.4 per million person-years over the observed 30-year period, and the incidence of childhood leukaemia showed a moderate increase of 0.71% annually (p=0.0105). In the period of the hypothesized impact of the Chernobyl accident the incidence rate was elevated by 2.5% (95% CI: -8.1%; +14.3%), but this change was not statistically significant (p=0.663). The age-standardised incidence, the age distribution, the gender ratio, and the magnitude of the increasing trend of childhood leukaemia incidence in Hungary were similar to those of other European countries. Applying the presented interpolation method, the authors did not find a statistically significant increase in leukaemia incidence in the period of the hypothesized impact of the Chernobyl accident.
A model of disordered zone formation in Cu3Au under cascade-producing irradiation
International Nuclear Information System (INIS)
Kapinos, V.G.; Bacon, D.J.
1995-01-01
A model to describe the disordering of ordered Cu3Au under irradiation is proposed. For the thermal spike phase of a displacement cascade, the processes of heat evolution and conduction in the cascade region are modelled by solving the thermal conduction equation by a discretization method for a medium that can melt and solidify under appropriate conditions. The model considers disordering to result from cascade core melting, with the final disordered zone corresponding to the largest molten zone achieved. The initial conditions for this treatment are obtained by simulation of cascades with the MARLOWE code. The contrast of disordered zones imaged in a superlattice dark-field reflection and projected on the plane parallel to the surface of a thin foil was calculated. The average size of images from hundreds of cascades created by incident Cu+ ions was calculated for different ion energies and compared with experimental transmission electron microscopy data. The model is in reasonable quantitative agreement with the experimentally observed trends. (author)
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Directory of Open Access Journals (Sweden)
P Martin Sander
2013-01-01
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
Analysis of car-following model with cascade compensation strategy
Zhu, Wen-Xing; Zhang, Li-Dong
2016-05-01
A cascade compensation mechanism was designed to improve the dynamical performance of the traffic flow system. Two compensation methods were used to study the unit step response in the time domain and the frequency characteristics with different parameters. The overshoot and phase margin are proportional to the compensation parameter in the underdamped condition. Through this comparison we choose the phase-lead compensation method as the main strategy for suppressing traffic jams. Simulations were conducted under two boundary conditions to verify the validity of the compensator. It can be concluded that the stability of the system is strengthened as the phase-lead compensation parameter increases. Moreover, the numerical simulation results are in good agreement with the analytical results.
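The phase-lead compensation chosen above can be illustrated with the standard first-order compensator C(s) = (1 + aTs)/(1 + Ts), a > 1, whose added phase peaks at frequency 1/(T*sqrt(a)) with value arcsin((a-1)/(a+1)). The parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

# Generic first-order phase-lead compensator C(s) = (1 + a*T*s)/(1 + T*s).
# The extra phase it injects near crossover is what raises the phase margin.
a, T = 10.0, 0.1                              # illustrative values, a > 1
w = np.logspace(-2, 3, 2000)                  # frequency grid (rad/s)
C = (1 + 1j * w * a * T) / (1 + 1j * w * T)
phase = np.angle(C)                           # added phase at each frequency

# Peak added phase occurs at w_m = 1/(T*sqrt(a)) and equals
# arcsin((a - 1)/(a + 1)); larger a buys more phase margin.
w_m = 1.0 / (T * np.sqrt(a))
phi_max = np.arcsin((a - 1) / (a + 1))
```

For a = 10 the maximum added phase is about 55 degrees; the trade-off is high-frequency gain amplification by a factor of a, which is why a is kept moderate in practice.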
Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-01
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first-principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original function with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
Robustness of power-law behavior in cascading line failure models
F. Sloothaak; S.C. Borst (Sem); A.P. Zwart (Bert)
2017-01-01
Inspired by reliability issues in electric transmission networks, we use a probabilistic approach to study the occurrence of large failures in a stylized cascading line failure model. Such models capture the phenomenon where an initial line failure potentially triggers massive knock-on effects.
ARRA: Reconfiguring Power Systems to Minimize Cascading Failures - Models and Algorithms
Energy Technology Data Exchange (ETDEWEB)
Dobson, Ian [Iowa State University; Hiskens, Ian [Unversity of Michigan; Linderoth, Jeffrey [University of Wisconsin-Madison; Wright, Stephen [University of Wisconsin-Madison
2013-12-16
Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: their simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and the cascading stall of induction machines.
SPLINE, Spline Interpolation Function
International Nuclear Information System (INIS)
Allouard, Y.
1977-01-01
1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to order (2Q-1). The program consists of the following two subprograms: ASPLERQ, a transport-of-relations method for the spline interpolation functions, and SPLQ, spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10.
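The general idea, a smooth interpolant that passes exactly through the given points with continuous derivatives across the knots, can be sketched with SciPy. This is the cubic case for illustration, not the SPLINE/ASPLERQ code itself, and the data points are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A generic cubic spline: reproduces every data point and has continuous
# first and second derivatives across the interior knots.
x = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
y = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
cs = CubicSpline(x, y)

vals = cs(x)                      # should reproduce y exactly
d_left = cs(x[1:-1] - 1e-7, 1)    # first derivative just left of interior knots
d_right = cs(x[1:-1] + 1e-7, 1)   # ... and just right of them
```

Evaluating the derivative on either side of each knot shows the smoothness the abstract refers to: the pieces join with matching slopes (and curvatures), unlike a piecewise-linear interpolant.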
DEFF Research Database (Denmark)
Christensen, Esben Toke; Lund, Erik; Lindgaard, Esben
2018-01-01
This paper concerns the experimental validation of two surrogate models through a benchmark study involving two different variable shape mould prototype systems. The surrogate models in question are different methods based on kriging and proper orthogonal decomposition (POD), which were developed in previous work. Measurement data used in the benchmark study are obtained using digital image correlation (DIC). For determining the variable shape mould configurations used for the training and test sets used in the study, sampling is carried out using a novel constrained nested orthogonal maximin Latin hypercube approach. This sampling method allows for generating a space-filling and high-quality sample plan that respects mechanical constraints of the variable shape mould systems. Through the benchmark study, it is found that mechanical freeplay in the modeled system is severely detrimental...
F. Pan; M. Stieglitz; R.B. McKane
2012-01-01
Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Except for slope, none of the other topographic characteristics can be calculated until the flow...
D. T. Price; D. W. McKenney; L. A. Joyce; R. M. Siltanen; P. Papadopol; K. Lawrence
2011-01-01
Projections of future climate were selected for four well-established general circulation models (GCMs) forced by each of three greenhouse gas (GHG) emissions scenarios recommended by the Intergovernmental Panel on Climate Change (IPCC), namely scenarios A2, A1B, and B1 of the IPCC Special Report on Emissions Scenarios. Monthly data for the period 1961-2100 were...
A developmental cascade perspective of paediatric obesity: a conceptual model and scoping review.
Smith, Justin D; Egan, Kaitlyn N; Montaño, Zorash; Dawson-McClure, Spring; Jake-Schoffman, Danielle E; Larson, Madeline; St George, Sara M
2018-04-05
Considering the immense challenge of preventing obesity, the time has come to reconceptualise the way we study obesity development in childhood. The developmental cascade model offers a longitudinal framework to elucidate the way cumulative consequences and spreading effects of risk and protective factors, across and within biopsychosocial spheres and phases of development, can propel individuals towards obesity. In this article, we use a theory-driven model-building approach and a scoping review that included 310 published studies to propose a developmental cascade model of paediatric obesity. The proposed model provides a basis for testing hypothesised cascades with multiple intervening variables and complex longitudinal processes. Moreover, the model informs future research by resolving seemingly contradictory findings on pathways to obesity previously thought to be distinct (e.g., low self-esteem, consuming sugary foods, and poor sleep each cause obesity) that are actually processes working together over time (low self-esteem causes consumption of sugary foods, which disrupts sleep quality and contributes to obesity). The findings of such inquiries can aid in identifying the timing and specific targets of preventive interventions across and within developmental phases. The implications of such a cascade model of paediatric obesity for health psychology and developmental and prevention sciences are discussed.
Cleator, Sean; Harrison, Sandy P.; Roulstone, Ian; Nichols, Nancy K.; Prentice, Iain Colin
2017-04-01
Site-based pollen records have been used to provide quantitative reconstructions of Last Glacial Maximum (LGM) climates, but there are too few such records to provide continuous climate fields for the evaluation of climate model simulations. Furthermore, many of the reconstructions were made using modern-analogue techniques, which do not account for the direct impact of CO2 on water-use efficiency and therefore reconstruct considerably drier conditions under low CO2 at the LGM than indicated by other sources of information. We have shown that it is possible to correct analogue-based moisture reconstructions for this effect by inverting a simple light-use efficiency model of productivity, based on the principle that the rate of water loss per unit carbon gain of a plant is the same under conditions of the true moisture, palaeotemperature and palaeo CO2 concentration as under reconstructed moisture, modern CO2 concentration and modern temperature (Prentice et al., 2016). In this study, we use data from the Bartlein et al. (2011) dataset, which provides reconstructions of one or more of six climate variables (mean annual temperature, mean temperature of the warmest and coldest months, the length of the growing season, mean annual precipitation, and the ratio of actual to potential evapotranspiration) at individual LGM sites. We use the SPLASH water-balance model to derive a moisture index (MI) at each site from mean annual precipitation and monthly values of sunshine fraction and average temperature, and correct this MI using the Prentice et al. (2016) inversion approach. We then use a three-dimensional variational (3D-Var) data assimilation scheme with the SPLASH model and the Prentice et al. (2016) inversion approach to derive reconstructions of all six climate variables at each site, using the Bartlein et al. (2011) data set as a target. We use two alternative background climate states (or priors): modern climate derived from the CRU CL v2.0 data set (New et al., 2002) ...
CSIR Research Space (South Africa)
Van den Bergh, F
2006-01-01
... of the sequence no clouds were present, i.e. the cycle was complete without missing data. For each subsequent cycle this reference RKHS model was kept fixed and then just scaled and translated vertically to obtain the best least-squares overlay. ... The interpolation problem is novel. Results obtained by means of computer experiments are presented. Introduction: Remote sensing data obtained from earth observing satellites is frequently affected by cloud contamination, with about 50% of the globe ...
daSilva, Arlindo
2004-01-01
The first set of interoperability experiments illustrates the role ESMF can play in integrating the national Earth science resources. Using existing data assimilation technology from NCEP and the National Weather Service, the Community Atmosphere Model (CAM) was able to ingest conventional and remotely sensed observations, a capability that could open the door to using CAM for weather as well as climate prediction. CAM, which includes land surface capabilities, was developed by NCAR, with key components from GSFC. In this talk we will describe the steps necessary for achieving the coupling of these two systems.
CASCADER: An M-chain gas-phase radionuclide transport and fate model
International Nuclear Information System (INIS)
Cawlfield, D.E.; Emer, D.F.; Lindstrom, F.T.; Shott, G.J.
1993-09-01
Chemicals and radionuclides move in the gas phase, the liquid phase, or both phases in soils. They may be acted upon by biological or abiotic processes through advection and/or dispersion. Additionally, during the transport of parent and daughter radionuclides in soil, radioactive decay may occur. This version of CASCADER, called CASCADR9, starts with the concepts presented in volumes one and three of this series. For a proper understanding of how the model works, the reader should read volume one first. Also presented in this volume are a set of realistic scenarios for buried sources of radon gas, and the input and output file structure for CASCADER9.
Zayane, Chadia
2014-06-01
In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form allowing the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.
Directory of Open Access Journals (Sweden)
Jie Yang
2015-01-01
It is difficult to effectively identify and eliminate multiple-correlation influence among independent factors with least-squares regression. Focusing on this insufficiency, the sediment deposition risk of cascade reservoirs and a fitting model of sediment flux into the reservoir are studied. The partial least-squares regression (PLSR) method is adopted for the modeling analysis; the model fitting is organically combined with non-model-style data content analysis, so as to realize regression modeling, data structure simplification, and multiple-correlation analysis among factors; meanwhile, the accuracy of the model is ensured through a cross-validity check. The modeling analysis of sediment flux into the cascade reservoirs of the Long-Liu section upstream of the Yellow River indicates that partial least-squares regression can effectively overcome the multiple-correlation influence among factors, and the isolated factor variables have better ability to explain the physical cause of the measured results.
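The way PLSR copes with collinear predictors, which defeats ordinary least squares, can be sketched with a single NIPALS component. The synthetic data below (two nearly collinear predictors) stand in for the abstract's hydrological factors; this is a one-component illustration, not the paper's full model.

```python
import numpy as np

# One-component PLS regression (NIPALS step) on nearly collinear data.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 1e-3 * rng.normal(size=200)   # almost a copy of x1 (collinear)
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 2.0 * x2                 # response driven by both factors

Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)                  # weight vector: direction of max covariance
t = Xc @ w                              # score (latent component)
b = (t @ yc) / (t @ t)                  # regress y on the single score
r2 = 1 - np.sum((yc - b * t) ** 2) / np.sum(yc**2)
```

Where OLS coefficient estimates blow up under near-perfect collinearity, the single latent score t absorbs the shared variation and yields a stable, near-perfect fit here.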
International Nuclear Information System (INIS)
Lobov, G.A.; Stepanov, N.V.; Sibirtsev, A.A.; Trebukhovskij, Yu.V.
1983-01-01
A new version of the program for statistical simulation of hadron-nucleus and light nucleus-nucleus interactions is presented. The cascade part of the program is described. Model predictions are compared with proton-nucleus interaction experiments, and satisfactory agreement between calculations and experiment is obtained.
Testing an Idealized Dynamic Cascade Model of the Development of Serious Violence in Adolescence
Dodge, Kenneth A.; Greenberg, Mark T.; Malone, Patrick S.
2008-01-01
A dynamic cascade model of development of serious adolescent violence was proposed and tested through prospective inquiry with 754 children (50% male; 43% African American) from 27 schools at 4 geographic sites followed annually from kindergarten through Grade 11 (ages 5-18). Self, parent, teacher, peer, observer, and administrative reports…
Using the Cascade Model to Improve Antenatal Screening for the Hemoglobin Disorders
Gould, Dinah; Papadopoulos, Irena; Kelly, Daniel
2012-01-01
Introduction: The inherited hemoglobin disorders constitute a major public health problem. Facilitators (experienced hemoglobin counselors) were trained to deliver knowledge and skills to "frontline" practitioners to enable them to support parents during antenatal screening via a cascade (train-the-trainer) model. Objectives of…
Monotone piecewise bicubic interpolation
International Nuclear Information System (INIS)
Carlson, R.E.; Fritsch, F.N.
1985-01-01
In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C^1 piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
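The univariate Fritsch-Carlson algorithm referenced in the first sentence is available in SciPy as `PchipInterpolator`. The contrast it addresses is easy to demonstrate: on monotone data with a sharp rise, an unconstrained cubic spline oscillates, while the monotone interpolant does not. The data values below are invented.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone data with an abrupt step between x = 2 and x = 3.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xs = np.linspace(0, 4, 400)
pchip = PchipInterpolator(x, y)   # monotone C^1 interpolant (Fritsch-Carlson)
spline = CubicSpline(x, y)        # ordinary C^2 cubic spline, unconstrained

mono_ok = np.all(np.diff(pchip(xs)) >= 0)    # PCHIP never decreases here
spline_wiggles = np.any(np.diff(spline(xs)) < 0)  # the spline overshoots
```

The bicubic extension in the abstract plays the same role in two dimensions: it constrains the derivatives and twists at the mesh points so the surface inherits the monotonicity of the data.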
Modeling cascading failures with the crisis of trust in social networks
Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo
2015-10-01
In social networks, some friends often post or disseminate malicious information, such as advertising messages, informal overseas purchasing messages, illegal messages, or rumors. Too much malicious information may cause a feeling of intense annoyance. When the feeling exceeds a certain threshold, it leads social network users to distrust these friends, which we call the crisis of trust. The crisis of trust in social networks has already become a universal concern and an urgent unsolved problem. As a result of the crisis of trust, users will cut off their relationships with some of their untrustworthy friends. Once a few of these relationships are made unavailable, it is likely that other friends will decline in trust, and a large portion of the social network will be influenced. The phenomenon in which the unavailability of a few relationships triggers the failure of successive relationships is known as cascading failure dynamics. To the best of our knowledge, no one has formally proposed cascading failure dynamics with the crisis of trust in social networks. In this paper, we address this potential issue, quantify the trust between two users based on user similarity, and model the minimum tolerance with a nonlinear equation. Furthermore, we construct the processes of cascading failure dynamics by considering the unique features of social networks. Based on real social network datasets (Sina Weibo, Facebook and Twitter), we adopt two attack strategies (the highest trust attack (HT) and the lowest trust attack (LT)) to evaluate the proposed dynamics and to further analyze the changes of the topology, connectivity, cascading time and cascade effect under the above attacks. We numerically find that the sparse and inhomogeneous network structure in our cascading model can better improve the robustness of social networks than the dense and homogeneous structure. However, the network structure that seems like ripples is more vulnerable than the other two network structures.
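The core mechanism, where cutting a few ties pushes neighbours past their tolerance and the loss spreads, can be sketched with a generic threshold cascade. This is a simplified stand-in for the paper's trust dynamics: the graph, the seed, and the tolerance fraction `alpha` are all invented for illustration.

```python
# Generic threshold model of cascading tie failure: each node tolerates
# keeping at least a fraction alpha of its initial ties; once it drops
# below that, all of its remaining ties fail too and the loss spreads.
def cascade(adj, seeds, alpha):
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    degree0 = {u: len(vs) for u, vs in adj.items()}
    failed = set(seeds)
    frontier = list(seeds)
    while frontier:
        u = frontier.pop()
        for v in list(adj[u]):
            adj[u].discard(v)
            adj[v].discard(u)                      # the tie is cut both ways
            if v not in failed and len(adj[v]) < alpha * degree0[v]:
                failed.add(v)                      # tolerance exceeded
                frontier.append(v)
    return failed

# A simple path graph 0-1-2-3-4; node 0's ties fail first.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
big = cascade(path, seeds=[0], alpha=0.6)   # low tolerance: full collapse
small = cascade(path, seeds=[0], alpha=0.4) # higher tolerance: cascade stops
```

Running the same seed with two tolerance levels shows the threshold effect the abstract describes: the identical initial failure either stays local or takes down the entire network.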
Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor
2018-03-01
As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models.Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model instances.In MMD (v1
Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.
2018-03-01
The three most common techniques for interpolating soil properties at the field scale were examined: ordinary kriging (OK), regression kriging with a multiple linear regression drift model (RK + MLR), and regression kriging with a principal component regression drift model (RK + PCR). The results of the study were compiled into an algorithm for choosing the most appropriate soil-mapping technique. Relief attributes were used as the auxiliary variables. When the spatial dependence of a target variable was strong, the OK method gave more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes irrational. In that case, multiple linear regression should be used instead.
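The ordinary kriging baseline in this comparison can be sketched for a single target point. The exponential variogram and its sill/range parameters below are assumptions for illustration, not fitted to the paper's soil data; in practice the variogram is estimated from the samples first.

```python
import numpy as np

# Assumed exponential variogram gamma(h) = sill * (1 - exp(-h / range_)).
def gamma(h, sill=1.0, range_=50.0):
    return sill * (1.0 - np.exp(-h / range_))

# Ordinary kriging: solve the variogram system augmented with the
# unbiasedness constraint (weights sum to one).
def ordinary_kriging(coords, values, target):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(gamma(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w = sol[:n]                       # kriging weights
    return w @ values, w

# hypothetical soil-sample locations (m) and property values
coords = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [60.0, 60.0]])
values = np.array([3.1, 2.4, 2.9, 2.6])
est, w = ordinary_kriging(coords, values, np.array([30.0, 30.0]))
```

Regression kriging differs only in that the drift (a linear or PCR model on relief attributes) is fitted first and this system is then applied to the regression residuals.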
Chen, Xin; Xing, Pei; Luo, Yong; Zhao, Zongci; Nie, Suping; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2015-04-01
A new dataset of annual mean surface temperature over North America for the recent 500 years has been constructed by applying an optimal interpolation (OI) algorithm. In total, 149 series were screened from the International Tree Ring Data Bank (ITRDB), including 69 maximum latewood density (MXD) and 80 tree-ring width (TRW) chronologies. The simulated annual mean surface temperature derives from the past1000 experiment results of the Community Climate System Model version 4 (CCSM4). Unlike existing research that applies data assimilation approaches to general circulation model (GCM) simulations, the errors of both the climate model simulation and the tree-ring reconstruction were considered, with a view to combining the two parts in an optimal way. Variance matching (VM) was employed to calibrate the tree-ring chronologies on CRUTEM4v, and the corresponding errors were estimated through a leave-one-out process. The background error covariance matrix was estimated statistically from samples of simulation results in a running 30-year window. The background error covariance matrix was calculated locally within the scanning range (2000 km in this research); thus, the merging proceeds with a time-varying local gain matrix. The merging method (MM) was tested in two kinds of experiments, and the results indicated that the standard deviation of errors can be reduced to about 0.3 degrees centigrade below the tree-ring reconstructions and 0.5 degrees centigrade below the model simulation. Obvious decadal variability can be identified in the MM results, including the evident cooling (0.10 degrees per decade) in the 1940-60s, where the model simulation exhibits a weak increasing trend (0.05 degrees per decade) instead. The MM results revealed a compromised spatial pattern of the linear trend of surface temperature during a typical period (1601-1800 AD) in the Little Ice Age, which basically accorded with the phase transitions of the Pacific decadal oscillation (PDO) and
Kaurkin, M. N.; Ibrayev, R. A.; Belyaev, K. P.
2018-01-01
A parallel realization of the Ensemble Optimal Interpolation (EnOI) data assimilation (DA) method in conjunction with the eddy-resolving global circulation model is implemented. The results of DA experiments in the North Atlantic with the assimilation of the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) data from the Jason-1 satellite are analyzed. The results of simulation are compared with the independent temperature and salinity data from the ARGO drifters.
Linear interpolation of histograms
Read, A L
1999-01-01
A prescription is defined for the interpolation of probability distributions that are assumed to have a linear dependence on a parameter of the distributions. The distributions may be in the form of continuous functions or histograms. The prescription is based on the weighted mean of the inverses of the cumulative distributions between which the interpolation is made. The result is particularly elegant for a certain class of distributions, including the normal and exponential distributions, and is useful for the interpolation of Monte Carlo simulation results which are time-consuming to obtain.
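The prescription lends itself to a compact numerical illustration. The sketch below is not the paper's code; it is a quantile-based approximation with illustrative function names, interpolating between two sampled distributions by taking the weighted mean of their inverse cumulative distributions:

```python
import numpy as np

def interpolate_quantiles(samples0, samples1, alpha, probs=None):
    """Interpolate between two distributions by the weighted mean of
    their inverse cumulative distributions (quantile functions)."""
    if probs is None:
        probs = np.linspace(0.01, 0.99, 99)
    q0 = np.quantile(samples0, probs)   # inverse CDF of the first distribution
    q1 = np.quantile(samples1, probs)   # inverse CDF of the second distribution
    return (1.0 - alpha) * q0 + alpha * q1

# Halfway between N(0,1) and N(4,1), the interpolated quantiles are those
# of (approximately) N(2,1), as expected for this class of distributions.
rng = np.random.default_rng(0)
mid = interpolate_quantiles(rng.normal(0.0, 1.0, 100_000),
                            rng.normal(4.0, 1.0, 100_000), 0.5)
print(round(float(np.median(mid)), 1))  # 2.0
```

Interpolating the cumulative distributions "horizontally" in this way preserves the shape of the distribution, whereas a bin-by-bin ("vertical") mean of the two histograms would produce a spurious bimodal result.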
A Cascading Model Of An Active Magnetic Regenerator System
DEFF Research Database (Denmark)
Tahavori, M.; Filonenko, K.; Veje, C. T.
2016-01-01
In recent years, a significant number of studies have been devoted to the modeling and analysis of active magnetic regenerators (AMRs). Depending on the AMR geometry and the magnetocaloric material being modeled, the AMR may not be able to provide the required performance demanded by practical applicatio...
Directory of Open Access Journals (Sweden)
Bikić Siniša M.
2016-01-01
Full Text Available This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade, and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades, and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on the laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated using one set of data, while its verification was conducted using a second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, no. TR31058]
An Improved Rotary Interpolation Based on FPGA
Directory of Open Access Journals (Sweden)
Mingyu Gao
2014-08-01
Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic terms are conducive to the interpolation operation; secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, showing that it is highly suited for real-time applications.
Numerical study of corner separation in a linear compressor cascade using various turbulence models
Directory of Open Access Journals (Sweden)
Liu Yangwei
2016-06-01
Full Text Available Three-dimensional corner separation is a common phenomenon that significantly affects compressor performance. The turbulence model is still a weakness of the RANS method for predicting corner separation flow accurately. In the present study, corner separation in a linear highly loaded prescribed velocity distribution (PVD) compressor cascade has been investigated numerically using seven frequently used turbulence models. The seven turbulence models are the Spalart–Allmaras model, the standard k–ɛ model, the realizable k–ɛ model, the standard k–ω model, the shear stress transport k–ω model, the v2–f model and the Reynolds stress model. The results of these turbulence models have been compared and analyzed in detail against available experimental data. It is found that the standard k–ɛ, realizable k–ɛ, v2–f and Reynolds stress models can provide reasonable results for predicting three-dimensional corner separation in the compressor cascade. The Spalart–Allmaras, standard k–ω and shear stress transport k–ω models overestimate the corner separation region at an incidence of 0°. The turbulence characteristics are discussed, and turbulence anisotropy is observed to be stronger in the corner separation region.
E-model modification for case of cascade codecs arrangement
Vozňák, Miroslav
2011-01-01
Speech quality assessment is one of the key matters of voice services, and every provider should ensure adequate connection quality to end users. Speech quality has to be measured by a trusted method, and the results have to correlate with the intelligibility and clarity of the speech as perceived by the listener. This can be achieved by subjective methods, but in real life we must rely on objective measurements based on reliable models. One of them is the E-model, which we can consider as...
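The E-model referred to above summarizes the transmission impairments into a single rating factor R, which is then mapped to an estimated mean opinion score (MOS). A minimal sketch of that standard mapping (the ITU-T G.107 conversion formula, independent of the cascade-codec extension proposed in the paper):

```python
def r_to_mos(r: float) -> float:
    """Map an E-model rating factor R to an estimated MOS,
    per the standard ITU-T G.107 conversion formula."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# The default E-model rating of R ~ 93.2 corresponds to a MOS of about 4.4.
print(round(r_to_mos(93.2), 1))  # 4.4
```

A cascade of codecs lowers R (each transcoding stage adds equipment impairment), and the mapping above turns the accumulated R degradation back into a perceptual score.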
Cascade annealing: an overview
International Nuclear Information System (INIS)
Doran, D.G.; Schiffgens, J.O.
1976-04-01
Concepts and an overview of radiation displacement damage modeling and annealing kinetics are presented. Short-term annealing methodology is described and results of annealing simulations performed on damage cascades generated using the Marlowe and Cascade programs are included. Observations concerning the inconsistencies and inadequacies of current methods are presented along with simulation of high energy cascades and simulation of longer-term annealing
Extension Of Lagrange Interpolation
Directory of Open Access Journals (Sweden)
Mousa Makey Krady
2015-01-01
Full Text Available Abstract This paper presents a generalization of Lagrange interpolation polynomials to higher dimensions by using Cramer's formula. The aim is to construct polynomials in space whose error tends to zero.
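For reference, the classical one-dimensional Lagrange form that the paper generalizes can be sketched as follows (illustrative code, not from the paper):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through the
    points (xs[i], ys[i]) at x, using the basis-polynomial form."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0  # i-th Lagrange basis polynomial evaluated at x
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Three points on y = x**2 reproduce the parabola exactly.
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_interpolate(xs, ys, 1.5))  # 2.25
```

Each basis polynomial equals 1 at its own node and 0 at every other node, which is the property the higher-dimensional construction must preserve.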
Calculating SPRT Interpolation Error
Filipe, E.; Gentil, S.; Lóio, I.; Bosma, R.; Peruzzi, A.
2018-02-01
Interpolation error is a major source of uncertainty in the calibration of standard platinum resistance thermometers (SPRTs) in the subranges of the International Temperature Scale of 1990 (ITS-90). This interpolation error arises because the interpolation equations prescribed by the ITS-90 cannot perfectly accommodate all the SPRTs' natural variations in resistance-temperature behavior, which generates different forms of non-uniqueness. This paper investigates the type 3 non-uniqueness for fourteen SPRTs of five different manufacturers calibrated over the water-zinc subrange and demonstrates the use of the method of divided differences for calculating the interpolation error. The calculated maximum standard deviation of 0.25 mK (near 100°C) is similar to that observed in previous studies.
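The method of divided differences mentioned above can be sketched in a few lines. This is a generic Newton divided-difference implementation for illustration, not the paper's SPRT-specific procedure:

```python
def divided_differences(xs, ys):
    """Build the Newton divided-difference coefficients
    f[x0], f[x0,x1], ... for the points (xs[i], ys[i])."""
    n = len(xs)
    coef = list(ys)
    for order in range(1, n):
        for i in range(n - 1, order - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form interpolation polynomial at x (Horner scheme)."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]   # samples of y = x**2 + 1
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 1.5))  # 3.25
```

Higher-order divided differences built from calibration-point residuals quantify how much structure the prescribed ITS-90 equations fail to absorb, which is what makes them a natural estimator of the interpolation error.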
Modeling elephant-mediated cascading effects of water point closure.
Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F
2015-03-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in the availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were shown to be less affected by the closure of WPs than those of most of the other herbivore species. Our study contributes to ecologically
Interferometric interpolation of sparse marine data
Hanafy, Sherif M.
2013-10-11
We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.
Molecular dynamics and binary collision modeling of the primary damage state of collision cascades
DEFF Research Database (Denmark)
Heinisch, H.L.; Singh, B.N.
1992-01-01
Quantitative information on defect production in cascades in copper obtained from recent molecular dynamics simulations is compared to defect production information determined earlier with a model based on the binary collision approximation (BCA). The total numbers of residual defects, the fracti...... that is practical for simulating much higher energies and longer times than MD alone can achieve. The extraction of collisional phase information from MD simulations and the correspondence of MD and BCA versions of the collisional phase is demonstrated at low energy.
Representation of the radiative strength functions in the practical model of cascade gamma decay
International Nuclear Information System (INIS)
Vu, D.C.; Sukhovoj, A.M.; Mitsyna, L.V.; Zeinalov, Sh.; Jovancevic, N.; Knezevic, D.; Krmar, M.; Dragic, A.
2016-01-01
The practical model of the cascade gamma decay of neutron resonances developed in Dubna allows one, from the fitted intensities of two-step cascades, to obtain parameters of both the level density and the partial widths for emission of nuclear reaction products. In the variant of the model presented here, the use of phenomenological representations is minimized. Analysis of new results confirms the previous finding that the dynamics of the interaction between Fermi- and Bose-nuclear states depends on the shape of the nucleus. It also follows from the ratios of the densities of vibrational and quasi-particle levels that this interaction exists at least up to the neutron binding energy and probably differs for nuclei with different parities of nucleons.
Molecular dynamics and binary collisions modeling of the primary damage state of collision cascades
International Nuclear Information System (INIS)
Heinisch, H.L.; Singh, B.N.
1992-01-01
The objective of this work is to determine the spectral dependence of defect production and microstructure evolution for the development of fission-fusion correlations. Quantitative information on defect production in cascades in copper obtained from recent molecular dynamics (MD) simulations is compared to defect production information determined earlier with a model based on the binary collision approximation (BCA). The total numbers of residual defects, the fractions of them that are mobile, and the sizes of immobile clusters compare favorably, especially when the termination conditions of the two simulations are taken into account. A strategy is laid out for integrating the details of the cascade quenching phase determined by MD into a BCA-based model that is practical for simulating much higher energies and longer times than MD alone can achieve. The extraction of collisional phase information from MD simulations and the correspondence of MD and BCA versions of the collisional phase is demonstrated at low energy
Hu, Shou-Cun; Ji, Jiang-Hui
2017-12-01
In asteroid rendezvous missions, the dynamical environment near an asteroid’s surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lower the efficiency of calculating their gravitational field by adopting the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is combined to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method can be applicable to other irregularly-shaped asteroids and can greatly improve the evaluation efficiency.
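The core idea of replacing an expensive field evaluation by a per-cell Chebyshev interpolant can be illustrated in one dimension. A minimal sketch (the target function is a hypothetical 1-D stand-in, not an actual gravity model):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def accel(x):
    """Hypothetical 1-D stand-in for one gravity-field component."""
    return 1.0 / (1.0 + 25.0 * x**2)   # Runge's function, a classic test case

deg = 40
# Chebyshev nodes of the first kind on [-1, 1]
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
series = C.chebfit(nodes, accel(nodes), deg)   # interpolating Chebyshev series

x = np.linspace(-1.0, 1.0, 1001)
max_err = float(np.max(np.abs(C.chebval(x, series) - accel(x))))
print(max_err < 1e-2)  # True: near-minimax accuracy from Chebyshev node placement
```

Once the series coefficients for a cell are stored, each later evaluation is a cheap polynomial sum instead of a full (e.g. polyhedral) field computation, which is where the reported speed-up comes from.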
Directory of Open Access Journals (Sweden)
L. Yao
2011-03-01
Full Text Available Relations between mineralization and certain geological processes are established mostly by geologist's knowledge of field observations. However, these relations are descriptive and a quantitative model of how certain geological processes strengthen or hinder mineralization is not clear, that is to say, the mechanism of the interactions between mineralization and the geological framework has not been thoroughly studied. The dynamics behind these interactions are key in the understanding of fractal or multifractal formations caused by mineralization, among which singularities arise due to anomalous concentration of metals in narrow space. From a statistical point of view, we think that cascade dynamics play an important role in mineralization and studying them can reveal the nature of the various interactions throughout the process. We have constructed a multiplicative cascade model to simulate these dynamics. The probabilities of mineral deposit occurrences are used to represent direct results of mineralization. Multifractal simulation of probabilities of mineral potential based on our model is exemplified by a case study dealing with hydrothermal gold deposits in southern Nova Scotia, Canada. The extent of the impacts of certain geological processes on gold mineralization is related to the scale of the cascade process, especially to the maximum cascade division number n_{max}. Our research helps to understand how the singularity occurs during mineralization, which remains unanswered up to now, and the simulation may provide a more accurate distribution of mineral deposit occurrences that can be used to improve the results of the weights of evidence model in mapping mineral potential.
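A minimal sketch of the multiplicative cascade dynamics described above, in which a unit mass is repeatedly split with randomly ordered weights over n_max division steps (illustrative code, not the authors' model):

```python
import numpy as np

def multiplicative_cascade(n_max, weights=(0.7, 0.3), rng=None):
    """1-D binomial multiplicative cascade: at each of n_max division
    steps every cell splits in two, and its mass is redistributed by
    the randomly ordered weights, producing a multifractal measure."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    measure = np.array([1.0])
    for _ in range(n_max):
        # one independent random ordering of the weights per cell
        order = np.array([rng.permutation(len(w)) for _ in range(measure.size)])
        measure = (measure[:, None] * w[order]).ravel()
    return measure

m = multiplicative_cascade(10, rng=np.random.default_rng(42))
print(m.size)                      # 1024 cells after 2**10 divisions
print(round(float(m.sum()), 6))    # 1.0: total mass is conserved
```

Because the weights multiply down the hierarchy, a few cells end up with mass far above the mean, which is the singular, anomalous concentration behavior the paper associates with mineralization.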
Representation of radiative strength functions within a practical model of cascade gamma decay
Energy Technology Data Exchange (ETDEWEB)
Vu, D. C., E-mail: vuconghnue@gmail.com; Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Zeinalov, Sh., E-mail: zeinal@nf.jinr.ru [Joint Institute for Nuclear Research (Russian Federation); Jovancevic, N., E-mail: nikola.jovancevic@df.uns.ac.rs; Knezevic, D., E-mail: david.knezevic@df.uns.ac.rs; Krmar, M., E-mail: krmar@df.uns.ac.rs [University of Novi Sad, Department of Physics, Faculty of Sciences (Serbia); Dragic, A., E-mail: dragic@ipb.ac.rs [Institute of Physics Belgrade (Serbia)
2017-03-15
A practical model developed at the Joint Institute for Nuclear Research (JINR, Dubna) in order to describe the cascade gamma decay of neutron resonances makes it possible to determine simultaneously, from an approximation of the intensities of two-step cascades, parameters of nuclear level densities and partial widths with respect to the emission of nuclear-reaction products. The number of the phenomenological ideas used is minimized in the model version considered in the present study. An analysis of new results confirms what was obtained earlier for the dependence of dynamics of the interaction of fermion and boson nuclear states on the nuclear shape. From the ratio of the level densities for excitations of the vibrational and quasiparticle types, it also follows that this interaction manifests itself in the region around the neutron binding energy and is probably different in nuclei that have different parities of nucleons.
Representation of radiative strength functions within a practical model of cascade gamma decay
Vu, D. C.; Sukhovoj, A. M.; Mitsyna, L. V.; Zeinalov, Sh.; Jovancevic, N.; Knezevic, D.; Krmar, M.; Dragic, A.
2017-03-01
A practical model developed at the Joint Institute for Nuclear Research (JINR, Dubna) in order to describe the cascade gamma decay of neutron resonances makes it possible to determine simultaneously, from an approximation of the intensities of two-step cascades, parameters of nuclear level densities and partial widths with respect to the emission of nuclear-reaction products. The number of the phenomenological ideas used is minimized in the model version considered in the present study. An analysis of new results confirms what was obtained earlier for the dependence of dynamics of the interaction of fermion and boson nuclear states on the nuclear shape. From the ratio of the level densities for excitations of the vibrational and quasiparticle types, it also follows that this interaction manifests itself in the region around the neutron binding energy and is probably different in nuclei that have different parities of nucleons.
Calibrating a multi-model approach to defect production in high energy collision cascades
International Nuclear Information System (INIS)
Heinisch, H.L.; Singh, B.N.; Diaz de la Rubia, T.
1994-01-01
A multi-model approach to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, thus providing information similar to that obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations. ((orig.))
Billoire, Alain
2006-04-01
I use an interpolation formula, introduced recently by Guerra and Toninelli in order to prove the existence of the free energy of the Sherrington-Kirkpatrick spin glass model in the infinite volume limit, to investigate numerically the finite-size corrections to the free energy of this model. The results are compatible with a (1/12N)ln(N/N_0) behavior at T_c, as predicted by Parisi, Ritort, and Slanina, and a 1/N^(2/3) behavior below T_c.
CRT--Cascade Routing Tool to define and visualize flow paths for grid-based watershed models
Henson, Wesley R.; Medina, Rose L.; Mayers, C. Justin; Niswonger, Richard G.; Regan, R.S.
2013-01-01
The U.S. Geological Survey Cascade Routing Tool (CRT) is a computer application for watershed models that include the coupled Groundwater and Surface-water FLOW model, GSFLOW, and the Precipitation-Runoff Modeling System (PRMS). CRT generates output to define cascading surface and shallow subsurface flow paths for grid-based model domains. CRT requires a land-surface elevation for each hydrologic response unit (HRU) of the model grid; these elevations can be derived from a Digital Elevation Model raster data set of the area containing the model domain. Additionally, a list is required of the HRUs containing streams, swales, lakes, and other cascade termination features along with indices that uniquely define these features. Cascade flow paths are determined from the altitudes of each HRU. Cascade paths can cross any of the four faces of an HRU to a stream or to a lake within or adjacent to an HRU. Cascades can terminate at a stream, lake, or HRU that has been designated as a watershed outflow location.
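The elevation-driven cascade generation can be sketched as a simple downhill search over the four faces of each HRU. This is a toy illustration of the idea, not the CRT code (the grid, outlet set, and function names are made up):

```python
import numpy as np

def d4_cascades(elev, outlets):
    """For each cell (HRU) of an elevation grid, pick the lowest of its
    four face neighbors as the downslope cascade target; cells flagged
    as outlets (streams, lakes, watershed outflows) terminate a cascade.
    Returns a dict mapping each cell to its target, or None at a
    terminus. A minimal sketch of CRT-style cascade-path generation."""
    rows, cols = elev.shape
    target = {}
    for r in range(rows):
        for c in range(cols):
            if (r, c) in outlets:
                target[(r, c)] = None       # cascade terminates here
                continue
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            low = min(nbrs, key=lambda p: elev[p])
            # only cascade if the neighbor is actually downhill
            target[(r, c)] = low if elev[low] < elev[r, c] else None
    return target

elev = np.array([[9.0, 8.0, 7.0],
                 [8.5, 6.0, 5.0],
                 [7.0, 5.0, 3.0]])
paths = d4_cascades(elev, outlets={(2, 2)})
print(paths[(0, 0)], paths[(2, 2)])  # (0, 1) None
```

Following the `target` links from any cell traces a full cascade flow path down to a stream, lake, or outflow cell.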
Signal-to-noise performance analysis of streak tube imaging lidar systems. I. Cascaded model.
Yang, Hongru; Wu, Lei; Wang, Xiaopeng; Chen, Chao; Yu, Bing; Yang, Bin; Yuan, Liang; Wu, Lipeng; Xue, Zhanli; Li, Gaoping; Wu, Baoning
2012-12-20
Streak tube imaging lidar (STIL) is an active imaging system using a pulsed laser transmitter and a streak tube receiver to produce 3D range and intensity imagery. The STIL has recently attracted a great deal of interest and attention due to its advantages of wide azimuth field-of-view, high range and angle resolution, and high frame rate. This work investigates the signal-to-noise performance of STIL systems. A theoretical model for characterizing the signal-to-noise performance of the STIL system with an internal or external intensified streak tube receiver is presented, based on the linear cascaded systems theory of signal and noise propagation. The STIL system is decomposed into a series of cascaded imaging chains whose signal and noise transfer properties are described by the general (or the spatial-frequency dependent) noise factors (NFs). Expressions for the general NFs of the cascaded chains (or the main components) in the STIL system are derived. The work presented here is useful for the design and evaluation of STIL systems.
NONLINEAR SYSTEM MODELING USING SINGLE NEURON CASCADED NEURAL NETWORK FOR REAL-TIME APPLICATIONS
Directory of Open Access Journals (Sweden)
S. Himavathi
2012-04-01
Full Text Available Neural Networks (NNs) have proved their efficacy for nonlinear system modeling. NN-based controllers and estimators for nonlinear systems provide promising alternatives to their conventional counterparts. However, NN models have to meet stringent requirements on execution time for effective use in real-time applications. This requires the NN model to be structurally compact and computationally less complex. In this paper a parametric method of analysis is adopted to determine the most compact and fastest NN model among various neural network architectures. This work proves through analysis and examples that the Single Neuron Cascaded (SNC) architecture is distinct in providing compact and simpler models requiring lower execution time. The unique structural growth of the SNC architecture enables automation in design. The SNC network is shown to combine the advantages of both single- and multilayer neural network architectures. Selected architectures and their models are tested extensively on four benchmark nonlinear theoretical plants and a practical application. A performance comparison of the NN models is presented to demonstrate the superiority of the single neuron cascaded architecture for online real-time applications.
Zhang, Hong; Kong, Fengchong; Ren, Lei; Jin, Jian-Yue
2014-09-01
Cone beam computed tomography (CBCT) imaging is a key step in image guided radiation therapy (IGRT) to improve tumor targeting. The quality and imaging dose of CBCT are two important factors. However, X-ray scatter in the large cone beam field usually induces image artifacts and degrades the image quality of CBCT. A synchronized moving grid (SMOG) approach has recently been proposed to resolve this issue and shows great promise. However, the SMOG technique requires two projections at the same gantry angle to obtain full information, due to signal blockage by the grid. This study aims to develop an inter-projection interpolation (IPI) method to estimate the blocked image information. This approach requires only one projection at each gantry angle, thus reducing the scan time and patient dose. IPI is also potentially suitable for sparse-view CBCT reconstruction to reduce the imaging dose. Compared with other state-of-the-art spatial interpolation (inpainting) methods in terms of signal-to-noise ratio (SNR) on Catphan and head phantoms, IPI increases the SNR from 15.3 dB and 12.7 dB to 29.0 dB and 28.1 dB, respectively. For sparse-view CBCT reconstruction, IPI achieves an SNR from 28 dB down to 17 dB for undersampled projection sets with the gantry angle interval varying from 1 to 3 degrees for both phantoms.
Image Interpolation Scheme based on SVM and Improved PSO
Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.
2018-01-01
In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to the optimization of the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.
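The training pattern described above, with neighboring pixels as features and the center pixel as target, can be sketched in numpy-only form. Kernel ridge regression with an RBF kernel stands in here for the paper's SVR with PSO-tuned parameters; all names and the test image are hypothetical:

```python
import numpy as np

def rbf_kernel(A, B, gamma=50.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X_train, y_train, X_test, lam=1e-3):
    """Kernel ridge regression: a numpy-only stand-in for the SVR
    regressor; the training pattern is the same, with neighboring
    pixels as features and the center pixel as the target."""
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

# Training samples: 4-neighbor pixel values -> center value,
# drawn from a smooth synthetic ramp image.
img = np.add.outer(np.arange(16.0), np.arange(16.0)) / 30.0
X = np.array([[img[r-1, c], img[r+1, c], img[r, c-1], img[r, c+1]]
              for r in range(1, 15) for c in range(1, 15)])
y = np.array([img[r, c] for r in range(1, 15) for c in range(1, 15)])

pred = krr_fit_predict(X, y, X)
print(float(np.abs(pred - y).max()))  # small on this smooth image
```

In an actual interpolation pass, `X_test` would hold the neighborhoods of the unknown pixels of the upscaled image rather than the training neighborhoods shown here.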
International Nuclear Information System (INIS)
Portoghese, Celia Christiani Paschoa
2002-01-01
Several different mathematical models that make it possible to plan, design and follow the operation of uranium isotopic separation cascades using the gaseous ultracentrifugation process are presented, discussed and tested. Models to be used in the planning and conception phases rely on theoretical hypotheses, making it possible to calculate approximate values for the flow rate and isotopic composition of the cascade internal streams. Twelve theoretical models developed to perform this task are discussed and compared, and the theoretical models with the greatest applicability are identified. Models to be used for the complete dimensioning of a cascade before its construction, called semi-empirical models, use experimental results obtained in individual ultracentrifuge tests combined with theoretical equations, allowing accurate values to be calculated for the flow rate, pressure and isotopic composition of the cascade internal streams. Thirteen semi-empirical models developed to perform this task are presented; five of them are discussed at length and one of them is validated through comparison with experimental results. In order to follow the operation of a cascade, it is necessary to develop models that simulate its behavior in operational conditions other than the nominal ones defined in the project. Three semi-empirical models for this kind of simulation are presented and one of them is validated through comparison with experimental results. Finally, it is necessary to have tools that simulate the cascade behavior during transients. Two dynamic models developed to perform this task are presented and compared. The dynamic model capable of simulating results closest to the real behaviour of a cascade during three different kinds of transients is identified through comparison between simulated and experimental results. (author)
Information Theory Analysis of Cascading Process in a Synthetic Model of Fluid Turbulence
Directory of Open Access Journals (Sweden)
Massimo Materassi
2014-02-01
Full Text Available The use of transfer entropy has proven helpful in detecting the direction of dynamical driving in the interaction of two processes, X and Y. In this paper, we present a different normalization for the transfer entropy, which is capable of better detecting the information transfer direction. This new normalized transfer entropy is applied to the detection of the direction of energy flux transfer in a synthetic model of fluid turbulence, namely the Gledzer–Ohkitana–Yamada shell model. This is a well-known model able to reproduce fully developed turbulence in Fourier space, characterized by an energy cascade towards the small scales (large wavenumbers k), so that applying the information-theory analysis to its outcome tests the reliability of the analysis tool rather than exploring the model physics. As a result, the presence of a direct cascade along the scales in the shell model and the locality of the interactions in the space of wavenumbers come out as expected, indicating the validity of this data analysis tool. In this context, the use of a normalized version of transfer entropy, able to account for the difference in the intrinsic randomness of the interacting processes, appears to perform better, being able to discriminate the wrong conclusions to which the "traditional" transfer entropy would lead.
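A plug-in estimate of the (unnormalized) transfer entropy for discrete series, which underlies the analysis above, can be sketched as follows (illustrative code; the shell-model application itself is not reproduced):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of T(Y -> X) for discrete series:
    T = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1].tolist())
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_xy = c / pairs_xy[(x0, y0)]      # p(x1 | x0, y0)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]   # p(x1 | x0)
        te += (c / n) * np.log2(p_cond_xy / p_cond_x)
    return te

# Y drives X with a one-step lag, so information flows Y -> X, not X -> Y.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 5000)
x = np.roll(y, 1)                       # x copies y with lag 1
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True
```

The asymmetry of the two estimates is what identifies the driving direction; the normalization the paper proposes additionally corrects for the differing intrinsic randomness of the two processes.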
The cascade model of teachers’ continuing professional development in Kenya: A time for change?
Directory of Open Access Journals (Sweden)
Harry Kipkemoi Bett
2016-12-01
Full Text Available Kenya is one of the countries whose teachers the UNESCO (2015) report cited as lacking curriculum support in the classroom. As is the case in many African countries, a large portion of teachers in Kenya enter the teaching profession inadequately prepared, while those already in the field receive insufficient support in their professional lives. The cascade model has often been utilized in the country whenever the need for teachers’ continuing professional development (TCPD) has arisen, especially on a large scale. The preference for the model is due, among other things, to its cost-effectiveness and its ability to reach many teachers within a short period of time. Many researchers have, however, criticized this model for its glaring shortcomings. By contrast, TCPD programmes that are collaborative in nature and based on teachers’ contexts have been found to be more effective than those that are not. This paper briefly examines cases of the cascade model in Kenya and the challenges associated with it, and proposes the adoption of collaborative and institution-based models to mitigate these challenges. The education sectors of many nations in Africa, and in the developing world more broadly, will find the discussion here relevant.
A cascade model of information processing and encoding for retinal prosthesis
Directory of Open Access Journals (Sweden)
Zhi-jun Pei
2016-01-01
Full Text Available Retinal prosthesis offers a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Several retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and neuronal Poisson coding, a cascade model of the retina comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding is put forward, integrating both the anatomic connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to the stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and Poisson spike generation. The simulated results suggest that such a cascade model can recreate the visual information processing and encoding functionalities of the retina, which is helpful in developing an artificial retina for the retinally blind.
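The filtering-rectification-spiking pipeline described above can be sketched as follows. The difference-of-Gaussians filter, the gain, and the time step are illustrative assumptions rather than the paper's fitted parameters, and the radial-sampling step is omitted for brevity.

```python
import numpy as np

def retina_cascade(image, dt=0.001, duration=0.2, seed=0):
    """Sketch of a retinal cascade: (1) linear center-surround filtering,
    (2) static nonlinear rectification into a firing rate, (3) Poisson
    spike generation. Parameters are illustrative stand-ins."""
    rng = np.random.default_rng(seed)

    def gauss_kernel(sigma, size=9):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return k / k.sum()

    def convolve2d(img, k):
        ks = k.shape[0]
        pad = ks // 2
        p = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.sum(p[i:i + ks, j:j + ks] * k)
        return out

    # 1) linear filtering: difference-of-Gaussians center-surround
    drive = convolve2d(image, gauss_kernel(1.0)) - convolve2d(image, gauss_kernel(3.0))
    # 2) static nonlinear rectification -> firing rate in Hz
    rate = 100.0 * np.maximum(drive, 0.0)
    # 3) Poisson spike generation, one spike train per "ganglion cell"
    n_steps = int(duration / dt)
    spikes = rng.random((n_steps,) + rate.shape) < rate * dt
    return rate, spikes
```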
Simple monotonic interpolation scheme
International Nuclear Information System (INIS)
Greene, N.M.
1980-01-01
A procedure for representing tabular data, such as those contained in the ENDF/B files, that is simpler, more general, and potentially much more compact than the schemes currently used with ENDF/B is presented. The method has been successfully used for Bondarenko interpolation in a module of the AMPX system. 1 figure, 1 table
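As an illustration of the kind of scheme at issue, here is a monotonicity-preserving cubic Hermite interpolant using harmonic-mean (Fritsch-Butland style) slopes. This is a generic sketch of monotone interpolation of tabular data, not Greene's actual ENDF/B procedure.

```python
import numpy as np

def monotone_interp(x, y, xq):
    """Monotonicity-preserving cubic Hermite interpolation with
    harmonic-mean slopes (Fritsch-Butland style); a generic sketch of
    a monotone scheme for tabular data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    delta = np.diff(y) / h                    # secant slopes
    m = np.zeros_like(y)
    # interior slopes: harmonic mean of adjacent secants, 0 at extrema
    for i in range(1, len(x) - 1):
        if delta[i - 1] * delta[i] > 0:
            m[i] = 2.0 / (1.0 / delta[i - 1] + 1.0 / delta[i])
    m[0], m[-1] = delta[0], delta[-1]         # one-sided end slopes
    xq = np.atleast_1d(np.asarray(xq, float))
    idx = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = (xq - x[idx]) / h[idx]
    # cubic Hermite basis functions
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * y[idx] + h10 * h[idx] * m[idx]
            + h01 * y[idx + 1] + h11 * h[idx] * m[idx + 1])
```

Unlike an unconstrained cubic spline, this interpolant cannot overshoot between monotone table entries, which is the property such compact tabular schemes aim to preserve.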
Fuzzy Interpolation and Other Interpolation Methods Used in Robot Calibrations
Directory of Open Access Journals (Sweden)
Ying Bai
2012-01-01
Full Text Available A novel interpolation algorithm, fuzzy interpolation, is presented and compared with other popular interpolation methods widely implemented in industrial robot calibrations and manufacturing applications. Different interpolation algorithms have been developed, reported, and implemented in many industrial robot calibrations and manufacturing processes in recent years. Most of them seek optimal interpolation trajectories based on known values at given points around a workspace. However, it is rare to build optimal interpolation results from randomly distributed noise, even though this is one of the most common problems in industrial testing and measurement applications. The fuzzy interpolation algorithm (FIA) reported in this paper provides a convenient and simple way to solve this problem and offers more accurate interpolation results based on given position or orientation errors that are randomly distributed in real time. This method can be implemented in many industrial applications, such as manipulator measurement and calibration, industrial automation, and semiconductor manufacturing processes.
Potential problems with interpolating fields
Energy Technology Data Exchange (ETDEWEB)
Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)
2017-11-15
A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)
CASCADER: An m-chain gas-phase radionuclide transport and fate model
International Nuclear Information System (INIS)
Cawlfield, D.E.; Been, K.B.; Emer, D.F.; Lindstrom, F.T.; Shott, G.J.
1993-06-01
Chemicals and radionuclides move in the gas phase, the liquid phase, or both phases in soils. They may be acted upon by biological or abiotic processes and transported by advection and/or diffusion. Furthermore, parent and daughter radionuclides may decay as they are transported in the soil. This is volume two of the CASCADER series, titled CASCADR8; it embodies the concepts presented in volume one of the series. To properly understand how the CASCADR8 model works, the reader should read volume one first. This volume presents the input and output file structure for CASCADR8, and a set of realistic scenarios for buried sources of radon gas
International Nuclear Information System (INIS)
Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica
1990-01-01
This report describes a time interpolator with which time differences can be measured using digital and analog techniques. It covers a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter-Coupled Logic (ECL) and analog high-frequency techniques. The difficulty accompanying the use of ECL logic is keeping the interconnections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time interpolator consists of a continuously running clock and logic which converts an input signal into start and stop signals. The analog part consists of a Time to Amplitude Converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs
Ansari, Imran Shafique
2010-12-01
The introduction of new schemes that are based on communication among nodes has motivated the use of composite fading models, due to the fact that the nodes experience different multipath fading and shadowing statistics, which in turn determine the statistics required for the performance analysis of different transceivers. The end-to-end signal-to-noise ratio (SNR) statistics play an essential role in determining the performance of cascaded digital communication systems. In this thesis, a closed-form expression for the probability density function (PDF) of the end-to-end SNR for independent but not necessarily identically distributed (i.n.i.d.) cascaded generalized-K (GK) composite fading channels is derived. The developed PDF expression, in terms of the Meijer G-function, allows the derivation of subsequent performance metrics, applicable to different modulation schemes, including outage probability, bit error rate for coherent as well as non-coherent systems, and average channel capacity, providing insight into the performance of a digital communication system operating in an N-cascaded GK composite fading environment. Another line of research motivated by the introduction of composite fading channels is the error performance. Error performance is one of the main performance measures, and the derivation of its closed-form expression has proved to be quite involved for certain systems. Hence, in this thesis, a unified closed-form expression, applicable to different binary modulation schemes, for the bit error rate of dual-branch selection-diversity-based systems undergoing i.n.i.d. GK fading is derived in terms of the extended generalized bivariate Meijer G-function.
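How cascading degrades the end-to-end SNR can be illustrated numerically. The sketch below Monte Carlo-samples N cascaded generalized-K channels, taking each GK power gain as the product of two unit-mean Gamma variates (multipath and shadowing); the parameter values and the simulation route are illustrative, standing in for the thesis's closed-form Meijer G-function results.

```python
import numpy as np

def cascaded_gk_outage(n_channels, m, k, snr_mean_db, thresh_db,
                       n_samples=200_000, seed=1):
    """Monte Carlo outage probability P(SNR < threshold) for N cascaded
    generalized-K channels; each GK power gain is modeled here as the
    product of two unit-mean Gamma variates (multipath x shadowing)."""
    rng = np.random.default_rng(seed)
    gain = np.ones(n_samples)
    for _ in range(n_channels):
        multipath = rng.gamma(shape=m, scale=1.0 / m, size=n_samples)
        shadowing = rng.gamma(shape=k, scale=1.0 / k, size=n_samples)
        gain *= multipath * shadowing        # unit-mean GK power gain
    snr = 10**(snr_mean_db / 10) * gain      # end-to-end SNR
    return float(np.mean(snr < 10**(thresh_db / 10)))
```

Because each hop multiplies in another fading variate, the end-to-end SNR distribution spreads out and the outage probability grows with N, which is the behavior the closed-form PDF captures analytically.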
Three-dimensional Cascaded Lattice Boltzmann Model for Thermal Convective Flows
Hajabdollahi, Farzaneh; Premnath, Kannan
2017-11-01
Fluid motion driven by thermal effects, such as due to buoyancy in differentially heated enclosures arise in several natural and industrial settings, whose understanding can be achieved via numerical simulations. Lattice Boltzmann (LB) methods are efficient kinetic computational approaches for coupled flow physics problems. In this study, we develop three-dimensional (3D) LB models based on central moments and multiple relaxation times for D3Q7 and D3Q15 lattices to solve the energy transport equations in a double distribution function approach. Their collision operators lead to a cascaded structure involving higher order terms resulting in improved stability. This is coupled to a central moment based LB flow solver with source terms. The new 3D cascaded LB models for the convective flows are first validated for natural convection of air driven thermally on two vertically opposite faces in a cubic cavity at different Rayleigh numbers against prior numerical and experimental data, which show good quantitative agreement. Then, the detailed structure of the 3D flow and thermal fields and the heat transfer rates at different Rayleigh numbers are analyzed and interpreted.
Rudaz, Benjamin; Bardou, Eric; Jaboyedoff, Michel
2015-04-01
Alpine ephemeral streams act as links between high-altitude erosional processes, slope movements and valley-floor fluvial systems or fan storage. Anticipating future mass wasting from these systems is crucial for hazard mitigation measures. Torrential activity is highly stochastic, with episodic transfers separating long periods of calm, during which the system evolves internally and recharges. Changes can originate from diffuse sources (rock faces, sheet erosion of bare moraines), concentrated external sources (rock glacier fronts, slope instabilities) or internal transfers (bed incision or aggradation). The proposed sediment cascade model takes these different processes into account and calculates sediment transfer from the slope to the channel reaches, then propagates sediments downstream. The two controlling parameters are precipitation series (generated from existing rain gauge data using Gumbel and extreme probability distribution functions) and temperature (generated from local meteorological station data and IPCC scenarios). Snow accumulation and melting, and thus runoff, can then be determined for each subsystem to account for different altitudes and expositions. External stocks and sediment sources each have a specific response to temperature and precipitation. For instance, production from rock faces depends on freeze-thaw cycles in addition to precipitation. On the other hand, landslide velocity, and thus sediment production, is linked to precipitation over longer periods of time. Finally, rock glaciers react to long-term temperature trends, but are also prone to sudden release of material during extreme rain events. All these modules feed the main sediment cascade model, constructed around homogeneous torrent reaches, to and from which sediments are transported by debris flows and bedload transport events. These events are determined using a runoff/erosion curve, with a threshold determining the occurrence of debris flows in the system. If a debris
Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era
Heikkinen, Aatos
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...
Interpolation effects in tabulated interatomic potentials
Wen, M.; Whalen, S. M.; Elliott, R. S.; Tadmor, E. B.
2015-10-01
Empirical interatomic potentials are widely used in atomistic simulations due to their ability to compute the total energy and interatomic forces quickly relative to more accurate quantum calculations. The functional forms in these potentials are sometimes stored in a tabulated format, as a collection of data points (argument-value pairs), and a suitable interpolation (often spline-based) is used to obtain the function value at an arbitrary point. We explore the effect of these interpolations on the potential predictions by calculating the quasi-harmonic thermal expansion and finite-temperature elastic constant of a one-dimensional chain compared with molecular dynamics simulations. Our results show that some predictions are affected by the choice of interpolation regardless of the number of tabulated data points. Our results clearly indicate that the interpolation must be considered part of the potential definition, especially for lattice dynamics properties that depend on higher-order derivatives of the potential. This is facilitated by the Knowledgebase of Interatomic Models (KIM) project, in which both the tabulated data (‘parameterized model’) and the code that interpolates them to compute energy and forces (‘model driver’) are stored and given unique citeable identifiers. We have developed cubic and quintic spline model drivers for pair functional type models (EAM, FS, EMT) and uploaded them to the OpenKIM repository (https://openkim.org).
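To make the issue concrete, the sketch below tabulates a Lennard-Jones pair potential and evaluates energies and forces from a natural cubic spline through the table, so the interpolation error in the derivative (the force) can be compared with the error in the energy itself. The potential, grid, and natural boundary conditions are illustrative choices, not the OpenKIM model drivers themselves.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable evaluating the natural cubic spline through
    (x, y); call with deriv=True for the first derivative."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    # solve the tridiagonal system for the second derivatives M_i
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0           # natural BCs: M_0 = M_n = 0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def ev(xq, deriv=False):
        i = np.clip(np.searchsorted(x, xq) - 1, 0, n - 1)
        a, b = x[i + 1] - xq, xq - x[i]
        if not deriv:
            return (M[i] * a**3 + M[i + 1] * b**3) / (6 * h[i]) \
                 + (y[i] / h[i] - M[i] * h[i] / 6) * a \
                 + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6) * b
        return (-M[i] * a**2 + M[i + 1] * b**2) / (2 * h[i]) \
             + (y[i + 1] - y[i]) / h[i] + (M[i] - M[i + 1]) * h[i] / 6
    return ev

# tabulate a Lennard-Jones pair potential (reduced units) on 30 points
r_tab = np.linspace(0.9, 3.0, 30)
lj = lambda r: 4 * (r**-12 - r**-6)
lj_force = lambda r: 4 * (12 * r**-13 - 6 * r**-7)   # -dV/dr, analytic
spline = natural_cubic_spline(r_tab, lj(r_tab))
r = 1.37
energy_err = abs(spline(r) - lj(r))
force_err = abs(-spline(r, deriv=True) - lj_force(r))
```

The spline reproduces the tabulated energies exactly at the nodes, but its derivative between nodes is only an approximation, which is why derivative-dependent properties such as lattice dynamics are sensitive to the interpolation choice.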
The cascade of uncertainty in modeling the impacts of climate change on Europe's forests
Reyer, Christopher; Lasch-Born, Petra; Suckow, Felicitas; Gutsch, Martin
2015-04-01
Projecting the impacts of global change on forest ecosystems is a cornerstone for designing sustainable forest management strategies and paramount for assessing the potential of Europe's forests to contribute to the EU bioeconomy. Research on climate change impacts on forests relies to a large extent on model applications along a model chain, from Integrated Assessment Models to General and Regional Circulation Models that provide important driving variables for forest models, and on to decision support systems that synthesize the findings of more detailed forest models to inform forest managers. At each step in the model chain, model-specific uncertainties about, amongst others, parameter values, input data or model structure accumulate, leading to a cascade of uncertainty. For example, climate change impacts on forests strongly depend on the inclusion or exclusion of CO2 effects, or on the use of an ensemble of climate models rather than reliance on one particular climate model. In the past, these uncertainties have not, or only partly, been considered in studies of climate change impacts on forests. This has left managers and decision-makers in doubt about how robust the projected impacts on forest ecosystems are. We deal with this cascade of uncertainty in a structured way, and the objective of this presentation is to assess how different types of uncertainties affect projections of the effects of climate change on forest ecosystems. To address this objective we synthesized a large body of scientific literature on modeled productivity changes and the effects of extreme events on plant processes. Furthermore, we apply the process-based forest growth model 4C to forest stands all over Europe and assess how different climate models, emission scenarios and assumptions about the parameters and structure of 4C affect the uncertainty of the model projections. We show that there are consistent regional changes in forest productivity such as an increase in NPP in cold and wet regions while
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
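The comparison workflow above (interpolate, then cross-validate against held-out wells) can be sketched with the simplest of the seven methods. Inverse distance weighting needs no variogram model, which the Kriging variants would add; the data below are synthetic stand-ins for the observation wells.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse distance weighted interpolation: a weighted mean of the
    observations, with weights d**-power."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):                  # query coincides with a well
        return values[np.argmin(d)]
    w = d**-power
    return np.sum(w * values) / np.sum(w)

def loo_cross_validation(points, values, interp=idw):
    """Leave-one-out cross-validation: predict each well from the
    others and report the mean absolute error, the kind of accuracy
    measure used to rank the seven methods."""
    errors = []
    for i in range(len(points)):
        mask = np.arange(len(points)) != i
        errors.append(abs(interp(points[mask], values[mask], points[i]) - values[i]))
    return float(np.mean(errors))
```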
Andreh, Angga Muhamad; Subiyanto, Sunardiyo, Said
2017-01-01
With the development of non-linear loads in industrial applications and distribution systems, harmonic compensation becomes important. Harmonic pollution is an urgent problem in improving power quality. The main contributions of the study are the modeling approach used to design a shunt active filter and the application of the cascaded multilevel inverter topology to improve the power quality of electrical energy. In this study, the shunt active filter was designed to eliminate the dominant harmonic components by injecting currents opposite to the harmonic components of the system. The active filter was designed in a shunt configuration with the cascaded multilevel inverter method, controlled by a PID controller and SPWM. With this shunt active filter, the harmonic current can be reduced so that the current waveform of the source is approximately sinusoidal. Design and simulation were conducted using Power Simulator (PSIM) software. The shunt active filter's performance was evaluated on the IEEE four-bus test system. Installing the shunt active filter on the system (IEEE four-bus) reduced the current THD from 28.68% to 3.09%. With this result, the active filter can be applied as an effective method to reduce harmonics.
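The THD figures quoted (28.68% before filtering, 3.09% after) can be reproduced from any simulated current waveform with an FFT-based estimator like the sketch below. The 50 Hz fundamental and the harmonic count are illustrative assumptions; the paper's system details are not given here.

```python
import numpy as np

def thd_percent(current, fs, f0=50.0, n_harmonics=20):
    """Total harmonic distortion of a current waveform from its FFT:
    THD = sqrt(sum of harmonic magnitudes squared) / fundamental, in %."""
    n = len(current)
    spec = np.abs(np.fft.rfft(current)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def mag_at(f):
        # magnitude of the FFT bin closest to frequency f
        return spec[np.argmin(np.abs(freqs - f))]

    fund = mag_at(f0)
    harm = np.sqrt(sum(mag_at(h * f0)**2 for h in range(2, n_harmonics + 1)))
    return 100.0 * harm / fund
```

A pure sine gives THD near zero, while adding a fifth harmonic at 20% of the fundamental gives THD near 20%, the same figure of merit the study reports before and after filtering.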
Kaushik, Swati; Nair, Anu G; Mutt, Eshita; Subramanian, Hari Prasanna; Sowdhamini, Ramanathan
2016-02-01
In the post-genomic era, automatic annotation of protein sequences using computational homology-based methods is highly desirable. However, protein sequences often diverge to an extent where detection of homology and automatic annotation transfer is not straightforward, and sophisticated approaches to detect such distant relationships are needed. We propose a new approach to identify deep evolutionary relationships of proteins that overcomes shortcomings of the available methods. We have developed a method to identify remote homologues more effectively from any protein sequence database by using several cascading events with Hidden Markov Models (C-HMM). We have implemented clustering of hits and profile generation from hit clusters to effectively reduce the computational time of the cascaded sequence searches. Our C-HMM approach achieved 94, 83 and 40% coverage at the family, superfamily and fold levels, respectively, when applied to diverse protein folds. We have compared C-HMM with various remote homology detection methods and discuss the trade-offs between coverage and false positives. A standalone package implemented in Java, along with detailed documentation, can be downloaded from https://github.com/RSLabNCBS/C-HMM. Supplementary data are available at Bioinformatics online. Contact: mini@ncbs.res.in.
Interpolating string field theories
International Nuclear Information System (INIS)
Zwiebach, B.
1992-01-01
This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles
Smooth Phase Interpolated Keying
Borah, Deva K.
2007-01-01
Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values
Beckerman, Bernardo S.; Jerrett, Michael; Martin, Randall V.; van Donkelaar, Aaron; Ross, Zev; Burnett, Richard T.
2013-10-01
Land use regression (LUR) models are widely employed in health studies to characterize chronic exposure to air pollution. The LUR is essentially an interpolation technique that employs the pollutant of interest as the dependent variable with proximate land use, traffic, and physical environmental variables used as independent predictors. Two major limitations with this method have not been addressed: (1) variable selection in the model building process, and (2) dealing with unbalanced repeated measures. In this paper, we address these issues with a modeling framework that implements the deletion/substitution/addition (DSA) machine learning algorithm that uses a generalized linear model to average over unbalanced temporal observations. Models were derived for fine particulate matter with aerodynamic diameter of 2.5 microns or less (PM2.5) and nitrogen dioxide (NO2) using monthly observations. We used 4119 observations at 108 sites and 15,301 observations at 138 sites for PM2.5 and NO2, respectively. We derived models with good predictive capacity (cross-validated-R2 values were 0.65 and 0.71 for PM2.5 and NO2, respectively). By addressing these two shortcomings in current approaches to LUR modeling, we have developed a framework that minimizes arbitrary decisions during the model selection process. We have also demonstrated how to integrate temporally unbalanced data in a theoretically sound manner. These developments could have widespread applicability for future LUR modeling efforts.
International Nuclear Information System (INIS)
Doneddu, F.
1982-01-01
Starting from the modelization of gaseous flow in a porous medium (flow in a capillary), we generalize the law of enrichment in an infinite cylindrical capillary, established for an isotropic linear mixture, to a multicomponent mixture. A generalization is given of the notions of separation yield and characteristic pressure classically used for separations of isotropic linear mixtures. We present formulas for diagonalizing the diffusion operator, a modelization of a multistage gaseous diffusion cascade, and a comparison with the experimental results of a drain cascade (N2-SF6-UF6 mixture). [fr]
Hosseini, M.; Magagi, R.; Goita, K.
2013-12-01
Soil moisture is an important parameter in hydrology that can be derived from remote sensing. Different studies have shown that optical-thermal, active and passive microwave remote sensing data can be used for soil moisture estimation. However, the most promising approach for estimating soil moisture over large areas is passive microwave radiometry, and global estimation of soil moisture using remote sensing techniques is now operational. The Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Soil Moisture and Ocean Salinity (SMOS) passive microwave radiometers, launched in 2002 and 2009 respectively, along with the upcoming Soil Moisture Active-Passive (SMAP) satellite planned for launch in the 2014-2015 time frame, make remote sensing more useful for soil moisture estimation. However, the spatial resolutions of AMSR-E, SMOS and SMAP are 60 km, 40 km and 10 km, respectively. These very coarse spatial resolutions cannot capture the temporal and spatial variability of soil moisture at field or small scales, so disaggregation methods are required to use the passive-microwave-derived soil moisture information efficiently at different scales. The coarse spatial resolutions of passive microwave satellites can be improved by using disaggregation methods. The Random Cascade (RC) model (Over and Gupta, 1996) is used in this research to downscale the 40 km resolution of the SMOS satellite. Using this statistical method, the SMOS soil moisture resolution is improved to 20 km, 10 km, 5 km and 2.5 km, respectively. The data measured during the Soil Moisture Active Passive Validation Experiment 2012 (SMAPVEX12) field campaign are used for the experiments. In total, the ground data and SMOS images obtained on 13 different days from 7 June 2012 to 13 July 2012 are used. By comparison with ground soil moisture, it is observed that the SMOS soil moisture is underestimated for all the images and so bias amounts
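The downscaling chain 40 km → 20 → 10 → 5 → 2.5 km corresponds to four dyadic cascade levels. Below is a minimal multiplicative random cascade in the spirit of Over and Gupta (1996); the unit-mean lognormal weight generator and the exact per-cell renormalization are illustrative choices, not the calibrated SMOS model.

```python
import numpy as np

def random_cascade_downscale(field, levels, sigma=0.4, seed=0):
    """Multiplicative random cascade: each cell is split 2x2 per level
    with random unit-mean weights, preserving each parent cell's mean.
    The lognormal weight distribution is an illustrative choice."""
    rng = np.random.default_rng(seed)
    out = np.asarray(field, dtype=float)
    for _ in range(levels):
        ny, nx = out.shape
        # unit-mean lognormal weights for every child cell
        w = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(2 * ny, 2 * nx))
        # renormalize every 2x2 block so each parent's mean is conserved
        block_means = w.reshape(ny, 2, nx, 2).mean(axis=(1, 3))
        w /= np.repeat(np.repeat(block_means, 2, axis=0), 2, axis=1)
        out = np.repeat(np.repeat(out, 2, axis=0), 2, axis=1) * w
    return out
```

Each level doubles the resolution while leaving the coarse-cell means unchanged, so the downscaled field aggregates back exactly to the satellite product it started from.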
Developmental Cascade Model for Adolescent Substance Use From Infancy to Late Adolescence
Eiden, Rina D.; Lessard, Jared; Colder, Craig R.; Livingston, Jennifer; Casey, Meghan; Leonard, Kenneth E.
2016-01-01
A developmental cascade model for adolescent substance use beginning in infancy was examined in a sample of children with alcoholic and non-alcoholic parents. The model examined the role of parents’ alcohol diagnoses, depression and antisocial behavior in a cascading process of risk via three major hypothesized pathways: first via parental warmth/sensitivity from toddler to kindergarten age predicting higher parental monitoring in middle childhood through early adolescence serving as a protective pathway for adolescent substance use; second, via child low self-regulation in the preschool years to a continuing externalizing behavior problem pathway leading to underage drinking and higher engagement with substance using peers; and third, via higher social competence from kindergarten age through middle childhood being protective against engagement with delinquent and substance using peers, and leading to lower adolescent substance use. The sample consisted of 227 intact families recruited from the community at 12 months of child age. Results were supportive for the first two pathways to substance use in late adolescence. Among proximal, early adolescent risks, engagement with delinquent peers and parent’s acceptance of underage drinking were significant predictors of late adolescent alcohol and marijuana use. The results highlight the important protective roles of maternal warmth/sensitivity in early childhood to kindergarten age, parental monitoring in middle childhood, and of child self-regulation in the preschool period as reducing risk for externalizing behavior problems, underage drinking, and engagement with delinquent peers in early adolescence. Specific implications for the creation of developmentally fine-tuned preventive intervention are discussed. PMID:27584669
Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades
Kenyon, Scott J.; Bromley, Benjamin C.
2017-04-01
We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, the size of the largest object changes with time as r_max ∝ t^(-γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(-(γ+1)) and L_d ∝ t^(-(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
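The quoted power laws can be transcribed directly. The helper below evaluates the model's decline of total mass and small-particle luminosity; the normalizations (m0, l0, t0) are illustrative free choices, not values from the paper.

```python
import numpy as np

def debris_disk_decline(t, gamma=0.15, m0=1.0, l0=1.0, t0=1.0):
    """Evaluate the model's scaling laws M_d ∝ t^-(gamma+1) and
    L_d ∝ t^-(gamma/2+1), normalized to (m0, l0) at t = t0."""
    m = m0 * (t / t0) ** -(gamma + 1)       # total solid mass in the annulus
    l = l0 * (t / t0) ** -(gamma / 2 + 1)   # small-particle luminosity
    return m, l
```

Setting gamma = 0 recovers the standard cascade with constant r_max and its familiar 1/t decline; any gamma > 0 makes both quantities decay faster, which is the model's key observational consequence.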
Energy Technology Data Exchange (ETDEWEB)
Aatos, Heikkinen; Andi, Hektor; Veikko, Karimaki; Tomas, Linden [Helsinki Univ., Institute of Physics (Finland)
2003-07-01
We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation tool-kit Geant4 with a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)
A two-stage cascade model of BOLD responses in human visual cortex.
Directory of Open Access Journals (Sweden)
Kendrick N Kay
Full Text Available Visual neuroscientists have discovered fundamental properties of neural representation through careful analysis of responses to controlled stimuli. Typically, different properties are studied and modeled separately. To integrate our knowledge, it is necessary to build general models that begin with an input image and predict responses to a wide range of stimuli. In this study, we develop a model that accepts an arbitrary band-pass grayscale image as input and predicts blood oxygenation level dependent (BOLD) responses in early visual cortex as output. The model has a cascade architecture, consisting of two stages of linear and nonlinear operations. The first stage involves well-established computations (local oriented filters and divisive normalization), whereas the second stage involves novel computations: compressive spatial summation (a form of normalization) and a variance-like nonlinearity that generates selectivity for second-order contrast. The parameters of the model, which are estimated from BOLD data, vary systematically across visual field maps: compared to primary visual cortex, extrastriate maps generally have larger receptive field size, stronger levels of normalization, and increased selectivity for second-order contrast. Our results provide insight into how stimuli are encoded and transformed in successive stages of visual processing.
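The first-stage divisive normalization can be sketched as follows; the exponent and semi-saturation constant are illustrative values, not the parameters fitted from BOLD data in the study.

```python
import numpy as np

def divisive_normalization(filter_outputs, sigma=0.1, n=2.0):
    """Divide each rectified filter output by the pooled population activity:
        r_i = x_i**n / (sigma**n + sum_j x_j**n)
    sigma (semi-saturation) and n (exponent) are illustrative, not fitted values."""
    x = np.abs(np.asarray(filter_outputs, dtype=float)) ** n
    return x / (sigma ** n + x.sum())
```

The division by pooled activity compresses strong responses relative to weak ones, the hallmark of normalization models.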
Suttinger, Matthew; Go, Rowel; Figueiredo, Pedro; Todi, Ankesh; Shu, Hong; Leshin, Jason; Lyakh, Arkadiy
2018-01-01
Experimental and model results for 15-stage broad area quantum cascade lasers (QCLs) are presented. Continuous wave (CW) power scaling from 1.62 to 2.34 W has been experimentally demonstrated for 3.15-mm long, high reflection-coated QCLs for an active region width increased from 10 to 20 μm. A semiempirical model for broad area devices operating in CW mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and sublinearity of pulsed power versus current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall-plug efficiency can be achieved from 3.15 mm×25 μm devices with 21 stages of the same design, but half doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, pulsed rollover current density of 6 kA/cm², and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that power level can be increased to ~4.5 W from 3.15 mm×31 μm devices with the baseline configuration with T0 increased from 140 K for the present design to 250 K.
Energy Technology Data Exchange (ETDEWEB)
Wampler, William R.; Myers, Samuel Maxwell
2014-02-01
A model is presented for recombination of charge carriers at displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers and defects within a representative spherically symmetric cluster. The initial radial defect profiles within the cluster were chosen through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Charging of the defects can produce high electric fields within the cluster which may influence transport and reaction of carriers and defects, and which may enhance carrier recombination through band-to-trap tunneling. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to pulsed neutron irradiation.
Directory of Open Access Journals (Sweden)
Arnaud Grüss
2018-01-01
Full Text Available To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by
Influence maximization in social networks under an independent cascade-based model
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. The corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
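The standard independent cascade process underlying models like IMIC-OC can be sketched as below; this minimal version omits the opinion dynamics that IMIC-OC layers on top, and the uniform activation probability p is an illustrative simplification.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """One run of the independent cascade model.
    graph: dict mapping node -> list of neighbors; seeds: initially active nodes.
    Each newly activated node gets a single chance to activate each inactive
    neighbor with probability p.  Returns the set of activated nodes."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active
```

Influence maximization then searches for the seed set whose expected cascade size (estimated by repeated runs) is largest.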
Market disruption, cascading effects, and economic recovery: a life-cycle hypothesis model.
Energy Technology Data Exchange (ETDEWEB)
Sprigg, James A.
2004-11-01
This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
A disposition of interpolation techniques
Knotters, M.; Heuvelink, G.B.M.
2010-01-01
A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated
International Nuclear Information System (INIS)
1997-01-01
It is widely recognized that cascade models are potentially effective and powerful tools for interpreting and predicting multi-particle observables in heavy ion physics. However, the lack of common standards, documentation, version control, and accessibility has made it difficult to apply objective scientific criteria for evaluating the many physical and algorithmic assumptions, or even to reproduce some published results. The first RIKEN Research Center workshop was proposed by Yang Pang to address this problem by establishing open standards for original codes for applications to nuclear collisions at RHIC energies. The aims of this first workshop are: (1) to prepare a WWW depository site for original source codes and detailed documentation with examples; (2) to develop and perform standardized tests for the models, such as Lorentz invariance, kinetic theory comparisons, and thermodynamic simulations; (3) to publish a compilation of results of the above work in a journal, e.g., "Heavy Ion Physics"; and (4) to establish a policy statement on a set of minimal requirements for inclusion in the OSCAR-WWW depository
2.5-D and 3-D DC resistivity modelling using an extrapolation cascadic multigrid method
Pan, Kejia; Tang, Jingtian
2014-06-01
Multigrid methods are well known for their high efficiency in solving elliptic boundary value problems. In this study, an improved extrapolation cascadic multigrid (EXCMG) method is presented to solve large sparse systems of linear equations, which are discretized from both 2.5-D and 3-D DC resistivity modelling using the finite element method. To increase the accuracy, the singularity generated by the source term is removed by reformulating the solution with the secondary potential. In addition, a set of new and efficient Fourier coefficients is presented to transform the solutions in the 2.5-D Fourier domain to the 3-D Cartesian domain. To show the efficiency and ease of implementation of EXCMG, we first apply the EXCMG method to a two-layered model in both 2-D and 3-D and compare the results with the analytical solutions. It is shown that the maximum relative error in apparent resistivity is no more than 0.4 per cent provided an appropriate grid size is chosen. Comparisons of EXCMG with two other iterative solvers [symmetric successive over-relaxation conjugate gradient (SSORCG) and incomplete Cholesky conjugate gradient (ICCG)] then show that, converging at a rate independent of the grid size, the EXCMG method is much more efficient than the SSORCG and ICCG solvers. Moreover, the EXCMG method shows potential for generalization to large-scale 3-D problems, since it becomes more efficient as the size of the problem increases.
International Nuclear Information System (INIS)
Singh, B.N.; Ghoniem, N.M.; Trinkaus, H.
2002-01-01
The analysis of the available experimental observations shows that the occurrence of a sudden yield drop and the associated plastic flow localization are the major concerns regarding the performance and lifetime of materials exposed to fission or fusion neutrons. In the light of the known mechanical properties and microstructures of the as-irradiated and irradiated and deformed materials, it has been argued that the increase in the upper yield stress, the sudden yield drop and the initiation of plastic flow localization, can be rationalized in terms of the cascade induced source hardening (CISH) model. Various aspects of the model (main assumptions and predictions) have been investigated using analytical calculations, 3-D dislocation dynamics and molecular dynamics simulations. The main results and conclusions are briefly summarized. Finally, it is pointed out that even though the formation of cleared channels may be rationalized in terms of climb-controlled glide of the source dislocation, a number of problems regarding the initiation and the evolution of these channels remain unsolved
Energy Technology Data Exchange (ETDEWEB)
Singh, B.N. E-mail: bachu.singh@risoe.dk; Ghoniem, N.M.; Trinkaus, H
2002-12-01
The analysis of the available experimental observations shows that the occurrence of a sudden yield drop and the associated plastic flow localization are the major concerns regarding the performance and lifetime of materials exposed to fission or fusion neutrons. In the light of the known mechanical properties and microstructures of the as-irradiated and irradiated and deformed materials, it has been argued that the increase in the upper yield stress, the sudden yield drop and the initiation of plastic flow localization, can be rationalized in terms of the cascade induced source hardening (CISH) model. Various aspects of the model (main assumptions and predictions) have been investigated using analytical calculations, 3-D dislocation dynamics and molecular dynamics simulations. The main results and conclusions are briefly summarized. Finally, it is pointed out that even though the formation of cleared channels may be rationalized in terms of climb-controlled glide of the source dislocation, a number of problems regarding the initiation and the evolution of these channels remain unsolved.
Perry, Bruce A.; Anderson, Molly S.
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station Water Processor Assembly to form a complete water recovery system for future missions. A preliminary chemical process simulation was previously developed using Aspen Custom Modeler® (ACM), but it could not simulate thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. This paper describes modifications to the ACM simulation of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version can be used to model thermal startup and predicts the total energy consumption of the CDS. The simulation has been validated for both NaCl solution and pretreated urine feeds and no longer requires retuning when operating parameters change. The simulation was also used to predict how internal processes and operating conditions of the CDS affect its performance. In particular, it is shown that the coefficient of performance of the thermoelectric heat pump used to provide heating and cooling for the CDS is the largest factor in determining CDS efficiency. Intrastage heat transfer affects CDS performance indirectly through effects on the coefficient of performance.
Hernandez, Andrew M; Boone, John M
2014-04-01
Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R²) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, "Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector," Phys. Med. Biol. 24, 505-517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV
Energy Technology Data Exchange (ETDEWEB)
Hernandez, Andrew M. [Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States); Boone, John M., E-mail: john.boone@ucdmc.ucdavis.edu [Departments of Radiology and Biomedical Engineering, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States)
2014-04-15
Purpose: Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. Methods: X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Results: Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R²) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, “Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector,” Phys. Med. Biol. 24, 505–517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9
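The per-energy-bin interpolation of fluence versus tube potential can be illustrated with a minimal one-dimensional natural cubic spline; knot positions and values below are arbitrary, and TASMICS itself builds such piecewise polynomials from the 35 simulated basis spectra.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline through the knots (x, y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    # Tridiagonal system for the second derivatives M (natural end conditions).
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)

    def evaluate(t):
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 1))
        dx = t - x[i]
        c1 = (y[i + 1] - y[i]) / h[i] - h[i] * (2.0 * M[i] + M[i + 1]) / 6.0
        return y[i] + c1 * dx + M[i] / 2.0 * dx**2 + (M[i + 1] - M[i]) / (6.0 * h[i]) * dx**3
    return evaluate
```

Evaluating such a spline at 1 kV steps between the simulated tube potentials is the interpolation step described above, applied independently in each energy bin.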
Directory of Open Access Journals (Sweden)
Dennis Rödder
2012-08-01
Full Text Available Abstract. - Species distribution models (SDMs) are increasingly used in many scientific fields, with most studies requiring the application of the SDM to predict the likelihood of occurrence and/or environmental suitability in locations and time periods outside the range of the data set used to fit the model. Uncertainty in the quality of SDM predictions caused by errors of interpolation and extrapolation has been acknowledged for a long time, but the explicit consideration of the magnitude of such errors is, as yet, uncommon. Among other issues, the spatial variation in the collinearity of the environmental predictor variables used in the development of SDMs may cause misleading predictions when applying SDMs to novel locations and time periods. In this paper, we provide a framework for the spatially explicit identification of areas prone to errors caused by changes in the inter-correlation structure (i.e., the collinearity) of environmental predictors used for SDM development. The proposed method is compatible with all SDM algorithms currently employed, and expands the available toolbox for assessing the uncertainties arising from SDM predictions. We provide an implementation of the analysis as a script for the R statistical platform in an online appendix.
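The core idea, flagging locations where the inter-correlation structure of the predictors differs from that of the training data, can be sketched with a simple summary statistic; this scalar measure is a simplification of the spatially explicit framework described above (the paper's own implementation is an R script).

```python
import numpy as np

def collinearity_shift(train_X, novel_X):
    """Maximum absolute change in pairwise predictor correlations between the
    training data and a novel region or time period (columns = predictors).
    Large values flag areas where SDM projections may be unreliable."""
    c_train = np.corrcoef(train_X, rowvar=False)
    c_novel = np.corrcoef(novel_X, rowvar=False)
    return float(np.max(np.abs(c_train - c_novel)))
```

Mapping this statistic over a grid of novel locations gives a spatially explicit picture of where collinearity has changed.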
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Ince Serdar
2008-01-01
Full Text Available Abstract View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
Linda A. Joyce; David T. Price; Daniel W. McKenney; R. Martin Siltanen; Pia Papadopol; Kevin Lawrence; David P. Coulson
2011-01-01
Projections of future climate were selected for four well-established general circulation models (GCM) forced by each of three greenhouse gas (GHG) emissions scenarios, namely A2, A1B, and B1 from the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES). Monthly data for the period 1961-2100 were downloaded mainly from the web...
Cascade model of gamma-ray bursts: Power-law and annihilation-line components
Harding, A. K.; Sturrock, P. A.; Daugherty, J. K.
1988-01-01
If, in a neutron star magnetosphere, an electron is accelerated to an energy of 10^11 or 10^12 eV by an electric field parallel to the magnetic field, motion of the electron along the curved field line leads to a cascade of gamma rays and electron-positron pairs. This process is believed to occur in radio pulsars and gamma-ray burst sources. Results are presented from numerical simulations of the radiation and photon-annihilation pair-production processes, using a computer code previously developed for the study of radio pulsars. A range of values was considered for the initial energy of the primary electron, the initial injection position, and the magnetic dipole moment of the neutron star. The resulting spectra were found to exhibit complex forms that are typically power law over a substantial range of photon energy, and typically include a dip in the spectrum near the electron gyro-frequency at the injection point. The results of a number of models are compared with data for the 5 March 1979 gamma-ray burst. A good fit was found to the gamma-ray part of the spectrum, including the equivalent width of the annihilation line.
BIMOND3, Monotone Bivariate Interpolation
International Nuclear Information System (INIS)
Fritsch, F.N.; Carlson, R.E.
2001-01-01
1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
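The univariate analogue of BIMOND's scheme, monotone piecewise-cubic Hermite interpolation, can be sketched as below. The limited harmonic-mean slope formula is one common sufficient condition for monotonicity (a Fritsch-Butland-style variant), not necessarily the exact derivative formula used inside BIMOND.

```python
from bisect import bisect_right

def monotone_cubic_interp(x, y, t):
    """Monotone piecewise-cubic Hermite interpolation in one dimension.
    Knot derivatives use a limited harmonic mean of adjacent secant slopes,
    which keeps each cubic segment within the monotonicity region."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    d = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]   # secant slopes
    m = [d[0]] + [0.0] * (n - 2) + [d[-1]]                 # knot derivatives
    for i in range(1, n - 1):
        if d[i - 1] * d[i] > 0:                            # no local extremum
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
    i = min(max(bisect_right(x, t) - 1, 0), n - 2)         # locate interval
    s = (t - x[i]) / h[i]
    h00, h10 = 2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s      # Hermite basis
    h01, h11 = -2*s**3 + 3*s**2, s**3 - s**2
    return h00*y[i] + h10*h[i]*m[i] + h01*y[i+1] + h11*h[i]*m[i+1]
```

BIMOND applies the same idea in two variables, constraining the bi-cubic patch derivatives so that monotonicity of the gridded data is preserved.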
Perry, Bruce; Anderson, Molly
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.
Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2017-02-01
A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
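The grid-wise merge can be illustrated with a scalar optimal-interpolation update; the gain formula is the standard one, while the variance values in the test are placeholders (the paper estimates the background error covariance from a running 31-year window of CCSM4 output rather than prescribing scalars).

```python
def optimal_interpolation(background, observation, var_b, var_o):
    """Merge a model background with an observation using the optimal
    interpolation gain K = var_b / (var_b + var_o).  Scalar illustration of
    the grid-point merging of simulation output with tree-ring data."""
    K = var_b / (var_b + var_o)
    analysis = background + K * (observation - background)
    analysis_var = (1.0 - K) * var_b        # error variance after the merge
    return analysis, analysis_var
```

The analysis always lies between the two inputs, weighted toward whichever has the smaller error variance, and its error variance is smaller than either input's.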
Directory of Open Access Journals (Sweden)
P. Phaochoo
2016-01-01
Full Text Available In this paper, the fractional Black–Scholes equation arising in a financial problem is solved numerically for the price of a European call or put option under the Black–Scholes model. The MLPG and implicit finite difference methods are used to discretize the governing equation in the option price and time variables, respectively. In the MLPG method, the shape function is constructed by a moving kriging approximation, and the Dirac delta function is chosen as the test function. Numerical examples for a variety of parameter values are also included.
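The implicit time stepping can be sketched for the classical (non-fractional) Black-Scholes PDE; here the paper's MLPG spatial discretization is replaced by standard central differences, and all parameter values are illustrative.

```python
import numpy as np

def implicit_fd_call(S_max=300.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=120, N=120):
    """European call priced with a fully implicit finite-difference scheme in
    time for the classical Black-Scholes PDE (central differences in space).
    Returns the asset-price grid S and option values V at t = 0."""
    dt = T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)                    # payoff at maturity
    j = np.arange(1, M)                           # interior node indices
    a = 0.5 * dt * (r * j - sigma**2 * j**2)      # sub-diagonal
    b = 1.0 + dt * (sigma**2 * j**2 + r)          # main diagonal
    c = -0.5 * dt * (sigma**2 * j**2 + r * j)     # super-diagonal
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    for n in range(N):
        tau = (n + 1) * dt                        # time to maturity after step
        upper = S_max - K * np.exp(-r * tau)      # boundary value at S_max
        rhs = V[1:-1].copy()
        rhs[-1] -= c[-1] * upper                  # fold known boundary into rhs
        V[1:-1] = np.linalg.solve(A, rhs)
        V[0], V[-1] = 0.0, upper
    return S, V
```

Being fully implicit, the scheme is unconditionally stable, which is why it is a common choice for the time variable in hybrid meshless/finite-difference solvers.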
The research on NURBS adaptive interpolation technology
Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng
2017-04-01
In order to address the problems of NURBS adaptive interpolation technology, such as long interpolation times, complicated calculations, and NURBS curve step errors that are not easily controlled, this paper proposes an adaptive interpolation algorithm for NURBS curves together with its simulation. The NURBS adaptive interpolator computes the interpolation points (xi, yi, zi). Simulation results show that the algorithm is correct and that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
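The adaptive step-error control can be sketched for a generic parametric curve: each interpolation period the parameter increment is chosen from the feedrate and then halved until the chord deviation is within tolerance. A full NURBS evaluator is omitted for brevity, and the feedrate, period, and tolerance values are illustrative.

```python
import math

def adaptive_interpolate(curve, u_end=1.0, v=10.0, T=0.001, chord_tol=1e-4):
    """Adaptive parameter-step interpolation of a parametric curve.
    curve: maps u -> (x, y); v: feedrate; T: interpolation period.
    Each step, du is set from the desired feed v*T and halved until the
    mid-point deviation from the chord is within chord_tol."""
    pts, u = [curve(0.0)], 0.0
    while u < u_end:
        x0, y0 = curve(u)
        x1, y1 = curve(min(u + 1e-6, u_end))
        speed = math.hypot(x1 - x0, y1 - y0) / 1e-6    # |dC/du| estimate
        du = v * T / max(speed, 1e-12)
        while True:
            xm, ym = curve(u + du / 2.0)
            xe, ye = curve(min(u + du, u_end))
            err = abs(xm - (x0 + xe) / 2.0) + abs(ym - (y0 + ye) / 2.0)
            if err <= chord_tol or du < 1e-9:
                break
            du /= 2.0                                  # refine the step
        u = min(u + du, u_end)
        pts.append(curve(u))
    return pts
```

On low-curvature spans the interpolator takes full feed-length steps, while tight curvature forces smaller parameter increments, which is the essence of adaptive NURBS interpolation.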
The EH Interpolation Spline and Its Approximation
Directory of Open Access Journals (Sweden)
Jin Xie
2014-01-01
Full Text Available A new interpolation spline with two parameters, called the EH interpolation spline, is presented in this paper. It extends the standard cubic Hermite interpolation spline and inherits its properties. For fixed interpolation conditions, the shape of the proposed spline can be adjusted by changing the values of the parameters. Moreover, by means of a new algorithm, the proposed spline approximates the interpolated function better than both the standard cubic Hermite interpolation spline and the quartic Hermite interpolation spline with a single parameter.
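The abstract does not give the EH basis functions themselves, but the baseline it extends, the standard cubic Hermite interpolation spline, is well known and can be sketched as:

```python
import numpy as np

def hermite(x, xs, ys, ms):
    """Piecewise cubic Hermite interpolation -- the baseline the EH spline extends.
    xs: knots, ys: values at the knots, ms: derivatives at the knots."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    h = xs[i+1] - xs[i]
    t = (x - xs[i]) / h
    # standard cubic Hermite basis functions
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*ys[i] + h10*h*ms[i] + h01*ys[i+1] + h11*h*ms[i+1]

xs = np.array([0.0, 1.0, 2.0])
ys = np.sin(xs)
ms = np.cos(xs)          # exact derivatives of sin at the knots
# The interpolant reproduces the data exactly and tracks sin closely in between.
```

An EH-style spline adds free shape parameters to these basis functions, so that the curve between knots can be tuned without changing the interpolated values or derivatives.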
Spectral Cascade-Transport Turbulence Model Development for Two-Phase Flows
Brown, Cameron Scott
Turbulence modeling remains a challenging problem in nuclear reactor applications, particularly for the turbulent multiphase flow conditions in nuclear reactor subchannels. Understanding the fundamental physics of turbulent multiphase flows is crucial for the improvement and further development of multiphase flow models used in reactor operation and safety calculations. Reactor calculations with the Reynolds-averaged Navier-Stokes (RANS) approach continue to become viable tools for reactor analysis. The on-going increase in available computational resources allows for turbulence models that are more complex than the traditional two-equation models to become practical choices for nuclear reactor computational fluid dynamic (CFD) and multiphase computational fluid dynamic (M-CFD) simulations. Similarly, increased computational capabilities continue to allow for higher Reynolds numbers and more complex geometries to be evaluated using direct numerical simulation (DNS), thus providing more validation and verification data for turbulence model development. Spectral turbulence models are a promising approach to M-CFD simulations. These models resolve mean flow parameters as well as the turbulent kinetic energy spectrum, reproducing more physical details of the turbulence than traditional two-equation type models. Previously, work performed by other researchers on a spectral cascade-transport model has shown that the model behaves well for single- and bubbly two-phase decay of isotropic turbulence, single- and two-phase uniform shear flow, and single-phase flow in a channel without resolving the near-wall boundary layer for relatively low Reynolds numbers. Spectral models are great candidates for multiphase RANS modeling since bubble source terms can be modeled as contributions to specific turbulence scales. This work focuses on the improvement and further development of the spectral cascade-transport model (SCTM) to become a three-dimensional (3D) turbulence model for use in M
Energy Technology Data Exchange (ETDEWEB)
Steinbeck, T.; Rohr, J. [m.u.t. GmbH, Wedel (Germany)
2005-06-01
Quantum cascade lasers represent an almost ideal light source for infrared gas analysis. They allow sensitive and selective measurements in the mid-infrared. The detection of combustion gases for early fire detection represents an interesting field of application in which their technological benefits come to the fore. The focus of this report is on the technical realization of a functional model and the electronic components. (orig.)
Li, Z. W.
2012-05-01
The propagation delay that radar signals experience while travelling through the troposphere has been one of the major limitations for the applications of high precision repeat-pass Interferometric Synthetic Aperture Radar (InSAR). In this paper, we first present an elevation-dependent atmospheric correction model for Advanced Synthetic Aperture Radar (ASAR—the instrument aboard the ENVISAT satellite) interferograms with Medium Resolution Imaging Spectrometer (MERIS) integrated water vapour (IWV) data. Then, using four ASAR interferometric pairs over Southern California as examples, we conduct the atmospheric correction experiments with cloud-free MERIS IWV data. The results show that after the correction the rms differences between InSAR and GPS have reduced by 69.6 per cent, 29 per cent, 31.8 per cent and 23.3 per cent, respectively, for the four selected interferograms, with an average improvement of 38.4 per cent. Most importantly, after the correction, six distinct deformation areas have been identified, that is, Long Beach–Santa Ana Basin, Pomona–Ontario, San Bernardino and Elsinore basin, with the deformation velocities along the radar line-of-sight (LOS) direction ranging from −20 mm yr−1 to −30 mm yr−1 and on average around −25 mm yr−1, and Santa Fe Springs and Wilmington, with a somewhat lower deformation rate of about −10 mm yr−1 along LOS. Finally, through the method of stacking, we generate a mean deformation velocity map of Los Angeles over a period of 5 yr. The deformation is quite consistent with the historical deformation of the area. Thus, using cloud-free MERIS IWV data to correct synchronized ASAR interferograms can significantly reduce the atmospheric effects in the interferograms and thus better capture the ground deformation and other geophysical signals.
Energy Technology Data Exchange (ETDEWEB)
Volant, C.; Turzo, K.; Trautmann, W.; Auger, G.; Begemann-Blaich, M.-L.; Bittiger, R.; Borderie, B.; Botvina, A.S.; Bougault, R.; Bouriquet, B.; Charvet, J.-L.; Chbihi, A.; Dayras, R.; Dore, D.; Durand, D.; Frankland, J.D.; Galichet, E.; Gourio, D.; Guinet, D.; Hudan, S.; Imme, G.; Lautesse, Ph.; Lavaud, F.; Le Fevre, A.; Lopez, O.; Lukasik, J.; Lynen, U.; Mueller, W.F.J.; Nalpas, L.; Orth, H.; Plagnol, E.; Raciti, G.; Rosato, E.; Saija, A.; Schwarz, C.; Seidel, W.; Sfienti, C.; Steckmeyer, J.C.; Tamain, B.; Trzcinski, A.; Vient, E.; Vigilante, M.; Zwieglinski, B
2004-04-05
The nucleus-nucleus Liege intranuclear-cascade+percolation+evaporation model has been applied to the {sup 12}C+{sup 197}Au data measured by the INDRA-ALADIN collaboration at GSI. After the intranuclear cascade stage, the data are better reproduced when the Statistical Multifragmentation Model is used as an afterburner. Further checks of the model are done on data from the EOS and KAOS collaborations.
Ajeani, Judith; Mangwi Ayiasi, Richard; Tetui, Moses; Ekirapa-Kiracho, Elizabeth; Namazzi, Gertrude; Muhumuza Kananura, Ronald; Namusoke Kiwanuka, Suzanne; Beyeza-Kashesya, Jolly
2017-08-01
There is increasing demand for trainers to shift from traditional didactic training to innovative approaches that are more results-oriented. Mentorship is one such approach that could bridge the clinical knowledge gap among health workers. This paper describes the experiences of an attempt to improve health-worker performance in maternal and newborn health in three rural districts through a mentoring process using the cascade model. The paper further highlights achievements and lessons learnt during implementation of the cascade model. The cascade model started with initial training of health workers from three districts of Pallisa, Kibuku and Kamuli from where potential local mentors were selected for further training and mentorship by central mentors. These local mentors then went on to conduct mentorship visits supported by the external mentors. The mentorship process concentrated on partograph use, newborn resuscitation, prevention and management of Post-Partum Haemorrhage (PPH), including active management of third stage of labour, preeclampsia management and management of the sick newborn. Data for this paper was obtained from key informant interviews with district-level managers and local mentors. Mentorship improved several aspects of health-care delivery, ranging from improved competencies and responsiveness to emergencies and health-worker professionalism. In addition, due to better district leadership for Maternal and Newborn Health (MNH), there were improved supplies/medicine availability, team work and innovative local problem-solving approaches. Health workers were ultimately empowered to perform better. The study demonstrated that it is possible to improve the competencies of frontline health workers through performance enhancement for MNH services using locally built capacity in clinical mentorship for Emergency Obstetric and Newborn Care (EmONC). The cascade mentoring process needed strong external mentorship support at the start to ensure improved
Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn
2018-04-11
The classification of cancer subtypes is of great importance to cancer disease diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep-learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks that learns hyper-representations by using cascaded ensembles of decision trees. It has been shown that the deep forest model has competitive, or even better, performance than deep neural networks in some settings. However, the standard deep forest model may face overfitting and ensemble-diversity challenges when dealing with small-sample-size, high-dimensional biology data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biology datasets; it can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions. First, a method named multi-class-grained scanning is proposed to train multiple binary classifiers to encourage ensemble diversity; meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve the classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms the state-of-the-art methods in application of cancer subtype classification. The multi-class-grained scanning and boosting strategy in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model working on small-scale data. Our model provides a useful approach to the classification of cancer subtypes
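A minimal cascade-forest sketch (not BCDForest itself) illustrates the core deep forest mechanism the paper builds on: each layer's forests emit class probabilities that are appended to the features fed to the next layer. For brevity this sketch reuses training-set probabilities directly rather than the k-fold scheme a real deep forest uses, and it runs on a generic sklearn dataset rather than gene expression data.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

def cascade_fit_predict(Xtr, ytr, Xte, n_layers=2):
    """Cascade of forest layers: class probabilities from each layer are
    concatenated with the raw features as input to the next layer."""
    aug_tr, aug_te = Xtr, Xte
    for _ in range(n_layers):
        layer = [RandomForestClassifier(100, random_state=0),
                 ExtraTreesClassifier(100, random_state=0)]
        ptr, pte = [], []
        for clf in layer:
            clf.fit(aug_tr, ytr)
            ptr.append(clf.predict_proba(aug_tr))  # augmented features
            pte.append(clf.predict_proba(aug_te))
        aug_tr = np.hstack([Xtr] + ptr)
        aug_te = np.hstack([Xte] + pte)
    # final prediction: average the last layer's class probabilities
    return np.mean(pte, axis=0).argmax(axis=1)

pred = cascade_fit_predict(Xtr, ytr, Xte)
acc = (pred == yte).mean()
```

BCDForest's two contributions slot into this skeleton: the multi-class-grained scanning replaces the plain feature pass-through, and the boosting strategy reweights features between layers.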
Modeling and Analysis of the Common Mode Voltage in a Cascaded H-Bridge Electronic Power Transformer
Directory of Open Access Journals (Sweden)
Yun Yang
2017-09-01
Full Text Available Electronic power transformers (EPTs have been identified as emerging intelligent electronic devices in the future smart grid, e.g., the Energy Internet, especially in the application of renewable energy conversion and management. Considering that the EPT is directly connected to the medium-voltage grid, e.g., a 10 kV distribution system, and its cascaded H-bridge structure, the common mode voltage (CMV issue will be more complex and severe. The CMV will threaten the insulation of the entire EPT device and even produce common mode current. This paper investigates the generated mechanism and characteristics of the CMV in a cascaded H-bridge EPT (CHB-EPT under both balanced and fault grid conditions. First, the CHB-EPT system is introduced. Then, a three-phase simplified circuit model of the high-voltage side of the EPT system is presented. Combined with a unipolar modulation strategy and carrier phase shifting technology by rigorous mathematical analysis and derivation, the EPT internal CMV and its characteristics are obtained. Moreover, the influence of the sinusoidal pulse width modulation dead time is considered and discussed based on analytical calculation. Finally, the simulation results are provided to verify the validity of the aforementioned model and the analysis results. The proposed theoretical analysis method is also suitable for other similar cascaded converters and can provide a useful theoretical guide for structural design and power density optimization.
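The CMV mechanism can be illustrated numerically: with phase-shifted unipolar PWM, the instantaneous sum of the three phase voltages is generally nonzero, and one third of that sum is the common-mode voltage at the load star point. The cell count, carrier frequency, modulation index and DC-link voltage below are illustrative stand-ins, not the paper's parameters, and dead time is ignored.

```python
import numpy as np

N, Vdc = 3, 1.0                 # cells per phase, per-cell DC voltage (illustrative)
f0, fc = 50.0, 1050.0           # fundamental and carrier frequencies (Hz)
t = np.linspace(0, 1/f0, 20001)

def phase_voltage(theta):
    """Cascaded H-bridge phase voltage with unipolar, phase-shifted PWM."""
    ref = 0.9 * np.sin(2*np.pi*f0*t + theta)
    v = np.zeros_like(t)
    for k in range(N):
        # triangular carrier in [-1, 1], shifted by pi/N between cells
        car = 2/np.pi * np.arcsin(np.sin(2*np.pi*fc*t + k*np.pi/N))
        # unipolar modulation: the two legs compare +ref and -ref to the carrier
        v += Vdc * ((ref > car).astype(float) - (-ref > car).astype(float))
    return v

va = phase_voltage(0.0)
vb = phase_voltage(-2*np.pi/3)
vc = phase_voltage(+2*np.pi/3)
vcm = (va + vb + vc) / 3.0      # common-mode voltage at the star point
```

Even with a balanced three-phase reference, the switched waveforms do not cancel instant by instant, so `vcm` is a high-frequency, nonzero signal whose period-average is near zero, which is the qualitative behaviour the paper analyzes rigorously.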
The Impact of the Topology on Cascading Failures in a Power Grid Model
Koç, Y.; Warnier, M.; Mieghem, P. van; Kooij, R.E.; Brazier, F.M.T.
2014-01-01
Cascading failures are one of the main reasons for large scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly
Numerical modeling of energy-separation in cascaded Leontiev tubes with a central body
Directory of Open Access Journals (Sweden)
Makarov Maksim
2017-01-01
Full Text Available Designs of two- and three-cascaded Leontiev tubes are proposed in the paper. The results of numerical simulation of the energy separation in such tubes are presented. The efficiency parameters are determined in direct flows of helium-xenon coolant with low Prandtl number.
International Nuclear Information System (INIS)
Bregeon, J.
2005-09-01
GLAST is the new generation of Gamma-ray telescope and should dramatically improve our knowledge of the gamma-ray sky when it is launched on September 7, 2007. Data from the beam test that was held at GANIL with low energy ions were analyzed in order to measure the light quenching factor of CsI for all kinds of ions from proton to krypton of energy between 0 and 73 MeV per nucleon. These results have been very useful to understand the light quenching for relativistic ions that was measured during the GSI beam test. The knowledge of light quenching in GLAST CsI detectors for high energy ions is required for the on-orbit calibration with cosmic rays to succeed. Hadronic background rejection is another major issue for GLAST, and all the rejection algorithms rely on the official GLAST Monte-Carlo simulation, GlastRelease. Hadronic cascade data from the GSI beam test and from another beam test held at CERN on the SPS have been used to benchmark hadronic cascade simulation within the framework of GEANT4, on which GlastRelease is based. Testing the reproduction of simple parameters in GLAST-like calorimeters for hadronic cascades generated by 1.7 GeV, 3.4 GeV, 10 GeV and 20 GeV protons or pions led us to the conclusion that at high energy the default LHEP model is good enough, whereas at low energy the Bertini intra-nuclear cascade model should be used. (author)
Rajabali Nejad, Mohammadreza; Mahdi, Tew-Fik
2010-01-01
A recently developed Bayesian interpolation method (BI) and its application to safety assessment of a flood defense structure are described in this paper. We use a one-dimensional Bayesian Monte Carlo method (BMC) that has been proposed in (Rajabalinejad 2009) to develop a weighted logical
Research of Cubic Bezier Curve NC Interpolation Signal Generator
Directory of Open Access Journals (Sweden)
Shijun Ji
2014-08-01
Full Text Available Interpolation technology is the core of a computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular-arc, linear, or parabolic interpolation. For the numerical control (NC) machining of parts with complicated surfaces, the usual approach is to establish a mathematical model, generate the curve and surface outline of the parts, and then discretize the generated outline into a large number of straight-line or arc segments for processing. This creates complex programs with a large amount of code and inevitably introduces approximation error, all of which affect machining accuracy, surface roughness, and machining efficiency. This paper studies the stepless interpolation of cubic Bezier curves controlled by analog signals: the tool motion trajectory of a Bezier curve can be planned directly in the CNC system by adjusting the control points, and the resulting data drive the control motor to complete precise feeding along the Bezier curve. This method extends the trajectory-control ability of CNC from simple lines and circular arcs to complex engineering curves, and it provides a new, economical way to machine curved-surface parts with high quality and high efficiency.
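Evaluating the cubic Bezier tool path itself reduces to repeated linear interpolation (de Casteljau's algorithm), which is the usual starting point for such interpolators. A minimal sketch with illustrative control points:

```python
import numpy as np

def de_casteljau(P, t):
    """Evaluate a cubic Bezier curve at parameter t by repeated linear
    interpolation -- numerically stable and simple enough to map onto
    signal-generator style hardware."""
    Q = np.asarray(P, float)
    while len(Q) > 1:
        Q = (1 - t) * Q[:-1] + t * Q[1:]   # one round of linear interpolation
    return Q[0]

# Control points as a CNC path planner might set them (illustrative values)
P = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
start = de_casteljau(P, 0.0)   # coincides with the first control point
end = de_casteljau(P, 1.0)     # coincides with the last control point
mid = de_casteljau(P, 0.5)
```

Adjusting the control points reshapes the whole trajectory continuously, which is what makes direct Bezier planning attractive compared with discretizing the path into many line and arc segments.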
Spatial interpolation of monthly mean air temperature data for Latvia
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for a proper, high-quality analysis of local characteristics. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, biggest lakes and rivers, and population density. As the most appropriate of these parameters, based on a complex situation analysis, mean elevation and continentality were chosen. In order to validate interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
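Kriging with external drift amounts to solving one augmented linear system per target point, with drift columns (e.g. elevation) appended to the usual kriging matrix. A dense toy sketch, with an assumed exponential covariance model and made-up station data:

```python
import numpy as np

def ked_predict(X, z, drift, X0, drift0, rng=50.0, sill=1.0):
    """Kriging with external drift, minimal dense-solve sketch.
    X: (n,2) station coords, z: observations, drift: (n,m) drift values
    (e.g. elevation) at the stations; X0/drift0: target point and its drift."""
    n = len(z)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng)                # exponential covariance model
    F = np.column_stack([np.ones(n), drift])   # constant term + external drifts
    m = F.shape[1]
    # augmented KED system: [[C, F], [F^T, 0]] [weights; multipliers] = [c0; f0]
    A = np.block([[C, F], [F.T, np.zeros((m, m))]])
    d0 = np.linalg.norm(X - X0, axis=1)
    b = np.concatenate([sill * np.exp(-d0 / rng), np.r_[1.0, drift0]])
    lam = np.linalg.solve(A, b)[:n]            # kriging weights
    return lam @ z

# Toy example: temperature decreasing with elevation at a lapse-rate-like slope
X = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
elev = np.array([10.0, 200.0, 50.0, 400.0])
temp = 6.0 - 0.0065 * elev
pred = ked_predict(X, temp, elev[:, None], np.array([5.0, 5.0]), [120.0])
```

Because the toy data are exactly linear in elevation, the drift constraints make the prediction reproduce the trend at the target elevation, which is the behaviour that lets KED recover temperature gradients that inverse distance weighting cannot.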
Temporal interpolation in Meteosat images
DEFF Research Database (Denmark)
Larsen, Rasmus; Hansen, Johan Dore; Ersbøll, Bjarne Kjær
in such animated films are perceived as being jerky due to the low temporal sampling rate in general and missing images in particular. In order to perform a satisfactory temporal interpolation we estimate and use the optical flow corresponding to every image in the sequence. The estimation of the optical flow...... a threshold between clouds and land/water. The temperature maps are estimated using observations from the image sequence itself at cloud-free pixels and ground temperature measurements from a series of meteorological observation stations in Europe. The temporal interpolation of the images is based on a path...... of each pixel determined by the estimated optical flow. The performance of the algorithm is illustrated by the interpolation of a sequence of Meteosat infrared images....
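The flow-based interpolation step, moving each pixel half a step along its estimated optical flow from both neighbouring frames and blending the results, can be sketched as follows. Here the flow field is given rather than estimated, and the warping is nearest-neighbour splatting for simplicity.

```python
import numpy as np

def warp_halfway(img0, img1, flow):
    """Temporal interpolation sketch: build the halfway frame by moving img0
    half a step forward and img1 half a step backward along the optical flow,
    splatting with nearest-neighbour rounding and averaging the contributions."""
    H, W = img0.shape
    ys, xs = np.mgrid[0:H, 0:W]
    mid = np.zeros((H, W), float)
    cnt = np.zeros((H, W), float)
    for img, sign in ((img0, +0.5), (img1, -0.5)):
        yy = np.clip(np.round(ys + sign * flow[..., 1]).astype(int), 0, H - 1)
        xx = np.clip(np.round(xs + sign * flow[..., 0]).astype(int), 0, W - 1)
        np.add.at(mid, (yy, xx), img)   # unbuffered accumulation of pixel values
        np.add.at(cnt, (yy, xx), 1.0)
    return mid / np.maximum(cnt, 1.0)

# Toy cloud: a bright blob translating 2 px to the right between frames
img0 = np.zeros((8, 8)); img0[3, 2] = 1.0
img1 = np.zeros((8, 8)); img1[3, 4] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 0] = 2.0   # uniform 2-px flow in x
mid = warp_halfway(img0, img1, flow)
```

The blob lands at the halfway position, which is why flow-based interpolation removes the jerkiness that plain frame averaging (which would produce two half-intensity blobs) cannot.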
Covert, Michael
2015-01-01
This book is intended for software developers, system architects and analysts, big data project managers, and data scientists who wish to deploy big data solutions using the Cascading framework. You must have a basic understanding of the big data paradigm and should be familiar with Java development techniques.
Directory of Open Access Journals (Sweden)
Ramzia Abu Hamad
2017-12-01
Full Text Available Background/Aims: Renal injuries induced by increased intra-glomerular pressure coincide with podocyte detachment from the glomerular basement membrane (GBM. In previous studies, it was demonstrated that mesangial cells have a crucial role in the pathogenesis of malignant hypertension. However, the exact pathophysiological cascade responsible for podocyte detachment and its relationship with mesangial cells has not been fully elucidated yet, and this was the aim of the current study. Methods: Rat renal mesangial cells or podocytes were exposed to high hydrostatic pressure in an in-vitro model of malignant hypertension. The resulting effects on podocyte detachment, apoptosis, and the expression of podocin and integrin β1, as well as Angiotensin-II and TGF-β1 generation, were evaluated. To simulate the paracrine effect, podocytes were placed in mesangial cell media pre-exposed to pressure, or in media enriched with Angiotensin-II, TGF-β1 or receptor blockers. Results: High pressure resulted in increased Angiotensin-II levels in mesangial and podocyte cells. Angiotensin-II, via the AT1 receptors, reduced podocin and integrin β1 expression, culminating in detachment of both viable and apoptotic podocytes. Mesangial cells exposed to pressure had a greater increase in Angiotensin-II than pressure-exposed podocytes. The massively increased concentration of Angiotensin-II by mesangial cells, together with increased TGF-β1 production, resulted in increased apoptosis and detachment of non-viable apoptotic podocytes. Unlike the direct effect of pressure on podocytes, the mesangial-mediated effects were not related to changes in adhesion protein expression. Conclusions: Hypertension induces podocyte detachment by autocrine and paracrine effects. In a direct response to pressure, podocytes increase Angiotensin-II levels. This leads, via AT1 receptors, to structural changes in adhesion proteins, culminating in viable podocyte detachment. Paracrine effects of
Model approach for stress induced steroidal hormone cascade changes in severe mental diseases.
Volko, Claus D; Regidor, Pedro A; Rohr, Uwe D
2016-03-01
Stress was described by Cushing and Selye as an adaptation to a foreign stressor by the anterior pituitary increasing ACTH, which stimulates the release of glucocorticoid and mineralocorticoid hormones. The question is raised whether stress can induce additional steroidal hormone cascade changes in severe mental diseases (SMD), since stress is the common denominator. A systematic literature review was conducted in PubMed, where the steroidal hormone cascade of patients with SMD was compared to the impact of increasing stress on the steroidal hormone cascade (a) in healthy amateur marathon runners with no overtraining; (b) in healthy well-trained elite soldiers of a ranger training unit in North Norway, who were under extreme physical and mental stress, sleep deprivation, and insufficient calories for 1 week; and (c) in soldiers suffering from post-traumatic stress disorder (PTSD), schizophrenia (SI), and bipolar disorders (BD). (a) When healthy men and women are exposed to moderate physical stress for 3-5 days, as in the case of amateur marathon runners, only a few steroidal hormones are altered. A mild reduction in testosterone, cholesterol and triglycerides is detected in blood and in saliva, but there is no decrease in estradiol. Conversely, there is an increase of the glucocorticoids, aldosterone and cortisol. Cellular immunity, but not specific immunity, is reduced for a short time in these subjects. (b) These changes are also seen in healthy elite soldiers exposed to extreme physical and mental stress, but to a somewhat greater extent. For instance, aldosterone is increased by a factor of three. (c) In SMD, an irreversible effect on the entire steroidal hormone cascade is detected. Hormones at the top of the cascade, such as cholesterol, dehydroepiandrosterone (DHEA), aldosterone and other glucocorticoids, are increased. However, testosterone and estradiol and their metabolites, and other hormones at the lower end of the cascade, seem to be reduced.
Optimization of contrast-enhanced breast imaging: Analysis using a cascaded linear system model.
Hu, Yue-Houng; Scaduto, David A; Zhao, Wei
2017-01-01
Contrast-enhanced (CE) breast imaging involves the injection of contrast agents (i.e., iodine) to increase the conspicuity of malignant lesions. CE imaging may be used in conjunction with digital mammography (DM) or digital breast tomosynthesis (DBT) and has shown promise in improving diagnostic specificity. Both CE-DM and CE-DBT techniques require optimization as clinical diagnostic tools. Physical factors, including x-ray spectra, subtraction technique, and the signal from iodine contrast, must be considered to provide the greatest object detectability and image quality. We developed a cascaded linear system model (CLSM) for the optimization of CE-DM and CE-DBT employing dual energy (DE) subtraction or temporal (TE) subtraction. We have previously developed a CLSM for DBT implemented with an a-Se flat panel imager (FPI) and filtered backprojection (FBP) reconstruction algorithm. The model is used to track image quality metrics - modulation transfer function (MTF) and noise power spectrum (NPS) - at each stage of the imaging chain. In this study, the CLSM is extended for CE breast imaging. The effect of x-ray spectrum (varied by changing tube potential and the filter) and of DE and TE subtraction techniques on breast structural noise was measured and included as a deterministic source of noise in the CLSM. From the two-dimensional (2D) and three-dimensional (3D) MTF and NPS, the ideal observer signal-to-noise ratio (SNR), also known as the detectability index (d'), may be calculated. Using d' as a figure of merit (FOM), we discuss the optimization of CE imaging for the task of iodinated contrast object detection within structured backgrounds. Increasing x-ray energy was determined to decrease the magnitude of structural noise and not its correlation. By performing DE subtraction, the magnitude of the structural noise was further reduced at the expense of increased stochastic (quantum and electronic) noise. TE subtraction exhibited essentially no residual structural noise at the
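The detectability index follows from the MTF and NPS by a single frequency-domain integral. A sketch with stand-in analytic curves: the Gaussian task function, exponential MTF, and 1/f-plus-floor NPS below are illustrative, not the paper's measured quantities.

```python
import numpy as np

# Ideal-observer detectability from 2D MTF and NPS, as in a cascaded linear
# system model:  d'^2 = integral over frequency of |S(f) MTF(f)|^2 / NPS(f).
f = np.linspace(0.01, 5.0, 500)            # spatial frequency axis (cycles/mm)
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)                       # radial frequency

S = np.exp(-np.pi * (fr * 0.5) ** 2)        # task: detect a 0.5 mm Gaussian signal
MTF = np.exp(-fr / 2.0)                     # illustrative system MTF
NPS = 1e-6 * (1.0 + 1.0 / fr)               # quantum floor + structural 1/f term

df = f[1] - f[0]
d2 = 4.0 * np.sum((S * MTF) ** 2 / NPS) * df * df   # factor 4: quadrant symmetry
dprime = np.sqrt(d2)
```

The trade-off the abstract describes is visible in this formula: subtraction lowers the structural (1/f) part of the NPS, raising d', while the added quantum and electronic noise raises the flat floor, lowering it; the optimum balances the two.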
Directory of Open Access Journals (Sweden)
Anna Petit-Boix
2018-03-01
Full Text Available Given the expansion of urban agriculture (UA, we need to understand how this system provides ecosystem services, including foundational societal needs such as social cohesion, i.e., people’s willingness to cooperate with one another. Although social cohesion in UA has been documented, there is no framework for its emergence and how it can be modeled within a sustainability framework. In this study, we address this literature gap by showing how the popular cascade ecosystem services model can be modified to include social structures. We then transform the cascade model into a bottom-up causal framework for UA. In this bottom-up framework, basic biophysical (e.g., land availability and social (e.g., leadership ecosystem structures and processes lead to human activities (e.g., learning that can foster specific human attitudes and feelings (e.g., trust. These attitudes and feelings, when aggregated (e.g., social network, generate an ecosystem value of social cohesion. These cause-effect relationships can support the development of causality pathways in social life cycle assessment (S-LCA and further our understanding of the mechanisms behind social impacts and benefits. The framework also supports UA studies by showing the sustainability of UA as an emergent food supplier in cities.
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
This paper investigates the role that INTERPOL surveillance – the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND) – played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applyi...
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as Computer Aided Geometric Design, Mathematical Modelling, Signal and Image Processing, and Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
Tang, Youhua; Pagowski, Mariusz; Chai, Tianfeng; Pan, Li; Lee, Pius; Baker, Barry; Kumar, Rajesh; Delle Monache, Luca; Tong, Daniel; Kim, Hyun-Cheol
2017-12-01
This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS. Both GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. The assimilations of observations using both GSI and OI generally help reduce the prediction biases and improve correlation between model predictions and observations. In the GSI experiments, assimilation of surface PM2.5 (particulate matter with aerodynamic diameter less than 2.5 µm) helped reduce the root mean squared error (RMSE). It should be noted that the 3D-Var and OI methods used here have several substantial differences besides the data assimilation schemes. For instance, the OI uses relatively large model uncertainties, which helps yield smaller mean biases, but sometimes causes the RMSE to increase. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.
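For a linear observation operator, the 3D-Var cost function has a closed-form minimiser, which is what makes toy comparisons against OI straightforward. A two-gridcell sketch with made-up numbers follows; it is not the GSI implementation, only the underlying algebra.

```python
import numpy as np

def var3d(xb, B, y, H, R):
    """Minimal 3D-Var analysis sketch: minimise
    J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx).
    For linear H the quadratic cost has the closed-form minimiser
    xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

# Two grid cells, one PM2.5 monitor observing the first cell (toy numbers)
xb = np.array([20.0, 18.0])              # background PM2.5 (ug/m3)
B = np.array([[4.0, 2.0], [2.0, 4.0]])   # correlated background errors
H = np.array([[1.0, 0.0]])               # observation operator
R = np.array([[1.0]])                    # observation error variance
y = np.array([25.0])                     # the monitor reads higher than the model
xa = var3d(xb, B, y, H, R)
```

Note that the unobserved second cell is also nudged upward through the off-diagonal background-error covariance; this spreading of sparse monitor information to neighbouring grid cells is exactly what the background-error statistics control in both the GSI and OI configurations.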
Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Li, Xiao-Mei; Feng, Sheng-Qin; Dong, Bao-Guo; Cai, Xu
2012-02-01
We have updated the parton and hadron cascade model PACIAE for the relativistic nuclear collisions, from a version based on JETSET 7.4 and PYTHIA 5.7 to one based on PYTHIA 6.4, and renamed it PACIAE 2.0. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering is discussed. The structures of the programs are briefly explained. In addition, some calculated examples are compared with the experimental data. It turns out that this model (program) works well. Program summary: Program title: PACIAE version 2.0 Catalogue identifier: AEKI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 297 523 No. of bytes in distributed program, including test data, etc.: 2 051 274 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler Operating system: Unix/Linux RAM: 1 G words Word size: 64 bits Classification: 11.2 Nature of problem: The Monte Carlo simulation of the hadron transport (cascade) model is successful in studying the final-state observables in the relativistic nuclear collisions. However, the high-p_T suppression, the jet quenching (energy loss), and the eccentricity scaling of v_2, etc., observed in high energy nuclear collisions indicate the important effect of the initial partonic state on the final hadronic state. Therefore better parton and hadron transport (cascade) models for the relativistic nuclear collisions are highly required. Solution method: The parton and hadron cascade model PACIAE is originally based on JETSET 7.4 and PYTHIA 5.7. The PYTHIA model has been updated to PYTHIA 6.4 with the additions of new physics, the improvements in existing physics, and the
Williams, Kate E; Berthelsen, Donna; Walker, Sue; Nicholson, Jan M
2017-01-01
This article documents the longitudinal and reciprocal relations among behavioral sleep problems and emotional and attentional self-regulation in a population sample of 4,109 children participating in Growing Up in Australia: The Longitudinal Study of Australian Children (LSAC)-Infant Cohort. Maternal reports of children's sleep problems and self-regulation were collected at five time-points from infancy to 8-9 years of age. Longitudinal structural equation modeling supported a developmental cascade model in which sleep problems have a persistent negative effect on emotional regulation, which in turn contributes to ongoing sleep problems and poorer attentional regulation in children over time. Findings suggest that sleep behaviors are a key target for interventions that aim to improve children's self-regulatory capacities.
Guse, Björn; Kail, Jochem; Radinger, Johannes; Schröder, Maria; Kiesel, Jens; Hering, Daniel; Wolter, Christian; Fohrer, Nicola
2015-11-15
Climate and land use changes affect the hydro- and biosphere at different spatial scales. These changes alter hydrological processes at the catchment scale, which impact hydrodynamics and habitat conditions for biota at the river reach scale. In order to investigate the impact of large-scale changes on biota, a cascade of models at different scales is required. Using scenario simulations, the impact of climate and land use change can be compared along the model cascade. Such a cascade of consecutively coupled models was applied in this study. Discharge and water quality are predicted with a hydrological model at the catchment scale. The hydraulic flow conditions are predicted by hydrodynamic models. The habitat suitability under these hydraulic and water quality conditions is assessed based on habitat models for fish and macroinvertebrates. This modelling cascade was applied to predict and compare the impacts of climate and land use changes at different scales, to finally assess their effects on fish and macroinvertebrates. Model simulations revealed that magnitude and direction of change differed along the modelling cascade. Whilst the hydrological model predicted a relevant decrease of discharge due to climate change, the hydraulic conditions changed less. Generally, the habitat suitability for fish decreased, but this was strongly species-specific and suitability even increased for some species. In contrast to climate change, the effect of land use change on discharge was negligible. However, land use change had a stronger impact on the modelled nitrate concentrations affecting the abundances of macroinvertebrates. The scenario simulations for the two organism groups illustrated that direction and intensity of changes in habitat suitability are highly species-dependent. Thus, a joint model analysis of different organism groups combined with the results of hydrological and hydrodynamic models is recommended to assess the impact of climate and land use changes on biota.
Inferring network structure from cascades
Ghonge, Sushrut; Vural, Dervis Can
2017-07-01
Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.
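The paper's inference formulas are not reproduced in the abstract; as a minimal sketch of the forward process that generates such cascade arrival-time data, assuming a discrete-time, independent-cascade-style model (the network, seed node, and activation probability below are illustrative, not from the paper):

```python
import random

def simulate_cascade(edges, n, seed_node, p, rng):
    """Discrete-time cascade on a directed network: each newly active
    node tries once to activate each out-neighbor, with probability p,
    one time step later. Returns node -> arrival time (absent = never)."""
    out = {i: [] for i in range(n)}
    for u, v in edges:
        out[u].append(v)
    times = {seed_node: 0}
    frontier, t = [seed_node], 0
    while frontier:
        t += 1
        nxt = []
        for u in frontier:
            for v in out[u]:
                if v not in times and rng.random() < p:
                    times[v] = t
                    nxt.append(v)
        frontier = nxt
    return times

# with p = 1 on a small directed graph the arrival times are deterministic
times = simulate_cascade([(0, 1), (1, 2), (2, 3), (0, 3)], 4, 0, 1.0,
                         random.Random(0))
```

Inference would then run in the opposite direction: from many observed `times` dictionaries back to `edges`.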
Interpolating of climate data using R
Reinhardt, Katja
2017-04-01
Interpolation methods are used in many different geoscientific areas, such as soil physics, climatology and meteorology. Thereby, unknown values are calculated by applying statistical approaches to known values. So far, the majority of climatologists have been using computer languages such as FORTRAN or C++, but there is also an increasing number of climate scientists using R for data processing and visualization. Most of them, however, are still working with array- and vector-based data, which is often associated with complex R code structures. For the presented study, I have decided to convert the climate data into geodata and to perform the whole data processing using the raster package, gstat and similar packages, providing a much more comfortable way of data handling. A central goal of my approach is to create an easy-to-use, powerful and fast R script, implementing the entire geodata processing and visualization in a single, fully automated R-based procedure, which avoids the necessity of using other software packages, such as ArcGIS or QGIS. Thus, large amounts of data with recurrent process sequences can be processed. The aim of the presented study, which is located in western Central Asia, is to interpolate wind data based on the European reanalysis data ERA-Interim, which are available as raster data with a resolution of 0.75° x 0.75°, to a finer grid. Therefore, various interpolation methods are used: inverse distance weighting, the geostatistical methods ordinary kriging and regression kriging, a generalized additive model, and the machine learning algorithms support vector machine and neural networks. Apart from the first two methods mentioned, the methods are used with influencing factors, e.g. geopotential and topography.
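The study itself works in R with the raster and gstat packages; as a language-neutral sketch of the simplest listed method, inverse distance weighting reduces to a few lines (the station coordinates and values below are made up for illustration):

```python
def idw(known, x, y, power=2.0):
    """Inverse distance weighting: weighted mean of known values, with
    weights proportional to 1/d**power. known = [(x, y, value), ...]."""
    num = den = 0.0
    for kx, ky, val in known:
        d2 = (x - kx) ** 2 + (y - ky) ** 2
        if d2 == 0.0:
            return val  # query point coincides with a station
        w = d2 ** (-power / 2.0)
        num += w * val
        den += w
    return num / den

stations = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
v = idw(stations, 0.5, 0.0)  # midpoint of two equal-weight stations
```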
Energy Technology Data Exchange (ETDEWEB)
Song, Jun Beom [Dept. of Aviation Maintenance, Dongwon Institute of Science and Technology, Yangsan (Korea, Republic of); Byun, Young Seop; Jeong, Jin Seok; Kim, Jeong; Kang, Beom Soo [Dept. of Aerospace Engineering, Pusan National University, Busan (Korea, Republic of)
2016-11-15
This paper proposes a cascaded control structure and a method for its practical application to attitude control of a multi-rotor Unmanned aerial vehicle (UAV). Cascade control, which has tighter control capability than single-loop control, is rarely used in attitude control of multi-rotor UAVs because the input-output relation is no longer simply a set-point to Euler angle response transfer function as in single-loop PID control; instead, there are multiple measured signals and interacting control loops that increase the complexity of evaluation in the conventional design approach. This research, however, proposes a method to optimize a cascade control with primary and secondary loops and a PID controller for each loop. An investigation of currently available PID-tuning methods led to the selection of the Simple internal model control (SIMC) method, which is based on Internal model control (IMC) and the direct-synthesis method. Through analysis and experiments, this research proposes a systematic procedure to implement a cascaded attitude controller, including flight test, system identification and SIMC-based PID-tuning. The proposed method was validated successfully in multiple applications, where the application to the roll axis led to a PID-PID cascade control, while the application to the yaw axis led to a PID-PI cascade control.
International Nuclear Information System (INIS)
Song, Jun Beom; Byun, Young Seop; Jeong, Jin Seok; Kim, Jeong; Kang, Beom Soo
2016-01-01
This paper proposes a cascaded control structure and a method for its practical application to attitude control of a multi-rotor Unmanned aerial vehicle (UAV). Cascade control, which has tighter control capability than single-loop control, is rarely used in attitude control of multi-rotor UAVs because the input-output relation is no longer simply a set-point to Euler angle response transfer function as in single-loop PID control; instead, there are multiple measured signals and interacting control loops that increase the complexity of evaluation in the conventional design approach. This research, however, proposes a method to optimize a cascade control with primary and secondary loops and a PID controller for each loop. An investigation of currently available PID-tuning methods led to the selection of the Simple internal model control (SIMC) method, which is based on Internal model control (IMC) and the direct-synthesis method. Through analysis and experiments, this research proposes a systematic procedure to implement a cascaded attitude controller, including flight test, system identification and SIMC-based PID-tuning. The proposed method was validated successfully in multiple applications, where the application to the roll axis led to a PID-PID cascade control, while the application to the yaw axis led to a PID-PI cascade control.
Directory of Open Access Journals (Sweden)
Mauricio Ayala-Rincón
2013-01-01
Full Text Available In this work, we present an algebraic approach for modeling the two-party cascade protocol of Dolev-Yao and for fully formalizing its security in the specification language of the Prototype Verification System PVS. Although cascade protocols could be argued to be a very limited model, it should be stressed that they are the basis of more sophisticated protocols of great applicability, such as those which allow treatment of multiparty communication, tuples, nonces, name-stamps, signatures, etc. In the current algebraic approach, steps of the protocol are modeled in a monoid freely generated by the cryptographic operators. Words in this monoid are specified as finite sequences, and the whole protocol as a finite sequence of protocol steps, which are functions from pairs of users to sequences of cryptographic operators. In a previous work, assuming that for balanced protocols admissible words produced by a potential intruder should be balanced, a formalization of the characterization of security of this kind of protocol was given in PVS. In this work, the previously assumed property is also formalized, yielding a complete formalization which mathematically guarantees the security of these protocols. Although this property is relatively easy to specify, obtaining a complete formalization requires a great amount of effort, because several algebraic properties related to the preservation of the balancing property of the admissible language of the intruder must be formalized at the granularity of the underlying data structure (finite sequences) used in the specification. Among these properties, the most complex are related to the notion of the linkage property, which allows for a systematic analysis of words of the admissible language of a potential saboteur, showing how he/she is unable to isolate private keys of other users under the assumption of balanced protocols. The difficulties that arose in conducting this formalization are discussed.
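As a rough illustration of the free-monoid view (not the PVS formalization itself), words over encryption/decryption operators E_x and D_x can be reduced by cancelling adjacent inverse pairs; the `(op, user)` tuple encoding below is an assumption made for this sketch:

```python
def reduce_word(word):
    """Cancel adjacent inverse pairs (E_x then D_x, or D_x then E_x, for
    the same user x) in a word of cryptographic operators, mimicking the
    cancellation rules of the cascade-protocol monoid."""
    stack = []
    for op, user in word:
        if stack and stack[-1][1] == user and {stack[-1][0], op} == {"E", "D"}:
            stack.pop()          # E_x D_x or D_x E_x collapses to identity
        else:
            stack.append((op, user))
    return stack

# E_B D_A E_A ("sign with A's key, then encrypt for B"): D_A E_A cancels
reduced = reduce_word([("E", "B"), ("D", "A"), ("E", "A")])
```

A word is balanced (reduces to the identity) exactly when the stack ends empty, which is the property the formalization tracks for the intruder's admissible language.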
Kügler, Philipp; Bulelzai, M A K; Erhardt, André H
2017-04-04
Early afterdepolarizations (EADs) are pathological voltage oscillations during the repolarization phase of cardiac action potentials (APs). EADs are caused by drugs, oxidative stress or ion channel disease, and they are considered as potential precursors to cardiac arrhythmias in recent attempts to redefine the cardiac drug safety paradigm. The irregular behaviour of EADs observed in experiments has been previously attributed to chaotic EAD dynamics under periodic pacing, made possible by a homoclinic bifurcation in the fast subsystem of the deterministic AP system of differential equations. In this article we demonstrate that a homoclinic bifurcation in the fast subsystem of the action potential model is neither a necessary nor a sufficient condition for the genesis of chaotic EADs. We rather argue that a cascade of period doubling (PD) bifurcations of limit cycles in the full AP system paves the way to chaotic EAD dynamics across a variety of models including a) periodically paced and spontaneously active cardiomyocytes, b) periodically paced and non-active cardiomyocytes as well as c) unpaced and spontaneously active cardiomyocytes. Furthermore, our bifurcation analysis reveals that chaotic EAD dynamics may coexist in a stable manner with fully regular AP dynamics, where only the initial conditions decide which type of dynamics is displayed. EADs are a potential source of cardiac arrhythmias and hence are of relevance both from the viewpoint of drug cardiotoxicity testing and the treatment of cardiomyopathies. The model-independent association of chaotic EADs with period doubling cascades of limit cycles introduced in this article opens novel opportunities to study chaotic EADs by means of bifurcation control theory and inverse bifurcation analysis. Furthermore, our results may shed new light on the synchronization and propagation of chaotic EADs in homogeneous and heterogeneous multicellular and cardiac tissue preparations.
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 less than or equal to a/c less than or equal to 1, depth: 0.2 less than or equal to a/B less than or equal to 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 less than or equal to E/ys less than or equal to 1,000, and hardening: 3 less than or equal to n less than or equal to 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
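The report's interpolation methodology is not given in the abstract; a minimal sketch of one plausible ingredient, bilinear interpolation over two of the four parameters (a/c and a/B), is shown below with hypothetical J values (the real solution space is four-dimensional and the actual tables are in the report):

```python
def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation on a rectangular grid.
    xs, ys: sorted axis values; table[i][j] = f(xs[i], ys[j])."""
    def bracket(vals, v):
        for i in range(len(vals) - 1):
            if vals[i] <= v <= vals[i + 1]:
                return i
        raise ValueError("query outside grid")
    i, j = bracket(xs, x), bracket(ys, y)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    f00, f10 = table[i][j], table[i + 1][j]
    f01, f11 = table[i][j + 1], table[i + 1][j + 1]
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# hypothetical J values tabulated over crack shape a/c and depth a/B
ac = [0.2, 0.6, 1.0]
aB = [0.2, 0.5, 0.8]
J = [[1.0, 2.0, 4.0],
     [1.5, 3.0, 6.0],
     [2.0, 4.0, 8.0]]
val = bilinear(ac, aB, J, 0.4, 0.35)
```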
Spatial interpolation methods for monthly rainfalls and temperatures in Basilicata
Directory of Open Access Journals (Sweden)
Ferrara A
2008-12-01
Full Text Available Spatially interpolated climatic data on grids are important as input to forest modeling because climate spatial variability has a direct effect on productivity and forest growth. Maps of climatic variables can be obtained by different interpolation methods, depending on data quality (number of stations, spatial distribution, missing data, etc.) and the topographic and climatic features of the study area. In this paper four methods are compared for interpolating monthly rainfall at regional scale: 1) inverse distance weighting (IDW); 2) regularized spline with tension (RST); 3) ordinary kriging (OK); 4) universal kriging (UK). In addition, an approach is presented to generate monthly surfaces of temperatures over regions of complex terrain with a limited number of stations. Daily data were gathered for the 1976-2006 period and gaps in the time series were filled in order to obtain monthly mean temperatures and cumulative precipitation. Basic statistics of the monthly dataset and an analysis of the relationship of temperature and precipitation to elevation were performed. A linear relationship was found between temperature and altitude, while no relationship was found between rainfall and elevation. Precipitation was therefore interpolated without taking elevation into account. The best method was ranked based on the root mean squared error for each month. Results showed that universal kriging (UK) is the best method for spatial interpolation of rainfall in the study area. Cross validation was then used to compare the prediction performance of three different variogram models (circular, spherical, exponential) using the UK algorithm in order to produce final maps of monthly precipitation. Before interpolation, temperatures were reduced to sea level using the calculated lapse rate and a digital elevation model (DEM). The result of interpolation with RST was then restored to the original elevation by the inverse procedure. To evaluate the quality of the interpolated surfaces, a comparison between interpolated and
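The lapse-rate detrending step described above (reduce station temperatures to sea level, interpolate, then restore elevation via the DEM) can be sketched as follows; the 6.5 K/km value is a common standard-atmosphere assumption, not necessarily the lapse rate calculated in the study:

```python
LAPSE_RATE = 0.0065  # K per metre (6.5 K/km), an assumed standard value

def to_sea_level(temp_c, elev_m, lapse=LAPSE_RATE):
    """Reduce a station temperature to sea level before interpolation."""
    return temp_c + lapse * elev_m

def from_sea_level(temp0_c, elev_m, lapse=LAPSE_RATE):
    """Restore an interpolated sea-level temperature to DEM elevation."""
    return temp0_c - lapse * elev_m

t0 = to_sea_level(12.0, 800.0)        # station at 800 m reading 12.0 °C
t_target = from_sea_level(t0, 500.0)  # restored at a 500 m DEM cell
```

The spatial interpolation itself (RST in the paper) happens between the two calls, on the sea-level values.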
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
Parameters are studied of a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}\cdot s^{-1} and in the fast booster zone is 5.12\cdot 10^{15} cm^{-2}\cdot s^{-1} at k_{eff}=0.98 and proton beam current I=2.1 mA.
Arbuthnott, Alexis E; Lewis, Stephen P; Bailey, Heidi N
2015-01-01
This study examined relations between repeated rumination trials and emotions in nonsuicidal self-injury (NSSI) and eating disorder behaviors (EDBs) within the context of the emotional cascade model (Selby, Anestis, & Joiner, 2008). Rumination was repeatedly induced in 342 university students (79.2% female, M age = 18.61 years, SE = 0.08); negative and positive emotions were reported after each rumination trial. Repeated measures analyses of variance were used to examine the relations between NSSI and EDB history and changes in emotions. NSSI history was associated with greater initial increases in negative emotions, whereas EDB history was associated with greater initial decreases in positive emotions. Baseline negative emotional states and trait emotion regulation mediated the relation between NSSI/EDB history and emotional states after rumination. Although NSSI and EDBs share similarities in emotion dysregulation, differences also exist. Both emotion dysregulation and maladaptive cognitive processes should be targeted in treatment for NSSI and EDBs. © 2014 Wiley Periodicals, Inc.
Song, Ju-Hyun; Volling, Brenda L.; Lane, Jonathan D.; Wellman, Henry M.
2016-01-01
A developmental cascade model was tested to examine longitudinal associations among firstborn children's aggression, Theory-of-Mind, and antagonism toward their younger sibling during the first year of siblinghood. Aggression and Theory-of-Mind were assessed before the birth of a sibling, and 4 and 12 months after the birth, and antagonism was examined at 4 and 12 months in a sample of 208 firstborn children (initial M age = 30 months, 56% girls) from primarily European American, middle-class families. Firstborns' aggression consistently predicted high sibling antagonism both directly and through poorer Theory-of-Mind. Results highlight the importance of examining longitudinal influences across behavioral, social-cognitive, and relational factors that are closely intertwined even from the early years of life. PMID:27096923
Li, Hongzhi; Zhong, Ziyan; Li, Lin; Gao, Rui; Cui, Jingxia; Gao, Ting; Hu, Li Hong; Lu, Yinghua; Su, Zhong-Min; Li, Hui
2015-05-30
A cascaded model is proposed to establish the quantitative structure-activity relationship (QSAR) between the overall power conversion efficiency (PCE) and quantum chemical molecular descriptors of all-organic dye sensitizers. The cascaded model is a two-level network in which the outputs of the first level (JSC, VOC, and FF) are the inputs of the second level, and the ultimate end-point is the overall PCE of dye-sensitized solar cells (DSSCs). The model combines quantum chemical methods and machine learning methods, comprising quantum chemical calculation, data division, feature selection, regression, and validation steps. To improve the efficiency of the model and reduce the redundancy and noise of the molecular descriptors, six feature selection methods (multiple linear regression, genetic algorithms, mean impact value, forward selection, backward elimination, and the +n-m algorithm) are used with the support vector machine. The best established cascaded model predicts the PCE values of DSSCs with an MAE of 0.57%, which is about 10% of the mean PCE value (5.62%). The validation parameters according to the OECD principles are R² (0.75), Q² (0.77), and Q²cv (0.76), which demonstrate the great goodness-of-fit, predictivity, and robustness of the model. Additionally, the applicability domain of the cascaded QSAR model is defined for further application. This study demonstrates that the established cascaded model is able to effectively predict the PCE of organic dye sensitizers at very low cost and with relatively high accuracy, providing a useful tool for the design of dye sensitizers with high PCE. © 2015 Wiley Periodicals, Inc.
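The paper's model uses support vector machines with feature selection; purely to illustrate the two-level cascade wiring (descriptors → JSC, VOC, FF → PCE), the sketch below substitutes trivial one-descriptor linear fits for the first level and the physical relation PCE = JSC·VOC·FF/Pin for the learned second level, with made-up training numbers:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one descriptor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# level 1: descriptor -> intermediate outputs (hypothetical training data)
desc = [1.0, 2.0, 3.0, 4.0]
jsc = [10.0, 12.0, 14.0, 16.0]   # short-circuit current, mA/cm^2
voc = [0.60, 0.62, 0.64, 0.66]   # open-circuit voltage, V
ff = [0.70, 0.70, 0.70, 0.70]    # fill factor
models1 = [fit_linear(desc, y) for y in (jsc, voc, ff)]

def predict_pce(d, p_in=100.0):
    """Level 2: here the physical relation PCE = Jsc*Voc*FF/Pin stands in
    for the paper's learned second-level model."""
    j, v, f = ((a * d + b) for a, b in models1)
    return 100.0 * j * v * f / p_in

pce = predict_pce(2.5)
```

The point of the cascade is that the end-point prediction is constrained to pass through physically meaningful intermediates rather than mapping descriptors to PCE directly.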
Interpolation of rational matrix functions
Ball, Joseph A; Rodman, Leiba
1990-01-01
This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...
Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin
2017-05-01
As an innovation, both black-box and physically based models were incorporated into simulating groundwater flow and contaminant transport (GFCT). Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by a wavelet-based de-noising approach. The effect of the de-noised data on the performance of an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) was evaluated. Wavelet transform coherence was employed for spatial clustering of the piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of the piezometers as interior conditions, the radial basis function, as a meshless method which solves the partial differential equations of GFCT, was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the efficiency of the ANFIS-based spatiotemporal model was up to 13% higher than that of the ANN-based model.
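The meshless PDE solver itself is beyond a short sketch, but the underlying radial-basis-function machinery (fitting weights at known piezometer locations, then evaluating anywhere in the plain) can be illustrated as follows, with a Gaussian kernel and hypothetical coordinates and heads:

```python
import math

def rbf_interpolator(points, values, eps=1.0):
    """Gaussian RBF interpolation: solve A w = f, A_ij = phi(|p_i - p_j|),
    then evaluate s(p) = sum_i w_i * phi(|p - p_i|)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    n = len(points)
    A = [[phi(math.dist(points[i], points[j])) for j in range(n)]
         for i in range(n)]
    w = list(values)
    # naive Gaussian elimination (fine for a handful of piezometers)
    for i in range(n):
        for j in range(i + 1, n):
            m = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= m * A[i][k]
            w[j] -= m * w[i]
    for i in reversed(range(n)):
        w[i] = (w[i] - sum(A[i][k] * w[k] for k in range(i + 1, n))) / A[i][i]
    return lambda p: sum(wi * phi(math.dist(p, pi))
                         for wi, pi in zip(w, points))

piezo = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # hypothetical locations, km
heads = [50.0, 48.0, 49.0]                    # hypothetical heads, m
gl = rbf_interpolator(piezo, heads)
```

By construction the interpolant reproduces the data exactly at the piezometers and gives smooth estimates in between.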
Liu, Junsheng; Bullock, Amanda; Coplan, Robert J; Chen, Xinyin; Li, Dan; Zhou, Ying
2018-03-01
This study explored the longitudinal relations among peer victimization, depression, and academic achievement in Chinese primary school students. Participants were N = 945 fourth-grade students (485 boys, 460 girls; M age = 10.16 years, SD = 2 months) attending elementary schools in Shanghai, People's Republic of China. Three waves of data on peer victimization, depression, and academic achievement were collected from peer nominations, self-reports, and school records, respectively. The results indicated that peer victimization had both direct and indirect effects on later depression and academic achievement. Depression also had both direct and indirect negative effects on later academic achievement, but demonstrated only an indirect effect on later peer victimization. Finally, academic achievement had both direct and indirect negative effects on later peer victimization and depression. The findings show that there are cross-cultural similarities and differences in the various transactions that exist among peer victimization, depression, and academic achievement. Statement of contribution What is already known on this subject? Peer victimization directly and indirectly relates to depression and academic achievement. Depression directly and indirectly relates to academic achievement. Academic achievement directly and indirectly relates to depression. What the present study adds? A developmental cascade approach was used to assess the interrelations among peer victimization, depression, and academic achievement. Academic achievement mediates the relation between peer victimization and depression. Depression is related to peer victimization through academic achievement. Academic achievement directly and indirectly relates to peer victimization. Academic achievement is related to depression through peer victimization. © 2017 The British Psychological Society.
Myrdal, Paul B; Stein, Stephen W; Mogalian, Erik; Hoye, William; Gupta, Abhishek
2004-09-01
The product performance of a series of solution Metered Dose Inhalers (MDIs) was evaluated using the TSI Model 3306 Impactor Inlet and the Andersen Cascade Impactor (ACI). The goal of the study was to test whether the fine particle and coarse particle depositions obtained using the Model 3306 were comparable to the results obtained by ACI testing. The analysis using the Model 3306 was performed as supplied by the manufacturer, as well as with 20 cm and 40 cm vertical extensions inserted between the Model 3306 and the USP Inlet. Nine different solution formulations were evaluated. The drug concentrations ranged from 0.08 to 0.8% w/w and the ethanol cosolvent concentration varied between 5 and 20% w/w. In general, good correlations between the two instruments were obtained. However, for formulations containing 10-20% w/w ethanol, it is shown that an extension fitted to the Model 3306 yielded an improved correlation with the results obtained from the ACI.
Directory of Open Access Journals (Sweden)
Peilu Liu
2017-10-01
Full Text Available In order to improve the accuracy of ultrasonic phased array focusing time delay, by analyzing the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derive the general formula for an arbitrary-factor interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, 12.5% of the additions and 29.2% of the multiplications are eliminated, while computation remains fast. To address the known shortcomings of the CIC filter, we add a compensation filter; the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is larger. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is feasible. Owing to its fast computation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
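Leaving aside the paper's 8× parallel decomposition, compensation filter, and FPGA details, the basic structure of an N-stage CIC interpolator (comb stages at the low rate, zero-stuffing, integrator stages at the high rate) can be sketched as:

```python
def cic_interpolate(x, R=8, N=3):
    """N-stage CIC interpolating filter: comb (differentiator) stages at
    the low rate, zero-stuffing by R, then integrator stages at the high
    rate. Differential delay M = 1; DC gain is (R*M)**N / R."""
    for _ in range(N):                      # comb stages
        prev, out = 0, []
        for s in x:
            out.append(s - prev)
            prev = s
        x = out
    up = []                                 # zero-stuffing upsampler
    for s in x:
        up.append(s)
        up.extend([0] * (R - 1))
    for _ in range(N):                      # integrator stages
        acc, out = 0, []
        for s in up:
            acc += s
            out.append(acc)
        up = out
    return up

# a constant input passes through a single-stage CIC unchanged (gain 1)
y = cic_interpolate([1, 1, 1, 1], R=8, N=1)
```

CIC structures are popular in hardware precisely because, as here, they need only additions and subtractions, no multipliers.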
Jing, Zhao; Wu, Lixin; Ma, Xiaohui
2016-08-01
The authors regret that the Acknowledgements section in Jing et al. (2016) neglected to give proper credit to the model development team and to the intellectual work behind the model simulation and wish to add the following acknowledgements: We are very grateful to the developers of the coupled regional climate model (CRCM) used in this study. The CRCM was developed at Texas A&M University by Dr. Raffaele Montuoro under the direction of Dr. Ping Chang, with support from National Science Foundation Grants AGS-1067937 and AGS-1347808, Department of Energy Grant DE-SC0006824, as well as National Oceanic and Atmospheric Administration Grant NA11OAR4310154. The design of the reported CRCM simulations was led by Dr. Ping Chang and carried out by Dr. Xiaohui Ma as a part of her dissertation research under the supervision of Dr. Ping Chang, supported by National Science Foundation Grants AGS-1067937 and AGS-1347808. The authors would like to apologise for any inconvenience caused.
Evaluation of various interpolants available in DICE
Energy Technology Data Exchange (ETDEWEB)
Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non - pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
Hadron cascades produced by electromagnetic cascades
International Nuclear Information System (INIS)
Nelson, W.R.; Jenkins, T.M.; Ranft, J.
1986-12-01
A method for calculating high energy hadron cascades induced by multi-GeV electron and photon beams is described. Using the EGS4 computer program, high energy photons in the EM shower are allowed to interact hadronically according to the vector meson dominance (VMD) model, facilitated by a Monte Carlo version of the dual multistring fragmentation model used in the hadron cascade code FLUKA. The results of this calculation compare very favorably with experimental data on hadron production in photon-proton collisions and on hadron production by electron beams on targets (i.e., yields in secondary particle beam lines). Electron beam induced hadron star density contours are also presented and compared with those produced by proton beams. This FLUKA-EGS4 coupling technique could find use in the design of secondary beams, in the determination of high energy hadron source terms for shielding purposes, and in the estimation of induced radioactivity in targets, collimators and beam dumps.
Analysis of ECT Synchronization Performance Based on Different Interpolation Methods
Directory of Open Access Journals (Sweden)
Yang Zhixin
2014-01-01
Full Text Available There are two synchronization methods for electronic transformers in the IEC60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of electronic transformers can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of electronic transformers are computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
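Two of the listed methods can be sketched to show why higher-order interpolation improves synchronization precision: resampling samples of t² at an off-sample instant, the Lagrange quadratic recovers the exact value while the piecewise linear chord does not (cubic splines are omitted for brevity):

```python
def linear_interp(ts, vs, t):
    """Piecewise linear interpolation between sampled points."""
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return vs[i] * (1 - w) + vs[i + 1] * w
    raise ValueError("t outside sample range")

def quadratic_interp(ts, vs, t):
    """Lagrange quadratic through three consecutive samples."""
    (t0, t1, t2), (v0, v1, v2) = ts[:3], vs[:3]
    return (v0 * (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
            + v1 * (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
            + v2 * (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1)))

# signal y = t^2 sampled at t = 0, 1, 2; resample at t = 0.5
ts, vs = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
lin = linear_interp(ts, vs, 0.5)      # chord value
quad = quadratic_interp(ts, vs, 0.5)  # exact for a parabola
```

The trade-off the paper analyzes is exactly this: each step up in order costs more arithmetic per resampled point but reduces the synchronization error.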
de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo
2015-03-01
Experiments on continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D₁ = D₂ = 0.27-0.95 h⁻¹), a constant recycle ratio (α = F_R/F = 4.0) and a sugar concentration in the feed stream (S₀) around 150 g/L. The data obtained under these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor, and the kinetics of cell growth and product formation take into account limitation by substrate and inhibition by ethanol and biomass, as well as substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, the inhibition constants of cell growth and the specific rate of substrate consumption for cell maintenance. The mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed
2017-07-01
The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data based on ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, with the help of three adjacent rainfall stations, is obtained. The new climatological Z-R relationship with a power-law form shows acceptable statistical performance, making it suitable for radar-rainfall estimation by the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with coverage area of 512 × 512 km2 to a resolution of 32 × 32 km2. Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvement in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.
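The disaggregation step can be illustrated with a toy microcanonical multiplicative cascade: each coarse cell's rain volume is split among 2 × 2 sub-cells by random weights summing to one, so four dyadic levels take a single 512 km cell down to 32 km cells. The uniform Dirichlet weights below are a placeholder, not the heterogeneous generator calibrated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def disaggregate(field, levels):
    """Microcanonical multiplicative cascade: each cell's rain volume is
    split over a 2x2 block of sub-cells by random weights summing to 1,
    so total volume is conserved exactly at every level."""
    for _ in range(levels):
        n = field.shape[0]
        out = np.empty((2 * n, 2 * n))
        for i in range(n):
            for j in range(n):
                w = rng.dirichlet(np.ones(4)).reshape(2, 2)
                out[2*i:2*i + 2, 2*j:2*j + 2] = field[i, j] * w
        field = out
    return field

coarse = np.full((1, 1), 100.0)     # rain volume in one 512 km cell
fine = disaggregate(coarse, 4)      # 4 dyadic levels -> 16 x 16 (32 km cells)
print(fine.shape, float(fine.sum()))
```

Mass conservation across levels is the property that distinguishes microcanonical cascades from canonical ones, where weights only conserve volume on average.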
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the nearest neighbor, linear and cubic convolution interpolation based time reversal algorithms, providing higher imaging quality with significantly fewer measurement positions or scanning times.
Stochastic background of atmospheric cascades
International Nuclear Information System (INIS)
Wilk, G.; Wlodarczyk, Z.
1993-01-01
Fluctuations in the atmospheric cascades that develop during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from fluctuations in the cascades themselves and is insensitive to the details of elementary interactions.
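The branching model named in the abstract can be simulated directly. The sketch below is a Gillespie-style simulation of a pure birth process with immigration; the rates and time horizon are illustrative, chosen only to exercise the machinery. The sample mean should approach (nu/lam)(e^(lam·t) − 1).

```python
import random

def birth_immigration(t_max, lam, nu, seed):
    """Gillespie simulation of a pure birth process with immigration:
    each of the n current particles branches at rate lam, and new
    particles immigrate at constant rate nu (rates are illustrative)."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        rate = lam * n + nu          # total event rate in state n
        t += rng.expovariate(rate)   # time to the next event
        if t > t_max:
            return n
        n += 1                       # a birth or an immigration

samples = [birth_immigration(2.0, 1.0, 0.5, seed=s) for s in range(2000)]
mean = sum(samples) / len(samples)
print(mean)   # E[n(t)] = (nu/lam)(e^{lam*t} - 1), about 3.19 here
```

Comparing the width of the multiplicity distribution for different rate parameters is the kind of exercise the paper performs when contrasting gamma and hadronic families.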
Importance of interpolation and coincidence errors in data fusion
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
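Stripped of averaging kernels and a priori terms, the core of a complete-data-fusion-style combination is an inverse-covariance weighted mean. The sketch below fuses two hypothetical three-level profiles already on a common grid; the numbers are invented, and real CDF additionally propagates averaging kernels plus the interpolation and coincidence-error terms the paper introduces.

```python
import numpy as np

def fuse(profiles, covariances):
    """Inverse-covariance combination of profiles on a common grid: a
    stripped-down, CDF-flavoured fusion that ignores averaging kernels,
    a priori terms and the interpolation/coincidence corrections."""
    S_inv = [np.linalg.inv(S) for S in covariances]
    S_f = np.linalg.inv(sum(S_inv))                        # fused covariance
    x_f = S_f @ sum(Si @ x for Si, x in zip(S_inv, profiles))
    return x_f, S_f

x1 = np.array([1.0, 2.0, 3.0])      # hypothetical UV-retrieved profile
x2 = np.array([1.2, 1.8, 3.1])      # hypothetical TIR-retrieved profile
S1 = np.diag([0.04, 0.04, 0.04])    # looser error covariance
S2 = np.diag([0.01, 0.01, 0.01])    # tighter error covariance
x_f, S_f = fuse([x1, x2], [S1, S2])
print(x_f, np.diag(S_f))
```

The fused covariance diagonal is smaller than either input's, and the fused profile leans toward the lower-noise measurement, which is the behavior the quality analysis in the abstract quantifies.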
Interpolation of vector fields from human cardiac DT-MRI
Yang, F.; Zhu, Y. M.; Rapacchi, S.; Luo, J. H.; Robini, M.; Croisille, P.
2011-03-01
There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
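The thin plate spline model used for reconstruction and interpolation is available off the shelf. The sketch below fits a TPS to a scattered scalar field (a stand-in for the angle maps mentioned in the abstract) with SciPy's RBFInterpolator; the sample field and point count are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, size=(60, 2))                   # scattered sample sites
vals = np.sin(2 * pts[:, 0]) + np.cos(3 * pts[:, 1])    # synthetic smooth field

# Thin plate spline interpolant (smoothing defaults to 0,
# i.e. exact at the data sites)
tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

est = tps(np.array([[0.5, 0.5]]))[0]
true = np.sin(1.0) + np.cos(1.5)
print(est, true)
```

With zero smoothing the TPS reproduces the data exactly at the sample sites, which is why the paper can use the same machinery both to replace noise-corrupted vectors and to upsample the field.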
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory is required for the electronic structure calculation. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
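A one-dimensional caricature of the idea: fit a polyharmonic (cubic, r³) radial basis function interpolant, estimate the error at candidate midpoints, and add sample points where the estimate is worst until a tolerance is met. The model function and the direct-evaluation error estimate are stand-ins; the paper's algorithm works in higher dimensions and combines local interpolants through a partition of unity.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                         # stand-in for an expensive PES evaluation
    return np.exp(-x**2) * np.cos(3 * x)

def adaptive_fit(a, b, tol=1e-3, max_pts=60):
    """Greedy 1D node refinement with a polyharmonic (cubic) RBF:
    interpolate, estimate the error at interval midpoints, and add the
    worst midpoint as a new sample until the estimate drops below tol.
    (The error estimate here simply re-evaluates f; the paper uses a
    local estimate and a partition of unity in higher dimensions.)"""
    x = np.linspace(a, b, 5)
    while True:
        rbf = RBFInterpolator(x[:, None], f(x), kernel="cubic")
        mids = 0.5 * (x[:-1] + x[1:])
        err = np.abs(rbf(mids[:, None]) - f(mids))
        if err.max() < tol or len(x) >= max_pts:
            return x, rbf
        x = np.sort(np.append(x, mids[np.argmax(err)]))

nodes, rbf = adaptive_fit(-2.0, 2.0)
print(len(nodes))
```

The refinement naturally concentrates nodes where the surface has structure, which is the source of the sample-count savings the abstract reports.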
Atom-atom collision cascades localization
International Nuclear Information System (INIS)
Kirsanov, V.V.
1980-01-01
The influence of impurities and thermal vibrations on the development of atom-atom collision cascades is analysed by computer simulation (a modified dynamic model). It is found that relatively low-energy cascades become localized as the temperature of the irradiated crystal increases. On the basis of this effect, a mechanism for the splitting of high-energy cascades into subcascades is proposed, which accounts for two factors: the primary knock-on atom energy and the irradiated crystal temperature. Introduction of an impurity also localizes the cascades, independently of the impurity atom mass. Cascade localization intensifies annealing within the cascades and reduces the size of post-cascade vacancy clusters. (author)
Energy Technology Data Exchange (ETDEWEB)
Niemann, V.
1998-01-01
Homogeneous stratified turbulent shear flow was simulated numerically using the cascade model of Eggers and Grossmann (1991). The model is made applicable to homogeneous shear flow by transformation into a coordinate system that moves along with a basic flow with a constant vertical velocity gradient. The author simulated cases of stable thermal stratification with Richardson numbers in the range 0 ≤ Ri ≤ 1. The simulation data were evaluated with particular regard to the anisotropic characteristics of the turbulence field. Further, the results are compared with some common closure schemes up to second order. (orig.)
Muzy, Jean-François; Baïle, Rachel; Bacry, Emmanuel
2013-04-01
In this paper we propose a new model for volatility fluctuations in financial time series. The model relies on a nonstationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to those of continuous cascade models. These latter models are well known to reproduce faithfully the main stylized facts of financial time series, but they involve a large-scale parameter (the so-called "integral scale", where the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general deeply correlated with the overall length of the sample. This feature is precisely predicted by our model, which, as illustrated by various examples from daily stock index data, quantitatively reproduces the empirical observations.
Directory of Open Access Journals (Sweden)
Zhenling Yao
2008-01-01
Full Text Available Corticosteroid (CS) effects on insulin resistance related genes in rat skeletal muscle were studied. In our acute study, adrenalectomized (ADX) rats were given single doses of 50 mg/kg methylprednisolone (MPL) intravenously. In our chronic study, ADX rats were implanted with Alzet mini-pumps giving zero-order release rates of 0.3 mg/kg/h MPL and sacrificed at various times up to 7 days. Total RNA was extracted from gastrocnemius muscles and hybridized to Affymetrix GeneChips. Data mining and literature searches identified 6 insulin resistance related genes which exhibited complex regulatory pathways. Insulin receptor substrate-1 (IRS-1), uncoupling protein 3 (UCP3), pyruvate dehydrogenase kinase isoenzyme 4 (PDK4), fatty acid translocase (FAT) and glycerol-3-phosphate acyltransferase (GPAT) dynamic profiles were modeled with mutual effects by the calculated nuclear drug-receptor complex DR(N) and transcription factors. The oscillatory feature of endothelin-1 (ET-1) expression was depicted by a negative feedback loop. These integrated models provide testable quantitative hypotheses for these regulatory cascades.
Observability of the h-->bb channel in cascade decay of SUSY particles within the SUGRA model
Mitsou, V A
1999-01-01
The possibility of observing the lightest scalar Higgs boson through the decay h -> bb is studied within the SUGRA-constrained MSSM. All SUSY processes implemented in PYTHIA are simulated, including squark-squark, squark-gluino and gluino-gluino production. The h boson is mainly produced via the chi_2^0 -> chi_1^0 h cascade, with the chi_2^0 resulting either from decays of strongly produced squarks and gluinos, or from direct chargino/neutralino pair production. The fast simulation package ATLFAST is used to simulate the ATLAS detector. The b-tagged jets in high-ETmiss, multi-jet events are used to reconstruct the h -> bb decay. In several SUSY scenarios, a clean signal can be extracted above the SUSY and Standard Model backgrounds. The 5-sigma discovery contour curves in the SUGRA parameter space, scanned up to m_0 = 2000 GeV and m_{1/2} = 1000 GeV, are also shown.
Differential Interpolation Effects in Free Recall
Petrusic, William M.; Jamieson, Donald G.
1978-01-01
Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…
Transfinite C2 interpolant over triangles
International Nuclear Information System (INIS)
Alfeld, P.; Barnhill, R.E.
1984-01-01
A transfinite C² interpolant on a general triangle is created. The required data are essentially C², no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized into two different finite-dimensional C² interpolants in an appendix.
Interpolation of diffusion weighted imaging datasets
DEFF Research Database (Denmark)
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W
2014-01-01
by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue...
Analysis of velocity planning interpolation algorithm based on NURBS curve
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
To reduce the interpolation time and maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. Firstly, a second-order Taylor expansion is applied to the numerator of the rational NURBS curve representation. Then, the velocity planning is incorporated into the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
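The parameter-update step behind such interpolators can be sketched in a few lines. Below, a cubic Bezier stands in for a NURBS segment (to keep the example dependency-free), and the second-order term of the Taylor update u_{k+1} = u_k + u'T + u''T²/2, with u' = V/|C'(u)|, is approximated by differencing successive first-order increments. Feedrate, period and control points are invented.

```python
import numpy as np

# Cubic Bezier stands in for a NURBS segment; control points are invented.
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])

def C(u):
    b = np.array([(1 - u)**3, 3*u*(1 - u)**2, 3*u**2*(1 - u), u**3])
    return b @ P

def dC(u):
    b = np.array([-3*(1 - u)**2, 3*(1 - u)**2 - 6*u*(1 - u),
                  6*u*(1 - u) - 3*u**2, 3*u**2])
    return b @ P

def plan(V, T):
    """Taylor-based parameter update u_{k+1} = u_k + u'T + u''T^2/2 with
    u' = V/|C'(u)|; the u'' term is approximated by differencing
    successive first-order increments."""
    us, u = [0.0], 0.0
    du_prev = V * T / np.linalg.norm(dC(0.0))
    while u < 1.0:
        du = V * T / np.linalg.norm(dC(u))
        u += du + 0.5 * (du - du_prev)   # crude second-order correction
        du_prev = du
        us.append(min(u, 1.0))
    return np.array(us)

us = plan(V=2.0, T=0.01)                 # feedrate 2 units/s, 10 ms period
pts = np.array([C(u) for u in us])
steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
print(steps.min(), steps.max())
```

Except for the truncated final step, the chord lengths between consecutive interpolated points here stay within a few percent of V·T, which is the feedrate-fluctuation criterion such interpolators aim at.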
Matching interpolation of CT faulted images based on corresponding object
International Nuclear Information System (INIS)
Chen Lingna
2005-01-01
For the interpolation of CT faulted images, this paper presents a corresponding point matching interpolation algorithm based on object features. Compared with traditional interpolation algorithms, the new algorithm improves the visual effect and reduces the interpolation error. Computer experiments show that the algorithm effectively improves interpolation quality, producing notably clearer scenes at object boundaries. (authors)
The Paleoclimate Uncertainty Cascade: Tracking Proxy Errors Via Proxy System Models.
Emile-Geay, J.; Dee, S. G.; Evans, M. N.; Adkins, J. F.
2014-12-01
Paleoclimatic observations are, by nature, imperfect recorders of climate variables. Empirical approaches to their calibration are challenged by the presence of multiple sources of uncertainty, which may confound the interpretation of signals and the identifiability of the noise. In this talk, I will demonstrate the utility of proxy system models (PSMs, Evans et al, 2013, 10.1016/j.quascirev.2013.05.024) to quantify the impact of all known sources of uncertainty. PSMs explicitly encode the mechanistic knowledge of the physical, chemical, biological and geological processes from which paleoclimatic observations arise. PSMs may be divided into sensor, archive and observation components, all of which may conspire to obscure climate signals in actual paleo-observations. As an example, we couple a PSM for the δ18O of speleothem calcite to an isotope-enabled climate model (Dee et al, submitted) to analyze the potential of this measurement as a proxy for precipitation amount. A simple soil/karst model (Partin et al, 2013, 10.1130/G34718.1) is used as the sensor model, while a hiatus-permitting chronological model (Haslett & Parnell, 2008, 10.1111/j.1467-9876.2008.00623.x) is used as part of the observation model. This subdivision allows us to explicitly model the transformation from precipitation amount to speleothem calcite δ18O as a multi-stage process via a physical and chemical sensor model and a stochastic archive model. By illustrating the PSM's behavior within the context of the climate simulations, we show how estimates of climate variability may be affected by each submodel's transformation of the signal. By specifying idealized climate signals (periodic vs. episodic, slow vs. fast) to the PSM, we investigate how frequency and amplitude patterns are modulated by the sensor and archive submodels. To the extent that the PSM and the climate models are representative of real-world processes, the results may help us more accurately interpret existing paleodata.
Elliptic flow in a hadron-string cascade model at 130 GeV energy
Indian Academy of Sciences (India)
On the other hand, this model does not explain v2 at high pT or in peripheral collisions, and thus it generally underestimates the elliptic flow at RHIC energy. ... Hokkaido University, Sapporo 060-0810, Japan; Nuclear Data Center, Department of Nuclear Energy System, Japan Atomic Energy Research Institute, Tokai, ...
Laos Organization Name Using Cascaded Model Based on SVM and CRF
Directory of Open Access Journals (Sweden)
Duan Shaopeng
2017-01-01
Full Text Available According to the characteristics of Laos organization names, this paper proposes a two-layer model based on conditional random fields (CRF) and support vector machines (SVM) for Laos organization name recognition. The first layer uses CRF to recognize simple organization names, and its result is used to support the decision of the second layer. Based on this driving method, the second layer uses SVM and CRF to recognize complicated organization names. Finally, the results of the two layers are combined, and a subsequent post-processing step corrects low-confidence recognition results. Open tests on real linguistic data show that this SVM- and CRF-based approach is efficient in recognizing organization names, with a recall of 80.83% and a precision of 82.75%.
Srivastava, Dinesh K.; Bass, Steffen A.; Chatterjee, Rupa
2017-12-01
We study the production and dynamics of heavy quarks in the parton cascade model for relativistic heavy ion collisions. The model is motivated by the QCD parton picture and describes the dynamics of an ultrarelativistic heavy ion collision in terms of cascading partons which undergo scattering and multiplication while propagating. We focus on the dynamics of charm quark production and evolution in p +p and Au + Au collisions for several different interaction scenarios, viz., collisions only between primary partons without radiation of gluons, multiple collisions without radiation of gluons, and multiple collisions with radiation of gluons, allowing us to isolate the contributions of parton rescattering and radiation to charm production. We also discuss results of an eikonal approximation of the collision which provides a valuable comparison with minijet calculations and clearly brings out the importance of multiple collisions.
Sitnick, Stephanie L; Shaw, Daniel S; Hyde, Luke W
2014-02-01
This study examined developmentally salient risk and protective factors of adolescent substance use assessed during early childhood and early adolescence using a sample of 310 low-income boys. Child problem behavior and proximal family risk and protective factors (i.e., parenting and maternal depression) during early childhood, as well as child and family factors and peer deviant behavior during adolescence, were explored as potential precursors to later substance use during adolescence using structural equation modeling. Results revealed that early childhood risk and protective factors (i.e., child externalizing problems, mothers' depressive symptomatology, and nurturant parenting) were indirectly related to substance use at the age of 17 via risk and protective factors during early and middle adolescence (i.e., parental knowledge and externalizing problems). The implications of these findings for early prevention and intervention are discussed.
Accuracy of stream habitat interpolations across spatial scales
Sheehan, Kenneth R.; Welsh, Stuart A.
2013-01-01
Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting the insight gained from spatial analysis of stream habitat data, and considerable cost is expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 meter cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased, but was maintained up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. The results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
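The coarsening experiment is easy to reproduce in miniature. The sketch below builds a synthetic "depth" surface on a fine grid, subsamples it at two coarser spacings, interpolates back to the fine grid, and compares RMSE. SciPy has no natural neighbor interpolator, so bilinear interpolation stands in, and the surface itself is invented.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic "depth" surface on a fine grid (stand-in for the 0.33 m survey)
x = np.linspace(0, 30, 91)
y = np.linspace(0, 10, 31)
X, Y = np.meshgrid(x, y, indexing="ij")
depth = 1.5 + 0.5 * np.sin(X / 4.0) + 0.3 * np.cos(Y / 2.0)

def rmse_at_scale(step):
    """Subsample the fine grid by `step`, interpolate back to the fine
    grid (bilinear, as a stand-in for natural neighbor), report RMSE."""
    interp = RegularGridInterpolator((x[::step], y[::step]),
                                     depth[::step, ::step])
    est = interp(np.column_stack([X.ravel(), Y.ravel()]))
    return float(np.sqrt(np.mean((est - depth.ravel()) ** 2)))

r3, r5 = rmse_at_scale(3), rmse_at_scale(5)
print(r3, r5)   # error grows as the interpolation grid coarsens
```

Plotting RMSE against cell size for a sequence of subsampling factors reproduces, qualitatively, the logarithmic-linear decay pattern described in the abstract.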
Abnormal cascading failure spreading on complex networks
International Nuclear Information System (INIS)
Wang, Jianwei; Sun, Enhui; Xu, Bo; Li, Peng; Ni, Chengzhang
2016-01-01
Applying a mechanism of preferential selection of the flow destination, we develop a new method to quantify the initial load on an edge, where flow is transported along the path with the smallest total edge weight between two nodes. Taking the node weight into account, we propose a cascading model on edges and investigate the cascading dynamics induced by the removal of the edge with the largest load. We perform simulated attacks on four types of constructed networks and two real networks and observe an interesting and counterintuitive phenomenon of cascade spreading: gradually improving the capacity of nodes does not lead to a monotonic increase in the robustness of these networks against cascading failures. This non-monotonic behavior of the cascading dynamics is explained by analysis of a simple graph. We additionally study the effect of the node-weight parameter on the cascading dynamics and evaluate network robustness with a new metric.
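A generic load-capacity cascade (in the spirit of Motter and Lai, not the exact model of this paper) can be sketched with networkx: edge loads are betweenness values, capacities are (1 + alpha) times the initial load, and removing the most-loaded edge triggers rounds of overload failures. The graph, the alpha values and the attacked edge are illustrative.

```python
import networkx as nx

def cascade(G, alpha, attacked_edge):
    """Load-capacity cascade sketch: edge load = betweenness, capacity =
    (1 + alpha) * initial load. Removing one edge re-routes flow; edges
    whose new load exceeds capacity fail, and so on until the network
    stabilizes. Returns the number of failed edges."""
    load = nx.edge_betweenness_centrality(G)
    cap = {e: (1 + alpha) * l for e, l in load.items()}
    H = G.copy()
    H.remove_edge(*attacked_edge)
    failed = 1
    while True:
        load = nx.edge_betweenness_centrality(H)
        over = [e for e, l in load.items()
                if l > cap[e if e in cap else (e[1], e[0])]]
        if not over:
            return failed
        H.remove_edges_from(over)
        failed += len(over)

G = nx.barabasi_albert_graph(60, 2, seed=5)
target = max(nx.edge_betweenness_centrality(G).items(),
             key=lambda kv: kv[1])[0]          # attack the most-loaded edge
roomy = cascade(G, alpha=1.0, attacked_edge=target)   # ample spare capacity
tight = cascade(G, alpha=0.05, attacked_edge=target)  # tight capacity
print(roomy, tight)
```

Sweeping alpha and recording the failure count is how one would probe the non-monotonic robustness behavior the abstract reports.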
Wade, Mark; Madigan, Sheri; Plamondon, Andre; Rodrigues, Michelle; Browne, Dillon; Jenkins, Jennifer M
2017-12-21
Previous studies have demonstrated that various psychosocial risks are associated with poor cognitive functioning in children, and these risks frequently cluster together. In the current longitudinal study, we tested a model in which it was hypothesized that cumulative psychosocial adversity of mothers would have deleterious effects on children's cognitive functioning by compromising socialization processes within families (i.e., parental competence). A prospective community birth cohort of 501 families was recruited when children were newborns. At this time, mothers reported on their current psychosocial circumstances (socioeconomic status, teen parenthood, depression, etc.), which were summed into a cumulative risk score. Families were followed up at 18 months and 3 years, at which point maternal reflective capacity and cognitive sensitivity were measured, respectively. Child cognition (executive functioning, theory of mind, and language ability) was assessed at age 4.5 using age-appropriate observational and standardized tasks. Analyses controlled for child age, gender, number of children in the home, number of years married, and mothers' history of adversity. The results revealed significant declines in child cognition as well as maternal reflective capacity and cognitive sensitivity as the number of psychosocial risks increased. Moreover, longitudinal path analysis showed significant indirect effects from cumulative risk to all three cognitive outcomes via reflective capacity and cognitive sensitivity. Findings suggest that cumulative risk of mothers may partially account for child cognitive difficulties in various domains by disrupting key parental socialization competencies.
Frank, R.; Levine, A.; Dijk, O.
2014-01-01
Prevailing economic models of consumer behavior completely ignore the well-documented link between context and evaluation. We propose and test a theory that explicitly incorporates this link. Changes in one group's spending shift the frame of reference that defines consumption standards for others
Petersen, Marcell Elo; Maar, Marie; Larsen, Janus; Møller, Eva Friis; Hansen, Per Juel
2017-05-01
The aim of the study was to investigate the relative importance of bottom-up and top-down forcing on trophic cascades in the pelagic food web and the implications for water quality indicators (summer phytoplankton biomass and winter nutrients) in relation to management. The 3D ecological model ERGOM was validated and applied in a local set-up of the Kattegat, Denmark, using the off-line Flexsem framework. The model scenarios were conducted by changing the forcing by ± 20% of nutrient inputs (bottom-up) and mesozooplankton mortality (top-down), and both types of forcing combined. The model results showed that cascading effects operated differently depending on the forcing type. In the single-forcing bottom-up scenarios, the cascade directions were the same as the forcing direction. For scenarios involving top-down forcing, there was a skipped-level transmission in the trophic responses that was either attenuated or amplified at different trophic levels. On a seasonal scale, bottom-up forcing showed the strongest response during winter-spring for DIN and Chl a concentrations, whereas top-down forcing had the highest cascade strength during summer for Chl a concentrations and microzooplankton biomass. On an annual basis, the system was more bottom-up than top-down controlled. Microzooplankton was found to play an important role in the pelagic food web as a mediator of nutrient and energy fluxes. This study demonstrated that the best scenario for improved water quality was a combined reduction in nutrient input and mesozooplankton mortality, underlining the need for integrated management of marine areas exploited by human activities.
Semi-Lagrangian methods in air pollution models
Directory of Open Access Journals (Sweden)
A. B. Hansen
2011-06-01
Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties of Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.
The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).
Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation, and the locally mass conserving monotonic filter imposes monotonicity.
All schemes are tested with advection alone or with advection and chemistry together under both typical rural and urban conditions using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison only non-divergent flow configurations are tested.
The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.
The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
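The classical scheme listed first is compact enough to show in full: each grid point is traced back along the (here constant) wind to its departure point, and the field is evaluated there by cubic Lagrange interpolation on a periodic 1D grid. Grid size, wind speed and the Gaussian initial condition are invented. Note that with a spatially uniform wind the update matrix is circulant, so mass happens to be conserved exactly; the mass-conservation fixes discussed above matter for spatially varying flow.

```python
import numpy as np

def sl_step(c, u, dt, dx):
    """One classical semi-Lagrangian step for 1D periodic advection:
    trace each grid point back to its departure point and evaluate the
    field there with cubic Lagrange interpolation on a 4-point stencil."""
    n = len(c)
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)            # departure points (periodic)
    j = np.floor(xd / dx).astype(int)       # left neighbour index
    a = xd / dx - j                         # fractional position in [0, 1)
    w_m1 = -a * (a - 1) * (a - 2) / 6       # Lagrange weights at -1, 0, 1, 2
    w_0 = (a + 1) * (a - 1) * (a - 2) / 2
    w_p1 = -(a + 1) * a * (a - 2) / 2
    w_p2 = (a + 1) * a * (a - 1) / 6
    return (w_m1 * c[(j - 1) % n] + w_0 * c[j % n]
            + w_p1 * c[(j + 1) % n] + w_p2 * c[(j + 2) % n])

n, dx, u, dt = 200, 1.0, 0.7, 1.0
xg = np.arange(n) * dx
c = np.exp(-((xg - 50.0) ** 2) / 50.0)      # smooth initial tracer pulse
mass0 = c.sum()
for _ in range(200):                         # total displacement: 140 cells
    c = sl_step(c, u, dt, dx)
print(int(np.argmax(c)), float(c.max()))
```

Replacing the Gaussian with a slotted-cylinder profile exposes the overshoots that motivate the monotonic filter variant in the comparison above.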
Bahar, B; O'Doherty, J V; Vigors, S; Sweeney, T
2016-11-01
The technique of challenging postmortem tissue explants with an inflammation inducer such as lipopolysaccharide (LPS), followed by gene expression analysis, is widely used for evaluating the immune-suppressing effect of bioactives. Using porcine colonic tissue as an ex-vivo model of the mammalian gut, this study evaluated the effect of incubation time on the integrity of gene transcripts and the activation of the inflammatory immune gene cascade by LPS treatment. Post-slaughter colon was removed surgically and explants were incubated for 0, 3, 6 and 12 h, and the abundance of mRNA transcripts of a panel of 92 immune genes was evaluated using quantitative polymerase chain reaction (qPCR) arrays. The mRNA transcripts were highly intact after 0 and 3 h of incubation; however, after 6 h degradation was clearly evident. Following 3 h incubation, 98.8% and 100% of mRNA transcripts were detectable in the colonic tissue harvested from weaned and mature pigs, respectively. In the explants of weaned piglets, LPS treatment activated inflammatory signalling pathways [high mobility group B1 (HMGB1), dendritic cell maturation, interleukin (IL)-6, IL-8, IL-17F], while these pathways were inhibited by dexamethasone treatment. Activation of inflammatory genes was also evident in the explants collected from the mature pigs subjected to ex-vivo incubation for 3 h in the absence or presence of LPS. It is concluded that the colonic explant remains physiologically viable and responsive to immunological challenge for up to 3 h ex-vivo. © 2016 British Society for Immunology.
Traffic volume estimation using network interpolation techniques.
2013-12-01
The kriging method is a frequently used interpolation methodology in geography, which enables estimation of unknown values at certain places with consideration of the distances among locations. When it is used in the transportation field, network distanc...
Revisiting Veerman’s interpolation method
DEFF Research Database (Denmark)
Christiansen, Peter; Bay, Niels Oluf
2016-01-01
This article describes an investigation of Veerman’s interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman’s interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman’s method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.
Album of the month: Interpol "Antics". Records from the Lasering shop
2005-01-01
On the albums: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", and "Biscantorat - Sound of the spirit from Glenstal Abbey"
Interpol held a brainstorming meeting / Allan Espenberg
Espenberg, Allan
2008-01-01
Criminal-justice specialists from around the world gathered in Russia to elect a new leadership for the International Criminal Police Organization (Interpol) and to define its short- and long-term tasks
NOAA Optimum Interpolation (OI) SST V2
National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...
Interpolation of uniformly absolutely continuous operators
Czech Academy of Sciences Publication Activity Database
Cobos, F.; Gogatishvili, Amiran; Opic, B.; Pick, L.
2013-01-01
Roč. 286, 5-6 (2013), s. 579-599 ISSN 0025-584X R&D Projects: GA ČR GA201/08/0383 Institutional support: RVO:67985840 Keywords: uniformly absolutely continuous operators * interpolation * type of an interpolation method Subject RIV: BA - General Mathematics Impact factor: 0.658, year: 2013 http://onlinelibrary.wiley.com/doi/10.1002/mana.201100205/full
Integration and interpolation of sampled waveforms
International Nuclear Information System (INIS)
Stearns, S.D.
1978-01-01
Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed.
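The frequency-domain integration named in this abstract can be sketched in a few lines: transform the sampled waveform, divide each spectral component by iω, and transform back. The sketch below is only illustrative (the test signal and parameters are invented, and this is not Stearns' implementation):

```python
import numpy as np

def integrate_freq_domain(x, dt):
    """Integrate a sampled waveform by dividing its spectrum by i*omega.

    The DC bin is zeroed, so the result is the integral minus its mean
    (the constant of integration is indeterminate anyway).
    """
    n = len(x)
    X = np.fft.fft(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    with np.errstate(divide="ignore", invalid="ignore"):
        Y = np.where(omega != 0.0, X / (1j * omega), 0.0)
    return np.fft.ifft(Y).real

# Example: the integral of cos(2*pi*f*t) is sin(2*pi*f*t)/(2*pi*f)
f, dt, n = 5.0, 1e-3, 1000          # 5 Hz tone, 1 kHz sampling, 1 s record
t = np.arange(n) * dt
x = np.cos(2 * np.pi * f * t)
y = integrate_freq_domain(x, dt)
expected = np.sin(2 * np.pi * f * t) / (2 * np.pi * f)
```

The method is exact for periodic, band-limited records like this one; for transient seismic traces, windowing and detrending would be needed first.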
Interpolation for a subclass of H∞
Indian Academy of Sciences (India)
|g(z_m)| ≤ c |z_m − z*_m|, ∀m ∈ N. Thus it is natural to pose the following interpolation problem for H∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z*_n|, ∀n ∈ N, (4) there exists a product fg ∈ H∞ ...
Wolchik, Sharlene A; Tein, Jenn-Yun; Sandler, Irwin N; Kim, Han-Joe
2016-08-01
A developmental cascade model from functioning in adolescence to emerging adulthood was tested using data from a 15-year longitudinal follow-up of 240 emerging adults whose families participated in a randomized, experimental trial of a preventive program for divorced families. Families participated in the program or literature control condition when the offspring were ages 9-12. Short-term follow-ups were conducted 3 months and 6 months following completion of the program when the offspring were in late childhood/early adolescence. Long-term follow-ups were conducted 6 years and 15 years after program completion when the offspring were in middle to late adolescence and emerging adulthood, respectively. It was hypothesized that the impact of the program on mental health and substance use outcomes in emerging adulthood would be explained by developmental cascade effects of program effects in adolescence. The results provided support for a cascade effects model. Specifically, academic competence in adolescence had cross-domain effects on internalizing problems and externalizing problems in emerging adulthood. In addition, adaptive coping in adolescence was significantly, negatively related to binge drinking. It was unexpected that internalizing symptoms in adolescence were significantly negatively related to marijuana use and alcohol use. Gender differences occurred in the links between mental health problems and substance use in adolescence and mental health problems and substance use in emerging adulthood.
Calculation of electromagnetic parameter based on interpolation algorithm
International Nuclear Information System (INIS)
Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan
2015-01-01
Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two interpolation methods, Lagrange interpolation and Hermite interpolation, applied to the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • The interpolation method can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment.
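To reproduce the comparison in outline: the snippet below interpolates a toy dispersion-like curve with a global Lagrange polynomial and with a piecewise cubic Hermite spline, which additionally uses slope information. The curve, node positions, and use of analytic derivatives are all assumptions for illustration, not the paper's measured data:

```python
import numpy as np
from scipy.interpolate import lagrange, CubicHermiteSpline

# Sample a smooth "permittivity-like" curve at a few frequencies.
f = np.linspace(1.0, 10.0, 6)                # sparse sample points (toy GHz axis)
eps = 3.0 + 2.0 / f                          # invented dispersion curve
deps = -2.0 / f**2                           # its analytic derivative

poly = lagrange(f, eps)                      # single global polynomial
hermite = CubicHermiteSpline(f, eps, deps)   # piecewise cubic using slopes

f_fine = np.linspace(1.0, 10.0, 181)
true = 3.0 + 2.0 / f_fine
err_lagrange = np.max(np.abs(poly(f_fine) - true))
err_hermite = np.max(np.abs(hermite(f_fine) - true))
```

Both interpolants reproduce the sample values exactly; for smooth data with reliable derivative information, the Hermite error is typically the smaller of the two, consistent with the paper's finding.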
Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI
Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.
2015-01-01
Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
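The idea of interpolating tensor invariants linearly while taking orientation from the Euclidean blend can be sketched as follows. This is a simplified reading of the abstract, not the published LI algorithm, and the example tensors are invented:

```python
import numpy as np

def li_interp(T0, T1, t):
    """Sketch of linear-invariant (LI) tensor interpolation.

    Eigenvalues (shape invariants) are interpolated linearly, while the
    eigenvectors (orientation) are taken from the Euclidean-interpolated
    tensor; details of the published method may differ.
    """
    w0 = np.linalg.eigvalsh(T0)
    w1 = np.linalg.eigvalsh(T1)
    w = (1 - t) * w0 + t * w1                # linear in the invariants
    Te = (1 - t) * T0 + t * T1               # Euclidean interpolation
    _, V = np.linalg.eigh(Te)                # orientation only
    return V @ np.diag(w) @ V.T

# Two diagonal "diffusion tensors" with the same shape, different orientation
T0 = np.diag([3.0, 1.0, 1.0])
T1 = np.diag([1.0, 1.0, 3.0])
Tm = li_interp(T0, T1, 0.5)
```

The contrast with plain Euclidean interpolation is visible here: the Euclidean blend diag(2, 1, 2) has eigenvalues (1, 2, 2), a changed tensor shape, whereas the LI result keeps the shared eigenvalues (1, 1, 3).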
Contingency Analysis of Cascading Line Outage Events
Energy Technology Data Exchange (ETDEWEB)
Thomas L Baldwin; Magdy S Tawfik; Miles McQueen
2011-03-01
As the US power systems continue to increase in size and complexity, including the growth of smart grids, larger blackouts due to cascading outages become more likely. Grid congestion is often associated with a cascading collapse leading to a major blackout. Such a collapse is characterized by a self-sustaining sequence of line outages followed by a topology breakup of the network. This paper addresses the implementation and testing of a process for N-k contingency analysis and sequential cascading outage simulation in order to identify potential cascading modes. A modeling approach described in this paper offers a unique capability to identify initiating events that may lead to cascading outages. It predicts the development of cascading events by identifying and visualizing potential cascading tiers. The proposed approach was implemented using a 328-bus simplified SERC power system network. The results of the study indicate that initiating events and possible cascading chains may be identified, ranked and visualized. This approach may be used to improve the reliability of a transmission grid and reduce its vulnerability to cascading outages.
Dynamic Stability Analysis Using High-Order Interpolation
Directory of Open Access Journals (Sweden)
Juarez-Toledo C.
2012-10-01
A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluating the critical conditions of the dynamic system. The technique is applied to a 5-area, 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
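Newton's divided differences, one of the two building blocks named in this abstract, can be sketched compactly; the sample points below are hypothetical, chosen from the cubic t³ + t + 1 so the result is easy to check:

```python
import numpy as np

def divided_differences(x, y):
    """Newton's divided-difference coefficients (top edge of the table)."""
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # work bottom-up so coef[i-1] still holds the previous column
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    return coef

def newton_eval(x, coef, t):
    """Evaluate the Newton-form polynomial at t via Horner's scheme."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (t - x[i]) + coef[i]
    return result

x = [0.0, 1.0, 2.0, 4.0]
y = [1.0, 3.0, 11.0, 69.0]      # samples of t**3 + t + 1 (hypothetical data)
c = divided_differences(x, y)   # -> [1.0, 2.0, 3.0, 1.0]
```

With the coefficients in hand, `newton_eval(x, c, 3.0)` recovers 3³ + 3 + 1 = 31, since a degree-3 Newton polynomial through four samples of a cubic is the cubic itself.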
Peter, Noel Aaron; Pandit, Hermant; Le, Grace; Nduhiu, Mathenge; Moro, Emmanuel; Lavy, Christopher
2015-04-27
Injury accounts for 267 000 deaths annually in the nine College of Surgeons of East, Central, and Southern Africa (COSECSA-ASESA) countries, and the introduction of a sustainable standardised trauma training programme across all cadres is essential. We have delivered a primary trauma care (PTC) programme that encompasses both a "provider" and a "training the trainers" course using a "cascading training model" across nine COSECSA countries. The first "primary course" in each country is delivered by a team of UK instructors, followed by "cascading courses" to more rural regions led by newly qualified local instructors, with mentorship provided by UK instructors. This study examines the programme's effectiveness in terms of knowledge, clinical confidence, and cost-effectiveness. We collected pre-training and post-training data from 1030 candidates (119 clinical officers, 540 doctors, 260 nurses, and 111 medical students) trained over 28 courses (nine primary and 19 cascading courses) between Dec 5, 2012, and Dec 19, 2013. Knowledge was assessed with a validated PTC multiple choice questionnaire and clinical confidence ratings of eight trauma scenarios, measured against covariants of sex, age, clinical experience, job roles, country, and health institution's workload. Post-training, a significant improvement was noted across all cadres in knowledge (19% [95% CI 18·0-19·5]; p...) ... sub-Saharan Africa. Our study supports the concept of cascading courses as an educationally and cost-effective method of delivering vital trauma training in low-resource settings led by local clinicians. Funding: Health Partnership Scheme through the UK Department for International Development (DFID). Copyright © 2015 Elsevier Ltd. All rights reserved.
Interpolated pressure laws in two-fluid simulations and hyperbolicity
Helluy, Philippe; Jung, Jonathan
2014-01-01
We consider a two-fluid compressible flow. Each fluid obeys a stiffened gas pressure law. The continuous model is well defined without considering mixture regions. However, for numerical applications it is often necessary to consider artificial mixtures, because the two-fluid interface is diffused by the numerical scheme. We show that classic pressure law interpolations lead to a non-convex hyperbolicity domain and failure of well-known numerical schemes. We propose a physically relevant pres...
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.
Steiner, J. F.; Siegfried, T.; Yakovlev, A.
2014-12-01
In the Amu Darya River Basin in Central Asia, the Vakhsh catchment in Tajikistan is a major source of hydropower energy for the country. With a number of large dams already constructed, upstream Tajikistan is interested in the construction of one more large dam and a number of smaller storage facilities, with the prospect of supplying its neighboring states with hydropower through a newly planned power grid. The impact of new storage facilities along the river is difficult to estimate and causes considerable concern and consternation among the downstream users. Today, it is one of the vexing poster-child studies in international water conflict that awaits resolution. With a lack of meteorological data and a complex topography that makes the application of remotely sensed data difficult, it is a challenge to model runoff correctly. Large parts of the catchment are glacierized, with elevations ranging from just 500 m asl to peaks above 7000 m asl. Based on in-situ time series for temperature and precipitation, we find local correction factors for remotely sensed products. Using this data, we employ a model based on the Budyko framework with an extension for snow and ice in the higher altitude bands. The model furthermore accounts for groundwater and soil storage. Runoff data from a number of stations are used for the calibration of the model parameters. With an accurate representation of the existing and planned reservoirs in the Vakhsh cascade, we study the potential impacts of the construction of the new large reservoir on the river. Impacts are measured in terms of (a) the timing and availability of new hydropower energy, also in light of its potential for export to South Asia, (b) shifting challenges with regard to river sediment loads and siltation of reservoirs, and (c) impacts on downstream runoff and the timely availability of irrigation water there. With our coupled hydro-climatological approach, the challenges of optimal cascade management can be addressed so as to minimize detrimental
2010-07-01
28 CFR § 16.103 (Judicial Administration, 2010-07-01): Exemption of the INTERPOL-United States National Central Bureau (INTERPOL-USNCB) System. Department of Justice, Privacy Act ...
Directory of Open Access Journals (Sweden)
S. Tessitore
2015-11-01
Subsidence is a hazard that may have natural or anthropogenic origin, causing important economic losses. The area of Murcia city (SE Spain) has been affected by subsidence due to groundwater overexploitation since the year 1992. The main observed historical piezometric level declines occurred in the periods 1982–1984, 1992–1995 and 2004–2008 and showed a close correlation with the temporal evolution of ground displacements. Since 2008, the pressure recovery in the aquifer has led to an uplift of the ground surface that has been detected by the extensometers. In the present work an elastic hydro-mechanical finite element code has been used to compute the subsidence time series for 24 geotechnical boreholes, prescribing the measured groundwater table evolution. The achieved results have been compared with the displacements estimated through an advanced DInSAR technique and measured by the extensometers. These spatio-temporal comparisons have shown that, in spite of the limited geomechanical data available, the model satisfactorily reproduces the subsidence phenomenon affecting Murcia city. The model will allow the prediction of future induced deformations and of the consequences of any piezometric level variation in the study area.
Marquez, Enrique Salvador; Hare, Jonathon; Niranjan, Mahesan
2018-01-01
In this paper, we propose a novel approach for efficient training of deep neural networks in a bottom-up fashion using a layered structure. Our algorithm, which we refer to as Deep Cascade Learning, is motivated by the Cascade Correlation approach of Fahlman who introduced it in the context of perceptrons. We demonstrate our algorithm on networks of convolutional layers, though its applicability is more general. Such training of deep networks in a cascade, directly circumvents the well-know...
Hollow Anode Cascading Plasma Focus | Alabraba | Journal of the ...
African Journals Online (AJOL)
Using the 3-phase model for each focus event, the 9-phase, two solid disc auxiliary anode cascading plasma focus has been extended to include holes at the center of each cascade anode (hereafter referred to as hollow anode cascading focus) with a view of increasing the neutron yield with each focus event. Results ...
Interpolation of quasi-Banach spaces
International Nuclear Information System (INIS)
Tabacco Vignati, A.M.
1986-01-01
This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces.
Image interpolation via graph-based Bayesian label propagation.
Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun
2014-03-01
In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the problem of interpolation then becomes how to effectively propagate the label information from known points to unknown ones. This process can be posed as a Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term will be minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function to combine together the global loss of the locally linear regression, square error of prediction bias on the available LR samples, and the manifold regularization term. It can be solved with a closed-form solution as a convex optimization problem. Experimental results demonstrate that the proposed method achieves competitive performance with the state-of-the-art image interpolation algorithms.
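A stripped-down relative of this idea, with the local regression models and Bayesian machinery omitted, is harmonic interpolation on a graph: propagate known labels to unknown vertices by minimizing a Laplacian smoothness energy, which has a closed-form solution. The graph weights and signal below are invented for illustration:

```python
import numpy as np

def laplacian_interpolate(values, known_mask, W):
    """Fill unknown samples by harmonic (graph-Laplacian) interpolation.

    Minimizes sum_ij W_ij (f_i - f_j)^2 subject to f matching the known
    samples; a simplified cousin of Bayesian label propagation, with no
    local regression models.
    """
    L = np.diag(W.sum(axis=1)) - W             # combinatorial Laplacian
    u = ~known_mask                            # unknown vertices
    k = known_mask                             # known vertices
    f = values.astype(float)
    # First-order optimality: L_uu f_u = -L_uk f_k
    f[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, k)] @ values[k])
    return f

# Chain graph of 5 samples; the middle three are unknown
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
vals = np.array([0.0, 0.0, 0.0, 0.0, 8.0])
mask = np.array([True, False, False, False, True])
filled = laplacian_interpolate(vals, mask, W)   # -> [0, 2, 4, 6, 8]
```

On a chain graph the harmonic solution is exactly linear interpolation between the known endpoints; on an image grid with similarity-based weights it becomes edge-aware.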
Multiscale empirical interpolation for solving nonlinear PDEs
Calo, Victor M.
2014-12-01
In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully resolved fine-scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully implicit time-marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
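The greedy point selection at the heart of empirical interpolation can be sketched as follows. This is the classic discrete empirical interpolation (DEIM) variant, and the basis is a toy sin/cos set rather than GMsFEM snapshots:

```python
import numpy as np

def deim_points(U):
    """Greedy selection of interpolation indices for a basis U (n x m),
    following the discrete empirical interpolation method (DEIM)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        P = np.array(idx)
        # residual of the j-th basis vector after interpolating on idx
        c = np.linalg.solve(U[P, :j], U[P, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Toy basis: sampled trigonometric modes (illustrative, not PDE snapshots)
x = np.linspace(0.0, 1.0, 50)
U = np.column_stack([np.sin(np.pi * x),
                     np.sin(2 * np.pi * x),
                     np.cos(np.pi * x)])
pts = deim_points(U)

# Reconstruct a function in span(U) from its values at the 3 points only
f = 2 * np.sin(np.pi * x) - np.cos(np.pi * x)
coeff = np.linalg.solve(U[pts, :], f[pts])
approx = U @ coeff
```

Because `f` lies in the span of the basis, the reconstruction from just three sampled entries is exact; for general nonlinear functions the same mechanism gives the cheap approximate residual and Jacobian evaluations described in the abstract.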
Positivity Preserving Interpolation Using Rational Bicubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2015-01-01
This paper discusses the positivity preserving interpolation for positive surfaces data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate cases. The partially blended rational bicubic spline has 12 parameters in the descriptions, where 8 of them are free parameters. The sufficient conditions for the positivity are derived on every four boundary curves network on the rectangular patch. Numerical comparison with existing schemes also has been done in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.
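Why positivity preservation matters is easiest to see by contrast with an unconstrained spline. The sketch below uses SciPy's PCHIP, a different shape-preserving interpolant than the authors' rational bicubic scheme, purely to illustrate the overshoot problem being solved; the data are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Strictly positive data with one sharp peak (values are invented).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.1, 10.0, 0.1, 0.1])

xs = np.linspace(0.0, 4.0, 401)
spline_min = CubicSpline(x, y)(xs).min()        # ordinary C2 cubic spline
pchip_min = PchipInterpolator(x, y)(xs).min()   # shape-preserving cubic

# The unconstrained spline undershoots below zero next to the peak,
# while the shape-preserving interpolant never leaves the data range.
```

Shape-preserving schemes buy this guarantee by relaxing C2 continuity to C1, the same trade-off class as the rational cubic interpolant extended in the paper.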
DEFF Research Database (Denmark)
Petersen, Marcell Elo; Maar, Marie; Larsen, Janus
2017-01-01
The aim of the study was to investigate the relative importance of bottom-up and top-down forcing on trophic cascades in the pelagic food-web and the implications for water quality indicators (summer phytoplankton biomass and winter nutrients) in relation to management. The 3D ecological model...... On an annual basis, the system was more bottom-up than top-down controlled. Microzooplankton was found to play an important role in the pelagic food web as a mediator of nutrient and energy fluxes. This study demonstrated that the best scenario for improved water quality was a combined reduction in nutrient......
Cascaded automatic target recognition (Cascaded ATR)
Walls, Bradley
2010-04-01
The global war on terror has plunged US and coalition forces into a battle space requiring the continuous adaptation of tactics and technologies to cope with an elusive enemy. As a result, technologies that enhance the intelligence, surveillance, and reconnaissance (ISR) mission making the warfighter more effective are experiencing increased interest. In this paper we show how a new generation of smart cameras built around foveated sensing makes possible a powerful ISR technique termed Cascaded ATR. Foveated sensing is an innovative optical concept in which a single aperture captures two distinct fields of view. In Cascaded ATR, foveated sensing is used to provide a coarse resolution, persistent surveillance, wide field of view (WFOV) detector to accomplish detection level perception. At the same time, within the foveated sensor, these detection locations are passed as a cue to a steerable, high fidelity, narrow field of view (NFOV) detector to perform recognition level perception. Two new ISR mission scenarios, utilizing Cascaded ATR, are proposed.
Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José
2014-07-01
Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island with strong elevation and climatic gradients represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic index (Io), in order to obtain appropriate bioclimatic maps by using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined by the error statistics of test data derived from comparison of the predicted and measured values. Best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals, showing the highest R² of the regression between observed and predicted values and the lowest root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island since it permits one to fully account for easily available geographic information but also takes into account local variation of climatic data.
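The simplest of the five techniques compared, inverse distance weighting, can be sketched as follows (station coordinates and temperatures are invented):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: a weighted mean of station values,
    with weights 1/d**power. Queries landing exactly on a station
    return that station's value."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    z = (w * z_known).sum(axis=1) / w.sum(axis=1)
    exact = d.min(axis=1) < 1e-12
    z[exact] = z_known[d.argmin(axis=1)[exact]]
    return z

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([10.0, 12.0, 14.0, 16.0])
query = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(stations, temps, query)   # -> [13.0, 10.0]
```

Unlike the MLR-plus-residual-kriging approach favoured in the paper, IDW ignores covariates such as elevation, which is exactly why it underperforms on steep oceanic islands.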
PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0
Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu
2013-05-01
We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333.) to the new issue of PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are originally put on the circle with radius pT randomly. Now they are put on the circumference of an ellipse with semi-major and semi-minor axes of pT(1+δp) and pT(1-δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0 Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai Program title: PACIAE version 2.1 Journal reference: Catalogue identifier: Licensing provisions: none Programming language: FORTRAN 77 or GFORTRAN Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler Operating system: Linux or Windows with FORTRAN 77 or GFORTRAN compiler RAM: ≈ 1 GB Number of processors used: Supplementary material: Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model Classification: 11.1, 17.8 External routines/libraries: Subprograms used: Catalogue identifier of previous version: aeki_v1_0* Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes* Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle with radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: The px and py components of a hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with
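The elliptical placement of the transverse-momentum components described above can be sketched as below. The uniform angle sampling is an assumption, since the abstract does not state PACIAE's exact prescription (and the original is FORTRAN, not Python):

```python
import numpy as np

def sample_pt_components(pT, delta_p, rng):
    """Place (px, py) on an ellipse with semi-axes pT*(1+delta_p) and
    pT*(1-delta_p), mimicking the PACIAE 2.1 modification of PYTHIA's
    circular placement. Uniform angle sampling is an assumption here.
    """
    phi = rng.uniform(0.0, 2.0 * np.pi)
    px = pT * (1.0 + delta_p) * np.cos(phi)
    py = pT * (1.0 - delta_p) * np.sin(phi)
    return px, py

rng = np.random.default_rng(0)
samples = np.array([sample_pt_components(1.0, 0.2, rng) for _ in range(10000)])
# every sample satisfies (px/1.2)**2 + (py/0.8)**2 == 1
```

With delta_p = 0 this reduces to PYTHIA's original circle; a positive delta_p builds in the px/py anisotropy the update is designed to preserve.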
The Diffraction Response Interpolation Method
DEFF Research Database (Denmark)
Jespersen, Søren Kragh; Wilhjelm, Jens Erik; Pedersen, Peder C.
1998-01-01
Computer modeling of the output voltage in a pulse-echo system is computationally very demanding, particularly when considering reflector surfaces of arbitrary geometry. A new, efficient computational tool, the diffraction response interpolation method (DRIM), for modeling of reflectors in a fluid medium, is presented. The DRIM is based on the velocity potential impulse response method, adapted to pulse-echo applications by the use of acoustical reciprocity. Specifically, the DRIM operates by dividing the reflector surface into planar elements, finding the diffraction response at the corners...
Importance of interpolation and coincidence errors in data fusion
Directory of Open Access Journals (Sweden)
S. Ceccherini
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
Directory of Open Access Journals (Sweden)
Anna Gamell
Full Text Available Strategies to improve the uptake of Prevention of Mother-To-Child Transmission of HIV (PMTCT) are needed. We integrated HIV and maternal, newborn and child health services in a One Stop Clinic to improve the PMTCT cascade in a rural Tanzanian setting. The One Stop Clinic of Ifakara offers integrated care to HIV-infected pregnant women and their families at one single place and time. All pregnant women and HIV-exposed infants attended during the first year of Option B+ implementation (04/2014-03/2015) were included. PMTCT was assessed at the antenatal clinic (ANC), HIV care and labour ward, and compared with the pre-B+ period. We also characterised HIV-infected pregnant women and evaluated the MTCT rate. 1,579 women attended the ANC. Seven (0.4%) were known to be HIV-infected. Of the remainder, 98.5% (1,548/1,572) were offered an HIV test, 94% (1,456/1,548) accepted and 38 (2.6%) tested HIV-positive. 51 were re-screened for HIV during late pregnancy and one had seroconverted. The HIV prevalence at the ANC was 3.1% (46/1,463). Of the 39 newly diagnosed women, 35 (90%) were linked to care. HIV test was offered to >98% of ANC clients during both the pre- and post-B+ periods. During the post-B+ period, test acceptance (94% versus 90.5%, p<0.0001) and linkage to care (90% versus 26%, p<0.0001) increased. Ten additional women diagnosed outside the ANC were linked to care. 82% (37/45) of these newly-enrolled women started antiretroviral treatment (ART). After a median time of 17 months, 27% (12/45) were lost to follow-up. 79 women under HIV care became pregnant and all received ART. After a median follow-up time of 19 months, 6% (5/79) had been lost. 5,727 women delivered at the hospital, 20% (1,155/5,727) had unknown HIV serostatus. Of these, 30% (345/1,155) were tested for HIV, and 18/345 (5.2%) were HIV-positive. Compared to the pre-B+ period more women were tested during labour (30% versus 2.4%, p<0.0001). During the study, the MTCT rate was 2.2%. The implementation of
Dzurisin, Daniel; Lisowski, Michael; Wicks, Charles W.; Poland, Michael P.; Endo, Elliot T.
2006-01-01
Tumescence at the Three Sisters volcanic center began sometime between summer 1996 and summer 1998 and was discovered in April 2001 using interferometric synthetic aperture radar (InSAR). Swelling is centered about 5 km west of the summit of South Sister, a composite basaltic-andesite to rhyolite volcano that last erupted between 2200 and 2000 yr ago, and it affects an area ∼20 km in diameter within the Three Sisters Wilderness. Yearly InSAR observations show that the average maximum displacement rate was 3–5 cm/yr through summer 2001, and the velocity of a continuous GPS station within the deforming area was essentially constant from June 2001 to June 2004. The background level of seismic activity has been low, suggesting that temperatures in the source region are high enough or the strain rate has been low enough to favor plastic deformation over brittle failure. A swarm of about 300 small earthquakes (Mmax = 1.9) in the northeast quadrant of the deforming area on March 23–26, 2004, was the first notable seismicity in the area for at least two decades. The U.S. Geological Survey (USGS) established tilt-leveling and EDM networks at South Sister in 1985–1986, resurveyed them in 2001, the latter with GPS, and extended them to cover more of the deforming area. The 2001 tilt-leveling results are consistent with the inference drawn from InSAR that the current deformation episode did not start before 1996, i.e., the amount of deformation during 1995–2001 from InSAR fully accounts for the net tilt at South Sister during 1985–2001 from tilt-leveling. Subsequent InSAR, GPS, and leveling observations constrain the source location, geometry, and inflation rate as a function of time. A best-fit source model derived from simultaneous inversion of all three datasets is a dipping sill located 6.5 ± 2.5 km below the surface with a volume increase of 5.0 × 10⁶ ± 1.5 × 10⁶ m³/yr (95% confidence limits). The most likely cause of tumescence is a pulse of basaltic magma
International Nuclear Information System (INIS)
Webb, J F; Yong, K S C; Haldar, M K
2014-01-01
An equivalent circuit simulation of a two-level rate equation model for quantum cascade laser (QCL) materials is used to study the turn-on delay and rise time for three QCLs with 5 micron, 9 micron and terahertz-range wavelengths. This requires a model that can handle large-signal responses rather than being restricted to small-signal responses; the model used here is capable of this. The effect of varying some of the characteristic times in the model is also investigated. The comparison of the terahertz QCL with the others is particularly important given the increased interest in terahertz sources, which have a large range of important applications, such as in medical imaging.
Mechanisms of cascade collapse
International Nuclear Information System (INIS)
Diaz de la Rubia, T.; Smalinskas, K.; Averback, R.S.; Robertson, I.M.; Hseih, H.; Benedek, R.
1988-12-01
The spontaneous collapse of energetic displacement cascades in metals into vacancy dislocation loops has been investigated by molecular dynamics (MD) computer simulation and transmission electron microscopy (TEM). Simulations of 5 keV recoil events in Cu and Ni provide the following scenario of cascade collapse: atoms are ejected from the central region of the cascade by replacement collision sequences; the central region subsequently melts; vacancies are driven to the center of the cascade during resolidification where they may collapse into loops. Whether or not collapse occurs depends critically on the melting temperature of the metal and the energy density and total energy in the cascade. Results of TEM are presented in support of this mechanism. 14 refs., 4 figs., 1 tab
Image coding using adaptive recursive interpolative DPCM.
Gifford, E A; Hunt, B R; Marcellin, M W
1995-01-01
A predictive image coder having minimal decoder complexity is presented. The image coder utilizes recursive interpolative DPCM in conjunction with adaptive classification, entropy-constrained trellis coded quantization, and optimal rate allocation to obtain signal-to-noise ratios (SNRs) in the range of those provided by the most advanced transform coders.
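The interpolative DPCM in the abstract above operates recursively over a pixel hierarchy with adaptive classification and trellis-coded quantization; the sketch below shows only the underlying idea of plain first-order DPCM with a quantized prediction residual (an illustrative simplification, not the coder of the paper). Note the encoder tracks the decoder's reconstruction so quantization error cannot accumulate.

```python
def dpcm_encode(signal, quantize=round):
    """First-order predictive (DPCM) encoder: quantize the prediction
    residual and track the decoder-side reconstruction to avoid drift."""
    residuals = []
    recon_prev = 0.0
    for x in signal:
        e = quantize(x - recon_prev)   # quantized prediction residual
        residuals.append(e)
        recon_prev = recon_prev + e    # what the decoder will reconstruct
    return residuals

def dpcm_decode(residuals):
    """Invert the encoder by accumulating the residuals."""
    out, prev = [], 0.0
    for e in residuals:
        prev += e
        out.append(prev)
    return out
```

With an integer-valued input and integer rounding as the quantizer, the round trip is lossless; a coarser quantizer trades reconstruction error for rate.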
Interpolation of intermolecular potentials using Gaussian processes
Uteva, Elena; Graham, Richard S.; Wilkinson, Richard D.; Wheatley, Richard J.
2017-10-01
A procedure is proposed to produce intermolecular potential energy surfaces from limited data. The procedure involves generation of geometrical configurations using a Latin hypercube design, with a maximin criterion, based on inverse internuclear distances. Gaussian processes are used to interpolate the data, using over-specified inverse molecular distances as covariates, greatly improving the interpolation. Symmetric covariance functions are specified so that the interpolation surface obeys all relevant symmetries, reducing prediction errors. The interpolation scheme can be applied to many important molecular interactions with trivial modifications. Results are presented for three systems involving CO2, a system with a deep energy minimum (HF-HF), and a system with 48 symmetries (CH4-N2). In each case, the procedure accurately predicts an independent test set. Training this method with high-precision ab initio evaluations of the CO2-CO interaction enables a parameter-free, first-principles prediction of the CO2-CO cross virial coefficient that agrees very well with experiments.
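A toy version of Gaussian-process interpolation of a potential energy curve, using inverse distance as the covariate as the paper suggests (generic squared-exponential kernel and illustrative parameters; the paper's symmetrized covariance functions and Latin hypercube design are not reproduced here):

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_test, length=0.5, noise=1e-10):
    """Gaussian-process (kriging) mean prediction with a squared-exponential
    kernel; exact interpolation in the small-noise limit."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return k(x_test, x_train) @ alpha

# Train on inverse internuclear distance, the covariate the paper recommends.
r = np.array([1.0, 1.5, 2.0, 3.0, 5.0])
energy = 1.0 / r**12 - 2.0 / r**6     # Lennard-Jones-like toy potential
pred = gp_interpolate(1.0 / r, energy, 1.0 / r)
```

At the training points the GP mean reproduces the data, which is the interpolation property exploited for potential energy surfaces.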
Statistical investigation of hydraulic driven circular interpolation ...
Indian Academy of Sciences (India)
2Mechanical Education Department, Gazi University, 06500 Ankara, Turkey. 3Electrical and Electronics Engineering .... PLC (Programmable Logic Controller) set. An incremental type linear encoder with ... realize the CNC basic motions such as linear (G01) and circular interpolation (G02, G03). 2.1 CNC system. The control ...
Interpolation for a subclass of H∞
Indian Academy of Sciences (India)
Abstract. We introduce and characterize two types of interpolating sequences in the unit disc D of the complex plane for the class of all functions being the product of two analytic functions in D, one bounded and another regular up to the boundary of D, concretely in the Lipschitz class, and at least one of them vanishing at ...
An efficient implementation of reconfigurable interpolation root-raised ...
Indian Academy of Sciences (India)
Hence, multiplexers, shifters, and adders in the multiplier structure are reduced, which results in the improvement of operating frequency. The number of addition operations is further reduced using programmable adders, and an efficient polyphase interpolation structure is implemented to reduce the hardware cost.
Bankruptcy cascades in interbank markets.
Directory of Open Access Journals (Sweden)
Gabriele Tedeschi
Full Text Available We study a credit network and, in particular, an interbank system with an agent-based model. To understand the relationship between business cycles and cascades of bankruptcies, we model a three-sector economy with goods, credit and interbank market. In the interbank market, the participating banks share the risk of bad debits, which may potentially spread a bank's liquidity problems through the network of banks. Our agent-based model sheds light on the correlation between bankruptcy cascades and the endogenous economic cycle of booms and recessions. It also demonstrates the serious trade-off between, on the one hand, reducing risks of individual banks by sharing them and, on the other hand, creating systemic risks through credit-related interlinkages of banks. As a result of our study, the dynamics underlying the meltdown of financial markets in 2008 becomes much better understandable.
Bankruptcy cascades in interbank markets.
Tedeschi, Gabriele; Mazloumian, Amin; Gallegati, Mauro; Helbing, Dirk
2012-01-01
Research on an innovative modification algorithm of NURBS curve interpolation
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
To address shortcomings of existing modification algorithms for NURBS curve interpolation, such as long interpolation times and step and chord errors that are difficult to adjust, a novel modification algorithm for NURBS curve interpolation is proposed. The algorithm offers higher interpolation position accuracy and shorter processing time, among other merits. In simulation, an open five-axis CNC platform based on the SIEMENS 840D CNC system was developed to verify the proposed modification algorithm experimentally. The simulation results show that the algorithm is correct and meets the requirements of NURBS curve interpolation.
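For reference, a NURBS curve point can be evaluated with the Cox-de Boor recursion over the rational basis; the sketch below is a plain evaluator (illustrative, not the paper's modified interpolation algorithm):

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    """Evaluate a NURBS curve point as a weighted rational combination
    of control points (valid for u in [knots[p], knots[-p-1]))."""
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    rw = N * weights
    return (rw @ ctrl) / rw.sum()
```

With the clamped knot vector [0,0,0,1,1,1] and unit weights this reduces to a quadratic Bezier curve, a convenient sanity check.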
Directory of Open Access Journals (Sweden)
Pengyun Chen
2014-01-01
Full Text Available The interpolation-reconstruction of local underwater terrain using the underwater digital terrain map (UDTM) is an important step for building an underwater terrain matching unit and directly affects the accuracy of underwater terrain matching navigation. The Kriging method is often used in terrain interpolation, but, with this method, the local terrain features are often lost. Therefore, the accuracy cannot meet the requirements of practical application. Analysis of the geographical features is performed on the basis of the randomness and self-similarity of underwater terrain. We extract the fractal features of local underwater terrain with the fractal Brownian motion model, compensating for the possible errors of the Kriging method with fractal theory. We then put forward an improved Kriging interpolation method based on this fractal compensation. Interpolation-reconstruction tests show that the method can simulate the real underwater terrain features well and that it has good usability.
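The fractal compensation above rests on the fractional Brownian motion (fBm) model. One common way to synthesize an fBm-like profile is midpoint displacement, sketched below (illustrative only; the paper extracts fractal features from measured terrain rather than synthesizing it):

```python
import random

def fbm_midpoint(n_levels, hurst=0.8, sigma=1.0, rng=None):
    """1-D fractional-Brownian-motion-like profile by midpoint displacement;
    the displacement std shrinks by 2**(-hurst) at each refinement level."""
    rng = rng or random.Random(0)
    pts = [0.0, rng.gauss(0.0, sigma)]
    scale = sigma
    for _ in range(n_levels):
        scale *= 2.0 ** (-hurst)
        refined = []
        for a, b in zip(pts, pts[1:]):
            refined += [a, 0.5 * (a + b) + rng.gauss(0.0, scale)]
        refined.append(pts[-1])
        pts = refined
    return pts
```

The Hurst exponent controls roughness: values near 1 give smooth, strongly correlated relief, values near 0 give jagged terrain.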
Efficient charge assignment and back interpolation in multigrid methods for molecular dynamics.
Banerjee, Sanjay; Board, John A
2005-07-15
The assignment of atomic charges to a regular computational grid and the interpolation of forces from the grid back to the original atomic positions are crucial steps in a multigrid approach to the calculation of molecular forces. For purposes of grid assignment, atomic charges are modeled as truncated Gaussian distributions. The charge assignment and back interpolation methods are currently bottlenecks, and take up to one-third the execution time of the multigrid method each. Here, we propose alternative approaches to both charge assignment and back interpolation where convolution is used both to map Gaussian representations of atomic charges onto the grid and to map the forces computed at grid points back to atomic positions. These approaches achieve the same force accuracy with reduced run time. The proposed charge assignment and back interpolation methods scale better than baseline multigrid computations with both problem size and number of processors. (c) 2005 Wiley Periodicals, Inc.
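The convolution-based assignment can be sketched in one dimension: each point charge is spread onto nearby grid points with a truncated Gaussian kernel, normalized so that total charge is preserved (illustrative parameters, not those of the paper):

```python
import numpy as np

def assign_charges(charges, positions, grid, sigma=1.0, cutoff=4.0):
    """Spread point charges onto a 1-D grid with a truncated Gaussian
    kernel; normalization preserves the total charge on the grid."""
    rho = np.zeros_like(grid)
    for q, x in zip(charges, positions):
        d = grid - x
        mask = np.abs(d) < cutoff * sigma    # truncation window
        kernel = np.exp(-0.5 * (d[mask] / sigma) ** 2)
        rho[mask] += q * kernel / kernel.sum()
    return rho

grid = np.linspace(0.0, 20.0, 201)
rho = assign_charges([1.0, -0.5], [7.3, 12.8], grid)
```

Back interpolation of forces is the adjoint of this operation: the same kernel weights map grid forces back to the atomic positions.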
Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation
Directory of Open Access Journals (Sweden)
WEN Chaobin
2015-01-01
Full Text Available The gravity compensation error equation should be thoroughly solved before gravity-aided navigation with high precision can be studied. A gravity-aided navigation model construction algorithm is proposed, based on approximating the local grid gravity anomaly field with 2D Gauss spline interpolation. Gravity disturbance vector, standard gravity value error and the Eötvös effect are all compensated in this precision model. The experimental results show that positioning accuracy is doubled, attitude and velocity accuracy is improved 1 to 2 times, and the positional error is maintained within 100 to 200 m.
Reinhardt, Katja; Samimi, Cyrus
2018-01-01
While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still shows large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and
Timescape: a simple space-time interpolation geostatistical Algorithm
Ciolfi, Marco; Chiocchini, Francesca; Gravichkova, Olga; Pisanelli, Andrea; Portarena, Silvia; Scartazza, Andrea; Brugnoli, Enrico; Lauteri, Marco
2016-04-01
Environmental sciences include both time and space variability in their datasets. Some established tools exist for both spatial interpolation and time series analysis alone, but mixing space and time variability calls for compromise: Researchers are often forced to choose which is the main source of variation, neglecting the other. We propose a simple algorithm, which can be used in many fields of Earth and environmental sciences when both time and space variability must be considered on equal grounds. The algorithm has already been implemented in Java language and the software is currently available at https://sourceforge.net/projects/timescapeglobal/ (it is published under the GNU-GPL v3.0 Free Software License). The published version of the software, Timescape Global, is focused on continent- to Earth-wide spatial domains, using global longitude-latitude coordinates for sample localization. The companion Timescape Local software is currently under development and will be published with an open license as well; it will use projected coordinates for a local to regional space scale. The basic idea of the Timescape Algorithm consists in converting time into a sort of third spatial dimension, with the addition of some causal constraints, which drive the interpolation including or excluding observations according to some user-defined rules. The algorithm is applicable, as a matter of principle, to anything that can be represented with a continuous variable (a scalar field, technically speaking). The input dataset should contain position, time and observed value of all samples. Ancillary data can be included in the interpolation as well. After the time-space conversion, Timescape follows basically the old-fashioned IDW (Inverse Distance Weighted) interpolation Algorithm, although users have a wide choice of customization options that, at least partially, overcome some of the known issues of IDW. The three-dimensional model produced by the Timescape Algorithm can be
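The core idea, time mapped to a third spatial dimension followed by IDW under a causal rule, can be sketched as follows (the time scaling factor and the past-only rule are illustrative assumptions, not the published algorithm):

```python
import numpy as np

def timescape_idw(samples, query, time_scale=1.0, power=2.0):
    """Inverse-distance-weighted estimate in (x, y, t) space.

    samples: rows of (x, y, t, value); time is mapped to a third axis
    via time_scale, and only past observations (t <= query time) are
    used -- a crude stand-in for Timescape's causal constraints.
    """
    s = np.asarray(samples, dtype=float)
    qx, qy, qt = query
    past = s[s[:, 2] <= qt]                  # causal constraint
    dx, dy = past[:, 0] - qx, past[:, 1] - qy
    dt = (past[:, 2] - qt) * time_scale
    d = np.sqrt(dx**2 + dy**2 + dt**2)
    if np.any(d == 0):                       # exact hit: return that sample
        return float(past[d == 0][0, 3])
    w = 1.0 / d**power
    return float(w @ past[:, 3] / w.sum())
```

Choosing `time_scale` amounts to declaring how many kilometres one time unit is "worth", which is exactly the space-time compromise the abstract describes.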
Performance of an Interpolated Stochastic Weather Generator in Czechia and Nebraska
Dubrovsky, M.; Trnka, M.; Hayes, M. J.; Svoboda, M. D.; Semeradova, D.; Metelka, L.; Hlavinka, P.
2008-12-01
Met&Roll is a WGEN-like parametric four-variate daily weather generator (WG), with an optional extension allowing the user to generate additional variables (i.e. wind and water vapor pressure). It is designed to produce synthetic weather series representing present and/or future climate conditions to be used as an input into various models (e.g. crop growth and rainfall runoff models). The present contribution will summarize recent experiments, in which we tested the performance of the interpolated WG, with the aim to examine whether the WG may be used to produce synthetic weather series even for sites having no meteorological observations. The experiments being discussed include: (1) the comparison of various interpolation methods where the performance of the candidate methods is compared in terms of the accuracy of the interpolation for selected WG parameters; (2) assessing the ability of the interpolated WG in the territories of Czechia and Nebraska to reproduce extreme temperature and precipitation characteristics; (3) indirect validation of the interpolated WG in terms of the modeled crop yields simulated by STICS crop growth model (in Czechia); and (4) indirect validation of interpolated WG in terms of soil climate regime characteristics simulated by the SoilClim model (Czechia and Nebraska). The experiments are based on observed daily weather series from two regions: Czechia (area = 78864 km2, 125 stations available) and Nebraska (area = 200520 km2, 28 stations available). Even though Nebraska exhibits a much lower density of stations, this is offset by the state's relatively flat topography, which is an advantage in using the interpolated WG. Acknowledgements: The present study is supported by the AMVIS-KONTAKT project (ME 844) and the GAAV Grant Agency (project IAA300420806).
Application of Hardy's multiquadric interpolation to hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Kansa, E.J.
1985-10-01
Hardy's multiquadric interpolation (MQI) scheme is a global, continuously differentiable interpolation method for solving scattered data interpolation problems. It is capable of producing monotonic, extremely accurate interpolating functions, integrals, and derivatives. Derivative estimates for a variety of one and two-dimensional surfaces were obtained. MQI was then applied to the spherical blast wave problem of von Neumann. The numerical solution agreed extremely well with the exact solution. 17 refs., 3 figs., 2 tabs.
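Hardy's multiquadric scheme expands the interpolant in radial basis functions sqrt(r² + c²) centred at the data points and solves a linear system for the coefficients; a minimal 1-D sketch (c is a user-chosen shape parameter):

```python
import numpy as np

def multiquadric_fit(x, y, c=1.0):
    """Solve for coefficients so the multiquadric surface passes
    exactly through the scattered data (x_i, y_i)."""
    Phi = np.sqrt((x[:, None] - x[None, :]) ** 2 + c**2)
    return np.linalg.solve(Phi, y)

def multiquadric_eval(x_new, x, coef, c=1.0):
    """Evaluate the fitted multiquadric interpolant at new points."""
    Phi = np.sqrt((x_new[:, None] - x[None, :]) ** 2 + c**2)
    return Phi @ coef

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)
coef = multiquadric_fit(x, y)
```

Because the basis functions are globally supported and smooth, derivatives of the interpolant are obtained by differentiating the basis analytically, which is what makes the method attractive for the hydrodynamic derivative estimates described above.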
Rose, Susan A; Feldman, Judith F; Jankowski, Jeffery J
2011-09-01
This study identified deficits in executive functioning in pre-adolescent preterms and modeled their role, along with processing speed, in explaining preterm/full-term differences in reading and mathematics. Preterms were assessed on working memory, inhibition, and shifting. Confirmatory factor analysis showed that these executive functions, though correlated, were distinct from one another and from processing speed, which later proved to account for much of the intercorrelation among executive functions. In the best-fitting structural equation model, the negative effects of prematurity on achievement were completely mediated by the three executive functions and speed in a cascade of effects: prematurity → slower processing speed → poorer executive functioning (working memory) → lower achievement in math and reading. © 2011 Blackwell Publishing Ltd.
MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms
Allred, Joel
2012-01-01
Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.
Directory of Open Access Journals (Sweden)
Mauricio Castro Franco
2017-07-01
Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information of these effects, the soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indexes calculated from a digital elevation model (MDE). The "Random forest" algorithm was used for selecting the most important terrain index for each soil property. Error metrics were used to validate interpolations against cross validation. Results: The results support the underlying assumption that HCLc captured adequately the full distribution of variables of ancillary information in the Colombian piedmont eastern plains conditions. They also suggest that Ckg and REML-EBLUP perform best in the prediction of most of the evaluated soil properties. Conclusions: Mixed interpolation techniques using auxiliary soil information and terrain indexes provided a significant improvement in the prediction of soil properties, in comparison with other techniques.
Directory of Open Access Journals (Sweden)
Lei Shi
2016-09-01
Full Text Available Tidal datums are key components in NOAA's Vertical Datum transformation project (VDatum). In this paper, we propose a statistical interpolation method, derived from the variational principle, to calculate tidal datums by blending the modeled and the observed tidal datums. Through the implementation of this statistical interpolation method in the Chesapeake and Delaware Bays, we conclude that the statistical interpolation method for tidal datums has great advantages over the currently used deterministic interpolation method. The foremost, and inherent, advantage of the statistical interpolation is its capability to integrate data from different sources and with different accuracies without concern for their relative spatial locations. The second advantage is that it provides a spatially varying uncertainty for the entire domain in which data is being integrated. The latter is especially helpful for the decision-making process of where new instruments would be most effectively placed. Lastly, the test case results show that the statistical interpolation reduced the bias, maximum absolute error, mean absolute error, and root mean square error in comparison to the current deterministic approach.
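The flavor of such a statistical blend shows up already in a scalar toy problem: weighting a modeled and an observed datum by their inverse error variances minimizes the usual two-term quadratic cost and yields a reduced posterior variance (purely illustrative, far simpler than the spatial scheme of the paper):

```python
def blend(model_value, model_var, obs_value, obs_var):
    """Inverse-variance weighted estimate and its (reduced) variance;
    the minimizer of J(x) = (x-m)^2/model_var + (x-o)^2/obs_var."""
    w_m, w_o = 1.0 / model_var, 1.0 / obs_var
    estimate = (w_m * model_value + w_o * obs_value) / (w_m + w_o)
    variance = 1.0 / (w_m + w_o)
    return estimate, variance
```

The returned variance is smaller than either input variance, which is the scalar analogue of the spatially varying uncertainty field the abstract highlights.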
Wang, Jian-Bo; Reetz, Manfred T.
2015-12-01
Racemic or enantiomerically pure alcohols can be converted with high yield into enantiopure chiral amines in a one-pot redox-neutral cascade process by the clever combination of an alcohol dehydrogenase and an appropriate amine dehydrogenase.
International Nuclear Information System (INIS)
San Martin, Jesus; Rodriguez-Perez, Daniel
2009-01-01
Presented in this work are some results on sequences found in the bifurcation diagram of the logistic equation, the prototype of unimodal quadratic maps. All of the different saddle-node bifurcation cascades, associated with every last appearance p-periodic orbit (p=3,4,5,...), can also be generated from the Feigenbaum cascade itself; in this way the relationship between both cascades is made evident. The orbits of every saddle-node bifurcation cascade mentioned above are located in different chaotic bands, and this determines a sequence of orbits converging to every band-merging Misiurewicz point. In turn, these accumulation points form a sequence whose accumulation point is the Myrberg-Feigenbaum point. It is also proven that the first appearance orbits in the n-chaotic band converge to the same point as the last appearance orbits of the (n+1)-chaotic band. The symbolic sequences of band-merging Misiurewicz points are computed for any window.
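The orbits discussed above live in the bifurcation diagram of the logistic map x_{n+1} = r x_n (1 - x_n). A minimal sketch that detects the period of the attracting orbit after discarding transients (illustrative only; none of the paper's symbolic-dynamics machinery):

```python
def logistic_period(r, x0=0.5, transients=2000, max_period=64, tol=1e-9):
    """Iterate the logistic map past transients and return the smallest
    period of the attracting orbit (0 if none found up to max_period)."""
    x = x0
    for _ in range(transients):
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return 0
```

Sweeping r through the Feigenbaum cascade, the detected period doubles at each bifurcation (1, 2, 4, ...) until the onset of chaos near r ≈ 3.5699.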
Cascade of links in complex networks
International Nuclear Information System (INIS)
Feng, Yeqian; Sun, Bihui; Zeng, An
2017-01-01
Cascading failure is an important process which has been widely used to model catastrophic events such as blackouts and financial crisis in real systems. However, so far most of the studies in the literature focus on the cascading process on nodes, leaving the possibility of link cascade overlooked. In many real cases, the catastrophic events are actually formed by the successive disappearance of links. Examples exist in the financial systems where the firms and banks (i.e. nodes) still exist but many financial trades (i.e. links) are gone during the crisis, and the air transportation systems where the airports (i.e. nodes) are still functional but many airlines (i.e. links) stop operating during bad weather. In this letter, we develop a link cascade model in complex networks. With this model, we find that both artificial and real networks tend to collapse even if a few links are initially attacked. However, the link cascading process can be effectively terminated by setting a few strong nodes in the network which do not respond to any link reduction. Finally, a simulated annealing algorithm is used to optimize the location of these strong nodes, which significantly improves the robustness of the networks against the link cascade. - Highlights: • We propose a link cascade model in complex networks. • Both artificial and real networks tend to collapse even if a few links are initially attacked. • The link cascading process can be effectively terminated by setting a few strong nodes. • A simulated annealing algorithm is used to optimize the location of these strong nodes.
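A toy link-cascade rule in the spirit of the abstract (the threshold and failure rule are illustrative assumptions, not the model of the paper): a link fails when a non-strong endpoint has lost too large a fraction of its original degree, and strong nodes never propagate failure:

```python
def link_cascade(edges, n_nodes, initial_failures, threshold=0.5, strong=()):
    """Remove initial_failures, then iteratively drop any edge having an
    endpoint (not marked strong) whose surviving degree fell below
    threshold * original degree. Returns the surviving edge set."""
    deg0 = [0] * n_nodes
    for u, v in edges:
        deg0[u] += 1
        deg0[v] += 1
    alive = set(edges) - set(initial_failures)
    changed = True
    while changed:
        changed = False
        deg = [0] * n_nodes
        for u, v in alive:
            deg[u] += 1
            deg[v] += 1
        for u, v in list(alive):
            for node in (u, v):
                if node not in strong and deg[node] < threshold * deg0[node]:
                    alive.discard((u, v))
                    changed = True
                    break
    return alive
```

On a 4-cycle, removing a single link can wipe out the whole network, while marking the two attacked endpoints as strong stops the cascade immediately, mirroring the paper's observation that a few strong nodes greatly improve robustness.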
Parametric Integration by Magic Point Empirical Interpolation
Gaß, Maximilian; Glau, Kathrin
2015-01-01
We derive analyticity criteria for explicit error bounds and an exponential rate of convergence of the magic point empirical interpolation method introduced by Barrault et al. (2004). Furthermore, we investigate its application to parametric integration. We find that the method is well-suited to Fourier transforms and has a wide range of applications in such diverse fields as probability and statistics, signal and image processing, physics, chemistry and mathematical finance. To illustrate th...
Some splines produced by smooth interpolation
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2018-01-01
Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub
A Bidirectional Flow Joint Sobolev Gradient for Image Interpolation
Directory of Open Access Journals (Sweden)
Yi Zhan
2013-01-01
Full Text Available An energy functional with bidirectional flow is presented to sharpen an image by reducing its edge width: it performs forward diffusion on the brighter side of an edge ramp and backward diffusion on the darker side. We first consider the diffusion equations as L2 gradient flows on integral functionals and then modify the inner product from L2 to a Sobolev inner product. The experimental results demonstrate that our model efficiently reconstructs the real image, leading to a natural interpolation with reduced blurring and staircase artifacts while better preserving the texture features of the image.
Trends in Continuity and Interpolation for Computer Graphics.
Gonzalez Garcia, Francisco
2015-01-01
In every computer graphics oriented application today, it is common practice to texture 3D models as a way to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields in computer graphics. It presents techniques that improve on existing state-of-the-art approaches to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined by the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
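A minimal sketch of the GIE ingredients described above (illustrative only, not the authors' implementation): each species contributes a kernel centred on the centroid of its occurrences, with bandwidth set by the farthest occurrence, and summing the kernels estimates where ranges overlap:

```python
# Hedged sketch of the GIE idea: kernel interpolation of species-range
# centroids. The Gaussian kernel and the bandwidth rule are assumptions
# for illustration.
import math

def centroid_and_radius(points):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = max(math.hypot(px - cx, py - cy) for px, py in points)
    return (cx, cy), max(r, 1e-9)

def endemism_surface(species_occurrences, x, y):
    """Sum of per-species Gaussian kernels evaluated at (x, y)."""
    total = 0.0
    for pts in species_occurrences:
        (cx, cy), r = centroid_and_radius(pts)
        d = math.hypot(x - cx, y - cy)
        total += math.exp(-0.5 * (d / r) ** 2)
    return total

# Two species with overlapping ranges near (0, 0), one species far away:
spp = [[(0, 0), (1, 0), (0, 1)], [(0.5, 0.5), (1, 1)], [(10, 10), (11, 10)]]
inside = endemism_surface(spp, 0.5, 0.5)
outside = endemism_surface(spp, 5.0, 5.0)
print(inside > outside)  # True: higher overlap where the two species co-occur
```

High values of the summed surface mark candidate areas of endemism with fuzzy edges, matching the grid-free character the abstract emphasizes.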
Learning optimal embedded cascades.
Saberian, Mohammad Javad; Vasconcelos, Nuno
2012-10-01
The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
Evaluation of Nonlinear Methods for Interpolation of Catchment-Scale
Coleman, M. L.; Niemann, J. D.
2008-12-01
Soil moisture acts as a key state variable in interactions between the atmosphere and land surface, strongly influencing radiation and precipitation partitioning and thus many components of the hydrologic cycle. Despite its importance as a state variable, measuring soil moisture patterns with adequate spatial resolutions over useful spatial extents remains a significant challenge due to both physical and economic constraints. For this reason, ancillary data, such as topographic attributes, have been employed as process proxies and predictor variables for soil moisture. Most methods that have been used to estimate soil moisture from ancillary variables assume that soil moisture is linearly dependent on these variables. However, unsaturated zone water transport is typically modeled as a nonlinear function of the soil moisture state. While that fact does not necessarily imply nonlinear relationships with the ancillary variables, there is some evidence suggesting nonlinear methods may be more efficient than linear methods for interpolating soil moisture from ancillary data. Therefore, this work investigates the value of nonlinear estimation techniques, namely conditional density estimation, support vector machines, and a spatial artificial neural network, for interpolating soil moisture patterns from sparse measurements and ancillary data. The set of candidate predictor variables in this work includes simple and compound terrain attributes calculated from digital elevation models and, in some cases, soil texture data. The initial task in the interpolation procedure is the selection of the most effective predictor variables. Given the possibility of nonlinear relationships, mutual information is used to quantify relationships between candidate variables and soil moisture and ultimately to select the most efficient ancillary data as predictor variables. After selecting a subset of the potential ancillary data variables for use, the nonlinear estimation techniques are
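The predictor-selection step described above can be sketched with a plug-in histogram estimate of mutual information; the synthetic data and binning below are illustrative assumptions, not the study's actual terrain attributes:

```python
# Hedged sketch: rank candidate ancillary variables by their mutual
# information with soil moisture, using a simple 2-D histogram estimator.
import math, random

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (in nats) from binned paired samples."""
    def to_bins(v):
        lo, hi = min(v), max(v)
        return [min(int((vi - lo) / (hi - lo + 1e-12) * bins), bins - 1) for vi in v]
    bx, by = to_bins(x), to_bins(y)
    n = len(x)
    pxy, px, py = {}, {}, {}
    for i, j in zip(bx, by):
        pxy[(i, j)] = pxy.get((i, j), 0) + 1 / n
        px[i] = px.get(i, 0) + 1 / n
        py[j] = py.get(j, 0) + 1 / n
    return sum(p * math.log(p / (px[i] * py[j])) for (i, j), p in pxy.items())

random.seed(0)
slope = [random.uniform(0, 1) for _ in range(2000)]   # informative predictor
noise = [random.uniform(0, 1) for _ in range(2000)]   # uninformative predictor
# Soil moisture as a synthetic NONLINEAR function of slope plus noise:
moisture = [math.exp(-3 * s) + 0.05 * random.gauss(0, 1) for s in slope]
print(mutual_information(slope, moisture) > mutual_information(noise, moisture))  # True
```

Because mutual information captures arbitrary (not just linear) dependence, it flags the nonlinearly related predictor even where a correlation coefficient might understate the relationship.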
Defect production in simulated cascades: cascade quenching and short-term annealing
International Nuclear Information System (INIS)
Heinisch, H.L.
1982-01-01
Defect production in high energy displacement cascades has been modeled using the computer code MARLOWE to generate the cascades and the stochastic computer code ALSOME to simulate the cascade quenching and short-term annealing of isolated cascades. The quenching is accomplished by using ALSOME with exaggerated values for defect mobilities and critical reaction distances for recombination and clustering, which are in effect until the number of defect pairs equals the value determined from resistivity experiments at 4 K. Then normal mobilities and reaction distances are used during short-term annealing to a point representative of Stage III recovery. Effects of cascade interactions at low fluences are also being investigated. The quenching parameter values were empirically determined for 30 keV cascades. The results agree well with experimental information throughout the range from 1 keV to 100 keV. Even after quenching and short-term annealing the high energy cascades behave as a collection of lower energy subcascades and lobes. Cascades generated in a crystal having thermal displacements were found to be in better agreement with experiments after quenching and annealing than those generated in a non-thermal crystal.
Color Orchestra: Ordering Color Palettes for Interpolation and Prediction.
Phan, Huy; Fu, Hongbo; Chan, Antoni
2017-04-25
Color theme or color palette can deeply influence the quality and the feeling of a photograph or a graphical design. Although color palettes may come from different sources such as online crowd-sourcing, photographs and graphical designs, in this paper, we consider color palettes extracted from fine art collections, which we believe to be an abundant source of stylistic and unique color themes. We aim to capture color styles embedded in these collections by means of statistical models and to build practical applications upon these models. As artists often use their personal color themes in their paintings, making these palettes appear frequently in the dataset, we employed density estimation to capture the characteristics of palette data. Via density estimation, we carried out various predictions and interpolations on palettes, which led to promising applications such as photo-style exploration, real-time color suggestion, and enriched photo recolorization. It was, however, challenging to apply density estimation to palette data as palettes often come as unordered sets of colors, which make it difficult to use conventional metrics on them. To this end, we developed a divide-and-conquer sorting algorithm to rearrange the colors in the palettes in a coherent order, which allows meaningful interpolation between color palettes. To confirm the performance of our model, we also conducted quantitative experiments on datasets of digitized paintings collected from the Internet and received favorable results.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific
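The core idea, linearly combining templates after transforming each onto interpolated moments, can be sketched in one dimension with two Gaussian templates (the paper generalises to multiple dimensions and parameters; this reduction is an assumption for illustration):

```python
# Hedged 1-D sketch of moment morphing: interpolate the first two moments
# linearly, map each template onto those moments, then combine linearly.
import math

def morphed_pdf(x, alpha, mu0, s0, mu1, s1):
    """Morph between N(mu0, s0) at alpha=0 and N(mu1, s1) at alpha=1."""
    mu_a = (1 - alpha) * mu0 + alpha * mu1   # interpolated mean
    s_a = (1 - alpha) * s0 + alpha * s1      # interpolated width

    def template(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    def transformed(x, mu, s):
        xi = (x - mu_a) * s / s_a + mu       # inverse of the moment-matching map
        return template(xi, mu, s) * s / s_a  # include the Jacobian factor

    return (1 - alpha) * transformed(x, mu0, s0) + alpha * transformed(x, mu1, s1)

# Halfway between N(0,1) and N(4,2) the morph is N(2, 1.5) for Gaussian
# templates, so its peak value is 1/(1.5*sqrt(2*pi)):
p = morphed_pdf(2.0, 0.5, 0.0, 1.0, 4.0, 2.0)
print(abs(p - 1.0 / (1.5 * math.sqrt(2 * math.pi))) < 1e-9)  # True
```

For Gaussian templates the transformation is exact, which makes the construction easy to verify; for general templates the same transform-and-combine recipe yields a smooth, moment-correct interpolation.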
Research progress and hotspot analysis of spatial interpolation
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, the literatures related to spatial interpolation between 1982 and 2017, which are included in the Web of Science core database, are used as data sources, and the visualization analysis is carried out according to the co-country network, co-category network, co-citation network, keywords co-occurrence network. It is found that spatial interpolation has experienced three stages: slow development, steady development and rapid development; The cross effect between 11 clustering groups, the main convergence of spatial interpolation theory research, the practical application and case study of spatial interpolation and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and research system framework, interdisciplinary strong, is widely used in various fields.
On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids
Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.
2017-03-01
The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational costs, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grid, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.
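The basic ingredient the paper's correction schemes build upon is local interpolation of a face-stored velocity component to a marker position. A minimal sketch for one component on a 2-D staggered grid (grid layout and indexing are illustrative assumptions):

```python
# Illustrative marker-in-cell ingredient: bilinear interpolation of the
# x-velocity, stored on cell faces of a staggered grid, to a marker.
# This is the plain (non-conservative) scheme the paper improves upon.

def bilinear(vx, x, y, dx, dy):
    """vx[j][i] lives at the staggered face position (i*dx, (j+0.5)*dy)."""
    i = int(x / dx)
    j = int((y - 0.5 * dy) / dy)
    tx = x / dx - i
    ty = (y - 0.5 * dy) / dy - j
    return ((1 - tx) * (1 - ty) * vx[j][i] + tx * (1 - ty) * vx[j][i + 1]
            + (1 - tx) * ty * vx[j + 1][i] + tx * ty * vx[j + 1][i + 1])

# A linear velocity field vx = 2x + y is reproduced exactly:
dx = dy = 1.0
vx = [[2.0 * (i * dx) + 1.0 * ((j + 0.5) * dy) for i in range(4)] for j in range(4)]
v = bilinear(vx, 1.3, 2.2, dx, dy)
print(abs(v - (2.0 * 1.3 + 1.0 * 2.2)) < 1e-12)  # True
```

Bilinear interpolation is exact for linear fields, but it does not see the cell-wise divergence of the discrete velocity; the conservative "correction interpolation" adds divergence-aware terms to this local stencil to suppress marker clustering.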
Schlenker, Cody W.
2011-09-27
We demonstrate planar organic solar cells consisting of a series of complementary donor materials with cascading exciton energies, incorporated in the following structure: glass/indium-tin-oxide/donor cascade/C60/bathocuproine/Al. Using a tetracene layer grown in a descending energy cascade on 5,6-diphenyl-tetracene and capped with 5,6,11,12-tetraphenyltetracene, where the accessibility of the π-system in each material is expected to influence the rate of parasitic carrier leakage and charge recombination at the donor/acceptor interface, we observe an increase in open circuit voltage (Voc) of approximately 40% (corresponding to a change of +200 mV) compared to that of a single tetracene donor. Little change is observed in other parameters such as fill factor and short circuit current density (FF = 0.50 ± 0.02 and Jsc = 2.55 ± 0.23 mA/cm2) compared to those of the control tetracene-C60 solar cells (FF = 0.54 ± 0.02 and Jsc = 2.86 ± 0.23 mA/cm2). We demonstrate that this cascade architecture is effective in reducing losses due to polaron pair recombination at donor-acceptor interfaces, while enhancing spectral coverage, resulting in a substantial increase in the power conversion efficiency for cascade organic photovoltaic cells compared to tetracene and pentacene based devices with a single donor layer. © 2011 American Chemical Society.
Vedanthan, Rajesh; Kamano, Jemima H; Bloomfield, Gerald S; Manji, Imran; Pastakia, Sonak; Kimaiyo, Sylvester N
2015-12-01
Cardiovascular disease (CVD) is the leading cause of death in the world, with a substantial health and economic burden confronted by low- and middle-income countries. In low-income countries such as Kenya, there exists a double burden of communicable and noncommunicable diseases, and the CVD profile includes many nonatherosclerotic entities. Socio-politico-economic realities present challenges to CVD prevention in Kenya, including poverty, low national spending on health, significant out-of-pocket health expenditures, and limited outpatient health insurance. In addition, the health infrastructure is characterized by insufficient human resources for health, medication stock-outs, and lack of facilities and equipment. Within this socio-politico-economic reality, contextually appropriate programs for CVD prevention need to be developed. We describe our experience from western Kenya, where we have engaged the entire care cascade across all levels of the health system, in order to improve access to high-quality, comprehensive, coordinated, and sustainable care for CVD and CVD risk factors. We report on several initiatives: 1) population-wide screening for hypertension and diabetes; 2) engagement of community resources and governance structures; 3) geographic decentralization of care services; 4) task redistribution to make more efficient use of available human resources for health; 5) ensuring a consistent supply of essential medicines; 6) improving physical infrastructure of rural health facilities; 7) developing an integrated health record; and 8) mobile health (mHealth) initiatives to provide clinical decision support and record-keeping functions. Although several challenges remain, there currently exists a critical window of opportunity to establish systems of care and prevention that can alter the trajectory of CVD in low-resource settings. Copyright © 2015 World Heart Federation (Geneva). Published by Elsevier B.V. All rights reserved.
On the exact interpolating function in ABJ theory
Energy Technology Data Exchange (ETDEWEB)
Cavaglià, Andrea [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); Gromov, Nikolay [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); St. Petersburg INP,Gatchina, 188 300, St.Petersburg (Russian Federation); Levkovich-Maslyuk, Fedor [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); Nordita, KTH Royal Institute of Technology and Stockholm University,Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)
2016-12-16
Based on the recent indications of integrability in the planar ABJ model, we conjecture an exact expression for the interpolating function h(λ₁,λ₂) in this theory. Our conjecture is based on the observation that the integrability structure of the ABJM theory given by its Quantum Spectral Curve is very rigid and does not allow for a simple consistent modification. Under this assumption, we revised the previous comparison of localization results and exact all-loop integrability calculations done for the ABJM theory by one of the authors and Grigory Sizov, fixing h(λ₁,λ₂). We checked our conjecture against various weak coupling expansions, at strong coupling and also demonstrated its invariance under the Seiberg-like duality. This match also gives further support to the integrability of the model. If our conjecture is correct, it extends all the available integrability results in the ABJM model to the ABJ model.
Bidlake, William R.; Josberger, Edward G.; Savoca, Mark E.
2010-01-01
Winter snow accumulation and summer snow and ice ablation were measured at South Cascade Glacier, Washington, to estimate glacier mass balance quantities for balance years 2006 and 2007. Mass balances were computed with assistance from a new model that was based on the works of other glacier researchers. The model, which was developed for mass balance practitioners, coupled selected meteorological and glaciological data to systematically estimate daily mass balance at selected glacier sites. The North Cascade Range in the vicinity of South Cascade Glacier accumulated approximately average to above average winter snow packs during 2006 and 2007. Correspondingly, the balance years 2006 and 2007 maximum winter snow mass balances of South Cascade Glacier, 2.61 and 3.41 meters water equivalent, respectively, were approximately equal to or more positive (larger) than the average of such balances since 1959. The 2006 glacier summer balance, -4.20 meters water equivalent, was among the four most negative since 1959. The 2007 glacier summer balance, -3.63 meters water equivalent, was among the 14 most negative since 1959. The glacier continued to lose mass during 2006 and 2007, as it commonly has since 1953, but the loss was much smaller during 2007 than during 2006. The 2006 glacier net balance, -1.59 meters water equivalent, was 1.02 meters water equivalent more negative (smaller) than the average during 1953-2005. The 2007 glacier net balance, -0.22 meters water equivalent, was 0.37 meters water equivalent less negative (larger) than the average during 1953-2006. The 2006 accumulation area ratio was less than 0.10, owing to isolated patches of accumulated snow that endured the 2006 summer season. The 2006 equilibrium line altitude was higher than the glacier. The 2007 accumulation area ratio and equilibrium line altitude were 0.60 and 1,880 meters, respectively. Accompanying the glacier mass losses were retreat of the terminus and reduction of total glacier area. The
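As a quick consistency check on the figures above, each year's net balance is the sum of the reported winter and summer balances (in metres water equivalent):

```python
# Mass balance bookkeeping for South Cascade Glacier, using the values
# reported in the abstract (metres water equivalent).
winter = {2006: 2.61, 2007: 3.41}
summer = {2006: -4.20, 2007: -3.63}
net = {year: round(winter[year] + summer[year], 2) for year in winter}
print(net)  # {2006: -1.59, 2007: -0.22}
```

The computed nets match the reported glacier net balances of -1.59 and -0.22 metres water equivalent, confirming the internal consistency of the record.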
Nuclear data banks generation by interpolation
International Nuclear Information System (INIS)
Castillo M, J. A.
1999-01-01
Nuclear Data Bank generation is a process that requires a great amount of resources, both computing and human. Considering that at times it is necessary to create many of them, it is convenient to have a reliable tool that generates Data Banks with the fewest resources, in the least possible time and with a very good approximation. This work shows the results obtained during the development of the INTPOLBI code, used to generate Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two proposals were developed, applying in both cases the finite element method and using one element with 16 nodes to carry out the interpolation. In the first proposal the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation systems, which were solved by Gaussian elimination with partial pivoting. In the second, the Newton basis was used to obtain the mentioned system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG (created at the Instituto de Investigaciones Electricas (MX) for the same purpose) codes and Data Banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation that, even though it does not wholly replace the conventional process, is helpful when it is necessary to create a great number of Data Banks.
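The 16-node bicubic fit of the first proposal can be sketched directly: build the 16x16 system for p(x,y) = Σ a_ij x^i y^j (i, j = 0..3) over a 4x4 grid of nodes and solve it by Gaussian elimination with partial pivoting. The sample surface below is a made-up stand-in for the tabulated data, not actual nuclear data:

```python
# Hedged sketch of a 16-node bicubic interpolation, solved by Gaussian
# elimination with partial pivoting as in the first INTPOLBI proposal.

def solve(A, b):
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivot
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

nodes = [(x, y) for x in (0.0, 1.0, 2.0, 3.0) for y in (0.0, 1.0, 2.0, 3.0)]
f = lambda x, y: 1 + 2 * x + x * y ** 2          # illustrative sample surface
A = [[x ** i * y ** j for i in range(4) for j in range(4)] for x, y in nodes]
coef = solve(A, [f(x, y) for x, y in nodes])

def interp(x, y):
    powers = [(i, j) for i in range(4) for j in range(4)]
    return sum(c * x ** i * y ** j for c, (i, j) in zip(coef, powers))

print(abs(interp(1.5, 2.5) - f(1.5, 2.5)) < 1e-6)  # True: f lies in the bicubic space
```

Since the sample surface is itself bicubic, the 16-coefficient fit reproduces it exactly at off-node points, which is a handy sanity check for this kind of interpolator.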
Calculation of reactivity without Lagrange interpolation
International Nuclear Information System (INIS)
Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.
2015-09-01
A new method is formulated to numerically solve the inverse equation of point kinetics without using Lagrange interpolating polynomials; it uses an N-point polynomial approximation based on a recurrence process to simulate different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)
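The inverse point-kinetics equation the paper solves can be illustrated with one delayed-neutron group and a plain trapezoidal history integral (the paper itself uses a recurrence-based polynomial approximation; the kinetic parameters below are illustrative):

```python
# Hedged sketch of inverse point kinetics, one delayed-neutron group:
# rho(t) = beta + Lambda*(dn/dt)/n - (beta/n)*[n(0)e^{-lam t}
#          + lam * integral_0^t e^{-lam(t-s)} n(s) ds],
# assuming steady power (equilibrium precursors) for s < 0.
import math

BETA, LAM, GEN = 0.0065, 0.08, 1e-4  # beta, decay constant (1/s), Lambda (s)

def reactivity(n, t, dt):
    """rho(t) for a power history n(s), via a trapezoidal quadrature."""
    k = int(t / dt)
    dndt = (n(t) - n(t - dt)) / dt
    # Precursor integral: equilibrium tail plus trapezoid over [0, t].
    integral = n(0) / LAM * math.exp(-LAM * t)
    for i in range(k):
        a, b = i * dt, (i + 1) * dt
        integral += 0.5 * dt * (math.exp(-LAM * (t - a)) * n(a)
                                + math.exp(-LAM * (t - b)) * n(b))
    return BETA + GEN * dndt / n(t) - BETA * LAM * integral / n(t)

# Sanity check: constant power must give zero reactivity.
rho = reactivity(lambda s: 1.0, t=5.0, dt=0.01)
print(abs(rho) < 1e-5)  # True, up to quadrature error
```

The constant-power check exercises every term of the inverse equation: the derivative term vanishes and the precursor integral exactly cancels beta, so any residual measures the quadrature error alone.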
Interband Cascade Photovoltaic Cells
Energy Technology Data Exchange (ETDEWEB)
Yang, Rui Q. [Univ. of Oklahoma, Norman, OK (United States); Santos, Michael B. [Univ. of Oklahoma, Norman, OK (United States); Johnson, Matthew B. [Univ. of Oklahoma, Norman, OK (United States)
2014-09-24
In this project, we are performing basic and applied research to systematically investigate our newly proposed interband cascade (IC) photovoltaic (PV) cells [1]. These cells follow from the great success of infrared IC lasers [2-3] that pioneered the use of quantum-engineered IC structures. This quantum-engineered approach will enable PV cells to efficiently convert infrared radiation from the sun or other heat source, to electricity. Such cells will have important applications for more efficient use of solar energy, waste-heat recovery, and power beaming in combination with mid-infrared lasers. The objectives of our investigations are to: achieve extensive understanding of the fundamental aspects of the proposed PV structures, develop the necessary knowledge for making such IC PV cells, and demonstrate prototype working PV cells. This research will focus on IC PV structures and their segments for utilizing infrared radiation with wavelengths from 2 to 5 μm, a range well suited for emission by heat sources (1,000-2,000 K) that are widely available from combustion systems. The long-term goal of this project is to push PV technology to longer wavelengths, allowing for relatively low-temperature thermal sources. Our investigations address material quality, electrical and optical properties, and their interplay for the different regions of an IC PV structure. The tasks involve: design, modeling and optimization of IC PV structures, molecular beam epitaxial growth of PV structures and relevant segments, material characterization, prototype device fabrication and testing. At the end of this program, we expect to generate new cutting-edge knowledge in the design and understanding of quantum-engineered semiconductor structures, and demonstrate the concepts for IC PV devices with high conversion efficiencies.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-03-27
Image re-sampling, involved in re-size and rotation transformations, is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The results obtained demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
Interpolated Sounding and Gridded Sounding Value-Added Products
Energy Technology Data Exchange (ETDEWEB)
Toto, T. [Brookhaven National Lab. (BNL), Upton, NY (United States); Jensen, M. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-03-01
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
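The between-sounding step described above is ordinary linear interpolation in time, applied level by level on a shared height grid. A minimal sketch (assumed, not the ARM production code; the profile values are idealized):

```python
import numpy as np

def interpolate_soundings(t0, t1, profile0, profile1, minutes):
    """Linearly interpolate between two sonde profiles sharing a fixed
    height grid, onto a 1-minute-style time axis (times in minutes).
    Returns an array of shape (len(minutes), n_levels)."""
    w = (np.asarray(minutes, dtype=float)[:, None] - t0) / (t1 - t0)
    w = np.clip(w, 0.0, 1.0)   # hold the end profiles outside the interval
    return (1.0 - w) * profile0 + w * profile1

# Two idealized 3-level temperature profiles, launched 6 hours apart
p0 = np.array([15.0, 5.0, -20.0])
p1 = np.array([17.0, 7.0, -18.0])
grid = interpolate_soundings(0.0, 360.0, p0, p1, minutes=range(0, 361, 60))
print(grid[3])   # halfway between launches: [16. 6. -19.]
```

The same weighting applies independently to each of the 332 levels, which is why the product is a simple fixed time-height grid.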
Air Quality Assessment Using Interpolation Technique
Directory of Open Access Journals (Sweden)
Awkash Kumar
2016-07-01
Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai city in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that the SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of SPM was low in the monsoon due to rainfall. The findings of this study will help formulate control strategies for the rational management of air pollution and can be used for many other regions.
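The IDW technique used in the study weights each monitor by an inverse power of its distance to the prediction point. A minimal sketch (the station coordinates and concentrations are illustrative, not the Mumbai data):

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights fall off as 1/d**power."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power   # eps guards division at a station
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
so2 = np.array([30.0, 50.0, 40.0])              # illustrative µg/m³ values
targets = np.array([[5.0, 5.0], [0.0, 0.0]])
est = idw(stations, so2, targets)
print(est)   # equidistant target gets the mean; a station location returns its own value
```

With only three monitors, as in the study, the interpolated surface is dominated by the nearest station, which is the usual caveat for sparse IDW maps.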
Size-Dictionary Interpolation for Robot's Adjustment
Directory of Open Access Journals (Sweden)
Morteza Daneshmand
2015-05-01
Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room, where automatic activation of the chosen mannequin robot is also considered so that it can instantly mimic body shapes and sizes while several mannequin robots of different genders and sizes are simultaneously connected to the same computer. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure seeks the set of positions of the biologically inspired actuators that makes the mannequin robot resemble the shape of the scanned person's body as closely as possible. It linearly maps the distances between successive size templates to the corresponding actuator position sets, and then calculates control measures that maintain the same distance proportions, with the mathematical description determined by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. In this research work, the experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completion of the whole realistic online fitting package are explained.
Influence of blood flow on the coagulation cascade
DEFF Research Database (Denmark)
The influence of diffusion and convective flows on the blood coagulation cascade is investigated for a controlled perfusion experiment. We present a cartoon model and reaction schemes for parts of the coagulation cascade with subsequent set up of a mathematical model in two space dimensions plus one...
Peng, Junhui; Zhang, Zhiyong
2016-07-05
Various low-resolution experimental techniques have gained more and more popularity for obtaining structural information on large biomolecules. In order to interpret low-resolution structural data properly, one may need to construct an atomic model of the biomolecule by fitting the data using computer simulations. Here we develop, to our knowledge, a new computational tool for such integrative modeling by taking advantage of an efficient sampling technique called parallel cascade selection (PaCS) simulation. For given low-resolution structural data, this PaCS-Fit method converts it into a scoring function. After an initial simulation starting from a known structure of the biomolecule, the scoring function is used to pick conformations for the next cycle of multiple independent simulations. By this iterative screening-after-sampling strategy, the biomolecule may be driven towards a conformation that fits well with the low-resolution data. Our method has been validated using three proteins with small-angle X-ray scattering data and two proteins with electron microscopy data. In all benchmark tests, high-quality atomic models, generally within 1-3 Å of the target structures, are obtained. Since our tool does not need to add any biasing potential in the simulations to deform the structure, any type of low-resolution data can be incorporated conveniently.
Availability Cascades & the Sharing Economy
DEFF Research Database (Denmark)
Netter, Sarah
2014-01-01
attention. This conceptual paper attempts to explain the emergent focus on the sharing economy and associated business and consumption models by applying cascade theory. Risks associated with this behavior will be especially examined with regard to the sustainability claim of collaborative consumption....... With academics, practitioners, and civil society alike having a shared history in being rather fast in accepting new concepts that will not only provide business opportunities but also a good conscience, this study proposes a critical study of the implications of collaborative consumption, before engaging...
Directory of Open Access Journals (Sweden)
Longxiang Li
Full Text Available Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
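The core idea above, replacing Euclidean distances in IDW with shortest-path distances over a wind-derived cost surface, can be sketched with a 4-neighbour grid graph and Dijkstra's algorithm. Everything here is an assumption for illustration (a toy cost raster instead of the Gaussian-dispersion cost surface, two monitors, flat cell indices):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def grid_graph(cost):
    """4-neighbour graph over a cost raster; edge weight = mean cell cost."""
    ny, nx = cost.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, cols, w = [], [], []
    for sl_a, sl_b in [((slice(None), slice(0, -1)), (slice(None), slice(1, None))),
                       ((slice(0, -1), slice(None)), (slice(1, None), slice(None)))]:
        rows.append(idx[sl_a].ravel()); cols.append(idx[sl_b].ravel())
        w.append(0.5 * (cost[sl_a] + cost[sl_b]).ravel())
    rows, cols, w = map(np.concatenate, (rows, cols, w))
    n = ny * nx
    return coo_matrix((w, (rows, cols)), shape=(n, n))

cost = np.ones((5, 5)); cost[2, 1:4] = 10.0   # toy "against the wind" ridge
g = grid_graph(cost)
stations = [0, 24]                            # flat indices of two monitors
d = dijkstra(g, directed=False, indices=stations)   # shape (2, 25)

# IDW with shortest-path distances instead of Euclidean ones
w = 1.0 / np.maximum(d, 1e-9) ** 2
obs = np.array([80.0, 20.0])                  # PM concentrations at the monitors
pm = (w * obs[:, None]).sum(axis=0) / w.sum(axis=0)
print(pm.reshape(5, 5).round(1))
```

Cells separated from a monitor by the high-cost ridge receive a larger effective distance, so the map bends around the ridge instead of blending straight across it.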
Monotonicity preserving splines using rational cubic Timmer interpolation
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. Three parameters are introduced to control the shape of the interpolant. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and the proposed rational cubic Timmer interpolant gives visually very pleasing results.
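The rational cubic Timmer form is not available in standard libraries, but the monotonicity problem it solves is easy to demonstrate: an unconstrained cubic spline overshoots on monotone data with a sharp rise, while a shape-preserving interpolant (here PCHIP, used only as a stand-in for the paper's constrained spline) does not:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])   # monotone data with a sharp rise

xs = np.linspace(0.0, 4.0, 401)
mono = PchipInterpolator(x, y)(xs)        # shape-preserving interpolant
free = CubicSpline(x, y)(xs)              # unconstrained C2 spline

print(bool(np.all(np.diff(mono) >= 0)))  # True: monotonicity is preserved
print(bool(np.all(np.diff(free) >= 0)))  # False: the plain spline overshoots
```

The shape parameters of the rational cubic play the role that the derivative limiting plays in PCHIP: they tighten the curve near the sharp rise so the interpolant never locally reverses direction.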
Experimental Performance of Spatial Interpolators for Ground Water Salinity
International Nuclear Information System (INIS)
Alsaaran, Nasser A.
2005-01-01
Mapping groundwater qualities requires either sampling on a fine regular grid or spatial interpolation. The latter is usually used because the cost of the former is prohibitive. Experimental performance of five spatial interpolators for groundwater salinity was investigated using cross validation. The methods included ordinary kriging (OK), lognormal kriging, inverse distance, inverse squared distance and inverse cubed distance. The results show that OK outperformed other interpolators in terms of bias. Interpolation accuracy based on mean absolute difference criterion is relatively high for all interpolators with small difference among them. While three-dimensional surfaces produced by all inverse distance based procedures are dominated by isolated peaks and pits, surfaces produced by kriging are free from localized pits and peaks, and show areas of low groundwater salinity as elongated basins and areas of high salinity as ridges, which make regional trends easy to identify. Considering all criteria, OK was judged to be the most suitable spatial interpolator for groundwater salinity in this study. (author)
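The cross-validation used to score the interpolators can be sketched as leave-one-out prediction: each well is held out in turn and predicted from the rest. This sketch uses synthetic salinity values and only the inverse-distance family (the kriging variants from the study are omitted):

```python
import numpy as np

def idw_predict(xy, z, target, power):
    """Inverse-distance prediction at one unsampled location."""
    d = np.linalg.norm(xy - target, axis=1)
    w = 1.0 / d ** power
    return (w * z).sum() / w.sum()

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(40, 2))               # synthetic well locations
z = 2.0 + 0.05 * xy[:, 0] + rng.normal(0, 0.2, 40)   # salinity with a regional trend

# Leave-one-out cross-validation: mean error = bias, MAD = accuracy
for p in (1, 2, 3):
    errs = np.array([z[i] - idw_predict(np.delete(xy, i, 0), np.delete(z, i), xy[i], p)
                     for i in range(len(z))])
    print(f"power={p}: bias={errs.mean():+.3f}  MAD={np.abs(errs).mean():.3f}")
```

The bias criterion in the study corresponds to the mean of these errors; the mean absolute difference criterion corresponds to the MAD column.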
Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation
Directory of Open Access Journals (Sweden)
Hezerul Abdul Karim
2004-09-01
Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno, 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.
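The contrast between motion-compensated interpolation and the simple-averaging baseline can be shown with a toy single-level block matcher (an assumed, simplified sketch, not the paper's multiresolution algorithm): the matched block is blended and placed at half its displacement, whereas averaging leaves two half-intensity ghosts.

```python
import numpy as np

def block_match_interpolate(f0, f2, block=8, search=4):
    """Interpolate the frame halfway between grayscale frames f0 and f2."""
    h, w = f0.shape
    mid = np.zeros_like(f0, dtype=float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ref = f0[by:by + block, bx:bx + block]
            best, best_err = (0, 0), np.inf
            # full search; a multiresolution scheme would refine coarse
            # vectors instead of scanning the whole window at full resolution
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    err = np.abs(ref - f2[y:y + block, x:x + block]).sum()
                    if err < best_err:
                        best, best_err = (dy, dx), err
            dy, dx = best
            tgt = f2[by + dy:by + dy + block, bx + dx:bx + dx + block]
            # place the blended block at half the displacement
            my = min(max(by + dy // 2, 0), h - block)
            mx = min(max(bx + dx // 2, 0), w - block)
            mid[my:my + block, mx:mx + block] = 0.5 * (ref + tgt)
    return mid

f0 = np.zeros((16, 16)); f0[4:8, 2:6] = 1.0   # a bright square...
f2 = np.zeros((16, 16)); f2[4:8, 6:10] = 1.0  # ...moved 4 px to the right
mid = block_match_interpolate(f0, f2)
avg = 0.5 * (f0 + f2)   # naive averaging: two half-intensity ghosts
```

At low frame rates displacements grow beyond any practical `search` window at full resolution, which is exactly the failure mode the multiresolution estimators in the paper address.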
Lynch, K. A.; Gayetsky, L.; Fernandes, P. A.; Zettergren, M. D.; Lessard, M.; Cohen, I. J.; Hampton, D. L.; Ahrns, J.; Hysell, D. L.; Powell, S.; Miceli, R. J.; Moen, J. I.; Bekkeng, T.
2012-12-01
Auroral precipitation can modify the ionospheric thermal plasma through a variety of processes. We examine and compare the events seen by two recent auroral sounding rockets carrying in situ thermal plasma instrumentation. The Cascades2 sounding rocket (March 2009, Poker Flat Research Range) traversed a pre-midnight poleward boundary intensification (PBI) event distinguished by a stationary Alfvenic curtain of field-aligned precipitation. The MICA sounding rocket (February 2012, Poker Flat Research Range) traveled through irregular precipitation following the passage of a strong westward-travelling surge. Previous modelling of the ionospheric effects of auroral precipitation used a one-dimensional model, TRANSCAR, which had a simplified treatment of electric fields and did not have the benefit of in situ thermal plasma data. This study uses a new two-dimensional model which self-consistently calculates electric fields to explore both spatial and temporal effects, and compares to thermal plasma observations. A rigorous understanding of the ambient thermal plasma parameters and their effects on the local spacecraft sheath and charging is required for quantitative interpretation of in situ thermal plasma observations. To complement this TRANSCAR analysis we therefore require a reliable means of interpreting in situ thermal plasma observations. This interpretation depends upon a rigorous plasma sheath model, since the ambient ion energy is on the order of the spacecraft's sheath energy. A self-consistent PIC model is used to model the spacecraft sheath, and a test-particle approach then predicts the detector response for a given plasma environment. The model parameters are then modified until agreement is found with the in situ data. We find that for some situations, the thermal plasma parameters are strongly driven by the precipitation at the observation time. For other situations, the previous history of the precipitation at that position can have a stronger
Data interpolation using rational cubic Ball spline with three parameters
Karim, Samsul Ariffin Abdul
2016-11-01
Data interpolation is an important task for scientific visualization. This research introduces a new rational cubic Ball spline scheme with three parameters. The rational cubic Ball spline is used for data interpolation with or without true derivative values. Error estimates show that the proposed scheme works well and is a very good interpolant for approximating functions. All graphical examples are presented using Mathematica software.
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-03
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
An Evaluation of Interpol's Cooperative-Based Counterterrorism Linkages
Todd Sandler; Daniel G. Arce; Walter Enders
2011-01-01
This paper evaluates the payback from efforts of the International Criminal Police Organization (Interpol) to coordinate proactive counterterrorism measures by its member countries to arrest terrorists and weaken their ability to conduct operations. We use Interpol arrest data and data on utilization of Interpol resources by member countries to compute counterfactual benefit measurements, which, when matched with costs, yield benefit-cost ratios. The average of these ratios is approximately 2...
Distance-two interpolation for parallel algebraic multigrid
International Nuclear Information System (INIS)
Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M
2007-01-01
In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers
Indeterminacy of interpolation problems in the Stieltjes class
International Nuclear Information System (INIS)
Dyukarev, Yu M
2005-01-01
The concept of ordered families of interpolation problems in the Stieltjes class is introduced. Ordered families are used for the introduction of the concept of limiting interpolation problem in the same class. The limiting interpolation problem is proved to be soluble. A criterion for the complete indeterminacy of a limiting interpolation problem in the Stieltjes class is obtained. All solutions in the completely indeterminate case are described in terms of linear fractional transformations. General constructions are illustrated by the examples of the Stieltjes moment problem and the Nevanlinna-Pick problem in the Stieltjes class.
Levy, Jonathan; Pernet, Cyril; Treserras, Sébastien; Boulanouar, Kader; Aubry, Florent; Démonet, Jean-François; Celsis, Pierre
2009-08-18
Neuropsychological data about the forms of acquired reading impairment provide a strong basis for the theoretical framework of the dual-route cascade (DRC) model, which is predictive of reading performance. However, lesions are often extensive and heterogeneous, thus making it difficult to establish precise functional anatomical correlates. Here, we provide a connective neural account with the aim of accommodating the main principles of the DRC framework and making predictions on reading skill. We located prominent reading areas using fMRI and applied structural equation modeling to pinpoint distinct neural pathways. Functionality of regions together with neural network dissociations between words and pseudowords corroborate the existing neuroanatomical view on the DRC and provide a novel outlook on the sub-regions involved. In a similar vein, congruent (or incongruent) reliance of pathways, that is reliance on the word (or pseudoword) pathway during word reading and on the pseudoword (or word) pathway during pseudoword reading, predicted good (or poor) reading performance as assessed by out-of-magnet reading tests. Finally, inter-individual analysis unraveled an efficient reading style mirroring pathway reliance as a function of the fingerprint of the stimulus to be read, suggesting an optimal pattern of cerebral information trafficking which leads to high reading performance.
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation based on 3D facial data is well supported by research using laser scans and advanced 3D tools for producing complex facial models. However, this approach still lacks facial expression based on emotional condition. Facial skin colour is needed to enhance facial expression, as it is closely related to human emotion. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are almost identical to genuine human expressions and also enhance the facial expression of the virtual human.
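The two blending operations named above are standard: linear interpolation between two colours, and bilinear interpolation over a 2D parameter square. A minimal sketch (the RGB tones and the emotion axes are illustrative assumptions, not the paper's data):

```python
import numpy as np

def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation over a square of four corner colours."""
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

neutral = [224, 172, 105]   # assumed neutral skin tone
angry   = [255, 120, 100]   # assumed reddened tone
half = lerp(neutral, angry, 0.5)          # halfway blend of the two tones
print(half)

pale    = [230, 200, 170]
flushed = [250, 140, 120]
blend = bilerp(neutral, angry, pale, flushed, 0.25, 0.5)
print(blend)
```

Linear interpolation animates a single emotional transition; the bilinear form lets two emotion intensities vary independently, which is what makes mixed expressions possible.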
GA Based Rational cubic B-Spline Representation for Still Image Interpolation
Directory of Open Access Journals (Sweden)
Samreen Abbas
2016-12-01
Full Text Available In this paper, an image interpolation scheme is designed for 2D natural images. A locally supported rational cubic spline with control parameters, used as the interpolatory function, is optimized using a Genetic Algorithm (GA). The GA is applied to determine appropriate values of the control parameters used in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, along with a traditional one, are employed for comparison with existing image interpolation schemes and for a perceptual quality check of the resulting images. The results show that the proposed scheme outperforms the existing ones.
Electronic structure interpolation via atomic orbitals
Energy Technology Data Exchange (ETDEWEB)
Chen Mohan; Guo, G-C; He Lixin, E-mail: helx@ustc.edu.cn [Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026 (China)
2011-08-17
We present an efficient scheme for accurate electronic structure interpolation based on systematically improvable optimized atomic orbitals. The atomic orbitals are generated by minimizing the spillage value between the atomic basis calculations and the converged plane wave basis calculations on some coarse k-point grid. They are then used to calculate the band structure of the full Brillouin zone using the linear combination of atomic orbitals algorithm. We find that usually 16-25 orbitals per atom can give an accuracy of about 10 meV compared to the full ab initio calculations, and the accuracy can be systematically improved by using more atomic orbitals. The scheme is easy to implement and robust, and works equally well for metallic systems and systems with complicated band structures. Furthermore, the atomic orbitals have much better transferability than Shirley's basis and Wannier functions, which is very useful for perturbation calculations.
Gorji, Taha; Sertel, Elif; Tanik, Aysegul
2017-12-01
Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines including soil sciences for analysing, predicting and mapping distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is Tuz Lake Basin in Turkey, bearing ecological and economic importance. Fertile soil plays a significant role in agricultural activities, which is one of the main industries having great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has exacerbated its condition regarding agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for interpolation analysis, in which approximately 80% of the data was used for interpolation modelling whereas the remainder was used for validation of the predicted results. The relationship between validation points and their corresponding estimated values at the same locations is examined by conducting linear regression analysis. Eight prediction maps generated from two different interpolation methods for soil organic matter, phosphorus, lime and boron parameters
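The RBF branch of this workflow, fit on roughly 80% of the samples, predict the held-out 20%, then regress predictions on observations, can be sketched with SciPy (synthetic "soil" samples, not the Tuz Lake data; the kernel choice is an assumption):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import linregress

rng = np.random.default_rng(42)
xy = rng.uniform(0, 10, size=(100, 2))                       # sample locations
organic = np.sin(xy[:, 0]) + 0.1 * xy[:, 1] + rng.normal(0, 0.05, 100)

# ~80% of points fit the interpolator, the remaining 20% validate it
train_xy, test_xy = xy[:80], xy[80:]
model = RBFInterpolator(train_xy, organic[:80], kernel='thin_plate_spline')
pred = model(test_xy)

# Linear regression of predictions on observations, as in the study's check
fit = linregress(organic[80:], pred)
print(f"slope={fit.slope:.2f}  R^2={fit.rvalue**2:.2f}")
```

A slope near 1 and a high R² at the validation points indicate the interpolated surface generalizes beyond the sampled locations; LPI would be validated the same way for comparison.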
Emmer, Adam; Mergili, Martin; Juřicová, Anna; Cochachin, Alejo; Huggel, Christian
2016-04-01
particularly challenging test case for the currently developed, GIS-based two-phase dynamic mass flow model r.avaflow. Whilst the test results are very promising, lessons learned for r.avaflow model are the need for (i) an improved concept to determine the flow boundaries; and (ii) thorough parameter tests. High demands on the resolution and quality of the DEM are revealed. From our event and modelling analysis we conclude the following: mass movements in the headwaters of hydrologically connected lake and river systems may affect the catchment in complex and cascading ways. Flood and mass flow magnitudes can be both intensified or attenuated along the pathway. Geomorphological analysis and related modelling efforts may elucidate the related hazards as a basis to reduce the associated risks to downstream communities and infrastructures. Keywords: cascading processes, dam failure, glacial lake outburst flood (GLOF), high-mountain lakes, r.avaflow
The role of interpolation in PVC-induced cardiomyopathy.
Olgun, Hilal; Yokokawa, Miki; Baman, Timir; Kim, Hyungjin Myra; Armstrong, William; Good, Eric; Chugh, Aman; Pelosi, Frank; Crawford, Thomas; Oral, Hakan; Morady, Fred; Bogun, Frank
2011-07-01
Frequent premature ventricular complexes (PVCs) can cause cardiomyopathy. The mechanism is not known and may be multifactorial. This study assessed the role of PVC interpolation in PVC-induced cardiomyopathy. In 51 consecutive patients (14 women, age 49 ± 15 years, ejection fraction (EF) 0.49 ± 0.14) with frequent PVCs, 24-hour Holter recordings were performed. The amount of interpolation was determined and correlated with the presence of PVC-induced cardiomyopathy. In addition, parameters measured during an electrophysiology study were correlated with the Holter findings. Fourteen of the 21 patients (67%) with cardiomyopathy had interpolated PVCs, compared with only 6 of 30 patients (20%) without PVC-induced cardiomyopathy. Patients with interpolation had a higher PVC burden than patients without interpolation (28% ± 12% vs. 15% ± 15%; P = .002). The burden of interpolated PVCs correlated with the presence of PVC cardiomyopathy (21% ± 30% vs. 4% ± 13%; P = .008). Both PVC burden and interpolation independently predicted PVC-induced cardiomyopathy (odds ratio 1.07, 95% confidence interval 1.01 to 1.13, P = .02; and odds ratio 4.43, 95% confidence interval 1.06 to 18.48, P = .04, respectively). The presence of ventriculoatrial block at a ventricular pacing cycle length of 600 ms correlated with the presence of interpolation (P = .004). Patients with interpolation had a longer mean ventriculoatrial block cycle length than patients without interpolated PVCs (520 ± 110 ms vs. 394 ± 92 ms; P = .01). The presence of interpolated PVCs was predictive of the presence of PVC cardiomyopathy. Interpolation may play an important role in the generation of PVC-induced cardiomyopathy. Copyright © 2011 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
DEFF Research Database (Denmark)
Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John
2015-01-01
A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...
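The compression idea, projecting a response onto a truncated Legendre basis and trading coefficients for error, can be sketched with NumPy's Legendre module (a smooth stand-in function, not HRTF data; the azimuth range is mapped to [-1, 1] as the basis requires):

```python
import numpy as np
from numpy.polynomial import legendre

az = np.linspace(-1, 1, 181)                     # azimuth mapped to [-1, 1]
response = np.exp(-3 * az**2) * np.cos(4 * az)   # stand-in magnitude response

for order in (4, 8, 16):
    coeffs = legendre.legfit(az, response, deg=order)
    err = np.max(np.abs(legendre.legval(az, coeffs) - response))
    print(f"order {order:2d}: {order + 1:2d} coefficients, max error {err:.1e}")
```

The number of coefficients needed for a given error is exactly the "spatial complexity" indicator described in the abstract, and evaluating `legval` at unmeasured azimuths is the interpolation step.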
Energy Technology Data Exchange (ETDEWEB)
Davenport, C. M.
1977-02-01
The mathematical basis for an ultraprecise digital differential analyzer circuit for use as a parabolic interpolator on numerically controlled machines has been established, and scaling and other error-reduction techniques have been developed. An exact computer model is included, along with typical results showing tracking to within an accuracy of one part per million.
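The principle behind a DDA parabolic interpolator (a sketch of the idea, not the reported circuit) is that a parabola has a constant second forward difference, so it can be traced with additions only, no multiplications in the stepping loop:

```python
def dda_parabola(a, b, c, h, steps):
    """Trace y = a*x^2 + b*x + c at x = 0, h, 2h, ... using only additions."""
    y = c                        # y(0)
    d1 = a * h * h + b * h       # first difference y(h) - y(0)
    d2 = 2 * a * h * h           # constant second difference
    out = [y]
    for _ in range(steps):
        y += d1                  # one addition per output point...
        d1 += d2                 # ...plus one addition to update the slope
        out.append(y)
    return out

ys = dda_parabola(a=1.0, b=0.0, c=0.0, h=0.5, steps=4)
print(ys)   # exactly [0.0, 0.25, 1.0, 2.25, 4.0] for y = x^2
```

In a fixed-point hardware implementation the accumulated rounding of `d1` is the dominant error source, which is where the scaling and error-reduction techniques mentioned in the abstract come in.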
Vegter, H.; van den Boogaard, Antonius H.
2006-01-01
An anisotropic plane stress yield function based on interpolation by second order Bézier curves is proposed. The parameters for the model are readily derived by four mechanical tests: a uniaxial, an equi-biaxial and a plane strain tensile test and a shear test. In case of planar anisotropy, this set
International Nuclear Information System (INIS)
Nordlund, Kai; Sand, Andrea E.; Granberg, Fredric; Zinkle, Steven J.; Stoller, Roger; Averback, Robert S.; Suzudo, Tomoaki; Malerba, Lorenzo; Banhart, Florian; Weber, William J.; Willaime, Francois; Dudarev, Sergei; Simeone, David
2015-01-01
Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Multi-scale Modelling of Fuels and Structural Materials for Nuclear Systems (WPMM) was established in 2008 to assess the scientific and engineering aspects of fuels and structural materials, aiming at evaluating multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation, and related topics. It also provides member countries with up-to-date information, shared data, models and expertise. The WPMM Expert Group on Primary Radiation Damage (PRD) was established in 2009 to determine the limitations of the NRT-dpa standard, in the light of both atomistic simulations and known experimental discrepancies, to revisit the NRT-dpa standard and to examine the possibility of proposing a new improved standard of primary damage characteristics. This report reviews the current understanding of primary radiation damage from neutrons, ions and electrons (excluding photons, atomic clusters and more exotic particles), with emphasis on the range of validity of the 'displacement per atom' (dpa) concept in all major classes of materials with the exception of organics. The report also introduces an 'athermal recombination-corrected dpa' (arc-dpa) relation that uses a relatively simple functional to address the well-known issue that 'displacement per atom' (dpa) overestimates damage production in metals under energetic displacement cascade conditions, as well as a 'replacements-per-atom' (rpa) equation, also using a relatively simple functional, that accounts for the fact that dpa is understood to severely underestimate actual atom relocation (ion beam mixing) in metals. (authors)
Diabat Interpolation for Polymorph Free-Energy Differences.
Kamat, Kartik; Peters, Baron
2017-02-02
Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
Participant intimacy: A cluster analysis of the intranuclear cascade
International Nuclear Information System (INIS)
Cugnon, J.; Knoll, J.; Randrup, J.
1981-01-01
The intranuclear cascade for relativistic nuclear collisions is analyzed in terms of clusters consisting of groups of nucleons which are dynamically linked to each other by violent interactions. The formation cross sections for the different cluster types as well as their intrinsic dynamics are studied and compared with the predictions of the linear cascade model (rows-on-rows). (orig.)
The interpolation damage detection method for frames under seismic excitation
Limongelli, M. P.
2011-10-01
In this paper a new procedure, referred to as the Interpolation Damage Detection Method (IDDM), is investigated as a possible means for early detection and location of light damage in a structure struck by an earthquake. Damage is defined in terms of the accuracy of a spline function in interpolating the operational mode shapes (ODS) of the structure. At a certain location, a statistically meaningful decrease of accuracy with respect to a reference configuration points out a localized variation of the operational shapes, thus revealing the existence of damage. In this paper, the proposed method is applied to a numerical model of a multistory frame, simulating a damaged condition through a reduction of the story stiffness. Several damage scenarios have been considered, and the results indicate the effectiveness of the method in assessing and localizing damage for the case of concentrated damage and for low to medium levels of noise in the recorded signals. The main advantage of the proposed algorithm is that it requires neither a numerical model of the structure nor intensive data post-processing or user interaction. The ODS are calculated from frequency response functions, hence responses recorded on the structure can be used directly without the need for modal identification. Furthermore, the local character of the feature chosen to detect damage makes the IDDM less sensitive to noise and to environmental changes than other damage detection methods. For these reasons the IDDM appears to be a valid option for automated post-earthquake damage assessment, able to provide reliable information about the location of damage after an earthquake.
Abdel-Waged, Khaled; Uzhinskii, V V
2011-01-01
We describe how various hadronic cascade models, which are implemented in the GEANT4 toolkit, describe proton and charged pion transverse momentum spectra from p + Cu and Pb collisions at 3, 8, and 15 GeV/c, recently measured in the hadron production (HARP) experiment at CERN. The Binary, ultrarelativistic quantum molecular dynamics (UrQMD), and modified FRITIOF (FTF) hadronic cascade models are chosen for investigation. The first two models are based on limited (Binary) and branched (UrQMD) binary scattering between cascade particles, which can be either baryons or mesons, in the three-dimensional space of the nucleus, while the latter (FTF) considers collective interactions between nucleons only, in the plane of impact parameter. It is found that the slow (p(T) < 0.3 GeV/c) proton spectra are not strongly affected by the differences between the FTF and UrQMD models. It is also shown that the UrQMD and FTF combined with Binary (FTFB) models could reproduce both proton and charged pion spectra from p + Cu and Pb...
Thomas A. Kirk; William J. Zielinski
2009-01-01
We used field surveys and Geographic Information System data to identify landscape-scale habitat associations of American martens (Martes americana) and to develop a model to predict their occurrence in northeastern California. Systematic surveys using primarily enclosed track plates, with 10-km spacing, were conducted across a 27,700 km
Gupta, R. P.; Banerjee, Malay; Chandra, Peeyush
2014-07-01
The present study investigates a prey-predator type model for conservation of ecological resources through taxation with nonlinear harvesting. The model uses the harvesting function proposed by Agnew (1979) [1], which accounts for the handling time of the catch and also the competition between standard vessels being utilized for harvesting of resources. In this paper we consider a three-dimensional dynamic effort prey-predator model with Holling type-II functional response. The conditions for uniform persistence of the model have been derived. The existence and stability of a bifurcating periodic solution through Hopf bifurcation have been examined for a particular set of parameter values. Using numerical examples it is shown that the system admits periodic, quasi-periodic and chaotic solutions. It is observed that the system exhibits a period-doubling route to chaos with respect to tax. Many forms of complexity, such as chaotic bands (including periodic windows, period-doubling bifurcations, period-halving bifurcations and attractor crises) and chaotic attractors, have been observed. Sensitivity analysis is carried out and it is observed that the solutions are highly sensitive to the initial conditions. Pontryagin's Maximum Principle has been used to obtain an optimal tax policy that maximizes the monetary social benefit as well as conservation of the ecosystem.
Shape Preserving Interpolation Using C2 Rational Cubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2016-01-01
Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi, while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. All presented numerical results show that the new C2 rational cubic spline gives very smooth interpolating curves compared with some established rational cubic schemes. An error analysis for the case where the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.
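The positivity problem motivating such schemes can be seen with an ordinary (non-rational) cubic segment. The sketch below is not the paper's rational spline; it evaluates a plain cubic Hermite segment on positive data, with a central-difference end slope (an assumed, illustrative derivative choice), and shows the interpolant dipping below zero between positive data points:

```python
def hermite(f0, f1, d0, d1, t):
    """Cubic Hermite segment on [0, 1] with end values f0, f1 and
    end derivatives d0, d1."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * d0 + h01 * f1 + h11 * d1

# positive data (0, 0.1), (1, 0.1), (2, 5); central-difference slope at x = 1
d1 = (5 - 0.1) / 2
mid = hermite(0.1, 0.1, 0.0, d1, 0.5)
print(mid)  # ≈ -0.206: the curve undershoots zero between positive data
```

A shape-preserving rational spline constrains its parameters (here, γi) so that such undershoots cannot occur.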
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
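The component-function view can be sketched directly: the Lagrange formula writes the interpolant as a sum of data values times basis polynomials L_j(x), each equal to 1 at its own node and 0 at the others. A minimal sketch (nodes and data values invented for illustration):

```python
def lagrange_basis(xs, j, x):
    """Evaluate the j-th Lagrange basis polynomial L_j(x) for nodes xs."""
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            result *= (x - xm) / (xs[j] - xm)
    return result

def lagrange_interpolate(xs, ys, x):
    """Interpolating polynomial p(x) = sum_j y_j * L_j(x)."""
    return sum(y * lagrange_basis(xs, j, x) for j, y in enumerate(ys))

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]   # samples of x^2 + x + 1
print(lagrange_interpolate(xs, ys, 1.5))  # 4.75 (= 1.5^2 + 1.5 + 1)
```

Plotting each y_j * L_j(x) separately is exactly the kind of graphical decomposition the article discusses.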
An efficient interpolation filter VLSI architecture for HEVC standard
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed. It saves 19.7 % of the processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
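For context on what such hardware computes: HEVC's luma half-pixel samples come from an 8-tap FIR filter. The sketch below applies the half-pel coefficients from the H.265 specification to a 1-D row; the boundary handling and everything else about the paper's VLSI design are simplified away:

```python
# HEVC 8-tap luma half-pel filter coefficients (H.265 specification),
# normalized by 64 after accumulation.
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]

def half_pel_row(pixels):
    """Half-pixel samples for a 1-D row; output k lies midway between
    pixels[k+3] and pixels[k+4]. No edge padding (illustrative only)."""
    out = []
    for i in range(len(pixels) - 7):
        acc = sum(c * p for c, p in zip(HALF_PEL, pixels[i:i + 8]))
        out.append((acc + 32) >> 6)  # round and divide by 64
    return out

row = [10, 10, 10, 10, 20, 20, 20, 20, 20]
print(half_pel_row(row))  # [15, 21]
```

The overshoot to 21 near the step is the normal ringing of a sharp interpolation filter, not a bug.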
Some observations on interpolating gauges and non-covariant gauges
Indian Academy of Sciences (India)
We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...
Algorithm for applying interpolation in digital signal processing ...
African Journals Online (AJOL)
Software-defined radios and test equipment use a variety of digital signal processing techniques to improve system performance. Interpolation is one technique that can be used to increase the sample rate of digital signals. In this work, we illustrated interpolation in the time domain by writing appropriate codes using ...
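A minimal time-domain sketch of interpolation as sample-rate increase, assuming the common zero-stuffing-plus-FIR approach with a simple linear-interpolation kernel (this is a generic illustration, not the article's own code):

```python
def upsample2_linear(x):
    """Double the sample rate: insert a zero after every sample, then
    apply the linear-interpolation FIR kernel [0.5, 1.0, 0.5], which
    acts as a simple anti-imaging filter."""
    # zero-stuff to twice the length
    z = []
    for s in x:
        z.extend([s, 0.0])
    # same-length convolution with zero-padded edges
    kernel = [0.5, 1.0, 0.5]
    y = []
    for n in range(len(z)):
        acc = 0.0
        for i, c in enumerate(kernel):
            j = n + i - 1
            if 0 <= j < len(z):
                acc += c * z[j]
        y.append(acc)
    return y

# original samples survive at even indices; midpoints appear at odd ones
print(upsample2_linear([0.0, 1.0, 2.0]))  # [0.0, 0.5, 1.0, 1.5, 2.0, 1.0]
```

The trailing 1.0 is an edge effect of the zero-padded convolution; real designs use longer low-pass kernels for better image rejection.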
Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...
African Journals Online (AJOL)
In the current world of science and technology, interpolation problems are often of a fuzzy type; fuzzy interpolation has many scientific applications in developmental work, medical issues, imaging, engineering software and graphics. Therefore, in this article we intend to investigate interpolation of fuzzy data in order to apply fuzzy ...
Selection of an Appropriate Interpolation Method for Rainfall Data In ...
African Journals Online (AJOL)
There are many interpolation methods in use with various limitations and likelihood of errors. This study applied five interpolation methods to existing rainfall data in central Nigeria to determine the most appropriate method that returned the best prediction of rainfall at an ungauged site. The methods include the inverse ...
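One of the tested methods, inverse distance weighting, can be sketched as follows (the gauge coordinates and rainfall values below are invented for illustration; the study's data and the other four methods are not reproduced):

```python
import math

def idw(known, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from
    known = [(xi, yi, value_i), ...]."""
    num = den = 0.0
    for xi, yi, v in known:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v  # exact hit at a gauge
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

gauges = [(0.0, 0.0, 100.0), (10.0, 0.0, 200.0)]
print(idw(gauges, 5.0, 0.0))  # equidistant from both gauges: 150.0
```

Cross-validation (predicting each gauge from the others and comparing with the observed value) is the usual way to rank such methods against alternatives like kriging.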
Optimal interpolation schemes for particle tracking in turbulence
van Hinsberg, M.A.T.; ten Thije Boonkkamp, J.H.M.; Toschi, F.; Clercx, H.J.H.
2013-01-01
An important aspect in numerical simulations of particle-laden turbulent flows is the interpolation of the flow field needed for the computation of the Lagrangian trajectories. The accuracy of the interpolation method has direct consequences for the acceleration spectrum of the fluid particles and
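A common baseline for such flow-field interpolation is (bi/tri)linear interpolation at the particle position; a 2-D sketch assuming a regular grid (the paper studies higher-order schemes, which this simple method only serves to contrast):

```python
def bilinear(field, dx, x, y):
    """Bilinearly interpolate a 2-D velocity component stored on a regular
    grid field[i][j] with spacing dx, at particle position (x, y)."""
    i, j = int(x // dx), int(y // dx)
    tx, ty = x / dx - i, y / dx - j
    f00, f10 = field[i][j], field[i + 1][j]
    f01, f11 = field[i][j + 1], field[i + 1][j + 1]
    return ((1 - tx) * (1 - ty) * f00 + tx * (1 - ty) * f10
            + (1 - tx) * ty * f01 + tx * ty * f11)

u = [[0.0, 0.0], [1.0, 1.0]]  # u increases linearly with x
print(bilinear(u, 1.0, 0.5, 0.5))  # 0.5
```

Because bilinear interpolation is only C0 across cell faces, its derivative jumps there, which is exactly the kind of artifact that contaminates particle acceleration spectra.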
THE EFFECTIVENESS OF INTERPOL'S ROLE IN COMBATING NARCOTICS NETWORKS IN INDONESIA
RICHARD LIU, VINSENSIUS
2013-01-01
This study aims to identify and explain the effectiveness of Interpol's role in combating narcotics networks in Indonesia, Interpol's strategy in handling international narcotics networks in Indonesia, and the stance of the Indonesian government in combating international narcotics networks. The author limits this study to a three-year period, 2009-2011. The research type used to achieve the research objective is descriptive. Tekn...
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
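One classical irregularity-preserving construction in this spirit is random midpoint displacement, which interpolates between known values while injecting scale-dependent noise controlled by a Hurst exponent. This is a generic 1-D sketch, not the specific scheme of the abstract:

```python
import random

def midpoint_displacement(v0, v1, levels, H=0.5, sigma=1.0, seed=1):
    """1-D fractal interpolation between endpoint values v0 and v1 by
    random midpoint displacement; H is the Hurst exponent (roughness)."""
    rng = random.Random(seed)
    values = [v0, v1]
    for level in range(levels):
        scale = sigma * 2.0 ** (-(level + 1) * H)  # noise shrinks per level
        new = []
        for a, b in zip(values, values[1:]):
            new.extend([a, 0.5 * (a + b) + rng.gauss(0.0, scale)])
        new.append(values[-1])
        values = new
    return values

profile = midpoint_displacement(0.0, 1.0, levels=4)
print(len(profile))  # 17 points: endpoints preserved, irregular interior
```

Unlike kriging or splines, the result honors the data while keeping rough small-scale variability, which is the property the abstract argues porosity and conductivity fields require.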
Input variable selection for interpolating high-resolution climate ...
African Journals Online (AJOL)
Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...
Steady State Stokes Flow Interpolation for Fluid Control
DEFF Research Database (Denmark)
Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert
2012-01-01
Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow...
MAP Kinase Cascades in Plant Innate Immunity
Directory of Open Access Journals (Sweden)
Magnus Wohlfahrt Rasmussen
2012-07-01
Full Text Available Plant mitogen-activated protein kinase (MAPK) cascades generally transduce extracellular stimuli into cellular responses. These stimuli include the perception of pathogen-associated molecular patterns (PAMPs) by host transmembrane pattern recognition receptors (PRRs) which trigger MAPK-dependent innate immune responses. In the model Arabidopsis, molecular genetic evidence implicates a number of MAPK cascade components in PAMP signaling, and in responses to immunity-related phytohormones such as ethylene, jasmonate and salicylate. In a few cases, cascade components have been directly linked to the transcription of target genes or to the regulation of phytohormone synthesis. Thus MAPKs are obvious targets for bacterial effector proteins and are likely guardees of resistance (R) proteins, which mediate defense signaling in response to the action of effectors, or effector-triggered immunity (ETI). This mini-review discusses recent progress in this field with a focus on the Arabidopsis MAPKs MPK3, 4, 6 and 11 in their apparent pathways.
Cassidy, Adam R; White, Matthew T; DeMaso, David R; Newburger, Jane W; Bellinger, David C
2016-10-01
To establish executive function (EF) structure/organization and test a longitudinal developmental cascade model linking processing speed (PS) and EF skills at 8 years of age to academic achievement outcomes, at both 8 and 16 years, in a large sample of children/adolescents with surgically repaired dextro-transposition of the great arteries (d-TGA). Data for this study come from the 8-year (n = 155) and 16-year (n = 139) time points of the Boston Circulatory Arrest Study and included WISC-III, Trail Making Test, Test of Variables of Attention, and WIAT/WIAT-II tasks. A 2-factor model (Working Memory/Inhibition and Shifting) provided the best fit for the EF data (χ²(3) = 1.581, p = .66, RMSEA = 0, CFI = 1, NNFI = 1.044). The Working Memory/Inhibition and Shifting factors were not correlated. In the structural equation model, PS was directly related to both EF factors and to Reading at 8 years, and was indirectly related to Math and Reading achievement, both concurrently and longitudinally, via its effects on Working Memory/Inhibition. Shifting at 8 years was significantly associated with Math (but not Reading) at 16 years. The academic difficulties experienced by children and adolescents with d-TGA may be driven, at least in part, by underlying deficits in processing speed and aspects of executive function. Intervention efforts aimed at bolstering these abilities, particularly if implemented early in development, may prove beneficial in improving academic outcomes and, perhaps by extension, in reducing the stress and diminished self-confidence often associated with academic underachievement. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Hu, Yufei; Chen, Zhiyu; Zhuang, Chuxiong; Huang, Jilei
2017-06-01
Transferred DNA (T-DNA) from Agrobacterium tumefaciens can be integrated into the plant genome. The double-stranded break repair (DSBR) pathway is a major model for T-DNA integration. From this model, we expect that the two ends of a T-DNA molecule would invade a single DNA double-stranded break (DSB) or independent DSBs in the plant genome. We call the latter phenomenon heterogeneous T-DNA integration, which has never been observed. In this work, we demonstrated it in an Arabidopsis T-DNA insertion mutant, seb19. To resolve the chromosomal structural changes caused by T-DNA integration at both the nucleotide and chromosome levels, we performed inverse PCR, genome resequencing, fluorescence in situ hybridization and linkage analysis. We found that, in seb19, a single T-DNA connected two different chromosomal loci and caused complex chromosomal rearrangements. The specific break-junction pattern in seb19 is consistent with the result of heterogeneous T-DNA integration but not of recombination between two T-DNA insertions. We demonstrated that, in seb19, heterogeneous T-DNA integration evoked a cascade of incorrect repair of seven DSBs on chromosomes 4 and 5, and then produced translocation, inversion, duplication and deletion. Heterogeneous T-DNA integration supports the DSBR model and suggests that the two ends of a T-DNA molecule could be integrated into the plant genome independently. Our results also show a new origin of chromosomal abnormalities. © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.
South Cascade (USA/North Cascades)
Bidlake, William R.
2011-01-01
The U.S. Geological Survey has closely monitored this temperate mountain glacier since the late 1950s. During 1958-2007, the glacier retreated about 0.7 km and shrank in area from 2.71 to 1.73 km2, although part of the area change was due to separation of contributing ice bodies from the main glacier. Maximum and average glacier thicknesses are about 170 and 80 m, respectively. Year-to-year variations of snow accumulation amounts on the glacier are largely attributable to the regional maritime climate and fluctuating climate conditions of the North Pacific Ocean. Long-term-average precipitation is about 4500 mm and most of that falls as snow during October through May. Average annual air temperature at 1,900 m altitude (the approximate ELA0) was estimated to be 1.6°C during 2000-2009. Mass balances are computed yearly by the direct glaciological method. Mass balances measured at selected locations are used in an interpolation and extrapolation procedure that computes the mass balance at each point in the glacier surface altitude grid. The resulting mass balance grid is averaged to obtain glacier mass balances. Additionally, the geodetic method has been applied to compute glacier net balances in 1970, 1975, 1977, 1979-80, and 1985-97. Winter snow accumulation on the glacier during 2007/08 and 2008/09 was larger than the long-term (1959-2009) average. The 2007/08 preliminary summer balance (-3510 mm w.e.) was slightly more negative than the long-term average and this yielded a preliminary 2007/08 net balance (-290 mm w.e.), which was less negative than the average for the period of record (-600 mm w.e.). Summer 2009 was uncommonly warm and the preliminary 2008/09 summer balance (-4980 mm w.e.) was more negative than any on record for the glacier. The 2008/09 glacier net balance (-1860 mm w.e.) was among the 10 most negative for the period of net balance record (1953-2009). Material presented here is preliminary in nature and presented prior to final review. These
Evaluation of Teeth and Supporting Structures on Digital Radiograms using Interpolation Methods
Energy Technology Data Exchange (ETDEWEB)
Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology, School of Dentistry and Institute of Oral Bio Science, Chonbuk National University, Chonju (Korea, Republic of); Chang, Kee Wan [Dept. of Preventive and Community Dentistry, School of Dentistry and Institute of Oral Bio Science, Chonbuk National University, Chonju (Korea, Republic of)
1999-02-15
To determine the effect of interpolation functions when processing the digital periapical images. The digital images were obtained by Digora and CDR system on the dry skull and human subject. 3 oral radiologists evaluated the 3 portions of each processed image using 7 interpolation methods and ROC curves were obtained by trapezoidal methods. The highest Az value(0.96) was obtained with cubic spline method and the lowest Az value(0.03) was obtained with facet model method in Digora system. The highest Az value(0.79) was obtained with gray segment expansion method and the lowest Az value(0.07) was obtained with facet model method in CDR system. There was significant difference of Az value in original image between Digora and CDR system at alpha=0.05 level. There were significant differences of Az values between Digora and CDR images with cubic spline method, facet model method, linear interpolation method and non-linear interpolation method at alpha= 0.1 level.
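The trapezoidal estimate of the area under an ROC curve (the Az values reported above) can be sketched as follows; the operating points below are invented for illustration, not taken from the study:

```python
def trapezoidal_auc(fpr, tpr):
    """Area under an ROC curve from (FPR, TPR) operating points via the
    trapezoidal rule; points must be sorted by FPR and should span
    (0, 0) to (1, 1)."""
    area = 0.0
    points = list(zip(fpr, tpr))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # trapezoid between two points
    return area

# a single interior operating point at FPR = 0.5, TPR = 0.9
print(round(trapezoidal_auc([0.0, 0.5, 1.0], [0.0, 0.9, 1.0]), 3))  # 0.7
```

An Az of 0.5 (the chance diagonal) means no discrimination; values near 1.0 indicate near-perfect detection.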
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them apply these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. An experiment on background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting and model-free methods after background correction. These background correction methods all acquire larger SBR values than that acquired before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776. Moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
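The core idea (estimate the continuous background by interpolating through line-free channels, then subtract it) can be sketched as follows. Linear segments stand in for the paper's spline, and the spectrum is synthetic, so this only illustrates the workflow:

```python
def linear_baseline(x, anchors):
    """Piecewise-linear background through anchor points [(x, y), ...]
    chosen in line-free regions. (The paper uses a spline; linear
    segments keep this sketch short.)"""
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * y0 + t * y1
    raise ValueError("x outside anchor range")

# synthetic spectrum: sloping continuous background + one emission line
xs = list(range(11))
background = [2.0 + 0.1 * x for x in xs]
spectrum = [b + (30.0 if x == 5 else 0.0) for x, b in zip(xs, background)]

anchors = [(0, spectrum[0]), (10, spectrum[10])]  # line-free channels
corrected = [s - linear_baseline(x, anchors) for x, s in zip(xs, spectrum)]
print(max(corrected))  # ≈ 30.0: the line survives, the background is gone
```

After subtraction, a signal-to-background ratio computed on the corrected spectrum rises, which is the quantity the study uses to compare correction methods.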
The Use of Wavelets in Image Interpolation: Possibilities and Limitations
Directory of Open Access Journals (Sweden)
M. Grgic
2007-12-01
Full Text Available Discrete wavelet transform (DWT) can be used in various applications, such as image compression and coding. In this paper we examine how DWT can be used in image interpolation. The proposed method is then compared with two other traditional interpolation methods. For a magnified image obtained by interpolation, the original image is unknown and there is no perfect way to judge the magnification quality. A common approach is to start with an original image, generate a lower-resolution version of it by downscaling, and then use different interpolation methods to magnify the low-resolution image. After that, the original and magnified images are compared to evaluate the difference between them using different picture quality measures. Our results show that the comparison of image interpolation methods depends on the downscaling technique, image contents and quality metric. For a fair comparison all these parameters need to be considered.
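The evaluation protocol described (downscale, re-magnify, compare with a quality measure) commonly uses PSNR as the picture quality metric; a sketch assuming flattened pixel lists and an 8-bit peak value (the paper's own metrics may differ):

```python
import math

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

orig = [100, 110, 120, 130]
magnified = [101, 109, 121, 129]  # hypothetical re-magnified pixels
print(round(psnr(orig, magnified), 2))  # 48.13 dB
```

Because PSNR depends only on pixel-wise error, rankings of interpolation methods can flip when a perceptual metric is used instead, which is one of the paper's points.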
Interpolation from Grid Lines: Linear, Transfinite and Weighted Method
DEFF Research Database (Denmark)
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2017-01-01
When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid … of transfinite method close to grid lines, and the stability of the linear method. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates for two data sets. Depending on the upsampling rate, we show significant difference in the performance of the three methods. … We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all relevant upsampling rates.
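Transfinite interpolation on a unit cell can be sketched with the bilinearly blended Coons formula, which reproduces all four boundary curves exactly. This generic formula is assumed here; the paper's weighted method is not reproduced:

```python
def coons(fu0, fu1, f0v, f1v, u, v):
    """Bilinearly blended Coons (transfinite) interpolation on the unit
    square from four boundary functions f(u,0), f(u,1), f(0,v), f(1,v);
    the corner values must agree, e.g. fu0(0) == f0v(0)."""
    bilinear_corners = ((1 - u) * (1 - v) * fu0(0) + u * (1 - v) * fu0(1)
                        + (1 - u) * v * fu1(0) + u * v * fu1(1))
    return ((1 - v) * fu0(u) + v * fu1(u)      # lofting in v
            + (1 - u) * f0v(v) + u * f1v(v)    # lofting in u
            - bilinear_corners)                # remove doubly counted part

# boundaries of f(u, v) = u + v: the Coons patch reproduces it exactly
fu0 = lambda u: u        # f(u, 0)
fu1 = lambda u: u + 1.0  # f(u, 1)
f0v = lambda v: v        # f(0, v)
f1v = lambda v: 1.0 + v  # f(1, v)
print(coons(fu0, fu1, f0v, f1v, 0.25, 0.5))  # 0.75
```

In the grid-line setting, each grid cell's four bounding scan segments play the role of the boundary curves.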
Scalable Intersample Interpolation Architecture for High-channel-count Beamformers
DEFF Research Database (Denmark)
Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt
2011-01-01
Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58 % and 75 %, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
Matsuura, Naomi; Fujiwara, Takeo; Okuyama, Makiko; Izumi, Mayuko
2013-04-01
This study examined the following hypotheses: (1) a child abuse history (CAH), domestic violence (DV), and child abuse by an intimate partner might have a crucial and specific influence, but act differently, on women's negative mental health; (2) CAH, DV, child abuse by an intimate partner, and negative mental health might be predictors of maternal child abuse, with complex interactions. A self-administered questionnaire survey was conducted among a sample of mothers (N=304) and their children (N=498) staying in 83 Mother-Child Homes in Japan to assess the women's CAH and DV experiences, along with their current mental health problems, including dissociative, depressive, and traumatic symptoms. Structural equation modeling (SEM) was adopted to test whether a complex theoretical model fits the actual relationship among a set of observed measures. Our model confirmed the linkage with broader aspects of violence within the family, such as CAH and DV, focusing on the women's self-reported mental health problems. In addition, CAH, DV, child abuse by an intimate partner, and maternal mental health might have a crucial and specific influence on maternal child abuse. Copyright © 2012 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Mateusz Szcześniak
2015-02-01
Full Text Available Ground-based precipitation data are still the dominant input type for hydrological models. Spatial variability in precipitation can be represented by spatially interpolating gauge data using various techniques. In this study, the effect of daily precipitation interpolation methods on discharge simulations using the semi-distributed SWAT (Soil and Water Assessment Tool) model over a 30-year period is examined. The study was carried out in 11 meso-scale (119–3935 km2) sub-catchments lying in the Sulejów reservoir catchment in central Poland. Four methods were tested: the default SWAT method (Def) based on the Nearest Neighbour technique, Thiessen Polygons (TP), Inverse Distance Weighted (IDW) and Ordinary Kriging (OK). The evaluation of methods was performed using a semi-automated calibration program SUFI-2 (Sequential Uncertainty Fitting Procedure Version 2) with two objective functions: Nash-Sutcliffe Efficiency (NSE) and the adjusted R2 coefficient (bR2). The results show that: (1) the most complex OK method outperformed other methods in terms of NSE; and (2) OK, IDW, and TP outperformed Def in terms of bR2. The median difference in daily/monthly NSE between OK and Def/TP/IDW calculated across all catchments ranged between 0.05 and 0.15, while the median difference between TP/IDW/OK and Def ranged between 0.05 and 0.07. The differences between pairs of interpolation methods were, however, spatially variable, and a part of this variability was attributed to catchment properties: catchments characterised by low station density and a low coefficient of variation of daily flows experienced more pronounced improvement resulting from using interpolation methods. Methods providing higher precipitation estimates often resulted in better model performance. The implication from this study is that appropriate consideration of spatial precipitation variability (often neglected by model users) that can be achieved using relatively simple interpolation methods can
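One of the two objective functions above, the Nash-Sutcliffe Efficiency, compares the sum of squared simulation errors with the variance of the observations about their mean; the observed/simulated discharge series below are invented for illustration:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / SS about the observed mean.
    1.0 is a perfect fit; 0.0 means no better than the mean of the data."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [1.0, 3.0, 2.0, 4.0]
sim = [1.1, 2.8, 2.2, 3.9]
print(round(nse(obs, sim), 3))  # 0.98
```

Because the denominator is the variance of the observed flows, NSE penalizes a model more harshly in catchments with low flow variability, one reason studies also report a second criterion such as bR2.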
Interpolator for numerically controlled machine tools
Bowers, Gary L.; Davenport, Clyde M.; Stephens, Albert E.
1976-01-01
A digital differential analyzer circuit is provided that depending on the embodiment chosen can carry out linear, parabolic, circular or cubic interpolation. In the embodiment for parabolic interpolations, the circuit provides pulse trains for the X and Y slide motors of a two-axis machine to effect tool motion along a parabolic path. The pulse trains are generated by the circuit in such a way that parabolic tool motion is obtained from information contained in only one block of binary input data. A part contour may be approximated by one or more parabolic arcs. Acceleration and initial velocity values from a data block are set in fixed bit size registers for each axis separately but simultaneously and the values are integrated to obtain the movement along the respective axis as a function of time. Integration is performed by continual addition at a specified rate of an integrand value stored in one register to the remainder temporarily stored in another identical size register. Overflows from the addition process are indicative of the integral. The overflow output pulses from the second integration may be applied to motors which position the respective machine slides according to a parabolic motion in time to produce a parabolic machine tool motion in space. An additional register for each axis is provided in the circuit to allow "floating" of the radix points of the integrand registers and the velocity increment to improve position accuracy and to reduce errors encountered when the acceleration integrand magnitudes are small when compared to the velocity integrands. A divider circuit is provided in the output of the circuit to smooth the output pulse spacing and prevent motor stall, because the overflow pulses produced in the binary addition process are spaced unevenly in time. The divider has the effect of passing only every nth motor drive pulse, with n being specifiable. The circuit inputs (integrands, rates, etc.) are scaled to give exactly n times the
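The integration-by-repeated-addition with overflow pulses described above can be sketched in software. Register width and increment are illustrative; the patent's circuit additionally handles two axes, radix-point floating and the pulse-smoothing divider:

```python
def dda_pulses(increment, rate_steps, register_bits=8):
    """Digital differential analyzer stage: repeatedly add `increment`
    into a fixed-size register; each overflow emits one step pulse."""
    modulus = 1 << register_bits
    remainder, pulses = 0, []
    for _ in range(rate_steps):
        remainder += increment
        pulses.append(remainder >= modulus)  # overflow -> motor pulse
        remainder %= modulus                 # keep only the remainder
    return pulses

# increment 64 into an 8-bit register: one pulse every 4 additions
train = dda_pulses(64, 12)
print(sum(train))  # 3 pulses in 12 iterations
```

Cascading two such stages (acceleration integrated into velocity, velocity into position) yields the parabolic pulse trains the patent describes; the uneven pulse spacing visible in `train` is what the output divider smooths.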
The role of station density for predicting daily runoff by top-kriging interpolation in Austria
Directory of Open Access Journals (Sweden)
Parajka Juraj
2015-09-01
Full Text Available Direct interpolation of daily runoff observations to ungauged sites is an alternative to hydrological model regionalisation. Such estimation is particularly important in small headwater basins characterized by sparse hydrological and climate observations, but often large spatial variability. The main objective of this study is to evaluate the predictive accuracy of top-kriging interpolation driven by different numbers of stations (i.e. station densities) in an input dataset. The idea is to interpolate daily runoff for different station densities in Austria and to evaluate the minimum number of stations needed for accurate runoff predictions. Top-kriging efficiency is tested for ten different random samples at ten different station densities. The predictive accuracy is evaluated by ordinary cross-validation and full-sample cross-validation. The methodology is tested by using 555 gauges with daily observations in the period 1987-1997. The results of the cross-validation indicate that, in Austria, top-kriging interpolation is superior to hydrological model regionalisation if station density exceeds approximately 2 stations per 1000 km² (175 stations in Austria). The average median of Nash-Sutcliffe cross-validation efficiency is larger than 0.7 for densities above 2.4 stations/1000 km². For such densities, the variability of runoff efficiency is very small over the ten random samples. Lower runoff efficiency is found for low station densities (less than 1 station/1000 km²) and in some smaller headwater basins.
Functions with disconnected spectrum sampling, interpolation, translates
Olevskii, Alexander M
2016-01-01
The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...
Spatiotemporal video deinterlacing using control grid interpolation
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state of the art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
Rainfall variation by geostatistical interpolation method
Directory of Open Access Journals (Sweden)
Glauber Epifanio Loureiro
2013-08-01
Full Text Available This article analyses the variation of rainfall in the Tocantins-Araguaia hydrographic region in the last two decades, based upon the rain gauge stations of the ANA (Brazilian National Water Agency) HidroWeb database for the years 1983, 1993 and 2003. The information was systemized and treated with hydrologic methods such as contour mapping and ordinary kriging interpolation. The treatment considered the consistency of the data, the density of the spatial distribution of the stations and the periods of study. The results demonstrated that the total volume of water precipitated annually did not change significantly in the 20 years analyzed. However, a significant variation occurred in its spatial distribution. Analysis of the isohyets showed a displacement of the precipitation at Tocantins Baixo (TOB) of approximately 10% of the total precipitated volume. This displacement can be caused by global change, by anthropogenic activities or by regional natural phenomena. However, this paper does not explore possible causes of the displacement.
Spatial interpolation methods for integrating Newton's equation
International Nuclear Information System (INIS)
Gueron, S.; Shalloway, D.
1996-01-01
Numerical integration of Newton's equation in multiple dimensions plays an important role in many fields such as biochemistry and astrophysics. Currently, some of the most important practical questions in these areas cannot be addressed because the large dimensionality of the variable space and complexity of the required force evaluations precludes integration over sufficiently large time intervals. Improving the efficiency of algorithms for this purpose is therefore of great importance. Standard numerical integration schemes (e.g., leap-frog and Runge-Kutta) ignore the special structure of Newton's equation that, for conservative systems, constrains the force to be the gradient of a scalar potential. We propose a new class of "spatial interpolation" (SI) integrators that exploit this property by interpolating the force in space rather than (as with standard methods) in time. Since the force is usually a smoother function of space than of time, this can improve algorithmic efficiency and accuracy. In particular, an SI integrator solves the one- and two-dimensional harmonic oscillators exactly with one force evaluation per step. A simple type of time-reversible SI algorithm is described and tested. Significantly improved performance is achieved on one- and multi-dimensional benchmark problems. 19 refs., 4 figs., 1 tab
Xue, Fei; Bompard, Ettore; Huang, Tao; Jiang, Lin; Lu, Shaofeng; Zhu, Huaiying
2017-09-01
As the modern power system is expected to develop into a more intelligent and efficient version, i.e. the smart grid, or to serve as the central backbone of the energy internet for free energy interactions, security concerns related to cascading failures have been raised in view of their potentially catastrophic results. Research on topological analysis based on complex networks has made great contributions to revealing structural vulnerabilities of power grids, including cascading failure analysis. However, existing literature with inappropriate modeling assumptions still cannot distinguish the effects of structure from those of the operational state, and so cannot give meaningful guidance for system operation. This paper reveals the interrelation between network structure and operational states in cascading failure and gives a quantitative evaluation integrating both perspectives. For structure analysis, cascading paths are identified by extended betweenness and quantitatively described by cascading drop and cascading gradient. Furthermore, the operational state along cascading paths is described by loading level. Then, the risk of cascading failure along a specific cascading path can be quantitatively evaluated considering these two factors. The maximum cascading gradient over all possible cascading paths can be used as an overall metric to evaluate the entire power grid for its features related to cascading failure. The proposed method is tested and verified on the IEEE 30-bus and IEEE 118-bus systems; simulation evidence presented in this paper suggests that the proposed model can identify the structural causes of cascading failure and is promising to give meaningful guidance for the protection of system operation in the future.
DEFF Research Database (Denmark)
Damgaard, Mads
2018-01-01
Through a content analysis of 8,800 news items and six months of front pages in three Brazilian newspapers, all dealing with corruption and political transgression, this article documents the remarkable skew of media attention to corruption scandals. The bias is examined as an information...... phenomenon, arising from systemic and commercial factors of Brazil’s news media: An information cascade of news on corruption formed, destabilizing the governing coalition and legitimizing the impeachment process of Dilma Rousseff. As this process gained momentum, questions of accountability were disregarded...... by the media, with harmful effects on democracy....
DEFF Research Database (Denmark)
Guo, Xiaoqiang; Jia, X.; Lu, Z.
2016-01-01
Leakage current reduction is one of the important issues for transformerless PV systems. In this paper, the transformerless single-phase cascaded H-bridge PV inverter is investigated. The common mode model for the cascaded H4 inverter is analyzed. And the reason why the conventional cascade H...
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but the method does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional spectral quaternion interpolation. Firstly, we decomposed diffusion tensors, with the direction of tensors being represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of tensors, but also preserve tensor anisotropy at the same time. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
Geothermal research, Oregon Cascades: Final technical report
Energy Technology Data Exchange (ETDEWEB)
Priest, G.R.; Black, G.L.
1988-10-27
Previous USDOE-funded geothermal studies have produced an extensive temperature gradient and heat flow data base for the State of Oregon. One of the important features identified as a result of these studies is a rapid transition from heat flow values on the order of 40 mW/m² in the Willamette Valley and Western Cascades to values of greater than or equal to 100 mW/m² in the High Cascades and the eastern portion of the Western Cascades. These data indicate that the Cascade Range in Oregon has potential as a major geothermal province and stimulated much of the later work completed by government agencies and private industry. Additional data generated as a result of this grant and published in DOGAMI Open-File Report 0-86-2 further define the location and magnitude of this transition zone. In addition, abundant data collected from the vicinity of Breitenbush and Austin Hot Springs have permitted the formulation of relatively detailed models of these hydrothermal systems. These models are published in DOGAMI Open-File Report 0-88-5. Task 1.2 of the Deliverables section of Amendment M001 is fulfilled by DOGAMI publication GMS-48, Geologic map of the McKenzie Bridge quadrangle, Lane County, Oregon. This map was printed in October, 1988, and is part of the final submission to USDOE. 8 refs.
Trivariate Local Lagrange Interpolation and Macro Elements of Arbitrary Smoothness
Matt, Michael Andreas
2012-01-01
Michael A. Matt constructs two trivariate local Lagrange interpolation methods which yield optimal approximation order and C^r macro-elements based on the Alfeld and the Worsey-Farin split of a tetrahedral partition. The first interpolation method is based on cubic C^1 splines over type-4 cube partitions, for which numerical tests are given. The second is the first trivariate Lagrange interpolation method using C^2 splines. It is based on arbitrary tetrahedral partitions using splines of degree nine. The author constructs trivariate macro-elements based on the Alfeld split, where each tetrahedron
INTERPOL DVI best-practice standards--An overview.
Sweet, David
2010-09-10
A description of the International Criminal Police Organization and its role in disaster victim identification is provided along with a summary of the standards developed and circulated to responders in INTERPOL member countries (188 throughout the world) to ensure evidence-based DVI practices. Following the INTERPOL-mediated DVI response in 2005 to the SE Asia tsunami, many lessons learned have been recorded. Based on these current standards, INTERPOL's approach to DVI reflects a modern approach and philosophy. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Four-Point n-Ary Interpolating Subdivision Schemes
Directory of Open Access Journals (Sweden)
Ghulam Mustafa
2013-01-01
Full Text Available We present an efficient and simple algorithm to generate 4-point n-ary interpolating schemes. Our algorithm is based on three simple steps: second divided differences, determination of position of vertices by using second divided differences, and computation of new vertices. It is observed that 4-point n-ary interpolating schemes generated by completely different frameworks (i.e., Lagrange interpolant and wavelet theory can also be generated by the proposed algorithm. Furthermore, we have discussed continuity, Hölder regularity, degree of polynomial generation, polynomial reproduction, and approximation order of the schemes.
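For concreteness, the classical binary 4-point scheme (the best-known member of this family, with the Dyn-Levin-Gregory stencil and tension weight w = 1/16) can be sketched as follows; the endpoint clamping is an ad hoc simplification for illustration, not part of the authors' n-ary algorithm:

```python
def four_point_refine(pts, w=1 / 16):
    """One level of the classical binary 4-point interpolating scheme
    (Dyn-Levin-Gregory). Old points are kept; each new midpoint uses the
    stencil (-w, 1/2 + w, 1/2 + w, -w). Endpoints are handled by clamping
    indices, an illustrative simplification."""
    n = len(pts)
    out = []
    for i in range(n - 1):
        p0 = pts[max(i - 1, 0)]
        p1, p2 = pts[i], pts[i + 1]
        p3 = pts[min(i + 2, n - 1)]
        out.append(p1)  # interpolating: original points survive refinement
        out.append(-w * p0 + (0.5 + w) * p1 + (0.5 + w) * p2 - w * p3)
    out.append(pts[-1])
    return out
```

On collinear data the interior midpoints reproduce the underlying linear polynomial, which is the degree-1 case of the polynomial reproduction discussed in the abstract.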
Availability Cascades & the Sharing Economy
DEFF Research Database (Denmark)
Netter, Sarah
2014-01-01
In search of a new concept that will provide answers as to how modern societies should not only make sense of but also resolve the social and environmental problems linked with our modes of production and consumption, collaborative consumption and the sharing economy are increasingly attracting...... attention. This conceptual paper attempts to explain the emergent focus on the sharing economy and associated business and consumption models by applying cascade theory. Risks associated with this behavior will be especially examined with regard to the sustainability claim of collaborative consumption....... With academics, practitioners, and civil society alike having a shared history in being rather fast in accepting new concepts that will not only provide business opportunities but also a good conscience, this study proposes a critical study of the implications of collaborative consumption, before engaging...
An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA
Directory of Open Access Journals (Sweden)
Jiye HUANG
2014-05-01
Full Text Available This paper presents an improved minimum error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step-size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which make it highly suited for real-time applications.
Directory of Open Access Journals (Sweden)
COJOCARU ŞTEFANA
2014-03-01
Full Text Available Spatial interpolation, in the context of spatial analysis, can be defined as the derivation of new data from already known information, a technique frequently used to predict and quantify spatial variation of a certain property or parameter. In this study we compared the performance of Inverse Distance Weighted (IDW), Ordinary Kriging and Natural Neighbor techniques, applied in spatial interpolation of precipitation parameters (pH, electrical conductivity and total dissolved solids). These techniques are often used when the area of interest is relatively small and the sampled locations are regularly spaced. The methods were tested on data collected in Iasi city (Romania) between March and May 2013. Spatial modeling was performed on a small dataset, consisting of 7 sample locations and 13 different known values of each analyzed parameter. The precision of the techniques used is directly dependent on sample density as well as data variation, greater fluctuations in values between locations causing a decrease in the accuracy of the methods used. To validate the results and reveal the best method of interpolating rainfall characteristics, a leave-one-out cross-validation approach was used. Comparing residuals between the known values and the estimated values of pH, electrical conductivity and total dissolved solids revealed that Natural Neighbor generates the smallest residuals for pH and electrical conductivity, whereas IDW presents the smallest error in interpolating total dissolved solids (the parameter with the highest fluctuations in value).
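Of the three techniques compared above, IDW is the simplest to state. A minimal sketch (a textbook IDW estimator, not the GIS implementation used in the study):

```python
def idw(x, y, samples, power=2):
    """Inverse Distance Weighted estimate at (x, y) from
    samples = [(xi, yi, value_i), ...]. Weights are 1/d**power;
    the estimate is exact at sample points. Textbook IDW, used
    here only to illustrate the method compared in the study."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return vi  # interpolator honours the data exactly
        w = 1.0 / d2 ** (power / 2.0)
        num += w * vi
        den += w
    return num / den
```

Note that IDW estimates always lie between the minimum and maximum sampled values, one reason it behaves robustly on the small, fluctuating datasets described above.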
MODIS Snow Cover Recovery Using Variational Interpolation
Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Cloud obscuration is one of the major problems that limit the usage of satellite images in general and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational deficiency is a main drawback that degrades applying the method at larger scales (i.e., spatial and temporal scales). To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking up when applied to much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method, an ability that is obtained by maintaining the distribution of the weights set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images have high accuracy in capturing the dynamical changes of snow in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which is able to provide an overview of snow trends over CONUS for nearly two decades. ACKNOWLEDGMENTS: We would like to acknowledge NASA, NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), Cooperative Institute for Climate and Satellites (CICS), Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.
International Nuclear Information System (INIS)
Lindenbaum, S.J.; Foley, K.J.; Eiseman, S.E.
1988-01-01
We have developed and successfully tested a TPC Magnetic Spectrometer to search for QGP signals produced by ion beams at the AGS. We also developed a cascade and plasma event generator, the predictions of which are used to illustrate how our technique can detect possible plasma signals. 4 refs., 6 figs., 1 tab
Interpolation and Inversion - New Features in the Matlab Seismic Anisotropy Toolbox
Walker, A.; Wookey, J. M.
2015-12-01
A key step in studies of seismic anisotropy in the mantle is often the creation of models designed to explain its physical origin. We previously released MSAT (the Matlab Seismic Anisotropy Toolbox), which includes a range of functions that can be used together to build these models and provide geological or geophysical insight given measurements of, for example, shear-wave splitting. Here we describe some of the new features of MSAT that will be included in a new release timed to coincide with the 2015 Fall Meeting. A critical step in testing models of the origin of seismic anisotropy is the determination of the misfit between shear-wave splitting parameters predicted from a model and measured from seismic observations. Is a model that correctly reproduces the delay time "better" than a model that correctly reproduces the fast polarization? We have introduced several new methods that use both parameters to calculate the misfit in a meaningful way and these can be used as part of an inversion scheme in order to find a model that best matches measured shear wave splitting. Our preferred approach involves the creation, "splitting", and "unsplitting" of a test wavelet. A measure of the misfit is then provided by the normalized second eigenvalue of the covariance matrix of particle motion for the two wavelets in a way similar to that used to find splitting parameters from data. This can be used as part of an inverse scheme to find a model that can reproduce a set of shear-wave splitting observations. A second challenge is the interpolation of elastic constants between two known points. Naive element-by-element interpolation can result in anomalous seismic velocities from the interpolated tensor. We introduce an interpolation technique involving both the orientation (defined in terms of the eigenvectors of the dilatational or Voigt stiffness tensor) and magnitude of the two end-member elastic tensors. This permits changes in symmetry between the end-members and removes
Efficient Algorithms and Design for Interpolation Filters in Digital Receiver
Directory of Open Access Journals (Sweden)
Xiaowei Niu
2014-05-01
Full Text Available Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. The polynomial-based interpolation filters can be implemented efficiently by using a modified Farrow structure with an arbitrary frequency response. The filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing the weighted mean squared error function, converting the task into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maximum (MiniMax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verified that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
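The Farrow structure mentioned above evaluates a polynomial in the fractional delay mu whose coefficients are the outputs of fixed branch filters. A minimal sketch of that final combining step only (the branch filters themselves are omitted and their outputs assumed given):

```python
def farrow_interpolate(branch_outputs, mu):
    """Combine Farrow-structure branch outputs via Horner's rule.

    branch_outputs[k] is the output of the k-th polynomial branch filter
    at the current sample; mu in [0, 1) is the fractional delay. This is
    the generic Farrow combiner; the number of branches and their
    coefficients are design choices, assumed computed elsewhere."""
    acc = 0.0
    for b in reversed(branch_outputs):
        acc = acc * mu + b  # Horner evaluation of sum_k b_k * mu**k
    return acc
```

Because mu enters only through this Horner loop, changing the resampling ratio at run time costs nothing: the branch filters are fixed and only the multiplier inputs change, which is the efficiency argument behind the Farrow structure.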
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
This paper analyses different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results...... are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...... of field quantities. The properties of different interpolation functions are analysed using numerical examples, including the classical cantilevered beam problem....
[Multimodal medical image registration using cubic spline interpolation method].
He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan
2007-12-01
Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, then registration is carried out by using a mutual information algorithm, and finally the improved principal component analysis method is used for the fusion of PET-CT multimodal images to enhance the visual effect of the PET image; thus satisfactory registration and fusion results are obtained. The cubic spline interpolation method is used for reconstruction to restore the missing information between image slices, which can compensate for the shortcomings of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
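To illustrate interpolating intensities between slices at a single pixel, the local cubic below (a Catmull-Rom segment, used here only as a stand-in for the paper's cubic spline reconstruction; it is exact at the slice positions) shows the idea:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic (Catmull-Rom) interpolation between p1 and p2 at t in [0, 1].

    p0..p3 are intensities of four consecutive slices at one pixel; the
    curve passes through p1 (t=0) and p2 (t=1), estimating the missing
    inter-slice value. Illustrative stand-in, not the paper's exact
    spline formulation."""
    return 0.5 * (
        2 * p1
        + (p2 - p0) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (3 * p1 - 3 * p2 + p3 - p0) * t ** 3
    )
```

Applying such a cubic per pixel along the slice axis restores smooth variation between slices, which is the reconstruction step the registration accuracy depends on.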
Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition
Energy Technology Data Exchange (ETDEWEB)
Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)
2009-11-30
This paper proposes a method for cardiac arrhythmia recognition using a nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of the P-wave, QRS-complexes, and T-wave. An iterated function system (IFS) uses nonlinear interpolation in the map and uses similarity maps to construct various data sequences including the fractal patterns of supraventricular ectopic beat, bundle branch ectopic beat, and ventricular ectopic beat. Grey relational analysis (GRA) is proposed to recognize normal heartbeat and cardiac arrhythmias. The nonlinear interpolation terms produce family functions with fractal dimension (FD), the so-called nonlinear interpolation function (NIF), and make fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid method demonstrates greater efficiency and higher accuracy in recognizing ECG signals.
PSPLINE: Princeton Spline and Hermite cubic interpolation routines
McCune, Doug
2017-10-01
PSPLINE is a collection of Spline and Hermite interpolation tools for 1D, 2D, and 3D datasets on rectilinear grids. Spline routines give full control over boundary conditions, including periodic, 1st or 2nd derivative match, or divided difference-based boundary conditions on either end of each grid dimension. Hermite routines take the function value and derivatives at each grid point as input, giving back a representation of the function between grid points. Routines are provided for creating Hermite datasets, with appropriate boundary conditions applied. The 1D spline and Hermite routines are based on standard methods; the 2D and 3D spline or Hermite interpolation functions are constructed from 1D spline or Hermite interpolation functions in a straightforward manner. Spline and Hermite interpolation functions are often much faster to evaluate than other representations using e.g. Fourier series or otherwise involving transcendental functions.
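The Hermite representation described above stores, per grid cell, the function value and derivative at each end. A one-cell evaluation sketch using the standard cubic Hermite basis (an illustration of the representation, not PSPLINE's actual API):

```python
def hermite_eval(x0, x1, f0, f1, d0, d1, x):
    """Evaluate the cubic Hermite interpolant on [x0, x1] at x, given
    endpoint values f0, f1 and endpoint derivatives d0, d1 -- the data a
    Hermite routine stores per grid cell. Standard Hermite basis; names
    and signature are illustrative."""
    h = x1 - x0
    t = (x - x0) / h                  # map x to local coordinate in [0, 1]
    h00 = (1 + 2 * t) * (1 - t) ** 2  # basis for f0
    h10 = t * (1 - t) ** 2            # basis for h * d0
    h01 = t * t * (3 - 2 * t)         # basis for f1
    h11 = t * t * (t - 1)             # basis for h * d1
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1
```

Multi-dimensional Hermite (or spline) evaluation then follows by applying this 1D formula along each grid dimension in turn, which is the tensor-product construction the PSPLINE description refers to.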
Some observations on interpolating gauges and non-covariant gauges
Indian Academy of Sciences (India)
covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating ...
Comparing interpolation schemes in dynamic receive ultrasound beamforming
DEFF Research Database (Denmark)
Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav
2005-01-01
conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design....... The SNR is between 21 dB and 45 dB with the polynomial interpolation and between 37 dB and 43 dB with FIR filtering. In the synthetic aperture imaging modality the difference in mean SLMLR ranges from 14 dB to 33 dB and 6 dB to 31 dB for the polynomial and FIR filtering schemes respectively. By using a proper
A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation
Directory of Open Access Journals (Sweden)
Mingzhu Li
2014-01-01
Full Text Available The main aim of this work is to consider a meshfree algorithm for solving Burgers’ equation with the quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations and overcome the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low order forward difference to approximate the time derivative of the dependent variable. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement and the numerical experiments show that it is feasible and valid.
Rhie-Chow interpolation in strong centrifugal fields
Bogovalov, S. V.; Tronin, I. V.
2015-10-01
Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10⁶ g) occurring in gas centrifuges.
Interpolating and sampling sequences in finite Riemann surfaces
Ortega-Cerda, Joaquim
2007-01-01
We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.
Interpol: An R package for preprocessing of protein sequences.
Heider, Dominik; Hoffmann, Daniel
2011-06-17
Most machine learning techniques currently applied in the literature need a fixed dimensionality of input data. However, this requirement is frequently violated by real input data, such as DNA and protein sequences, that often differ in length due to insertions and deletions. It is also notable that performance in classification and regression is often improved by numerical encoding of amino acids, compared to the commonly used sparse encoding. The software "Interpol" encodes amino acid sequences as numerical descriptor vectors using a database of currently 532 descriptors (mainly from AAindex), and normalizes sequences to uniform length with one of five linear or non-linear interpolation algorithms. Interpol is distributed as an open-source, platform-independent R package. It is typically used for preprocessing of amino acid sequences for classification or regression. The functionality of Interpol widens the spectrum of machine learning methods that can be applied to biological sequences, and it will in many cases improve their performance in classification and regression.
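The length-normalization step, resampling a numeric descriptor vector to a uniform length by linear interpolation, can be sketched as follows (a stand-alone illustration, not code from the R package; the function name `stretch` is hypothetical):

```python
def stretch(values, n):
    """Linearly resample a numeric descriptor vector to length n,
    mimicking length normalization of encoded sequences."""
    m = len(values)
    if m == 1:
        return [float(values[0])] * n
    out = []
    for i in range(n):
        t = i * (m - 1) / (n - 1)   # fractional position in the source vector
        j = min(int(t), m - 2)      # left neighbour index
        frac = t - j
        out.append(values[j] * (1 - frac) + values[j + 1] * frac)
    return out
```

For example, stretch([0, 10], 3) yields [0.0, 5.0, 10.0]; the same call shape covers both up- and down-sampling of encoded sequences.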
Application Of Laplace Interpolation In The Analysis Of Geopotential ...
African Journals Online (AJOL)
difference) method can be applied to regions of high data gradients without distortions and smoothing. However, by itself, this method is not convenient for the interpolation of geophysical data, which often consists of regions of widely variable ...
Cascading Failures as Continuous Phase-Space Transitions
Yang, Yang; Motter, Adilson E.
2017-12-01
In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. Here we derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in a closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-06-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient "global" proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently.
Cascading Generative Adversarial Networks for Targeted
Hamdi, Abdullah
2018-04-09
Abundance of labelled data played a crucial role in the recent developments in computer vision, but that approach faces problems like scalability and transferability to the wild. One alternative is to utilize the data without labels, i.e. unsupervised learning, to learn valuable information and put it to use in tackling vision problems. Generative Adversarial Networks (GANs) have gained momentum for their ability to model image distributions in an unsupervised manner. They learn to emulate the training set, which enables sampling from that domain and using the learned knowledge for useful applications. Several methods have been proposed to enhance GANs, including regularizing the loss with some feature matching. We seek to push GANs beyond the data in the training set and try to explore unseen territory in the image manifold. We first propose a new regularizer for GANs based on K-Nearest Neighbor (K-NN) selective feature matching to a target set Y in high-level feature space, during the adversarial training of the GAN on the base set X, and we call this novel model K-GAN. We show that minimizing the added term follows from cross-entropy minimization between the distributions of the GAN and the set Y. Then, we introduce a cascaded framework for GANs that addresses the task of imagining a new distribution combining the base set X and the target set Y by cascading sampling GANs with translation GANs, and we dub the cascade of such GANs the Imaginative Adversarial Network (IAN). Several cascades are trained on a collected dataset, Zoo-Faces, and generated innovative samples are shown, including from the K-GAN cascade. We conduct an objective and subjective evaluation of different IAN setups in the addressed task of generating innovative samples, and we show the effect of regularizing the GAN on the different scores. We conclude with some useful applications for these IANs, such as multi-domain manifold traversing.
Comparison Searching Process of Linear, Binary and Interpolation Algorithm
Rahim, Robbi; Nurarif, Saiful; Ramadhan, Mukhlis; Aisyah, Siti; Purba, Windania
2017-12-01
Searching is a process that cannot be omitted from transaction and communication processes, and many search algorithms can be used to facilitate it; linear, binary, and interpolation search are some of the algorithms that can be utilized. The three algorithms were compared by testing searches over data of different lengths with a pseudo-process approach, and the result achieved is that the interpolation algorithm is slightly faster than the other two algorithms.
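A minimal interpolation search, the third algorithm compared above, probes where the target should sit if the keys are uniformly distributed, instead of always probing the midpoint as binary search does (a sketch; the paper does not publish its implementation):

```python
def interpolation_search(arr, target):
    """Search a sorted list by estimating the target's index from the
    value range of the current window; returns the index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:
            return lo if arr[lo] == target else -1
        # Linear estimate of the target's position within [lo, hi]
        pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

On uniformly distributed keys the expected cost is O(log log n), versus O(log n) for binary search, which matches the "slightly faster" finding reported above.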
Considerations Related to Interpolation of Experimental Data Using Piecewise Functions
Directory of Open Access Journals (Sweden)
Stelian Alaci
2016-12-01
The paper presents a method for experimental data interpolation by means of a piecewise function, the points where the form of the function changes being found simultaneously with the other parameters utilized in an optimization criterion. The optimization process is based on defining the interpolation function with a single expression founded on the Heaviside function, and on regarding the optimization function as a generalised, infinitely differentiable function. The methodology is exemplified via a tangible example.
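The single-expression idea can be illustrated with a toy two-branch interpolant (hypothetical branch functions, not the paper's optimized ones): the Heaviside step switches between branches at the break point c, so c can be treated as just another parameter of one globally defined function.

```python
def heaviside(x):
    """Unit step: 0 below the origin, 1 at and above it."""
    return 1.0 if x >= 0 else 0.0

def piecewise(x, c, f1, f2):
    """Single-expression piecewise function:
    f1(x) for x < c, f2(x) for x >= c."""
    return f1(x) + heaviside(x - c) * (f2(x) - f1(x))
```

Because the whole interpolant is one formula, an optimizer can adjust the break point c alongside the branch parameters, which is the core of the method described above.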
Data interpolation in the definition of management zones
Schenatto, Kelyn; Universidade Tecnológica Federal do Paraná; Souza, Eduardo Godoy; Universidade Estadual do Oeste do Paraná; Bazzi, Claudio Leones; Universidade Tecnológica Federal do Paraná; Bier, Vanderlei Arthur; Universidade Estadual do Oeste do Paraná; Betzek, Nelson Miguel; Universidade Tecnológica Federal do Paraná; Gavioli, Alan; Universidade Tecnológica Federal do Paraná
2016-01-01
Precision agriculture (PA) comprises the use of management zones (MZs). Sample data are usually interpolated to define MZs. The current research checks whether there is a need for data interpolation by evaluating the quality of MZs with five indices: variance reduction (VR), fuzzy performance index (FPI), modified partition entropy index (MPE), Kappa index and the cluster validation index (CVI), the latter of which is the focus of the current study. Soil texture, soil resistance to penetration, el...
The Interpolation Method for Estimating the Above-Ground Biomass Using Terrestrial-Based Inventory
Directory of Open Access Journals (Sweden)
I Nengah Surati Jaya
2014-09-01
This paper examined several methods for interpolating biomass on logged-over dry land forest using terrestrial-based forest inventory in Labanan, East Kalimantan and Lamandau, Kotawaringin Barat, Central Kalimantan. The plot distances examined were 1,000−1,050 m for Labanan and 1,000−899 m for Lamandau. The main objective of this study was to obtain the interpolation method with the most accurate prediction of the spatial distribution of forest biomass for dry land forest. Two main interpolation methods were examined: (1) a deterministic approach using the IDW method and (2) a geostatistical approach using Kriging with spherical, circular, linear, exponential, and Gaussian models. The study results at both sites consistently showed that the IDW method was better than the Kriging method for estimating the spatial distribution of biomass. The validation results using a chi-square test showed that the IDW interpolation provided accurate biomass estimation. Using the percentage of mean deviation value (MD(%)), it was also recognized that IDW with a power parameter (p) of 2 provided a relatively low value, i.e., only 15% for Labanan, East Kalimantan Province and 17% for Lamandau, Kotawaringin Barat, Central Kalimantan Province. In general, the IDW interpolation method provided better results than Kriging, where the Kriging method provided MD(%) of about 27% and 21% for the Lamandau and Labanan sites, respectively. Keywords: deterministic, geostatistics, IDW, Kriging, above-ground biomass
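The deterministic IDW estimator compared above weights each sample by the inverse of its distance to the prediction point raised to the power p; a minimal sketch (illustrative only, with made-up sample coordinates, not the study's code):

```python
import math

def idw(points, values, query, p=2):
    """Inverse-distance-weighted estimate at `query` from scattered
    (x, y) samples; p is the power parameter (p = 2 in the study)."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v          # exact hit at a sample point
        w = 1.0 / d**p        # inverse-distance weight
        num += w * v
        den += w
    return num / den
```

Raising p gives nearer samples more influence, which is why the study's comparison of power parameters matters for the resulting biomass surface.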
Directory of Open Access Journals (Sweden)
A. Verworn
2011-02-01
Hydrological modelling of floods relies on precipitation data with a high resolution in space and time. A reliable spatial representation of short time step rainfall is often difficult to achieve due to a low network density. In this study hourly precipitation was spatially interpolated with the multivariate geostatistical method kriging with external drift (KED) using additional information from topography, rainfall data from the denser daily networks and weather radar data. Investigations were carried out for several flood events in the time period between 2000 and 2005 caused by different meteorological conditions. The 125 km radius around the radar station Ummendorf in northern Germany covered the overall study region. One objective was to assess the effect of different approaches for estimation of semivariograms on the interpolation performance of short time step rainfall. Another objective was the refined application of the method kriging with external drift. Special attention was not only given to find the most relevant additional information, but also to combine the additional information in the best possible way. A multi-step interpolation procedure was applied to better consider sub-regions without rainfall.
The impact of different semivariogram types on the interpolation performance was low. While it varied over the events, an averaged semivariogram was sufficient overall. Weather radar data were the most valuable additional information for KED for convective summer events. For interpolation of stratiform winter events, using daily rainfall as additional information was sufficient. The application of the multi-step procedure significantly helped to improve the representation of fractional precipitation coverage.
International Nuclear Information System (INIS)
Vurgaftman, I; Meyer, J R; Canedy, C L; Kim, C S; Bewley, W W; Merritt, C D; Abell, J; Weih, R; Kamp, M; Kim, M; Höfling, S
2015-01-01
We review the current status of interband cascade lasers (ICLs) emitting in the midwave infrared (IR). The ICL may be considered the hybrid of a conventional diode laser that generates photons via electron–hole recombination, and an intersubband-based quantum cascade laser (QCL) that stacks multiple stages for enhanced current efficiency. Following a brief historical overview, we discuss theoretical aspects of the active region and core designs, growth by molecular beam epitaxy, and the processing of broad-area, narrow-ridge, and distributed feedback (DFB) devices. We then review the experimental performance of pulsed broad area ICLs, as well as the continuous-wave (cw) characteristics of narrow ridges having good beam quality and DFBs producing output in a single spectral mode. Because the threshold drive powers are far lower than those of QCLs throughout the λ = 3–6 µm spectral band, ICLs are increasingly viewed as the laser of choice for mid-IR laser spectroscopy applications that do not require high output power but need to be hand-portable and/or battery operated. Demonstrated ICL performance characteristics to date include threshold current densities as low as 106 A cm⁻² at room temperature (RT), cw threshold drive powers as low as 29 mW at RT, maximum cw operating temperatures as high as 118 °C, maximum cw output powers exceeding 400 mW at RT, maximum cw wallplug efficiencies as high as 18% at RT, maximum cw single-mode output powers as high as 55 mW at RT, and single-mode output at λ = 5.2 µm with a cw drive power of only 138 mW at RT. (topical review)
Survey: interpolation methods for whole slide image processing.
Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T
2017-02-01
Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images.
Fitzpatrick, Benjamin R; Lamb, David W; Mengersen, Kerrie
2016-01-01
Modern soil mapping is characterised by the need to interpolate point referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regressions models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment.
Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio
2018-04-01
A comparative study was made of three methods of interpolation - inverse distance weighting (IDW), spline and ordinary kriging - after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, QE, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results, in terms of the regression coefficient between each model's predictions and the actual control point field measurements, were obtained with the IDW method.
Risk Assessment of Cascading Outages: Methodologies and Challenges
Energy Technology Data Exchange (ETDEWEB)
Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei
2012-05-31
This paper is a result of ongoing activity carried out by Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers are focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document to summarize the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task and heuristic approaches and approximations have been suggested. This paper discusses different approaches to this and suggests directions for future development of methodologies. The second paper summarizes the state of the art in modeling tools for risk assessment of cascading outages.
Cascade Structure of Digital Predistorter for Power Amplifier Linearization
Directory of Open Access Journals (Sweden)
E. B. Solovyeva
2015-12-01
In this paper, a cascade structure of a nonlinear digital predistorter (DPD) synthesized by the direct learning adaptive algorithm is presented. The DPD is used for linearization of the power amplifier (PA) characteristic, namely for compensation of PA nonlinear distortion. Blocks of the cascade DPD are described by different models: the functional link artificial neural network (FLANN), the polynomial perceptron network (PPN) and the radially pruned Volterra model (RPVM). In synthesizing the cascade DPD it is possible to overcome the ill-conditioning problem by reducing the dimension of the DPD nonlinear operator approximation. Results of compensating nonlinear distortion in a Wiener–Hammerstein model of a PA with a GSM signal with four carriers are shown. The highest accuracy of PA linearization is produced by the cascade DPD containing the PPN and the RPVM.
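The predistortion idea itself can be illustrated with a memoryless toy model (an assumption for illustration; the paper's FLANN/PPN/RPVM blocks and adaptive learning are far richer): if the PA compresses as y = x - 0.1x³, a predistorter that expands as z = x + 0.1x³ approximately cancels the cubic term at small drive levels.

```python
def pa(x):
    """Toy power-amplifier model with cubic compression (assumed)."""
    return x - 0.1 * x**3

def predistort(x):
    """First-order inverse of the toy PA: pre-expand by the same cubic term."""
    return x + 0.1 * x**3

# Residual nonlinear error at drive level 0.5, with and without predistortion
plain_err = abs(pa(0.5) - 0.5)
pd_err = abs(pa(predistort(0.5)) - 0.5)
```

Cascading the predistorter in front of the PA shrinks the residual error roughly by an order of magnitude at this drive level; real DPD structures refine this inverse adaptively.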
Analysis of Interpolation Methods in the Image Reconstruction Tasks
Directory of Open Access Journals (Sweden)
V. T. Nguyen
2017-01-01
The article studies the interpolation methods used for image reconstruction. These methods were also implemented and tested with several images to estimate their effectiveness. The considered interpolation methods are the nearest-neighbor method, the linear method, the cubic B-spline method, the cubic convolution method, and the Lanczos method. For each method the interpolation kernel (interpolation function) and its frequency response (Fourier transform) are presented. As a result of the experiment, the following conclusions were drawn:
- the nearest-neighbor algorithm is very simple and often used, but the reconstructed images contain artifacts (blurring and haloing);
- the linear method is quick and easy to perform and reduces some visual distortion caused by changing the image size, but it still causes a large amount of interpolation artifacts, such as blurring and haloing;
- the cubic B-spline method provides smoothness of the reconstructed images and eliminates the apparent ramp phenomenon, but the interpolation process applies a low-pass filter that suppresses the high-frequency component, leading to fuzzy edges and false artificial traces;
- the cubic convolution method offers interpolation with less distortion, but its algorithm is more complicated and requires more execution time than the nearest-neighbor and linear methods;
- the Lanczos method achieves a high-definition image, but in spite of this great advantage it requires more execution time than the other interpolation methods.
The result obtained not only compares the considered interpolation methods in various aspects, but also enables users to select an appropriate interpolation method for their applications. It is advisable to study further the existing methods and develop new ones using a number of methods
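Two of the kernels discussed, Keys' cubic convolution and Lanczos, can be written down directly (a sketch; the parameter choices a = -0.5 for cubic convolution and a = 3 for Lanczos are common defaults, assumed here):

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def cubic_convolution_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel (a = -0.5 is the usual choice)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0
```

Both kernels equal 1 at the origin and 0 at the other integer offsets, which is what makes the resampled image pass exactly through the original pixel values.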
5-D interpolation with wave-front attributes
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features like the dip and strike of a reflector. The wave-front attributes work on the 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved next to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that
Directory of Open Access Journals (Sweden)
Emile Faye
Bridging the gap between the predictions of coarse-scale climate models and the fine-scale climatic reality of species is a key issue of climate change biology research. While it is now well known that most organisms do not experience the climatic conditions recorded at weather stations, there is little information on the discrepancies between microclimates and the global interpolated temperatures used in species distribution models, and on their consequences for organisms' performance. To address this issue, we examined the fine-scale spatiotemporal heterogeneity in air, crop canopy and soil temperatures of agricultural landscapes in the Ecuadorian Andes and compared them to predictions of global interpolated climatic grids. Temperature time-series were measured in air, canopy and soil for 108 localities at three altitudes and analysed using the Fourier transform. Discrepancies between local temperatures and global interpolated grids, and their implications for pest performance, were then mapped and analysed using a GIS statistical toolbox. Our results showed that global interpolated predictions over-estimate by 77.5 ± 10% and under-estimate by 82.1 ± 12% the local minimum and maximum air temperatures recorded in the studied grid. Additional modifications of local air temperatures were due to the thermal buffering of plant canopies (from -2.7 K during daytime to 1.3 K during night-time) and soils (from -4.9 K during daytime to 6.7 K during night-time), with a significant effect of crop phenology on the buffer effect. These discrepancies between interpolated and local temperatures strongly affected predictions of the performance of an ectothermic crop pest, as interpolated temperatures predicted pest growth rates 2.3-4.3 times lower than those predicted by local temperatures. This study provides quantitative information on the limitation of coarse-scale climate data to capture the reality of the climatic environment experienced by living organisms. In highly
Cascade multiplicity inside deuteron in Π d high energy collisions
International Nuclear Information System (INIS)
Kisielewska, D.
1983-01-01
Multiplicity distribution of double scattering events is analysed using the additive quark model including the cascading effect. The mean multiplicity of particles produced in the process of cascading, estimated for Π d experiments at 100, 205 and 360 GeV/c, is equal to 1.15 ± 0.31. This value does not depend on the momentum of the incident pion. Some indications are found that the probability of cascading depends on the multiplicity of the collision with the first nucleon and is smaller for low multiplicities. (author)
Cascade Apartments: Deep Energy Multifamily Retrofit
Energy Technology Data Exchange (ETDEWEB)
Gordon, A. [Washington State Univ. Energy Program, Olympia, WA (United States); Mattheis, L. [Washington State Univ. Energy Program, Olympia, WA (United States); Kunkle, R. [Washington State Univ. Energy Program, Olympia, WA (United States); Howard, L. [Washington State Univ. Energy Program, Olympia, WA (United States); Lubliner, M. [Washington State Univ. Energy Program, Olympia, WA (United States)
2014-02-01
In December of 2009-10, King County Housing Authority (KCHA) implemented energy retrofit improvements in the Cascade multifamily community, located in Kent, Washington (marine climate). This research effort involved significant coordination among the stakeholders: KCHA, the WA State Department of Commerce, the utility Puget Sound Energy, and Cascade tenants. This report focuses on the following three primary BA research questions: 1. What are the modeled energy savings using DOE low-income weatherization approved TREAT software? 2. How did the modeled energy savings compare with measured energy savings from aggregate utility billing analysis? 3. What is the Savings to Investment Ratio (SIR) of the retrofit package after considering utility window incentives and KCHA capital improvement funding?
Cascade Apartments: Deep Energy Multifamily Retrofit
Energy Technology Data Exchange (ETDEWEB)
Gordon, A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mattheis, L. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kunkle, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Howard, L. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lubliner, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2014-02-01
In December 2009, King County Housing Authority (KCHA) implemented energy retrofit improvements in the Cascade multifamily community, located in Kent, Washington (marine climate). This research effort involved significant coordination among stakeholders: KCHA, the Washington State Department of Commerce, the utility Puget Sound Energy, and Cascade tenants. This report focuses on the following three primary Building America (BA) research questions: 1. What are the modeled energy savings using DOE low-income weatherization approved TREAT software? 2. How did the modeled energy savings compare with measured energy savings from aggregate utility billing analysis? 3. What is the Savings to Investment Ratio (SIR) of the retrofit package after considering utility window incentives and KCHA capital improvement funding?
Garcia, Matthew; Peters-Lidard, Christa D.; Goodrich, David C.
2008-05-01
Inaccuracy in spatially distributed precipitation fields can contribute significantly to the uncertainty of hydrological states and fluxes estimated from land surface models. This paper examines the results of selected interpolation methods for both convective and mixed/stratiform events that occurred during the North American monsoon season over a dense gauge network at the U.S. Department of Agriculture Agricultural Research Service Walnut Gulch Experimental Watershed in the southwestern United States. The spatial coefficient of variation for the precipitation field is employed as an indicator of event morphology, and a gauge clustering factor CF is formulated as a new, scale-independent measure of network organization. We consider that CF > 0 (clustering in the gauge network) will produce errors because of reduced areal representation of the precipitation field. Spatial interpolation is performed using both inverse-distance-weighted (IDW) and multiquadric-biharmonic (MQB) methods. We employ ensembles of randomly selected network subsets for the statistical evaluation of interpolation errors in comparison with the observed precipitation. The magnitude of interpolation errors and differences in accuracy between interpolation methods depend on both the density and the geometrical organization of the gauge network. Generally, MQB methods outperform IDW methods in terms of interpolation accuracy under all conditions, but it is found that the order of the IDW method is important to the results and may, under some conditions, be just as accurate as the MQB method. In almost all results it is demonstrated that the inverse-distance-squared method for spatial interpolation, commonly employed in operational analyses and for engineering assessments, is inferior to the ID-cubed method, which is also more computationally efficient than the MQB method in studies of large networks.
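As a concrete illustration of the IDW family compared above, here is a minimal inverse-distance-weighted interpolator. The gauge coordinates and precipitation values are hypothetical; the `power` exponent selects between the inverse-distance-squared (power=2) and ID-cubed (power=3) variants discussed in the text.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered gauge data."""
    # pairwise distances between query points and observation points
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power   # eps guards against d == 0
    w /= w.sum(axis=1, keepdims=True)       # normalise weights per query
    return w @ z_obs

# Hypothetical gauge network: four gauges, precipitation in mm
gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([10.0, 12.0, 8.0, 14.0])
print(idw(gauges, rain, np.array([[0.5, 0.5]])))  # -> [11.] (equidistant gauges give the mean)
```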
Cascade Mountain Range in Oregon
Sherrod, David R.
2016-01-01
The Cascade mountain system extends from northern California to central British Columbia. In Oregon, it comprises the Cascade Range, which is 260 miles long and, at greatest breadth, 90 miles wide (fig. 1). Oregon’s Cascade Range covers roughly 17,000 square miles, or about 17 percent of the state, an area larger than each of the smallest nine of the fifty United States. The range is bounded on the east by U.S. Highways 97 and 197. On the west it reaches nearly to Interstate 5, forming the eastern margin of the Willamette Valley and, farther south, abutting the Coast Ranges.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Energy Technology Data Exchange (ETDEWEB)
Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)
2015-01-21
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
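The linear-combination-of-transformed-templates idea can be sketched in one dimension. The Gaussian templates and parameter values below are illustrative, and the sketch matches only the first two moments (mean and width) of each template rather than reproducing the paper's full moment morphing prescription.

```python
import numpy as np

def moment_morph(pdfs, mus, sigmas, weights, x):
    """1-D moment morphing sketch: combine templates linearly after
    shifting/scaling each so its mean and width match the interpolated
    moments mu' = sum(c_i * mu_i) and sigma' = sum(c_i * sigma_i)."""
    mu_p = sum(c * m for c, m in zip(weights, mus))
    sig_p = sum(c * s for c, s in zip(weights, sigmas))
    out = np.zeros_like(x)
    for f, mu, sig, c in zip(pdfs, mus, sigmas, weights):
        xt = (x - mu_p) * sig / sig_p + mu   # map back to the template's frame
        out += c * f(xt) * sig / sig_p       # Jacobian preserves normalisation
    return out

def gauss(mu, s):
    return lambda x: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# templates at model-parameter values m=0 and m=1 (illustrative numbers),
# morphed to the midpoint m=0.5 with equal weights
x = np.linspace(-5.0, 8.0, 1301)
f = moment_morph([gauss(0, 1), gauss(2, 1.5)], [0, 2], [1, 1.5], [0.5, 0.5], x)
```

The morphed density stays normalised and its mean and width land between those of the two templates, which is the non-linear behaviour a naive bin-by-bin average of templates would miss.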
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
International Nuclear Information System (INIS)
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern-day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2014-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Long-Haul TCP vs. Cascaded TCP
Feng, Wu-chun
2006-01-01
In this work, we investigate the bandwidth and transfer time of long-haul TCP versus cascaded TCP [5]. First, we discuss the models for TCP throughput. For TCP flows in support of bulk data transfer (i.e., long-lived TCP flows), the TCP throughput models have been derived [2, 3]. These models rely on the congestion-avoidance algorithm of TCP. Though these models cannot be applied with short-lived TCP connections, our interest relative to logistical networking is in longer-li...
Cascade orificial resistive device
Bitsakis, Nicholas; Cassidy, James
1994-07-01
A cascade orificial resistive device for throttling fluid flow which minimizes acoustic noise and internal vibrations is described herein. The device has a hollow body defining a fluid passageway, a plurality of perforated plates mounted within the passageway, a fixed end ring adjacent one end of the perforated plates, and a threadable end ring adjacent an opposite end of the perforated plates to place the plates in compression. Each of the perforated plates is a single piece molded plate having an integral outer ring and an integrally formed center keying mechanism as well as a plurality of orifices. The keying mechanism formed on each plate is designed so that adjacent ones of the plates have their orifices misaligned. In this manner, a pressure drop across each plate is created and the fluid flow through the device is throttled. The device of the present invention has utility in a number of onboard marine vessel systems wherein reduced acoustic noise and internal vibrations are particularly desirable.
A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY
Directory of Open Access Journals (Sweden)
Hussein Atoui
2011-05-01
Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieved performance similar to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).
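For intuition, here is a minimal sketch of the two baselines being compared: plain linear intensity interpolation between adjacent slices, and a toy morphing step that warps one slice along a per-pixel displacement field of the kind a block matching algorithm would estimate. The displacement field is an assumed input here, not computed, and displacements are rounded to integers for brevity.

```python
import numpy as np

def linear_slice_interp(a, b, t):
    """Baseline: blend intensities of two adjacent slices at fraction t."""
    return (1 - t) * a + t * b

def morph_interp(a, b, disp, t):
    """Toy morphing step: warp slice `a` part-way along a displacement
    field `disp` (shape HxWx2, [dy, dx] per pixel), then blend with `b`."""
    h, w = a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yw = np.clip(ys + np.round(t * disp[..., 0]).astype(int), 0, h - 1)
    xw = np.clip(xs + np.round(t * disp[..., 1]).astype(int), 0, w - 1)
    warped = a[yw, xw]
    return (1 - t) * warped + t * b

# with a zero displacement field, morphing reduces to linear interpolation
a = np.arange(16.0).reshape(4, 4)
b = a + 2.0
mid = morph_interp(a, b, np.zeros((4, 4, 2)), 0.5)
```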
International Nuclear Information System (INIS)
Olsson, Magnus.
1993-02-01
A model is proposed for the production of transverse jets from diffractively excited protons. We propose that transverse jets can be obtained from gluonic bremsstrahlung in a way similar to the emission in DIS. Qualitative agreement is obtained between the model and the uncorrected data published by the UA8 collaboration. Perturbative QCD in the MLLA approximation is applied to multiple jet production in e⁺e⁻ annihilation. We propose modified evolution equations for deriving the jet cross sections, defined in the 'k_t' or 'Durham' algorithm. The mean number of jets as a function of the jet resolution is studied, and analytical predictions are compared to the results of MC simulations. We also study a set of differential-difference equations for multiplicity distributions in e⁺e⁻ annihilation, supplemented with appropriate boundary conditions. These equations take into account nonsingular terms in the GLAP splitting functions as well as kinematical constraints related to recoil effects. The presence of retarded terms implies that the cascade develops more slowly and reduces the fluctuations. The solutions agree well with MC simulations and experimental data. (authors)
Data interpolation in the definition of management zones
Directory of Open Access Journals (Sweden)
Kelyn Schenatto
2016-01-01
Full Text Available Precision agriculture (PA) comprises the use of management zones (MZs). Sample data are usually interpolated to define MZs. The current research checks whether there is a need for data interpolation by evaluating the quality of MZs with five indices: variance reduction (VR), fuzzy performance index (FPI), modified partition entropy index (MPE), Kappa index and the cluster validation index (CVI), of which the latter is the focus of the current assay. Soil texture, soil resistance to penetration, elevation and slope in an experimental area of 15.5 ha were employed as attributes for the generation of MZs, correlating them with soybean yield data from the 2011-2012 and 2012-2013 harvests. Data interpolation prior to MZ generation is important to achieve MZs with a smoother contour and a greater reduction in data variance. The kriging interpolator had the best performance. The CVI index proved to be efficient in choosing MZs, with a less subjective decision on the best interpolator or number of MZs.
Ultrarelativistic cascades and strangeness production
International Nuclear Information System (INIS)
Kahana, D.E.; Kahana, S.H.
1998-01-01
A two-phase cascade code, LUCIFER II, developed for the treatment of ultrahigh-energy ion-ion collisions is applied to the production of strangeness at SPS energies √s = 17-20 GeV. This simulation is able to simultaneously describe both hard processes such as Drell-Yan and slower, soft processes such as the production of light mesons by separating the dynamics into two steps: a fast cascade involving only the nucleons in the original colliding relativistic ions followed, after an appropriate delay, by a normal multiscattering of the resulting excited baryons and mesons produced virtually in the first step. No energy loss can take place in the short time interval over which the first cascade takes place. The chief result is a reconciliation of the important Drell-Yan measurements with the apparent success of standard cascades in describing the nucleon stopping and meson production in heavy-ion experiments at the CERN SPS. (orig.)
Ultrarelativistic cascades and strangeness production
Energy Technology Data Exchange (ETDEWEB)
Kahana, D.E. [State Univ. of New York, Stony Brook, NY (United States). Physics Dept.; Kahana, S.H. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1998-02-01
A two-phase cascade, LUCIFER II, developed for the treatment of ultrahigh-energy ion-ion collisions is applied to the production of strangeness at SPS energies. This simulation is able to simultaneously describe both hard processes such as Drell-Yan and slower, soft processes such as the production of light mesons by separating the dynamics into two steps: a fast cascade involving only the nucleons in the original colliding relativistic ions followed, after an appropriate delay, by a normal multiscattering of the resulting excited baryons and mesons produced virtually in the first step. No energy loss can take place in the short time interval over which the first cascade takes place. The chief result is a reconciliation of the important Drell-Yan measurements with the apparent success of standard cascades in describing the nucleon stopping and meson production in heavy-ion experiments at the CERN SPS.
Ultrarelativistic cascades and strangeness production
International Nuclear Information System (INIS)
Kahana, D.E.; Kahana, S.H.
1998-02-01
A two-phase cascade, LUCIFER II, developed for the treatment of ultrahigh-energy ion-ion collisions is applied to the production of strangeness at SPS energies. This simulation is able to simultaneously describe both hard processes such as Drell-Yan and slower, soft processes such as the production of light mesons by separating the dynamics into two steps: a fast cascade involving only the nucleons in the original colliding relativistic ions followed, after an appropriate delay, by a normal multiscattering of the resulting excited baryons and mesons produced virtually in the first step. No energy loss can take place in the short time interval over which the first cascade takes place. The chief result is a reconciliation of the important Drell-Yan measurements with the apparent success of standard cascades in describing the nucleon stopping and meson production in heavy-ion experiments at the CERN SPS.
Research on the DDA Precision Interpolation Algorithm for Continuity of Speed and Acceleration
Directory of Open Access Journals (Sweden)
Kai Sun
2014-05-01
Full Text Available Interpolation technology is critical to the performance of CNC systems and industrial robots. This paper proposes a new precision interpolation algorithm based on an analysis of the root causes of speed and acceleration discontinuities. To satisfy continuity of speed and acceleration in the interpolation process, the paper describes variable-acceleration precision interpolation in, respectively, two stages and three sections. Testing shows that a CNC system can be enhanced significantly by using the new fine interpolation algorithm presented in this paper.
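The paper's variable-acceleration algorithm is not spelled out in the abstract, but the classic DDA (digital differential analyzer) interpolation it refines can be sketched as a simple integer line interpolator, shown here for illustration only:

```python
def dda_line(x0, y0, x1, y1):
    """Classic DDA line interpolation: step along the major axis and
    accumulate the fractional increment on the minor axis."""
    dx, dy = x1 - x0, y1 - y0
    steps = int(max(abs(dx), abs(dy)))
    if steps == 0:
        return [(round(x0), round(y0))]
    xi, yi = dx / steps, dy / steps      # per-step increments
    pts, x, y = [], float(x0), float(y0)
    for _ in range(steps + 1):
        pts.append((round(x), round(y)))  # quantise to the machine grid
        x += xi
        y += yi
    return pts

print(dda_line(0, 0, 5, 2))  # -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

A precision interpolator replaces the constant per-step increment with a speed profile whose acceleration is also continuous, which is the discontinuity the paper targets.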
Computation of inverse magnetic cascades
International Nuclear Information System (INIS)
Montgomery, D.
1981-10-01
Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed
Correlation Structure of Wavelet Cascades
Greiner, Martin; Giesemann, Jens
The following sections are included: * Introduction * Some Basics about Wavelets * Multiresolution analysis * Dilation equations * Wavelet transformation * Multiplicative Haar-Wavelet Cascade * Binary random multiplicative branching processes * n-point correlation densities * Haar-wavelet transformed correlation densities * Daubechies-wavelet transformed correlation densities * Multiplicative Daubechies-Wavelet Cascade * Random multiplicative branching processes on a D4-wavelet tree * n-point correlation densities * Wavelet transformed correlation densities * Scaling behavior of moments * Conclusion * REFERENCES
The analysis of composite laminated beams using a 2D interpolating meshless technique
Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.
2018-02-01
Laminated composite materials are widely implemented in several engineering constructions. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature—the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied and the results are compared with exact solutions available in the literature and with the results obtained from commercial finite element software. The results show the efficiency and accuracy of the proposed numerical technique.
Hoarau, Charlotte; Christophe, Sidonie
2017-05-01
Graphic interfaces of geoportals allow visualizing and overlaying various (visually) heterogeneous geographical data, often by image blending: vector data, maps, aerial imagery, Digital Terrain Models, etc. Map design and geo-visualization may benefit from methods and tools to hybridize, i.e. visually integrate, heterogeneous geographical data and cartographic representations. In this paper, we aim at designing continuous hybrid visualizations between ortho-imagery and symbolized vector data, in order to control a particular visual property, the photo-realism perception. The natural appearance (colors, textures) and various texture effects are used to control the photo-realism level of the visualization: color and texture interpolation blocks have been developed. We present a global design method that allows manipulating the behavior of those interpolation blocks on each type of geographical layer, in various ways, in order to provide various cartographic continua.
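A color interpolation block of the kind described can be as simple as a per-channel linear blend. The RGB values below are hypothetical, with `t` acting as the photo-realism control (t=0 keeps the orthoimage color, t=1 the symbolized map color):

```python
def lerp_color(c_photo, c_map, t):
    """Per-channel linear colour interpolation between a photo-realistic
    orthoimage pixel and its symbolised (map) counterpart."""
    return tuple((1 - t) * a + t * b for a, b in zip(c_photo, c_map))

# hypothetical pixel: vegetation green from imagery vs. white map background
print(lerp_color((90, 120, 60), (255, 255, 255), 0.5))  # -> (172.5, 187.5, 157.5)
```

Texture interpolation blocks behave analogously, blending between texture layers instead of flat channel values.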
Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series
Directory of Open Access Journals (Sweden)
Kanadpriya Basu
2015-08-01
Full Text Available This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods, and second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical interpolation techniques and statistical curve-fitting techniques complement each other and can add value to the study of one-dimensional time series seismographic data: they can be used to add more data to the system in case the data set is not large enough to perform standard statistical tests.
Biexciton cascade emission in multilayered organic nanofibers
Evaristo de Sousa, Leonardo; Ferreira da Cunha, Wiliam; Antônio da Silva Filho, Demétrio; de Oliveira Neto, Pedro Henrique
2018-04-01
The optical performance of multilayered organic nanofibers results from the dynamics of excited states in the system. Here, we show that the presence of biexcitons is crucial to correctly describe such dynamics. This may be the case even if the intensity of the light source is not high. The cascade emission mediated by biexcitons is mainly responsible for the behavior of the photoluminescence profile in the initial steps after light absorption. By using a combination of a Kinetic Monte Carlo model and a Genetic Algorithm, we simulate Time-Resolved Photoluminescence measurements of multilayered nanofibers. These simulations are compared with experimental results, thus revealing that the usual singlet exciton recombination is insufficient to reproduce the complete physical picture. Our results also include predictions for the behavior of the biexciton signal. These findings are observed to be valid for a wide temperature range, showing the importance of the biexciton cascade emission in several regimes for organic nanofibers in general.
Directory of Open Access Journals (Sweden)
Jinyang Song
2018-01-01
Full Text Available Many modulated signals exhibit a cyclostationarity property, which can be exploited in direction-of-arrival (DOA) estimation to effectively eliminate interference and noise. In this paper, our aim is to integrate the cyclostationarity with the spatial domain and enable the algorithm to estimate more sources than sensors. However, DOA estimation with a sparse array is performed in the coarray domain, and the holes within the coarray limit the usage of the complete coarray information. In order to use the complete coarray information to increase the degrees of freedom (DOFs), sparsity-aware methods and difference coarray interpolation methods have been proposed. In this paper, the coarray interpolation technique is further explored with cyclostationary signals. Besides the difference coarray model and its corresponding Toeplitz completion formulation, we build up a sum coarray model and formulate a Hankel completion problem. In order to further improve the performance of the structured matrix completion, we define the spatial spectrum sampling operations and the derivative (conjugate) correlation subspaces, which can be exploited to construct orthogonal constraints for the autocorrelation vectors in the coarray interpolation problem. Prior knowledge of the source interval can also be incorporated into the problem. Simulation results demonstrate that the additional constraints contribute to a remarkable performance improvement.
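The holes that motivate coarray interpolation are easy to exhibit. The sensor positions below are a hypothetical sparse array, not one from the paper:

```python
def difference_coarray(positions):
    """Difference coarray of a sparse array: the set of lags {n_i - n_j}.
    Holes in the contiguous lag range limit the usable DOFs, which is
    what coarray interpolation / matrix completion aims to recover."""
    diffs = {p - q for p in positions for q in positions}
    holes = [lag for lag in range(min(diffs), max(diffs) + 1)
             if lag not in diffs]
    return sorted(diffs), holes

# hypothetical sparse array with 5 sensors on a unit grid
coarray, holes = difference_coarray([0, 1, 2, 6, 10])
print(holes)  # -> [-7, -3, 3, 7]
```

The Toeplitz (difference coarray) and Hankel (sum coarray) completion problems mentioned in the abstract fill exactly these missing lags from the observed autocorrelations.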
Interpolant tree automata and their application in Horn clause verification
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2016-01-01
This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. An evaluation on Horn clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.
Discrete Sine Transform-Based Interpolation Filter for Video Compression
Directory of Open Access Journals (Sweden)
MyungJun Kim
2017-11-01
Full Text Available Fractional-pixel motion compensation in high-efficiency video coding (HEVC) uses an 8-point filter and a 7-point filter, based on the discrete cosine transform (DCT), for the 1/2-pixel and 1/4-pixel interpolations, respectively. In this paper, discrete sine transform (DST)-based interpolation filters (DST-IFs) are proposed for fractional-pixel motion compensation to improve coding efficiency. First, the performance of DST-IFs using 8-point and 7-point filters for the 1/2-pixel and 1/4-pixel interpolations is compared with that of the corresponding DCT-based IFs (DCT-IFs). Then, DST-IFs using 12-point and 11-point filters for the 1/2-pixel and 1/4-pixel interpolations, respectively, are proposed for bi-directional motion compensation only. The 8-point and 7-point DST-IF methods showed average Bjøntegaard Delta (BD) rate reductions of 0.7% and 0.3% in the random access (RA) and low delay B (LDB) configurations, respectively, in HEVC. The 12-point and 11-point DST-IF methods showed average BD-rate reductions of 1.4% and 1.2% for the luma component in the RA and LDB configurations, respectively.
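For reference, HEVC's standard 8-tap DCT-based luma half-sample filter can be applied to a 1-D line of samples as below. A DST-IF would substitute taps derived from the DST basis; the proposed 12- and 11-point coefficients are not given in the abstract, so only the standard DCT-IF is shown:

```python
import numpy as np

# HEVC 8-tap DCT-based luma half-sample filter (coefficients / 64)
DCT_IF_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

def half_pel(samples):
    """Interpolate the half-pel positions of a 1-D line of luma samples;
    edges are handled by sample replication. Output j is the half-sample
    position between input samples j and j+1."""
    padded = np.pad(np.asarray(samples, dtype=float), 3, mode='edge')
    return np.correlate(padded, DCT_IF_HALF, mode='valid')
```

Because the taps sum to one and are symmetric, the filter reproduces constants exactly and gives exact midpoints on linear ramps, while its frequency response (unlike simple bilinear averaging) preserves detail near the Nyquist rate.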
Gribov ambiguities at the Landau-maximal Abelian interpolating gauge
Energy Technology Data Exchange (ETDEWEB)
Pereira, Antonio D.; Sobreiro, Rodrigo F. [UFF-Universidade Federal Fluminense, Instituto de Fisica, Niteroi, RJ (Brazil)
2014-08-15
In a previous work, we presented a new method to account for the Gribov ambiguities in non-Abelian gauge theories. The method consists in the introduction of an extra constraint which directly eliminates the infinitesimal Gribov copies without the usual geometric approach. Such a strategy allows one to treat gauges with a non-hermitian Faddeev-Popov operator. In this work, we apply this method to a gauge which interpolates between the Landau and maximal Abelian gauges. The result is a local and power-counting renormalizable action, free of infinitesimal Gribov copies. Moreover, the interpolating tree-level gluon propagator is derived. (orig.)
Minimum Entropy-Based Cascade Control for Governing Hydroelectric Turbines
Directory of Open Access Journals (Sweden)
Mifeng Ren
2014-06-01
Full Text Available In this paper, an improved cascade control strategy is presented for hydro-turbine speed governors. Different from traditional proportional-integral-derivative (PID) control and model predictive control (MPC) strategies, the performance index of the outer controller is constructed by integrating the entropy and mean value of the tracking error with constraints on control energy. The inner controller is implemented as a proportional controller. Compared with the conventional PID-P and MPC-P cascade control methods, the proposed cascade control strategy can effectively decrease fluctuations of hydro-turbine speed under non-Gaussian disturbance conditions in practical hydropower plants. Simulation results show the advantages of the proposed cascade control method.
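The cascade structure itself (an outer loop commanding the setpoint of a fast inner loop) can be sketched with ordinary PI and P controllers. The first-order plant stages and the gains below are illustrative stand-ins, not the paper's entropy-based design:

```python
def simulate_cascade(steps=2000, dt=0.01, setpoint=1.0,
                     kp_outer=2.0, ki_outer=1.0, kp_inner=8.0):
    """Two-loop cascade sketch: an outer PI controller sets the target of
    an inner proportional loop wrapped around faster dynamics."""
    x_inner = x_outer = integral = 0.0
    for _ in range(steps):
        e_outer = setpoint - x_outer
        integral += e_outer * dt
        inner_sp = kp_outer * e_outer + ki_outer * integral  # outer PI output
        u = kp_inner * (inner_sp - x_inner)                  # inner P controller
        x_inner += dt * (u - x_inner)        # fast inner dynamics (e.g. gate)
        x_outer += dt * (x_inner - x_outer)  # slow outer dynamics (e.g. speed)
    return x_outer

print(round(simulate_cascade(), 3))
```

The inner loop rejects fast disturbances locally before they propagate to the slow loop, which is the property the entropy-based outer criterion then refines for non-Gaussian disturbances.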
Production of defects in metals by collision cascades: TEM experiments
International Nuclear Information System (INIS)
Kirk, M.A.
1995-01-01
The author reviews his experimental TEM data on the production of dislocation loops by low-energy ion bombardment to low doses, as simulations of similar collision cascades produced by fast-neutron irradiation, in various metals and alloys. The dependence of vacancy dislocation loop formation on recoil energy, sample temperature, and specific metal or alloy will be examined. Special emphasis will be placed on the effects of dilute alloy additions. A model for cascade melting will be employed to understand these effects, and will require an examination of the role of electron-phonon coupling in cascade cooling and recrystallization. The formation of interstitial dislocation loops as cascade defects, and the influence of the nearby surfaces in these experiments, will be briefly discussed.
Vulnerability and Cosusceptibility Determine the Size of Network Cascades
Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.
2017-01-01
In a network, a local disturbance can propagate and eventually cause a substantial part of the system to fail in cascade events that are easy to conceptualize but extraordinarily difficult to predict. Here, we develop a statistical framework that can predict cascade size distributions by incorporating two ingredients only: the vulnerability of individual components and the cosusceptibility of groups of components (i.e., their tendency to fail together). Using cascades in power grids as a representative example, we show that correlations between component failures define structured and often surprisingly large groups of cosusceptible components. Aside from their implications for blackout studies, these results provide insights and a new modeling framework for understanding cascades in financial systems, food webs, and complex networks in general.
Directory of Open Access Journals (Sweden)
Kellom Matthew
2012-05-01
Full Text Available Background: Neuroinflammation, caused by six days of intracerebroventricular infusion of bacterial lipopolysaccharide (LPS), stimulates rat brain arachidonic acid (AA) metabolism. The molecular changes associated with increased AA metabolism are not clear. We examined the effects of a six-day infusion of a low dose (0.5 ng/h) and a high dose (250 ng/h) of LPS on neuroinflammatory, AA cascade, and pre- and post-synaptic markers in rat brain. We used artificial cerebrospinal fluid-infused brains as controls. Results: Infusion of low- or high-dose LPS increased brain protein levels of TNFα and iNOS, without significantly changing GFAP. High-dose LPS infusion upregulated brain protein and mRNA levels of AA cascade markers (cytosolic cPLA2-IVA, secretory sPLA2-V, cyclooxygenase-2 and 5-lipoxygenase), and of transcription factor NF-κB p50 DNA binding activity. Both LPS doses increased cPLA2 and p38 mitogen-activated protein kinase levels, while reducing protein levels of the pre-synaptic marker synaptophysin. Protein levels of the post-synaptic markers drebrin and PSD95 were decreased with high- but not low-dose LPS. Conclusions: Chronic LPS infusion has differential effects, depending on dose, on inflammatory, AA and synaptic markers in rat brain. Neuroinflammation associated with upregulated brain AA metabolism can lead to synaptic dysfunction.
Sakaguchi, Daisaku; Sakue, Daiki; Tun, Min Thaw
2018-04-01
A three-dimensional blade for a low-solidity circular cascade diffuser in centrifugal blowers is designed by means of a multi-point optimization technique. The optimization aims at improving the static pressure coefficient at the design point and at a small flow rate condition. Moreover, a clear definition of secondary flow, expressed by positive radial velocity at the hub side, is taken into consideration in the constraints. The number of design parameters for the three-dimensional blade reaches 10 in this study: a radial gap, a radial chord length, the mean camber angle distribution of the LSD blade with five control points, and a control point between hub and shroud with two degrees of design freedom. The optimization results show a clear Pareto front, and the selected optimum design shows good improvement of the pressure rise in the diffuser at small flow rate conditions. It is found that the three-dimensional blade has the advantage of stabilizing the secondary flow effect while improving the pressure recovery of the low-solidity circular cascade diffuser.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is the best linear unbiased estimator. Unfortunately, this interpolant is computationally expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth, continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different
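The dense system that makes exact ordinary kriging expensive can be made concrete with a short sketch. This is illustrative only, not the authors' implementation; the exponential covariance model, its parameters, and all names here are assumptions:

```python
import numpy as np

def cov(h, sill=1.0, length=1.0):
    """Assumed exponential covariance model C(h) = sill * exp(-h/length)."""
    return sill * np.exp(-h / length)

def ordinary_krige(pts, vals, query):
    """Solve the dense (n+1)x(n+1) ordinary-kriging system for one query point."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d)                  # pairwise covariances of the data points
    A[:n, n] = A[n, :n] = 1.0           # unbiasedness (Lagrange multiplier) row/col
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(pts - query, axis=-1))
    w = np.linalg.solve(A, b)           # dense solve: the cost discussed above
    return w[:n] @ vals                 # weighted sum of observed values

rng = np.random.default_rng(0)
pts = rng.random((50, 2))               # 50 scattered 2-D sample points
vals = np.sin(3 * pts[:, 0]) + pts[:, 1]
est = ordinary_krige(pts, vals, np.array([0.5, 0.5]))
```

Each query requires a solve of the (n+1) x (n+1) system, which is exactly why local neighborhoods or covariance tapering are resorted to for large n; without a nugget term the interpolant reproduces the data exactly at the sample points.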
Directory of Open Access Journals (Sweden)
Shulun Liu
2018-01-01
Full Text Available Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including the inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall, while the NN led to the worst result. In terms of the impact on hydrological prediction, the IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations
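The inverse distance weighting method and the cross-validation used to rank the interpolators can be sketched as follows. This is a minimal illustration; the function names, the power parameter, and leave-one-out as the cross-validation variant are assumptions, not the study's code:

```python
import numpy as np

def idw(pts, vals, query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weight each gauge by 1 / distance^power."""
    d = np.linalg.norm(pts - query, axis=-1)
    if d.min() < eps:                    # query coincides with a gauge
        return vals[d.argmin()]
    w = 1.0 / d ** power
    return float(w @ vals / w.sum())

def loo_rmse(pts, vals):
    """Leave-one-out cross-validation error of the interpolator."""
    errs = [idw(np.delete(pts, i, axis=0), np.delete(vals, i), pts[i]) - vals[i]
            for i in range(len(pts))]
    return float(np.sqrt(np.mean(np.square(errs))))
```

The same leave-one-out loop can wrap any of the four interpolators (IDW, NN, LN, OK), which is how cross-validation scores like those reported become directly comparable.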
Directory of Open Access Journals (Sweden)
Aihua Liu
2017-01-01
Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced. With the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied only to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are clearly not fully exploited. To effectively utilize the full extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimation, and a modified iterative initial DOA estimation based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimation, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.
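The holes in a coprime coarray that motivate the interpolation are easy to exhibit. A small sketch, with an assumed coprime pair (M, N) = (3, 5) rather than the paper's configuration:

```python
# Assumed coprime pair; the paper's actual array geometry may differ.
M, N = 3, 5
# Standard coprime array: N-element subarray at spacing M, 2M-element at spacing N.
pos = sorted(set(range(0, N * M, M)) | set(range(0, 2 * M * N, N)))
lags = {p - q for p in pos for q in pos}           # difference coarray

L = 0                                              # longest contiguous run from 0;
while L + 1 in lags:                               # only these lags feed SS-MUSIC
    L += 1
holes = [k for k in range(max(lags) + 1) if k not in lags]
print(pos)     # 10 physical sensors: [0, 3, 5, 6, 9, 10, 12, 15, 20, 25]
print(L)       # 17 contiguous lags (= M*N + M - 1)
print(holes)   # [18, 21, 23, 24] -- the holes that interpolation aims to fill
```

Ten physical sensors thus yield 17 contiguous positive lags, while the isolated lags beyond the holes go unused by SS-MUSIC; filling the holes to obtain a complete virtual ULA is what recovers the remaining DoFs.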
Lindley, S. J.; Walsh, T.
There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with their distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, should be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, arising from the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area
Time structure of cascade showers
International Nuclear Information System (INIS)
Nakatsuka, Takao
1984-01-01
Interesting results on the time structure of the electromagnetic components of air showers have been reported, obtained by using recent fast electronic circuit technology. However, these analyses and explanations do not seem very persuasive. One reason is that there is not yet a satisfactory theoretical calculation to explain the delay of electromagnetic components in the cascade processes that are the object of direct observation. Therefore, a Monte Carlo calculation was attempted to examine the relationship between the altitude at which a high energy γ-ray is generated in the air and the time structure of cascade showers at the observation level. The investigation of the dominant factor in the delay of electromagnetic components indicated that the delay due to the multiple scattering of electrons was essential. The author used his own analytical solution of C. N. Yang's equation for the study of the delay due to multiple scattering. The results were as follows: the average delay time and the spread of the distribution of electromagnetic cascades were approximately linearly related to the mass of material traversed in a thin uniform medium; the rise time of the arrival time distribution of electromagnetic cascade showers was very steep when they were generated high in the air and observed on the ground; and subpeaks delayed by tens of ns in arrival time may sometimes appear due to perturbations in electromagnetic cascade processes. (Wakatsuki, Y.)
Blend Shape Interpolation and FACS for Realistic Avatar
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest of developing realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural way of human interaction, facial animation systems have become more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors is still a challenging issue. A facial expression carries the signature of happiness, sadness, anger or cheerfulness, etc. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions, being a very complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions based on facial skin color and texture may contribute towards the development of virtual reality and game environments of computer aided graphics animation systems.
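Blend shape interpolation of the kind BSI performs can be sketched as a weighted sum of offsets from a neutral mesh. This is illustrative only: the random vertex data and names are made up, and a real system would drive the weights from FACS action units rather than set them by hand:

```python
import numpy as np

def blend(neutral, shapes, weights):
    """BSI: face = neutral + sum_i w_i * (shape_i - neutral)."""
    face = neutral.copy()
    for shape, w in zip(shapes, weights):
        face += w * (shape - neutral)   # add each weighted expression offset
    return face

rng = np.random.default_rng(1)
neutral = rng.random((100, 3))                      # neutral mesh, 100 vertices
happy = neutral + rng.normal(0.0, 0.01, (100, 3))   # sculpted target shapes
sad = neutral + rng.normal(0.0, 0.01, (100, 3))

face = blend(neutral, [happy, sad], [0.7, 0.0])     # 70% of the "happy" offset
```

Animating the weights over time (e.g. 0 to 1 for a smile onset) interpolates smoothly between expressions, which is why the technique pairs naturally with FACS-driven muscle controls.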
Geospatial Interpolation and Mapping of Tropospheric Ozone Pollution Using Geostatistics
Directory of Open Access Journals (Sweden)
Swatantra R. Kethireddy
2014-01-01
Full Text Available Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. The Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary Kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and confirmed by cross-validation results. The mean prediction error was close to 0, the root mean standardized prediction error was close to 1, and the root mean square and average standard errors were small. The O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels.
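The cross-validation diagnostics quoted above (mean error near 0, root mean standardized error near 1, small RMSE) can be computed generically as below; this is a sketch under assumed names, not ArcGIS Geostatistical Analyst's implementation:

```python
import numpy as np

def cv_diagnostics(obs, pred, krig_std):
    """Cross-validation summaries: a well-calibrated kriging model shows
    mean error near 0, small RMSE, and RMS standardized error near 1."""
    err = pred - obs
    return {
        "mean_error": float(err.mean()),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "rms_standardized": float(np.sqrt(np.mean((err / krig_std) ** 2))),
    }
```

The standardized statistic is the useful one for kriging specifically: it checks the predicted kriging standard errors against the actual cross-validation residuals, so a value well away from 1 flags an over- or under-confident variogram model.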
A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...
African Journals Online (AJOL)
Interpolation procedures tailored for Gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable cases with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...
The Grand Tour via Geodesic Interpolation of 2-frames
Asimov, Daniel; Buja, Andreas
1994-01-01
Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
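The geodesic interpolation between projection planes can be sketched via principal angles. This is a simplified, plane-level (Grassmann) version under assumed names; the paper's scheme additionally controls the frame (the pair of projection vectors) within each plane, so treat this as an illustration of the geometry rather than the authors' algorithm:

```python
import numpy as np

def random_frame(n, k, rng):
    """Random n x k matrix with orthonormal columns (QR of a Gaussian)."""
    q, _ = np.linalg.qr(rng.normal(size=(n, k)))
    return q

def plane_geodesic(F0, F1, t):
    """Point at parameter t on the geodesic of k-planes from span(F0) to span(F1)."""
    U, s, Vt = np.linalg.svd(F0.T @ F1)
    theta = np.arccos(np.clip(s, -1.0, 1.0))       # principal angles between planes
    A = F0 @ U                                     # start frame aligned with F1
    B = (F1 @ Vt.T - A * np.cos(theta)) / np.maximum(np.sin(theta), 1e-12)
    return A * np.cos(t * theta) + B * np.sin(t * theta)

rng = np.random.default_rng(2)
F0 = random_frame(6, 2, rng)                       # two 2-frames in R^6
F1 = random_frame(6, 2, rng)
Fh = plane_geodesic(F0, F1, 0.5)                   # the halfway projection plane
```

Sampling t densely between 0 and 1 and projecting the data onto each intermediate frame yields the smooth animation the grand tour requires; chaining such segments through uniformly random target planes gives the dense path described above.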