Fuzzy linguistic model for interpolation
Energy Technology Data Exchange (ETDEWEB)
Abbasbandy, S. [Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran 14778 (Iran, Islamic Republic of); Department of Mathematics, Faculty of Science, Imam Khomeini International University, Qazvin 34194-288 (Iran, Islamic Republic of)]; Adabitabar Firozja, M. [Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran 14778 (Iran, Islamic Republic of)]
2007-10-15
In this paper, a fuzzy method for interpolating smooth curves is presented. We propose a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, the fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method.
Spatiotemporal Interpolation for Environmental Modelling
Directory of Open Access Journals (Sweden)
Ferry Susanto; Paulo de Souza; Jing He
2016-08-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania's South Esk Hydrology model developed by CSIRO. Root mean squared error statistical methods were performed for performance evaluations. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications.
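The conventional IDW method compared above can be sketched in a few lines. This is a generic illustration only: the power parameter p = 2 and the sample points are assumptions, and the paper's DDW variant and the CSIRO hydrology data are not reproduced.

```python
import math

def idw(x, y, samples, p=2.0):
    """Inverse distance weighting: estimate the value at (x, y) from
    (xi, yi, zi) samples, weighting each by 1 / distance**p."""
    num, den = 0.0, 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi  # exactly on a sample point: return its value
        w = d ** -p
        num += w * zi
        den += w
    return num / den

# Three made-up observations; the query point is equidistant from all three,
# so the estimate is their plain average.
obs = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
print(idw(0.5, 0.5, obs))  # → 20.0
```

Because every weight depends only on distance, IDW always stays within the range of the observed values, which is also why it produces the smooth fields noted in several of the studies below.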
Curve interpolation model for visualising disjointed neural elements
Institute of Scientific and Technical Information of China (English)
Mohd Shafry Mohd Rahim; Norhasana Razzali; Mohd Shahrizal Sunar; Ayman Abdualaziz Abdullah; Amjad Rehman
2012-01-01
Neuron cells are built from a myriad of axon and dendrite structures, which transmit electrochemical signals between the brain and the nervous system. Three-dimensional visualization of neuron structure could facilitate a deeper understanding of neurons and their models. An accurate neuron model could aid understanding of the brain's functionalities, diagnosis, and knowledge of the entire nervous system. Existing neuron models have been found to be deficient in realism: in an actual biological neuron there is continuous growth as the soma extends to the axon and the dendrite, whereas current neuron visualization models present the neuron as disjointed segments, which greatly diminishes realism. In this research, a new reconstruction model comprising a Bounding Cylinder, Curve Interpolation and Gouraud Shading is proposed to visualize the neuron model with improved realism. The reconstructed model is used to design algorithms for generating neuron branching from neuron SWC data. The Bounding Cylinder and Curve Interpolation methods improve the connected segments of the neuron model using a series of cascaded cylinders along the neuron's connection path, with three control points proposed between each pair of adjacent neuron segments. Finally, the model is rendered with Gouraud Shading to smooth the model surface. This produces a near-realistic model of natural neurons. The model was validated through a predefined survey of bioinformatics analysts, which showed an acceptance and satisfaction rate of about 82%.
Interpolation of climate variables and temperature modeling
Samanta, Sailesh; Pal, Dilip Kumar; Lohar, Debasish; Pal, Babita
2012-01-01
Geographic Information Systems (GIS) and modeling are becoming powerful tools in agricultural research and natural resource management. This study proposes an empirical methodology for modeling and mapping monthly and annual air temperature using remote sensing and GIS techniques. The study area is Gangetic West Bengal and its neighborhood in eastern India, a region of strong surface heterogeneity where a number of weather systems and disturbances occur throughout the year. This paper also examines statistical approaches for interpolating climatic data over large regions, providing different interpolation techniques for climate variables used in agricultural research. Three interpolation approaches, namely inverse distance weighted averaging, thin-plate smoothing splines, and co-kriging, are evaluated for a 4° × 4° area covering the eastern part of India. Land use/land cover, soil texture, and a digital elevation model are used as the independent variables for temperature modeling. Multiple regression analysis with the standard method is used to enter the variables into the regression equation. Prediction of mean temperature is better for the monsoon season than for the winter season. Finally, standard deviation errors are evaluated by comparing the predicted and observed temperatures of the area. For further improvement, the authors suggest including distance from the coastline and seasonal wind pattern as additional independent variables.
Comparative study and error analysis of digital elevation model interpolations
Institute of Scientific and Technical Information of China (English)
CHEN Ji-long; WU Wei; LIU Hong-bin
2008-01-01
Researchers in P.R. China commonly create triangulated irregular networks (TINs) from contours and then convert the TINs into digital elevation models (DEMs). However, DEMs produced by this method cannot precisely describe and simulate key hydrological features such as rivers and drainage borders. Taking a hilly region in southwestern China as the research area and using ArcGIS software, we analyzed the errors of different interpolations to obtain the error distributions and precisions of the different algorithms and to provide references for DEM production. The results show that the errors of the different interpolations follow normal distributions, and that large errors exist near the structure lines of the terrain. Furthermore, the results show that the precision of a DEM interpolated with the Australian National University digital elevation model (ANUDEM) is higher than that of a DEM interpolated from a TIN, although the TIN-based DEM remains acceptable for generating DEMs in the hilly region of southwestern China.
Climate model; Downscaling; Fars; GIS; Interpolation
Directory of Open Access Journals (Sweden)
Reza Deihimfard
2016-03-01
Introduction: Today, climate change is one of the main challenges for scientists, and owing to the critical role of water in human life, the study of climate change impacts on the severity and frequency of drought in each region is indispensable (Hulme et al., 1999). Drought usually occurs over a period of water shortage owing to low rainfall, high evapotranspiration, and heavy pumping from water tables, and can have extensive consequences for agriculture, ecosystems and communities. The objectives of this study were to predict meteorological parameters and to calculate and zone a drought index under a changing climate in Fars province. Materials and methods: To predict the future climate in nine districts of Fars province (Shiraz, Eghlid, Fasa, Lar, Lamerd, Darab, Zarghan, Neiriz and Abadeh), two climate models (HadCM3 and IPCM4) were applied under three scenarios (B1, A1B and A2). LARS-WG software was used to downscale the climate parameters (Semenov and Barrow, 2002). To predict the incident probability of drought at all study locations, a drought index (Standardized Precipitation Index, SPI) was calculated at a time scale of 12 months. SPI, the most commonly used drought index, is calculated from the differences between monthly rainfall and the average rainfall for a certain period of time according to the time scale (Mckee et al., 1995). In this study, the SPI time series were estimated for the historical base period 1980-1990 and for three future periods (2011-2030, 2046-2065, 2080-2099). Finally, drought maps and zoning were produced for the whole province using GIS, based on the IDW interpolation method. Results and discussion: Evaluation of the climate models indicated that LARS-WG predicted radiation and maximum and minimum temperatures well (RMSE of 0.51, 0.46 and 1.02%, respectively); however, the accuracy in predicting rainfall was not as good as for the other climatic parameters
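The SPI computation described above can be sketched as follows. This is a deliberately simplified illustration: it replaces McKee's gamma-distribution fit with a plain z-score of rolling 12-month accumulations, and the monthly rainfall values are made up.

```python
import statistics

def spi_zscore(monthly_rain, window=12):
    """Simplified SPI: z-score of rolling 12-month precipitation totals.
    (Operational SPI instead fits a gamma CDF to the totals and maps the
    probabilities to standard-normal quantiles.)"""
    totals = [sum(monthly_rain[i:i + window])
              for i in range(len(monthly_rain) - window + 1)]
    mu = statistics.mean(totals)
    sigma = statistics.stdev(totals)
    return [(t - mu) / sigma for t in totals]

# Three made-up years of monthly rainfall (mm); negative SPI marks dry spells.
rain = [20, 35, 10, 0, 5, 15, 40, 60, 55, 30, 25, 10,
        10, 20, 5, 0, 0, 10, 30, 45, 40, 25, 20, 5,
        25, 40, 15, 5, 10, 20, 45, 70, 60, 35, 30, 15]
print(spi_zscore(rain)[:3])
```

Values near -1 or below would be classed as drought months in the usual SPI categories; mapping such values over a station network and interpolating them (e.g. with IDW, as in the study) yields the drought zoning maps.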
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
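The first surrogate above, a least-squares quadratic polynomial, can be sketched in one variable by solving the normal equations directly. This is a generic illustration; the paper's multi-variable test problems and its kriging correlation models are not reproduced.

```python
def quadratic_lsq(xs, ys):
    """Fit y ≈ c0 + c1*x + c2*x^2 by least squares via the 3x3 normal
    equations A c = b, where A[i][j] = sum(x**(i+j)) and b[k] = sum(y*x**k)."""
    s = [sum(x ** k for x in xs) for k in range(5)]  # power sums s0..s4
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination (no pivoting; fine for this tiny well-posed system).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c  # [c0, c1, c2]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 9.0, 19.0, 33.0]  # exactly 1 + 2x^2
print(quadratic_lsq(xs, ys))  # → [1.0, 0.0, 2.0] (up to rounding)
```

A single quadratic like this can match at most one extremum, which illustrates the paper's point that polynomial surrogates struggle when the response has multiple local extrema, whereas a kriging interpolator can pass through every sample.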
Sparse representation based image interpolation with nonlocal autoregressive modeling.
Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming
2013-04-01
Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
Interpolation techniques in robust constrained model predictive control
Kheawhom, Soorathep; Bumroongsri, Pornchai
2017-05-01
This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control of a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control problems. A sequence of nested corresponding robustly positive invariant sets, each either an ellipsoidal or a polyhedral set, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with it is applied; otherwise, the feedback gain is determined by linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
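The gain-selection logic described above can be caricatured in one dimension, with the nested invariant sets reduced to scalar "radii". The actual ellipsoidal/polyhedral set computations and the two-tank model are not reproduced, and all numbers here are made up.

```python
def blended_gain(x_norm, radii, gains):
    """radii: increasing 'sizes' of nested invariant sets (innermost first);
    gains: the off-line feedback gain associated with each set. If the state
    lies in the innermost set, use its gain; otherwise linearly interpolate
    between the gains of the two sets that bracket the state (a 1-D
    caricature of the ellipsoidal/polyhedral containment test)."""
    if x_norm <= radii[0]:
        return gains[0]  # innermost set: apply its pre-computed gain
    for i in range(1, len(radii)):
        if x_norm <= radii[i]:
            t = (x_norm - radii[i - 1]) / (radii[i] - radii[i - 1])
            return (1 - t) * gains[i - 1] + t * gains[i]
    raise ValueError("state outside the largest invariant set")

# A state halfway between the first two sets gets the average of their gains.
print(blended_gain(1.5, [1.0, 2.0, 4.0], [-0.9, -0.6, -0.3]))  # → -0.75
```

The on-line cost is just a containment test plus a convex combination, which is why the interpolated scheme barely increases the on-line computational load.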
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation: precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations showed noticeable differences between the input alternatives generated by the three interpolation schemes: differences in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin, and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
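The two-step estimation process can be sketched as follows. The logistic and linear coefficients here are placeholders, not fitted values from the study, and the predictor names are hypothetical.

```python
import math

def two_step_estimate(features, occ_coef, amt_coef, threshold=0.5):
    """Two-step daily precipitation estimate: (1) a logistic model decides
    wet/dry occurrence; (2) a linear model estimates the amount on wet days.
    occ_coef and amt_coef are [intercept, slope1, slope2, ...] placeholders."""
    z = occ_coef[0] + sum(c * f for c, f in zip(occ_coef[1:], features))
    p_wet = 1.0 / (1.0 + math.exp(-z))  # logistic occurrence probability
    if p_wet < threshold:
        return 0.0  # classified as a dry day: no amount model applied
    amount = amt_coef[0] + sum(c * f for c, f in zip(amt_coef[1:], features))
    return max(amount, 0.0)  # amounts cannot be negative

# Hypothetical predictors: [elevation_km, distance_to_coast_100km]
print(two_step_estimate([1.2, 0.5],
                        occ_coef=[-1.0, 1.5, -0.2],
                        amt_coef=[2.0, 4.0, -1.0]))  # → 6.3
```

Separating occurrence from amount is what lets the scheme reproduce the intermittency of daily precipitation, which a single regression fitted to all days (mostly zeros) smears out.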
Does better rainfall interpolation improve hydrological model performance?
Bárdossy, András; Kilsby, Chris; Lewis, Elisabeth
2017-04-01
High spatial variability of precipitation is one of the main sources of uncertainty in rainfall/runoff modelling. Spatially distributed models require detailed space-time information on precipitation as input. In past decades, much effort has been spent on improving precipitation interpolation using point observations. Different geostatistical methods such as Ordinary Kriging, External Drift Kriging or copula-based interpolation can be used to find the best estimators for unsampled locations. The purpose of this work is to investigate to what extent more sophisticated precipitation estimation methods can improve model performance. For this purpose the Wye catchment in Wales was selected. The physically-based, spatially-distributed hydrological model SHETRAN is used to describe the hydrological processes in the catchment. 31 raingauges with 1-hourly temporal resolution are available for a period of 6 years. To avoid the effect of model uncertainty, model parameters were not altered in this study. Instead, 100 random subsets of 14 stations each were selected, and for each configuration precipitation was interpolated for each time step using nearest neighbour (NN), inverse distance (ID) and Ordinary Kriging (OK). The variogram was obtained from the temporal correlation of the time series measured at different locations. The interpolated data were used as input for the spatially distributed model. Performance was evaluated for daily mean discharges using the Nash-Sutcliffe coefficient, temporal correlations, flow volumes and flow duration curves. The results show that the simple NN and the sophisticated OK perform practically equally well, while ID performs worse. NN was often better for high flows, because NN does not reduce the variance, while OK and ID yield smooth precipitation fields. The study points out the importance of precipitation variability and suggests the use of conditional spatial simulation as
A NEW DERIVATIVE FREE OPTIMIZATION METHOD BASED ON CONIC INTERPOLATION MODEL
Institute of Scientific and Technical Information of China (English)
Ni Qin; Hu Shuhua
2004-01-01
In this paper, a new derivative-free trust region method is developed based on a conic interpolation model for unconstrained optimization. The conic interpolation model is built by means of the quadratic model function, the collinear scaling formula, quadratic approximation and interpolation. All the parameters in the model are determined by objective function interpolation conditions. A new derivative-free method is developed based upon this model, and the global convergence of the new method is proved without any gradient information.
Curve Fitting And Interpolation Model Applied In Nonel Dosage Detection
Directory of Open Access Journals (Sweden)
Jiuling Li
2013-06-01
The curve fitting and interpolation models are applied to Nonel dosage detection in this paper, and the gray level of the continuous explosive in the Nonel tube is forecast. Traditional infrared equipment establishes a relationship between explosive dosage and light intensity, but its forecast accuracy is very low. Therefore, gray prediction models based on curve fitting and interpolation are framed separately, and the deviations of the different models are compared. Based on the features of the sample library, the higher-precision cubic polynomial fitting curve is used to predict gray values, and Nonel gray values for 5 mg-28 mg are calculated in MATLAB. With the predicted values, the dosage detection operations are simplified, the defect missing rate of the Nonel is reduced, and the quality of the Nonel is improved.
Interpolation of daily rainfall using spatiotemporal models and clustering
Militino, A. F.
2014-06-11
Accumulated daily rainfall at non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture and in hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature, depending on the auxiliary information and data availability, but most are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual gauges. Root mean squared errors by month and by year are also provided. To illustrate the models, we also map interpolated daily precipitation and standard errors on a 1 km² grid over the whole region. © 2014 Royal Meteorological Society.
Self-organized model of cascade spreading
Gualdi, S.; Medo, M.; Zhang, Y.-C.
2011-01-01
We study simultaneous price drops of real stocks and show that for high drop thresholds they follow a power-law distribution. To reproduce these collective downturns, we propose a minimal self-organized model of cascade spreading based on a probabilistic response of the system elements to stress conditions. This model is solvable using the theory of branching processes and the mean-field approximation. For a wide range of parameters, the system is in a critical state and displays a power-law cascade-size distribution similar to the empirically observed one. We further generalize the model to reproduce volatility clustering and other observed properties of real stocks.
Self-organized model of cascade spreading
Gualdi, Stanislao; Zhang, Yi-Cheng
2010-01-01
We study simultaneous price drops of real stocks and show that for high drop thresholds they follow a power-law distribution. To reproduce these collective downturns, we propose a self-organized model of cascade spreading based on a probabilistic response of the system's elements to stress conditions. This model is solvable using the theory of branching processes and the mean-field approximation, and displays a power-law cascade-size distribution, similar to the empirically observed one, over a wide range of parameters.
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as the kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since the membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, among others. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
Modeling and simulation of cascading contingencies
Zhang, Jianfeng
This dissertation proposes a new approach to model and study cascading contingencies in large power systems. The most important contribution of the work involves the development and validation of a heuristic analytic model to assess the likelihood of cascading contingencies, and the development and validation of a uniform search strategy. We model the probability of cascading contingencies as a function of power flow and power flow changes. Utilizing logistic regression, the proposed model is calibrated using real industry data. This dissertation analyzes random search strategies for Monte Carlo simulations and proposes a new uniform search strategy based on the Metropolis-Hastings Algorithm. The proposed search strategy is capable of selecting the most significant cascading contingencies, and it is capable of constructing an unbiased estimator to provide a measure of system security. This dissertation makes it possible to reasonably quantify system security and justify security operations when economic concerns conflict with reliability concerns in the new competitive power market environment. It can also provide guidance to system operators about actions that may be taken to reduce the risk of major system blackouts. Various applications can be developed to take advantage of the quantitative security measures provided in this dissertation.
On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians
Valverde, C
2016-01-01
We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model and other Hamiltonians of this type. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
Smith, Bradford Scott, Jr.
The hypothesis of this research is that exponential interpolation functions will approximate fluid properties at shock waves with less error than polynomial interpolation functions. Exponential interpolation functions are derived for the purpose of modeling sharp gradients. General equations for conservation of mass, momentum, and energy for an inviscid flow of a perfect gas are converted to finite element equations using the least-squares method. Boundary conditions and a mesh adaptation scheme are also presented. An oblique shock reflection problem is used as a benchmark to determine whether or not exponential interpolation provides any advantages over Lagrange polynomial interpolation. Using exponential interpolation in elements downstream of a shock and having edges coincident with the shock showed a slight reduction in the solution error. However there was very little qualitative difference between solutions using polynomial and exponential interpolation. Regardless of the type of interpolation used, the shocks were smeared and oscillations were present both upstream and downstream of the shock waves. When a mesh adaptation scheme was implemented, exponential elements adjacent to the shock waves became much smaller and the numerical solution diverged. Changing the exponential elements to polynomial elements yielded a convergent solution. There appears to be no significant advantage to using exponential interpolation in comparison to Lagrange polynomial interpolation.
Transonic Cascade Measurements to Support Analytical Modeling
2007-11-02
Final report for AFOSR Grant F49260-02-1-0284, Transonic Cascade Measurements to Support Analytical Modeling. Paul A. Durbin (PAD); 650-723-1971 (JKE); durbin@vk.stanford.edu; eaton@vk.stanford.edu. Submitted to: Dr. John Schmisseur, Air Force Office of Scientific Research. Both spline and control points were retained for subsequent wall shape definitions. An algebraic grid generator was used to generate the grid for the blade-wall
Mixtures of multiplicative cascade models in geochemistry
Directory of Open Access Journals (Sweden)
F. P. Agterberg
2007-05-01
Multifractal modeling of geochemical map data can help to explain the nature of frequency distributions of element concentration values for small rock samples and their spatial covariance structure. Useful frequency distribution models are the lognormal and Pareto distributions, which plot as straight lines on logarithmic probability paper and log-log paper, respectively. The model of de Wijs is a simple multiplicative cascade resulting in a discrete logbinomial distribution that closely approximates the lognormal. In this model, the smaller blocks resulting from dividing larger blocks into parts have concentration values with constant, scale-independent ratios. The approach can be modified by adopting random variables for these ratios. Other modifications include a single cascade model with ratio parameters that depend on the magnitude of the concentration value. The Turcotte model, another variant of the model of de Wijs, results in a Pareto distribution. Often a single straight line on logarithmic probability or log-log paper does not provide a good fit to observed data, and two or more distributions should be fitted. For example, geochemical background and anomalies (extremely high values) have separate frequency distributions for concentration values and for local singularity coefficients. Mixtures of distributions can be simulated by adding the results of separate cascade models. Regardless of the properties of the background, an unbiased estimate can be obtained of the parameter of the Pareto distribution characterizing anomalies in the upper tail of the element concentration frequency distribution or the lower tail of the local singularity distribution. Computer simulation experiments and practical examples are used to illustrate the approach.
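The model of de Wijs described above is straightforward to simulate: each step splits every block into two halves whose concentrations are (1+d) and (1-d) times the parent value, so total mass is conserved while the cell values spread into a logbinomial distribution. A minimal sketch with a fixed dispersion index d (the values d = 0.4 and 8 levels are arbitrary choices for illustration):

```python
def de_wijs_cascade(mean_conc, d, levels):
    """Model of de Wijs: repeatedly split each block into two parts whose
    concentrations are (1+d) and (1-d) times the parent value. After n
    levels the 2**n cell values follow a logbinomial distribution, and the
    average concentration stays equal to mean_conc (mass conservation)."""
    cells = [mean_conc]
    for _ in range(levels):
        cells = [c * f for c in cells for f in (1.0 + d, 1.0 - d)]
    return cells

cells = de_wijs_cascade(100.0, d=0.4, levels=8)
print(len(cells), min(cells), max(cells))  # 256 cells; max = 100 * 1.4**8
```

Replacing the fixed d by a random draw at each split gives the randomized variant mentioned in the abstract, and summing the outputs of two such cascades with different parameters simulates the background-plus-anomaly mixtures.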
Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation
Weeden, C.; Hejduk, M.
At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J. M. Okada and M. D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and the angular rate of the object, to determine whether the sensor can technically detect the object based on an explicit calculation of its probability of detection. In addition, recent upgrades at Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis, providing enough data to study the modeling process in more detail and obtain the most accurate brightness predictions of satellites. Analysis of two years of brightness data gathered from optical sensors and modeled via KOI solutions is outlined in this paper. By comparison, geostationary (GEO) objects were tracked less than non-GEO objects but had higher-density tracking in phase angle due to artifices of scheduling. A statistically significant fit to a deterministic model was possible less than half the time for both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results; such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed for both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of regionalized variable theory to this particular problem. Finally, the average nugget value, or
Nonlinear modeling of thermoacoustically driven energy cascade
Gupta, Prateek; Scalo, Carlo; Lodato, Guido
2016-11-01
We present an investigation of nonlinear energy cascade in thermoacoustically driven high-amplitude oscillations, from the initial weakly nonlinear regime to the shock-wave-dominated limit cycle. We develop a first-principles-based quasi-1D model for nonlinear wave propagation in a canonical minimal-unit thermoacoustic device inspired by the experimental setup of Biwa et al. Retaining up to quadratic nonlinear terms in the governing equations, we develop model equations for nonlinear wave propagation in the proximity of differentially heated no-slip boundaries. We neglect the effects of acoustic streaming in the present study and focus on the nonlinear energy cascade due to high-amplitude wave propagation. Our model correctly predicts the observed exponential growth of the thermoacoustically amplified second harmonic, as well as the energy transfer rate to higher harmonics causing wave steepening. Moreover, we note that nonlinear coupling of local pressure with heat transfer gradually reduces thermoacoustic amplification, causing the system to reach a limit cycle exhibiting shock waves. Throughout, we verify the results of the quasi-1D model against fully compressible Navier-Stokes simulations.
Interpolation of steady-state concentration data by inverse modeling.
Schwede, Ronnie L; Cirpka, Olaf A
2010-01-01
In most groundwater applications, measurements of concentration are limited in number and sparsely distributed within the domain of interest. Therefore, interpolation techniques are needed to obtain most likely values of concentration at locations where no measurements are available. For further processing, for example, in environmental risk analysis, interpolated values should be given with uncertainty bounds, so that a geostatistical framework is preferable. Linear interpolation of steady-state concentration measurements is problematic because the dependence of concentration on the primary uncertain material property, the hydraulic conductivity field, is highly nonlinear, suggesting that the statistical interrelationship between concentration values at different points is also nonlinear. We suggest interpolating steady-state concentration measurements by conditioning an ensemble of the underlying log-conductivity field on the available hydrological data in a conditional Monte Carlo approach. Flow and transport simulations for each conditional conductivity field must meet the measurements within their given uncertainty. The ensemble of transport simulations based on the conditional log-conductivity fields yields conditional statistical distributions of concentration at points between observation points. This method implicitly meets physical bounds of concentration values and non-Gaussianity of their statistical distributions and obeys the nonlinearity of the underlying processes. We validate our method by artificial test cases and compare the results to kriging estimates assuming different conditional statistical distributions of concentration. Assuming a beta distribution in kriging leads to estimates of concentration with zero probability of concentrations below zero or above the maximal possible value; however, the concentrations are not forced to meet the advection-dispersion equation.
Damped trophic cascades driven by fishing in model marine ecosystems
DEFF Research Database (Denmark)
Andersen, Ken Haste; Pedersen, Martin
2010-01-01
… cascade triggered by the removal of top predators. Here we use a novel size- and trait-based model to explore how marine ecosystems might react to perturbations from different types of fishing pressure. The model explicitly resolves the whole life history of fish, from larvae to adults. The results show that fishing does not change the overall slope of the size spectrum, but depletes the largest individuals and induces trophic cascades. A trophic cascade can propagate both up and down in trophic levels, driven by a combination of changes in predation mortality and food limitation. The cascade is damped…
A stochastic model of cascades in 2D turbulence
Ditlevsen, Peter D
2012-01-01
The dual cascade of energy and enstrophy in 2D turbulence cannot easily be understood in terms of an analog to the Richardson-Kolmogorov scenario describing the energy cascade in 3D turbulence. The coherent up- and downscale fluxes point to non-locality of interactions in spectral space, and thus the specific spatial structure of the flow could be important. Shell models, which lack spatial structure and have only local interactions in spectral space, indeed fail to reproduce the correct scaling for the inverse cascade of energy. In order to exclude the possibility that non-locality of interactions in spectral space is crucial for the dual cascade, we introduce a stochastic spectral model of the cascades which is local in spectral space and which shows the correct scaling for both the direct enstrophy cascade and the inverse energy cascade.
Directory of Open Access Journals (Sweden)
J. Shen
2017-09-01
Full Text Available Interpolation methods have significant impacts on the accuracy of the digital elevation model (DEM) built from contours, which are one of the most frequently employed data sources. In this paper, an interpolation method is presented to build a DEM from contour lines by fusing morphological reconstruction with distance transformation with obstacles. In particular, morphological reconstruction is used to get the elevation values of the higher and lower contour lines enclosing any spatial point between two contour lines, and distance transformation with obstacles is used to get the geodesic distances from the spatial point to the higher and lower contour lines, respectively. Finally, linear interpolation along the water flow line is used to get the elevation values of the pixels to be interpolated. The experiment demonstrates the feasibility of our proposed method.
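The final step described in the abstract above, linear interpolation along the water flow line, reduces to a one-line formula. The sketch below is illustrative (the function name and arguments are ours, not the paper's), and the geodesic distances are assumed to come from the distance transformation with obstacles:

```python
def interpolate_between_contours(z_low, z_high, d_low, d_high):
    """Linear interpolation along the water flow line: the point lies at
    geodesic distance d_low from the lower contour (elevation z_low) and
    d_high from the higher contour (elevation z_high)."""
    if d_low + d_high == 0:
        return float(z_low)  # point lies on a contour line itself
    t = d_low / (d_low + d_high)  # fractional position between the contours
    return float(z_low + t * (z_high - z_low))
```

A point midway (in geodesic distance) between 100 m and 110 m contours thus receives 105 m.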
A geometric growth model interpolating between regular and small-world networks
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhongzhi [Department of Computer Science and Engineering, Fudan University, Shanghai 200433 (China); Zhou, Shuigeng [Department of Computer Science and Engineering, Fudan University, Shanghai 200433 (China); Wang, Zhiyong [Department of Computer Science and Engineering, Fudan University, Shanghai 200433 (China); Shen, Zhen [Department of Computer Science and Engineering, Fudan University, Shanghai 200433 (China)
2007-09-28
We propose a geometric growth model which interpolates between one-dimensional linear graphs and small-world networks. The model undergoes a transition from large to small worlds. We study its topological characteristics through both theoretical predictions and numerical simulations, which are in good agreement with each other. Our geometrically growing model is a complement to the static WS model.
RECONFIGURING POWER SYSTEMS TO MINIMIZE CASCADING FAILURES: MODELS AND ALGORITHMS
Energy Technology Data Exchange (ETDEWEB)
Bienstock, Daniel
2014-04-11
The main goal of this project was to develop new scientific tools, based on optimization techniques, for controlling and modeling cascading failures of electrical power transmission systems. We have developed a high-quality tool for simulating cascading failures. The problem of how to control a cascade was addressed, with the aim of stopping the cascade with a minimum of load lost. Yet another aspect of cascades is the investigation of which events would trigger a cascade, or more precisely the computation of the most harmful initiating event given some constraint on the severity of the event. One common feature of the cascade models described (indeed, of several of the cascade models found in the literature) is that we study thermally induced line tripping. We have produced a study that accounts for exogenous randomness (e.g. wind and ambient temperature) that could affect the thermal behavior of a line, with a focus on controlling the power flow of the line while maintaining a safe probability of line overload. This was done by means of a rigorous analysis of a stochastic version of the heat equation. We also incorporated a model of randomness in the behavior of wind power output, again modeling an OPF-like problem that uses chance constraints to maintain a low probability of line overloads; this work has been continued so as to account for generator dynamics as well.
Model for cascading failures in congested Internet
Institute of Scientific and Technical Information of China (English)
Jian WANG; Yan-heng LIU; Jian-qi ZHU; Yu JIAO
2008-01-01
Cascading failures often occur in congested networks such as the Internet. A cascading failure can be described as a three-phase process: generation, diffusion, and dissipation of the congestion. In this account, we present a function that represents the extent of congestion on a given node. This approach differs from existing functions based on betweenness centrality. By introducing the concept of 'delay time', we define an intermediate strategy between permanent removal and non-removal. We also construct an evaluation function of network efficiency, based on congestion, which measures the damage caused by cascading failures. Finally, we investigate the effects of network structure and size, delay time, processing ability and packet generation speed on congestion propagation. We also uncover the relationship between the cascade dynamics and properties of the network such as structure and size.
Dynamic Modeling of Cascading Failure in Power Systems
Song, Jiajia; Ghanavati, Goodarz; Hines, Paul D H
2014-01-01
The modeling of cascading failure in power systems is difficult because of the many different mechanisms involved; no single model captures all of these mechanisms. Understanding the relative importance of these different mechanisms is an important step in choosing which mechanisms need to be modeled for particular types of cascading failure analysis. This work presents a dynamic simulation model of both power networks and protection systems, which can simulate a wider variety of cascading outage mechanisms, relative to existing quasi-steady state (QSS) models. The model allows one to test the impact of different load models and protections on cascading outage sizes. This paper describes each module of the developed dynamic model and demonstrates how different mechanisms interact. In order to test the model we simulated a batch of randomly selected $N-2$ contingencies for several different static load configurations, and found that the distribution of blackout sizes and event lengths from the proposed dynamic...
Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling
Directory of Open Access Journals (Sweden)
Hyojin Lee
2015-01-01
Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
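The five kernel estimators named in the abstract above have standard textbook forms on the support |u| ≤ 1. A minimal sketch, assuming a Nadaraya-Watson-style weighted average of neighbouring station values (the function names, bandwidth handling, and normalizing constants are illustrative, not taken from the paper):

```python
import numpy as np

# Standard kernel weight functions on |u| <= 1 (textbook normalizations;
# the paper's exact scaling may differ).
KERNELS = {
    "epanechnikov": lambda u: 0.75 * (1 - u**2),
    "quartic":      lambda u: (15 / 16) * (1 - u**2) ** 2,
    "triweight":    lambda u: (35 / 32) * (1 - u**2) ** 3,
    "tricube":      lambda u: (70 / 81) * (1 - np.abs(u) ** 3) ** 3,
    "cosine":       lambda u: (np.pi / 4) * np.cos(np.pi * u / 2),
}

def kernel_estimate(target_xy, station_xy, station_p, bandwidth,
                    kernel="epanechnikov"):
    """Estimate missing precipitation at target_xy as a kernel-weighted
    average of neighbouring station values (Nadaraya-Watson form)."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    u = d / bandwidth
    w = np.where(np.abs(u) <= 1, KERNELS[kernel](np.clip(u, -1, 1)), 0.0)
    if w.sum() == 0:
        return float("nan")  # no stations within the bandwidth
    return float(np.sum(w * station_p) / w.sum())
```

For a target equidistant from two stations, every kernel returns the simple average, since the weights are equal.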
A modeling framework for system restoration from cascading failures.
Liu, Chaoran; Li, Daqing; Zio, Enrico; Kang, Rui
2014-01-01
System restoration from cascading failures is an integral part of the overall defense against catastrophic breakdown in networked critical infrastructures. From the outbreak of cascading failures to the system complete breakdown, actions can be taken to prevent failure propagation through the entire network. While most analysis efforts have been carried out before or after cascading failures, restoration during cascading failures has been rarely studied. In this paper, we present a modeling framework to investigate the effects of in-process restoration, which depends strongly on the timing and strength of the restoration actions. Furthermore, in the model we also consider additional disturbances to the system due to restoration actions themselves. We demonstrate that the effect of restoration is also influenced by the combination of system loading level and restoration disturbance. Our modeling framework will help to provide insights on practical restoration from cascading failures and guide improvements of reliability and resilience of actual network systems.
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTMs have numerous applications in science, engineering, design and project administration. A key step in the DTM technique is the interpolation of elevations to create a continuous surface. There are several interpolation methods, whose results vary with environmental conditions and input data. In this study, the usual interpolation methods, consisting of polynomials and the Inverse Distance Weighting (IDW) method, are optimised using Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for creating elevations. The results showed that AI methods have high potential for the interpolation of elevations, and that interpolation based on the IDW method optimised with GA and artificial network algorithms can estimate elevations with high precision.
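IDW, the baseline method optimised in the study above, can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code; in their setting a GA would tune parameters such as the `power` exponent, e.g. by minimising cross-validation error:

```python
import numpy as np

def idw(target_xy, pts_xy, pts_z, power=2.0):
    """Inverse Distance Weighting: each sample is weighted by 1/d**power;
    a target coinciding with a sample returns that sample's value exactly."""
    d = np.linalg.norm(pts_xy - target_xy, axis=1)
    exact = d < 1e-12
    if exact.any():
        return float(pts_z[exact][0])
    w = 1.0 / d**power
    return float(np.sum(w * pts_z) / np.sum(w))
```

Larger `power` values localize the estimate toward the nearest sample, which is exactly the kind of trade-off a GA can search over.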
Geostrophic balance preserving interpolation in mesh adaptive shallow-water ocean modelling
Maddison, James R; Farrell, Patrick E
2010-01-01
The accurate representation of geostrophic balance is an essential requirement for numerical modelling of geophysical flows. Significant effort is often put into the selection of accurate or optimal balance representation by the discretisation of the fundamental equations. The issue of accurate balance representation is particularly challenging when applying dynamic mesh adaptivity, where there is potential for additional imbalance injection when interpolating to new, optimised meshes. In the context of shallow-water modelling, we present a new method for preservation of geostrophic balance when applying dynamic mesh adaptivity. This approach is based upon interpolation of the Helmholtz decomposition of the Coriolis acceleration. We apply this in combination with a discretisation for which states in geostrophic balance are exactly steady solutions of the linearised equations on an f-plane; this method guarantees that a balanced and steady flow on a donor mesh remains balanced and steady after interpolation on...
Geurts, Bernard J.; Meyers, Johan
2006-01-01
We propose the successive inverse polynomial interpolation method to optimize model parameters in subgrid parameterization for large-eddy simulation. This approach is illustrated for the Smagorinsky eddy-viscosity model used in homogeneous decaying turbulence. The optimal Smagorinsky parameter is re
Zhang, Yongqiang; Vaze, Jai; Chiew, Francis H. S.; Teng, Jin; Li, Ming
2014-09-01
Understanding a catchment's behaviours in terms of its underlying hydrological signatures is a fundamental task in surface water hydrology. It can help in water resource management, catchment classification, and prediction of runoff time series. This study investigated three approaches for predicting six hydrological signatures in southeastern Australia. These approaches were (1) spatial interpolation with three weighting schemes, (2) index model that estimates hydrological signatures using catchment characteristics, and (3) classical rainfall-runoff modelling. The six hydrological signatures fell into two categories: (1) long-term aggregated signatures - annual runoff coefficient, mean of log-transformed daily runoff, and zero flow ratio, and (2) signatures obtained from daily flow metrics - concavity index, seasonality ratio of runoff, and standard deviation of log-transformed daily flow. A total of 228 unregulated catchments were selected, with half the catchments randomly selected as gauged (or donors) for model building and the rest considered as ungauged (or receivers) to evaluate performance of the three approaches. The results showed that for two long-term aggregated signatures - the log-transformed daily runoff and runoff coefficient, the index model and rainfall-runoff modelling performed similarly, and were better than the spatial interpolation methods. For the zero flow ratio, the index model was best and the rainfall-runoff modelling performed worst. The other three signatures, derived from daily flow metrics and considered to be salient flow characteristics, were best predicted by the spatial interpolation methods of inverse distance weighting (IDW) and kriging. Comparison of flow duration curves predicted by the three approaches showed that the IDW method was best. The results found here provide guidelines for choosing the most appropriate approach for predicting hydrological behaviours at large scales.
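The long-term aggregated signatures in the first category above can be computed directly from daily series. A minimal sketch, assuming runoff and rainfall in mm/day; the `eps` offset for zero-flow days is our assumption, not necessarily the paper's treatment:

```python
import numpy as np

def runoff_coefficient(daily_runoff_mm, daily_rain_mm):
    """Annual runoff coefficient: total runoff divided by total rainfall."""
    return float(np.sum(daily_runoff_mm) / np.sum(daily_rain_mm))

def zero_flow_ratio(daily_runoff_mm):
    """Fraction of days with zero flow."""
    q = np.asarray(daily_runoff_mm)
    return float(np.mean(q == 0))

def mean_log_runoff(daily_runoff_mm, eps=0.01):
    """Mean of log-transformed daily runoff; the small offset eps handles
    zero-flow days (an illustrative choice)."""
    q = np.asarray(daily_runoff_mm, dtype=float)
    return float(np.mean(np.log(q + eps)))
```

Signatures like these are computed on gauged (donor) catchments and then transferred to ungauged (receiver) catchments by the interpolation or index approaches compared in the study.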
Directory of Open Access Journals (Sweden)
Ly, S.
2013-01-01
Full Text Available Watershed management and hydrological modeling require precipitation data, often measured using raingages or weather stations. Hydrological models often require a preliminary spatial interpolation as part of the modeling process. The success of spatial interpolation varies according to the type of model chosen, its mode of geographical management and the resolution used. The quality of a result is determined by the quality of the continuous spatial rainfall, which ensues from the interpolation method used. The objective of this article is to review the existing methods for interpolation of rainfall data that are usually required in hydrological modeling. We review the basis for the application of certain common methods and geostatistical approaches used in interpolation of rainfall. Previous studies have highlighted the need for new research to investigate ways of improving the quality of rainfall data and, ultimately, the quality of hydrological modeling.
Improvement of energy model based on cubic interpolation curve
Institute of Scientific and Technical Information of China (English)
Li Peipei; Li Xuemei; and Wei Yu
2012-01-01
In CAGD and CG, energy models are often used to control the shape of curves and surfaces. In curve/surface modeling, a fair curve/surface can be obtained by minimizing the energy of the curve/surface. However, our research indicates that in some cases we can't get fair curves/surfaces using the current energy model. An improved energy model is therefore presented in this paper. Examples are also included to show that fair curves can be obtained using the improved energy model.
Digital elevation modeling via curvature interpolation for lidar data
Digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction for the scattered data, which is an ill-p...
Rate equation modelling and investigation of quantum cascade detector characteristics
Saha, Sumit; Kumar, Jitendra
2016-10-01
A simple precise transport model has been proposed using rate equation approach for the characterization of a quantum cascade detector. The resonant tunneling transport is incorporated in the rate equation model through a resonant tunneling current density term. All the major scattering processes are included in the rate equation model. The effect of temperature on the quantum cascade detector characteristics has been examined considering the temperature dependent band parameters and the carrier scattering processes. Incorporation of the resonant tunneling process in the rate equation model improves the detector performance appreciably and reproduces the detector characteristics within experimental accuracy.
Modeling of Bit Error Rate in Cascaded 2R Regenerators
DEFF Research Database (Denmark)
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the rege...
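Under a Gaussian-noise assumption, a per-stage BER follows from the Q factor, and an idealized regenerator cascade accumulates errors roughly additively, since a decision error made at one stage is passed on as a clean but wrong bit. This additive idealization is our sketch, not the paper's model, and it ignores the extinction-ratio and reshaping effects the paper treats:

```python
import math

def stage_ber(q_factor):
    """Gaussian-noise bit error rate for a given Q factor:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

def cascade_ber(q_factors):
    """Idealized 2R cascade: errors accumulate approximately additively
    while each per-stage BER is << 1; capped at 1."""
    return min(1.0, sum(stage_ber(q) for q in q_factors))
```

For identical stages this gives the familiar rule of thumb that N cascaded regenerators multiply the single-stage BER by roughly N.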
Geostatistical interpolation for modelling SPT data in northern Izmir
Indian Academy of Sciences (India)
Selim Altun; A Burak Göktepe; Alper Sezer
2013-12-01
In this study, we aimed to map corrected Standard Penetration Test (SPT) values in the Karşıyaka city center using a kriging approach. Six maps were prepared by this geostatistical approach at depths of 3, 6, 9, 13.5, 18 and 25.5 m. Borehole test results obtained from 388 boreholes in central Karşıyaka were used to model the spatial variation of $(\text{N}_1)_{\text{60cs}}$ values in an area of 5.5 km². Corrections were made to the data for depth, hammer energy, rod length, sampler, borehole diameter and fines content. At various depths, fitted variograms and the kriging method were used together to model the variation of corrected SPT data in the region, which enabled the estimation of missing data. The results revealed that the estimation ability of the models was acceptable, as validated by a number of parameters as well as comparisons of actual and estimated data. Outcomes of this study can be used in microzonation studies, site response analyses, calculation of the bearing capacity of subsoils in the region, and the production of a number of parameters that are empirically related to the corrected SPT number.
Atmospheric radiance interpolation for the modeling of hyperspectral data
Fuehrer, Perry; Healey, Glenn; Rauch, Brian; Slater, David; Ratkowski, Anthony
2008-04-01
The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical models to predict measured spectra. Since environmental conditions are often unknown, material detection algorithms have emerged that utilize predicted spectra over ranges of environmental conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such as MODTRAN™. Such techniques require the specification of a set of environmental conditions. This is particularly challenging in the LWIR, for which temperature and atmospheric constituent profiles are required as inputs to the RT codes. We have developed an automated method for generating environmental conditions to obtain a desired sampling of spectra in the sensor radiance domain. Because sensor radiance spectra depend nonlinearly on the environmental parameters, our method eliminates the problems usually encountered when model conditions are specified by a uniform sampling of environmental parameters. It uses an initial set of radiance vectors concatenated over a set of conditions to define the mapping from environmental conditions to sensor spectral radiance. This approach enables a given number of model conditions to span the space of desired radiance spectra and improves both the accuracy and efficiency of detection algorithms that rely upon predicted spectra.
A Cascading Model Of An Active Magnetic Regenerator System
DEFF Research Database (Denmark)
Tahavori, M.; Filonenko, K.; Veje, C. T.;
2016-01-01
In recent years, a significant number of studies have been devoted to the modeling and analysis of active magnetic regenerators (AMRs). Depending on the AMR geometry and the magnetocaloric material being modeled, the AMR may not be able to provide the performance demanded by practical applications. Some AMR models in the literature predict high performance but with relatively low temperature spans at either end of the AMR. Therefore, they may not be sufficient for practical applications, such as providing the heat exchanger temperature spans required for residential and commercial space air conditioning. To remedy this, one solution is cascading of multiple single-layer AMRs. In this work, a cascading AMR model is presented and studied. In a cascade configuration, N single-layer AMRs are connected. The results show that higher hot- and cold-side temperature differences may be achieved...
Up and down cascades: three-dimensional magnetic field model.
Blanter, E M; Shnirman, M G; Le Mouël, J L
2002-06-01
In our previous work we proposed a two-dimensional model of the geodynamo. Here we use the same approach to build a three-dimensional self-excited geodynamo model that generates a large-scale magnetic field from an arbitrarily small initial field, using the up- and down-cascade effects of a multiscale turbulent system of cyclones. The multiscale system of turbulent cyclones evolves in six domains of an equatorial cylindrical layer of the core. The appearance of new cyclones is realized by two cascades: a turbulent direct cascade and an inverse cascade of coupling of similar cyclones. The interaction between the different domains is effected through a direct cascade parameter which is essential for the statistics of the long-lived symmetry breaking. Generation of the secondary magnetic field results from the interaction of the components of the primary magnetic field with the turbulent cyclones. The amplification of the magnetic field is due to the transfer of energy from the turbulent helical motion to the generated magnetic field. The model exhibits a phase transition through the parameter characterizing this energy transfer. In the supercritical domain we obtain long intervals of constant polarity (chrons) and quick reversals; the relevant time constants agree with paleomagnetic observations. Possible application of the model to the study of the geometrical structure of the geomagnetic field (and briefly other planetary fields) is discussed.
One-dimensional hydrodynamic model generating turbulent cascade
Matsumoto, Takeshi
2016-01-01
As a minimal mathematical model generating cascade analogous to that of the Navier-Stokes turbulence in the inertial range, we propose a one-dimensional partial-differential-equation model that conserves the integral of the squared vorticity analogue (enstrophy) in the inviscid case. With a large-scale forcing and small viscosity, we find numerically that the model exhibits the enstrophy cascade, the broad energy spectrum with a sizable correction to the dimensional-analysis prediction, peculiar intermittency and self-similarity in the dynamical system structure.
One-dimensional hydrodynamic model generating a turbulent cascade
Matsumoto, Takeshi; Sakajo, Takashi
2016-05-01
As a minimal mathematical model generating cascade analogous to that of the Navier-Stokes turbulence in the inertial range, we propose a one-dimensional partial-differential-equation model that conserves the integral of the squared vorticity analog (enstrophy) in the inviscid case. With a large-scale random forcing and small viscosity, we find numerically that the model exhibits the enstrophy cascade, the broad energy spectrum with a sizable correction to the dimensional-analysis prediction, peculiar intermittency, and self-similarity in the dynamical system structure.
Robustness of Power-law Behavior in Cascading Failure Models
Sloothaak, F; Zwart, A P
2016-01-01
Inspired by reliability issues in electric transmission networks, we use a probabilistic approach to study the occurrence of large failures in a stylized cascading failure model. In this model, lines have random capacities that initially meet the load demands imposed on the network. Every single line failure changes the load distribution in the surviving network, possibly causing further lines to become overloaded and trip as well. An initial single line failure can therefore potentially trigger massive cascading effects, and in this paper we measure the risk of such cascading events by the probability that the number of failed lines exceeds a certain large threshold. Under particular critical conditions, the exceedance probability follows a power-law distribution, implying a significant risk of severe failures. We examine the robustness of the power-law behavior by exploring under which assumptions this behavior prevails.
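The stylized model above can be sketched as a short simulation. Equal load sharing among surviving lines is a simplifying assumption of this sketch, not necessarily the paper's exact redistribution rule:

```python
import random

def cascade_size(n_lines, total_load, sample_capacity, rng=None):
    """Stylized cascading-failure model: n_lines share total_load equally;
    a line whose load exceeds its random capacity trips, and its share is
    redistributed equally over the surviving lines until no further line
    is overloaded. Returns the number of failed lines."""
    rng = rng or random.Random(0)
    caps = [sample_capacity(rng) for _ in range(n_lines)]
    alive = list(range(n_lines))
    while alive:
        load = total_load / len(alive)
        survivors = [i for i in alive if caps[i] >= load]
        if len(survivors) == len(alive):
            break  # stable: no line overloaded
        alive = survivors
    return n_lines - len(alive)
```

With heavy-tailed or near-critical capacity distributions, repeated runs of `cascade_size` produce the exceedance statistics whose power-law behavior the paper studies.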
Effect of the precipitation interpolation method on the performance of a snowmelt runoff model
Jacquin, Alexandra
2014-05-01
Uncertainties on the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater in the case of mountain catchments, because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual type snowmelt runoff models. The HBV Light model version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied in Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment's area is 2110 km² and elevation ranges from 950 m a.s.l. to 5930 m a.s.l. The local meteorological network is sparse, with all precipitation gauges located below 3000 m a.s.l. Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. Interpolation methods applied include kriging with external drift (KED), optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic functions fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for the existence of a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimations than KED in this region. Using input precipitation obtained by each method
The Run up Tsunami Modeling in Bengkulu using the Spatial Interpolation of Kriging Technique
Directory of Open Access Journals (Sweden)
Yulian Fauzi
2014-12-01
Full Text Available This research aims to design a tsunami hazard zone with a scenario of tsunami run-up height variation based on land use, slope and distance from the shoreline. The method used in this research is spatial modelling with GIS via the ordinary kriging interpolation technique. The best kriging variant in this study was circular kriging, which produced a good semivariogram and smaller RMSE values than the other kriging methods. The results show that the area affected by tsunami inundation depends on run-up height, slope and land use. For a run-up of 30 meters, the flooded area is about 3,148.99 hectares, or 20.7% of the total area of the city of Bengkulu.
An Online Method for Interpolating Linear Parametric Reduced-Order Models
Amsallem, David
2011-01-01
A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.
Modeling Spatiotemporal Precipitation: Effects of Density, Interpolation, and Land Use Distribution
Directory of Open Access Journals (Sweden)
Christopher L. Shope
2015-01-01
Full Text Available Characterization of precipitation is critical in quantifying distributed catchment-wide discharge. The gauge network is a key driver in hydrologic modeling to characterize discharge. The accuracy of precipitation is dependent on the location of stations, the density of the network, and the interpolation scheme. Our study examines 16 weather stations in a 64 km² catchment. We develop a weighted, distributed approach for gap-filling the observed meteorological dataset. We analyze five interpolation methods (Thiessen, IDW, nearest neighbor, spline, and ordinary kriging) at five gauge densities. We utilize precipitation in a SWAT model to estimate discharge in lumped parameter simulations and in a distributed approach at the multiple densities (1, 16, 50, 142, and 300 stations). Gauge density has a substantial impact on distributed discharge, and the optimal gauge density is between 50 and 142 stations. Our results also indicate that the IDW interpolation scheme was optimal, although the kriging and Thiessen polygon methods produced similar results. To further examine variability in discharge, we characterized the land use and soil distribution throughout each of the subbasins. The optimal rain gauge position and distribution of the gauges drastically influence catchment-wide runoff. We found that it is best to locate the gauges near less permeable locations.
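Of the five schemes compared above, inverse distance weighting is the simplest to state: the estimate at an ungauged point is a weighted average of the observations, with weights decaying as an inverse power of distance. A minimal generic sketch (not the study's SWAT-coupled implementation; the power-of-2 exponent is the common default):

```python
import math

def idw(points, values, query, power=2):
    """Inverse distance weighting: estimate the value at `query`
    as a distance-weighted average of the sampled values."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v  # query coincides with a gauge: return it exactly
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Four gauges at the unit-square corners; the centre is equidistant
# from all of them, so IDW reduces there to a plain average.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [10.0, 20.0, 30.0, 40.0]
est = idw(pts, vals, (0.5, 0.5))
```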
Feigenbaum Cascade of Discrete Breathers in a Model of DNA
Maniadis, P; Bishop, A R; Rasmussen, K \\O
2010-01-01
We demonstrate that period-doubled discrete breathers appear from the anti-continuum limit of the driven Peyrard-Bishop-Dauxois model of DNA. These novel breathers result from a stability overlap between sub-harmonic solutions of the driven Morse oscillator. Sub-harmonic breathers exist whenever a stability overlap is present within the Feigenbaum cascade to chaos and therefore an entire cascade of such breathers exists. This phenomenon is present in any driven lattice where the on-site potential admits sub-harmonic solutions. In DNA these breathers may have ramifications for cellular gene expression.
Interpolation Routines Assessment in ALS-Derived Digital Elevation Models for Forestry Applications
Directory of Open Access Journals (Sweden)
Antonio Luis Montealegre
2015-07-01
Full Text Available Airborne Laser Scanning (ALS) is capable of estimating a variety of forest parameters using different metrics extracted from the normalized heights of the point cloud using a Digital Elevation Model (DEM). In this study, six interpolation routines were tested over a range of land cover and terrain roughness in order to generate a collection of DEMs with spatial resolutions of 1 and 2 m. The accuracy of the DEMs was assessed twice, first using a test sample extracted from the ALS point cloud, and second using a set of 55 ground control points collected with a high-precision Global Positioning System (GPS). The effects of terrain slope, land cover, ground point density and pulse penetration on the interpolation error were examined by stratifying the study area with these variables. In addition, a Classification and Regression Tree (CART) analysis allowed the development of a prediction uncertainty map to identify areas in which DEMs and Airborne Light Detection and Ranging (LiDAR)-derived products may be of low quality. The Triangulated Irregular Network (TIN) to raster interpolation method produced the best result in the validation process with the training data set, while the Inverse Distance Weighted (IDW) routine was the best in the validation with GPS (RMSE of 2.68 cm and RMSE of 37.10 cm, respectively).
A psychological cascade model for persisting voice problems in teachers
de Jong, FICRS; Cornelis, BE; Wuyts, FL; Kooijman, PGC; Schutte, HK; Oudes, MJ; Graamans, K
2003-01-01
In 76 teachers with persisting voice problems, the maintaining factors and coping strategies were examined. Physical, functional, psychological and socioeconomic factors were assessed. A parallel was drawn to a psychological cascade model designed for patients with chronic back pain. The majority of
Modeling Events with Cascades of Poisson Processes
Simma, Aleksandr
2012-01-01
We present a probabilistic model of events in continuous time in which each event triggers a Poisson process of successor events. The ensemble of observed events is thereby modeled as a superposition of Poisson processes. Efficient inference is feasible under this model with an EM algorithm. Moreover, the EM algorithm can be implemented as a distributed algorithm, permitting the model to be applied to very large datasets. We apply these techniques to the modeling of Twitter messages and the revision history of Wikipedia.
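The generative side of such a model can be sketched as a branching process: each event spawns a Poisson-distributed number of successor events at exponentially distributed delays, giving a subcritical cascade when the mean offspring count is below one. The sketch below is an illustrative simulation only, not the paper's EM inference (the offspring mean `mu`, delay `rate` and `horizon` are assumed parameters):

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's algorithm for sampling a Poisson random variate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def simulate_cascade(horizon=50.0, mu=0.5, rate=1.0, seed=7):
    """One root event at t=0; every event triggers Poisson(mu) successor
    events at exponentially distributed delays. Subcritical for mu < 1."""
    rng = random.Random(seed)
    events, frontier = [], [0.0]
    while frontier:
        t = frontier.pop()
        events.append(t)
        for _ in range(poisson_sample(rng, mu)):
            child = t + rng.expovariate(rate)
            if child <= horizon:
                frontier.append(child)
    return sorted(events)

events = simulate_cascade()
```

The observed ensemble is then the superposition of all such per-event Poisson processes, which is what makes EM inference over latent parent assignments natural.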
Redman, Jeremiah D.; Holmes, Heather A.; Balachandran, Sivaraman; Maier, Marissa L.; Zhai, Xinxin; Ivey, Cesunica; Digby, Kyle; Mulholland, James A.; Russell, Armistead G.
2016-09-01
The impacts of emissions sources on air quality in St. Louis, Missouri are assessed for use in acute health effects studies. However, like many locations in the United States, the speciated particulate matter (PM) measurements from regulatory monitoring networks in St. Louis are only available every third day. The power of studies investigating acute health effects of air pollution is reduced when using one-in-three day source impacts compared to daily source impacts. This paper presents a temporal interpolation model to estimate daily speciated PM2.5 mass concentrations and source impact estimates using one-in-three day measurements. The model is used to interpolate 1-in-3 day source impact estimates and to interpolate the 1-in-3 day PM species concentrations prior to source apportionment (SA). Both approaches are compared and evaluated using two years (June 2001-May 2003) of daily data from the St. Louis Midwest Supersite (STL-SS). Data withholding is used to simulate a 1-in-3 day data set from the daily data to evaluate interpolated estimates. After evaluation using the STL-SS data, the model is used to estimate daily source impacts at another site approximately seven kilometers (7 km) northwest of the STL-SS (Blair); results between the sites are compared. For interpolated species concentrations, the model performs better for secondary species (sulfate, nitrate, ammonium, and organic carbon) than for primary species (metals and elemental carbon), likely due to the greater spatial autocorrelation of secondary species. Pearson correlation (R) values for sulfate, nitrate, ammonium, elemental carbon, and organic carbon ranged from 0.61 (elemental carbon, EC2) to 0.97 (sulfate). For trace metals, the R values ranged from 0.31 (Ba) to 0.81 (K). The interpolated source impact estimates also indicated a stronger correlation for secondary sources. Correlations of the secondary source impact estimates based on measurement data and interpolation data ranged from 0.68 to 0
Pressure Decimation and Interpolation (PDI) method for a baroclinic non-hydrostatic model
Shi, Jian; Shi, Fengyan; Kirby, James T.; Ma, Gangfeng; Wu, Guoxiang; Tong, Chaofeng; Zheng, Jinhai
2015-12-01
Non-hydrostatic models are computationally expensive in simulating density flows and mass transport problems due to the requirement of sufficient grid resolution to resolve density and flow structures. Numerical tests based on the Non-Hydrostatic Wave Model, NHWAVE (Ma et al., 2012), indicated that up to 70% of the total computational cost may be borne by the pressure Poisson solver in cases with high grid resolution in both vertical and horizontal directions. However, recent studies using Poisson solver-based non-hydrostatic models have shown that an accurate prediction of wave dispersion does not require a large number of vertical layers if the dynamic pressure is properly discretized. In this study, we explore the possibility that the solution for the dynamic pressure field may, in general, be decimated to a resolution far coarser than that used in representing velocities and other transported quantities, without sacrificing accuracy of solutions. Following van Reeuwijk (2002), we determine the dynamic pressure field by solving the Poisson equation on a coarser grid and then interpolate the pressure field onto a finer grid used for solving for the remaining dynamic variables. With the Pressure Decimation and Interpolation (PDI) method, computational efficiency is greatly improved. We use three test cases to demonstrate the model's accuracy and efficiency in modeling density flows.
Multi-Dimensional Piece-Wise Self-Affine Fractal Interpolation Model
Institute of Scientific and Technical Information of China (English)
ZHANG Tong; ZHUANG Zhuo
2007-01-01
Iterated function system (IFS) models have been used to represent discrete sequences where the attractor of the IFS is piece-wise self-affine in R2 or R3 (R is the set of real numbers). In this paper, the piece-wise self-affine IFS model is extended from R3 to Rn (n is an integer greater than 3), which is called the multi-dimensional piece-wise self-affine fractal interpolation model. This model uses a "mapping partial derivative", and a constrained inverse algorithm to identify the model parameters. The model values depend continuously on all the model parameters, and represent most data which are not multi-dimensional self-affine in Rn. Therefore, the result is very general. The class of functions obtained is much more diverse because their values depend continuously on all of the variables, with all the coefficients of the possible multi-dimensional affine maps determining the functions.
Stein, A.
1991-01-01
The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are presented. Sampling s
Interpolation-based reduced-order modelling for steady transonic flows via manifold learning
DEFF Research Database (Denmark)
Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan
2014-01-01
This paper presents a parametric reduced-order model (ROM) based on manifold learning (ML) for use in steady transonic aerodynamic applications. The main objective of this work is to derive an efficient ROM that exploits the low-dimensional nonlinear solution manifold to ensure an improved...... that has the ability to predict approximate CFD solutions at untried parameter combinations, Isomap is coupled with an interpolation method to capture the variations in parameters like the angle of attack or the Mach number. Furthermore, an approximate local inverse mapping from the reduced...
Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier
2017-04-01
The availability and consistency of data is a determining factor for the reliability of any hydrological model and its simulated results. Unfortunately, there are many regions worldwide where data are not available in the desired quantity and quality. The Santa River basin (SRB), located within a complex topographic and climatic setting in the tropical Andes of Peru, is a clear example of this challenging situation. A monitoring network of in-situ stations in the SRB recorded series of hydro-meteorological variables which finally ceased to operate in 1999. In the following years, several researchers evaluated and completed many of these series. This database was used by multiple research and policy-oriented projects in the SRB. However, hydroclimatic information remains limited, making it difficult to perform research, especially when dealing with the assessment of current and future water resources. In this context, an evaluation of different methodologies to interpolate temperature and precipitation data at a monthly time step, as well as ice volume data in glacierized basins with limited data, is presented here. The methodologies were evaluated for the Quillcay River, a tributary of the SRB, where hydro-meteorological data have been available from nearby monitoring stations since 1983. The study period was 1983-1999, with a validation period of 1993-1999. For the temperature series, the aim was to extend the observed data and interpolate it. NCEP reanalysis data were used to extend the observed series: 1) using a simple correlation with multiple field stations, or 2) applying the altitudinal correction proposed in previous studies. The interpolation was then applied as a function of altitude. Both methodologies provide very similar results; by parsimony, the simple correlation is a viable choice. For the precipitation series, the aim was to interpolate observed data. Two methodologies were evaluated: 1) inverse distance weighting, whose results underestimate the amount
Threshold model of cascades in temporal networks
Karimi, Fariba
2012-01-01
Threshold models try to explain the consequences of social influence like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only contacts but also the time of the contacts are represented explicitly. There is a consensus that bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work, we propose an extension of Watts' classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past. I.e., the individuals are affected by contacts within a time window. In addition to th...
Fast regularized image interpolation method
Institute of Scientific and Technical Information of China (English)
Hongchen Liu; Yong Feng; Linjing Li
2007-01-01
The regularized image interpolation method is widely used and is based on the vector interpolation model, in which the down-sampling matrix has a very large dimension, requiring substantial storage and high computational complexity. In this paper, a fast algorithm for image interpolation based on the tensor product of matrices is presented, which transforms the vector interpolation model into matrix form. The proposed algorithm greatly reduces the storage requirement and time consumption. The simulation results verify its validity.
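The algebraic fact behind such tensor-product reformulations is the identity (Bᵀ ⊗ A) vec(X) = vec(AXB), which lets a huge Kronecker-structured operator act in matrix form without ever being assembled. A small NumPy check of the identity (illustrative only, not the authors' algorithm; the matrix sizes are arbitrary):

```python
import numpy as np

# Identity: (B^T (x) A) vec(X) = vec(A X B), with vec = column stacking.
# The matrix form on the right never builds the (mn x mn) Kronecker operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))

y_vector_form = np.kron(B.T, A) @ X.flatten(order="F")  # 12x12 operator
Y_matrix_form = A @ X @ B                               # no Kronecker needed
```

For an n×n image this turns an n²×n² matrix-vector product into two n×n matrix products, which is the source of the storage and time savings claimed above.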
Roy, Subrata P.
2014-01-28
The method of moments with interpolative closure (MOMIC) for soot formation and growth provides a detailed modeling framework maintaining a good balance in generality, accuracy, robustness, and computational efficiency. This study presents several computational issues in the development and implementation of the MOMIC-based soot modeling for direct numerical simulations (DNS). The issues of concern include a wide dynamic range of numbers, choice of normalization, high effective Schmidt number of soot particles, and realizability of the soot particle size distribution function (PSDF). These problems are not unique to DNS, but they are often exacerbated by the high-order numerical schemes used in DNS. Four specific issues are discussed in this article: the treatment of soot diffusion, choice of interpolation scheme for MOMIC, an approach to deal with strongly oxidizing environments, and realizability of the PSDF. General, robust, and stable approaches are sought to address these issues, minimizing the use of ad hoc treatments such as clipping. The solutions proposed and demonstrated here are being applied to generate new physical insight into complex turbulence-chemistry-soot-radiation interactions in turbulent reacting flows using DNS. © 2014 Copyright Taylor and Francis Group, LLC.
Resistor mesh model of a spherical head: part 1: applications to scalp potential interpolation.
Chauveau, N; Morucci, J P; Franceries, X; Celsis, P; Rigaud, B
2005-11-01
A resistor mesh model (RMM) has been implemented to describe the electrical properties of the head and the configuration of the intracerebral current sources by simulation of forward and inverse problems in electroencephalogram/event related potential (EEG/ERP) studies. For this study, the RMM representing the three basic tissues of the human head (brain, skull and scalp) was superimposed on a spherical volume mimicking the head volume: it included 43 102 resistances and 14 123 nodes. The validation was performed with reference to the analytical model by consideration of a set of four dipoles close to the cortex. Using the RMM and the chosen dipoles, four distinct families of interpolation technique (nearest neighbour, polynomial, splines and lead fields) were tested and compared so that the scalp potentials could be recovered from the electrode potentials. The 3D spline interpolation and the inverse forward technique (IFT) gave the best results. The IFT is very easy to use when the lead-field matrix between scalp electrodes and cortex nodes has been calculated. By simple application of the Moore-Penrose pseudo inverse matrix to the electrode cap potentials, a set of current sources on the cortex is obtained. Then, the forward problem using these cortex sources renders all the scalp potentials.
Cascades in the Threshold Model for varying system sizes
Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy
2015-03-01
A classical model in opinion dynamics is the Threshold Model (TM), aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network, there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs, when large cascades occur, the spread contribution of the added initiator at the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
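The TM update rule described above fits in a few lines: repeatedly activate any node whose active-neighbor fraction meets its threshold, until nothing changes. A minimal sketch on a toy graph (generic synchronous sweep to a fixed point, not the authors' ER-graph experiments):

```python
def threshold_cascade(adj, threshold, initiators):
    """Watts threshold model: a node activates once the active fraction
    of its neighbors reaches `threshold`; iterate to a fixed point."""
    active = set(initiators)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in active or not nbrs:
                continue
            if sum(n in active for n in nbrs) / len(nbrs) >= threshold:
                active.add(node)
                changed = True
    return active

# Path graph 0-1-2-3-4: with threshold 0.5 a single end initiator
# spreads over the whole path; with 0.6 the cascade dies immediately.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
full = threshold_cascade(path, 0.5, {0})
stalled = threshold_cascade(path, 0.6, {0})
```

The sharp difference between the two runs is a miniature version of the small-to-large cascade transition the abstract discusses.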
Nakai, T; Marutani, Y
1992-09-01
We have developed a unique laser fabrication system that uses an ultraviolet laser beam and liquid photopolymer. The system can easily be used to fabricate physical models without milling tools in only one process by using digital data obtained from medical computed tomography (CT) scanners or computer-aided design systems. We describe the fabrication of a smooth physical model such as a cerebrum, using the laser fabrication system, with the help of CT and magnetic resonance images that are made with coarse slices. Each sandwiched area between adjoining images is interpolated by using third-order spline curves in the cylindrical coordinate system. This modeling technique can play a major role in personal prosthesis, surgical planning, and implant design.
Cascade Failure in a Phase Model of Power Grids
Sakaguchi, Hidetsugu; Matsuo, Tatsuma
2012-07-01
We propose a phase model to study cascade failure in power grids composed of generators and loads. If the power demand is below a critical value, the model system of power grids maintains the standard frequency by feedback control. On the other hand, if the power demand exceeds the critical value, an electric failure occurs via step out (loss of synchronization) or voltage collapse. The two failures are incorporated as two removal rules of generator nodes and load nodes. We perform direct numerical simulation of the phase model on a square lattice and a scale-free network and compare the results with a mean-field approximation.
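A first-order phase model of this kind can be sketched directly: each node's phase velocity is its net power plus sinusoidal coupling to its neighbors, and a feasible grid relaxes to a phase-locked state. This is a toy illustration only (not the authors' square-lattice model, and without the feedback control or node-removal rules; coupling strength and step size are assumptions):

```python
import math

def simulate_phase_grid(power, adj, K=5.0, dt=0.005, steps=4000):
    """Integrate d(theta_i)/dt = P_i + K * sum_j sin(theta_j - theta_i)
    with forward Euler; a feasible grid settles into phase locking."""
    theta = [0.0] * len(power)
    for _ in range(steps):
        dtheta = [
            power[i] + K * sum(math.sin(theta[j] - theta[i]) for j in adj[i])
            for i in range(len(power))
        ]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    return theta

# Two generators (+1) and two loads (-1) alternating on a ring;
# total power balances, so a phase-locked solution exists.
power = [1.0, -1.0, 1.0, -1.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
phases = simulate_phase_grid(power, ring)
```

If the demand were raised past the point where sin-coupling can carry the flow, no fixed point would exist and the phases would drift apart, the toy analogue of step out.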
Cascaded process model based control: packed absorption column application.
Govindarajan, Anand; Jayaraman, Suresh Kumar; Sethuraman, Vijayalakshmi; Raul, Pramod R; Rhinehart, R Russell
2014-03-01
Nonlinear, adaptive, process-model based control is demonstrated in a cascaded single-input-single-output mode for pressure drop control in a pilot-scale packed absorption column. The process is shown to be nonlinear. Control is demonstrated in both servo and regulatory modes, for no wind-up in a constrained situation, and for bumpless transfer. Model adaptation is demonstrated and shown to provide process insight. The application procedure is revealed as a design guide to aid others in implementing process-model based control.
Partitioning and interpolation based hybrid ARIMA–ANN model for time series forecasting
Indian Academy of Sciences (India)
C NARENDRA BABU; PALLAVIRAM SURE
2016-07-01
Time series data (TSD) originating from different applications have dissimilar characteristics. Hence, for prediction of TSD, diversified varieties of prediction models exist. In many applications, hybrid models provide more accurate predictions than individual models. One such hybrid model, namely the auto-regressive integrated moving average – artificial neural network (ARIMA–ANN) model, is devised in many different ways in the literature. However, the prediction accuracy of the hybrid ARIMA–ANN model can be further improved by devising suitable processing techniques. In this paper, a hybrid ARIMA–ANN model is proposed, which combines the concepts of the recently developed moving average (MA) filter based hybrid ARIMA–ANN model with a processing technique involving a partitioning–interpolation (PI) step. The improved prediction accuracy of the proposed PI based hybrid ARIMA–ANN model is justified using a simulation experiment. Further, on different experimental TSD, like sunspot TSD and electricity price TSD, the proposed hybrid model is applied along with four existing state-of-the-art models, and it is found that the proposed model outperforms all the others, and hence is a promising model for TSD prediction.
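The moving-average step such hybrids build on simply splits a series into a smooth component (handed to the linear ARIMA part) and a residual (handed to the ANN), and the split is exact by construction. A minimal sketch of the filter (the trailing-window form and edge handling are illustrative assumptions, not the paper's exact pre-processing):

```python
def ma_filter(series, window):
    """Trailing moving average; early points average what is available."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        seg = series[lo:i + 1]
        out.append(sum(seg) / len(seg))
    return out

# Decompose: series = trend + residual, exactly.
series = [3.0, 5.0, 4.0, 6.0, 8.0, 7.0, 9.0]
trend = ma_filter(series, 3)
residual = [x - t for x, t in zip(series, trend)]
```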
Pursiainen, Sampsa; Wolters, Carsten H
2016-01-01
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method....
Charge-based MOSFET model based on the Hermite interpolation polynomial
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
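The numerical ingredient here, third-order Hermite interpolation, matches both function values and first derivatives at the interval endpoints and reproduces any quadratic or cubic exactly. A generic sketch of the interpolant in the standard Hermite basis (the actual surface-potential/inversion-charge relation of the model is not shown):

```python
def hermite_cubic(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolant matching values (f0, f1) and first
    derivatives (d0, d1) at the endpoints x0 and x1."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1.0 + 2.0 * t) * (1.0 - t) ** 2   # standard Hermite basis
    h10 = t * (1.0 - t) ** 2
    h01 = t ** 2 * (3.0 - 2.0 * t)
    h11 = t ** 2 * (t - 1.0)
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1

# f(x) = x^2 on [0, 2]: values 0 and 4, slopes 0 and 4.
# A cubic Hermite interpolant reproduces quadratics exactly.
mid = hermite_cubic(0.0, 2.0, 0.0, 4.0, 0.0, 4.0, 1.0)  # f(1) = 1
```

Because the interpolant also matches endpoint slopes, a model built on it avoids the kinks that a value-only fit would introduce, which is what lets the drain-current expression stay smooth across operating regions.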
Digital elevation modeling via curvature interpolation for LiDAR data
Directory of Open Access Journals (Sweden)
Hwamog Kim
2016-03-01
Full Text Available A digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction for the scattered data, which is an ill-posed problem, and most computational algorithms become overly expensive as the number of sample points increases. This article studies an effective partial differential equation (PDE) based algorithm, called the curvature interpolation method (CIM). The new method iteratively utilizes curvature information, estimated from an intermediate surface, to construct a reliable image surface that contains all of the data points. The CIM is applied to DEM construction for point cloud data acquired by light detection and ranging (LiDAR) technology. It converges to a piecewise smooth image, requiring O(N) operations independently of the number of sample points, where N is the number of grid points.
Kinematic modelling of a 3-axis NC machine tool in linear and circular interpolation
Pessoles, Xavier; Rubio, Walter; 10.1007/s00170-009-2236-z
2010-01-01
Machining time is a major performance criterion when it comes to high-speed machining. CAM software can help in estimating that time for a given strategy. But in practice, CAM-programmed feed rates are rarely achieved, especially where complex surface finishing is concerned. This means that machining time forecasts are often more than one step removed from reality. The reason behind this is that CAM routines do not take either the dynamic performances of the machines or their specific machining tolerances into account. The present article seeks to improve simulation of high-speed NC machine dynamic behaviour and machining time prediction, offering two models. The first contributes through enhanced simulation of three-axis paths in linear and circular interpolation, taking high-speed machine accelerations and jerks into account. The second model allows transition passages between blocks to be integrated in the simulation by adding in a polynomial transition path that caters for the true machining environment t...
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus, image degradation is simplified to a convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, either graphics processing unit multithreading or an increased space interval between control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
Boolean Models of Biological Processes Explain Cascade-Like Behavior.
Chen, Hao; Wang, Guanyu; Simha, Rahul; Du, Chenghang; Zeng, Chen
2016-01-29
Biological networks play a key role in determining biological function and, therefore, an understanding of their structure and dynamics is of central interest in systems biology. In Boolean models of such networks, the status of each molecule is either "on" or "off", and as the molecules interact with each other, their individual statuses change from "on" to "off" or vice versa, so that the system of molecules in the network collectively goes through a sequence of changes in state. This sequence of changes is termed a biological process. In this paper, we examine the common perception that events in biomolecular networks occur sequentially, in a cascade-like manner, and ask whether this is likely to be an inherent property. In further investigations of the budding and fission yeast cell-cycle, we identify two generic dynamical rules. A Boolean system that complies with these rules will automatically have a certain robustness. By considering the biological requirements of robustness and designability, we show that those Boolean dynamical systems, compared to an arbitrary dynamical system, statistically present the characteristics of cascadeness and sequentiality, as observed in the budding and fission yeast cell-cycle. These results suggest that cascade-like behavior might be an intrinsic property of biological processes.
Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model
Directory of Open Access Journals (Sweden)
C. Narendra Babu
2015-07-01
Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing and a partitioning and interpolation (PI) technique are incorporated by the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
The subspace Nevanlinna interpolation problem and the most powerful unfalsified model
Rapisarda, P; Willems, JC
1997-01-01
A generalization of the tangential Nevanlinna interpolation problem will be studied from a behavioral point of view. Necessary and sufficient conditions for its solvability and a characterization of all its solutions are derived. These results are obtained by associating to the interpolation data a
Fix-point Multiplier Distributions in Discrete Turbulent Cascade Models
Jouault, B; Lipa, P
1998-01-01
One-point time-series measurements limit the observation of three-dimensional fully developed turbulence to one dimension. For one-dimensional models, like multiplicative branching processes, this implies that the energy flux from large to small scales is not conserved locally. This renders the random weights used in the cascade curdling different from the multipliers obtained from a backward averaging procedure. The resulting multiplier distributions become solutions of a fix-point problem. With a further restoration of homogeneity, all observed correlations between multipliers in the energy dissipation field can be understood in terms of simple scale-invariant multiplicative branching processes.
Modelling of GaN quantum dot terahertz cascade laser
Asgari, A.; Khorrami, A. A.
2013-03-01
In this paper, GaN-based spherical quantum dot cascade lasers for the generation of terahertz waves have been modelled. The Schrödinger, Poisson, and laser rate equations have been solved self-consistently, including all dominant physical effects, such as the piezoelectric and spontaneous polarization in nitride-based QDs and the effects of temperature. The exact values of the energy levels, the wavefunctions, the lifetimes of the electron levels, and the lasing frequency are calculated. Laser parameters such as the optical gain, the output power and the threshold current density have also been calculated at different temperatures and applied electric fields.
Emotion: Appraisal-coping model for the "Cascades" problem
Mahboub, Karim; Bertelle, Cyrille; Jay, Véronique
2009-01-01
Modelling emotion has become a challenging task nowadays, and several models have been produced to express human emotional activity. However, only a few of them are currently able to express the close relationship between emotion and cognition. An appraisal-coping model is presented here, with the aim of simulating the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behaviour when solving the problem using his or her own strategies.
Emotional intelligence: an integrative meta-analysis and cascading model.
Joseph, Dana L; Newman, Daniel A
2010-01-01
Research and valid practice in emotional intelligence (EI) have been impeded by lack of theoretical clarity regarding (a) the relative roles of emotion perception, emotion understanding, and emotion regulation facets in explaining job performance; (b) conceptual redundancy of EI with cognitive intelligence and Big Five personality; and (c) application of the EI label to 2 distinct sets of constructs (i.e., ability-based EI and mixed-based EI). In the current article, the authors propose and then test a theoretical model that integrates these factors. They specify a progressive (cascading) pattern among ability-based EI facets, in which emotion perception must causally precede emotion understanding, which in turn precedes conscious emotion regulation and job performance. The sequential elements in this progressive model are believed to selectively reflect Conscientiousness, cognitive ability, and Neuroticism, respectively. "Mixed-based" measures of EI are expected to explain variance in job performance beyond cognitive ability and personality. The cascading model of EI is empirically confirmed via meta-analytic data, although relationships between ability-based EI and job performance are shown to be inconsistent (i.e., EI positively predicts performance for high emotional labor jobs and negatively predicts performance for low emotional labor jobs). Gender and race differences in EI are also meta-analyzed. Implications for linking the EI fad in personnel selection to established psychological theory are discussed.
Energy Technology Data Exchange (ETDEWEB)
Maiden, D E
1998-10-01
A method for constructing bicubic interpolation polynomials for the pressure P and internal energy E that are thermodynamically consistent at the mesh points and continuous across mesh boundaries is presented. The slope boundary conditions for the pressure and energy are derived from finite differences of the data and from Maxwell's consistency relation. Monotonicity of the sound speed and the specific heat is obtained by a bilinear interpolation of the slopes of the tabulated data. Monotonicity of the functions near steep gradients may be achieved by mesh refinement or by using a non-consistent bilinear fit to the data. Mesh refinement is very efficient for uniform-linear or uniform-logarithmic spaced data because a direct table lookup can be used. The direct method was compared to binary search and was 37 percent faster for logarithmic-spaced data and 106 percent faster for linear-spaced data. This improvement in speed is very important in the radiation-transport opacity-lookup part of the calculation. Interpolation in P-E space, with mesh refinement, can be made simple, robust, and energy-conserving. In the final analysis, the interpolation of the free energy and entropy (Maiden and Cook) remains a competitor.
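The quoted speed-up of direct lookup over binary search comes from computing the mesh index in closed form when the data are uniformly spaced in the variable or its logarithm. A minimal sketch; the log-spaced grid here is hypothetical, not one of the tabulated EOS meshes:

```python
import bisect
import math

def direct_log_index(x, x0, ratio):
    """Index into a logarithmically spaced table x_i = x0 * ratio**i
    by a direct formula instead of a binary search."""
    return int(math.log(x / x0) / math.log(ratio))

# Hypothetical log-spaced grid: 1e-3, 1e-2, ..., spaced by a factor of 10
x0, ratio, n = 1e-3, 10.0, 8
grid = [x0 * ratio**i for i in range(n)]

x = 0.5
i_direct = direct_log_index(x, x0, ratio)        # O(1) arithmetic
i_search = bisect.bisect_right(grid, x) - 1      # O(log n) comparisons
assert i_direct == i_search
```

For uniform-linear spacing the same trick is `int((x - x0) / dx)`, which is why the linear case shows the larger speed-up.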
Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor
Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G
2000-01-01
An accelerator-driven subcritical cascade reactor composed of a main thermal-neutron reactor, constructed analogously to the core of the VVER-1000 reactor, and a booster reactor, constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting the radioactive wastes produced (for k_{eff}=0.98 and a proton accelerator beam current of I=5.3 mA, the maximum neutron flux density is PHI^{max}(r,z)=10^{14} n/(cm^{2} s) in the thermal zone and PHI^{max}(r,z)=2.25x10^{15} n/(cm^{2} s) in the fast zone). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.
Toward Holistic Scene Understanding: Feedback Enabled Cascaded Classification Models.
Li, Congcong; Kowdle, Adarsh; Saxena, Ashutosh; Chen, Tsuhan
2012-07-01
Scene understanding includes many related subtasks, such as scene categorization, depth estimation, object detection, etc. Each of these subtasks is often notoriously hard, and state-of-the-art classifiers already exist for many of them. These classifiers operate on the same raw image and provide correlated outputs. It is desirable to have an algorithm that can capture such correlation without requiring any changes to the inner workings of any classifier. We propose Feedback Enabled Cascaded Classification Models (FE-CCM), which jointly optimize all the subtasks while requiring only a "black box" interface to the original classifier for each subtask. We use a two-layer cascade of classifiers, which are repeated instantiations of the original ones, with the output of the first layer fed into the second layer as input. Our training method involves a feedback step that allows later classifiers to provide earlier classifiers with information about which error modes to focus on. We show that our method significantly improves performance in all the subtasks in the domain of scene understanding, where we consider depth estimation, scene categorization, event categorization, object detection, geometric labeling, and saliency detection. Our method also improves performance in two robotic applications: an object-grasping robot and an object-finding robot.
Pursiainen, S.; Vorwerk, J.; Wolters, C. H.
2016-12-01
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.
Energy Technology Data Exchange (ETDEWEB)
MACKAY, W.W.; LUCCIO, A.U.
2006-06-23
It is important to have symplectic maps for the various electromagnetic elements in an accelerator ring. For some tracking problems we must consider elements which evolve during a ramp. Rather than performing a computationally intensive numerical integration for every turn, it should be possible to integrate the trajectory for a few sets of parameters, and then interpolate the transport map as a function of one or more parameters, such as energy. We present two methods for interpolation of symplectic matrices as a function of parameters: one method is based on the calculation of a representation in terms of a basis of group generators [2, 3] and the other is based on the related but simpler symplectification method of Healy [1]. Both algorithms guarantee a symplectic result.
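The generator-based idea can be illustrated in a deliberately tiny 2x2 setting, where the one-turn map is a rotation by the betatron phase advance and the generator is just that phase. This is only a toy sketch of why interpolating generators preserves symplecticity, not the authors' algorithm; the phase values are invented:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def is_symplectic(M, tol=1e-10):
    """Check the symplectic condition M^T J M = J."""
    return np.allclose(M.T @ J @ M, J, atol=tol)

def rotation(mu):
    """One-turn map of a linear oscillator: a rotation by phase advance
    mu (every rotation is symplectic)."""
    return np.array([[np.cos(mu), np.sin(mu)], [-np.sin(mu), np.cos(mu)]])

# Hypothetical maps computed at two ring energies
M_lo, M_hi = rotation(0.3), rotation(0.7)

# Naive entry-wise averaging destroys symplecticity ...
M_naive = 0.5 * (M_lo + M_hi)
# ... while interpolating the generator (here just the phase) and mapping
# back to the group is exactly symplectic by construction.
M_gen = rotation(0.5 * (0.3 + 0.7))

print(is_symplectic(M_naive), is_symplectic(M_gen))  # False True
```

In higher dimensions the phase is replaced by a full Lie-algebra representation (or Healy's symplectification), but the guarantee has the same shape: interpolate in the algebra, exponentiate back to the group.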
Stankiewicz, Witold; Morzyński, Marek; Kotecki, Krzysztof; Noack, Bernd R.
2017-04-01
We present a low-dimensional Galerkin model with state-dependent modes capturing linear and nonlinear dynamics. The departure point is a direct numerical simulation of the three-dimensional incompressible flow around a sphere at a Reynolds number of 400. This solution starts near the unstable steady Navier-Stokes solution and converges to a periodic limit cycle. The investigated Galerkin models are based on the dynamic mode decomposition (DMD) and derive the dynamical system from first principles, the Navier-Stokes equations. A DMD model with training data from the initial linear transient fails to predict the limit cycle. Conversely, a model from limit-cycle data underpredicts the initial growth rate by roughly a factor of 5. Key enablers for uniform accuracy throughout the transient are a continuous mode interpolation between both oscillatory fluctuations and the addition of a shift mode. This interpolated model is shown to capture both the transient growth of the oscillation and the limit cycle.
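A minimal sketch of exact DMD, the decomposition these Galerkin models are built on, applied to synthetic snapshots of a known linear system rather than the sphere-flow data (matrix and initial state are invented):

```python
import numpy as np

def exact_dmd(X, Xp, r):
    """Exact DMD: eigendecomposition of the rank-r linear operator A
    fitted so that Xp ≈ A X (snapshots stored as columns)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Synthetic snapshots of a known linear system x_{k+1} = A x_k
A = np.array([[0.9, -0.2], [0.2, 0.9]])
x = np.array([1.0, 0.5])
snaps = []
for _ in range(30):
    snaps.append(x)
    x = A @ x
X = np.array(snaps[:-1]).T    # snapshots 0..28
Xp = np.array(snaps[1:]).T    # snapshots 1..29
eigvals, _ = exact_dmd(X, Xp, r=2)
# DMD recovers the eigenvalues of A, here 0.9 ± 0.2i
```

The paper's mode interpolation then blends DMD modes obtained from different operating points (transient vs. limit cycle), which this sketch does not attempt.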
Calibration of a modified Sierra Model 235 slotted cascade impactor
Energy Technology Data Exchange (ETDEWEB)
Knuth, R.H.
1979-07-01
For measurements of ore dust in uranium concentrating mills, a Sierra Model 235 slotted cascade impactor was calibrated at a flow rate of 0.21 m^3/min, using solid monodisperse particles and an impaction surface of Whatman No. 41 filter paper soaked in mineral oil. The reduction from the impactor's design flow rate of 1.13 m^3/min (40 cfm) to 0.21 m^3/min (7.5 cfm) increased the stage cut-off diameters by an average factor of 2.3, a necessary adjustment because of the anticipated large particle sizes of ore dust. The underestimation of mass median diameters, often caused by the rebound and reentrainment of solid particles from dry impaction surfaces, was virtually eliminated by using the oiled Whatman No. 41 impaction surface. Observations of satisfactory performance in the laboratory were verified by tests of the impactor in ore mills.
Modeling Collisional Cascades In Debris Disks: The Numerical Method
Gaspar, Andras; Ozel, Feryal; Rieke, George H; Cooney, Alan
2011-01-01
We develop a new numerical algorithm to model collisional cascades in debris disks. Because of the large dynamical range in particle masses, we solve the integro-differential equations describing erosive and catastrophic collisions in a particle-in-a-box approach, while treating the orbital dynamics of the particles in an approximate fashion. We employ a new scheme for describing erosive (cratering) collisions that yields a continuous set of outcomes as a function of colliding masses. We demonstrate the stability and convergence characteristics of our algorithm and compare it with other treatments. We show that incorporating the effects of erosive collisions results in a decay of the particle distribution that is significantly faster than with purely catastrophic collisions.
MODELING COLLISIONAL CASCADES IN DEBRIS DISKS: THE NUMERICAL METHOD
Energy Technology Data Exchange (ETDEWEB)
Gaspar, Andras; Psaltis, Dimitrios; Oezel, Feryal; Rieke, George H.; Cooney, Alan, E-mail: agaspar@as.arizona.edu, E-mail: dpsaltis@as.arizona.edu, E-mail: fozel@as.arizona.edu, E-mail: grieke@as.arizona.edu, E-mail: acooney@physics.arizona.edu [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States)
2012-04-10
We develop a new numerical algorithm to model collisional cascades in debris disks. Because of the large dynamical range in particle masses, we solve the integro-differential equations describing erosive and catastrophic collisions in a particle-in-a-box approach, while treating the orbital dynamics of the particles in an approximate fashion. We employ a new scheme for describing erosive (cratering) collisions that yields a continuous set of outcomes as a function of colliding masses. We demonstrate the stability and convergence characteristics of our algorithm and compare it with other treatments. We show that incorporating the effects of erosive collisions results in a decay of the particle distribution that is significantly faster than with purely catastrophic collisions.
Directory of Open Access Journals (Sweden)
R. Khosravi
2014-09-01
Full Text Available Climatic change can impose physiological constraints on species and can therefore affect species distributions. Bioclimatic predictors, including annual trends, regimes, thresholds and bio-limiting factors, are the most important independent variables in species distribution models. Water and temperature are the most limiting factors in the arid ecosystems of central Iran; mapping of climatic factors for species distribution models therefore seems necessary. In this study, we describe the extraction of 20 important bioclimatic variables from climatic data and compare different interpolation methods, including inverse distance weighting (IDW), ordinary kriging, kriging with external trend, cokriging, and five radial basis functions. Normal climatic data (1950-2010) from 26 synoptic stations in central Iran were used to extract the bioclimatic data. Spatial correlation, heterogeneity and trend in the data were evaluated using three semivariogram models (spherical, exponential and Gaussian), and the best model was selected using cross-validation. The optimum model for the bioclimatic variables was assessed based on the root mean square error and mean bias error. The exponential model was found to be the best-fitting mathematical model for the empirical semivariogram. IDW and cokriging were recognised as the best interpolation methods for average annual temperature and annual precipitation, respectively. The use of elevation as an auxiliary variable appeared to be necessary for optimizing the interpolation of climatic and bioclimatic variables.
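Of the interpolation methods compared above, inverse distance weighting is the simplest to state: each sample is weighted by an inverse power of its distance to the query point. A minimal sketch; the station coordinates and temperatures are invented for illustration:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weight each sample by 1/d**power,
    then normalise the weights per query point."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power   # eps guards queries on top of a station
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_known

# Hypothetical stations (x, y) with annual mean temperature (°C)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temp = np.array([10.0, 12.0, 14.0, 16.0])
print(idw(pts, temp, np.array([[0.5, 0.5]])))  # → [13.] (equidistant point)
```

The exponent `power` controls how local the estimate is; larger values make the surface hug the nearest station more tightly.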
Interpolation and partial differential equations
MALIGRANDA, Lech; Persson, Lars-Erik; Wyller, John
1994-01-01
One of the main motivations for developing the theory of interpolation was to apply it to the theory of partial differential equations (PDEs). Nowadays interpolation theory has been developed in an almost unbelievable way {see the bibliography of Maligranda [Interpolation of Operators and Applications (1926-1990), 2nd ed. (Luleå University, Luleå, 1993), p. 154]}. In this article some model examples are presented which display how powerful this theory is when dealing with PDEs. One main aim i...
Testing the inhibitory cascade model in Mesozoic and Cenozoic mammaliaforms
2013-01-01
Background: Much of the current research in the growing field of evolutionary development concerns relating developmental pathways to large-scale patterns of morphological evolution, with developmental constraints on variation, and hence diversity, a field of particular interest. Tooth morphology offers an excellent model system for such 'evo-devo' studies, because teeth are well preserved in the fossil record, and are commonly used in phylogenetic analyses and as ecological proxies. Moreover, tooth development is relatively well studied, and has provided several testable hypotheses of developmental influences on macroevolutionary patterns. The recently described Inhibitory Cascade (IC) Model provides just such a hypothesis for mammalian lower molar evolution. Derived from experimental data, the IC Model suggests that a balance between mesenchymal activators and molar-derived inhibitors determines the size of the immediately posterior molar, predicting firstly that molars either decrease in size along the tooth row, or increase in size, or are all of equal size, and secondly that the second lower molar should occupy one third of the lower molar area. Here, we tested the IC Model in a large selection of taxa from diverse extant and fossil mammalian groups, ranging from the Middle Jurassic (~176 to 161 Ma) to the Recent. Results: Most taxa (~65%) fell within the predicted areas of the Inhibitory Cascade Model. However, members of several extinct groups fell into the regions where m2 was largest or, rarely, smallest, including the majority of the polyphyletic "condylarths". Most Mesozoic mammals fell near the centre of the space, with equality of size in all three molars. The distribution of taxa was significantly clustered by diet and by phylogenetic group. Conclusions: Overall, the IC Model was supported as a plesiomorphic developmental system for Mammalia, suggesting that mammal tooth size has been subjected to this developmental constraint at
A cascaded neuro-computational model for spoken word recognition
Hoya, Tetsuya; van Leeuwen, Cees
2010-03-01
In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
A simple model of global cascades on random networks
Watts, Duncan J.
2002-04-01
The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades (herein called global cascades) that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
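The threshold rule driving the model can be simulated directly. A minimal brute-force sketch on an Erdős–Rényi random graph (the parameter values are illustrative, and this ignores the paper's analytic cascade-condition machinery):

```python
import random

def watts_cascade(n, z, phi, seed=1):
    """Threshold cascade on an Erdős–Rényi graph with mean degree z.
    A node activates once a fraction >= phi of its neighbours is active.
    Returns the final active fraction after a single-node initial shock."""
    rng = random.Random(seed)
    p = z / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    active = [False] * n
    active[0] = True                      # the small initial shock
    changed = True
    while changed:                        # sweep until a fixed point
        changed = False
        for i in range(n):
            if not active[i] and nbrs[i]:
                frac = sum(active[j] for j in nbrs[i]) / len(nbrs[i])
                if frac >= phi:
                    active[i] = True
                    changed = True
    return sum(active) / n

print(watts_cascade(500, 4.0, 0.18))
```

Because the update rule is monotone, lowering the threshold phi on the same graph can only enlarge the final active set, which makes the model convenient to explore numerically.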
Hybrid Model for Cascading Outage in a Power System: A Numerical Study
Susuki, Yoshihiko; Takatsuji, Yu; Hikihara, Takashi
2009-01-01
Analysis of cascading outages in power systems is important for understanding why large blackouts emerge and how to prevent them. Cascading outages are complex dynamics of power systems, and one cause of them is the interaction between swing dynamics of synchronous machines and protection operation of relays and circuit breakers. This paper uses hybrid dynamical systems as a mathematical model for cascading outages caused by the interaction. Hybrid dynamical systems can combine families of fl...
Decision-making model for risk management of cascade hydropower stations
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
In a medium-term electricity market, in order to reduce the risks of price and inflow uncertainties, the cascade hydropower stations may use options contracts with electricity supply companies. A profit-based model for risk management of cascade hydropower stations in the medium-term electricity market is presented. The objective function is profit maximization of the cascade hydropower stations. In order to avoid the risks of price and inflow uncertainties, two different risk-aversion constraints: a minimum ...
Developmental Cascade Model for Adolescent Substance Use from Infancy to Late Adolescence
Eiden, Rina D.; Lessard, Jared; Colder, Craig R.; Livingston, Jennifer; Casey, Meghan; Leonard, Kenneth E.
2016-01-01
A developmental cascade model for adolescent substance use beginning in infancy was examined in a sample of children with alcoholic and nonalcoholic parents. The model examined the role of parents' alcohol diagnoses, depression and antisocial behavior in a cascading process of risk via 3 major hypothesized pathways: first, via parental…
Discontinuous Transition of a Multistage Independent Cascade Model on Networks
Hasegawa, Takehisa
2012-01-01
We study a multistage independent cascade (MIC) model in complex networks. This model is parameterized by two probabilities: T1 is the probability that a node adopting a fad increases the awareness of a neighboring susceptible node until it abandons the fad, and T2 is the probability that an adopter directly causes a susceptible node to adopt the fad. We formulate a framework of tree approximation for the MIC model on an uncorrelated network with an arbitrary given degree distribution. As an application, we study this model on a random regular network with degree k=6 to show that it has a rich phase diagram including continuous and discontinuous transition lines for the percolation of fads as well as a continuous transition line for the percolation of susceptible nodes. In particular, the percolation transition of fads is discontinuous (continuous) when T1 is larger (smaller) than a certain value. Furthermore, the phase boundaries drastically change by assigning a finite fraction of initial adopters. We discu...
Improved Ternary Subdivision Interpolation Scheme
Institute of Scientific and Technical Information of China (English)
WANG Huawei; QIN Kaihuai
2005-01-01
An improved ternary subdivision interpolation scheme that, unlike the previous ternary scheme, can manipulate open control polygons was developed for computer graphics applications, and the resulting curve is proved to still be C2-continuous. Parameterizations of the limit curve near the two endpoints are given, with expressions for the boundary derivatives. The split-joint problem is handled with the interpolating ternary subdivision scheme. The improved scheme can be used for modeling interpolation curves in computer-aided geometric design systems, and provides a method for joining two limit curves of interpolating ternary subdivisions.
Beguerisse-Díaz, Mariano; Desikan, Radhika; Barahona, Mauricio
2016-08-01
Cellular signal transduction usually involves activation cascades, the sequential activation of a series of proteins following the reception of an input signal. Here, we study the classic model of weakly activated cascades and obtain analytical solutions for a variety of inputs. We show that in the special but important case of optimal gain cascades (i.e. when the deactivation rates are identical) the downstream output of the cascade can be represented exactly as a lumped nonlinear module containing an incomplete gamma function with real parameters that depend on the rates and length of the cascade, as well as parameters of the input signal. The expressions obtained can be applied to the non-identical case when the deactivation rates are random to capture the variability in the cascade outputs. We also show that cascades can be rearranged so that blocks with similar rates can be lumped and represented through our nonlinear modules. Our results can be used both to represent cascades in computational models of differential equations and to fit data efficiently, by reducing the number of equations and parameters involved. In particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we show how the obtained nonlinear modules can be used instead of delay differential equations to model delays in signal transduction.
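For the identical-rate case, the step response of stage n reduces to the regularised lower incomplete gamma function P(n, kt), which for integer n is a finite sum. The sketch below checks that closed form against a brute-force integration of the cascade ODEs; the rates, cascade length and normalisation (unit steady state) are illustrative, not the paper's parametrisation:

```python
import math

def cascade_step_response(n, k, t):
    """Analytic output of stage n of a weakly activated cascade with
    identical deactivation rates k, driven by a unit step input and
    normalised so the steady state is 1.  For integer n this equals the
    regularised lower incomplete gamma function P(n, k t)."""
    s = sum((k * t) ** m / math.factorial(m) for m in range(n))
    return 1.0 - math.exp(-k * t) * s

def cascade_ode(n, k, t, steps=20000):
    """Brute-force Euler integration of dx_i/dt = k*x_{i-1} - k*x_i
    with x_0 = 1 held fixed (the step input)."""
    dt = t / steps
    x = [0.0] * (n + 1)
    x[0] = 1.0
    for _ in range(steps):
        for i in range(n, 0, -1):      # downward sweep keeps it explicit
            x[i] += dt * k * (x[i - 1] - x[i])
    return x[n]

print(cascade_step_response(3, 1.0, 2.0))  # ≈ cascade_ode(3, 1.0, 2.0)
```

This is the practical payoff described in the abstract: the whole chain collapses to one closed-form module, so the cascade length n can be fitted as a real-valued parameter instead of integrating n coupled equations.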
Numerical Physical Mechanism and Model of Turbulent Cascades in a Barotropic Atmosphere
Institute of Scientific and Technical Information of China (English)
黄锋; 刘式适
2004-01-01
In a barotropic atmosphere, new Reynolds mean momentum equations including turbulent viscosity, dispersion, and instability are used not only to derive the KdV-Burgers-Kuramoto equation but also to analyze the physical mechanism of the cascades of energy and enstrophy. It is shown that it is the effects of dispersion and instability that result in the inverse cascade. Then, based on the conservation laws of energy and enstrophy, a cascade model is put forward and the processes of the cascades are described.
Parsa, Mohammad; Maghsoudi, Abbas; Yousefi, Mahyar; Carranza, Emmanuel John M.
2017-04-01
The spectrum-area (S-A) fractal model is a powerful tool for decomposition of complex anomaly patterns of gridded geochemical data. Ordinary moving average interpolation techniques are commonly used for gridding geochemical data; however, these methods suffer from two major drawbacks: (1) they ignore locally high values and (2) they smooth the interpolated surface. Multifractal moving average interpolation methods have been developed to overcome these shortcomings. This study compares two sets of multifractal and ordinary gridded geochemical data using success-rate curves and applies the S-A fractal model to decompose anomalous geochemical patterns. A set of stream sediment geochemical data from the Ahar area, NW Iran, was used as a case study. A mineralization-related multi-element geochemical signature was gridded by ordinary and multifractal approaches and considered for further analyses. The S-A fractal method was applied to decompose the anomaly and background components of the resultant multi-element geochemical signature. Exploration targets were delimited and further evaluated using two bivariate statistical procedures, Student's t-value and the normalized density index. The results revealed that (a) application of multifractal gridded data enhances the predictive ability of geochemical signatures, (b) application of the S-A fractal model to multifractal gridded data allows superior discrimination of geochemical anomalies, and (c) the multi-element geochemical anomalies in the Ahar area related to porphyry-Cu deposits were properly delineated through the sequential application of multifractal interpolation and the S-A fractal model.
A robust interpolation method for constructing digital elevation models from remote sensing data
Chen, Chuanfa; Liu, Fengying; Li, Yanyan; Yan, Changqing; Liu, Guolin
2016-09-01
A digital elevation model (DEM) derived from remote sensing data often suffers from outliers due to various reasons, such as the physical limitations of sensors and the low contrast of terrain textures. In order to reduce the effect of outliers on DEM construction, a robust algorithm of multiquadric (MQ) methodology based on M-estimators (MQ-M) was proposed. MQ-M adopts a three-part adaptive weight function: the weight is null for large errors, one for small errors and quadratic for the others. A mathematical surface was employed to comparatively analyze the robustness of MQ-M, and its performance was compared with those of the classical MQ and a recently developed robust MQ method based on least absolute deviation (MQ-L). Numerical tests show that MQ-M is comparable to the classical MQ and superior to MQ-L when sample points follow normal and Laplace distributions, and in the presence of outliers the former is more accurate than the latter. A real-world example of DEM construction using stereo images indicates that, compared with classical interpolation methods such as natural neighbor (NN), ordinary kriging (OK), ANUDEM, MQ-L and MQ, MQ-M has a better ability to preserve subtle terrain features. MQ-M was also substituted for the thin plate spline (TPS) in reference DEM construction to assess its contribution to our recently developed multiresolution hierarchical classification method (MHC). Classifying the 15 groups of benchmark datasets provided by the ISPRS Commission demonstrates that MQ-M-based MHC is more accurate than MQ-L-based and TPS-based MHCs. MQ-M has high potential for DEM construction.
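Classical multiquadric interpolation, the starting point that MQ-M robustifies, fits a weighted sum of sqrt(r^2 + c^2) kernels exactly through the samples. A minimal sketch; the sample points, elevations and the shape parameter c are invented, and no M-estimator reweighting is attempted here:

```python
import numpy as np

def mq_interpolate(xy, z, xy_query, c=0.5):
    """Classical multiquadric (MQ) interpolation: solve for weights w so
    that sum_j w_j * sqrt(|x - x_j|^2 + c^2) reproduces every sample."""
    def kernel(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.sqrt(d ** 2 + c ** 2)
    w = np.linalg.solve(kernel(xy, xy), z)   # interpolation matrix is nonsingular
    return kernel(xy_query, xy) @ w

# Hypothetical elevation samples (x, y) -> z
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
elev = np.array([100.0, 110.0, 105.0, 120.0, 112.0])
# Exact at the sample points, smooth in between:
assert np.allclose(mq_interpolate(pts, elev, pts), elev)
```

The robust MQ-M variant replaces this exact fit with an iteratively reweighted one, so that points flagged as outliers by the three-part weight function stop influencing the surface.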
Korez, Robert; Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2015-08-01
Automated and semi-automated detection and segmentation of spinal and vertebral structures from computed tomography (CT) images is a challenging task due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other, as well as due to insufficient image spatial resolution, partial volume effects, presence of image artifacts, intensity variations and low signal-to-noise ratio. In this paper, we describe a novel framework for automated spine and vertebrae detection and segmentation from 3-D CT images. A novel optimization technique based on interpolation theory is applied to detect the location of the whole spine in the 3-D image and, using the obtained location of the whole spine, to further detect the location of individual vertebrae within the spinal column. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is performed by an improved shape-constrained deformable model approach. The framework was evaluated on two publicly available CT spine image databases of 50 lumbar and 170 thoracolumbar vertebrae. Quantitative comparison against corresponding reference vertebra segmentations yielded an overall mean centroid-to-centroid distance of 1.1 mm and Dice coefficient of 83.6% for vertebra detection, and an overall mean symmetric surface distance of 0.3 mm and Dice coefficient of 94.6% for vertebra segmentation. The results indicate that by applying the proposed automated detection and segmentation framework, vertebrae can be successfully detected and accurately segmented in 3-D from CT spine images.
Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh
2011-12-01
Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers.
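The core idea of an interpolated Markov model, combining conditional probabilities of several orders so that short contexts back up sparse long ones, can be sketched as follows. This toy averages the orders with equal weights and uses add-one smoothing, whereas the actual method weights orders by data support and also exploits orthologous CRMs; all names are invented:

```python
import numpy as np
from collections import defaultdict

def train_counts(seqs, max_order):
    """k-mer counts up to length max_order + 1 from training sequences."""
    counts = defaultdict(int)
    for s in seqs:
        for k in range(1, max_order + 2):
            for i in range(len(s) - k + 1):
                counts[s[i:i + k]] += 1
    return counts

def imm_logprob(seq, counts, max_order, alpha=1.0):
    """Log-probability of seq under a crudely interpolated Markov model:
    at each position, average the add-one-smoothed conditional
    probabilities of orders 0..max_order (an illustrative simplification
    of real IMM interpolation weights)."""
    logp = 0.0
    for i in range(len(seq)):
        probs = []
        for k in range(0, max_order + 1):
            if i < k:
                continue
            ctx = seq[i - k:i]
            num = counts.get(ctx + seq[i], 0) + alpha
            den = sum(counts.get(ctx + b, 0) + alpha for b in "ACGT")
            probs.append(num / den)
        logp += np.log(np.mean(probs))
    return logp
```

Scoring candidate windows with such a model, trained on known CRMs, is the motif-blind step: no TF motif is ever consulted, only the word-composition profile of the training set.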
Mesoscopic Modeling of Blood Clotting: Coagulation Cascade and Platelets Adhesion
Yazdani, Alireza; Li, Zhen; Karniadakis, George
2015-11-01
The process of clot formation and growth at a site on a blood vessel wall involves a number of simultaneous multi-scale processes, including multiple chemical reactions in the coagulation cascade, species transport and flow. To model these processes we have incorporated advection-diffusion-reaction (ADR) of multiple species into an extended version of the Dissipative Particle Dynamics (DPD) method, a coarse-grained molecular dynamics method. At the continuum level this is equivalent to the Navier-Stokes equations plus one advection-diffusion equation for each species. The chemistry of clot formation is now understood to be determined by mechanisms involving reactions among many species in dilute solution, for which reaction rate constants and species diffusion coefficients in plasma are known. The role of blood particulates, i.e. red cells and platelets, in the clotting process is studied by including them separately and together in the simulations. An agonist-induced platelet activation mechanism is presented, and platelet adhesive dynamics, based on a stochastic bond formation/dissociation process, is included in the model.
Interpolating function and Stokes Phenomena
Honda, Masazumi
2015-01-01
When we have two expansions of a physical quantity around two different points in parameter space, we can usually construct a family of functions that interpolates both expansions. In this paper we study the analytic structures of such interpolating functions and discuss their physical implications. We propose that the analytic structures of the interpolating functions provide information on the analytic properties and Stokes phenomena of the physical quantity that they approximate. We explicitly check our proposal for partition functions of the zero-dimensional $\\varphi^4$ theory and the Sine-Gordon model. In the zero-dimensional Sine-Gordon model, we compare our result with a recent result from resurgence analysis. We also comment on the construction of interpolating functions in the Borel plane.
Hybrid Model for Cascading Outage in a Power System: A Numerical Study
Susuki, Yoshihiko; Takatsuji, Yu; Hikihara, Takashi
Analysis of cascading outages in power systems is important for understanding why large blackouts emerge and how to prevent them. Cascading outages are complex dynamics of power systems, and one of their causes is the interaction between the swing dynamics of synchronous machines and the protection operation of relays and circuit breakers. This paper uses hybrid dynamical systems as a mathematical model for cascading outages caused by this interaction. Hybrid dynamical systems can combine families of flows describing swing dynamics with switching rules based on protection operation. This paper refers to data on a cascading outage in the September 2003 blackout in Italy and presents a hybrid dynamical system whose reproduced propagation of outages is consistent with the data. This result suggests that hybrid dynamical systems can provide an effective model for the analysis of cascading outages in power systems.
Energy Technology Data Exchange (ETDEWEB)
Mooibroek, D.; Hoogerbrugge, R. [Centrum voor Milieukwaliteit, Rijksinstituut voor Volksgezondheid en Milieu RIVM, Bilthoven (Netherlands)
2013-12-15
In Belgium, an interpolation technique has been developed that takes into account the local character of air pollution: the RIO interpolation technique. The RIO interpolation model is used in Belgium to inform the public about the current air quality. Real-time RIO air quality maps for Belgium are published on the website of IRCEL (www.irceline.be). In 2009, the RIO interpolation method was extended by VITO, commissioned by the RIVM, and adjusted so that the model could also be applied in the Netherlands. This article gives a brief introduction to this new interpolation technique for the Netherlands and compares its performance with that of the INTERPOL method used until now.
Chen, G; de Figueiredo, R P
1993-01-01
The unified approach presented here for optimal image interpolation problems provides a constructive procedure for finding explicit, closed-form optimal solutions when the interpolation is either spatial or temporal-spatial. The unknown image is reconstructed from a finite set of sampled data such that a mean-square error is minimized, by first expressing the solution in terms of the reproducing kernel of a related Hilbert space, and then constructing this kernel using the fundamental solution of an induced linear partial differential equation, or the Green's function of the corresponding self-adjoint operator. It is proved that in most cases closed-form fundamental solutions (or Green's functions) can be found for the corresponding linear partial differential operators in the general image reconstruction problem described by a first- or second-order linear partial differential operator. An efficient method for obtaining these closed-form fundamental solutions (or Green's functions) is presented. A computer simulation demonstrates the reconstruction procedure.
DESIGN OF A NEW INTERPOLATED CONTROLLER FOR STABILIZATION OF A SET OF INTERPOLATED PLANTS
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Stabilization of a plant with variable operating conditions was considered. The plant is assumed to lie in a set of interpolated models composed of all interpolations generated between certain sets of proper stable coprime factorizations of transfer functions of two representative models that are defined at two representative operating points. An interpolated controller that is linear interpolation of coprime factorizations of two stabilizing controllers for the two representative models is designed to stabilize this set of interpolated models. Design of such an interpolated controller was converted to a feasibility problem constrained by several LMIs and a BMI, and a two-step iteration algorithm was employed to solve it.
Robustness and perturbation in the modeled cascade heart rate variability
Lin, D. C.
2003-03-01
In this study, numerical experiments are conducted to examine the robustness of using cascade to describe the multifractal heart rate variability (HRV) by perturbing the hierarchical time scale structure and the multiplicative rule of the cascade. It is shown that a rigid structure of the multiple time scales is not essential for the multifractal scaling in healthy HRV. So long as there exists a tree structure for the multiplication to take place, a multifractal HRV and related properties can be captured by using the cascade. But the perturbation of the multiplicative rule can lead to a qualitative change. In particular, a multifractal to monofractal HRV transition can result after the product law is perturbed to an additive one at the fast time scale. We suggest that this explains the similar HRV scaling transition in the parasympathetic nervous system blockade.
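The multiplicative cascade underlying this kind of model can be illustrated with a generic dyadic binomial cascade (a textbook construction, not the paper's calibrated HRV model):

```python
import numpy as np

def multiplicative_cascade(levels, w=0.7, rng=None):
    """Dyadic multiplicative cascade: at each level every interval
    splits in two and the weight pair (w, 1 - w) is assigned to the
    halves in random order, so mass is conserved at every split."""
    rng = rng or np.random.default_rng(0)
    series = np.array([1.0])
    for _ in range(levels):
        flip = rng.random(series.size) < 0.5
        left_w = np.where(flip, w, 1.0 - w)
        left = series * left_w
        right = series * (1.0 - left_w)
        # interleave children so the output keeps its spatial order
        series = np.column_stack([left, right]).ravel()
    return series
```

Replacing the multiplication by addition at the finest levels, the perturbation studied here, destroys the multiplicative hierarchy, which is what drives the reported multifractal-to-monofractal transition.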
Piecewise-polynomial and cascade models of predistorter for linearization of power amplifier
2012-01-01
To combat non-linear signal distortions in a power amplifier, we suggest using a predistorter with a cascade structure in which the first and second nodes have piecewise-polynomial and polynomial models, respectively. Using the example of linearizing a Wiener–Hammerstein amplifier model, we demonstrate that the cascade structure of the predistorter improves the precision of the amplifier's linearization. To simplify the predistorter's synthesis, the degree of the polynomial model used in the first node should be moderate, while precision should be i...
Bikić Siniša M.; Uzelac Dušan N.; Bukurov Maša Ž.; Radojčin Milivoj T.; Pavkov Ivan S.
2016-01-01
This paper is focused on the mathematical model of the Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, position of the damper blade and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single...
Transfinite thin plate spline interpolation
Bejancu, Aurelian
2009-01-01
Duchon's method of thin plate splines defines a polyharmonic interpolant to scattered data values as the minimizer of a certain integral functional. For transfinite interpolation, i.e. interpolation of continuous data prescribed on curves or hypersurfaces, Kounchev has developed the method of polysplines, which are piecewise polyharmonic functions of fixed smoothness across the given hypersurfaces and satisfy some boundary conditions. Recently, Bejancu has introduced boundary conditions of Beppo Levi type to construct a semi-cardinal model for polyspline interpolation to data on an infinite set of parallel hyperplanes. The present paper proves that, for periodic data on a finite set of parallel hyperplanes, the polyspline interpolant satisfying Beppo Levi boundary conditions is in fact a thin plate spline, i.e. it minimizes a Duchon type functional.
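For scattered data, the classical setting that transfinite interpolation generalizes, Duchon's 2-D thin plate spline can be sketched as follows (a minimal dense solver on the r² log r kernel plus an affine part; no regularization, names illustrative):

```python
import numpy as np

def thin_plate_spline(points, values):
    """Fit a 2-D thin plate spline interpolant (minimizer of Duchon's
    bending-energy functional) to scattered data; returns an evaluator."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)   # r^2 log r kernel
    P = np.hstack([np.ones((n, 1)), pts])            # affine (polynomial) part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T      # side conditions P.T w = 0
    rhs = np.concatenate([np.asarray(values, float), np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    w, a = coef[:n], coef[n:]

    def evaluate(x):
        x = np.atleast_2d(np.asarray(x, float))
        r = np.linalg.norm(x[:, None, :] - pts[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            phi = np.where(r > 0, r**2 * np.log(r), 0.0)
        return phi @ w + a[0] + x @ a[1:]
    return evaluate
```

The paper's result says that for periodic data on parallel hyperplanes, the Beppo Levi polyspline interpolant coincides with exactly this kind of minimizer.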
Establishment and evaluation of operation function model for cascade hydropower station
Directory of Open Access Journals (Sweden)
Chang-ming JI
2010-12-01
Full Text Available To address the actual operation problems of cascade hydropower stations under hydrologic uncertainty, this paper presents a process for extracting statistical characteristics from long-term optimal cascade operation, and proposes a monthly operation function algorithm for the actual operation of cascade hydropower stations through the identification, processing, and screening of the information available during long-term optimal operation. Applying the operation function to the cascade hydropower stations on the Jinshajiang-Yangtze River system, the modeled long-term power generation is shown to be highly accurate and beneficial. Comparison with optimal operation shows that the proposed operation function retains the characteristics of optimal operation. The inadequacies of the algorithm and their causes are also discussed based on the case study, providing decision support and reference information for research on large-scale cascade operation.
Short term forecasting of surface layer wind speed using a continuous cascade model
Baile, Rachel; Poggi, Philippe
2010-01-01
This paper describes a statistical method for short-term forecasting of surface-layer wind velocity amplitude relying on the notion of continuous cascades. Inspired by recent empirical findings that suggest the existence of a cascading process in the mesoscale range, we consider that wind speed can be described by a seasonal component and a fluctuating part represented by a "multifractal noise" associated with a random cascade. The performance of our model is tested on hourly wind speed series gathered at various locations in Corsica (France) and the Netherlands. The obtained results show a systematic improvement of the prediction compared to reference models such as persistence or artificial neural networks.
The theoretical development of the cascade model excimer laser irradiation on the organ of vision
Directory of Open Access Journals (Sweden)
V. N. Trubilin
2012-01-01
Full Text Available The authors analyzed the baseline (Pershin K. B., 2000) and advanced (Trubilin V. N., Pozharitskii M. D., 2011) theoretical models of cascade excimer laser effects on eyesight. The analysis indicates a lack of elaboration of issues related to the cascade of «a priori measures» aimed at the medical and psychological prediction of «quality of life» and post-operative rehabilitation. The authors suggest further theoretical improvement of the cascade model of excimer laser irradiation on the organ of vision. The proposed theoretical concepts will, as a practical matter, improve the functional and subjective results of excimer laser correction of refractive errors.
Information cascade, Kirman's ant colony model, and kinetic Ising model
Hisakado, Masato
2014-01-01
In this paper, we discuss a voting model in which voters can obtain information from a finite number of previous voters. There exist three groups of voters: (i) digital herders and independent voters, (ii) analog herders and independent voters, and (iii) tanh-type herders. In our previous paper, we used the mean field approximation for case (i). In that study, if the reference number r is above three, a phase transition occurs and the solution converges to one of the equilibria. In contrast, in the current study, the solution oscillates between the two equilibria, that is, the good and bad equilibria. In this paper, we show that there is no phase transition when r is finite. If the annealing schedule is adequately slow from finite r to infinite r, the voting rate converges only to the good equilibrium. In case (ii), the state of reference votes is equivalent to that of Kirman's ant colony model, and it follows a beta-binomial distribution. In case (iii), we show that the model is equivalent to the finite-size kinetic...
Directory of Open Access Journals (Sweden)
S. Ly
2011-07-01
Full Text Available Spatial interpolation of precipitation data is of great importance for hydrological modelling. Geostatistical methods (kriging) are widely applied for spatial interpolation from point measurements to continuous surfaces. The first step in kriging computation is semi-variogram modelling, which usually uses only one variogram model for all the data. The objective of this paper was to develop different algorithms of spatial interpolation for daily rainfall on 1 km² regular grids in the catchment area and to compare the results of geostatistical and deterministic approaches. This study relied on 30 years of daily rainfall data from 70 raingages in the hilly landscape of the Ourthe and Ambleve catchments in Belgium (2908 km²). This area lies between 35 and 693 m in elevation and consists of river networks that are tributaries of the Meuse River. For the geostatistical algorithms, seven semi-variogram models (logarithmic, power, exponential, Gaussian, rational quadratic, spherical and penta-spherical) were fitted to the daily sample semi-variogram on a daily basis. These seven variogram models were also adopted to avoid negative interpolated rainfall. The elevation, extracted from a digital elevation model, was incorporated into multivariate geostatistics. Seven validation raingages and cross validation were used to compare the interpolation performance of these algorithms applied to different densities of raingages. We found that, of the seven variogram models used, the Gaussian model was most frequently the best fit. Using seven variogram models can avoid negative daily rainfall in ordinary kriging. Negative kriging estimates were observed for convective more than for stratiform rain. The performance of the different methods varied slightly according to the density of raingages, particularly between 8 and 70 raingages, but was much different for interpolation using 4 raingages. Spatial interpolation with the geostatistical and
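The first kriging step mentioned above, fitting a variogram model to the daily sample semi-variogram, can be sketched as follows (a generic illustration, not the paper's procedure; the Gaussian form is one of the seven models listed). A least-squares fit of the model to the returned lag/semivariance pairs could then be done with, e.g., scipy.optimize.curve_fit:

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Sample semivariogram: half the mean squared difference of the
    field over all point pairs, grouped into distance bins (empty bins
    are dropped)."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(coords), k=1)
    h = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    idx = np.digitize(h, bin_edges)
    lags, gamma = [], []
    for k in range(1, len(bin_edges)):
        mask = idx == k
        if mask.any():
            lags.append(h[mask].mean())
            gamma.append(sq[mask].mean())
    return np.array(lags), np.array(gamma)

def gaussian_variogram(h, nugget, sill, corr_range):
    """Gaussian variogram model, one of the seven forms fitted in the study."""
    return nugget + sill * (1.0 - np.exp(-(h / corr_range) ** 2))
```

For a spatially correlated field the semivariance rises with lag toward the sill, which is what the fitted model curve summarizes for the kriging weights.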
Modeling self-sustained activity cascades in socio-technical networks
Piedrahíta, Pablo; Moreno, Yamir; Arenas, Alex
2013-01-01
The ability to understand and eventually predict the emergence of information and activation cascades in social networks is core to complex socio-technical systems research. However, the complexity of social interactions makes this a challenging enterprise. Previous works on cascade models assume that the emergence of this collective phenomenon is related to the activity observed in the local neighborhood of individuals, but do not consider what determines the willingness to spread information in a time-varying process. Here we present a mechanistic model that accounts for the temporal evolution of the individual state in a simplified setup. We model the activity of the individuals as a complex network of interacting integrate-and-fire oscillators. The model reproduces the statistical characteristics of the cascades in real systems, and provides a framework to study time-evolution of cascades in a state-dependent activity scenario.
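A toy version of such a network of integrate-and-fire units can be sketched as follows; the dynamics and all parameter values are illustrative simplifications, not the paper's model:

```python
import numpy as np

def simulate_cascades(adj, steps, drive=0.02, coupling=0.35,
                      threshold=1.0, rng=None):
    """Toy integrate-and-fire network: each node's state grows by a
    small drive; on crossing the threshold it fires, resets, and pushes
    charge onto its neighbors, possibly triggering an avalanche.
    Returns the cascade size at each step (each node fires at most
    once per step)."""
    rng = rng or np.random.default_rng(1)
    adj = np.asarray(adj, float)
    n = adj.shape[0]
    state = rng.random(n) * threshold
    sizes = []
    for _ in range(steps):
        state += drive
        fired_total = np.zeros(n, dtype=bool)
        fired = state >= threshold
        while fired.any():
            fired_total |= fired
            state[fired] -= threshold            # reset by subtraction
            state += coupling * (adj @ fired)    # push charge to neighbors
            fired = (state >= threshold) & ~fired_total
        sizes.append(int(fired_total.sum()))
    return sizes
```

Histogramming the returned sizes is the usual way to compare such a model's cascade statistics against empirical activity data.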
Impedance Interaction Modeling and Analysis for Bidirectional Cascaded Converters
DEFF Research Database (Denmark)
Tian, Yanjun; Deng, Fujin; Chen, Zhe;
2015-01-01
more uncertainty to the system stability. An investigation is performed here for showing that the forward and reverse interactions are prominently different in terms of dynamics and stability even though the cascaded converter control remains unchanged. An important guideline has been drawn...
Fang, Yiping; Pedroni, Nicola; Zio, Enrico
2015-04-01
Large-scale outages on real-world critical infrastructures, although infrequent, are increasingly disastrous to our society. In this article, we are primarily concerned with power transmission networks, and we consider the problem of allocating generation to distributors by rewiring links under the objectives of maximizing network resilience to cascading failure and minimizing investment costs. The combinatorial multiobjective optimization is carried out by a nondominated sorting binary differential evolution (NSBDE) algorithm. For each generators-distributors connection pattern considered in the NSBDE search, a computationally cheap, topological model of failure cascading in a complex network (the Motter-Lai [ML] model) is used to simulate and quantify network resilience to cascading failures initiated by targeted attacks. The results on the 400 kV French power transmission network case study show that the proposed method allows us to identify optimal patterns of generators-distributors connection that improve cascading resilience at an acceptable cost. To verify the realistic character of the results obtained by the NSBDE with the embedded ML topological model, a more realistic but also more computationally expensive model of cascading failures is adopted, based on optimal power flow (namely, the ORNL-PSerc-Alaska model). The consistent results between the two models provide impetus for the use of topological, complex network theory models for the analysis and optimization of large infrastructures against cascading failure, with the advantages of simplicity, scalability, and low computational cost.
Testing bedrock incision models: Holocene channel evolution, High Cascades, Oregon
Sweeney, K. E.; Roering, J. J.; Fonstad, M. A.
2013-12-01
There is abundant field evidence that sediment supply controls the incision of bedrock channels by both protecting the bed from incision and providing tools to incise the bed. Despite several theoretical models for sediment-dependent bedrock abrasion, many investigations of natural channel response to climatic, lithologic, or tectonic forcing rely on the stream power model, which does not consider the role of sediment. Here, we use a well-constrained fluvial channel cut into a Holocene lava flow in the High Cascades, Oregon to compare incision predictions of the stream power model and of the full physics of theoretical models for saltation-abrasion incision by bedload and suspended load. The blocky andesite of Collier lava flow erupted from Collier Cone ~1500 years ago, paving over the existing landscape and erasing fine-scale landscape dissection. Since the eruption, a 6 km stream channel has been incised into the lava flow. The channel is comprised of three alluvial reaches with sediment deposits up to 2 m thick and two bedrock gorges with incision of up to 8 m, with larger magnitude incision in the upstream gorge. Abraded forms such as flutes are present in both gorges. Given the low magnitude and duration of modern snowmelt flow in the channel, it is likely that much of the incision was driven by sediment-laden outburst floods from the terminus of Collier Glacier, which is situated just upstream of the lava flow and has produced two outburst floods in the past 100 years. This site is well suited for comparing incision models because of the relatively uniform lithology of the lava flow and our ability to constrain the timing and depth of incision using the undissected lava surface above the channel as an initial condition. Using a simple finite difference scheme with airborne-Lidar-derived pre-incision topography as an initial condition, we predict incision in the two gorges through time with both stream power and sediment-dependent models. Field observations
Directory of Open Access Journals (Sweden)
Jianli Li
2013-01-01
Full Text Available In order to improve the precision of the Strapdown Inertial Navigation System (SINS) and reduce the complexity of the traditional calibration method, a novel calibration and compensation scheme is proposed. An optimized calibration method with four-direction rotations is designed to calculate all error coefficients of the Ring Laser Gyroscope (RLG) SINS at a series of constant temperatures. According to the actual working environment, the temperature errors of the RLG SINS are compensated by a nonlinear interpolation compensation algorithm. The experimental results show that the inertial navigation errors of the proposed method are reduced.
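The compensation step can be illustrated minimally: interpolate the calibrated error coefficient between the constant-temperature calibration points and subtract it from the raw output. The calibration values below are invented, and piecewise-linear `np.interp` stands in for the paper's nonlinear interpolation:

```python
import numpy as np

# Calibrated gyro bias (deg/h) at a few constant temperatures
# (illustrative values, not measured data)
cal_temps = np.array([-20.0, 0.0, 20.0, 40.0])
cal_bias = np.array([0.08, 0.03, 0.01, 0.05])

def compensated_output(raw, temperature):
    """Subtract the temperature-dependent bias, estimated by
    interpolating between the calibration points; a spline would
    give a smoother, nonlinear estimate."""
    bias = np.interp(temperature, cal_temps, cal_bias)
    return raw - bias
```

At run time the same lookup is applied to every sensor channel using the coefficients obtained from the four-direction rotation calibration.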
Interpolation functors and interpolation spaces
Brudnyi, Yu A
1991-01-01
The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...
Up and down cascade in a dynamo model: spontaneous symmetry breaking.
Blanter, E M; Narteau, C; Shnirman, M G; Le Mouël, J L
1999-05-01
A multiscale turbulent model of dynamo is proposed. A secondary magnetic field is generated from a primary field by a flow made of turbulent helical vortices (cyclones) of different ranges, and amplified by an up and down cascade mechanism. The model displays symmetry breakings of different ranges although the system construction is completely symmetric. Large-scale symmetry breakings for symmetric conditions of the system evolution are investigated for all kinds of cascades: pure direct cascade, pure inverse cascade, and up and down cascade. It is shown that long lived symmetry breakings of high scales can be obtained only in the case of the up and down cascade. The symmetry breakings find expression in intervals of constant polarity of the secondary field (called chrons of the geomagnetic field). Long intervals of constant polarity with quick reversals are obtained in the model; conditions for such a behavior are investigated. Strong variations of the generated magnetic field during intervals of constant polarity are also observed in the model. Possible applications of the model to geodynamo modeling and various directions of future investigation are briefly discussed.
COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES
Directory of Open Access Journals (Sweden)
G. Garnero
2014-01-01
In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LiDAR, together with test sections realized at higher resolutions and independent digital models of the same territory, allows a series of analyses to be set up to determine the best interpolation methodologies. Analysis of the residuals on the test sites allows descriptive statistics of the computed values to be calculated. All the algorithms furnished interesting results; notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gave the best results in this study case. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may reduce the quality of the final output in the interpolation phase.
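Since IDW emerges as the best performer in this case study, a minimal sketch of the method may be useful (a dense formulation, without the search radius or k-nearest cut-off that production implementations add):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each query value is a weighted mean
    of the samples with weights 1 / d**power, so near samples dominate
    and an exact hit effectively returns the sample value."""
    p = np.asarray(xy_known, float)
    q = np.atleast_2d(np.asarray(xy_query, float))
    z = np.asarray(z_known, float)
    d = np.linalg.norm(q[:, None, :] - p[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ z) / w.sum(axis=1)
```

The `power` exponent controls how sharply influence decays with distance; higher values make the interpolated surface hug the nearest samples more tightly.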
DISOPE distributed model predictive control of cascade systems with network communication
Institute of Scientific and Technical Information of China (English)
Yan ZHANG; Shaoyuan LI
2005-01-01
A novel distributed model predictive control scheme based on dynamic integrated system optimization and parameter estimation (DISOPE) was proposed for nonlinear cascade systems in a network environment. Under the distributed control structure, online optimization of the cascade system was composed of several cascaded agents that can cooperate and exchange information via network communication. By iterating on modified distributed linear optimal control problems, estimating the parameters at every iteration, the correct optimal control action of the nonlinear model predictive control problem of the cascade system could be obtained, assuming that the algorithm converges. This approach avoids solving the complex nonlinear optimization problem and significantly reduces the computational burden. Simulation results for a fossil fuel power unit are presented to verify the effectiveness and practicability of the proposed algorithm.
SIMULATION MODELING OF AN ENHANCED LOW-EMISSION SWIRL-CASCADE BURNER
Energy Technology Data Exchange (ETDEWEB)
Ala Qubbaj
2004-04-01
Based on the physical and computational models outlined in the previous technical progress reports, natural gas jet diffusion flames in baseline, cascade, swirl, and swirl-cascade burners were numerically modeled. The thermal, composition, and flow (velocity) fields were simulated. The temperature, CO2 and O2 concentrations, as well as the axial and radial velocity profiles, were computed and analyzed. The numerical results showed that swirl and cascade burners have more efficient fuel/air mixing, a shorter flame, and lower NOx emission levels compared to the baseline case. The results also revealed that the optimal configurations of the cascaded and swirling flames did not produce improved performance when combined in a "swirl-cascade burner".
Testing an Idealized Dynamic Cascade Model of the Development of Serious Violence in Adolescence
Dodge, Kenneth A.; Greenberg, Mark T.; Malone, Patrick S.
2008-01-01
A dynamic cascade model of development of serious adolescent violence was proposed and tested through prospective inquiry with 754 children (50% male; 43% African American) from 27 schools at 4 geographic sites followed annually from kindergarten through grade 11 (ages 5 through 18). Self, parent, teacher, peer, observer, and administrative reports provided data. Partial least squares (PLS) analyses revealed a cascade of prediction and mediation: An early social context of disadvantage predic...
Directory of Open Access Journals (Sweden)
Mei Hong
2017-01-01
Full Text Available Prediction in Ungauged Basins (PUB) is an important task for water resources planning and management and remains a fundamental challenge for the hydrological community. In recent years, geostatistical methods have proven valuable for estimating hydrological variables in ungauged catchments. However, four major problems restrict the development of geostatistical methods. We established a new information diffusion model based on a genetic algorithm (GIDM) for spatial interpolation of runoff in ungauged basins. Genetic algorithms (GA) are used to generate high-quality solutions to optimization and search problems; using a GA, the optimal window width parameter can be obtained. To test our new method, seven experiments of annual runoff interpolation based on GIDM at 17 stations on the mainstream and tributaries of the Yellow River were carried out and compared with the inverse distance weighting (IDW) method, the Cokriging (COK) method, and conventional IDMs using the same sparse observed data. All seven experiments show that the GIDM method can solve the four problems of previous geostatistical methods to some extent and obtains the best accuracy among the four models. The key problems of PUB research are the lack of observation data and the difficulties in information extraction, so the GIDM is a new and useful tool for solving the Prediction in Ungauged Basins (PUB) problem and improving water management.
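The IDW baseline used in comparisons like the one above can be sketched in a few lines. This is a minimal sketch of generic inverse distance weighting; the gauge coordinates and values below are toy data, not the Yellow River stations.

```python
import math

def idw(points, target, power=2.0):
    """Inverse distance weighting: estimate the value at `target`
    from (x, y, value) observations. Toy illustration, not the
    paper's GIDM method."""
    num, den = 0.0, 0.0
    for x, y, v in points:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v  # target coincides with an observation
        w = d ** -power
        num += w * v
        den += w
    return num / den

obs = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(idw(obs, (0.5, 0.5)))  # equidistant from all three gauges -> their mean, 20.0
```

The `power` exponent controls how quickly a gauge's influence decays with distance; 2 is the conventional choice.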
Xiaolong Wang; Yi Wang; Zhizhu Cao; Weizhong Zou; Liping Wang; Guojun Yu; Bo Yu; Jinjun Zhang
2013-01-01
In general, proper orthogonal decomposition (POD) method is used to deal with single-parameter problems in engineering practice, and the linear interpolation is employed to establish the reduced model. Recently, this method is extended to solve the double-parameter problems with the amplitudes being achieved by cubic B-spline interpolation. In this paper, the accuracy of reduced models, which are established with linear interpolation and cubic B-spline interpolation, respectively, is verified...
An LCOR model for suppressing cascading failure in weighted complex networks
Institute of Scientific and Technical Information of China (English)
Chen Shi-Ming; Pang Shao-Peng; Zou Xiao-Qun
2013-01-01
Based on the relationship between capacity and load, cascading failure on weighted complex networks is investigated, and a load-capacity optimal relationship (LCOR) model is proposed in this paper. Compared with three other kinds of load-capacity linear or non-linear relationship models in model networks as well as a number of real-world weighted networks, including the railway network, the airport network and the metro network, the LCOR model is shown to have the best robustness against cascading failure at lower cost. Furthermore, theoretical analysis and a computational method for its cost threshold are provided to validate the effectiveness of the LCOR model. The results show that the LCOR model is effective for designing real-world networks with high robustness and low cost against cascading failure.
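The generic linear capacity-load cascade that models like LCOR are compared against can be sketched as follows. This assumes the common linear relation C_i = (1 + alpha) * L_i; the graph, loads, and even-redistribution rule are illustrative assumptions, not the LCOR relation itself.

```python
def cascade(neighbors, load, alpha=0.2, start=0):
    """Linear capacity-load cascade sketch: capacity C_i = (1+alpha)*L_i.
    Failing a node shifts its load evenly onto live neighbors; any
    neighbor pushed past capacity fails in turn. Returns the failed set.
    Illustrative baseline, not the paper's LCOR model."""
    capacity = {i: (1 + alpha) * l for i, l in load.items()}
    failed = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        live = [n for n in neighbors[node] if n not in failed]
        if not live:
            continue  # load is lost; nowhere to redistribute
        share = load[node] / len(live)
        for n in live:
            load[n] += share
            if load[n] > capacity[n]:
                failed.add(n)
                frontier.append(n)
    return failed
```

On a three-node line graph with unit loads, a small tolerance alpha lets one failure take down the whole chain, while a large tolerance confines the damage to the initial node.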
A Comparative Modelling Study of PWM Control Techniques for Multilevel Cascaded Inverter
Directory of Open Access Journals (Sweden)
A. TAHRI
2005-01-01
Full Text Available The emergence of multilevel converters has been on the increase since the last decade. These new types of converters are suitable for high-voltage, high-power applications due to their ability to synthesize waveforms with a better harmonic spectrum. Numerous topologies have been introduced and widely studied for utility and drive applications. Amongst these topologies, the multilevel cascaded inverter was introduced in static VAR compensation and drive systems. This paper investigates several control techniques applied to the multilevel cascaded inverter in order to ensure efficient voltage utilization and a better harmonic spectrum. A modelling and control strategy for a single-phase multilevel cascaded inverter is also investigated. Computer simulation results using Matlab are reported and discussed, together with a comparative study of the different control techniques for the multilevel cascaded inverter. Moreover, experimental results are carried out on a scaled-down prototype to prove the effectiveness of the proposed analysis.
Pei, Sen; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A
2016-01-01
In spreading dynamics in social networks, there exists an optimal set of influencers whose activation can induce a global-scale cascade of information. To find the optimal, or minimal, set of spreaders, a method based on collective influence theory has been proposed for spreading dynamics with a continuous phase transition that can be mapped to optimal percolation. However, when it comes to diffusion processes exhibiting a first-order, or discontinuous transition, identifying the set of optimal spreaders with a linear algorithm for large-scale networks still remains a challenging task. Here we address this issue by exploring the collective influence in general threshold models of opinion cascading. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a linearly scalable algorithm for m...
A NEW CASCADING FAILURE MODEL WITH DELAY TIME IN CONGESTED COMPLEX NETWORKS
Institute of Scientific and Technical Information of China (English)
Jian WANG; Yanheng LIU; Yu JIAO
2009-01-01
Cascading failures often occur in congested complex networks. A cascading failure can be expressed as a three-phase process: generation, diffusion, and dissipation of congestion. Different from betweenness centrality, a congestion function is proposed to represent the extent of congestion on a given node. Inspired by the restart process of a node, we introduce the concept of "delay time," during which the overloaded node cannot receive or forward any traffic, so an intergradation between permanent removal and non-removal is built and the flexibility of the presented model is demonstrated. Because overloaded nodes are not removed from the network permanently in our model, the connectivity of the network is not destroyed by cascading failures, and a new evaluation function of network efficiency is therefore proposed to measure the damage they cause. Finally, we investigate the effects of network structure and size, delay time, processing ability, and traffic generation speed on congestion propagation. Cascading processes composed of three phases and some factors affecting cascade propagation are uncovered as well.
Ruelland, D.; Ardoin-Bardin, S.; Billen, G.; Servat, E.
2008-10-01
This paper examines the sensitivity of a hydrological model to several methods of spatial interpolation of rainfall data. The question is investigated in a context of data scarcity over a large West African catchment (100,000 km²) subject to a drastic trend of rainfall deficit since the 1970s. Thirteen widely scattered rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces over the 1950-1992 period via various methods: Thiessen polygons, the inverse distance weighted (IDW) method, thin plate smoothing splines (spline), and ordinary kriging. The accuracy of these interpolated datasets was evaluated using two complementary approaches. First, a point-by-point assessment was conducted, comparing the interpolated values against observed point data. Second, a conceptual rainfall-runoff model (Hydrostrahler) was used to assess whether and to what extent the alternative sets of interpolated rainfall impacted the hydrological simulations. A lumped modelling exercise over a long period (1952-1992) and a semi-distributed exercise over a short period (1971-1976) were performed, using calibrations aimed at optimizing a Nash-Sutcliffe criterion. The results were evaluated for each interpolated forcing dataset using statistical analysis and visual inspection of the simulated and observed hydrographs and the parameters obtained from calibration. Assessment of the interpolation methods against point data indicates that the IDW and kriging methods are more efficient than the simple Thiessen technique and, to a lesser extent, than spline. The use of these data in a daily lumped modelling application shows a different ranking of the interpolation methods with regard to various hydrological assessments. The model is particularly sensitive to the differences in rainfall input volume produced by each interpolation method: the IDW dataset yields the highest hydrological
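The Nash-Sutcliffe criterion used for calibration above is straightforward to compute; a minimal sketch with made-up observed and simulated series (not the study's hydrographs):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((o-s)^2) / sum((o-mean_o)^2).
    1 is a perfect fit; 0 means the model does no better than
    predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

print(nash_sutcliffe([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```

Because the denominator is the variance of the observations, catchments with strongly seasonal flow tend to score higher for the same absolute error, which is worth keeping in mind when comparing sites.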
Modelling of the Blood Coagulation Cascade in an In Vitro Flow System
DEFF Research Database (Denmark)
Andersen, Nina Marianne; Sørensen, Mads Peter; Efendiev, Messoud A.;
2010-01-01
We derive a mathematical model of a part of the blood coagulation cascade set up in a perfusion experiment. Our purpose is to simulate the influence of blood flow and diffusion on the blood coagulation pathway. The resulting model consists of a system of partial differential equations taking into ... and flow equations, which guarantee non-negative concentrations at all times. The criteria are applied to the model of the blood coagulation cascade.
SIMULATION MODELING OF AN ENHANCED LOW-EMISSION SWIRL-CASCADE BURNER
Energy Technology Data Exchange (ETDEWEB)
Ala Qubbaj
2003-04-01
The research team was formed. The advanced CFDRC-CHEMKIN software package was installed on a SUN-SPARC dual-processor workstation. The literature pertinent to the project was collected. The physical model was set up and all parameters and variables were identified. Based on the physical model, the geometric modeling and grid generation processes were performed using CFD-GEOM (interactive geometric modeling and grid generation software). A total of 11160 cells (248 x 45) were generated to numerically model the baseline, cascaded, swirling, and swirling-cascaded flames. With the cascade added to the jet, the geometric complexity of the problem increased, which required multi-domain structured grid systems to be connected and matched on the boundaries.
Empirical analysis of cascade deformable models for multi-view face detection
Orozco, J.; Martinez, B.; Pantic, M.
2013-01-01
In this paper, we present a face detector based on Cascade Deformable Part Models (CDPM) [1]. Our model is learnt from partially labelled images using Latent Support Vector Machines (LSVM). Recently Zhu et al. [2] proposed a Tree Structure Model for multi-view face detection trained with facial landm
3-D model-based frame interpolation for distributed video coding of static scenes.
Maitre, Matthieu; Guillemot, Christine; Morin, Luce
2007-05-01
This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.
Power-law behavior in a cascade process with stopping events: a solvable model.
Yamamoto, Ken; Yamazaki, Yoshihiro
2012-01-01
The present paper proposes a stochastic model that can be solved analytically, from which a power-law-like distribution is derived. The model is formulated as a cascade fracture with the additional effect that each fragment at each stage of the cascade ceases fracturing with a certain probability. When the probability is constant, the exponent of the power-law cumulative distribution lies between -1 and 0, depending not only on the probability but also on the distribution of fracture points. In contrast, when the probability depends on the size of a fragment, the exponent is less than -1, irrespective of the distribution of fracture points. The applicability of our model is also discussed.
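The constant-probability variant of this cascade is easy to simulate. A Monte Carlo sketch under two stated assumptions: fracture points are uniform, and the stopping probability p is taken above 0.5 so the branching process is subcritical and the simulation always terminates (the paper also treats regimes this sketch avoids).

```python
import random

def cascade_fracture(p=0.6, seed=7):
    """Monte Carlo sketch of a stopping-probability cascade fracture:
    each fragment stops with probability p, otherwise breaks at a
    uniformly random point into two pieces that fracture further.
    Returns the final fragment sizes (summing to the initial size 1)."""
    rng = random.Random(seed)
    active, done = [1.0], []
    while active:
        frag = active.pop()
        if rng.random() < p:
            done.append(frag)   # this fragment ceases to fracture
        else:
            cut = rng.random() * frag
            active.extend([cut, frag - cut])
    return done

sizes = cascade_fracture()
print(len(sizes), min(sizes), max(sizes))
```

Collecting the sizes over many runs and plotting the cumulative distribution on log-log axes exhibits the heavy tail the paper characterizes analytically.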
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Sander, P Martin
2013-01-01
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recently published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that may be either plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity and the absence of mastication. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Directory of Open Access Journals (Sweden)
P Martin Sander
Full Text Available Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recently published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that may be either plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity and the absence of mastication. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
Analysis of car-following model with cascade compensation strategy
Zhu, Wen-Xing; Zhang, Li-Dong
2016-05-01
A cascade compensation mechanism was designed to improve the dynamic performance of the traffic flow system. Two compensation methods were used to study the unit step response in the time domain and the frequency characteristics with different parameters. The overshoot and phase margin are proportional to the compensation parameter in an underdamped condition. Through the comparison we choose the phase-lead compensation method as the main strategy for suppressing traffic jams. Simulations were conducted under two boundary conditions to verify the validity of the compensator. It can be concluded that the stability of the system is strengthened as the phase-lead compensation parameter increases. Moreover, the numerical simulation results are in good agreement with the analytical results.
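The phase-margin gain from a phase-lead compensator can be quantified with the standard textbook form C(s) = (1 + aTs)/(1 + Ts), a > 1, whose maximum phase lead is asin((a-1)/(a+1)) at frequency 1/(T*sqrt(a)). This form and its parameter a are a generic illustration; the paper's actual compensator parameters are not given in the abstract.

```python
import math

def max_phase_lead_deg(a):
    """Maximum phase lead (degrees) of the lead compensator
    C(s) = (1 + a*T*s)/(1 + T*s), a > 1, attained at w = 1/(T*sqrt(a)):
    phi_max = asin((a - 1)/(a + 1)). Independent of T."""
    return math.degrees(math.asin((a - 1) / (a + 1)))

for a in (2.0, 5.0, 10.0):
    print(a, round(max_phase_lead_deg(a), 1))
```

The added lead grows monotonically with a but saturates below 90 degrees, which is consistent with the abstract's observation that the phase margin rises with the compensation parameter.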
Institute of Scientific and Technical Information of China (English)
Zhu Yiqing; Hu Bin; Li Hui; Jiang Fengyun
2005-01-01
In this paper, the spatial-temporal gravity variation patterns of the northeastern margin of the Qinghai-Xizang (Tibet) Plateau in 1992-2001 are modeled using bicubic spline interpolation functions, and the relations of gravity change with seismicity and tectonic movement are discussed preliminarily. The results show the following: (1) the regional gravitational field changes regularly, and a gravity anomaly zone or gravity concentration zone appears in the earthquake preparation process; (2) in the significant time period, the gravity variation shows different features in the northwest, southeast and northeast parts of the surveyed region respectively, with Lanzhou as the boundary; (3) the gravity variation distribution is basically identical to the strike of the tectonic fault zone of the region, and the contour of gravity variation is closely related to the fault distribution.
Empirical analysis of cascade deformable models for multi-view face detection
Orozco, Javier; Martinez, Brais; Pantic, Maja
2015-01-01
We present a multi-view face detector based on Cascade Deformable Part Models (CDPM). Over the last decade, there have been several attempts to extend the well-established Viola&Jones face detector algorithm to solve the problem of multi-view face detection. Recently a tree structure model for multi
ARRA: Reconfiguring Power Systems to Minimize Cascading Failures - Models and Algorithms
Energy Technology Data Exchange (ETDEWEB)
Dobson, Ian [Iowa State University; Hiskens, Ian [Unversity of Michigan; Linderoth, Jeffrey [University of Wisconsin-Madison; Wright, Stephen [University of Wisconsin-Madison
2013-12-16
Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and the cascading stall of induction machines.
Zayane, Chadia
2014-06-01
In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form allowing the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.
On Double Interpolation in Polar Coordinates
Directory of Open Access Journals (Sweden)
Antoniu Nicula
2009-10-01
Full Text Available Interpolation is an important tool in numerical modeling of real-life systems. The Lagrange interpolation is frequently used, due to particular advantages in implementation. The bi-dimensional version may be implemented with Cartesian or with polar coordinate system. Choice of the coordinate system is important in order to obtain accurate results. The polar case has particular properties that can be exploited to minimize some of the common disadvantages of polynomial interpolation.
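The one-dimensional Lagrange form underlying the bi-dimensional scheme discussed above can be sketched directly; the sample points below are illustrative, not from the paper.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at x, using the basis-polynomial form
    L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Interpolating y = x^2 through three of its points recovers it exactly.
print(lagrange([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```

A bi-dimensional version applies this formula along each coordinate in turn; in polar coordinates the angular coordinate's periodicity is one of the properties the paper exploits.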
Simulation Modeling of an Enhanced Low-Emission Swirl-Cascade Burner
Energy Technology Data Exchange (ETDEWEB)
Ala Qubbaj
2004-09-01
''Cascade burners'' are a passive technique to control the stoichiometry of the flame by changing the flow dynamics and rates of mixing in the combustion zone with a set of venturis surrounding the flame. Cascade burners have shown advantages over other techniques; their reliability, flexibility, safety, and cost make them attractive and desirable. On the other hand, ''swirl burners'' have shown superiority in producing a stable flame under a variety of operating conditions and fuel types. The basic idea is to impart swirl to the air or fuel stream, or both. This not only helps to stabilize the flame but also enhances mixing in the combustion zone. As a result, nonpremixed (diffusion) swirl burners have been increasingly used in industrial combustion systems such as gas turbines, boilers, and furnaces, due to their advantages of safety and stability. Despite the advantages of cascade and swirl burners, both are passive control techniques, which resulted in moderate pollutant emission reductions compared to SCR, SNCR and FGR (active) methods. The present investigation studies the prospects of combining both techniques in what is to be named ''an enhanced swirl-cascade burner''. Natural gas jet diffusion flames in baseline, cascade, swirl, and swirl-cascade burners were numerically modeled using the CFDRC package. The thermal, composition, and flow (velocity) fields were simulated. The numerical results showed that swirl and cascade burners have more efficient fuel/air mixing, a shorter flame, and lower NOx emission levels compared to the baseline case. The results also revealed that the optimal configurations of the cascaded and swirling flames did not produce improved performance when combined in a ''swirl-cascade burner''. The non-linearity and complexity of the system account for such a result, and therefore, all possible combinations, i
Reji, G; Chander, Subhash; Kamble, Kalpana
2014-09-01
Rice stem borer is an important insect pest causing severe damage to the rice crop in India. The relationship between weather parameters such as maximum (T(max)) and minimum temperature (T(min)), morning (RH1) and afternoon relative humidity (RH2), and the severity of stem borer damage (SB) was studied. Multiple linear regression analysis was used to formulate pest-weather models at three sites in southern India, namely Warangal, Coimbatore and Pattambi, as SB = -66.849 + 2.102 T(max) + 0.095 RH1, SB = 156.518 - 3.509 T(min) - 0.785 RH1 and SB = 43.483 - 0.418 T(min) - 0.283 RH1, respectively. The pest damage predicted using the models at the three sites did not differ significantly from the observed damage (t = 0.442; p > 0.05). The range of weather parameters favourable for stem borer damage at each site was also predicted using the models. Geospatial interpolation (kriging) of the pest-weather models was carried out to predict the zones of stem borer damage in southern India. Maps showing areas with high, medium and low risk of stem borer damage were prepared using a geographical information system. The risk maps of rice stem borer would be useful in devising management strategies for the pest in the region.
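The fitted site models above are plain linear equations and can be evaluated directly. The coefficients below are the Warangal equation from the abstract; the temperature and humidity inputs are illustrative values, not observations from the study.

```python
def stem_borer_warangal(t_max, rh1):
    """Pest-weather model for Warangal from the study:
    SB = -66.849 + 2.102*T(max) + 0.095*RH1, where T(max) is the
    maximum temperature (deg C) and RH1 the morning relative humidity (%)."""
    return -66.849 + 2.102 * t_max + 0.095 * rh1

# Illustrative inputs: T(max) = 35 deg C, RH1 = 80 %.
print(round(stem_borer_warangal(35.0, 80.0), 3))  # 14.321
```

The positive T(max) coefficient at Warangal versus the negative T(min) coefficients at the other two sites is what makes the site-specific kriged risk maps differ.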
Directory of Open Access Journals (Sweden)
Jie Yang
2015-01-01
Full Text Available It is difficult to effectively identify and eliminate multicollinearity among independent factors by least-squares regression. Focusing on this insufficiency, the sediment deposition risk of cascade reservoirs and a fitting model of sediment flux into the reservoir are studied. The partial least-squares regression (PLSR) method is adopted for the modeling analysis; model fitting is organically combined with non-model-style data content analysis, so as to realize regression modeling, data structure simplification, and analysis of multiple correlations among factors; meanwhile, the accuracy of the model is ensured through a cross-validity check. The modeling analysis of sediment flux into the cascade reservoirs of the Long-Liu section upstream of the Yellow River indicates that partial least-squares regression can effectively overcome the influence of multiple correlations among factors, and the isolated factor variables have better ability to explain the physical cause of the measured results.
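The core of PLSR, which lets it tolerate correlated predictors where ordinary least squares struggles, can be sketched with a single-component PLS1 step: project X onto the direction most covariant with y, then regress y on that score. This is a minimal one-component sketch (the paper would use several components and cross-validation); the data passed in are assumed already column-centred.

```python
def pls1_one_component(X, y):
    """One-component PLS1 sketch: weight vector w proportional to X^T y,
    scores t = X w, inner regression q = (t.y)/(t.t). X is a list of
    rows; X and y should be centred beforehand. Returns a predictor."""
    n, m = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(m)) for i in range(n)]
    q = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

    def predict(row):
        return q * sum(row[j] * w[j] for j in range(m))
    return predict
```

With two perfectly collinear columns (a case where ordinary least squares has no unique solution) the single latent direction still yields stable predictions, which is precisely the multicollinearity robustness the abstract describes.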
Directory of Open Access Journals (Sweden)
Xiaolong Wang
2013-01-01
Full Text Available In general, the proper orthogonal decomposition (POD) method is used to deal with single-parameter problems in engineering practice, with linear interpolation employed to establish the reduced model. Recently, this method was extended to solve double-parameter problems, with the amplitudes obtained by cubic B-spline interpolation. In this paper, the accuracy of reduced models established with linear interpolation and cubic B-spline interpolation, respectively, is verified via two typical examples. The results of both methods are satisfactory, and the results of cubic B-spline interpolation are more accurate than those of linear interpolation. These results are meaningful for guiding the application of POD interpolation to complex multiparameter problems.
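The linear-interpolation variant of the reduced model above amounts to interpolating the modal amplitudes between two sampled parameter values and superposing the fixed POD modes. A minimal sketch; the modes and amplitude tables below are hypothetical stand-ins, not from the paper's examples.

```python
def reconstruct(modes, amp_at, param, p0, p1):
    """POD reduced-model sketch: amplitudes are known at parameter
    values p0 and p1; linearly interpolate them at `param` and
    superpose the (fixed) POD modes to rebuild the field."""
    s = (param - p0) / (p1 - p0)
    amps = [(1 - s) * a0 + s * a1
            for a0, a1 in zip(amp_at[p0], amp_at[p1])]
    n = len(modes[0])
    return [sum(a * mode[i] for a, mode in zip(amps, modes))
            for i in range(n)]

modes = [[1.0, 0.0], [0.0, 1.0]]                  # orthonormal toy modes
amp_at = {0.0: [1.0, 2.0], 1.0: [3.0, 4.0]}       # amplitudes at two samples
print(reconstruct(modes, amp_at, 0.5, 0.0, 1.0))  # [2.0, 3.0]
```

The cubic B-spline variant replaces the straight-line blend of amplitudes with a smooth spline through several sampled parameter values, which is why it is more accurate between samples.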
Using the Cascade Model to Improve Antenatal Screening for the Hemoglobin Disorders
Gould, Dinah; Papadopoulos, Irena; Kelly, Daniel
2012-01-01
Introduction: The inherited hemoglobin disorders constitute a major public health problem. Facilitators (experienced hemoglobin counselors) were trained to deliver knowledge and skills to "frontline" practitioners to enable them to support parents during antenatal screening via a cascade (train-the-trainer) model. Objectives of…
Martel, Michelle M.; Pierce, Laura; Nigg, Joel T.; Jester, Jennifer M.; Adams, Kenneth; Puttler, Leon I.; Buu, Anne; Fitzgerald, Hiram; Zucker, Robert A.
2009-01-01
Temperament traits may increase risk for developmental psychopathology like Attention-Deficit/Hyperactivity Disorder (ADHD) and disruptive behaviors during childhood, as well as predisposing to substance abuse during adolescence. In the current study, a cascade model of trait pathways to adolescent substance abuse was examined. Component…
The Transfer of Content Knowledge in a Cascade Model of Professional Development
Turner, Fay; Brownhill, Simon; Wilson, Elaine
2017-01-01
A cascade model of professional development presents a particular risk that "knowledge" promoted in a programme will be diluted or distorted as it passes from originators of the programme to local trainers and then to the target teachers. Careful monitoring of trainers' and teachers' knowledge as it is transferred through the system is…
Molecular dynamics and binary collision modeling of the primary damage state of collision cascades
DEFF Research Database (Denmark)
Heinisch, H.L.; Singh, B.N.
1992-01-01
Quantitative information on defect production in cascades in copper obtained from recent molecular dynamics simulations is compared to defect production information determined earlier with a model based on the binary collision approximation (BCA). The total numbers of residual defects, the fracti...
Kim, Chang Woo; Rhee, Young Min
2016-11-08
Constructing a reliable potential energy surface (PES) is a key step toward computationally studying the chemical dynamics of any molecular system. The interpolation scheme is a useful tool that can closely follow the accuracy of quantum chemical means at a dramatically reduced computational cost. However, applying interpolation to building the PES of a large molecule is not a straightforward black-box approach, as it frequently encounters practical difficulties associated with its large dimensionality. Here, we present a detailed course of applying interpolation to building the PES of a large chromophore molecule. We take the example of the S0 and S1 electronic states of bacteriochlorophyll a (BChla) molecules in the Fenna-Matthews-Olson light harvesting complex. With a reduced model molecule that bears BChla's main π-conjugated ring, various practical approaches are designed for improving the PES quality in a stable manner and for fine-tuning the final surface such that it can be adopted for long-time molecular dynamics simulations. Combined with parallel implementation, we show that interpolated mechanics/molecular mechanics (IM/MM) simulations of the entire complex on the nanosecond time scale can be conducted readily without any practical issues. With 1500 interpolation data points for each chromophore unit, the PES error relative to the reference quantum chemical calculation is found to be ∼0.15 eV in the thermally accessible region of the conformational space, together with a ∼0.01 eV error in S0 - S1 transition energies. The performance issue related to the use of a large interpolation database within the framework of our parallel routines is also discussed.
Modeling cascading failures with the crisis of trust in social networks
Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo
2015-10-01
In social networks, some friends often post or disseminate malicious information, such as advertising messages, informal overseas purchasing messages, illegal messages, or rumors. Too much malicious information may cause a feeling of intense annoyance. When the feeling exceeds a certain threshold, it will lead social network users to distrust these friends, which we call the crisis of trust. The crisis of trust in social networks has already become a universal concern and an urgent unsolved problem. As a result of the crisis of trust, users will cut off their relationships with some of their untrustworthy friends. Once a few of these relationships are made unavailable, it is likely that other friends will decline trust, and a large portion of the social network will be influenced. The phenomenon in which the unavailability of a few relationships will trigger the failure of successive relationships is known as cascading failure dynamics. To our best knowledge, no one has formally proposed cascading failures dynamics with the crisis of trust in social networks. In this paper, we address this potential issue, quantify the trust between two users based on user similarity, and model the minimum tolerance with a nonlinear equation. Furthermore, we construct the processes of cascading failures dynamics by considering the unique features of social networks. Based on real social network datasets (Sina Weibo, Facebook and Twitter), we adopt two attack strategies (the highest trust attack (HT) and the lowest trust attack (LT)) to evaluate the proposed dynamics and to further analyze the changes of the topology, connectivity, cascading time and cascade effect under the above attacks. We numerically find that the sparse and inhomogeneous network structure in our cascading model can better improve the robustness of social networks than the dense and homogeneous structure. However, the network structure that seems like ripples is more vulnerable than the other two network
An Exactly Soluble Hierarchical Clustering Model Inverse Cascades, Self-Similarity, and Scaling
Gabrielov, A; Turcotte, D L
1999-01-01
We show how clustering as a general hierarchical dynamical process proceeds via a sequence of inverse cascades to produce self-similar scaling, as an intermediate asymptotic, which then truncates at the largest spatial scales. We show how this model can provide a general explanation for the behavior of several models that have been described as ``self-organized critical,'' including forest-fire, sandpile, and slider-block models.
Interpolation-based H2 Model Reduction for port-Hamiltonian Systems
Gugercin, Serkan; Polyuga, Rostyslav V.; Beattie, Christopher A.; Schaft, Arjan J. van der
2009-01-01
Port network modeling of physical systems leads directly to an important class of passive state space systems: port-Hamiltonian systems. We consider here methods for model reduction of large scale port-Hamiltonian systems that preserve port-Hamiltonian structure and are capable of yielding reduced o
Uncertainty Analysis of Different DEM Interpolation Methods Based on AMMI Model
Institute of Scientific and Technical Information of China (English)
赵明伟; 汤国安; 田剑
2012-01-01
Evaluation of interpolation models is a hot topic in DEM interpolation studies. Most studies in recent decades have focused on the interpolation model itself, while ignoring the interaction between interpolation models and their application environments. That is to say, on the one hand, different interpolation models influence the accuracy of the analysis result; on the other hand, different environments also influence the accuracy of a given interpolation model. For example, ordinary kriging generally performs poorly for DEM interpolation, yet its accuracy is high when the interpolated region is flat, which indicates that the method is well suited to flat terrain. In order to analyze the applicability of different interpolation methods in different environments, this paper selected test areas of different geomorphic types on the Loess Plateau of northern Shaanxi, and used the AMMI model to evaluate the accuracy of the interpolation models and their applicability to each geomorphic type. The experimental results showed that the AMMI model can test the interaction between interpolation models and environments. Taking the test in this paper as an example, ordinary kriging is the best choice for DEM construction in the northern Shaanxi region. Finally, by analyzing the correlation coefficients between the environment coefficient and several landform parameters, it was found that slope gradient best represents the first environment coefficient.
Mathematical Model of Extrinsic Blood Coagulation Cascade Dynamic System
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The blood coagulation system is very important to life. This paper presents a mathematical blood coagulation model for the extrinsic pathway. This model simulates clotting factor VIII, which plays an important role in the coagulation mechanism. The mathematical model is used to study the equilibrium stability, orbit structure, attractors and global stability behavior, with conclusions in accordance with the physiological phenomena. Moreover, the results provide information about blood related illnesses, which can be used for further study of the coagulation mechanism.
Mudunuru, M K; Harp, D R; Guthrie, G D; Viswanathan, H S
2016-01-01
The goal of this paper is to assess the utility of Reduced-Order Models (ROMs) developed from 3D physics-based models for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on Latin Hypercube Sampling (LHS) of model inputs drawn from uniform probability distributions. Key sensitive parameters are identified from these simulations, which are fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. The inputs for ROMs are based on these key sensitive parameters. The ROMs are then used to evaluate the influence of subsurface attributes on thermal power production curves. The resulting ROMs are compared with field-data and the detailed physics-based numerical simulations. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production cu...
Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Excep...
Extension of the Liège Intranuclear-Cascade model to reactions induced by light nuclei
Mancusi, Davide; Cugnon, Joseph; David, Jean-Christophe; Kaitaniemi, Pekka; Leray, Sylvie
2014-01-01
The purpose of this paper is twofold. First, we present the extension of the Liège Intranuclear Cascade model to reactions induced by light ions. Second, we describe the C++ version of the code, which is physics-wise equivalent to the legacy version, is available in Geant4, and will serve as the basis for all future development of the model. We describe the ideas upon which we built our treatment of nucleus-nucleus reactions and compare the model predictions against a vast set of heterogeneous experimental data. In spite of the discussed limitations of the intranuclear-cascade scheme, we find that our model yields valid predictions for a number of observables and positions itself as one of the most attractive alternatives available to Geant4 users for the simulation of light-ion-induced reactions.
Effects of temporal correlations on cascades: Threshold models on temporal networks
Backlund, Ville-Pekka; Pan, Raj Kumar
2014-01-01
A person's decision to adopt an idea or product is often driven by the decisions of peers, mediated through a network of social ties. A common way of modeling adoption dynamics is to use threshold models, where a node may become an adopter given a high enough rate of contacts with adopted neighbors. We study the dynamics of threshold models that take both the network topology and the timings of contacts into account, using empirical contact sequences as substrates. The models are designed such that adoption is driven by the number of contacts with different adopted neighbors within a chosen time. We find that while some networks support cascades leading to network-level adoption, some do not: the propagation of adoption depends on several factors, from the frequency of contacts to the burstiness and timing correlations of contact sequences. More specifically, burstiness is seen to suppress cascade sizes when compared to randomised contact timings, while timing correlations between contacts on adjacent links facil...
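A temporal threshold rule of this kind can be sketched in a few lines. The toy model below is our own illustration (not the paper's implementation): a node adopts once it has had contacts with at least `theta` distinct adopted neighbors within a trailing window `tau`, so stretching contact timings apart, as burstiness does, can block a cascade.

```python
from collections import defaultdict

def threshold_cascade(contacts, seeds, theta=2, tau=5.0):
    """contacts: time-ordered (t, u, v) events; a node adopts once it has
    been contacted by >= theta distinct adopted neighbors within time tau."""
    adopted = set(seeds)
    recent = defaultdict(dict)   # node -> {adopted neighbor: last contact time}
    for t, u, v in contacts:
        for src, dst in ((u, v), (v, u)):
            if src in adopted and dst not in adopted:
                recent[dst][src] = t
                within = [s for s, ts in recent[dst].items() if t - ts <= tau]
                if len(within) >= theta:
                    adopted.add(dst)
    return adopted

# two adopters reach node 3 close together in time -> cascade continues to 4
events = [(0.0, 1, 3), (1.0, 2, 3), (10.0, 3, 4), (11.0, 1, 4)]
print(sorted(threshold_cascade(events, seeds={1, 2})))  # [1, 2, 3, 4]
```

Feeding the same edges with the two seed contacts spread far apart (e.g. at t = 0 and t = 20 with `tau = 5`) leaves node 3 unadopted, which is the timing effect the abstract describes.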
Directory of Open Access Journals (Sweden)
Bikić Siniša M.
2016-01-01
Full Text Available This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade, and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades, and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on the laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation, and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment, and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated by using one set of data, while its verification was conducted by using a second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Projekat Ministarstva nauke Republike Srbije, br. TR31058]
A new interpolation method to model thickness, isopachs, extent, and volume of tephra fall deposits
Yang, Qingyuan; Bursik, Marcus
2016-10-01
Tephra thickness distribution is the primary piece of information used to reconstruct the histories of past explosive volcanic eruptions. We present a method for modeling tephra thickness with less subjectivity than is the case with hand-drawn isopachs, the current, most frequently used method. The algorithm separates the thickness of a tephra fall deposit into a trend and local variations and models them separately using segmented linear regression and ordinary kriging. The distance to the source vent and the downwind distance are used to characterize the trend model. The algorithm is applied to thickness datasets for the Fogo Member A and North Mono Bed 1 tephras. Simulations on subsets of data and cross-validation are implemented to test the effectiveness of the algorithm in the construction of the trend model and the model of local variations. The results indicate that model isopach maps and volume estimations are consistent with previous studies and point to some inconsistencies in hand-drawn maps and their interpretation. The most striking feature noticed in hand-drawn mapping is a lack of adherence to the data in drawing isopachs locally. Since the model assumes a stable wind field, divergences from the predicted decrease in thickness with distance are readily noticed. Hence, wind direction, although weak in the case of Fogo A, was not unidirectional during deposition. A combination of the isopach algorithm with a new data transformation can be used to estimate the extent of fall deposits. A limitation of the algorithm is that one must estimate "by hand" the wind direction based on the thickness data.
PACIAE 2.0: An Updated Parton and Hadron Cascade Model (Program) for Relativistic Nuclear Collisions
Institute of Scientific and Technical Information of China (English)
SA Ben-hao; ZHOU Dai-mei; YAN Yu-liang; LI Xiao-mei; FENG Sheng-qing; DONG Bao-guo; CAI Xu
2012-01-01
We have updated the parton and hadron cascade model PACIAE for relativistic nuclear collisions; the new version, referred to as PACIAE 2.0, supersedes the earlier implementation based on JETSET 6.4 and PYTHIA 5.7. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering is discussed. The structures of the programs are briefly explained. In addition, some calculated examples are compared with the experimental data. It turns out that this model (program) works well.
A Model for Estimating the Kriging Interpolation Optimum
Institute of Scientific and Technical Information of China (English)
张博华
2014-01-01
Spatial interpolation is the most common and effective method for estimating quantities such as land price, rainfall, and population distribution. Its theoretical assumption is that the closer two points are in space, the more likely they are to have similar values, while the farther apart they are, the less likely their values are to be similar. Kriging interpolation is based on statistical theory. When only a few sampling points are available, an experimental scheme can be designed to fit the semivariogram function used in kriging, and the estimation model with the smallest error can then be selected.
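The experimental scheme described, fitting a semivariogram model to sparse samples and keeping the lowest-error fit, can be sketched as follows (our illustration; the exponential model and the grid search over parameters are assumptions, not the paper's method):

```python
import itertools
import math

def empirical_semivariogram(pts, vals, lags, tol):
    """Average 0.5*(z_i - z_j)**2 over point pairs binned by separation lag."""
    gamma = []
    for h in lags:
        sq = [0.5 * (vals[i] - vals[j]) ** 2
              for i, j in itertools.combinations(range(len(pts)), 2)
              if abs(math.dist(pts[i], pts[j]) - h) <= tol]
        gamma.append(sum(sq) / len(sq) if sq else None)
    return gamma

def fit_exponential(lags, gamma, sills, ranges):
    """Grid-search the exponential model g(h) = s*(1 - exp(-h/r)); return the
    (sill, range) pair with the smallest squared misfit to empirical values."""
    def sse(s, r):
        return sum((s * (1.0 - math.exp(-h / r)) - g) ** 2
                   for h, g in zip(lags, gamma) if g is not None)
    return min(((s, r) for s in sills for r in ranges), key=lambda p: sse(*p))

# toy transect: three samples on a line
print(empirical_semivariogram([(0, 0), (1, 0), (2, 0)], [1.0, 2.0, 4.0],
                              lags=[1.0, 2.0], tol=0.1))  # [1.25, 4.5]
```

In practice one would compare several candidate models (spherical, Gaussian, exponential) and use weighted least squares rather than a coarse grid search.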
Cascade recursion models of computing the temperatures of underground layers
Institute of Scientific and Technical Information of China (English)
HAN; Liqun; BI; Siwen; SONG; Shixin
2006-01-01
An RBF neural network was used to construct computational models of the underground temperatures of different layers, using ground-surface parameters and the temperatures of various underground layers. Because series recursion models also enable researchers to use above-ground surface parameters to compute the temperatures of different underground layers, this method provides a new way of using thermal infrared remote sensing to monitor the suture zones of large areas of blocks and to research thermal anomalies in geologic structures.
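At its core, an RBF-network interpolation step solves a linear system for the kernel weights so that the network reproduces the training data. A self-contained 1D sketch with a Gaussian kernel (our illustration; the paper's inputs and training procedure differ):

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(xs, ys, eps=1.0):
    """Fit Gaussian RBF weights so the 'network' reproduces the training data,
    then return the resulting interpolating function."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = gauss_solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

f = rbf_interpolant([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 2.0, 5.0])
print(round(f(1.0), 6))  # reproduces the training value 3.0
```

A trained RBF network for underground temperatures would take multi-dimensional surface parameters as inputs and typically regularize the system rather than interpolate exactly.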
Institute of Scientific and Technical Information of China (English)
顾传青; 张莺
2007-01-01
A theorem for osculatory rational interpolation was proved, establishing a new interpolation criterion. On the basis of this conclusion, a practical algorithm was presented to obtain a reduced model of linear systems. Some numerical examples are given to illustrate the result.
CSIR Research Space (South Africa)
Van den Bergh, F
2006-01-01
Full Text Available RKHS model for the first experiment. MSE = (0.5363, 0.7331). motivation for this approach was that the amount of compu- tation per cycle would be reduced significantly. The specific example in Figure 4 shows the RKHS model—initially fitted to cycle...
Institute of Scientific and Technical Information of China (English)
LYU Guokun; WANG Hui; ZHU Jiang; WANG Dakui; XIE Jiping; LIU Guimei
2014-01-01
The ensemble optimal interpolation (EnOI) is applied to the regional ocean modeling system (ROMS) with the ability to assimilate the along-track sea level anomaly (TSLA). This system is tested with an eddy-resolving system of the South China Sea (SCS). Background errors are derived from a running seasonal ensemble to account for the seasonal variability within the SCS. A fifth-order localization function with a 250 km localization radius is chosen to reduce the negative effects of sampling errors. The data assimilation system is tested from January 2004 to December 2006. The results show that the root mean square deviation (RMSD) of the sea level anomaly decreased from 10.57 to 6.70 cm, which represents a 36.6% reduction of error. The data assimilation reduces error for temperature within the upper 800 m and for salinity within the upper 200 m, although error degrades slightly at deeper depths. Surface currents are in better agreement with trajectories of surface drifters after data assimilation. The variance of sea level improves significantly in terms of both the amplitude and position of the strong and weak variance regions after assimilating TSLA. Results with AGE error (AGE) perform better than those with no AGE error (NoAGE) when considering the improvements of the temperature and the salinity. Furthermore, reasons for the extremely strong variability in the northern SCS in high resolution models are investigated. The results demonstrate that the strong variability of sea level in the high resolution model is caused by an extremely strong Kuroshio intrusion. Therefore, it is demonstrated that it is necessary to assimilate the TSLA in order to better simulate the SCS with high resolution models.
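The RMSD metric behind the quoted 36.6% error reduction is simple to reproduce; a short sketch (function names are ours, not from the paper):

```python
import math

def rmsd(obs, model):
    """Root mean square deviation between observed and modeled values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)) / len(obs))

def error_reduction_pct(before, after):
    """Percent error reduction, as quoted for the sea level anomaly RMSD."""
    return 100.0 * (1.0 - after / before)

# the abstract's figures: RMSD drops from 10.57 cm to 6.70 cm
print(round(error_reduction_pct(10.57, 6.70), 1))  # 36.6
```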
Anslow, Faron S.; Hostetler, S.; Bidlake, W.R.; Clark, P.U.
2008-01-01
We have developed a physically based, distributed surface energy balance model to simulate glacier mass balance under meteorological and climatological forcing. Here we apply the model to estimate summer ablation on South Cascade Glacier, Washington, for the 2004 and 2005 mass balance seasons. To arrive at optimal mass balance simulations, we investigate and quantify model uncertainty associated with selecting from a range of physical parameter values that are not commonly measured in glaciological mass balance field studies. We optimize the performance of the model by varying values for atmospheric transmissivity, the albedo of surrounding topography, precipitation-elevation lapse rate, surface roughness for turbulent exchange of momentum, and snow albedo aging coefficient. Of these the snow aging parameter and precipitation lapse rates have the greatest influence on the modeled ablation. We examined model sensitivity to varying parameters by performing an additional 10^3 realizations with parameters randomly chosen over a ±5% range centered about the optimum values. The best fit suite of model parameters yielded a net balance of -1.69 ± 0.38 m water equivalent (WE) for the 2004 water year and -2.10 ± 0.30 m WE up to 11 September 2005. The 2004 result is within 3% of the measured value. These simulations account for 91% and 93% of the variance in measured ablation for the respective years. Copyright 2008 by the American Geophysical Union.
Importance of coherence in models of mid-infrared quantum cascade laser gain spectra
Cui, Yuzhang I.; Harter, Michael P.; Dikmelik, Yamac; Hoffman, Anthony J.
2017-09-01
We present a three-level model based on a density matrix to examine the influence of coherence and dephasing on the gain spectrum of mid-infrared quantum cascade lasers. The model is used to examine a quantum cascade active region with multiple optical transitions. We show how coherence can explain the origin of additional peaks in the gain spectrum. We also analyze the spectra calculated using the three-level model with a rate equation formalism to demonstrate the importance of considering interface roughness and limitations of the rate equation formalism. Specifically, we present how interface roughness influences the broadening and oscillator strength that are recovered using a rate equation analysis. The results of this work are important when considering the design of active regions with multiple optical transitions and could lead to devices with improved performance.
Hydraulic modeling for lahar hazards at cascades volcanoes
Costa, J.E.
1997-01-01
The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times-of-travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, including the 230 million m3 Electron lahar from Mt. Rainier, as well as a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.
Integrated snow and hydrology modeling for climate change impact assessment in Oregon Cascades
Safeeq, M.; Grant, G.; Lewis, S.; Nolin, A. W.; Hempel, L. A.; Cooper, M.; Tague, C.
2014-12-01
In the Pacific Northwest (PNW), increasing temperatures are expected to alter the hydrologic regimes of streams by shifting precipitation from snow to rain and forcing earlier snowmelt. How are such changes likely to affect peak flows across the region? Shifts in peak flows have obvious implications for changing flood risk, but are also likely to affect channel morphology, sediment transport, aquatic habitat, and water quality, issues with potentially high economic and environmental cost. Our goal, then, is to rigorously evaluate sensitivity to potential peak flow changes across the PNW. We address this by developing a detailed representation of snowpack and streamflow evolution under varying climate scenarios using a cascade-modeling approach. We have identified paired watersheds located on the east (Metolius River) and west (McKenzie River) sides of the Cascades, representing dry and wet climatic regimes, respectively. The tributaries of these two rivers are comprised of contrasting hydrologic regimes: surface-runoff dominated western cascades and deep-groundwater dominated high-cascades systems. We use a detailed hydro-ecological model (RHESSys) in conjunction with a spatially distributed snowpack evolution model (SnowModel) to characterize the peak flow behavior under present and future climate. We first calibrated and validated the SnowModel using observed temperature, precipitation, snow water equivalent, and manual snow survey data sets. We then employed a multi-objective calibration strategy for RHESSys using the simulated snow accumulation and melt from SnowModel and observed streamflow. The Nash-Sutcliffe Efficiency between observed and simulated streamflow varies between 0.5 in groundwater and 0.71 in surface-runoff dominated systems. The initial results indicate enhanced peak flow under future climate across all basins, but the magnitude of increase varies by the level of snowpack and deep-groundwater contribution in the watershed. Our continuing effort
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
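The sequence-estimation step reduces to a standard Viterbi recursion over candidate interpolation functions. The sketch below is written in min-sum (cost) form with hypothetical local costs and a fixed switch penalty standing in for the paper's probabilistic transition model:

```python
def viterbi(costs, switch_penalty):
    """Min-sum Viterbi: costs[t][s] is the local cost of using interpolation
    function s at missing-pixel position t; switching functions between
    neighboring positions incurs an extra penalty. Returns the optimal
    sequence of function indices."""
    n, k = len(costs), len(costs[0])
    best = list(costs[0])               # best path cost ending in each state
    back = []                           # backpointers per step
    for t in range(1, n):
        ptr, new = [], []
        for s in range(k):
            p = min(range(k),
                    key=lambda q: best[q] + (switch_penalty if q != s else 0.0))
            new.append(costs[t][s] + best[p] + (switch_penalty if p != s else 0.0))
            ptr.append(p)
        best, back = new, back + [ptr]
    s = min(range(k), key=lambda q: best[q])
    path = [s]
    for ptr in reversed(back):          # follow backpointers to recover path
        s = ptr[s]
        path.append(s)
    return path[::-1]

# two candidate interpolators; a small switch penalty tolerates one change
print(viterbi([[0.0, 1.0], [0.4, 0.0], [1.0, 0.0]], 0.5))  # [0, 1, 1]
```

With a large switch penalty the same costs yield a constant-state path, which is the trade-off between local fit and sequence smoothness that the trellis encodes.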
A Hybrid Interpolation for Reconstructing Near-Surface Models
Institute of Scientific and Technical Information of China (English)
魏亦文; 王彦春
2012-01-01
When both the number of control points and the size of the grid become large, TPS-based reconstruction of a near-surface model is often very time-consuming, which reduces the efficiency of near-surface model building in static correction. To address this problem, a hybrid interpolation combining thin-plate spline (TPS) with cubic spline interpolation is used for reconstructing the near-surface model. First, recursive LU decomposition and GPU-accelerated LU decomposition are used to solve the large linear system of equations, and the TPS interpolation function is created. Second, the grid is thinned with appropriate steps in the X and Y directions, and the TPS interpolation function is evaluated on the resulting sparse grid; a cubic spline interpolation function is then created from the sparse grid points and used to calculate the values of the remaining points. Finally, OpenGL visualizes the 3D near-surface model. The experimental results show that this algorithm speeds up the reconstruction of the near-surface model while approximating TPS interpolation in accuracy.
Cascaded Network Body Channel Model for Intrabody Communication.
Wang, Hao; Tang, Xian; Choy, Chiu Sing; Sobelman, Gerald E
2016-07-01
Intrabody communication has been of great research interest in recent years. This paper proposes a novel, compact but accurate body transmission channel model based on RC distribution networks and transmission line theory. The comparison between simulation and measurement results indicates that the proposed approach accurately models the body channel characteristics. In addition, the impedance-matching networks at the transmitter output and the receiver input further maximize the power transferred to the receiver, relax the receiver complexity, and increase the transmission performance. Based on the simulation results, the power gain can be increased by up to 16 dB after matching. A binary phase-shift keying modulation scheme is also used to evaluate the bit-error-rate improvement.
Modeling elephant-mediated cascading effects of water point closure.
Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F
2015-03-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in availability of artificial WPs; levels of natural water; and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows showed to be less affected by the closure of WPs than most of the other herbivore species. Our study contributes to ecologically
Hogan, Robert C
2007-01-01
A cascade model is described based on multiplier distributions determined from 3D direct numerical simulations (DNS) of turbulent particle laden flows, which include two-way coupling between the phases at global mass loadings equal to unity. The governing Eulerian equations are solved using pseudo-spectral methods on up to 512^3 computational grid points. DNS results for particle concentration and enstrophy at Taylor microscale Reynolds numbers in the range 34 - 170 were used to directly determine multiplier distributions (PDFs) on spatial scales 3 times the Kolmogorov length scale. The width of the PDFs, which is a measure of intermittency, decreases with increasing mass loading within the local region where the multipliers are measured. The functional form of this dependence is not sensitive to Reynolds numbers in the range considered. A partition correlation probability is included in the cascade model to account for the observed spatial anticorrelation between particle concentration and enstrophy. Joint pr...
Martel, Michelle M.; Pierce, Laura; Nigg, Joel T.; Jester, Jennifer M.; Adams, Kenneth; Puttler, Leon I.; Buu, Anne; Fitzgerald, Hiram; Zucker, Robert A.
2009-01-01
Temperament traits may increase risk for developmental psychopathology like Attention-Deficit/Hyperactivity Disorder (ADHD) and disruptive behaviors during childhood, as well as predisposing to substance abuse during adolescence. In the current study, a cascade model of trait pathways to adolescent substance abuse was examined. Component hypotheses were that (a) maladaptive traits would increase risk for inattention/hyperactivity, (b) inattention/hyperactivity would increase risk for disrupti...
Institute of Scientific and Technical Information of China (English)
张秋实; 朱锋杰; 周浩淼
2015-01-01
A lumped-equivalent circuit model of a novel magnetoelectric tunable bandpass filter, which is realized in the form of multi-stage cascading between a plurality of magnetoelectric laminates, is established in this paper for convenient analysis. The multi-stage cascaded filter is degraded to the coupling microstrip filter with only one magnetoelectric laminate and then compared with the existing experiment results. The comparison reveals that the insertion loss curves predicted by the degraded circuit model are in good agreement with the experiment results and the predicted results of the electromagnetic field simulation, thus the validity of the model is verified. The model is then degraded to the two-stage cascaded magneto-electric filter with two magnetoelectric laminates. It is revealed that if the applied external bias magnetic or electric fields on the two magnetoelectric laminates are identical, then the passband of the filter will drift under the changed external field; that is to say, the filter has the characteristics of external magnetic field tunability and electric field tunability. If the applied external bias magnetic or electric fields on two magnetoelectric laminates are different, then the passband will disappear so that the switching characteristic is achieved. When the same magnetic fields are applied to the laminates, the passband bandwidth of the two-stage cascaded magnetoelectric filter with two magnetoelectric laminates becomes nearly doubled in comparison with the passband filter which contains only one magnetoelectric laminate. The bandpass effect is also improved obviously. This research will provide a theoretical basis for the design, preparation, and application of a new high performance magnetoelectric tunable microwave device.
Directory of Open Access Journals (Sweden)
L. Yao
2011-03-01
Full Text Available Relations between mineralization and certain geological processes are established mostly by geologist's knowledge of field observations. However, these relations are descriptive and a quantitative model of how certain geological processes strengthen or hinder mineralization is not clear, that is to say, the mechanism of the interactions between mineralization and the geological framework has not been thoroughly studied. The dynamics behind these interactions are key in the understanding of fractal or multifractal formations caused by mineralization, among which singularities arise due to anomalous concentration of metals in narrow space. From a statistical point of view, we think that cascade dynamics play an important role in mineralization and studying them can reveal the nature of the various interactions throughout the process. We have constructed a multiplicative cascade model to simulate these dynamics. The probabilities of mineral deposit occurrences are used to represent direct results of mineralization. Multifractal simulation of probabilities of mineral potential based on our model is exemplified by a case study dealing with hydrothermal gold deposits in southern Nova Scotia, Canada. The extent of the impacts of certain geological processes on gold mineralization is related to the scale of the cascade process, especially to the maximum cascade division number n_{max}. Our research helps to understand how the singularity occurs during mineralization, which remains unanswered up to now, and the simulation may provide a more accurate distribution of mineral deposit occurrences that can be used to improve the results of the weights of evidence model in mapping mineral potential.
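The multiplicative cascade driving these simulations can be sketched in a few lines; the two-piece partition and the mass fraction below are illustrative assumptions, not the authors' calibrated values:

```python
import numpy as np

def multiplicative_cascade(n_max, p=0.7, seed=0):
    """1-D binomial multiplicative cascade: at each of n_max division
    steps every cell splits in two, and its mass is partitioned into
    fractions p and 1 - p, assigned to the two halves at random."""
    rng = np.random.default_rng(seed)
    mass = np.array([1.0])
    for _ in range(n_max):
        frac = np.where(rng.random(mass.size) < 0.5, p, 1.0 - p)
        mass = np.column_stack((mass * frac, mass * (1.0 - frac))).ravel()
    return mass

field = multiplicative_cascade(n_max=10)
```

After 10 divisions the 1024 cells conserve total mass exactly, while the single richest cell carries p^10 of it, the kind of anomalous concentration in narrow space that the abstract calls a singularity.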
Hogan, R C; Cuzzi, J N
2007-05-01
A cascade model is described based on multiplier distributions determined from three-dimensional (3D) direct numerical simulations (DNS) of turbulent particle-laden flows, which include two-way coupling between the phases at global mass loadings equal to unity. The governing Eulerian equations are solved using pseudospectral methods on up to 512^3 computational grid points. DNS results for particle concentration and enstrophy at Taylor microscale Reynolds numbers in the range 34-170 were used to directly determine multiplier distributions on spatial scales three times the Kolmogorov length scale. The multiplier probability distribution functions (PDFs) are well characterized by the beta distribution function. The width of the PDFs, which is a measure of intermittency, decreases with increasing mass loading within the local region where the multipliers are measured. The functional form of this dependence is not sensitive to Reynolds numbers in the range considered. A partition correlation probability is included in the cascade model to account for the observed spatial anticorrelation between particle concentration and enstrophy. Joint probability distribution functions of concentration and enstrophy generated using the cascade model are shown to be in excellent agreement with those derived directly from our 3D simulations. Probabilities predicted by the cascade model are presented at Reynolds numbers well beyond what is achievable by direct simulation. These results clearly indicate that particle mass loading significantly reduces the probabilities of high particle concentration and enstrophy relative to those resulting from unloaded runs. Particle mass density appears to reach a limit at around 100 times the gas density. This approach has promise for significant computational savings in certain applications.
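A cascade with beta-distributed multipliers, as described above, can be sketched as follows; the shape parameter is a placeholder, not a value fitted to the DNS:

```python
import numpy as np

def beta_cascade(levels, a=4.0, seed=1):
    """Cascade with symmetric Beta(a, a) multipliers: each cell splits in
    two, the halves receiving fractions m and 1 - m of the parent value.
    A larger `a` gives a narrower multiplier PDF, i.e. less intermittency,
    mimicking the narrowing observed with increasing mass loading."""
    rng = np.random.default_rng(seed)
    c = np.array([1.0])
    for _ in range(levels):
        m = rng.beta(a, a, size=c.size)
        c = np.column_stack((c * m, c * (1.0 - m))).ravel()
    return c * c.size  # rescale so the mean concentration is 1

conc = beta_cascade(levels=12)
```

Narrowing the multiplier PDF (larger `a`) visibly suppresses the high-concentration tail of the resulting field, consistent with the loading effect reported above.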
Interpolating point spread function anisotropy
Gentile, M.; Courbin, F.; Meylan, G.
2013-01-01
Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). In all our methods we model the PSF using a single Moffat profile and we interpolate the fitted parameters at a set of required positions. This allowed us to win the Star-challenge of GREAT10, with the B-splines method. However, we also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics σ²_sys better than the 1
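Of the local schemes compared above, IDW is the simplest to sketch. The Moffat-width values at the star positions below are hypothetical, purely for illustration:

```python
import numpy as np

def idw(star_xy, values, query_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting: the estimate at each query position is
    a distance-weighted average of the values known at star positions."""
    d = np.linalg.norm(star_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

stars = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fwhm = np.array([2.0, 2.2, 2.4, 2.6])     # hypothetical fitted PSF widths
queries = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(stars, fwhm, queries)
```

At the symmetric centre the estimate is the plain average of the four star values; at a star position it reproduces the star's own value, as any interpolator must.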
Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States
Sousan, Sinan Dhia Jameel
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
SIMULATION MODELING OF AN ENHANCED LOW-EMISSION SWIRL-CASCADE BURNER
Energy Technology Data Exchange (ETDEWEB)
Ala Qubbaj
2003-10-01
The numerical computations were conducted using the CFD-CHEMKIN computational program. A cell-centered control volume approach was used, in which the discretized equations, or finite difference equations (FDE), were formulated by evaluating and integrating fluxes across the faces of control volumes in order to satisfy the continuity, momentum, energy and mixture-fraction conservation equations. The first-order upwind scheme and the well-known SIMPLEC algorithm were used. The standard k-ε model was used to close the set of equations. The thermal and composition fields in the baseline, cascade, swirl, and swirl-cascade burners were simulated. The temperature and CO2 concentration fields have been computed and the observations are reported. The analysis of these results is currently underway.
DRC: a dual route cascaded model of visual word recognition and reading aloud.
Coltheart, M; Rastle, K; Perry, C; Langdon, R; Ziegler, J
2001-01-01
This article describes the Dual Route Cascaded (DRC) model, a computational model of visual word recognition and reading aloud. The DRC is a computational realization of the dual-route theory of reading, and is the only computational model of reading that can perform the 2 tasks most commonly used to study reading: lexical decision and reading aloud. For both tasks, the authors show that a wide variety of variables that influence human latencies influence the DRC model's latencies in exactly the same way. The DRC model simulates a number of such effects that other computational models of reading do not, but there appear to be no effects that any other current computational model of reading can simulate but that the DRC model cannot. The authors conclude that the DRC model is the most successful of the existing computational models of reading.
Institute of Scientific and Technical Information of China (English)
Xing-hua WANG
2007-01-01
Explicit representations for the Hermite interpolation and its derivatives of any order are provided. Furthermore, suppose that the interpolated function f has continuous derivatives of sufficiently high order on some sufficiently small neighborhood of a given point x, and that a group of nodes is given on that neighborhood. If the derivatives of any order of the Hermite interpolation polynomial of f at the point x are used to approximate the corresponding derivatives of the function f(x), asymptotic representations for the remainder are presented.
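The idea above can be illustrated numerically with SciPy's cubic Hermite interpolant, built from values and first derivatives at the nodes; its derivative then approximates f' between the nodes (the function f = sin is an illustrative choice):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hermite data: values and first derivatives of f(x) = sin(x) at the nodes
x = np.linspace(0.0, np.pi, 7)
H = CubicHermiteSpline(x, np.sin(x), np.cos(x))

# Between nodes, the interpolant and its derivative approximate f and f'
xq = 1.0
err_f = abs(float(H(xq)) - np.sin(xq))
err_df = abs(float(H.derivative()(xq)) - np.cos(xq))
```

As the remainder analysis above suggests, the derivative of the interpolant converges to f' more slowly (one order lower in the node spacing) than the interpolant converges to f.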
Neural Network Methods for NURBS Curve and Surface Interpolation
Institute of Scientific and Technical Information of China (English)
秦开怀
1997-01-01
New algorithms based on artificial neural network models are presented for cubic NURBS curve and surface interpolation. When all the knot spans are identical, the NURBS curve interpolation procedure degenerates into that of uniform rational B-spline curves. If all the weights of the data points are identical, then the NURBS curve interpolation procedure degenerates into integral B-spline curve interpolation.
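The degenerate case mentioned above (all weights equal, so NURBS interpolation reduces to integral B-spline interpolation) can be sketched with SciPy; the uniform parameterization and the circle data are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

t = np.linspace(0.0, 1.0, 9)                  # uniform parameter (assumed)
pts = np.column_stack((np.cos(2*np.pi*t),     # data points on a curve
                       np.sin(2*np.pi*t)))
curve = make_interp_spline(t, pts, k=3)       # cubic B-spline through pts

on_curve = curve(t)                           # the spline interpolates the data
```

Reintroducing per-point weights (the rational case) is what requires the NURBS machinery the abstract refers to; the integral B-spline above is its weight-one limit.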
Generating Symbolic Interpolants for Scattered Data with Normal Vectors
Institute of Scientific and Technical Information of China (English)
Ming Li; Xiao-Shan Gao; Jin-San Cheng
2005-01-01
Algorithms to generate a triangular or a quadrilateral interpolant with G1-continuity are given in this paper for arbitrary scattered data with associated normal vectors over a prescribed triangular or quadrilateral decomposition. With the algorithms, we may obtain interpolants in complete symbolic parametric form, leading to fast computation of the interpolant. A dynamic interpolation solid modelling software package, DISM, is implemented based on the algorithm; it can be used to generate and manipulate solid objects in an interactive way.
INTERPOLATION METHODS AND ACCURACY ANALYSIS BASED ON GRID QUASI-GEOID MODEL%基于似大地水准面格网的插值方法及精度分析
Institute of Scientific and Technical Information of China (English)
张兴福; 魏德宏
2011-01-01
There are two factors affecting the accuracy of GPS height transformation based on a quasi-geoid model: the geodetic height accuracy and the interpolated height anomaly accuracy. On the basis of simulated and practical quasi-geoid models, the effect and accuracy of GPS height anomaly interpolation are analyzed with four methods: inverse distance weighting, linear interpolation, Shepard interpolation and Chebyshev interpolation. The results show that the Chebyshev interpolation method is accurate and stable for high-resolution quasi-geoid models.
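The grid-based workflow above can be sketched with a synthetic quasi-geoid lattice; bilinear interpolation stands in for the methods compared in the paper, and all values are invented for illustration:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical quasi-geoid grid: height anomaly zeta (m) on a lat/lon lattice
lat = np.linspace(30.0, 31.0, 11)
lon = np.linspace(110.0, 111.0, 11)
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
zeta = 0.5 * LAT + 0.2 * LON              # smooth synthetic anomaly surface

interp = RegularGridInterpolator((lat, lon), zeta, method="linear")
zeta_gps = float(interp([[30.55, 110.25]])[0])   # anomaly at a GPS point

# GPS height transformation: normal height = geodetic height - height anomaly
H_normal = 100.0 - zeta_gps
```

Both error sources named in the abstract enter here: the 100.0 m geodetic height carries the GPS measurement error, and zeta_gps carries the interpolation error.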
Hassan, Ehab; Hatch, D. R.; Morrison, P. J.; Horton, W.
2016-09-01
Progress in understanding the coupling between plasma instabilities in the equatorial electrojet based on a unified fluid model is reported. Simulations with parameters set to various ionospheric background conditions revealed properties of the gradient-drift and Farley-Buneman instabilities. Notably, sharper density gradients increase linear growth rates at all scales, whereas variations in cross-field E × B drift velocity only affect small-scale instabilities. A formalism defining turbulent fluctuation energy for the system is introduced, and the turbulence is analyzed within this framework. This exercise serves as a useful verification test of the numerical simulations and also elucidates the physics underlying the ionospheric turbulence. Various physical mechanisms involved in the energetics are categorized as sources, sinks, nonlinear transfer, and cross-field coupling. The physics of the nonlinear transfer terms is studied to identify their roles in producing energy cascades, which explain the generation of small-scale structures that are stable in the linear regime. The theory of two-step energy cascading to generate the 3 m plasma irregularities in the equatorial electrojet is verified for the first time in the fluid regime. In addition, the nonlinearity of the system allows the possibility of an inverse energy cascade, potentially responsible for generating large-scale plasma structures at the top of the electrojet as found in different rocket and radar observations.
Signal-to-noise performance analysis of streak tube imaging lidar systems. I. Cascaded model.
Yang, Hongru; Wu, Lei; Wang, Xiaopeng; Chen, Chao; Yu, Bing; Yang, Bin; Yuan, Liang; Wu, Lipeng; Xue, Zhanli; Li, Gaoping; Wu, Baoning
2012-12-20
Streak tube imaging lidar (STIL) is an active imaging system using a pulsed laser transmitter and a streak tube receiver to produce 3D range and intensity imagery. The STIL has recently attracted a great deal of interest and attention due to its advantages of wide azimuth field-of-view, high range and angle resolution, and high frame rate. This work investigates the signal-to-noise performance of STIL systems. A theoretical model for characterizing the signal-to-noise performance of the STIL system with an internal or external intensified streak tube receiver is presented, based on the linear cascaded systems theory of signal and noise propagation. The STIL system is decomposed into a series of cascaded imaging chains whose signal and noise transfer properties are described by the general (or the spatial-frequency dependent) noise factors (NFs). Expressions for the general NFs of the cascaded chains (or the main components) in the STIL system are derived. The work presented here is useful for the design and evaluation of STIL systems.
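As a scalar illustration of how noise factors accumulate along a cascaded imaging chain (the paper's general NFs are spatial-frequency dependent; the stage values below are made up), a Friis-style combination reads:

```python
def cascaded_noise_factor(stages):
    """Combine (noise_factor, gain) pairs of a cascaded chain:
    F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    Stages preceded by high gain contribute little extra noise."""
    total, gain = 0.0, 1.0
    for i, (F, G) in enumerate(stages):
        total = F if i == 0 else total + (F - 1.0) / gain
        gain *= G
    return total

# e.g. photocathode, intensifier, sensor stages (illustrative numbers only)
F_total = cascaded_noise_factor([(1.2, 10.0), (1.5, 50.0), (2.0, 1.0)])
```

The design implication matches the abstract's framing: placing the high-gain intensification early in the chain keeps the overall noise factor close to that of the first stage.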
Displacement cascades in Fe-Ni-Mn-Cu alloys: RPV model alloys
Terentyev, D.; Zinovev, A.; Bonny, G.
2016-07-01
Primary damage due to displacement cascades (10-100 keV) has been assessed in Fe-1%Mn-1%Ni-0.5%Cu and its binary alloys by molecular dynamics (MD), using a recent interatomic potential specially developed to address features of the Fe-Mn-Ni-Cu system in the dilute limit. The latter system represents the model matrix for reactor pressure vessel steels. The applied potential reproduces the major interaction features of the solutes with point defects in the binary, ternary and quaternary dilute alloys. As compared to pure Fe, the addition of one type of solute, or of all solutes together, does not change the major characteristics of primary damage. However, the chemical structure of the self-interstitial defects is strongly sensitive to the presence and distribution of Mn and Cu in the matrix. 20 keV cascades were also studied in the Fe-Ni-Mn-Cu matrix containing dislocation loops (with a density of 10^24 m^-3 and size of 2 nm). Two solute distributions were investigated, namely a random one and one obtained by Metropolis Monte Carlo simulations from our previous work. The presence of the loops did not affect the defect production efficiency but slightly reduced the fraction of isolated self-interstitials and vacancies. The cascade event led to the transformation of the loops into ½ glissile configurations with a success rate of 10% in the matrix with the random solute distribution, while all the pre-created loops remained stable when the Monte Carlo solute distribution was applied. This suggests that solute segregation to loops "stabilizes" the pre-existing loops against transformation or migration induced by collision cascades.
Modified k-ω model using kinematic vorticity for corner separation in compressor cascades
Institute of Scientific and Technical Information of China (English)
LIU YangWei; YAN Hao; FANG Le; LU LiPeng; LI QiuShi; SHAO Liang
2016-01-01
A new method of modifying the conventional k-ω turbulence model for corner separation is proposed in this paper. The production term in the ω equation is modified using the kinematic vorticity, which accounts for fluid rotation and deformation under complex geometric boundary conditions. The corner separation flow in linear compressor cascades is calculated using the original k-ω model, the modified k-ω model and the Reynolds stress model (RSM). The numerical results of the modified model are compared with the available experimental data, as well as with the corresponding results of the original k-ω model and the RSM. In terms of accuracy, the modified model, which significantly improves the performance of the original k-ω model for predicting corner separation, is quite competitive with the RSM. Moreover, the modified model, which has considerably lower computational cost, is more robust than the RSM.
Interferometric interpolation of sparse marine data
Hanafy, Sherif M.
2013-10-11
We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both the recorded and the model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.
样条型矩阵有理插值%SPLINE-TYPE MATRIX VALUED RATIONAL INTERPOLATION
Institute of Scientific and Technical Information of China (English)
杨松林
2005-01-01
Matrix-valued rational interpolation is very useful in the partial realization problem and in model reduction throughout linear system theory. Lagrange basis functions have been used in matrix-valued rational interpolation. In this paper, drawing on the properties of cardinal spline interpolation, we construct a spline-type matrix-valued rational interpolant based on cardinal splines. This spline-type interpolation avoids the instability of high-order polynomial interpolation, and we obtain a useful formula.
Diversification improves interpolation
Giesbrecht, Mark
2011-01-01
We consider the problem of interpolating an unknown multivariate polynomial with coefficients taken from a finite field or as numerical approximations of complex numbers. Building on the recent work of Garg and Schost, we improve on the best-known algorithm for interpolation over large finite fields by presenting a Las Vegas randomized algorithm that uses fewer black box evaluations. Using related techniques, we also address numerical interpolation of sparse complex polynomials, and provide the first provably stable algorithm (in the sense of relative error) for this problem, at the cost of modestly more interpolation points. A key new technique is a randomization which makes all coefficients of the unknown polynomial distinguishable, producing what we call a diverse polynomial. Another departure of our algorithms from most previous approaches is that they do not rely on root finding as a subroutine. We show how these improvements affect the practical performance with trial implementations.
Extension Of Lagrange Interpolation
Directory of Open Access Journals (Sweden)
Mousa Makey Krady
2015-01-01
Full Text Available Abstract This paper presents a generalization of Lagrange interpolation polynomials to higher dimensions by using Cramer's formula. The aim is to construct polynomials in space whose interpolation error tends to zero.
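The construction can be sketched in two dimensions; here the generalized Vandermonde system is solved numerically rather than by Cramer's formula, but the resulting interpolant is the same (nodes, basis and test function are all illustrative):

```python
import numpy as np

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = lambda x, y: 2.0 + 3.0*x - y + 5.0*x*y       # function to interpolate
basis = lambda x, y: np.array([1.0, x, y, x*y])  # bilinear polynomial basis

V = np.array([basis(*n) for n in nodes])         # generalized Vandermonde matrix
coeffs = np.linalg.solve(V, np.array([f(*n) for n in nodes]))

def p(x, y):
    """The Lagrange-type interpolant; exact here since f lies in the span."""
    return float(basis(x, y) @ coeffs)
```

Cramer's formula gives each coefficient as a ratio of determinants of V; the linear solve above is the numerically stable equivalent.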
Mitigating cascades in sandpile models: an immunization strategy for systemic risk?
Scala, Antonio; Zlatić, Vinko; Caldarelli, Guido; D'Agostino, Gregorio
2016-10-01
We use a simple model of distress propagation (the sandpile model) to show how financial systems are naturally subject to the risk of systemic failures. Taking into account possible network structures among financial institutions, we investigate if simple policies can limit financial distress propagation to avoid system-wide crises, i.e. to dampen systemic risk. We therefore compare different immunization policies (i.e. targeted helps to financial institutions) and find that the information coming from the network topology allows to mitigate systemic cascades by targeting just few institutions.
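A minimal Bak-Tang-Wiesenfeld-style sandpile sketch of the distress propagation above (open boundaries; the grid size and threshold are illustrative): one extra unit of distress on a critically loaded grid triggers a system-wide cascade.

```python
import numpy as np

def relax(load, threshold=4):
    """Topple every cell holding >= threshold units, shedding one unit to
    each of its four neighbours; units crossing the boundary are lost
    (losses absorbed outside the system). Returns the stable grid and
    the total number of topplings, i.e. the cascade size."""
    load = load.copy()
    topplings = 0
    while (load >= threshold).any():
        unstable = load >= threshold
        topplings += int(unstable.sum())
        load[unstable] -= threshold
        shed = unstable.astype(int)
        load[1:, :] += shed[:-1, :]
        load[:-1, :] += shed[1:, :]
        load[:, 1:] += shed[:, :-1]
        load[:, :-1] += shed[:, 1:]
    return load, topplings

grid = np.full((5, 5), 3)     # every institution one unit below failure
grid[2, 2] += 1               # a single extra unit of distress
stable, cascade_size = relax(grid)
```

An immunization policy in this picture amounts to lowering the load of a few targeted cells before the shock, which cuts the avalanche off early.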
Elliptic flow in a hadron-string cascade model at 130 GeV energy
Indian Academy of Sciences (India)
P K Sahu; A Ohnishi; M Isse; N Otuka; S C Phatak
2006-08-01
We present the analysis of elliptic flow at $\sqrt{s} = 130A$ GeV energy in a hadron-string cascade model. We find that the final hadronic yields are qualitatively described. The elliptic flow $v_2$ is reasonably well described at low transverse momentum ($p_T < 1$ GeV/c) in mid-central collisions. On the other hand, this model does not explain $v_2$ at high $p_T$ or in peripheral collisions, and thus it generally underestimates the elliptic flow at RHIC energy.
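For reference, $v_2$ is the second Fourier coefficient of the azimuthal particle distribution, extracted as an average of cos 2φ relative to the event plane. A toy Monte Carlo (hypothetical input anisotropy, event plane fixed at zero) shows the extraction:

```python
import numpy as np

rng = np.random.default_rng(7)
v2_true = 0.05                        # hypothetical input anisotropy

# Sample azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos(2 phi) by rejection
phi = np.empty(0)
while phi.size < 20000:
    cand = rng.uniform(0.0, 2.0 * np.pi, 10000)
    accept = rng.uniform(0.0, 1.0 + 2.0 * v2_true, cand.size) \
        < 1.0 + 2.0 * v2_true * np.cos(2.0 * cand)
    phi = np.concatenate((phi, cand[accept]))
phi = phi[:20000]

v2_est = np.cos(2.0 * phi).mean()     # recovers v2_true within statistics
```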
Significance of initial interpolation in band-limited signal interpolation
Yegnanarayana, B.; Fathima, S. Tanveer; Nehru, B. T. K. R.; Venkataramanan, B.
1989-01-01
An improved version of the Papoulis algorithm for bandlimited signal interpolation is presented. This algorithm uses the concept of initial interpolation. The justification for initial interpolation is developed only through experimental studies. It is shown that the performance of the interpolation scheme depends on the number and distribution of the known data samples.
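The iteration itself is compact. The sketch below (synthetic band-limited signal, and a plain zero fill as the crudest "initial interpolation") alternates between enforcing the band limit in the Fourier domain and the known samples in the signal domain; it is a generic Papoulis-Gerchberg-style loop, not the paper's improved variant:

```python
import numpy as np

def papoulis_interpolate(samples, known, band, iters=1000):
    """Alternately low-pass the current estimate (|f| <= band, in
    cycles/sample) and re-impose the known samples; unknown samples
    start from zero."""
    keep = np.abs(np.fft.fftfreq(samples.size)) <= band
    x = np.where(known, samples, 0.0)
    for _ in range(iters):
        x = np.fft.ifft(np.fft.fft(x) * keep).real
        x[known] = samples[known]
    return x

n = 128
t = np.arange(n)
sig = np.cos(2*np.pi*3*t/n) + 0.5*np.sin(2*np.pi*5*t/n)  # band-limited
known = np.ones(n, dtype=bool)
known[45:55] = False                                     # lost samples
rec = papoulis_interpolate(sig, known, band=8.0/n)
gap_err = np.abs(rec - sig)[~known].max()
```

The paper's point is that a better initial interpolation than the zero fill used here speeds up and robustifies exactly this loop, especially for unevenly distributed known samples.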
A cascade model of information processing and encoding for retinal prosthesis
Directory of Open Access Journals (Sweden)
Zhi-jun Pei
2016-01-01
Full Text Available Retinal prosthesis offers a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Several retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to a visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and Poisson coding by neurons, a cascade model of the retina, with the outer plexiform layer for information processing and the inner plexiform layer for information encoding, is put forward; it integrates both the anatomical connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to a stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and then Poisson spike generation. The simulation results suggest that such a cascade model can recreate the visual information processing and encoding functionality of the retina, which is helpful in developing an artificial retina for the retinally blind.
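The four steps can be sketched as a linear-nonlinear-Poisson (LNP) chain; the receptive field, gain and time step below are illustrative assumptions, and the radial-sampling step is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

stimulus = rng.normal(size=(100, 16))         # frames x pixels
rf = np.exp(-np.linspace(-2.0, 2.0, 16)**2)   # Gaussian receptive field
rf /= rf.sum()

drive = stimulus @ rf                         # 1. linear spatiotemporal filter
rate = 20.0 * np.maximum(drive, 0.0)          # 2. static nonlinear rectification (Hz)
dt = 0.01                                     # frame duration, s (assumed)
spikes = rng.poisson(rate * dt)               # 4. Poisson spike generation
```

The rectification guarantees non-negative firing rates, which is what makes the final Poisson encoding step well defined.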
The cascade model of teachers’ continuing professional development in Kenya: A time for change?
Directory of Open Access Journals (Sweden)
Harry Kipkemoi Bett
2016-12-01
Full Text Available Kenya is one of the countries whose teachers the UNESCO (2015) report cited as lacking curriculum support in the classroom. As is the case in many African countries, a large portion of teachers in Kenya enter the teaching profession inadequately prepared, while those already in the field receive insufficient support in their professional lives. The cascade model has often been utilized in the country whenever the need for teachers' continuing professional development (TCPD) has arisen, especially on a large scale. The preference for the model is due to, among other things, its cost effectiveness and its ability to reach many teachers within a short period of time. Many researchers have, however, cast aspersions on this model for its glaring shortcomings. By contrast, TCPD programmes that are collaborative in nature and based on teachers' contexts have been found to be more effective than those that are not. This paper briefly examines cases of the cascade model in Kenya and the challenges associated with it, and proposes the adoption of collaborative and institution-based models to mitigate these challenges. The education sectors in many African nations, and in the developing world generally, will find the discussion here relevant.
Information Theory Analysis of Cascading Process in a Synthetic Model of Fluid Turbulence
Directory of Open Access Journals (Sweden)
Massimo Materassi
2014-02-01
Full Text Available The use of transfer entropy has proven helpful in detecting the direction of dynamical driving in the interaction of two processes, X and Y. In this paper, we present a different normalization for the transfer entropy, which is capable of better detecting the direction of information transfer. This new normalized transfer entropy is applied to detecting the direction of energy flux transfer in a synthetic model of fluid turbulence, namely the Gledzer-Ohkitana-Yamada shell model. This is a well-known model of fully developed turbulence in Fourier space, characterized by an energy cascade towards the small scales (large wavenumbers k), so that applying the information-theoretic analysis to its output tests the reliability of the analysis tool rather than exploring the model physics. As a result, the presence of a direct cascade along the scales of the shell model and the locality of the interactions in wavenumber space emerge as expected, indicating the validity of this data-analysis tool. In this context, the normalized version of transfer entropy, able to account for the difference in the intrinsic randomness of the interacting processes, appears to perform better, avoiding the erroneous conclusions to which the "traditional" transfer entropy would lead.
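The quantity being normalized can be sketched with a plug-in estimator for symbol sequences (one-step histories only; a unidirectionally coupled pair serves as the check):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (bits) of T_{Y->X} with one-step histories:
    T = sum over states of p(x1,x0,y0) * log2[ p(x1|x0,y0) / p(x1|x0) ]."""
    x, y = list(x), list(y)
    trip = Counter(zip(x[1:], x[:-1], y[:-1]))
    pair = Counter(zip(x[1:], x[:-1]))
    hist_xy = Counter(zip(x[:-1], y[:-1]))
    hist_x = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in trip.items():
        p_joint = c / n
        p_cond_xy = c / hist_xy[(x0, y0)]
        p_cond_x = pair[(x1, x0)] / hist_x[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 5000)
x = np.empty_like(y)
x[0] = 0
x[1:] = y[:-1]                    # X copies Y with a one-step delay

te_yx = transfer_entropy(x, y)    # driving direction: close to 1 bit
te_xy = transfer_entropy(y, x)    # reverse direction: close to 0 bits
```

The asymmetry te_yx >> te_xy is what identifies Y as the driver; the paper's contribution is a normalization of this raw quantity that remains reliable when X and Y have very different intrinsic randomness.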
A probabilistic sediment cascade model of sediment transfer in the Illgraben
Bennett, G. L.; Molnar, P.; McArdell, B. W.; Burlando, P.
2014-02-01
We present a probabilistic sediment cascade model to simulate sediment transfer in a mountain basin (Illgraben, Switzerland) where sediment is produced by hillslope landslides and rockfalls and exported out of the basin by debris flows and floods. The model conceptualizes the fluvial system as a spatially lumped cascade of connected reservoirs representing hillslope and channel storages where sediment goes through cycles of storage and remobilization by surface runoff. The model includes all relevant hydrological processes that lead to runoff formation in an Alpine basin, such as precipitation, snow accumulation, snowmelt, evapotranspiration, and soil water storage. Although the processes of sediment transfer and debris flow generation are described in a simplified manner, the model produces complex sediment discharge behavior which is driven by the availability of sediment and antecedent wetness conditions (system memory) as well as the triggering potential (climatic forcing). The observed probability distribution of debris flow volumes and their seasonality in 2000-2009 are reproduced. The stochasticity of hillslope sediment input is important for reproducing realistic sediment storage variability, although many details of the hillslope landslide triggering procedures are filtered out by the sediment transfer system. The model allows us to explicitly quantify the division into transport and supply-limited sediment discharge events. We show that debris flows may be generated for a wide range of rainfall intensities because of variable antecedent basin wetness and snowmelt contribution to runoff, which helps to understand the limitations of methods based on a single rainfall threshold for debris flow initiation in Alpine basins.
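The reservoir-cascade idea can be sketched with two lumped storages and runoff-scaled transfer rates; all coefficients and forcings below are invented for illustration, not the Illgraben calibration:

```python
import numpy as np

rng = np.random.default_rng(11)
days = 365
runoff = np.maximum(rng.normal(2.0, 2.0, days), 0.0)  # mm/day, synthetic

hillslope, channel = 1000.0, 200.0   # sediment storages, m^3
k_h, k_c = 0.002, 0.01               # runoff-scaled transfer coefficients
supplied = 0.0
export = np.zeros(days)

for d in range(days):
    landslide = rng.poisson(0.05) * 50.0      # stochastic hillslope input, m^3
    hillslope += landslide
    supplied += landslide
    to_channel = k_h * runoff[d] * hillslope  # remobilization by surface runoff
    export[d] = k_c * runoff[d] * channel     # debris-flow/flood export
    hillslope -= to_channel
    channel += to_channel - export[d]
```

Even in this stripped-down form the two regimes named above appear: export is supply-limited when the channel storage is drawn down, and transport-limited when runoff is the bottleneck despite full storages.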
Energy Technology Data Exchange (ETDEWEB)
Cammin, Jochen, E-mail: jcammin1@jhmi.edu, E-mail: ktaguchi@jhmi.edu; Taguchi, Katsuyuki, E-mail: jcammin1@jhmi.edu, E-mail: ktaguchi@jhmi.edu [Division of Medical Imaging Physics, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21287 (United States); Xu, Jennifer [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21287 (United States); Barber, William C.; Iwanczyk, Jan S.; Hartsough, Neal E. [DxRay, Inc., Northridge, California 91324 (United States)
2014-04-15
Purpose: Energy discriminating, photon-counting detectors (PCDs) are an emerging technology for computed tomography (CT) with various potential benefits for clinical CT. The photon energies measured by PCDs can be distorted due to the interactions of a photon with the detector and the interaction of multiple coincident photons. These effects result in distorted recorded x-ray spectra which may lead to artifacts in reconstructed CT images and inaccuracies in tissue identification. Model-based compensation techniques have the potential to account for the distortion effects. This approach requires only a small number of parameters and is applicable to a wide range of spectra and count rates, but it needs an accurate model of the spectral distortions occurring in PCDs. The purpose of this study was to develop a model of those spectral distortions and to evaluate the model using a PCD (model DXMCT-1; DxRay, Inc., Northridge, CA) and various x-ray spectra in a wide range of count rates. Methods: The authors hypothesize that the complex phenomena of spectral distortions can be modeled by: (1) separating them into count-rate independent factors that we call the spectral response effects (SRE), and count-rate dependent factors that we call the pulse pileup effects (PPE), (2) developing separate models for SRE and PPE, and (3) cascading the SRE and PPE models into a combined SRE+PPE model that describes PCD distortions at both low and high count rates. The SRE model describes the probability distribution of the recorded spectrum, with a photo peak and a continuum tail, given the incident photon energy. Model parameters were obtained from calibration measurements with three radioisotopes and then interpolated linearly for other energies. The PPE model used was developed in the authors’ previous work [K. Taguchi et al., “Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects,” Med. Phys. 38(2), 1089–1102 (2011
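The study obtains SRE model parameters from a few radioisotope calibration measurements and interpolates them linearly in energy. A generic sketch of that step follows; the calibration energies and parameter values are invented placeholders, not the paper's data:

```python
def lerp_params(calib, energy):
    """Piecewise-linear interpolation, in energy, of response parameters
    measured at a few calibration points (e.g. radioisotope photopeaks).
    Outside the calibrated range the nearest calibration is reused."""
    pts = sorted(calib.items())
    if energy <= pts[0][0]:
        return dict(pts[0][1])
    if energy >= pts[-1][0]:
        return dict(pts[-1][1])
    for (e0, p0), (e1, p1) in zip(pts, pts[1:]):
        if e0 <= energy <= e1:
            t = (energy - e0) / (e1 - e0)
            return {k: (1.0 - t) * p0[k] + t * p1[k] for k in p0}

# Invented placeholder calibration: photopeak width and tail fraction
calib = {60.0: {"sigma": 4.0, "tail": 0.30},
         122.0: {"sigma": 5.0, "tail": 0.24},
         511.0: {"sigma": 9.0, "tail": 0.10}}
```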
EOS Interpolation and Thermodynamic Consistency
Energy Technology Data Exchange (ETDEWEB)
Gammel, J. Tinka [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.
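One simple interpolator that shares the two properties named above (suited to logarithmically spaced grids, monotone between monotone data points) is linear interpolation in log-log space; the sketch below is an illustration of those properties, not the Kerley rational interpolator itself:

```python
import math

def loglog_interp(x, xs, ys):
    """Piecewise-linear interpolation in log-log coordinates: natural on
    logarithmically spaced grids, monotone between monotone data points,
    and exact for power laws. Requires positive xs and ys."""
    t = math.log(x)
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    if t <= lx[0]:
        return ys[0]
    if t >= lx[-1]:
        return ys[-1]
    for i in range(len(lx) - 1):
        if lx[i] <= t <= lx[i + 1]:
            f = (t - lx[i]) / (lx[i + 1] - lx[i])
            return math.exp((1.0 - f) * ly[i] + f * ly[i + 1])
```

On data lying on a power law (e.g. y = 2x sampled at x = 1, 10, 100) this scheme reproduces the law exactly between nodes.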
Billoire, Alain
2006-04-01
I use an interpolation formula, introduced recently by Guerra and Toninelli in order to prove the existence of the free energy of the Sherrington-Kirkpatrick spin glass model in the infinite volume limit, to investigate numerically the finite-size corrections to the free energy of this model. The results are compatible with a (1/(12N)) ln(N/N_0) behavior at T_c, as predicted by Parisi, Ritort, and Slanina, and a 1/N^(2/3) behavior below T_c.
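The quoted correction terms are easy to evaluate numerically; the helpers below simply compute (1/(12N)) ln(N/N_0) and the N^(-2/3) scaling as a sanity check that both corrections vanish with system size (N_0 is set to 1 for illustration):

```python
import math

def tc_correction(n, n0=1.0):
    """Finite-size correction to the free energy at T_c predicted by
    Parisi, Ritort and Slanina: (1/(12*N)) * ln(N/N0); N0 = 1 here."""
    return math.log(n / n0) / (12.0 * n)

def below_tc_scaling(n):
    """Below T_c the correction is predicted to scale as N**(-2/3)."""
    return n ** (-2.0 / 3.0)
```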
Improved Intranuclear Cascade Models for the Codes CEM2k and LAQGSM
Mashnik, S G; Sierk, A J; Prael, R E
2005-01-01
An improved version of the Cascade-Exciton Model (CEM) of nuclear reactions implemented in the codes CEM2k and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) has been developed recently at LANL to describe reactions induced by particles and nuclei at energies up to hundreds of GeV/nucleon for a number of applications. We present several improvements to the intranuclear cascade models used in CEM2k and LAQGSM developed recently to better describe the physics of nuclear reactions. First, we incorporate the photonuclear mode from CEM2k into LAQGSM to allow it to describe photonuclear reactions, not previously modeled there. Then, we develop new approximations to describe more accurately experimental elementary energy and angular distributions of secondary particles from hadron-hadron and photon-hadron interactions using available data and approximations published by other authors. Finally, to consider reactions involving very highly excited nuclei (E* > 2-3 MeV/A), we have incorporated into CEM2...
Extended Intranuclear Cascade model for pickup reactions induced by 50-MeV-range protons
Directory of Open Access Journals (Sweden)
Uozumi Yusuke
2016-01-01
Full Text Available The intranuclear cascade model was investigated to explain (p,dx) and (p,αx) reactions at incident energies of around 50 MeV. Since these reactions are governed mainly by the direct pickup process, the model was expanded to include exclusive pickup processes leading to hole-state excitations. The energy of the outgoing clusters is determined from the single-particle energies of the transferred nucleons, the reaction Q-value, and the recoil of the residual nucleus. The rescattering of the produced cluster inside the nucleus is treated within the intranuclear cascade model. The emission angle is given by the sum of the momentum vectors of the transferred nucleons in addition to the deflection at the nuclear surface, which was introduced to explain angular distributions of elastic scattering. Double differential cross sections of the reactions were calculated and compared with experimental data. The proposed model showed high predictive power over a wide range of emission energies and angles. The treatment of the cluster transport inside the nucleus was also verified.
Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo
2013-01-01
To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
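A circuit equation with a flux-conservation coefficient multiplying the drive term can be integrated with a simple forward-Euler scheme; the linear inductance profile, time scale and coefficient value below are assumptions for illustration, not the paper's generator data:

```python
def simulate_fcg(inductance, resistance, i0, t_end, steps=10000, k_flux=1.0):
    """Forward-Euler integration of L(t) dI/dt = -(k_flux * dL/dt + R) * I.
    k_flux < 1 plays the role of a flux-conservation coefficient (intrinsic
    flux losses); k_flux = 1 gives ideal flux conservation, L*I = const."""
    dt = t_end / steps
    t, current = 0.0, i0
    for _ in range(steps):
        ind = inductance(t)
        dl_dt = (inductance(t + dt) - ind) / dt   # numerical dL/dt
        current += (-(k_flux * dl_dt + resistance) * current) * dt / ind
        t += dt
    return current

# Illustrative armature profile: inductance compressed 10 uH -> 1 uH in 1 ms
L = lambda t: 1e-5 * (1.0 - 0.9 * (t / 1e-3))
i_ideal = simulate_fcg(L, 0.0, 100.0, 1e-3)            # ~10x current gain
i_lossy = simulate_fcg(L, 0.0, 100.0, 1e-3, k_flux=0.9)
```

With zero resistance and k_flux = 1 the integrator recovers the analytic flux-conservation result I_final = I_0 * L_0 / L_final; a coefficient below 1 reduces the gain, mimicking flux loss.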
Directory of Open Access Journals (Sweden)
Tilahun Derib Asfaw
2013-07-01
Full Text Available The operation of the four Perak cascading reservoirs, namely Temenggor, Bersia, Kenering and Chenderoh, was analyzed using a newly developed genetic algorithm model. The reservoirs are located in the state of Perak in Peninsular Malaysia and are used for hydroelectric power generation and flood mitigation. The hydroelectric potential of the cascading scheme is 578 MW; however, the actual annual average generation was 228 MW, about 39% of the potential. The research aimed to improve the annual average hydroelectric power generation. The fitness value was used to select the optimal option from a test of eight model-run options. After repeated runs of the optimal option, the best model parameters were found: optimality was achieved at a population size of 150, a crossover probability of 0.75 and a generation number of 60. The operation of the GA model produced an additional 12.17 MW per day, obtained with the same total annual volume of release and a similar natural inflow pattern. The additional hydroelectric power is worth over 22 million Ringgit Malaysia per year and plays a significant role in meeting the growing energy needs of the country.
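A minimal real-coded genetic algorithm with the reported optimal settings (population 150, crossover probability 0.75, 60 generations) can be sketched as follows; the fitness function is a toy stand-in, not the authors' reservoir-operation model:

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=150, p_cross=0.75,
                     generations=60, p_mut=0.05, seed=1):
    """Minimal real-coded GA: tournament selection, one-point crossover,
    clamped Gaussian mutation. Genes live in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross and n_genes > 1:
                cut = rng.randrange(1, n_genes)       # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [min(1.0, max(0.0, g + rng.gauss(0.0, 0.1)))
                     if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in objective: concave "power curve" per reservoir, peak at 0.7
power = lambda x: -sum((g - 0.7) ** 2 for g in x)
best = genetic_optimize(power, n_genes=4)
```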
Fluid-structure coupling in the guide vanes cascade of a pump-turbine scale model
Energy Technology Data Exchange (ETDEWEB)
Roth, S; Hasmatuchi, V; Botero, F; Farhat, M; Avellan, F, E-mail: steven.roth@epfl.c [Laboratory for Hydraulic Machines, Ecole Polytechnique Federale de Lausanne Av. de Cour 33bis, Lausanne, 1007 (Switzerland)
2010-08-15
The present study concerns fluid-structure coupling phenomena occurring in a guide vane cascade of a pump-turbine scale model placed in the EPFL PF3 test rig. An advanced instrument set is used to monitor both vibrating structures and the surrounding flow. The paper highlights the interaction between vibrating guide vanes and the flow behavior. The pressure fluctuations in the stay vanes region are found to be strongly influenced by the amplitude of the vibrating guide vanes. Moreover, the flow induces different hydrodynamic damping on the vibrating guide vanes depending on the operating point of the pump-turbine.
Emotion: an appraisal-coping model for the "Cascades" problem
Mahboub, Karim; Jay, Véronique; Clément, Evelyne
2009-01-01
Modeling emotion has become a challenge nowadays. Several models have therefore been produced to express human emotional activity, but only a few of them are currently able to express the close relationship between emotion and cognition. An appraisal-coping model is presented here, with the aim of simulating the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behavior when solving the problem using his or her own strategies.
iCRESTRIGRS: a coupled modeling system for cascading flood-landslide disaster forecasting
Zhang, Ke; Xue, Xianwu; Hong, Yang; Gourley, Jonathan J.; Lu, Ning; Wan, Zhanming; Hong, Zhen; Wooten, Rick
2016-12-01
Severe storm-triggered floods and landslides are two major natural hazards in the US, causing property losses of USD 6 billion and approximately 110-160 fatalities per year nationwide. Moreover, floods and landslides often occur in a cascading manner, posing significant risk and leading to losses that are significantly greater than the sum of the losses from the hazards when acting separately. It is pertinent to couple hydrological and geotechnical modeling processes to an integrated flood-landslide cascading disaster modeling system for improved disaster preparedness and hazard management. In this study, we developed the iCRESTRIGRS model, a coupled flash flood and landslide initiation modeling system, by integrating the Coupled Routing and Excess STorage (CREST) model with the physically based Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability (TRIGRS) landslide model. The iCRESTRIGRS system is evaluated in four river basins in western North Carolina that experienced a large number of floods, landslides and debris flows triggered by heavy rainfall from Hurricane Ivan during 16-18 September 2004. The modeled hourly hydrographs at four USGS gauge stations show generally good agreement with the observations during the entire storm period. In terms of landslide prediction in this case study, the coupled model has a global accuracy of 98.9 % and a true positive rate of 56.4 %. More importantly, it shows an improved predictive capability for landslides relative to the stand-alone TRIGRS model. This study highlights the important physical connection between rainfall, hydrological processes and slope stability, and provides a useful prototype model system for operational forecasting of flood and landslide.
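The two reported evaluation metrics reduce to simple ratios of confusion-matrix counts; the sketch below shows the computation with invented counts (they are not the study's actual grid-cell tallies):

```python
def classification_scores(tp, fp, tn, fn):
    """Global accuracy and true positive rate from confusion-matrix counts,
    the two metrics quoted for the landslide predictions."""
    total = tp + fp + tn + fn
    return {"accuracy": (tp + tn) / total,
            "true_positive_rate": tp / (tp + fn)}

# Invented counts for illustration only
scores = classification_scores(tp=56, fp=10, tn=920, fn=44)
```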
Cascading uncertainties in flood inundation models to uncertain estimates of damage and loss
Fewtrell, Timothy; Michel, Gero; Ntelekos, Alexandros; Bates, Paul
2010-05-01
The complexity of flood processes, particularly in urban environments, and the difficulties of collecting data during flood events present significant and particular challenges to modellers, especially when considering large geographic areas. As a result, the modelling process incorporates a number of areas of uncertainty during model conceptualisation, construction and evaluation. There is a wealth of literature detailing the relative magnitudes of uncertainties in numerical flood input data (e.g. boundary conditions, model resolution and friction specification) for a wide variety of flood inundation scenarios (e.g. fluvial inundation and surface water flooding). Indeed, recent UK-funded projects (e.g. FREE) have explicitly examined the effect of cascading uncertainties in ensembles of GCM output through rainfall-runoff models to hydraulic flood inundation models. However, there has been little work examining the effect of cascading uncertainties in flood hazard ensembles to estimates of damage and loss, the quantity of interest when assessing flood risk. Furthermore, vulnerability is possibly the largest area of uncertainty for (re-)insurers, as in-depth and reliable knowledge of portfolios is difficult to obtain. Insurance industry CAT models attempt to represent a credible range of flood events over large geographic areas, and as such examining all sources of uncertainty is not computationally tractable. However, the insurance industry is also marked by an increasing need to understand the variability in flood loss estimates derived from these CAT models. In order to assess the relative importance of uncertainties in flood inundation models and depth/damage curves, hypothetical 1-in-100 and 1-in-200 year return period flood events are propagated through the Greenwich embayment in London, UK. Errors resulting from topographic smoothing, friction specification and inflow boundary conditions are cascaded to form an ensemble of flood levels and
Institute of Scientific and Technical Information of China (English)
徐平; 杜向锋
2014-01-01
This article details several typical mathematical models of interpolation methods for the quasi-geoid, and describes the corresponding model interpolation software compiled on the basis of these models. Using the interpolation methods provided by the software and a given quasi-geoid model, elevation interpolation was performed on a set of GPS/leveling data. Through analysis of the interpolation results, some useful conclusions are drawn.
Modelling and control of broadband traffic using multiplicative multifractal cascades
Indian Academy of Sciences (India)
P Murali Krishna; Vikram M Gadre; Uday B Desai
2002-12-01
We present results on the modelling and synthesis of broadband traffic processes, namely Ethernet inter-arrival times, using the VVGM (variable variance Gaussian multiplier) multiplicative multifractal model. This model is shown to be more appropriate for modelling network traffic which possesses time-varying scaling/self-similarity and burstiness. The model gives a simple and efficient technique to synthesise Ethernet inter-arrival times. The results of the detailed statistical and multifractal analysis performed on the original and synthesised traces are presented, and the performance is compared with other models in the literature, such as the Poisson process and the multifractal wavelet model (MWM) process. It is also shown empirically that a single-server queue preserves the multifractal character of the process, by analysing its inter-departure process when fed with the multifractal traces. The result on the existence of a global scaling exponent for multifractal cascades and its application in queueing theory are discussed. We propose tracking and control algorithms for controlling network congestion with bursty traffic modelled by multifractal cascade processes, characterised by the Hölder exponents, whose value over an interval indicates the burstiness of the traffic at that point. This value has to be estimated and used for the estimation of congestion and predictive control of the traffic in broadband networks. The estimation can be done by employing wavelet transforms and a Kalman-filter-based predictor for predicting the burstiness of the traffic.
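A multiplicative cascade is generated by recursively splitting mass with random multipliers. The sketch below uses a symmetric beta multiplier as a stand-in for the VVGM's variable-variance Gaussian multipliers (an assumption, since the VVGM specifics are not reproduced here):

```python
import random

def multiplicative_cascade(levels, draw_multiplier, total=1.0, seed=7):
    """Binary multiplicative cascade: at each level the mass of every
    interval is split into fractions w and 1-w, with w drawn from
    draw_multiplier(rng). Conservative by construction, since w + (1-w) = 1."""
    rng = random.Random(seed)
    masses = [total]
    for _ in range(levels):
        nxt = []
        for m in masses:
            w = draw_multiplier(rng)
            nxt.extend([m * w, m * (1.0 - w)])
        masses = nxt
    return masses

# 10 cascade levels -> 1024 intervals; beta(2, 2) multiplier is illustrative
trace = multiplicative_cascade(10, lambda rng: rng.betavariate(2.0, 2.0))
```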
Multivariate Birkhoff interpolation
Lorentz, Rudolph A
1992-01-01
The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative
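The coupling can be sketched in two steps: Poisson rectangular pulses for events and interstorm periods, then a mass-conserving (microcanonical) cascade to disaggregate each event depth. All parameter values below are illustrative, not the calibrated model:

```python
import random

rng = random.Random(42)

def poisson_events(n, mean_dur=6.0, mean_int=2.0, mean_dry=30.0):
    """Poisson rectangular pulses: exponentially distributed event duration
    (h), mean intensity (mm/h) and dry-spell length (h)."""
    return [(rng.expovariate(1.0 / mean_dur),
             rng.expovariate(1.0 / mean_int),
             rng.expovariate(1.0 / mean_dry)) for _ in range(n)]

def disaggregate(depth, branchings=4):
    """Microcanonical (mass-conserving) cascade: the event depth is split
    exactly at every branching, so the fine series sums to the event total."""
    parts = [depth]
    for _ in range(branchings):
        nxt = []
        for p in parts:
            w = rng.random()
            nxt.extend([p * w, p * (1.0 - w)])
        parts = nxt
    return parts

events = poisson_events(3)
fine_series = [disaggregate(dur * inten) for dur, inten, _dry in events]
```

The microcanonical split is what distinguishes this construction from a canonical cascade: each event's total depth is preserved exactly, which is the property needed when the cascade is applied per event rather than per fixed daily step.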
Ansari, Imran Shafique
2010-12-01
The introduction of new schemes that are based on communication among nodes has motivated the use of composite fading models, due to the fact that the nodes experience different multipath fading and shadowing statistics, which subsequently determines the required statistics for the performance analysis of different transceivers. The end-to-end signal-to-noise ratio (SNR) statistics play an essential role in determining the performance of cascaded digital communication systems. In this thesis, a closed-form expression for the probability density function (PDF) of the end-to-end SNR for independent but not necessarily identically distributed (i.n.i.d.) cascaded generalized-K (GK) composite fading channels is derived. The developed PDF expression, in terms of the Meijer G-function, allows the derivation of subsequent performance metrics applicable to different modulation schemes, including outage probability, bit error rate for coherent as well as non-coherent systems, and average channel capacity, and provides insights into the performance of a digital communication system operating in an N-cascaded GK composite fading environment. Another line of research motivated by the introduction of composite fading channels is error performance. Error performance is one of the main performance measures, and the derivation of its closed-form expression has proved to be quite involved for certain systems. Hence, in this thesis, a unified closed-form expression, applicable to different binary modulation schemes, for the bit error rate of dual-branch selection-diversity-based systems undergoing i.n.i.d. GK fading is derived in terms of the extended generalized bivariate Meijer G-function.
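Closed-form Meijer G-function results of this kind can be cross-checked numerically: a generalized-K gain is commonly modelled as the product of two gamma variates (multipath times shadowing), and the end-to-end SNR of a cascade is the product over hops. The Monte Carlo sketch below assumes i.i.d. hops and arbitrary illustrative parameters:

```python
import random

def gk_sample(rng, m=2.0, k=2.5, mean_gain=1.0):
    """Generalized-K channel gain modelled as the product of two unit-mean
    gamma variates (Nakagami-m multipath times gamma shadowing)."""
    return mean_gain * rng.gammavariate(m, 1.0 / m) * rng.gammavariate(k, 1.0 / k)

def outage_probability(n_hops, threshold, trials=50_000, seed=3):
    """Monte Carlo estimate of P[end-to-end SNR < threshold] when the
    end-to-end gain is the product over n_hops cascaded GK channels
    (i.i.d. here for simplicity; i.n.i.d. would use per-hop parameters)."""
    rng = random.Random(seed)
    below = 0
    for _ in range(trials):
        gain = 1.0
        for _ in range(n_hops):
            gain *= gk_sample(rng)
        below += gain < threshold
    return below / trials

p_out = outage_probability(n_hops=2, threshold=0.1)
```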
Niagara Falls Cascade Model for Interstellar Energetic Ions in the Heliosheath
Cooper, John F.
The origin of anomalous cosmic ray ions has long been assumed to be heliospheric pickup ion production from interstellar neutrals and acceleration at the solar wind termination shock. The Voyager 1 shock crossing showed a well-defined boundary for sharply increased keV ion fluxes in the heliosheath but no sign of local acceleration. Ion flux spectra at keV to MeV energies are instead unfolding with outward passage to approximate the E^(-1.5) power law expected for compressional magnetic turbulence. This spectrum provides excellent connection over many energy decades of a Maxwellian distribution for local interstellar plasma ions to well-known flux spectra of high-energy galactic ions at GeV energies. The Niagara Falls cascade model proposes that the heliosheath is a transitional region for direct entry of ions from the local interstellar 'river' through a permeable heliopause into the supersonic outer heliosphere. As Voyager 1 moves outwards in the heliosheath to the heliopause, energy-dependent transport features can appear in the transitional 0.01-1 GeV/n energy band, but otherwise a general unfolding to the interstellar limiting spectrum should continue by this model. Spectral regions then become dominated by bulk plasma flow at low energy, cascade transport at intermediate energies, and interstellar shock acceleration at higher energies.
Rudaz, Benjamin; Bardou, Eric; Jaboyedoff, Michel
2015-04-01
Alpine ephemeral streams act as links between high-altitude erosional processes, slope movements and valley-floor fluvial systems or fan storage. Anticipating future mass wasting from these systems is crucial for hazard mitigation measures. Torrential activity is highly stochastic, with punctual transfers separating long periods of calm, during which the system evolves internally and recharges. Changes can originate from diffuse sources (rock faces, sheet erosion of bare moraines), concentrated external sources (rock glacier fronts, slope instabilities) or internal transfers (bed incision or aggradation). The proposed sediment cascade model takes those different processes into account and calculates sediment transfer from the slope to the channel reaches, then propagates sediments downstream. The two controlling parameters are precipitation series (generated from existing rain gauge data using Gumbel and extreme probability distribution functions) and temperature (generated from local meteorological station data and IPCC scenarios). Snow accumulation and melting, and thus runoff, can then be determined for each subsystem to account for different altitudes and exposures. External stocks and sediment sources each have a specific response to temperature and precipitation. For instance, production from rock faces depends on freeze-thaw cycles in addition to precipitation. On the other hand, landslide velocity, and thus sediment production, is linked to precipitation over longer periods of time. Finally, rock glaciers react to long-term temperature trends, but are also prone to sudden release of material during extreme rain events. All those modules feed the main sediment cascade model, constructed around homogeneous torrent reaches, to and from which sediments are transported by debris flows and bedload transport events. These events are determined using a runoff/erosion curve, with a threshold determining the occurrence of debris flows in the system. If a debris
Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era
Heikkinen, Aatos
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...
Beyond the Parton Cascade Model: Klaus Kinder-Geiger and VNI
Müller, Berndt
1999-07-01
I review Klaus Kinder-Geiger's contributions to the physics of relativistic heavy ion collisions, in particular the Parton Cascade Model. Klaus developed this model in order to provide a QCD-based description of nucleus-nucleus reactions at high energies such as will soon become available at the Brookhaven Relativistic Heavy Ion Collider. The PCM describes the collision dynamics within the early and dense phase of the reaction in terms of the relativistic, probabilistic transport of perturbative excitations (partons) of the QCD vacuum. I present an overview of the current state of the numerical implementations of this model, as well as its predictions for nuclear collisions at RHIC and LHC.
The cascade of uncertainty in modeling the impacts of climate change on Europe's forests
Reyer, Christopher; Lasch-Born, Petra; Suckow, Felicitas; Gutsch, Martin
2015-04-01
Projecting the impacts of global change on forest ecosystems is a cornerstone for designing sustainable forest management strategies and paramount for assessing the potential of Europe's forests to contribute to the EU bioeconomy. Research on climate change impacts on forests relies to a large extent on model applications along a model chain, from Integrated Assessment Models to General and Regional Circulation Models that provide important driving variables for forest models, or to decision support systems that synthesize findings of more detailed forest models to inform forest managers. At each step in the model chain, model-specific uncertainties about, amongst others, parameter values, input data or model structure accumulate, leading to a cascade of uncertainty. For example, climate change impacts on forests strongly depend on the inclusion or exclusion of CO2 effects, or on the use of an ensemble of climate models rather than reliance on one particular climate model. In the past, these uncertainties have not, or have only partly, been considered in studies of climate change impacts on forests. This has left managers and decision-makers in doubt about how robust the projected impacts on forest ecosystems are. We deal with this cascade of uncertainty in a structured way, and the objective of this presentation is to assess how different types of uncertainties affect projections of the effects of climate change on forest ecosystems. To address this objective we synthesized a large body of scientific literature on modeled productivity changes and the effects of extreme events on plant processes. Furthermore, we apply the process-based forest growth model 4C to forest stands all over Europe and assess how different climate models, emission scenarios and assumptions about the parameters and structure of 4C affect the uncertainty of the model projections. We show that there are consistent regional changes in forest productivity, such as an increase in NPP in cold and wet regions, while
Olofsson, Jonas K
2014-01-01
The timing of olfactory behavioral decisions may provide an important source of information about how the human olfactory-perceptual system is organized. This review integrates results from olfactory response-time (RT) measurements from a perspective of mental chronometry. Based on these findings, a new cascade model of human olfaction is presented. Results show that the main perceptual decisions are executed with high accuracy within about 1 s of sniff onset. The cascade model proposes the existence of distinct processing stages within this brief time window. According to the cascade model, different perceptual features become accessible to the perceiver at different time points, and the output of earlier processing stages provides the input for later processing stages. The olfactory cascade starts with detecting the odor, which is followed by establishing an odor object. The odor object, in turn, triggers systems for determining odor valence and edibility. Evidence for the cascade model comes from studies showing that RTs for odor valence and edibility assessment are predicted by the shorter RTs needed to establish the odor object. Challenges for future research include innovative task designs for olfactory RT experiments and the integration of the behavioral processing sequence into the underlying cortical processes using complementary RT measures and neuroimaging methods.
Pritchard, Stephen C; Coltheart, Max; Palethorpe, Sallyanne; Castles, Anne
2012-10-01
Two prominent dual-route computational models of reading aloud are the dual-route cascaded (DRC) model, and the connectionist dual-process plus (CDP+) model. While sharing similarly designed lexical routes, the two models differ greatly in their respective nonlexical route architecture, such that they often differ on nonword pronunciation. Neither model has been appropriately tested for nonword reading pronunciation accuracy to date. We argue that empirical data on the nonword reading pronunciation of people is the ideal benchmark for testing. Data were gathered from 45 Australian-English-speaking psychology undergraduates reading aloud 412 nonwords. To provide contrast between the models, the nonwords were chosen specifically because DRC and CDP+ disagree on their pronunciation. Both models failed to accurately match the experiment data, and both have deficiencies in nonword reading performance. However, the CDP+ model performed significantly worse than the DRC model. CDP++, the recent successor to CDP+, had improved performance over CDP+, but was also significantly worse than DRC. In addition to highlighting performance shortcomings in each model, the variety of nonword responses given by participants points to a need for models that can account for this variety.
A photon splitting cascade model of soft gamma-ray repeaters
Harding, A K; Harding, Alice K; Baring, Matthew G
1996-01-01
The spectra of soft gamma-ray repeaters (SGRs), with the exception of the March 5, 1979 main burst, are characterized by high-energy cutoffs around 30 keV and low-energy turnovers that are much steeper than a Wien spectrum. Baring (1995) found that the spectra of cascades due to photon splitting in a very strong, homogeneous magnetic field can soften spectra and produce good fits to the soft spectra of SGRs. Magnetic field strengths somewhat above the QED critical field strength B_{\rm cr}, where B_{\rm cr} = 4.413 \times 10^{13} G, are required to produce cutoffs at 30-40 keV. We have improved upon this model by computing Monte Carlo photon splitting cascade spectra in a neutron star dipole magnetic field, including effects of curved space-time in a Schwarzschild metric. We investigate spectra produced by photons emitted at different locations and observer angles. We find that the general results of Baring hold for surface emission throughout most of the magnetosphere, but that emission in equatorial regions ...
Fuzzy Interpolation and Other Interpolation Methods Used in Robot Calibrations
Directory of Open Access Journals (Sweden)
Ying Bai
2012-01-01
Full Text Available A novel interpolation algorithm, fuzzy interpolation, is presented and compared with other popular interpolation methods widely implemented in industrial robot calibrations and manufacturing applications. Different interpolation algorithms have been developed, reported, and implemented in many industrial robot calibrations and manufacturing processes in recent years. Most of them look for optimal interpolation trajectories based on known values at given points around a workspace. However, it is rare to build optimal interpolation results from randomly distributed noise, and this is one of the most common requirements in industrial testing and measurement applications. The fuzzy interpolation algorithm (FIA) reported in this paper provides a convenient and simple way to solve this problem and offers more accurate interpolation results based on given position or orientation errors that are randomly distributed in real time. This method can be implemented in many industrial applications, such as manipulator measurement and calibration, industrial automation, and semiconductor manufacturing processes.
Tricubic polynomial interpolation.
Birkhoff, G
1971-06-01
A new triangular "finite element" is described; it involves the 12-parameter family of all quartic polynomial functions that are "tricubic" in that their variation is cubic along any parallel to any side of the triangle. An interpolation scheme is described that approximates quite accurately any smooth function on any triangulated domain by a continuously differentiable function, tricubic on each triangular element.
Optical feedback effects on terahertz quantum cascade lasers: modelling and applications
Rakić, Aleksandar D.; Lim, Yah Leng; Taimre, Thomas; Agnew, Gary; Qi, Xiaoqiong; Bertling, Karl; Han, She; Wilson, Stephen J.; Kundu, Iman; Grier, Andrew; Ikonić, Zoran; Valavanis, Alexander; Demić, Aleksandar; Keeley, James; Li, Lianhe H.; Linfield, Edmund H.; Davies, A. Giles; Harrison, Paul; Ferguson, Blake; Walker, Graeme; Prow, Tarl; Indjin, Dragan; Soyer, H. Peter
2016-11-01
Terahertz (THz) quantum cascade lasers (QCLs) are compact sources of radiation in the 1-5 THz range with significant potential for applications in sensing and imaging. Laser feedback interferometry (LFI) with THz QCLs is a technique utilizing the sensitivity of the QCL to the radiation reflected back into the laser cavity from an external target. We will discuss modelling techniques and explore the applications of LFI in biological tissue imaging and will show that the confocal nature of the QCL in LFI systems, with their innate capacity for depth sectioning, makes them suitable for skin diagnostics with the well-known advantages of more conventional confocal microscopes. A demonstration of discrimination of neoplasia from healthy tissue using a THz, LFI-based system in the context of melanoma is presented using a transgenic mouse model.
Hassan, Ehab; Morrison, P J; Horton, W
2016-01-01
Progress in understanding the coupling between plasma instabilities in the equatorial electrojet based on a unified fluid model is reported. A deeper understanding of the linear and nonlinear evolution and the coupling of the gradient-drift and Farley-Buneman instabilities is achieved by studying the effect of different combinations of the density-gradient scale-lengths (Ln) and cross-field (E×B) drifts on the plasma turbulence. Mechanisms and channels of energy transfer are elucidated for these multiscale instabilities. The energetics of the unified model are examined, including the injected energy, its conservative redistribution (between fields and scales), and its ultimate dissipation. Various physical mechanisms involved in the energetics are categorized as sources, sinks, nonlinear transfer, and coupling to show that the system satisfies the fundamental law of energy conservation. The physics of the nonlinear transfer terms is studied to identify their roles in producing energy cascades, the transference of energy from the domin...
Hamadou, A.; Thobel, J.-L.; Lamari, S.
2016-10-01
A four-level rate-equation model for a terahertz optically pumped, electrically driven quantum cascade laser is introduced and used to model the system both analytically and numerically. In the steady state, both in the presence and absence of the terahertz optical field, we solve the resulting nonlinear system of equations and obtain closed-form expressions for the level occupations, the population inversion and the mid-infrared pump threshold intensity in terms of the device parameters. We also derive, for the first time for this system, an analytical formula for the optical external efficiency and analyze the simultaneous effects of the cavity length and pump intensity on it. At moderate to high pump intensities, we find that the optical external efficiency scales roughly as the reciprocal of the cavity length.
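A numerical sketch of the rate-equation approach: the four-level scheme below uses hypothetical rates and pump strength, not the authors' THz QCL equations, and is simply integrated to steady state with forward Euler.

```python
# Generic four-level scheme (hypothetical rates, arbitrary units):
# 0 = ground, 1 = lower laser level, 2 = upper laser level, 3 = pump level.

def steady_state_populations(pump=5.0, dt=1e-3, steps=200_000):
    r30, r32, r21, r10 = 0.5, 4.0, 0.2, 6.0   # hypothetical relaxation rates
    n = [1.0, 0.0, 0.0, 0.0]                  # all population in the ground level
    for _ in range(steps):
        p = pump * n[0]                       # optical pumping 0 -> 3
        dn0 = -p + r10 * n[1] + r30 * n[3]
        dn1 = r21 * n[2] - r10 * n[1]
        dn2 = r32 * n[3] - r21 * n[2]
        dn3 = p - (r30 + r32) * n[3]
        n = [n[0] + dt * dn0, n[1] + dt * dn1,
             n[2] + dt * dn2, n[3] + dt * dn3]
    return n

n = steady_state_populations()
inversion = n[2] - n[1]   # positive because the lower level empties fast (r10 >> r21)
```

The derivatives sum to zero term by term, so the total population is conserved at every step; a positive inversion emerges whenever the lower laser level drains faster than the upper one.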
Hard state of the urban canopy layer turbulence and its self-similar multiplicative cascade models
Institute of Scientific and Technical Information of China (English)
HU Fei; CHENG Xueling; ZHAO Songnian; QUAN Lihong
2005-01-01
It is found by experiment that under thermal convection conditions, the temperature fluctuation in the urban canopy layer turbulence has the hard-state character, and the temperature difference between two points follows an exponential probability density function. At the same time, the turbulent energy dissipation rate fits a log-normal distribution, in accord with the hypothesis proposed by Kolmogorov in 1962 and with many reported experimental results. In this paper, the scaling law of the hard-state temperature nth-order structure function is deduced from self-similar multiplicative cascade models. The theoretical scaling exponent is ζn = n/3 − μ{n(n+6)/72 + [2 ln n! − n ln 2]/(2 ln 6)}, where μ is the intermittency exponent. The formula fits the experimental exponents up to order 8 and is superior to the predictions of the Kolmogorov theory and of the β and log-normal models.
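The scaling-exponent formula is partly garbled in this record; assuming the reading ζ(n) = n/3 − μ{n(n+6)/72 + [2 ln n! − n ln 2]/(2 ln 6)} (the minus sign and grouping are assumptions), it can be evaluated and compared against the Kolmogorov 1941 prediction ζ(n) = n/3:

```python
import math

def zeta(n, mu=0.25):
    """Hard-state scaling exponent as read from the record:
    zeta(n) = n/3 - mu * { n(n+6)/72 + [2 ln n! - n ln 2] / (2 ln 6) }.
    Sign and grouping are assumptions; mu (the intermittency exponent)
    is set to an illustrative value."""
    corr = (n * (n + 6) / 72.0
            + (2.0 * math.log(math.factorial(n)) - n * math.log(2.0))
            / (2.0 * math.log(6.0)))
    return n / 3.0 - mu * corr

orders = range(1, 9)                 # the paper fits exponents up to order 8
k41 = [n / 3.0 for n in orders]      # Kolmogorov 1941: zeta(n) = n/3
hard = [zeta(n) for n in orders]     # intermittency-corrected exponents
```

Under this reading the correction grows with n, so the high-order exponents fall below the K41 line, the usual signature of intermittency.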
$K^{+}$ momentum spectrum from $(K^{-},K^{+})$ reactions in intranuclear cascade model
Nara, Y; Harada, T; Engel, A
1996-01-01
In the framework of an intranuclear cascade (INC) type calculation, we study the momentum spectrum of the $(K^-,K^+)$ reaction at a beam momentum of 1.65 GeV/c. INC model calculations are compared with relativistic impulse approximation (RIA) calculations to perform a detailed study of the reaction mechanism. We find that the INC model can reproduce the experimental data on various targets. Especially in the low-momentum region, the forward-angle cross sections of the $(K^-,K^+)$ reaction on light to heavy targets are consistently explained by two-step strangeness exchange and production processes with various intermediate mesons, and by $\phi$, $a_0$ and $f_0$ production and their decay into $K^+K^-$. In the two-step processes, the inclusion of meson and hyperon resonances is found to be essential.
Agarwal, Nishant; Khoury, Justin; Trodden, Mark
2009-01-01
We develop a fully covariant, well-posed 5D effective action for the 6D cascading gravity brane-world model, and use this to study cosmological solutions. We obtain this effective action through the 6D decoupling limit, in which an additional scalar degree of freedom, \pi, called the brane-bending mode, determines the bulk-brane gravitational interaction. The 5D action obtained this way inherits from the sixth dimension an extra \pi self-interaction kinetic term. We compute appropriate boundary terms to supplement the 5D action, and hence derive fully covariant junction conditions and the 5D Einstein field equations. Using these, we derive the cosmological evolution induced on a 3-brane moving in a static bulk. We study the strong- and weak-coupling regimes analytically in this static ansatz, and perform a complete numerical analysis of our solution. Although the cascading model can generate an accelerating solution in which the \pi field comes to dominate at late times, the presence of a critical singularity prev...
Hosseini, M.; Magagi, R.; Goita, K.
2013-12-01
Soil moisture is an important parameter in hydrology that can be derived from remote sensing. Different studies have shown that optical-thermal, active and passive microwave remote sensing data can be used for soil moisture estimation. However, the most promising approach to estimating soil moisture over large areas is passive microwave radiometry. Global estimation of soil moisture is now operational using remote sensing techniques. The Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Soil Moisture and Ocean Salinity (SMOS) passive microwave radiometers, launched in 2002 and 2009 respectively, along with the upcoming Soil Moisture Active-Passive (SMAP) satellite planned for launch in the 2014-2015 time frame, make remote sensing more useful for soil moisture estimation. However, the spatial resolutions of AMSR-E, SMOS and SMAP are 60 km, 40 km and 10 km respectively. These very low spatial resolutions cannot capture the temporal and spatial variability of soil moisture at the field or small scale, so disaggregation methods are required to use the passive-microwave-derived soil moisture information efficiently at different scales. The low spatial resolutions of passive microwave satellites can be improved by disaggregation methods. The Random Cascade (RC) model (Over and Gupta, 1996) is used in this research to downscale the 40 km resolution of the SMOS satellite. Using this statistical method, the SMOS soil moisture resolutions are improved to 20 km, 10 km, 5 km and 2.5 km. The data measured during the Soil Moisture Active Passive Validation Experiment 2012 (SMAPVEX12) field campaign are used for the experiments. In total, the ground data and SMOS images obtained during 13 different days from 7 June 2012 to 13 July 2012 are used. By comparison with ground soil moisture, it is observed that the SMOS soil moisture is underestimated for all the images and so bias amounts ...
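The record cites the Over and Gupta (1996) random cascade; the sketch below illustrates the downscaling idea with a microcanonical variant (each 2x2 split conserves the cell mean exactly) and a hypothetical weight distribution, halving the grid spacing four times as in the 40 km to 2.5 km chain described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def microcanonical_downscale(field, levels):
    """Disaggregate a coarse soil-moisture-like field by a multiplicative
    cascade. Microcanonical variant: the four subcell weights of every 2x2
    split are normalized so the parent cell's mean is conserved exactly.
    The gamma weight distribution is an illustrative choice."""
    for _ in range(levels):
        ny, nx = field.shape
        fine = np.empty((2 * ny, 2 * nx))
        for j in range(ny):
            for i in range(nx):
                w = rng.gamma(shape=2.0, scale=1.0, size=4)
                w /= w.sum()                       # microcanonical constraint
                # factor 4 makes the subcell mean equal the parent value
                fine[2 * j,     2 * i]     = 4 * w[0] * field[j, i]
                fine[2 * j,     2 * i + 1] = 4 * w[1] * field[j, i]
                fine[2 * j + 1, 2 * i]     = 4 * w[2] * field[j, i]
                fine[2 * j + 1, 2 * i + 1] = 4 * w[3] * field[j, i]
        field = fine
    return field

coarse = np.array([[0.30, 0.25], [0.20, 0.35]])    # 40 km cells
fine = microcanonical_downscale(coarse, levels=4)  # 4 halvings -> 2.5 km
```

Each halving of the grid spacing adds sub-cell variability while keeping the coarse-scale information intact, which is the essential property a downscaling scheme must have before calibration against ground data.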
Licznar, Paweł; Łomotowski, Janusz; Rupp, David E.
2011-03-01
Six variations of multiplicative random cascade models for generating fine-resolution (i.e., 5-minute interval) rainfall time series were evaluated for rainfall in Wroclaw, Poland. Of these variations, one included a new beta-normal generator for a microcanonical cascade. This newly proposed model successfully reproduces the statistical behavior of local 5-minute rainfalls, in terms of intermittency as well as variability. In contrast, both the canonical cascade models with either constant or time-scaled parameters and a microcanonical cascade model with a beta generator substantially underestimate 5-minute maximum rainfall intensities. The canonical models also fail to properly reproduce the intermittency of the rainfall process across a range of timescales. New observations are also made concerning the histograms of the breakdown coefficients (BDC). The tendency of the BDC histograms to have values exactly equal to 0.5 is identified and explained by the quality of pluviograph records. Moreover, the hierarchical evolution of BDC histograms from beta-like for long time steps to beta-normal histograms for short time steps is observed for the first time. The potential advantage is shown of synthetic high resolution rainfall time series generated by the revised microcanonical model for use in hydrology, especially hydrodynamic modelling of urban drainage networks.
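A minimal microcanonical cascade driven by a symmetric-beta breakdown coefficient, i.e. the beta generator the study finds inadequate, can be sketched as follows; the parameters a and p_dry are illustrative, and the study's better-performing beta-normal generator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def beta_cascade_series(total_depth_mm, levels, a=0.6, p_dry=0.3):
    """Microcanonical rainfall disaggregation: each interval's depth is
    split between its two halves by a breakdown coefficient W ~ Beta(a, a);
    with probability p_dry one half receives everything, which produces the
    intermittency (dry spells) of fine-scale rainfall. Totals are conserved
    exactly at every level."""
    series = np.array([float(total_depth_mm)])
    for _ in range(levels):
        w = rng.beta(a, a, size=series.size)
        dry = rng.random(series.size) < p_dry
        w[dry] = rng.integers(0, 2, size=dry.sum()).astype(float)  # 0 or 1
        halves = np.empty(2 * series.size)
        halves[0::2] = w * series
        halves[1::2] = (1.0 - w) * series
        series = halves
    return series

# Disaggregate a 1280-minute storm depth to 5-minute intervals (2**8 = 256).
rain = beta_cascade_series(24.0, levels=8)
```

The breakdown coefficients W are exactly the quantities whose histograms the paper studies; replacing the Beta(a, a) draw with a beta-normal mixture at short timescales is the paper's proposed fix.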
Wavelet-based cascade model for intermittent structure in terrestrial environments
Wilson, D Keith; Vecherin, Sergey N
2013-01-01
A wavelet-like model for distributions of objects in natural and man-made terrestrial environments is developed. The model is constructed in a self-similar fashion, with the sizes, amplitudes, and numbers of objects occurring at constant ratios between parent and offspring objects. The objects are randomly distributed in space according to a Poisson process. Fractal supports and a cascade model are used to organize objects intermittently in space. In its basic form, the model is for continuously varying random fields, although a level-cut is introduced to model two-phase random media. The report begins with a description of relevant concepts from fractal theory, and then progresses through static (time-invariant), steady-state, and non-steady models. The results can be applied to such diverse phenomena as turbulence, geologic distributions, urban buildings, vegetation, and arctic ice floes. The model can be used as a basis for synthesizing realistic terrestrial scenes, and for predicting the performance of ...
Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang
2014-04-01
This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.
On the role of exponential splines in image interpolation.
Kirshner, Hagai; Porat, Moshe
2009-10-01
A Sobolev reproducing-kernel Hilbert space approach to image interpolation is introduced. The underlying kernels are exponential functions and are related to stochastic autoregressive image modeling. The corresponding image interpolants can be implemented effectively using compactly-supported exponential B-splines. A tight ℓ2 upper bound on the interpolation error is then derived, suggesting that the proposed exponential functions are optimal in this regard. Experimental results indicate that the proposed interpolation approach with properly-tuned, signal-dependent weights outperforms currently available polynomial B-spline models of comparable order. Furthermore, a unified approach to image interpolation by ideal and nonideal sampling procedures is derived, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks.
A probabilistic sediment cascade model of sediment transfer through a mountain basin
Bennett, G. L.; Molnar, P.; McArdell, B. W.; Lane, S. N.; Burlando, P.
2013-12-01
Mountain basin sediment discharge poses a significant hazard to the downstream population, particularly in the form of debris flows. The importance and sensitivity of snow and ice melt processes in mountain basins along with their rapid rainfall-runoff response makes mountain basin sediment discharge particularly responsive to climate change. It is important to understand and model sediment transfer through mountain basins to be able to predict sediment discharge under a changing climate. We developed a probabilistic sediment cascade model, SedCas, to simulate sediment transfer in a mountain basin (Illgraben, Switzerland) where sediment is produced by hillslope landslides and exported out of the basin by debris flows and floods. We present the model setup, the calibration of the model for the period 2000 - 2009 and the application of SedCas to model sediment discharge in the Illgraben over the 19th and 20th centuries. SedCas conceptualizes the fluvial system as a spatially lumped cascade of connected reservoirs representing hillslope and channel storages where sediment goes through multiple cycles of storage and remobilization by surface runoff. Sediment input is drawn from a probability distribution of slope failures produced for the basin from a time series of DEMs and the model is driven by observed climate. The model includes all relevant hydrological processes that lead to runoff in an Alpine basin, such as snow cover accumulation, snowmelt, evapotranspiration, and soil water storage. Although the processes of sediment transfer and debris flow generation are described in a simplified manner, SedCas produces highly complex sediment discharge behavior which is driven by the availability of sediment and antecedent moisture (system memory) as well as triggering potential (climate). The model reproduces the first order properties of observed debris flows over the period 2000-2009 including their probability distribution, seasonal timing and probability of
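The reservoir-cascade idea can be sketched in a few lines; the rates, thresholds and distributions below are hypothetical stand-ins, not SedCas's calibrated hydrology.

```python
import random

random.seed(1)

def sediment_cascade(days=3650):
    """Hillslope reservoir fed by stochastic landslide inputs, drained into
    a channel reservoir on runoff days; a debris flow exports the channel
    store once it exceeds a threshold. All rates and thresholds are
    hypothetical, chosen only to illustrate the storage-remobilization
    cycle and its supply-limited, threshold-triggered output."""
    hillslope, channel, flows = 0.0, 0.0, []
    for _ in range(days):
        if random.random() < 0.01:                      # landslide event
            hillslope += random.expovariate(1 / 500.0)  # supplied volume (m^3)
        runoff = random.random() < 0.1                  # runoff-producing day
        if runoff and hillslope > 0.0:
            moved = 0.2 * hillslope                     # partial remobilization
            hillslope -= moved
            channel += moved
        if runoff and channel > 1000.0:                 # debris-flow threshold
            flows.append(channel)
            channel = 0.0
    return flows

flows = sediment_cascade()
```

Even this toy version shows the system-memory effect the abstract describes: debris-flow magnitude depends on how much sediment earlier events left in storage, not just on the triggering day's forcing.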
Energy Technology Data Exchange (ETDEWEB)
Heikkinen, Aatos; Hektor, Andi; Karimaki, Veikko; Linden, Tomas [Helsinki Univ., Institute of Physics (Finland)]
2003-07-01
We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation toolkit Geant4 with a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)
Interpolation of Vector Measures
Institute of Scientific and Technical Information of China (English)
Ricardo del CAMPO; Antonio FERNÁNDEZ; Fernando MAYORAL; Francisco NARANJO; Enrique A. SÁNCHEZ-PÉREZ
2011-01-01
Let (Ω, Σ) be a measurable space and m0: Σ → X0 and m1: Σ → X1 be positive vector measures with values in the Banach Köthe function spaces X0 and X1. If 0 < α < 1, we define a vector measure [m0, m1]α taking values in the space X0^{1-α} X1^{α}, and we analyze the space of integrable functions with respect to the measure [m0, m1]α in order to prove suitable extensions of the classical Stein-Weiss formulas that hold for the complex interpolation of Lp-spaces. Since each p-convex order continuous Köthe function space with weak order unit can be represented as a space of p-integrable functions with respect to a vector measure, we provide in this way a technique to obtain representations of the corresponding complex interpolation spaces. As applications, we provide a Riesz-Thorin theorem for spaces of p-integrable functions with respect to vector measures and a formula for representing the interpolation of the injective tensor product of such spaces.
Energy Technology Data Exchange (ETDEWEB)
Wampler, William R.; Myers, Samuel Maxwell
2014-02-01
A model is presented for recombination of charge carriers at displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers and defects within a representative spherically symmetric cluster. The initial radial defect profiles within the cluster were chosen through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Charging of the defects can produce high electric fields within the cluster which may influence transport and reaction of carriers and defects, and which may enhance carrier recombination through band-to-trap tunneling. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to pulsed neutron irradiation.
A two-stage cascade model of BOLD responses in human visual cortex.
Directory of Open Access Journals (Sweden)
Kendrick N Kay
Full Text Available Visual neuroscientists have discovered fundamental properties of neural representation through careful analysis of responses to controlled stimuli. Typically, different properties are studied and modeled separately. To integrate our knowledge, it is necessary to build general models that begin with an input image and predict responses to a wide range of stimuli. In this study, we develop a model that accepts an arbitrary band-pass grayscale image as input and predicts blood oxygenation level dependent (BOLD) responses in early visual cortex as output. The model has a cascade architecture, consisting of two stages of linear and nonlinear operations. The first stage involves well-established computations (local oriented filters and divisive normalization), whereas the second stage involves novel computations: compressive spatial summation (a form of normalization) and a variance-like nonlinearity that generates selectivity for second-order contrast. The parameters of the model, which are estimated from BOLD data, vary systematically across visual field maps: compared to primary visual cortex, extrastriate maps generally have larger receptive field size, stronger levels of normalization, and increased selectivity for second-order contrast. Our results provide insight into how stimuli are encoded and transformed in successive stages of visual processing.
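A toy version of such a two-stage cascade (assumed simplifications: finite-difference "oriented" channels instead of a filter bank, a scalar divisive-normalization pool, and a square-root compressive summation) shows the characteristic sublinear growth of the output with stimulus contrast:

```python
import numpy as np

def two_stage_response(image, eps=0.1):
    """Two-stage cascade sketch: oriented energy + divisive normalization,
    then compressive spatial summation (square root). The finite-difference
    orientation channels and scalar normalization pool are assumptions,
    not the fitted model of the study."""
    gx = np.diff(image, axis=1, prepend=image[:, :1])  # vertical-edge channel
    gy = np.diff(image, axis=0, prepend=image[:1, :])  # horizontal-edge channel
    energy = gx ** 2 + gy ** 2
    normalized = energy / (eps + energy.mean())        # divisive normalization
    return np.sqrt(normalized.sum())                   # compressive summation

img = np.zeros((32, 32))
img[:, 16:] = 1.0                  # a single vertical edge
r1 = two_stage_response(img)       # response at unit contrast
r2 = two_stage_response(2 * img)   # response at doubled contrast
```

Because the normalization pool grows with stimulus energy, doubling the contrast less than doubles the output, the compressive behavior the fitted model exhibits.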
Xu, Kesheng; Maidana, Jean P.; Caviedes, Mauricio; Quero, Daniel; Aguirre, Pablo; Orio, Patricio
2017-01-01
In this article, we describe and analyze the chaotic behavior of a conductance-based neuronal bursting model. This is a model with a reduced number of variables, yet it retains biophysical plausibility. Inspired by the activity of cold thermoreceptors, the model contains a persistent Sodium current, a Calcium-activated Potassium current and a hyperpolarization-activated current (Ih) that drive a slow subthreshold oscillation. Driven by this oscillation, a fast subsystem (fast Sodium and Potassium currents) fires action potentials in a periodic fashion. Depending on the parameters, this model can generate a variety of firing patterns that includes bursting, regular tonic and polymodal firing. Here we show that the transitions between different firing patterns are often accompanied by a range of chaotic firing, as suggested by an irregular, non-periodic firing pattern. To confirm this, we measure the maximum Lyapunov exponent of the voltage trajectories, and the Lyapunov exponent and Lempel-Ziv complexity of the ISI time series. The four-variable slow system (without spiking) also generates chaotic behavior, and bifurcation analysis shows that this often originates in period-doubling cascades. Either with or without spikes, chaos is no longer generated when the Ih is removed from the system. As the model is biologically plausible with biophysically meaningful parameters, we propose it as a useful tool to understand chaotic dynamics in neurons. PMID:28344550
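The chaos criterion used above, a positive maximum Lyapunov exponent, can be illustrated with the logistic map (a stand-in example; the conductance-based model itself is not reproduced here), which also exhibits the period-doubling cascade mentioned in the bifurcation analysis:

```python
import math

def lyapunov_logistic(r, n=5000, burn=500, x0=0.2):
    """Maximum Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|.
    Positive exponent: nearby trajectories diverge (chaos); negative:
    they converge onto a periodic attractor."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n

lam_periodic = lyapunov_logistic(2.9)  # stable fixed point: negative exponent
lam_chaotic = lyapunov_logistic(3.9)   # chaotic regime: positive exponent
```

The same sign test, applied to voltage trajectories rather than a 1-D map, is what distinguishes the irregular firing ranges from the periodic ones in the study.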
Quantifier-Free Interpolation of a Theory of Arrays
Bruttomesso, Roberto; Ranise, Silvio
2012-01-01
The use of interpolants in model checking is becoming an enabling technology to allow fast and robust verification of hardware and software. The application of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, by using the model theoretic notion of amalgamation, which is known to be equivalent to admit quantifier-free interpolation for universal theories; and (2) constructively, by designing an interpolating procedure, based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt of computing quantifier-free interpolants for a variant of the theory of arrays with extension...
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the areas at its top and bottom. Owing to changes in land use, the groundwater level also shows temporal variation; the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the shrinking of farmland and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
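Of the seven methods, inverse distance weighting is the simplest to sketch, together with the leave-one-out cross-validation used for accuracy assessment; the well coordinates and decline values below are hypothetical, not the study's data.

```python
import math

wells = [  # hypothetical (x_km, y_km, water-level decline in m)
    (0.0, 0.0, 5.2), (1.0, 0.0, 4.8), (0.0, 1.0, 6.1),
    (1.0, 1.0, 5.9), (0.5, 1.5, 6.4), (2.0, 0.5, 4.1),
]

def idw(x, y, points, power=2.0):
    """Inverse distance weighted interpolation; returns the exact observed
    value when (x, y) coincides with a data point."""
    num = den = 0.0
    for px, py, v in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)   # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Leave-one-out cross-validation RMSE, the kind of accuracy check the
# study runs for each of its seven interpolators.
errs = [(idw(px, py, wells[:i] + wells[i + 1:]) - v) ** 2
        for i, (px, py, v) in enumerate(wells)]
rmse = math.sqrt(sum(errs) / len(errs))
```

Running the same leave-one-out loop for each candidate interpolator and comparing the RMSE values is what lets the study conclude that simple Kriging fits these wells best.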
Directory of Open Access Journals (Sweden)
Ping Jiang
2015-01-01
Full Text Available Multidisciplinary design optimization (MDO) has been applied widely in the design of complex engineering systems. To ease MDO problems, analytical target cascading (ATC) organizes the MDO process into multiple levels according to the components of the engineering system, which provides a promising way to deal with MDO problems. ATC adopts a coordination strategy to coordinate the couplings between two adjacent levels in the design optimization process; however, existing coordination strategies in ATC suffer from complicated coordination processes and heavy computational cost. To overcome this problem, a quadratic exterior penalty function (QEPF) based ATC (QEPF-ATC) approach is proposed, in which QEPF is adopted as the coordination strategy. Moreover, since approximate models are widely adopted to replace expensive simulation models in MDO, a combined QEPF-ATC and Kriging model approach is further proposed, owing to the comprehensive performance, high approximation accuracy, and robustness of the Kriging model. Finally, geometric programming and reducer design cases are given to validate the applicability and efficiency of the proposed approach.
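The coordination idea behind QEPF can be shown on a toy 1-D constrained problem; the objective and constraint below are hypothetical, and the actual QEPF-ATC coordinates targets and responses across design levels rather than a single variable.

```python
def qepf(f, g, x, rho):
    """Quadratic exterior penalty for a constraint g(x) <= 0: feasible
    points are unpenalized, violations cost rho * max(0, g(x))**2."""
    return f(x) + rho * max(0.0, g(x)) ** 2

def golden_section_min(obj, lo=-5.0, hi=5.0, iters=100):
    """Derivative-free minimizer for a unimodal objective on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if obj(c) < obj(d):
            b = d
        else:
            a = c
    return (a + b) / 2

f = lambda x: x * x      # toy objective
g = lambda x: 1.0 - x    # toy constraint: x >= 1, written as g(x) <= 0
x_star = golden_section_min(lambda x: qepf(f, g, x, rho=1e4))
```

As the penalty weight rho grows, the unconstrained minimizer of the penalized function approaches the constrained optimum at x = 1 (here it sits at rho/(1 + rho)), which is why a quadratic exterior penalty can replace an explicit coordination loop.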
PACIAE 2.1: An Updated Issue of Parton and Hadron Cascade Model PACIAE 2.0
Institute of Scientific and Technical Information of China (English)
SA Ben-hao; ZHOU Dai-mei; YAN Yu-liang; DONG Bao-guo; CAI Xu
2013-01-01
We have updated the parton and hadron cascade model PACIAE 2.0 to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the transverse momentum pT of a generated particle or parton is randomly sampled, the px and py components are originally placed at random on the circle of radius pT. Now, it is put ...
Local and Nonlocal Regularization to Image Interpolation
Directory of Open Access Journals (Sweden)
Yi Zhan
2014-01-01
Full Text Available This paper presents an image interpolation model with local and nonlocal regularization. A nonlocal bounded variation (BV) regularizer is formulated with an exponential function of the gradient. It acts like the Perona-Malik equation; thus our nonlocal BV regularizer possesses the properties of the anisotropic diffusion equation and of a nonlocal functional. The local total variation (TV) regularizer dissipates image energy along the direction orthogonal to the gradient to avoid blurring image edges. The derived model efficiently reconstructs the real image, leading to a natural interpolation which reduces blurring and staircase artifacts. We present experimental results that prove the potential and efficacy of the method.
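The Perona-Malik behavior invoked above can be sketched as one explicit diffusion step; the parameters are illustrative, and the paper's full model, which couples this edge-stopping behavior with a local TV term inside an interpolation energy, is not reproduced here.

```python
import numpy as np

def perona_malik_step(u, dt=0.15, k=0.1):
    """One explicit Perona-Malik diffusion step with an exponential
    edge-stopping function (periodic boundaries via np.roll, for brevity).
    Differences across strong edges get near-zero diffusivity, so edges
    survive while small fluctuations are smoothed away."""
    def flux(d):
        return np.exp(-(d / k) ** 2) * d   # diffusivity times difference
    du = (flux(np.roll(u, -1, axis=0) - u) + flux(np.roll(u, 1, axis=0) - u)
          + flux(np.roll(u, -1, axis=1) - u) + flux(np.roll(u, 1, axis=1) - u))
    return u + dt * du

img = np.zeros((16, 16))
img[:, 8:] = 1.0                        # sharp step edge: preserved
edge_out = perona_malik_step(img)

checker = 0.5 + 0.02 * (-1.0) ** np.add.outer(np.arange(8), np.arange(8))
flat_out = perona_malik_step(checker)   # small oscillation: smoothed
```

The step leaves the unit-contrast edge essentially untouched while strongly damping the low-amplitude checkerboard, the selective smoothing that reduces blurring and staircase artifacts in the interpolation setting.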
Influence maximization in social networks under an independent cascade-based model
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who make the most of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions and those of their neighbors. The number of users who did not change positive opinions was used to determine positive influence. Corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
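IMIC-OC builds on the standard independent cascade (IC) process, in which each newly activated node gets a single chance to activate each inactive neighbour with some probability. A minimal sketch of that underlying spread simulation (not the IMIC-OC opinion-balancing extension; the graph, probability `p`, and seed choice are invented for illustration):

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Simulate one spread of the standard independent cascade model.

    graph: dict mapping node -> list of neighbour nodes.
    Each node activated in the previous round gets exactly one chance to
    activate each still-inactive neighbour with probability p.
    """
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Deterministic check on a hypothetical 4-node line graph: with p = 1 the
# cascade reaches every downstream node; with p = 0 only the seed stays active.
line = {0: [1], 1: [2], 2: [3], 3: []}
spread = independent_cascade(line, [0], p=1.0)
```

Influence-maximization methods such as the one in this abstract typically average many such Monte Carlo spreads to estimate a seed set's expected influence.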
Transient dynamics and food-web complexity in the Lotka-Volterra cascade model.
Chen, X.; Cohen, J. E.
2001-01-01
How does the long-term behaviour near equilibrium of model food webs correlate with their short-term transient dynamics? Here, simulations of the Lotka-Volterra cascade model of food webs provide the first evidence to answer this question. Transient behaviour is measured by resilience, reactivity, the maximum amplification of a perturbation and the time at which the maximum amplification occurs. Model food webs with a higher probability of local asymptotic stability may be less resilient and may have a larger transient growth of perturbations. Given a fixed connectance, the sizes and durations of transient responses to perturbations increase with the number of species. Given a fixed number of species, as connectance increases, the sizes and durations of transient responses to perturbations may increase or decrease depending on the type of link that is varied. Reactivity is more sensitive to changes in the number of donor-controlled links than to changes in the number of recipient-controlled links, while resilience is more sensitive to changes in the number of recipient-controlled links than to changes in the number of donor-controlled links. Transient behaviour is likely to be one of the important factors affecting the persistence of ecological communities. PMID:11345334
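Two of the transient measures named in this abstract have standard definitions for a community (Jacobian) matrix J: resilience is the negated largest real part of the eigenvalues of J, and reactivity is the largest eigenvalue of the symmetric part (J + Jᵀ)/2. A minimal sketch with an invented 2×2 Jacobian, not taken from the cascade model itself:

```python
import numpy as np

def resilience(J):
    """Asymptotic decay rate: -max Re(eig(J)); positive for a stable system."""
    return -np.max(np.linalg.eigvals(J).real)

def reactivity(J):
    """Maximum instantaneous amplification rate of a perturbation:
    the largest eigenvalue of the symmetric part (J + J^T)/2."""
    H = (J + J.T) / 2.0
    return np.max(np.linalg.eigvalsh(H))

# Hypothetical stable but non-normal Jacobian: all eigenvalues are negative
# (perturbations decay eventually), yet perturbations can grow transiently.
J = np.array([[-1.0, 0.0],
              [ 5.0, -2.0]])
```

Here the eigenvalues are -1 and -2 (resilience 1) while the symmetric part has a positive eigenvalue, so the system is reactive: exactly the dissociation between asymptotic stability and transient growth that the paper investigates.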
Market disruption, cascading effects, and economic recovery: a life-cycle hypothesis model.
Energy Technology Data Exchange (ETDEWEB)
Sprigg, James A.
2004-11-01
This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
Directory of Open Access Journals (Sweden)
J. T. dall'Amico
2012-03-01
Full Text Available For the validation of coarse resolution soil moisture products from missions such as the Soil Moisture and Ocean Salinity (SMOS) mission, hydrological modelling of soil moisture is an important tool. The spatial distribution of precipitation is among the most crucial input data for such models. Thus, reliable time series of precipitation fields are required, but these often need to be interpolated from data delivered by scarcely distributed gauge station networks. In this study, a commercial precipitation product derived by Meteomedia AG from merging radar and gauge data is introduced as a novel means of adding the promising area-distributed information given by a radar network to the more accurate, but point-like measurements from a gauge station network. This precipitation product is first validated against an independent gauge station network. Further, the novel precipitation product is assimilated into the hydrological land surface model PROMET for the Upper Danube Catchment in southern Germany, one of the major SMOS calibration and validation sites in Europe. The modelled soil moisture fields are compared to those obtained when the operational interpolation from gauge station data is used to force the model. The results suggest that the assimilation of the novel precipitation product can lead to deviations of modelled soil moisture in the order of 0.15 m^{3} m^{−3} at small spatial (∼1 km^{2}) and short temporal (∼1 day) resolutions. As expected, after spatial aggregation to the coarser grid on which SMOS data are delivered (~195 km^{2}), these differences are reduced to the order of 0.04 m^{3} m^{−3}, which is the accuracy benchmark for SMOS. The results of both model runs are compared to brightness temperatures measured by the airborne L-band radiometer EMIRAD during the SMOS Validation Campaign 2010. Both comparisons yield equally good correlations, confirming the model's ability to
Perry, Bruce A.; Anderson, Molly S.
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station Water Processor Assembly to form a complete water recovery system for future missions. A preliminary chemical process simulation was previously developed using Aspen Custom Modeler® (ACM), but it could not simulate thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. This paper describes modifications to the ACM simulation of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version can be used to model thermal startup and predicts the total energy consumption of the CDS. The simulation has been validated for both NaCl solution and pretreated urine feeds and no longer requires retuning when operating parameters change. The simulation was also used to predict how internal processes and operating conditions of the CDS affect its performance. In particular, it is shown that the coefficient of performance of the thermoelectric heat pump used to provide heating and cooling for the CDS is the largest factor in determining CDS efficiency. Intrastage heat transfer affects CDS performance indirectly through effects on the coefficient of performance.
Parallelizing Backpropagation Neural Network Using MapReduce and Cascading Model.
Liu, Yang; Jing, Weizhe; Xu, Lixiong
2016-01-01
Artificial Neural Network (ANN) is a widely used algorithm in pattern recognition, classification, and prediction fields. Among a number of neural networks, the backpropagation neural network (BPNN) has become the most famous one due to its remarkable function approximation ability. However, a standard BPNN frequently employs a large number of sum and sigmoid calculations, which may result in low efficiency when dealing with large volumes of data. Parallelizing BPNN using distributed computing technologies is therefore an effective way to improve the algorithm's efficiency. However, traditional parallelization may lead to accuracy loss, and although several refinements have been proposed, it is still difficult to find a good compromise between efficiency and precision. This paper presents a parallelized BPNN based on the MapReduce computing model, which supplies advanced features including fault tolerance, data replication, and load balancing. In addition, to improve precision, this paper introduces a cascading-model-based classification approach that helps to refine the classification results. The experimental results indicate that the presented parallelized BPNN is able to offer high efficiency whilst maintaining excellent precision in enabling large-scale machine learning.
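The reason a synchronous MapReduce parallelization of BPNN need not lose accuracy for a single update is that the full-batch gradient of a sum-form loss is the sum of per-shard gradients: map computes each shard's gradient independently, reduce adds them. A minimal numpy sketch of that map/reduce step for a one-hidden-layer network (the network sizes, data, and sharding are invented for illustration; this is not the paper's cascading classifier):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpnn_gradients(W1, W2, X, y):
    """Backpropagation gradients of the summed squared error on one shard."""
    h = sigmoid(X @ W1)                   # hidden activations
    out = sigmoid(h @ W2)                 # network output
    d2 = (out - y) * out * (1.0 - out)    # output-layer delta
    d1 = (d2 @ W2.T) * h * (1.0 - h)      # hidden-layer delta
    return X.T @ d1, h.T @ d2             # grads w.r.t. W1, W2

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5, 1))

# map: gradients per data shard; reduce: elementwise sum of the shard gradients
shards = np.array_split(np.arange(60), 3)
mapped = [bpnn_gradients(W1, W2, X[idx], y[idx]) for idx in shards]
gW1 = sum(g1 for g1, _ in mapped)
gW2 = sum(g2 for _, g2 in mapped)
```

The reduced gradients match the full-batch gradients up to floating-point rounding, so each synchronous distributed update equals the sequential one; accuracy concerns arise only with asynchronous or model-averaging variants.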
Extended density-matrix model applied to silicon-based terahertz quantum cascade lasers
Dinh, T. V.; Valavanis, A.; Lever, L. J. M.; Ikonić, Z.; Kelsall, R. W.
2012-06-01
Silicon-based terahertz quantum cascade lasers (QCLs) offer potential advantages over existing III-V devices. Although coherent electron transport effects are known to be important in QCLs, they have never been considered in Si-based device designs. We describe a density-matrix transport model that is designed to be more general than those in previous studies and to require less a priori knowledge of electronic band structure, allowing its use in semiautomated design procedures. The basis of the model includes all states involved in interperiod transport, and our steady-state solution extends beyond the rotating-wave approximation by including dc and counterpropagating terms. We simulate the potential performance of bound-to-continuum Ge/SiGe QCLs and find that devices with 4-5-nm-thick barriers give the highest simulated optical gain. We also examine the effects of interdiffusion between Ge and SiGe layers; we show that if it is taken into account in the design, interdiffusion lengths of up to 1.5 nm do not significantly affect the simulated device performance.
Predator prey oscillations in a simple cascade model of drift wave turbulence
Energy Technology Data Exchange (ETDEWEB)
Berionni, V.; Guercan, Oe. D. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, CNRS, 91128 Palaiseau Cedex (France)
2011-11-15
A reduced three shell limit of a simple cascade model of drift wave turbulence, which emphasizes nonlocal interactions with a large scale mode, is considered. It is shown to describe both the well known predator prey dynamics between the drift waves and zonal flows and to reduce to the standard three wave interaction equations. Here, this model is considered as a dynamical system whose characteristics are investigated. The analytical solutions for the purely nonlinear limit are given in terms of the Jacobi elliptic functions. An approximate analytical solution involving Jacobi elliptic functions and exponential growth is computed using scale separation for the case of unstable solutions that are observed when the energy injection rate is high. The fixed points of the system are determined, and the behavior around these fixed points is studied. The system is shown to display periodic solutions corresponding to limit cycle oscillations, apparently chaotic phase space orbits, as well as unstable solutions that grow slowly while oscillating rapidly. The period doubling route to transition to chaos is examined.
Uluca, Basak
This dissertation aims to achieve two goals. The first is to model the strategic interactions of firms that own cascaded reservoir-hydro plants in oligopolistic and mixed oligopolistic hydrothermal electricity generation markets. Although competition in thermal generation has been extensively modeled since the beginning of deregulation, the literature on competition in hydro generation is still limited; in particular, equilibrium models of oligopoly that study the competitive behavior of firms that own reservoir-hydro plants along the same river in hydrothermal electricity generation markets are still under development. In competitive markets, when the reservoirs are located along the same river, the water released from an upstream reservoir for electricity generation becomes input to the immediate downstream reservoir, which may be owned by a competitor, for current or future use. To capture the strategic interactions among firms with cascaded reservoir-hydro plants, the Upstream-Conjecture approach is proposed. Under the Upstream-Conjecture approach, a firm with an upstream reservoir-hydro plant assumes that firms with downstream reservoir-hydro plants will respond to changes in the upstream firm's water release by adjusting their water release by the same amount. The results of the Upstream Conjecture experiments indicate that firms that own upstream reservoirs in a cascade may have incentive to withhold or limit hydro generation, forcing a reduction in the utilization of the downstream hydro generation plants that are owned by competitors. Introducing competition to hydroelectricity generation markets is challenging and ownership allocation of the previously state-owned cascaded reservoir-hydro plants through privatization can have significant impact on the competitiveness of the generation market. The second goal of the dissertation is to extract empirical guidance about best policy choices for the ownership of the state-owned generation plants, including the
Interpolation of diffusion weighted imaging datasets
DEFF Research Database (Denmark)
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W
2014-01-01
Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer ... anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal ... to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional
$\\gamma$-ray and X-ray luminosities from spin-powered pulsars in the full polar cap cascade model
Zhang, B; Zhang, Bing; Harding, Alice K.
2000-01-01
We modify the conventional curvature radiation (inverse Compton scattering) + synchrotron radiation polar cap cascade model by including the inverse Compton scattering of the higher generation pairs. Within the framework of the space-charge-limited-flow acceleration model with frame-dragging proposed by Harding & Muslimov (1998), such a full polar cap cascade scenario can well reproduce the $L_\\gamma \\propto (L_{\\rm sd})^{1/2}$ and the $L_x \\sim 10^{-3} L_{\\rm sd}$ dependences observed from the known spin-powered pulsars. According to this model, the ``pulsed'' soft ROSAT-band X-rays from most of the millisecond pulsars might be of thermal origin, if there are no strong multipole magnetic components near their surfaces.
Directory of Open Access Journals (Sweden)
T. Flament
2013-03-01
Full Text Available We describe a major subglacial lake drainage close to the ice divide in Wilkes Land, East Antarctica, and the subsequent cascading of water underneath the ice sheet toward the coast. To analyze the event, we combined altimetry data from several sources and bedrock data. We estimated the total volume of water that drained from Lake CookE2 by differencing digital elevation models (DEMs) derived from ASTER and SPOT5 stereo-imagery. With 5.2 ± 0.5 km3, this is the largest single subglacial drainage event reported so far in Antarctica. Elevation differences between ICESat laser altimetry and the SPOT5 DEM indicate that the discharge lasted approximately 2 yr. A 13-m uplift of the surface, corresponding to a refilling of about 0.64 ± 0.32 km3, was observed between the end of the discharge in October 2008 and February 2012. Using Envisat radar altimetry, with its high 35-day temporal resolution, we monitored the subsequent filling and drainage of connected subglacial lakes located downstream. In particular, a transient temporal signal can be detected within the theoretical 500-km long flow paths computed with the BEDMAP2 data set. The volume of water traveling in this wave is in agreement with the volume that drained from Lake CookE2. These observations contribute to a better understanding of the water transport beneath the East Antarctic ice sheet.
Modeling of dilute nitride cascaded quantum well solar cells for high efficiency photovoltaics
Vijaya, G.; Alemu, A.; Freundlich, A.
2013-03-01
III-V dilute nitride multi-quantum-well structures are currently promising candidates for high-efficiency subcells; in a 4-junction configuration they could yield 1 sun efficiencies greater than 40%. However, for a conventional deep-well design the characteristic carrier escape times could exceed those of radiative recombination, limiting the current output of the cell, as prior experiments have indicated. To increase current extraction, we evaluate here the performance of a cascaded quantum well design in which a thermally assisted resonant tunneling process is used to accelerate carrier escape. The quantum efficiency of a p-i-n subcell, in which a periodic sequence of quantum wells with well and barrier thicknesses adjusted for sequential extraction is embedded, is calculated using a 2D drift-diffusion model that takes into account the absorption properties of the resulting MQWs. The calculation also accounts for the E-field-induced modifications of the absorption properties and for quantization in the quantum wells. The results are then used to calculate efficiencies for the proposed 4-junction design, and indicate the potential for reaching efficiencies above 42% (1 sun) and above 50% (500 suns) under AM1.5.
Distance in spatial interpolation of daily rain gauge data
Directory of Open Access Journals (Sweden)
B. Ahrens
2006-01-01
Full Text Available Spatial interpolation of rain gauge data is important in forcing of hydrological simulations or evaluation of weather predictions, for example. This paper investigates the application of statistical distance, like one minus common variance of observation time series, between data sites instead of geographical distance in interpolation. Here, as a typical representative of interpolation methods the inverse distance weighting interpolation is applied and the test data is daily precipitation observed in Austria. Choosing statistical distance instead of geographical distance in interpolation of available coarse network observations to sites of a denser network, which is not reporting for the interpolation date, yields more robust interpolation results. The most distinct performance enhancement is in or close to mountainous terrain. Therefore, application of statistical distance in the inverse distance weighting interpolation or in similar methods can parsimoniously densify the currently available observation network. Additionally, the success further motivates search for conceptual rain-orography interaction models as components of spatial rain interpolation algorithms in mountainous terrain.
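The substitution this abstract proposes, statistical distance (one minus the common variance of two observation time series) in place of geographical distance inside inverse distance weighting, can be sketched directly. The gauge series, the exponent, and the numerical guard below are invented for illustration:

```python
import numpy as np

def idw(values, distances, power=2.0, eps=1e-12):
    """Inverse distance weighted estimate; eps guards against zero distance."""
    w = 1.0 / np.maximum(np.asarray(distances, float), eps) ** power
    return float(np.sum(w * np.asarray(values, float)) / np.sum(w))

def statistical_distance(series_a, series_b):
    """One minus the common variance (squared correlation) of two series."""
    r = np.corrcoef(series_a, series_b)[0, 1]
    return 1.0 - r * r

# Hypothetical daily precipitation history at three gauges plus the target site.
t = np.arange(30.0)
target_history = np.sin(t / 3.0)
gauges = [2.0 * target_history + 1.0,                   # shares all variance
          np.cos(t),                                    # weakly related
          np.random.default_rng(1).normal(size=30)]     # unrelated noise

dists = [statistical_distance(g, target_history) for g in gauges]
todays_obs = [5.0, 100.0, -7.0]   # today's gauge readings to interpolate from
estimate = idw(todays_obs, dists)
```

Because gauge 0 shares all of the target site's variance, its statistical distance is ~0 and it dominates the weights, so the estimate is ≈5.0 irrespective of geography; this is the sense in which statistically similar (rather than merely nearby) stations drive the interpolation.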
Perry, Bruce; Anderson, Molly
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.
Directory of Open Access Journals (Sweden)
J. Kent
2012-07-01
Full Text Available The accurate modelling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper aims to investigate the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse resolution grids to ensure the downscale cascade of tracer variance. These results are compared with a high resolution reference solution, which is simulated on a resolution fine enough to resolve the tracer during the test. The fourth test has two separate flow cells, and is designed so that any tracer in the Western Hemisphere should not pass into the Eastern Hemisphere. This is to test whether the diffusion in transport schemes, often in the form of explicit hyper-diffusion terms or implicit through monotonic limiters, contains unphysical mixing.
An intercomparison of three of the dynamical cores of the National Center for Atmospheric Research's Community Atmosphere Model version 5 is performed. The results show that the finite-volume (CAM-FV and spectral element (CAM-SE dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL. Each scheme tested produces unphysical mass in the Eastern Hemisphere of the separate cells test.
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2006-01-01
Extracting horizontal and vertical displacement components from the InSAR LOS (line of sight) displacement has been a difficult problem ever since ground surface deformation began to be monitored with the InSAR technique. Having first fitted the firsthand field investigation data with a least-squares model and obtained a preliminary result, this paper presents, based on the previous field data and the InSAR data, a linear cubic interpolation model that fits the character of the earthquake fracture zone well. This model inherits the precision of the investigation data and, moreover, makes use of advantages of the InSAR technique such as quasi-real-time observation, continuous recording and all-weather measurement. Accordingly, by means of this model the paper presents a method to decompose the InSAR slant-range co-seismic displacement (i.e. the LOS change) into horizontal and vertical displacement components. Approaching the real motion step by step, a series of curves representing the co-seismic horizontal and vertical displacement components along the main earthquake fracture zone are finally obtained approximately.
MINIMAL RATIONAL INTERPOLATION AND PRONY'S METHOD
ANTOULAS, AC; WILLEMS, JC
1990-01-01
A new method is proposed for dealing with the rational interpolation problem. It is based on the reachability of an appropriately defined pair of matrices. This method permits a complete clarification of several issues raised, but not answered, by the so-called Prony method of fitting a linear model
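The Prony method this abstract refers to fits a signal with a sum of exponentials by first solving a linear-prediction system for the characteristic polynomial and then rooting it; the amplitudes follow from a Vandermonde system. A minimal sketch of the classical textbook procedure (not the authors' reachability-based algorithm; the test signal is invented):

```python
import numpy as np

def prony(y, p):
    """Fit y[n] = sum_k c_k * z_k**n with p exponential modes."""
    y = np.asarray(y, float)
    N = len(y)
    # linear prediction: y[n] = -(a_1*y[n-1] + ... + a_p*y[n-p]) for n >= p
    A = np.array([y[n - 1::-1][:p] for n in range(p, N)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    # the modes are the roots of z^p + a_1*z^(p-1) + ... + a_p
    modes = np.roots(np.concatenate(([1.0], a)))
    # amplitudes from the Vandermonde system V c = y
    V = np.vander(modes, N, increasing=True).T
    amps, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return modes, amps

# Hypothetical noise-free signal: y[n] = 2*(0.9)^n + 1*(0.5)^n
n = np.arange(20)
y = 2.0 * 0.9**n + 1.0 * 0.5**n
modes, amps = prony(y, 2)
```

On noise-free data the recurrence is satisfied exactly and the modes {0.5, 0.9} and amplitudes {1, 2} are recovered; with noise the root locations become sensitive, one of the model-fitting issues the rational interpolation framework addresses.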
Energy Technology Data Exchange (ETDEWEB)
Rusov, V.D. [Department of Theoretical and Experimental Nuclear Physics, Odessa National Polytechnic University, Shevchenko av. 1, Odessa 65044 (Ukraine)]. E-mail: siiis@te.net.ua; Sharf, I.V. [Department of Theoretical and Experimental Nuclear Physics, Odessa National Polytechnic University, Shevchenko av. 1, Odessa 65044 (Ukraine)
2006-01-09
The inhomogeneous cascade-stochastic model of multiple hadron production in inelastic p-bar p and pp interactions at high energies is proposed. In this model, inclusive rapidity distributions of secondary hadrons are used as input data. A one-parameter cascade-stochastic multiplicity distribution is obtained, where the adjustable parameter has the sense of the height of the Feynman plateau of the inclusive rapidity distribution of secondary hadrons in one hadron shower. It is shown that all currently known experimental data concerning both the multiplicity and the rapidity distributions of secondary particles in inelastic hh processes in the energy range √s = 30-1800 GeV, and also the ratio of cumulant moments to factorial moments of the multiplicity distributions in the energy range √s = 200-900 GeV, are described with high accuracy by the one-parameter cascade-stochastic model of multiple hadron production. The explicit asymptotic form of the distributions, as the well-known Polyakov-Dokshitzer scaling function for quark and gluon jets, is obtained by fitting the predicted multiplicity distributions at energies √s = 14-30 TeV. A simplified quantitative analysis is presented and a qualitative explanation of the behaviour of forward-backward multiplicity correlations is given within the framework of the proposed model.
Interpolation of diffusion weighted imaging datasets.
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R
2014-12-01
Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with a typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue compartments.
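The pre-reconstruction interpolation step evaluated in this abstract amounts to separable upsampling of each diffusion-weighted volume onto a finer grid before fitting the tensor model. The sketch below uses plain trilinear interpolation in numpy for self-containment; the "conventional higher-order" methods of the paper (e.g. B-spline) follow the same separable pattern with a different 1-D kernel, and the tiny volume is invented:

```python
import numpy as np

def upsample_axis(vol, axis, factor):
    """Linear interpolation along one axis onto a grid `factor` times finer."""
    n = vol.shape[axis]
    old = np.arange(n, dtype=float)
    new = np.linspace(0.0, n - 1.0, factor * (n - 1) + 1)
    return np.apply_along_axis(lambda v: np.interp(new, old, v), axis, vol)

def upsample_volume(vol, factor=2):
    """Separable (here trilinear) upsampling of a 3-D image volume."""
    out = np.asarray(vol, float)
    for ax in range(out.ndim):       # interpolate along each axis in turn
        out = upsample_axis(out, ax, factor)
    return out

# Hypothetical 2x2x2 image block: a linear intensity ramp is reproduced exactly.
vol = np.arange(8.0).reshape(2, 2, 2)
fine = upsample_volume(vol, factor=2)   # shape (3, 3, 3)
```

Original voxel centres are preserved and each new mid-voxel is a weighted average of its neighbours, which illustrates both the finer geometrical representation and the potential bias in quantitative values that the abstract cautions about.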
Improvements in quantum cascade laser performance through comprehensive modeling and experiments
Howard, Scott Sheridan
Prior to the invention of the quantum cascade (QC) laser, many applications based on mid-infrared (mid-IR) laser absorption spectroscopy could not be explored. Development of the QC laser provided an inherently compact, semiconductor-based, and tunable mid-IR source that could be used for laser absorption spectroscopy. Additionally, QC lasers can be designed to emit at a specific wavelength within a very wide range, between 3 and 30 μm, and can be fabricated to operate single-mode to clearly scan mid-IR absorption "fingerprints" [1]. This allows lasers to be tailored to the exact wavelength of an absorption feature. Two examples of absorption spectroscopy experiments were carried out as part of this dissertation and are described herein: C60 in space and dissolved gases in living tissue. Although QC lasers enable application development in the mid-IR, they are inefficient and heat dissipation is problematic. First-generation QC lasers relied on either bulky cryogenic cooling systems for continuous-wave operation or large, expensive pulse generators [2]. Later, advances in QC laser design, growth, and fabrication led to room-temperature continuous-wave operation [3]. These advances promoted additional applications of QC lasers where cryogenic cooling was impossible or highly inconvenient. This dissertation presents comprehensive self-consistent models permitting the optimization of high-operating-temperature QC lasers. These models employ strategies counter to those used in designing low-temperature devices and were used to design, fabricate, and demonstrate high-performance QC lasers. By self-consistently solving the temperature-dependent threshold current density and heat equations, including temperature-dependent thermal conductivities, phonon lifetimes, thermal "backfilling," thermionic emission, and energy level broadening, we calculated the effects of doping level, material choice, and waveguide layer thickness on the laser threshold performance
Manioudaki, Maria E; Poirazi, Panayiota
2013-01-01
Over the last decade, numerous computational methods have been developed to infer and model biological networks. Transcriptional networks in particular have attracted significant attention due to their critical role in cell survival. The majority of network inference methods use genome-wide experimental data to search for modules of genes with coherent expression profiles and common regulators, often ignoring the multi-layer structure of transcriptional cascades. Modeling methodologies, on the other hand, assume a given network structure and vary significantly in their algorithmic approach, ranging from over-simplified representations (e.g., Boolean networks) to detailed, but computationally expensive, network simulations (e.g., with differential equations). In this work we use Artificial Neural Networks (ANNs) to model transcriptional regulatory cascades that emerge during the stress response in Saccharomyces cerevisiae and extend over three layers. We confine the structure of the ANNs to match the structure of the biological networks as determined by gene expression, DNA-protein interaction and experimental evidence provided in publicly available databases. Trained ANNs are able to predict the expression profile of 11 target genes across multiple experimental conditions with a correlation coefficient >0.7. When time-dependent interactions between upstream transcription factors (TFs) and their indirect targets are also included in the ANNs, accurate predictions are achieved for 30/34 target genes. Moreover, heterodimer formation is taken into account. We show that ANNs can be used to (1) accurately predict the expression of downstream genes in a 3-layer transcriptional cascade based on the expression of their indirect regulators and (2) infer the condition- and time-dependent activity of various TFs, including during heterodimer formation. We show that a three-layer regulatory cascade whose structure is determined by co-expressed gene modules and their
Energy Technology Data Exchange (ETDEWEB)
Steinbeck, T.; Rohr, J. [m.u.t. GmbH, Wedel (Germany)
2005-06-01
Quantum cascade lasers represent an almost ideal light source for infrared gas analysis. They allow sensitive and selective measurements in the mid-infrared. The detection of combustion gases for early fire detection represents an interesting field of application, where further technological benefits are shown to advantage. The focus of this report is on the technical realization of a functional model and the electronic components. (orig.)
The impact of the topology on cascading failures in a power grid model
Koç, Yakup; Warnier, Martijn; Mieghem, Piet Van; Kooij, Robert E.; Brazier, Frances M. T.
2014-05-01
Cascading failures are one of the main reasons for large-scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly assessed by purely topological approaches that fail to capture the essence of electric power flow. This paper proposes a metric, the effective graph resistance, to relate the topology of a power grid to its robustness against cascading failures by deliberate attacks, while also taking fundamental characteristics of the electric power grid into account, such as power flow allocation according to Kirchhoff's laws. Experimental verification on synthetic power systems shows that the proposed metric reflects grid robustness accurately. The proposed metric is used to optimize a grid topology for a higher level of robustness. To demonstrate its applicability, the metric is applied to the IEEE 118-bus power system to improve its robustness against cascading failures.
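The metric named above can be sketched numerically: the effective graph resistance of a graph with N nodes equals N times the sum of the reciprocals of the nonzero Laplacian eigenvalues. The 3-node path below is a toy topology for illustration, not one of the paper's test systems.

```python
# Sketch: effective graph resistance from the Laplacian spectrum,
# R = N * sum(1 / mu_i) over the nonzero eigenvalues mu_i.
# Toy 3-bus "line" topology, assumed for illustration.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
mu = np.linalg.eigvalsh(L)               # eigenvalues in ascending order: 0, 1, 3
R = len(A) * np.sum(1.0 / mu[1:])        # skip the zero eigenvalue

print(R)  # ~4.0, equal to the sum of pairwise effective resistances
```

A lower R indicates a better-connected, more redundant topology, which is why the paper can use it as an optimization target for robustness.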
Classification based polynomial image interpolation
Lenke, Sebastian; Schröder, Hartmut
2008-02-01
Due to the fast migration of high-resolution displays into home and office environments there is a strong demand for high-quality picture scaling. This is caused on the one hand by large picture sizes and on the other hand by the enhanced visibility of picture artifacts on these displays [1]. There are many proposals for enhanced spatial interpolation adaptively matched to picture content such as edges. The drawback of these approaches is the typically integer and often limited interpolation factor. To achieve rational factors, combinations of adaptive and non-adaptive linear filters exist, but due to the non-adaptive step the overall quality is notably limited. In this paper we present a content-adaptive polyphase interpolation method which uses "offline"-trained filter coefficients and "online" linear filtering depending on a simple classification of the input situation. Furthermore, we present a new approach to a content-adaptive interpolation polynomial, which allows arbitrary polyphase interpolation factors at runtime and further improves the overall interpolation quality. The main goal of our new approach is to optimize interpolation quality by adapting higher-order polynomials directly to the image content. In addition, we derive filter constraints for enhanced picture quality. Furthermore, we extend the classification-based filtering to the temporal dimension in order to use it for intermediate image interpolation.
A disposition of interpolation techniques
Knotters, M.; Heuvelink, G.B.M.
2010-01-01
A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated v
Bárta, Miroslav; Büchner, Jörg; Karlický, Marian; Skála, Jan
2011-08-01
Magnetic reconnection is commonly considered to be a mechanism of solar (eruptive) flares. A deeper study of this scenario reveals, however, a number of open issues. Among them is the fundamental question of how the magnetic energy is transferred from large, accumulation scales to plasma scales where its actual dissipation takes place. In order to investigate this transfer over a broad range of scales, we address this question by means of a high-resolution MHD simulation. The simulation results indicate that the magnetic-energy transfer to small scales is realized via a cascade of consecutively smaller and smaller flux ropes (plasmoids), analogous to the vortex-tube cascade in (incompressible) fluid dynamics. Both tearing and (driven) "fragmenting coalescence" processes are equally important for the consecutive fragmentation of the magnetic field (and associated current density) into smaller elements. At the later stages, a dynamic balance between tearing and coalescence processes reveals a steady (power-law) scaling typical of cascading processes. It is shown that cascading reconnection also addresses other open issues in solar-flare research, such as the duality between the regular large-scale picture of (eruptive) flares and the observed signatures of fragmented (chaotic) energy release, as well as the huge number of accelerated particles. Indeed, spontaneous current-layer fragmentation and the formation of multiple channelized dissipative/acceleration regions embedded in the current layer appear to be intrinsic to the cascading process. The multiple small-scale current sheets may also facilitate the acceleration of a large number of particles. The structure, distribution, and dynamics of the embedded potential acceleration regions in a current layer fragmented by cascading reconnection are studied and discussed.
Energy Technology Data Exchange (ETDEWEB)
Hernandez, Andrew M. [Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States); Boone, John M., E-mail: john.boone@ucdmc.ucdavis.edu [Departments of Radiology and Biomedical Engineering, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States)
2014-04-15
Purpose: Monte Carlo methods were used to generate lightly filtered high-resolution x-ray spectra spanning from 20 kV to 640 kV. Methods: X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Results: Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R{sup 2}) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, “Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector,” Phys. Med. Biol. 24, 505–517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9
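The spline step in the abstract can be sketched as follows. This is a hedged illustration only: the fluence table is a synthetic placeholder (the actual model used 35 simulated basis spectra), and the uniform 20 kV basis grid is an assumption made for simplicity.

```python
# Sketch: per-energy-bin cubic splines across tube potential, then
# evaluation at 1 kV steps to generate the dense spectrum set.
# The fluence values are fake placeholders, not simulated spectra.
import numpy as np
from scipy.interpolate import CubicSpline

kv_basis = np.arange(20, 641, 20)                       # assumed basis potentials (kV)
fluence = np.sqrt(kv_basis)[:, None] * np.ones((1, 5))  # fake (kV x energy-bin) table

spline = CubicSpline(kv_basis, fluence, axis=0)         # one spline per energy bin
kv_all = np.arange(20, 641)                             # 621 potentials at 1 kV steps
spectra = spline(kv_all)

print(spectra.shape)  # (621, 5)
```

Because cubic splines interpolate exactly at the knots, the generated set reproduces each basis spectrum at its own tube potential.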
Dawson, Nathan J; Crescimanno, Michael
2013-01-01
We develop a model for off-resonant microscopic cascading of scalar polarizabilities using a self-consistent field approach, and use it to study the effects of boundaries on mesoscopic systems of nonlinear polarizable atoms and molecules. We find that higher-order susceptibilities can be enhanced by increasing the surface-to-volume ratio through reducing the distance between boundaries perpendicular to the linear polarization. We also show lattice scaling effects on the effective nonlinear refractive indices for Gaussian beams, and illustrate finite size effects on dipole field distributions in films subject to long-wavelength propagating fields. We derive simplified expressions for the microscopic cascading of the nonlinear optical response in guest-host systems.
Spectra of produced particles at CERN SPS heavy-ion collisions from a parton-cascade model
Srivastava, D K; Srivastava, Dinesh Kumar; Geiger, Klaus
1998-01-01
We evaluate the spectra of produced particles (pions, kaons, antiprotons) from partonic cascades which may develop in the wake of heavy-ion collisions at CERN SPS energies and which may hadronize by formation of clusters that decay into hadrons. Using the experimental data obtained by the NA35 and NA44 collaborations for S+S and Pb+Pb collisions, we conclude that the Monte Carlo implementation of the recently developed parton-cascade/cluster-hadronization model provides a reasonable description of the distributions of the particles produced in such collisions. While the rapidity distribution of the mid-rapidity protons is described reasonably well, their transverse momentum distribution falls too rapidly compared to the experimental values, implying a significant effect of final-state scattering among the produced hadrons, which has so far been neglected.
Numerical modeling of energy-separation in cascaded Leontiev tubes with a central body
Directory of Open Access Journals (Sweden)
Makarov Maksim
2017-01-01
Full Text Available Designs of two- and three-cascaded Leontiev tubes are proposed in the paper. The results of numerical simulation of the energy separation in such tubes are presented. The efficiency parameters are determined in direct flows of helium-xenon coolant with low Prandtl number.
Curve interpolation based on Catmull-Clark subdivision scheme
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
An efficient algorithm for curve interpolation is proposed. The algorithm can produce a subdivision surface that interpolates predefined cubic B-spline curves by applying the Catmull-Clark scheme to a polygonal mesh containing "symmetric zonal meshes", which possess some special properties. Many kinds of curve interpolation problems can be dealt with by this algorithm, such as interpolating a single open or closed curve, or a mesh of nonintersecting or intersecting curves. The interpolating surface is C2 everywhere except at a finite number of points. At the same time, sharp creases can also be modeled on the limit subdivision surface by duplicating the vertices of the tagged edges of the initial mesh, i.e. the surface is only C0 along the cubic B-spline curve defined by the tagged edges. Being simple and easy to implement, this method can be used for product shape design and graphics software development.
The accuracy assessment in areal interpolation: an empirical investigation
Institute of Scientific and Technical Information of China (English)
2008-01-01
Areal interpolation is the process of transferring data from source zones to target zones. While method development remains a top research priority in areal interpolation, the accuracy assessment aspect also begs for attention. This paper reports an empirical experience of probing an areal interpolation method to highlight the power and potential pitfalls of accuracy assessment. A kriging-based interpolation algorithm is evaluated by several approaches. It is found that accuracy assessment is a powerful tool for understanding an interpolation method, e.g., the utility of ancillary data and semivariogram modeling in kriging in our case study. However, different assessment methods and the spatial units on which assessment is conducted can lead to rather different results. The typical practice of assessing accuracy at the source zone level may overestimate interpolation accuracy. Assessment at the target zone level is suggested as a supplement.
Directory of Open Access Journals (Sweden)
Wenjun Zheng
Full Text Available Despite many experimental and computational studies of the gating transition of pentameric ligand-gated ion channels (pLGICs, the structural basis of how ligand binding couples to channel gating remains unknown. By using a newly developed interpolated elastic network model (iENM, we have attempted to compute a likely transition pathway from the closed- to the open-channel conformation of pLGICs as captured by the crystal structures of two prokaryotic pLGICs. The iENM pathway predicts a sequence of structural events that begins at the ligand-binding loops and is followed by the displacements of two key loops (loop 2 and loop 7 at the interface between the extracellular and transmembrane domain, the tilting/bending of the pore-lining M2 helix, and subsequent movements of M4, M3 and M1 helices in the transmembrane domain. The predicted order of structural events is in broad agreement with the Φ-value analysis of α subunit of nicotinic acetylcholine receptor mutants, which supports a conserved core mechanism for ligand-gated channel opening in pLGICs. Further perturbation analysis has supported the critical role of certain intra-subunit and inter-subunit interactions in dictating the above sequence of events.
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Janusz Konrad
2009-01-01
Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, the disparity of each pixel is estimated from different image pairs depending on the computed visibility map. Finally, the luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, on both synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high-resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Ince Serdar
2008-01-01
Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, the disparity of each pixel is estimated from different image pairs depending on the computed visibility map. Finally, the luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, on both synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high-resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
Inferring Network Structure from Cascades
Ghonge, Sushrut
2016-01-01
Many physical, biological and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we solve the dynamics of general cascade processes. We then offer three topological inversion methods to infer the structure of any directed network given a set of cascade arrival times. Our forward and inverse formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for 5 different cascade models.
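The observable side of this inference problem, cascade arrival times on a hidden directed graph, can be sketched with a minimal forward simulation. The four-node graph and the deterministic activation probability below are assumptions for illustration, not one of the paper's five cascade models.

```python
# Sketch: forward simulation of a cascade on a directed network,
# recording activation (arrival) times -- the data from which the
# inverse methods would recover the edges. Toy graph, assumed p.
import random

edges = {0: [1, 2], 1: [3], 2: [3], 3: []}   # directed adjacency list (hidden in practice)
p = 1.0                                      # activation probability; deterministic here

def cascade(seed):
    times, frontier, t = {seed: 0}, [seed], 0
    while frontier:
        t += 1
        nxt = []
        for u in frontier:
            for v in edges[u]:
                if v not in times and random.random() < p:
                    times[v] = t
                    nxt.append(v)
        frontier = nxt
    return times

print(cascade(0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

With p < 1, repeated runs from different seeds yield the ensemble of arrival-time sets that a topological inversion method would take as input.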
[Hybrid interpolation for CT metal artifact reduction].
Yu, Xiao-e; Li, Chan-juan; Chen, Wu-fan
2009-01-01
Numerous interpolation-based methods have been described for reducing metal artifacts in CT images, but owing to the limits of the interpolation methods, interpolation alone often fails to meet clinical demands. In this paper, we describe the use of quartic polynomial interpolation in the reconstruction of the images of the metal implant, followed by linear interpolation to eliminate the streaks. The two interpolation methods are combined according to given weights to achieve good results.
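A one-dimensional sketch of the hybrid idea is given below. This is not the authors' implementation: the projection row, the gap location, and the equal blending weight are all invented for illustration.

```python
# Sketch of the hybrid scheme on one sinogram row: fill the
# metal-corrupted gap with a weighted blend of quartic-polynomial
# and linear interpolation. Data and weight are assumed.
import numpy as np

x = np.arange(20)
row = np.sin(x / 3.0)                  # fake projection row
gap = (x >= 8) & (x <= 12)             # bins shadowed by the metal implant

coef = np.polyfit(x[~gap], row[~gap], 4)        # quartic fit to the clean bins
quartic = np.polyval(coef, x[gap])
linear = np.interp(x[gap], x[~gap], row[~gap])  # straight-line fill across the gap

w = 0.5                                # assumed blending weight
row[gap] = w * quartic + (1 - w) * linear
```

The quartic term follows the smooth trend of the surrounding projections, while the linear term damps the overshoot a high-order polynomial alone can introduce; the weight trades the two off.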
Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2017-02-01
A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
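The scalar core of such an optimal-interpolation merge can be sketched as below. The anomaly values and error variances are invented for illustration; the actual scheme uses a full error covariance matrix estimated from a running 31-year window per grid cell.

```python
# Sketch: scalar optimal-interpolation update merging a model
# background with a proxy observation. Each source is weighted by
# the inverse of its (assumed) error variance.
x_b = 0.2                    # model background temperature anomaly, degC (assumed)
y = 0.8                      # tree-ring reconstructed anomaly, degC (assumed)
var_b, var_o = 0.25, 0.16    # background / observation error variances (assumed)

K = var_b / (var_b + var_o)  # scalar optimal gain
x_a = x_b + K * (y - x_b)    # merged analysis value
var_a = (1 - K) * var_b      # analysis error variance

print(x_a, var_a)
```

The analysis error variance is smaller than either input variance, which mirrors the abstract's finding that the merged dataset beats both the simulation and the reconstruction alone.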
Liu Yu-Jen; Hung Jen-Pan; Chen Shang-I; Lin Cheng-Wei
2016-01-01
An electric arc is a discharge phenomenon caused by particular electrical events and arc-producing facilities in power systems, for example the occurrence of short-circuit faults in feeders and the use of electric arc furnaces for steel-making. All of these electric arcs have a highly nonlinear nature and can be considered a significant source of power quality problems. To investigate the impacts of electric arcs in power quality studies, the development of electric arc models for simulati...
Institute of Scientific and Technical Information of China (English)
闻学泽
2001-01-01
This study preliminarily reveals the variable rupture-scale behavior of earthquakes on active faults of the Chinese mainland: on an individual fault portion, the rupture scale of earthquakes varies from cycle to cycle, and hence earthquake strength changes with time. This variation follows no necessary tendency. On defining the relative size of rupture scales, a statistical result shows that ruptures of the same scale in two successive cycles have the lowest probability. When the rupture scale in the preceding cycle is "small", the probability of the following rupture being "large" is as high as 0.48. When the rupture scale in the preceding cycle is "middle", the probability of the succeeding rupture being "small" or "large" is 0.69 or 0.25, respectively. When the rupture scale in the preceding cycle is "large", the probability of the following rupture being "large" is zero, and is 0.36 or 0.64 for the following rupture being "small" or "middle", respectively. The author introduces and improves the cascade-rupturing model, and uses it to describe the variability and complexity of rupture scale on individual fault portions. Basic features of some active strike-slip faults on which cascade ruptures have occurred are summarized. Based on these features, the author proposes principles of cascade-rupture segmentation for this type of fault. As an example of application, the author segments one portion of the Anninghe fault zone, western Sichuan, for its future cascade rupture, and further assesses the probable strength and corresponding probability of the coming earthquake.
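The cycle-to-cycle statistics quoted in this abstract can be arranged as a transition matrix over rupture scales. The matrix below simply transcribes the stated probabilities: the unspecified entries after a "small" rupture are left as gaps, and the same-scale entry after "middle" is inferred from its row summing to one.

```python
# Transition matrix over rupture scales (small, middle, large);
# entry [i, j] is the probability that the cycle after a scale-i
# rupture produces a scale-j rupture, as stated in the abstract.
import numpy as np

P = np.array([
    [np.nan, np.nan, 0.48],  # after "small": only P(large)=0.48 is stated
    [0.69,   0.06,   0.25],  # after "middle": P(middle)=1-0.69-0.25 (inferred)
    [0.36,   0.64,   0.00],  # after "large": a repeated "large" has probability 0
])

# The fully specified rows are proper probability distributions.
assert abs(P[1].sum() - 1.0) < 1e-12
assert abs(P[2].sum() - 1.0) < 1e-12
```

Viewed this way, the diagonal entries (same scale twice in a row) are indeed the smallest in each fully specified row, matching the study's headline statistic.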
DEFF Research Database (Denmark)
Gaiotti, Marco; Rizzo, Cesare M.; Branner, Kim
2014-01-01
of composite laminates of wind turbine blades, results were found valuable for the marine industry as well, because similar laminates are used for the hull shell and stiffeners. Systematic calculations were carried out to assess the effects of an embedded delamination on the buckling load, varying the size...... and through-thickness position of the delamination. Different finite element modeling strategies were considered and validated against the experimental results. The one applying the 9-node MITC shell elements was found to match the experimental data, although failure modes were different for the two...
Directory of Open Access Journals (Sweden)
P. Phaochoo
2016-01-01
Full Text Available In this paper, the fractional Black–Scholes equation arising in a financial problem is solved using numerical techniques for the option price of a European call or put under the Black–Scholes model. The MLPG and implicit finite difference methods are used to discretize the governing equation in the option price and in the time variable, respectively. In the MLPG method, the shape function is constructed by a moving kriging approximation, and the Dirac delta function is chosen as the test function. Numerical examples for a variety of parameter values are also included.
Institute of Scientific and Technical Information of China (English)
丁丽媛; 练秋生
2011-01-01
Single-sensor digital cameras use a Color Filter Array (CFA) to sample different color information; color image CFA interpolation reconstructs a full RGB image from the sampled data. For Bayer-pattern images, a CFA interpolation algorithm is proposed based on a contourlet local Gaussian model and Total Variation (TV). To further improve the quality of edge interpolation, the sparsity of the image gradient is incorporated into the interpolation process, with the Color Total Variation (CTV) introduced to measure it. Experimental results show that the proposed algorithm significantly outperforms existing interpolation algorithms in terms of both Peak Signal-to-Noise Ratio (PSNR) and subjective visual quality.
Covert, Michael
2015-01-01
This book is intended for software developers, system architects and analysts, big data project managers, and data scientists who wish to deploy big data solutions using the Cascading framework. You must have a basic understanding of the big data paradigm and should be familiar with Java development techniques.
Interpolating Operators for Multiapproximation
Directory of Open Access Journals (Sweden)
Eman S. Bhaya
2010-01-01
Full Text Available Problem statement: There are no simple definitions of operators for best multiapproximation and best one-sided multiapproximation which work for any measurable function in Lp for p>0. This study investigated operators that are good for best multiapproximation and best one-sided multiapproximation. Approach: We first introduced some direct results related to the approximation of continuous functions by Hermite-Fejér interpolation based on the zeros of Chebyshev polynomials of the first or second kind, in terms of the usual modulus of continuity. They were then improved to spaces Lp for pn(f of measurable functions, that operator based on the zeros of Chebyshev polynomials of the first kind, and prove that for any measurable function defined on Lp[-1,1]d the sequence Hn(f converges uniformly to f. Results: The resulting operators were defined for functions f such that f(k, k = 0,1, is of bounded variation. Then, the order of best one-sided trigonometric approximation to bounded measurable functions in terms of the average modulus of smoothness was characterized. Estimates characterizing the order of best one-sided approximation in terms of the k-th averaged modulus of smoothness for any function in spaces Lp, pp[-1,1]d by defining a new operator for one-sided approximation, and a direct theorem for best one-sided multiapproximation in terms of the first-order averaged moduli of smoothness was proved. Conclusion: The proposed method successfully constructs operators for best multiapproximation and best one-sided multiapproximation for any measurable function in Lp for p>0.
Bárta, Miroslav; Karlický, Marian; Skála, Jan
2010-01-01
Magnetic reconnection is commonly considered a mechanism of solar (eruptive) flares. A deeper study of this scenario reveals, however, a number of open issues. Among them is the fundamental question of how the magnetic energy is transferred from large, accumulation scales to plasma scales where its actual dissipation takes place. In order to investigate this transfer over a broad range of scales, we address this question by means of a high-resolution MHD simulation. The simulation results indicate that the magnetic-energy transfer to small scales is realized via a cascade of consecutively smaller and smaller flux ropes (plasmoids), in analogy with the vortex-tube cascade in (incompressible) fluid dynamics. Both tearing and (driven) coalescence processes are equally important for the consecutive fragmentation of the magnetic field (and associated current density) into smaller elements. At the later stages, a dynamic balance between tearing and coalescence processes reveals a steady (power-law) scaling typical for ca...
Liu, Qing
2016-01-01
As an accurate and computationally efficient mesoscopic numerical method, the lattice Boltzmann (LB) method has achieved great success in simulating microscale rarefied gas flows. In this paper, an LB method based on the cascaded collision operator is presented to simulate microchannel gas flows in the transition flow regime. The Bosanquet-type effective viscosity is incorporated into the cascaded lattice Boltzmann (CLB) method to account for rarefaction effects. In order to obtain accurate simulations and match the Bosanquet-type effective viscosity, a combined bounce-back/specular-reflection scheme with a modified second-order slip boundary condition is employed in the CLB method. The present method is applied to study gas flow in a microchannel with periodic boundary conditions and gas flow in a long microchannel with pressure boundary conditions over a wide range of Knudsen numbers. The predicted results, including the velocity profile, the mass flow rate, and the non-linear pressure deviatio...
Nesterenok, A. V.; Naidenov, V. O.
2015-12-01
The interaction of primary cosmic rays with the Earth's atmosphere is investigated using the simulation toolkit GEANT4. Two reference lists of physical processes, QGSP_BIC_HP and FTFP_BERT_HP, are used in the simulations of the cosmic ray cascade in the atmosphere. The cosmic ray neutron fluxes are calculated for a mean level of solar activity, at high geomagnetic latitudes and at sea level. The calculated fluxes are compared with the published results of other analogous simulations and with experimental data.
Interpolation in Spaces of Functions
Directory of Open Access Journals (Sweden)
K. Mosaleheh
2006-03-01
Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces.
Geological Visualization System with GPU-Based Interpolation
Huang, L.; Chen, K.; Lai, Y.; Chang, P.; Song, S.
2011-12-01
There has been a large amount of research using parallel-processing GPUs to accelerate computation. In Near Surface Geology, efficient interpolation is critical for proper interpretation of measured data, and the appropriate interpolation method for generating proper results depends on factors such as the density of the measured locations and the estimation model. A fast interpolation process is therefore needed to efficiently find a proper interpolation algorithm for a set of collected data. However, a general CPU framework has to process each computation sequentially and is not efficient enough to handle the large number of interpolations generally needed in Near Surface Geology. Careful observation of the interpolation process shows that the computation for each grid point is independent of all other computations; the GPU parallel framework should therefore be an efficient technology to accelerate this critical interpolation process. Thus, in this paper we design a geological visualization system whose core includes a set of interpolation algorithms: Nearest Neighbor, Inverse Distance and Kriging. All these interpolation algorithms are implemented using both the CPU framework and the GPU framework. The comparison between the CPU and GPU implementations in terms of precision and processing speed shows that parallel computation can accelerate the interpolation process and also demonstrates the possibility of using a GPU-equipped personal computer to replace an expensive workstation. Immediate update at the measurement site is the dream of geologists. In the future, the parallel and remote computation ability of the cloud will be explored to make mobile computation at the measurement site possible.
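As a rough CPU-side sketch of the inverse distance method named in the abstract (NumPy only; the station coordinates, values, power parameter and query points are hypothetical):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: each query point receives a weighted
    average of known values, with weights proportional to 1/distance**power."""
    # pairwise distances between query points and known stations
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero
    w /= w.sum(axis=1, keepdims=True)     # normalise weights per query point
    return w @ values

# hypothetical measured stations and query points
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0])
est = idw(pts, vals, np.array([[0.0, 0.0]]))   # query at a known station
```

A GPU version would evaluate the same weight matrix in parallel, one query point (grid point) per thread, which is why the per-point independence noted above matters.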
Li, Z. W.
2012-05-01
The propagation delay imposed on radar signals travelling through the troposphere has been one of the major limitations for the applications of high-precision repeat-pass Interferometric Synthetic Aperture Radar (InSAR). In this paper, we first present an elevation-dependent atmospheric correction model for Advanced Synthetic Aperture Radar (ASAR—the instrument aboard the ENVISAT satellite) interferograms with Medium Resolution Imaging Spectrometer (MERIS) integrated water vapour (IWV) data. Then, using four ASAR interferometric pairs over Southern California as examples, we conduct atmospheric correction experiments with cloud-free MERIS IWV data. The results show that after the correction the rms differences between InSAR and GPS are reduced by 69.6 per cent, 29 per cent, 31.8 per cent and 23.3 per cent, respectively, for the four selected interferograms, with an average improvement of 38.4 per cent. Most importantly, after the correction, six distinct deformation areas can be identified: Long Beach–Santa Ana Basin, Pomona–Ontario, San Bernardino and Elsinore basin, with deformation velocities along the radar line-of-sight (LOS) direction ranging from −20 mm yr−1 to −30 mm yr−1 and on average around −25 mm yr−1, and Santa Fe Springs and Wilmington, with a slightly lower deformation rate of about −10 mm yr−1 along LOS. Finally, through the method of stacking, we generate a mean deformation velocity map of Los Angeles over a period of 5 yr. The deformation is quite consistent with the historical deformation of the area. Thus, using cloud-free MERIS IWV data to correct synchronized ASAR interferograms can significantly reduce the atmospheric effects in the interferograms and thus better capture ground deformation and other geophysical signals.
Model approach for stress induced steroidal hormone cascade changes in severe mental diseases.
Volko, Claus D; Regidor, Pedro A; Rohr, Uwe D
2016-03-01
Stress was described by Cushing and Selye as an adaptation to a foreign stressor by the anterior pituitary increasing ACTH, which stimulates the release of glucocorticoid and mineralocorticoid hormones. The question is raised whether stress can induce additional steroidal hormone cascade changes in severe mental diseases (SMD), since stress is the common denominator. A systematic literature review was conducted in PubMed, where the steroidal hormone cascade of patients with SMD was compared to the impact of increasing stress on the steroidal hormone cascade (a) in healthy amateur marathon runners with no overtraining; (b) in healthy well-trained elite soldiers of a ranger training unit in North Norway, who were under extreme physical and mental stress, sleep deprivation, and insufficient calories for 1 week; and (c) in soldiers suffering from post-traumatic stress disorder (PTSD), schizophrenia (SI), and bipolar disorders (BD). (a) When healthy men and women are exposed to moderate physical stress for 3-5 days, as in the case of amateur marathon runners, only a few steroidal hormones are altered. A mild reduction in testosterone, cholesterol and triglycerides is detected in blood and in saliva, but there was no decrease in estradiol. Conversely, there is an increase of the glucocorticoids, aldosterone and cortisol. Cellular immunity, but not specific immunity, is reduced for a short time in these subjects. (b) These changes are also seen in healthy elite soldiers exposed to extreme physical and mental stress, but to a somewhat greater extent. For instance, aldosterone is increased by a factor of three. (c) In SMD, an irreversible effect on the entire steroidal hormone cascade is detected. Hormones at the top of the cascade, such as cholesterol, dehydroepiandrosterone (DHEA), aldosterone and other glucocorticoids, are increased. However, testosterone and estradiol and their metabolites, and other hormones at the lower end of the cascade, seem to be reduced.
Cascade trailing-edge noise modeling using a mode-matching technique and the edge-dipole theory
Roger, Michel; François, Benjamin; Moreau, Stéphane
2016-11-01
An original analytical approach is proposed to model the broadband trailing-edge noise produced by high-solidity outlet guide vanes in an axial turbomachine. The model is formulated in the frequency domain and first in two dimensions for a preliminary assessment of the method. In a first step the trailing-edge noise sources of a single vane are shown to be equivalent to the onset of a so-called edge dipole, the direct field of which is expanded in a series of plane-wave modes. A criterion for the distance of the dipole to the trailing-edge and a scaling of its amplitude is defined to yield a robust model. In a second step the diffraction of each plane-wave mode is derived considering the cascade as an array of bifurcated waveguides and using a mode-matching technique. The cascade response is finally synthesized by summing the diffracted fields of all cut-on modes to yield upstream and downstream sound power spectral densities. The obtained spectral shapes are physically consistent and the present results show that upstream radiation is typically 3 dB higher than downstream radiation, which has been experimentally observed previously. Even though the trailing-edge noise sources are not vane-to-vane correlated their radiation is strongly determined by a cascade effect that consequently must be accounted for. The interest of the approach is that it can be extended to a three-dimensional annular configuration without resorting to a strip theory approach. As such it is a promising and versatile alternative to previously published methods.
Institute of Scientific and Technical Information of China (English)
段祝庚; 肖化顺; 袁伟湘
2016-01-01
[Objective] Based on the characteristics of discrete point clouds in forest areas, canopy height models (CHM) were constructed using different interpolation methods; the methods were then compared, analyzed and evaluated to provide a reference for choosing interpolation methods for forest canopy height models. [Method] Discrete point cloud data from 30 m × 30 m sample plots were used as experimental data. Four interpolation methods, B-spline (B-Spline), ordinary kriging (OK), triangulation with linear interpolation (TLI) and inverse distance weighting (IDW), were applied in the open-source software SAGA-GIS to construct forest canopy height models. The plan views, three-dimensional views, profiles and pixel statistics of the resulting CHMs were compared and analyzed, and the search-radius parameter of IDW interpolation was discussed, compared and analyzed. [Result] For forest-area point cloud data that are spatially uniform but contain abrupt height changes, B-Spline interpolation filled all void regions but over-filled the canopy gaps, and the maximum CHM pixel value deviated clearly from the original data; the CHM produced by TLI appeared fragmented; OK over-smoothed the image, producing a blurred CHM; IDW appropriately filled and smoothed the canopy top without over-smoothing the canopy edges, preserving abrupt height changes while keeping canopy gaps from being over-filled. A suitable search radius should be chosen for IDW interpolation; 1.5-2.5 times the original point cloud spacing is appropriate. [Conclusion] IDW interpolation outperforms the B-Spline, OK and TLI methods; the CHM it generates more accurately reflects the true natural form of the forest canopy and facilitates the extraction of forest parameters.
A model of the TeV flare of Cygnus X-1: electron acceleration and extended pair cascades
Zdziarski, A A; Bednarek, W
2008-01-01
We consider theoretical models of the emission of TeV photons by Cyg X-1 during a flare discovered by the MAGIC detector. We study the acceleration of electrons to energies sufficient for TeV emission, and find that the emission site is allowed to be close to the black hole. We then consider pair absorption in the photon field of the central X-ray source and a surrounding accretion disc, and find that its optical depth remains significant up to energies of ~3 TeV, beyond which photons travel far away from the star, initiating a spatially extended pair cascade. This qualitatively explains the observed TeV spectrum, though still not its exact shape.
Optimization of contrast-enhanced breast imaging: Analysis using a cascaded linear system model.
Hu, Yue-Houng; Scaduto, David A; Zhao, Wei
2017-01-01
Contrast-enhanced (CE) breast imaging involves the injection of contrast agents (i.e., iodine) to increase the conspicuity of malignant lesions. CE imaging may be used in conjunction with digital mammography (DM) or digital breast tomosynthesis (DBT) and has shown promise in improving diagnostic specificity. Both CE-DM and CE-DBT techniques require optimization as clinical diagnostic tools. Physical factors, including x-ray spectra, subtraction technique, and the signal from iodine contrast, must be considered to provide the greatest object detectability and image quality. We developed a cascaded linear system model (CLSM) for the optimization of CE-DM and CE-DBT employing dual energy (DE) subtraction or temporal (TE) subtraction. We have previously developed a CLSM for DBT implemented with an a-Se flat panel imager (FPI) and a filtered backprojection (FBP) reconstruction algorithm. The model is used to track image quality metrics - modulation transfer function (MTF) and noise power spectrum (NPS) - at each stage of the imaging chain. In this study, the CLSM is extended for CE breast imaging. The effect of the x-ray spectrum (varied by changing tube potential and the filter) and of the DE and TE subtraction techniques on breast structural noise was studied and included as a deterministic source of noise in the CLSM. From the two-dimensional (2D) and three-dimensional (3D) MTF and NPS, the ideal observer signal-to-noise ratio (SNR), also known as the detectability index (d'), may be calculated. Using d' as a figure of merit (FOM), we discuss the optimization of CE imaging for the task of iodinated contrast object detection within structured backgrounds. Increasing x-ray energy was determined to decrease the magnitude of the structural noise but not its correlation. By performing DE subtraction, the magnitude of the structural noise was further reduced at the expense of increased stochastic (quantum and electronic) noise. TE subtraction exhibited essentially no residual structural noise at the
MDI Synoptic Charts of Magnetic Field: Interpolation of Polar Fields
Liu, Yang; Hoeksema, J. T.; Zhao, X.; Larson, R. M.
2007-05-01
In this poster, we compare various methods for interpolation of the polar field for the MDI synoptic charts of the magnetic field. By examining the coronal and heliospheric magnetic field computed from the synoptic charts based on a Potential Field Source Surface (PFSS) model, and by comparing the heliospheric current sheets and footpoints of open fields with observations, we conclude that the coronal and heliospheric fields calculated from the synoptic charts are sensitive to the polar field interpolation, and that a time-dependent interpolation method using the observed polar fields is the best among the seven methods investigated.
Interpolating sliding mode observer for a ball and beam system
Luai Hammadih, Mohammad; Hosani, Khalifa Al; Boiko, Igor
2016-09-01
The principle of an interpolating sliding mode observer is introduced in this paper. The observer incorporates multiple linear observers through interpolation of multiple estimates, which is treated as a type of adaptation. The principle is then applied to the ball and beam system for observation of the slope of the beam from the measurement of the ball position. A linearised model of the ball and beam system is developed using multiple linearisation points. The observer dynamics are implemented in the Matlab/Simulink Real-Time Workshop environment. Experiments conducted on the ball and beam experimental setup demonstrate excellent performance of the designed novel interpolating (adaptive) observer.
Pappenberger, F.; K. J. Beven; N. M. Hunter; Bates, P. D.; B. T. Gouweleeuw; Thielen, J.; A. P. J. de Roo
2005-01-01
The political pressure on the scientific community to provide medium- to long-term flood forecasts has increased in the light of recent flooding events in Europe. Such demands can be met by a system consisting of three different model components (weather forecast, rainfall-runoff forecast and flood inundation forecast) which are all liable to considerable uncertainty in the input, output and model parameters. Thus, an understanding of cascaded uncertainties is a necessa...
Modelling and Extremum Seeking Control of a Cascade of Two Anaerobic Bioreactors
Directory of Open Access Journals (Sweden)
Ivan Simeonov
2011-05-01
Full Text Available The principle of extremum seeking control has been applied to a cascade of two anaerobic bioreactors, using the dilution rate as the control action and the biogas flow rates as the measured outputs to be maximized. In all cases, a maximum biogas flow rate with a sensible decrease of the general output depollution parameter (compared to the case of a single bioreactor) was obtained, starting from different initial conditions. With the same algorithm, good performance was obtained in the presence of variations of the inlet organics. Its application in biotechnology may result in substantial economic benefits.
Spatial interpolation of monthly mean air temperature data for Latvia
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for appropriate and qualitative analysis of local characteristics. The surface observation station network in Latvia currently consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features of the spatial distribution of temperature values across Latvia, a high-quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution containing parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
Pattern-oriented memory interpolation of sparse historical rainfall records
Matos, J. P.; Cohen Liechti, T.; Portela, M. M.; Schleiss, A. J.
2014-03-01
The pattern-oriented memory (POM) is a novel historical rainfall interpolation method that explicitly takes the time dimension into account in order to interpolate areal rainfall maps. The method is based on the idea that rainfall patterns exist and can be identified over a certain area by means of non-linear regressions. Having previously been benchmarked against a vast array of interpolation methods using proxy satellite data under different time and space availabilities, in the scope of the present contribution POM is applied to rain gauge data in order to produce areal rainfall maps. Tested over the Zambezi River Basin for the period from 1979 to 1997 (accurate satellite rainfall estimates based on spaceborne instruments are not available for dates prior to 1998), the novel pattern-oriented memory historical interpolation method has revealed itself a better alternative than Kriging or Inverse Distance Weighting in the light of a Monte Carlo cross-validation procedure. Superior in most metrics to the other tested interpolation methods, in terms of the Pearson correlation coefficient and bias the accuracy of POM's historical interpolation results is even comparable with that of recent satellite rainfall products. The new method holds the possibility of calculating detailed, well-performing daily areal rainfall estimates even in the case of sparse rain gauging grids. Besides their performance, the similarity to satellite rainfall estimates inherent in POM interpolations can contribute to substantially extending the length of the rainfall series used in hydrological models and water availability studies in remote areas.
An Algorithm for Interpolating Ship Motion Vectors
Directory of Open Access Journals (Sweden)
Qinyou Hu
2014-03-01
Full Text Available Interpolation of ship motion vectors can be used to estimate lost ship AIS dynamic information, which is important for replaying marine accidents and for analysing marine traffic data. Previous methods can only interpolate a ship's position, not its course and speed. In this paper, a vector function is used to express the relationship between the ship's time and space coordinates; the tangent of the vector function and its rate of change express the physical characteristics of the ship's course, speed and acceleration. The given AIS dynamic information can be applied to calculate the parameters of the ship's vector function, and the interpolation model for ship motion vectors is then developed to estimate the lost ship dynamic information at any given moment. Experimental results show that the ship motion vector function can depict the characteristics of ship motion accurately and that the model can estimate not only the ship's position but also its course and speed at any given moment with small errors.
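The vector-function idea can be illustrated with a cubic Hermite sketch (an assumption for illustration, not necessarily the authors' exact formulation): two AIS reports supply position and velocity (speed and course), and the derivative of the fitted curve yields course and speed at any intermediate time. All names and data below are hypothetical.

```python
import numpy as np

def hermite(p0, v0, p1, v1, t):
    """Cubic Hermite curve through positions p0, p1 with velocities
    v0, v1 at t=0 and t=1; returns position and velocity at time t."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    pos = h00*p0 + h10*v0 + h01*p1 + h11*v1
    # derivatives of the basis functions give the interpolated velocity
    dpos = (6*t**2 - 6*t)*p0 + (3*t**2 - 4*t + 1)*v0 \
         + (-6*t**2 + 6*t)*p1 + (3*t**2 - 2*t)*v1
    return pos, dpos

# two hypothetical AIS fixes (x east, y north; velocity per unit time)
p0, v0 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p1, v1 = np.array([1.0, 1.0]), np.array([0.0, 1.0])
pos, vel = hermite(p0, v0, p1, v1, 0.5)
speed = np.linalg.norm(vel)                       # interpolated speed
course = np.degrees(np.arctan2(vel[0], vel[1]))   # course measured from north
```

The curve reproduces both fixes exactly, so the estimate degrades gracefully as the gap between reports shrinks.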
Interpolation by two-dimensional cubic convolution
Shi, Jiazheng; Reichenbach, Stephen E.
2003-08-01
This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power-spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images -- presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
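For reference, the one-dimensional piecewise-cubic kernel underlying the traditional separable approach described above can be sketched as follows (Keys' kernel with the common parameter a = -1/2; the sample data are hypothetical, and the paper's improved method is the non-separable generalisation of this baseline):

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Piecewise-cubic convolution kernel: equals 1 at s=0,
    0 at all other integers, and vanishes for |s| >= 2."""
    s = np.abs(s)
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2)*s[m1]**3 - (a + 3)*s[m1]**2 + 1
    out[m2] = a*s[m2]**3 - 5*a*s[m2]**2 + 8*a*s[m2] - 4*a
    return out

def cubic_interp(samples, x):
    """Interpolate uniformly spaced samples at fractional position x
    using the four nearest samples (edges clamped)."""
    i = int(np.floor(x))
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)
    w = keys_kernel(x - np.arange(i - 1, i + 3, dtype=float))
    return float(np.dot(w, samples[idx]))

samples = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # f(x) = x^2 at the integers
val = cubic_interp(samples, 2.5)                # exact for quadratics: 6.25
```

A separable 2D interpolator applies this kernel along rows and then columns; the paper's contribution is tuning a fully non-separable 2D cubic instead.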
Investigation of Positively Curved Blade in Compressor Cascade Based on Transition Model
Chen, Shaowen; Lan, Yunhe; Zhou, Zhihua; Wang, Songtao
2016-06-01
Experiments and numerical simulations of flow transition in a compressor cascade with a positively curved blade were carried out at low speed. In the experimental investigation, the outlet aerodynamic parameters are measured using a five-hole aerodynamic probe, and ink-trace flow visualization is applied to the cascade surface. The effects of transition flow on boundary layer development, three-dimensional flow separation and aerodynamic performance are studied. The feasibility of a commercial computational fluid dynamics code is validated, and the numerical results show good agreement with the experimental data. Positive blade curving intensifies the radial force from the endwalls to the mid-span near the suction surface, which leads to a smaller intermittent region, lower turbulence intensity and a shorter radial height of the separation bubble near the endwalls, but has little influence on the flow near the mid-span. The large passage vortex is divided into two smaller shedding vortices under the impact of the radial pressure gradient due to the positively curved blade. The new concentrated shedding vortex results in an increase in the turbulence intensity and secondary flow loss of the corresponding region.
Long term reliability study and life time model of quantum cascade lasers
Xie, Feng; Nguyen, Hong-Ky; Leblanc, Herve; Hughes, Larry; Wang, Jie; Wen, Jianguo; Miller, Dean J.; Lascola, Kevin
2016-09-01
Here, we present results of quantum cascade laser lifetime tests under various aging conditions including an accelerated life test. The total accumulated life time exceeds 1.5 × 106 device hours. The longest single device aging time was 46 500 hours without failure in the room temperature aging test. Four failures were found in a group of 19 devices subjected to the accelerated life test with a heat-sink temperature of 60 °C and a continuous-wave current of 1 A. Failure mode analyses revealed that thermally induced oxidation of InP in the semi-insulating layer is the cause of failure. An activation energy of 1.2 eV is derived from the dependence of the failure rate on laser core temperature. The mean time to failure of the quantum cascade lasers operating at a typical condition with the current density of 5 kA/cm2 and heat-sink temperature of 25 °C is expected to be 809 000 hours.
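Given the reported activation energy of 1.2 eV, the Arrhenius acceleration between two operating temperatures can be computed as below; the specific temperatures used are illustrative, not the paper's measured laser core temperatures:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between a stress temperature and a use
    temperature for an Arrhenius failure mechanism with activation
    energy ea_ev (temperatures in degrees Celsius)."""
    t_use = t_use_c + 273.15        # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# illustrative use vs. stress core temperatures
af = arrhenius_af(1.2, t_use_c=60.0, t_stress_c=100.0)
```

With a high activation energy such as 1.2 eV, even a modest rise in core temperature multiplies the failure rate substantially, which is what makes an elevated-temperature accelerated life test informative.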
Cross sections of proton- and neutron-induced reactions by the Liège intranuclear cascade model
Chen, Jian; Dong, Tiekuang; Ren, Zhongzhou
2016-06-01
The purpose of the paper is mainly to test the validity of the Liège intranuclear cascade (INCL) model in calculating the cross sections of proton-induced reactions for cosmogenic nuclei using the newly compiled database of proton cross sections. The model calculations of 3He display the rising tendency of cross sections with the increase of energy, in accordance with the experimental data. Meanwhile, the differences between the theoretical results and experimental data of production cross sections (10Be and 26Al) are generally within a factor of 3, meaning that the INCL model works quite well for the proton-induced reactions. Based on the good agreement, we predict the production cross sections of 26Al from reactions n + 27Al, n + 28Si, and n + 40Ca and those of 10Be from reactions n + 16O and n + 28Si. The results also show a good agreement with a posteriori excitation functions.
Image Interpolation Through Surface Reconstruction
Institute of Scientific and Technical Information of China (English)
ZHANG Ling; LI Xue-mei
2013-01-01
Reconstructing an HR (high-resolution) image that preserves the intrinsic image structures from its LR (low-resolution) counterpart is highly challenging. This paper proposes a new surface reconstruction algorithm applied to image interpolation. The interpolation surface for the whole image is generated by putting all the quadratic polynomial patches together. In order to eliminate jaggies along edges, a new weight function containing edge information is incorporated into the patch reconstruction procedure as a constraint. Extensive experimental results demonstrate that our method produces better results across a wide range of scenes in terms of both quantitative evaluation and subjective visual quality.
Herron, Seth; Williams, Eric
2013-08-06
Subsidy programs for new energy technologies are motivated by the experience curve: increased adoption of a technology leads to learning and economies of scale that lower costs. Geographic differences in fuel prices and climate lead to large variability in the economic performance of energy technologies. The notion of cascading diffusion is that regions with favorable economic conditions serve as the basis to build scale and reduce costs so that the technology becomes attractive in new regions. We develop a model of cascading diffusion and implement via a case study of residential solid oxide fuel cells (SOFCs) for combined heating and power. We consider diffusion paths within the U.S. and internationally. We construct market willingness-to-pay curves and estimate future manufacturing costs via an experience curve. Combining market and cost results, we find that for rapid cost reductions (learning rate = 25%), a modest public subsidy can make SOFC investment profitable for 20-160 million households. If cost reductions are slow however (learning rate = 15%), residential SOFCs may not become economically competitive. Due to higher energy prices in some countries, international diffusion is more favorable than domestic, mitigating much of the uncertainty in the learning rate.
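The experience-curve arithmetic behind the scenario comparison above can be sketched as follows; the initial cost and production volumes are hypothetical, and only the standard definition (unit cost falls by the learning rate for each doubling of cumulative output) is assumed:

```python
import math

def experience_cost(c0, cum, cum0, learning_rate):
    """Unit cost after cumulative production `cum`, starting from cost c0
    at cumulative production cum0; cost falls by `learning_rate`
    for every doubling of cumulative output."""
    b = math.log2(1.0 - learning_rate)  # progress exponent (negative)
    return c0 * (cum / cum0) ** b

# one doubling at a 25% learning rate leaves 75% of the initial cost
c1 = experience_cost(10000.0, 2.0, 1.0, 0.25)  # 7500.0
```

The gap between the 25% and 15% scenarios compounds: after several doublings the two rates imply very different unit costs, which is why the learning rate dominates the diffusion outcome in the abstract.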
DEFF Research Database (Denmark)
Liu, Fenghai; Rasmussen, Christian Jørgen; Pedersen, Rune Johan Skullerud
1998-01-01
Integrated optical N×N wavelength routers based on arrayed-waveguide gratings (AWGs) are likely to become key devices in future WDM networks. A cascade of wavelength routers is formed when several N×N networks are interconnected. Due to the nonideal transfer function of physical AWGs, a node ... will receive not only the signal from the node it is connected to, but also cross talk at the same wavelength from the other nodes. In a cascade of AWG routers, the cross talk will accumulate and thereby limit the number of stages. The Gaussian cross talk model is considered to be a worst-case model when ... for investigation of router cascades. Very good agreement is demonstrated between the measured penalty and the penalty predicted from our improved Gaussian cross talk model ...
Williams, Kate E; Berthelsen, Donna; Walker, Sue; Nicholson, Jan M
2017-01-01
This article documents the longitudinal and reciprocal relations among behavioral sleep problems and emotional and attentional self-regulation in a population sample of 4,109 children participating in Growing Up in Australia: The Longitudinal Study of Australian Children (LSAC)-Infant Cohort. Maternal reports of children's sleep problems and self-regulation were collected at five time-points from infancy to 8-9 years of age. Longitudinal structural equation modeling supported a developmental cascade model in which sleep problems have a persistent negative effect on emotional regulation, which in turn contributes to ongoing sleep problems and poorer attentional regulation in children over time. Findings suggest that sleep behaviors are a key target for interventions that aim to improve children's self-regulatory capacities.
Guse, Björn; Kail, Jochem; Radinger, Johannes; Schröder, Maria; Kiesel, Jens; Hering, Daniel; Wolter, Christian; Fohrer, Nicola
2015-11-15
Climate and land use changes affect the hydro- and biosphere at different spatial scales. These changes alter hydrological processes at the catchment scale, which impact hydrodynamics and habitat conditions for biota at the river reach scale. In order to investigate the impact of large-scale changes on biota, a cascade of models at different scales is required. Using scenario simulations, the impact of climate and land use change can be compared along the model cascade. Such a cascade of consecutively coupled models was applied in this study. Discharge and water quality are predicted with a hydrological model at the catchment scale. The hydraulic flow conditions are predicted by hydrodynamic models. The habitat suitability under these hydraulic and water quality conditions is assessed based on habitat models for fish and macroinvertebrates. This modelling cascade was applied to predict and compare the impacts of climate and land use changes at different scales and finally to assess their effects on fish and macroinvertebrates. Model simulations revealed that the magnitude and direction of change differed along the modelling cascade. Whilst the hydrological model predicted a relevant decrease of discharge due to climate change, the hydraulic conditions changed less. Generally, the habitat suitability for fish decreased, but this was strongly species-specific and suitability even increased for some species. In contrast to climate change, the effect of land use change on discharge was negligible. However, land use change had a stronger impact on the modelled nitrate concentrations, affecting the abundances of macroinvertebrates. The scenario simulations for the two organism groups illustrated that the direction and intensity of changes in habitat suitability are highly species-dependent. Thus, a joint model analysis of different organism groups combined with the results of hydrological and hydrodynamic models is recommended to assess the impact of climate and land use changes on
Precipitation interpolation and corresponding uncertainty assessment using copulas
Bardossy, A.; Pegram, G. G.
2012-12-01
Spatial interpolation of rainfall over different time and spatial scales is necessary in many applications of hydrometeorology. The specific problems encountered in rainfall interpolation include: the large number of calculations which need to be performed automatically; the quantification of the influence of topography, usually the most influential of the exogenous variables; how to use observed zero (dry) values in interpolation, because their proportion increases the shorter the time interval; the need to estimate a reasonable uncertainty of the modelled point/pixel distributions; the need to separate (a) temporally highly correlated bias from (b) random interpolation errors at different spatial and temporal scales; and the difficulty of estimating the uncertainty of accumulations over a range of spatial scales. The approaches used and described in the presentation employ the variables rainfall and altitude. The methods of interpolation include (i) Ordinary Kriging of the rainfall without altitude, (ii) External Drift Kriging with altitude as an exogenous variable, and, less conventionally, (iii) truncated Gaussian copulas and truncated v-copulas, both omitting and including the altitude of the control stations as well as that of the target, and (iv) truncated Gaussian copulas and truncated v-copulas for a two-step interpolation of precipitation combining temporal and spatial quantiles for bias quantification. It was found that truncated Gaussian copulas, with the target's and all the control stations' altitudes included as exogenous variables, produce the lowest mean square error in cross-validation and, as a bonus, model with the least bias. In contrast, the uncertainty of interpolation is better described by the v-copulas, but the Gaussian copulas have the advantage of computational effort (by three orders of magnitude), which justifies their use in practice. It turns out that the uncertainty estimates of the OK and EDK interpolants are not competitive at any time scale, from daily
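As a point of reference for the kriging variants discussed above, ordinary kriging at a single target location can be sketched in a few lines. The spherical variogram and its parameters here are illustrative placeholders, not those fitted in the study:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=1.0, nugget=0.0):
    """Ordinary kriging of values z at stations xy, evaluated at xy0,
    under an assumed spherical variogram (parameters are illustrative)."""
    def gamma(h):  # spherical semivariogram model
        h = np.minimum(h / rng, 1.0)
        return nugget + (sill - nugget) * (1.5 * h - 0.5 * h ** 3)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))      # kriging system bordered by the
    A[:n, :n] = gamma(d)             # unbiasedness (Lagrange) row/column
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z
    variance = w @ b                 # kriging variance at the target
    return estimate, variance
```

With a zero nugget the interpolator is exact at the stations, which is a quick sanity check on any implementation.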
Optimum Gravity Interpolation Technique for Large Data Gaps: Case Study for Africa
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2017-04-01
The gravity database for the IAG African Geoid Project contains significantly large data gaps. These large data gaps affect the interpolation precision of the reduced gravity anomalies needed for the determination of the gravimetric geoid for Africa. The aim of this paper is to develop an optimal interpolation technique that can be used for a proper gravity interpolation within large data gaps. A gap of 3 x 3 degrees has been artificially created within the gravity data set for Africa. The remaining data set has been used to interpolate the gravity values at the gap points; then a comparison between the interpolated and the observed values has been carried out to determine the accuracy of the interpolation technique used. The unequal-weight least-squares prediction (with the optimum curvature parameter at the origin), with a tailored geopotential model for Africa used to estimate an underlying grid at the gap area, has been proposed as the developed interpolation approach. For comparison purposes, the Kriging interpolation technique has also been tested. The window technique, suggested by Abd-Elmotaal and Kühtreiber (2003) to get rid of the double consideration of the topographic-isostatic masses within the data window in the framework of the remove-restore technique, has been used for the reduction process. A comparison between the observed and interpolated gravity values at the gap points has been carried out. The results show that the developed interpolation technique gives better interpolation accuracy at the artificial data gap.
Information cascade on networks
Hisakado, Masato; Mori, Shintaro
2016-05-01
In this paper, we discuss a voting model by considering three different kinds of networks: a random graph, the Barabási-Albert (BA) model, and a fitness model. A voting model represents the way in which public perceptions are conveyed to voters. Our voting model is constructed by using two types of voters, herders and independents, and two candidates. Independents conduct voting based on their fundamental values; on the other hand, herders base their voting on the number of previous votes. Hence, herders vote for the majority candidates and obtain information relating to previous votes from their networks. We discuss the difference between the phases on which the networks depend. Two kinds of phase transitions, an information cascade transition and a super-normal transition, were identified. The first of these is a transition between a state in which most voters make the correct choices and a state in which most of them are wrong. The second is a transition of convergence speed. The information cascade transition prevails when herder effects are stronger than the super-normal transition. In the BA and fitness models, the critical point of the information cascade transition is the same as that of the random network model. However, the critical point of the super-normal transition disappears when these two models are used. In conclusion, the influence of networks is shown to only affect the convergence speed and not the information cascade transition. We are therefore able to conclude that the influence of hubs on voters' perceptions is limited.
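The herder/independent mechanism can be illustrated with a minimal sequential simulation. This toy version lets herders see the full running tally rather than a network neighbourhood, so it does not reproduce the network-dependent results of the paper:

```python
import random

def voting_model(n_voters=10000, p_independent=0.4, q_correct=0.6, seed=1):
    """Toy sequential voting: independents vote correctly with
    probability q_correct; herders copy the current majority
    (coin flip on a tie). Returns the final correct-vote share."""
    rng = random.Random(seed)
    correct = 0
    for t in range(1, n_voters + 1):
        if rng.random() < p_independent:
            vote = rng.random() < q_correct          # independent voter
        elif 2 * correct == t - 1:
            vote = rng.random() < 0.5                # herder facing a tie
        else:
            vote = 2 * correct > t - 1               # herder copies majority
        correct += vote
    return correct / n_voters
```

With only independents the share converges to q_correct; adding herders amplifies whichever side the early votes happen to favour, which is the seed of an information cascade.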
Energy Technology Data Exchange (ETDEWEB)
Song, Jun Beom [Dept. of Aviation Maintenance, Dongwon Institute of Science and Technology, Yangsan (Korea, Republic of); Byun, Young Seop; Jeong, Jin Seok; Kim, Jeong; Kang, Beom Soo [Dept. of Aerospace Engineering, Pusan National University, Busan (Korea, Republic of)
2016-11-15
This paper proposes a cascaded control structure and a method of practical application for attitude control of a multi-rotor unmanned aerial vehicle (UAV). Cascade control, which has tighter control capability than single-loop control, is rarely used in attitude control of a multi-rotor UAV because the input-output relation is no longer simply a set-point to Euler-angle response transfer function, as in single-loop PID control; instead, there are multiple measured signals and interacting control loops that increase the complexity of evaluation in a conventional design. However, this research proposes a method that can optimize a cascade control with primary and secondary loops and a PID controller for each loop. An investigation of currently available PID-tuning methods led to selection of the Simple internal model control (SIMC) method, which is based on Internal model control (IMC) and the direct-synthesis method. Through analysis and experiments, this research proposes a systematic procedure to implement a cascaded attitude controller, which includes flight testing, system identification and SIMC-based PID tuning. The proposed method was validated successfully in multiple applications, where the application to the roll axis led to a PID-PID cascade control, while the application to the yaw axis led to a PID-PI cascade control.
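The primary/secondary loop structure can be sketched on a double-integrator attitude plant (angle″ = torque). The gains below are illustrative hand-picked values, not the SIMC-tuned ones from the paper:

```python
def simulate_cascade(kp_outer=4.0, kp_inner=8.0, ki_inner=2.0,
                     setpoint=1.0, dt=0.001, steps=5000):
    """Cascaded attitude control sketch: an outer P loop on the angle
    commands an angular rate, and an inner PI loop on the rate commands
    torque. The plant is a pure double integrator (angle'' = u)."""
    angle = rate = integ = 0.0
    for _ in range(steps):
        rate_sp = kp_outer * (setpoint - angle)   # primary (outer) loop
        err = rate_sp - rate                      # secondary (inner) loop
        integ += err * dt
        u = kp_inner * err + ki_inner * integ
        rate += u * dt                            # torque integrates to rate
        angle += rate * dt                        # rate integrates to angle
    return angle
```

The design choice mirrors the paper's structure: the fast inner loop rejects rate disturbances before they accumulate into angle error, which is what gives the cascade its tighter control capability.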
Mehl, S.; Hill, M.C.
2004-01-01
This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
Spatial Interpolation of Ewert's Index of Continentality in Poland
Szymanowski, Mariusz; Bednarczyk, Piotr; Kryza, Maciej; Nowosad, Marek
2016-10-01
The article presents methodological considerations on the spatial interpolation of Ewert's index of continentality for Poland. The primary objective was to perform spatial interpolation and generate maps of the index combined with selection of an optimal interpolation method and validation of the use of the decision tree proposed by Szymanowski et al. (Meteorol Z 22:577-585, 2013). The analysis involved four selected years and a multi-year average of the period 1981-2010 and was based on data from 111 meteorological stations. Three regression models: multiple linear regression (MLR), geographically weighted regression (GWR), and mixed geographically weighted regression were used in the analysis as well as extensions of two of them to the residual kriging form. The regression models were compared demonstrating a better fit of the local model and, hence, the non-stationarity of the spatial process. However, the decisive role in the selection of the interpolator was assigned to the possibility of extension of the regression model to residual kriging. A key element here is the autocorrelation of the regression residuals, which proved to be significant for MLR and irrelevant for GWR. This resulted in exclusion of geographically weighted regression kriging from further analysis. The multiple linear regression kriging was found as the optimal interpolator. This was confirmed by cross validation combined with an analysis of improvement of the model in accordance with the criterion of the mean absolute error (MAE). The results obtained facilitate modification of the scheme of selection of an optimal interpolator and development of guidelines for automation of interpolation of Ewert's index of continentality for Poland.
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
Parameters are studied of a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_{eff} = 0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}·s^{-1}, and in the fast booster zone it is 5.12·10^{15} cm^{-2}·s^{-1}, at k_{eff} = 0.98 and proton beam current I = 2.1 mA.
Song, Ju-Hyun; Volling, Brenda L; Lane, Jonathan D; Wellman, Henry M
2016-07-01
A developmental cascade model was tested to examine longitudinal associations among firstborn children's aggression, theory of mind (ToM), and antagonism toward their younger sibling during the 1st year of siblinghood. Aggression and ToM were assessed before the birth of a sibling and 4 and 12 months after the birth, and antagonism was examined at 4 and 12 months in a sample of 208 firstborn children (initial Mage = 30 months, 56% girls) from primarily European American, middle-class families. Firstborns' aggression consistently predicted high sibling antagonism both directly and through poorer ToM. Results highlight the importance of examining longitudinal influences across behavioral, social-cognitive, and relational factors that are closely intertwined even from the early years of life.
An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content
Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng
2016-04-01
The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting of combined environmental variables for soil property interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in the Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with that of inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy even if there was a strong correlation between the soil properties and environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root mean square errors. Furthermore, the AW-SP maps provided improved details of soil potassium content and clearer boundaries to its spatial distribution. In conclusion, AW-SP not only reduces prediction errors, it also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable.
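For context, the conventional IDW baseline that AW-SP is compared against is only a few lines; this is a generic form, and the power parameter is a common default rather than a value from the study:

```python
def idw(points, values, target, power=2.0):
    """Inverse distance weighting: a weighted mean of observations with
    weights 1 / distance**power; an exact hit returns the observed value
    directly to avoid division by zero."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

AW-SP's contribution is to make the weighting adapt to environmental variables rather than fixing it by distance alone.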
Daily optimized model for long-term operation of the Three Gorges-Gezhouba Cascade Power Stations
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper presents a step-by-step genetic algorithm based on artificial intelligence guidance and builds a long-term daily optimized operating model for the Three Gorges-Gezhouba Hydropower Complex with a single generating set as the basic operating unit. Actual operating data from 2004 to 2006 are used to verify the model, and results show that the simulation accuracy, determined by measuring the total amount of cascade power generation, reaches 99.66%. Statistical hydrological data of normal years and actual data of the three years 2004-2006 are used, respectively, to perform an optimized prediction of the power generation process and benefits in the future when the water stored in the TGP Reservoir reaches the 175 m level; power generation benefits under different operation modes, such as a delayed subsiding water level, advance water storage, and the adoption of different flood-limited water levels, are forecasted. In the case of years with normal inflows, the total amount of cascade power generation running under current specifications reaches 107500 GWh per year. If the commencement of water storage after the flood season is moved forward by 20 days, the amount of power generation can be increased by 3400 GWh per year. If the limited water level in the flood season is raised by three to five meters, the amount of power generation can be increased by 1600 to 3200 GWh per year. If the commencement of water storage is moved forward while the maximum water level allowed in the flood season is raised, the amount of power generation can be increased by 6400 GWh per year.
Multinode rational operators for univariate interpolation
Dell'Accio, Francesco; Di Tommaso, Filomena; Hormann, Kai
2016-10-01
Birkhoff (or lacunary) interpolation is an extension of polynomial interpolation that appears when observation gives irregular information about a function and its derivatives. A Birkhoff interpolation problem is not always solvable, even in the appropriate polynomial or rational space. In this talk we split the initial problem into subproblems having a unique polynomial solution and use multinode rational basis functions in order to obtain a global interpolant.
A Parameterization Method from Conic Spline Interpolation
Institute of Scientific and Technical Information of China (English)
MA Long; GUO Feng-hua
2014-01-01
Interpolating a set of planar points is a common problem in CAD. Most constructions of interpolation functions are based on the parameters at the sample points. Assigning parameters to all sample points is a vital step before constructing interpolation functions. The most widely used parameterization method is accumulative chord-length parameterization. In this paper, we present a better method based on the interpolation of conics. Based on this method, a sequence of fairer Hermite curves can be constructed.
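The accumulative chord-length baseline mentioned above is straightforward to state in code; this is a generic sketch, independent of the conic-based refinement the paper proposes:

```python
import math

def chord_length_params(points):
    """Accumulative chord-length parameterization: each sample point
    gets a parameter proportional to the polyline length travelled so
    far, normalized to the interval [0, 1]."""
    t = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        t.append(t[-1] + math.hypot(x1 - x0, y1 - y0))
    total = t[-1]
    return [ti / total for ti in t]
```

Because the parameters mirror arc length along the control polygon, evenly spaced samples on a straight segment receive evenly spaced parameters, which is the property conic-based methods refine for curved data.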
General Structures of Block Based Interpolational Function
Institute of Scientific and Technical Information of China (English)
ZOU LE; TANG SHUO; Ma Fu-ming
2012-01-01
We construct general structures of one- and two-variable interpolation functions, without depending on the existence of divided differences or inverse differences, and we also discuss block-based osculatory interpolation in the one-variable case. Clearly, our method offers many flexible interpolation schemes to choose from. Error terms for the interpolation are determined, and numerical examples are given to show the effectiveness of the results.
Inverse Distance Weighted Interpolation Involving Position Shading
Li, Zhengquan; WU Yaoxiang
2015-01-01
Considering the shortcomings of inverse distance weighted (IDW) interpolation in practical applications, this study improved the IDW algorithm and put forward a new spatial interpolation method named adjusted inverse distance weighted (AIDW) interpolation. In the interpolation process, AIDW is capable of taking into account the comprehensive influence of the distance and position of each sample point relative to the interpolation point, by adding a coefficient (K) to the normal IDW formula. The coefficient (K) is used...
Li, Hongzhi; Zhong, Ziyan; Li, Lin; Gao, Rui; Cui, Jingxia; Gao, Ting; Hu, Li Hong; Lu, Yinghua; Su, Zhong-Min; Li, Hui
2015-05-30
A cascaded model is proposed to establish the quantitative structure-activity relationship (QSAR) between the overall power conversion efficiency (PCE) and quantum chemical molecular descriptors of all-organic dye sensitizers. The cascaded model is a two-level network in which the outputs of the first level (J_SC, V_OC, and FF) are the inputs of the second level, and the ultimate end-point is the overall PCE of dye-sensitized solar cells (DSSCs). The model combines quantum chemical methods and machine learning methods, further including quantum chemical calculations, data division, feature selection, regression, and validation steps. To improve the efficiency of the model and reduce the redundancy and noise of the molecular descriptors, six feature selection methods (multiple linear regression, genetic algorithms, mean impact value, forward selection, backward elimination, and +n-m algorithm) are used with the support vector machine. The best established cascaded model predicts the PCE values of DSSCs with a mean absolute error (MAE) of 0.57%, which is about 10% of the mean PCE value (5.62%). The validation parameters according to the OECD principles are R² (0.75), Q² (0.77), and Q²cv (0.76), which demonstrate the great goodness-of-fit, predictivity, and robustness of the model. Additionally, the applicability domain of the cascaded QSAR model is defined for further application. This study demonstrates that the established cascaded model is able to effectively predict the PCE for organic dye sensitizers with very low cost and relatively high accuracy, providing a useful tool for the design of dye sensitizers with high PCE.
BIVARIATE FRACTAL INTERPOLATION FUNCTIONS ON RECTANGULAR DOMAINS
Institute of Scientific and Technical Information of China (English)
Xiao-yuan Qian
2002-01-01
Non-tensor product bivariate fractal interpolation functions defined on gridded rectangular domains are constructed. Linear spaces consisting of these functions are introduced.The relevant Lagrange interpolation problem is discussed. A negative result about the existence of affine fractal interpolation functions defined on such domains is obtained.
Interpolated Sounding Value-Added Product
Energy Technology Data Exchange (ETDEWEB)
Troyan, D [Brookhaven National Laboratory
2013-04-01
The Interpolated Sounding (INTERPSONDE) value-added product (VAP) uses a combination of observations from radiosonde soundings, the microwave radiometer (MWR), and surface meteorological instruments in order to define profiles of the atmospheric thermodynamic state at one-minute temporal intervals and a total of at least 266 altitude levels. This VAP is part of the Merged Sounding (MERGESONDE) suite of VAPs. INTERPSONDE is the profile of the atmospheric thermodynamic state created using the algorithms of MERGESONDE without including the model data from the European Centre for Medium-range Weather Forecasting (ECMWF). More specifically, INTERPSONDE VAP represents an intermediate step within the larger MERGESONDE process.
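The core gap-filling idea, interpolating between adjacent soundings in time on a shared altitude grid, reduces to a weighted blend. This is a deliberately simplified stand-in for the actual INTERPSONDE algorithm, which also folds in MWR and surface observations:

```python
import numpy as np

def interp_profiles(t0, prof0, t1, prof1, t):
    """Linear-in-time interpolation between two atmospheric profiles
    prof0 (at time t0) and prof1 (at time t1), both sampled on the
    same altitude levels, evaluated at an intermediate time t."""
    w = (t - t0) / (t1 - t0)
    return (1.0 - w) * np.asarray(prof0, float) + w * np.asarray(prof1, float)
```

Applying this at one-minute steps between consecutive radiosonde launches yields the kind of regular-in-time profile grid the VAP provides, before any correction by auxiliary instruments.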
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
Boudard, Alain; David, Jean-Christophe; Leray, Sylvie; Mancusi, Davide
2012-01-01
The new version (INCL4.6) of the Liège intranuclear cascade (INC) model for the description of spallation reactions is presented in detail. Compared to the standard version (INCL4.2), it incorporates several new features, the most important of which are: (i) the inclusion of cluster production through a dynamical phase space coalescence model, (ii) the Coulomb deflection for entering and outgoing charged particles, (iii) the improvement of the treatment of Pauli blocking and of soft collisions, (iv) the introduction of experimental threshold values for the emission of particles, (v) the improvement of pion dynamics, (vi) a detailed procedure for the treatment of light-cluster induced reactions taking care of the effects of binding energy of the nucleons inside the incident cluster and of the possible fusion reaction at low energy. Performances of the new model concerning nucleon-induced reactions are illustrated. Whenever necessary, the INCL4.6 model is coupled to the ABLA07 deexcitation model and the respec...
Bilinear Interpolation Image Scaling Processor for VLSI
Directory of Open Access Journals (Sweden)
Ms. Pawar Ashwini Dilip
2014-05-01
We introduce an image scaling processor using VLSI techniques. It consists of bilinear interpolation, a clamp filter, and a sharpening spatial filter. The bilinear interpolation algorithm is popular due to its computational efficiency and image quality, but the resultant image contains blurred edges and aliasing artifacts after scaling. To reduce the blurring and aliasing artifacts, the sharpening spatial filter and clamp filter are used as pre-filters. These filters are realized by using T-model and inversed T-model convolution kernels. To reduce the memory buffer and computing resources for the proposed image processor design, two T-model or inversed T-model filters are combined into a combined filter which requires only one line-buffer memory. Also, to reduce hardware cost, a reconfigurable calculation unit (RCU) is invented. The VLSI architecture in this work can achieve 280 MHz with 6.08-K gate counts, and its core area is 30,378 μm², synthesized by a 0.13-μm CMOS process.
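The bilinear kernel at the heart of the processor is simple to state in software form; this sketch omits the clamp and sharpening pre-filters and any fixed-point hardware concerns:

```python
def bilinear(img, x, y):
    """Bilinear interpolation of a 2-D grid `img` (a list of rows) at
    fractional coordinates (x, y), blending the four surrounding
    pixels; indices are clamped at the right/bottom edges."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx   # blend along x, row y0
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx   # blend along x, row y1
    return top * (1 - fy) + bot * fy                  # blend along y
```

Scaling an image then amounts to evaluating this kernel at the source coordinates corresponding to each output pixel, which is why the hardware cost concentrates in the per-pixel multiply-accumulate units the paper optimizes.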
Institute of Scientific and Technical Information of China (English)
张兴飞
2012-01-01
The normal height of a station can be obtained through GPS technology combined with a high-precision, high-resolution refined quasi-geoid model. This can replace conventional leveling measurements, reducing labor intensity and improving efficiency. Two factors affect the accuracy: the accuracy of the geodetic height of the GPS station and that of the interpolation of the height anomaly at that point. On the basis of the one-kilometer-resolution quasi-geoid of Shenzhen, the effect and accuracy of GPS height anomaly interpolation are analyzed. Kriging is used to interpolate and fit the height anomaly values, and a worked example shows that the Kriging interpolation method is accurate and stable for the high-resolution quasi-geoid model and can meet the needs of large-scale digital mapping in the Shenzhen quasi-geoid application.
Temporal interpolation in Meteosat images
DEFF Research Database (Denmark)
Larsen, Rasmus; Hansen, Johan Dore; Ersbøll, Bjarne Kjær;
… in such animated films are perceived as being jerky due to the low temporal sampling rate in general and missing images in particular. In order to perform a satisfactory temporal interpolation we estimate and use the optical flow corresponding to every image in the sequence. The estimation of the optical flow … is based on image sequences where the clouds are segmented from the land/water that might also be visible in the images. Because the pixel values measured correspond directly to temperature, and because clouds (normally) are colder than land/water, we use an estimated land temperature map to perform … a threshold between clouds and land/water. The temperature maps are estimated using observations from the image sequence itself at cloud-free pixels and ground temperature measurements from a series of meteorological observation stations in Europe. The temporal interpolation of the images is based on a path …
Yield statistics of interpolated superoscillations
Katzav, Eytan; Perlsman, Ehud; Schwartz, Moshe
2017-01-01
Yield-optimized interpolated superoscillations have recently been introduced as a means for possibly making the use of the phenomenon of superoscillation practical. In this paper we study how good a superoscillation that is not optimal is; namely, by how much the yield is decreased when the signal departs from the optimal one. We consider two situations. One is the case where the signal strictly obeys the interpolation requirement, and the other is when that requirement is relaxed. In the latter case the yield can be increased at the expense of deterioration of signal quality. An important conclusion is that optimizing superoscillations may be challenging in terms of the precision needed; however, storing and using them is not at all that sensitive. This is of great importance in any physical system where noise and error are inevitable.
INTERPOLATION WITH RESTRICTED ARC LENGTH
Institute of Scientific and Technical Information of China (English)
Petar Petrov
2003-01-01
For given data (t_i, y_i), i = 0, 1, …, n, with 0 = t_0 < t_1 < … < t_n = 1, we study the constrained interpolation problem of Favard type

inf { ‖f″‖_∞ : f ∈ W²_∞[0,1], f(t_i) = y_i, i = 0, …, n, l(f; [0,1]) ≤ l_0 },

where l(f; [0,1]) = ∫₀¹ √(1 + f′²(x)) dx is the arc length of f in [0,1]. We prove the existence of a solution f* of the above problem that is a quadratic spline whose second derivative f″* coincides with one of the constants −‖f″*‖_∞, 0, ‖f″*‖_∞ between every two consecutive knots. Thus, we extend a result of Karlin concerning the Favard problem to the case of restricted-length interpolation.
Segment adaptive gradient angle interpolation.
Zwart, Christine M; Frakes, David H
2013-08-01
We introduce a new edge-directed interpolator based on locally defined, straight line approximations of image isophotes. Spatial derivatives of image intensity are used to describe the principal behavior of pixel-intersecting isophotes in terms of their slopes. The slopes are determined by inverting a tridiagonal matrix and are forced to vary linearly from pixel-to-pixel within segments. Image resizing is performed by interpolating along the approximated isophotes. The proposed method can accommodate arbitrary scaling factors, provides state-of-the-art results in terms of PSNR as well as other quantitative visual quality metrics, and has the advantage of reduced computational complexity that is directly proportional to the number of pixels.
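Inverting a tridiagonal matrix, as the isophote-slope computation above requires, is a linear-time solve; here is a standard Thomas-algorithm sketch, generic rather than the paper's exact formulation:

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal,
    b the main diagonal, c the super-diagonal, d the right-hand side
    (a[0] and c[-1] are unused). Returns the solution vector."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The O(n) cost of this solve, versus O(n³) for a general inversion, is a large part of why the method's complexity can stay directly proportional to the number of pixels.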
Serinaldi, F.
2010-12-01
Discrete multiplicative random cascade (MRC) models were extensively studied and applied to disaggregate rainfall data, thanks to their formal simplicity and the small number of involved parameters. Focusing on temporal disaggregation, the rationale of these models is based on multiplying the value assumed by a physical attribute (e.g., rainfall intensity) at a given time scale L, by a suitable number b of random weights, to obtain b attribute values corresponding to statistically plausible observations at a smaller L/b time resolution. In the original formulation of the MRC models, the random weights were assumed to be independent and identically distributed. However, for several studies this hypothesis did not appear to be realistic for the observed rainfall series as the distribution of the weights was shown to depend on the space-time scale and rainfall intensity. Since these findings contrast with the scale invariance assumption behind the MRC models and impact on the applicability of these models, it is worth studying their nature. This study explores the possible presence of dependence of the parameters of two discrete MRC models on rainfall intensity and time scale, by analyzing point rainfall series with 5-min time resolution. Taking into account a discrete microcanonical (MC) model based on beta distribution and a discrete canonical beta-logstable (BLS), the analysis points out that the relations between the parameters and rainfall intensity across the time scales are detectable and can be modeled by a set of simple functions accounting for the parameter-rainfall intensity relationship, and another set describing the link between the parameters and the time scale. Therefore, MC and BLS models were modified to explicitly account for these relationships and compared with the continuous in scale universal multifractal (CUM) model, which is used as a physically based benchmark model. Monte Carlo simulations point out that the dependence of MC and BLS
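A minimal microcanonical cascade with branching number b = 2 makes the disaggregation mechanics concrete. The fixed Beta(a, a) weight distribution below is the textbook choice; the study's point is precisely that in observed rainfall the weight distribution drifts with intensity and time scale:

```python
import random

def mrc_disaggregate(total, levels, a=4.0, seed=0):
    """Microcanonical multiplicative random cascade, branching b = 2:
    each value splits into fractions (w, 1 - w) with w ~ Beta(a, a),
    so the parent mass is conserved exactly at every level."""
    rng = random.Random(seed)
    series = [total]
    for _ in range(levels):
        nxt = []
        for v in series:
            w = rng.betavariate(a, a)
            nxt.extend([v * w, v * (1.0 - w)])
        series = nxt
    return series
```

Disaggregating a daily total over four levels yields 16 sub-intervals whose sum reproduces the input, which is the defining property of the microcanonical variant as opposed to a canonical cascade that conserves mass only on average.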
Directory of Open Access Journals (Sweden)
F. Serinaldi
2010-12-01
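The b-fold splitting that the abstract describes is easy to sketch. Below is a minimal, hypothetical Python illustration of a b = 2 microcanonical cascade with symmetric beta-distributed weights; the beta shape parameter, number of levels, and rainfall total are assumed for illustration and are not taken from the paper. Mass is conserved exactly at every split, which is the defining property of the microcanonical variant.

```python
import random

def microcanonical_cascade(total, levels, beta_a=2.0, rng=None):
    """Disaggregate `total` over 2**levels intervals with a b = 2
    microcanonical beta cascade: each split draws w ~ Beta(a, a)
    and assigns shares (w, 1 - w), so mass is conserved exactly."""
    rng = rng or random.Random()
    values = [total]
    for _ in range(levels):
        nxt = []
        for v in values:
            w = rng.betavariate(beta_a, beta_a)
            nxt.extend([v * w, v * (1.0 - w)])
        values = nxt
    return values

# Disaggregate a 40 mm daily total into 8 three-hour amounts.
parts = microcanonical_cascade(40.0, levels=3, rng=random.Random(1))
```

Because each split redistributes the parent value exactly, the sum of the disaggregated series equals the coarse value regardless of the drawn weights.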
Adaptive manifold-mapping using multiquadric interpolation applied to linear actuator design
Lahaye, D.; Canova, A.; Gruosso, G.; Repetto, M.
2006-01-01
In this work a multilevel optimization strategy based on manifold-mapping combined with multiquadric interpolation for the coarse model construction is presented. In the proposed approach the coarse model is obtained by interpolating the fine model using multiquadrics in a small number of points. As
de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo
2015-03-01
Experiments of continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D1 = D2 = 0.27-0.95 h^-1), constant recycle ratio (α = FR/F = 4.0) and a sugar concentration in the feed stream (S0) around 150 g/L. The data obtained in these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor and the kinetics of cell growth and product formation takes into account the limitation by substrate and the inhibition by ethanol and biomass, as well as the substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, for the inhibition constants of cell growth and for the specific rate of substrate consumption for cell maintenance. Mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
Directory of Open Access Journals (Sweden)
Umar Iqbal
2010-01-01
Full Text Available Present land vehicle navigation relies mostly on the Global Positioning System (GPS), which may be interrupted or deteriorated in urban areas. In order to obtain continuous positioning services in all environments, GPS can be integrated with inertial sensors and the vehicle odometer using Kalman filtering (KF). For car navigation, low-cost positioning solutions based on MEMS-based inertial sensors are utilized. To further reduce the cost, a reduced inertial sensor system (RISS), consisting of only one gyroscope and a speed measurement (obtained from the car odometer), is integrated with GPS. The MEMS-based gyroscope measurement deteriorates over time due to different errors like the bias drift. These errors may lead to large azimuth errors, and mitigating the azimuth errors requires robust modeling of both linear and nonlinear effects. Therefore, this paper presents a solution based on a Parallel Cascade Identification (PCI) module that models the azimuth errors and is augmented to the KF. The proposed augmented KF-PCI method can handle both linear and nonlinear system errors, as the linear parts of the errors are modeled inside the KF and the nonlinear and residual parts of the azimuth errors are modeled by PCI. The performance of this method is examined using road test experiments in a land vehicle.
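As a rough illustration of the filtering component described above, the sketch below runs a scalar Kalman filter on a slowly drifting gyroscope bias modeled as a random walk. The noise variances and the observation sequence are invented for illustration, and the paper's nonlinear PCI module is not reproduced here; this shows only the linear KF part.

```python
def kalman_update(x, P, z, q=1e-4, r=1e-2):
    """One predict+update step of a scalar Kalman filter tracking a
    slowly drifting gyroscope bias (random-walk process model).
    x, P: state estimate and its variance; z: bias observation;
    q, r: process and measurement noise variances (assumed values)."""
    P = P + q                 # predict: bias drifts as a random walk
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)       # correct with the innovation z - x
    P = (1.0 - K) * P
    return x, P

# Filter a short sequence of noisy bias observations around 0.10.
x, P = 0.0, 1.0
for z in [0.11, 0.09, 0.10, 0.12, 0.10]:
    x, P = kalman_update(x, P, z)
```

After a few updates the estimate settles near the true bias and its variance shrinks, which is the behavior the integrated GPS/RISS filter exploits.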
Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed
2016-04-01
The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data based on ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, with the help of three adjacent rainfall stations, is obtained. The new climatological Z-R relationship with a power-law form shows acceptable statistical performance, making it suitable for radar-rainfall estimation by the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with coverage area of 512 × 512 km2 to a resolution of 32 × 32 km2. Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvement in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.
Seta, Ryo; Okubo, Kan; Tagawa, Norio
2009-01-01
Image interpolation can be performed by a convolution operation using the neighboring image values. To achieve accurate image interpolation, some of the conventional methods use basis function with large support, and therefore their implementation may have a large computational cost. Interpolation by the Hermite interpolating polynomials can be performed using image values and their derivatives. This makes it possible to realize the high-order interpolation with small support. In this study, ...
Cascading Gravity is Ghost Free
de Rham, Claudia; Tolley, Andrew J
2010-01-01
We perform a full perturbative stability analysis of the 6D cascading gravity model in the presence of 3-brane tension. We demonstrate that for sufficiently large tension on the (flat) 3-brane, there are no ghosts at the perturbative level, consistent with results that had previously only been obtained in a specific 5D decoupling limit. These results establish the cascading gravity framework as a consistent infrared modification of gravity.
A comparative analysis of different DEM interpolation methods
Directory of Open Access Journals (Sweden)
P.V. Arun
2013-12-01
Full Text Available Visualization of geospatial entities generally entails Digital Elevation Models (DEMs), which are interpolated to establish three-dimensional co-ordinates for the entire terrain. The accuracy of the generated terrain model depends on the interpolation mechanism adopted, and hence the comparative performance of different approaches in this context needs to be investigated. General interpolation techniques, namely Inverse Distance Weighted, kriging, ANUDEM, Nearest Neighbor, and Spline approaches, have been compared. A differential ground field survey was conducted to generate a reference DEM as well as a specific set of test points for comparative evaluation. We have also investigated the suitability of the Shuttle Radar Topographic Mapper Digital Elevation Model for Indian terrain by comparing it with the Survey of India (SOI) Digital Elevation Model (DEM). Contours were generated at different intervals for comparative analysis, and SRTM was found to be more suitable. The terrain sensitivity of the various methods has also been analyzed with reference to the study area.
BLOCK BASED NEWTON-LIKE BLENDING INTERPOLATION
Institute of Scientific and Technical Information of China (English)
Qian-jin Zhao; Jie-qing Tan
2006-01-01
Newton's polynomial interpolation may be the favourite linear interpolation in the sense that it is built up by means of the divided differences, which can be calculated recursively and produce useful intermediate results. However, Newton interpolation is in fact point-based interpolation, since a new interpolating polynomial with one more degree is obtained by adding new support points into the current set one at a time. In this paper we extend point-based interpolation to block-based interpolation. Inspired by the idea of modern architectural design, we first divide the original set of support points into some subsets (blocks), then construct each block by using whatever interpolation means, linear or rational, and finally assemble these blocks by Newton's method to shape the whole interpolation scheme. Clearly our method offers many flexible interpolation schemes to choose from, which include the classical Newton's polynomial interpolation as a special case. A bivariate analogy is also discussed and numerical examples are given to show the effectiveness of our method.
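The recursive divided differences mentioned above can be sketched in a few lines. This is a generic, minimal Python implementation of classical point-based Newton interpolation, not the block-based scheme of the paper; the sample points (from y = x² + 1) are invented for illustration.

```python
def newton_coefficients(xs, ys):
    """Divided-difference coefficients of Newton's interpolating
    polynomial; each new support point adds one coefficient."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form in nested (Horner-like) fashion."""
    acc = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        acc = acc * (x - xs[i]) + coef[i]
    return acc

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 5.0, 10.0]  # y = x**2 + 1
c = newton_coefficients(xs, ys)
```

The intermediate `coef` values are exactly the divided differences the abstract refers to, so adding a fifth point would only append one more coefficient.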
Inverse Distance Weighted Interpolation Involving Position Shading
Directory of Open Access Journals (Sweden)
LI Zhengquan
2015-01-01
Full Text Available Considering the shortcomings of inverse distance weighted (IDW) interpolation in practical applications, this study improved the IDW algorithm and put forward a new spatial interpolation method named adjusted inverse distance weighted (AIDW) interpolation. In the interpolation process, AIDW is capable of taking into account the combined influence of the distance and the position of each sample point relative to the interpolation point, by adding a coefficient (K) into the normal IDW formula. The coefficient (K) is used to adjust the interpolation weight of each sample point according to its position among the sample points. Theoretical analysis and practical application indicate that the AIDW algorithm can diminish or eliminate the IDW interpolation defect caused by a non-uniform distribution of sample points. Consequently, AIDW interpolation is more reasonable than IDW interpolation. On the other hand, contour plotting based on AIDW interpolation can effectively avoid the implausible isolated and concentric circles that originate from the defect of IDW interpolation, so that the contours derived from the AIDW-interpolated surface are closer to professional manual identification.
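The baseline IDW formula that AIDW modifies can be sketched directly. The abstract does not reproduce the formula for the adjustment coefficient K, so the sketch below implements plain IDW and merely exposes a hypothetical per-sample factor `k` as a stand-in for where such an adjustment would enter; the sample coordinates and values are invented.

```python
def idw(points, target, power=2.0, k=None):
    """Inverse distance weighted estimate at `target` from
    (x, y, value) samples.  `k` optionally scales each sample's
    weight, standing in for AIDW's position-based coefficient K
    (the paper's exact formula for K is not reproduced here)."""
    k = k or [1.0] * len(points)
    num = den = 0.0
    for (x, y, v), ki in zip(points, k):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample point
        w = ki / d2 ** (power / 2.0)      # weight ~ 1 / distance**power
        num += w * v
        den += w
    return num / den

samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
est = idw(samples, (0.5, 0.5))
```

With all three samples equidistant from the target, the estimate is their plain average, which illustrates why clustered samples need a positional adjustment like K.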
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
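Interpolating between tabulated solutions, as described above, can be illustrated with plain bilinear interpolation over two of the variables. The table values below are invented, and the actual methodology spans all four parameters (a/c, a/B, E/ys, n); this sketch covers only a two-variable slice under those assumptions.

```python
def bilinear(x, y, grid):
    """Bilinear interpolation on a table {(xi, yj): value}.
    Illustrates interpolating tabulated J-integral solutions between
    crack-shape (a/c) and crack-depth (a/B) grid points."""
    xs = sorted({p[0] for p in grid})
    ys = sorted({p[1] for p in grid})
    x0 = max(v for v in xs if v <= x)
    x1 = min(v for v in xs if v >= x)
    y0 = max(v for v in ys if v <= y)
    y1 = min(v for v in ys if v >= y)
    tx = 0.0 if x0 == x1 else (x - x0) / (x1 - x0)
    ty = 0.0 if y0 == y1 else (y - y0) / (y1 - y0)
    v00, v10 = grid[(x0, y0)], grid[(x1, y0)]
    v01, v11 = grid[(x0, y1)], grid[(x1, y1)]
    return ((1 - tx) * (1 - ty) * v00 + tx * (1 - ty) * v10
            + (1 - tx) * ty * v01 + tx * ty * v11)

# Toy table of normalized J values at four (a/c, a/B) corners.
table = {(0.2, 0.2): 1.0, (1.0, 0.2): 2.0,
         (0.2, 0.8): 3.0, (1.0, 0.8): 4.0}
j = bilinear(0.6, 0.5, table)
```

Extending this to four variables means nesting the same weighted blend over each additional axis, which is why a dense pre-computed solution array makes rapid evaluation possible.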
Spatial interpolation methods for monthly rainfalls and temperatures in Basilicata
Directory of Open Access Journals (Sweden)
Ferrara A
2008-12-01
Full Text Available Spatially interpolated climatic data on grids are important as input in forest modeling because climate spatial variability has a direct effect on productivity and forest growth. Maps of climatic variables can be obtained by different interpolation methods, depending on data quality (number of stations, spatial distribution, missing data, etc.) and on the topographic and climatic features of the study area. In this paper four methods are compared for interpolating monthly rainfall at regional scale: 1) inverse distance weighting (IDW); 2) regularized spline with tension (RST); 3) ordinary kriging (OK); 4) universal kriging (UK). Besides, an approach to generate monthly surfaces of temperatures over regions of complex terrain and with a limited number of stations is presented. Daily data were gathered over the 1976-2006 period and gaps in the time series were filled in order to obtain monthly mean temperatures and cumulative precipitation. Basic statistics of the monthly dataset and an analysis of the relationship of temperature and precipitation to elevation were performed. A linear relationship was found between temperature and altitude, while no relationship was found between rainfall and elevation. Precipitation was therefore interpolated without taking elevation into account. The methods were ranked by root mean squared error for each month. Results showed that universal kriging (UK) is the best method for spatial interpolation of rainfall in the study area. Cross validation was then used to compare the prediction performance of three different variogram models (circular, spherical, exponential) using the UK algorithm in order to produce final maps of monthly precipitation. Before interpolation, temperatures were reduced to sea level using the calculated lapse rate and a digital elevation model (DEM). The result of the interpolation with RST was then mapped back to the original elevation with the inverse procedure. To evaluate the quality of the interpolated surfaces a comparison between interpolated and
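The sea-level reduction step described above can be sketched directly. The lapse rate of 0.0065 °C per metre and the station temperatures/elevations below are assumed for illustration, not taken from the study; the spatial interpolation itself is elided.

```python
def to_sea_level(temps, elevs, lapse=0.0065):
    """Reduce station temperatures to sea level with a constant
    lapse rate (degrees C per metre, assumed value) prior to
    spatial interpolation."""
    return [t + lapse * z for t, z in zip(temps, elevs)]

def from_sea_level(t0, elev, lapse=0.0065):
    """Restore an interpolated sea-level temperature to terrain
    height using a DEM elevation, inverting the reduction above."""
    return t0 - lapse * elev

station_t = [12.0, 8.5]     # observed temperatures (degrees C)
station_z = [200.0, 750.0]  # station elevations (m)
reduced = to_sea_level(station_t, station_z)
# ... interpolate `reduced` spatially, then map each grid cell
# back through the DEM elevation:
restored = from_sea_level(reduced[1], station_z[1])
```

Removing the elevation trend before interpolation and restoring it afterwards is what lets a smooth interpolator work over complex terrain with few stations.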
Energy Technology Data Exchange (ETDEWEB)
Niemann, V.
1998-01-01
Homogeneous stratified turbulent shear flow was simulated numerically using the cascade model of Eggers and Grossmann (1991). The model is made applicable to homogeneous shear flow by transformation into a coordinate system that moves along with a basic flow with a constant vertical velocity gradient. The author simulated cases of stable thermal stratification with Richardson numbers in the range 0 ≤ Ri ≤ 1. The simulation data were evaluated with particular regard to the anisotropic characteristics of the turbulence field. Further, the results are compared with some common closure schemes up to second order. (orig.)
Müller, Hannes; Föt, Annika; Haberlandt, Uwe
2016-04-01
Rainfall time series with a high temporal resolution are needed in many hydrological and water resources management fields. Unfortunately, future climate projections are often available only at low temporal resolutions, e.g. daily values. A possible solution is the disaggregation of these time series using information from high-resolution time series of recording stations. Often, the required parameters for the disaggregation process are applied to future climate without any change, because the change is unknown. For this investigation a multiplicative random cascade model is used. The parameters can be estimated directly from high-resolution time series. Here, time series with hourly resolution generated by the ECHAM5 model and dynamically downscaled with the REMO model (UBA, BfG & ENS realisations) are used for parameter estimation. The parameters are compared between the past (1971-2000), near-term (2021-2050) and long-term future (2071-2100) for temporal resolutions of 1 h and 8 h. Additionally, the parameters of each period are used for the disaggregation of the other two periods. Afterwards the disaggregated time series are analyzed concerning extreme value representation, event-specific characteristics (average wet spell duration and amount) and overall time series characteristics (average intensity and fraction of dry spell events). The aim of the investigation is a) to detect and quantify parameter changes and b) to analyze the influence on the disaggregated time series. The investigation area is Lower Saxony, Germany.
Interpolating of climate data using R
Reinhardt, Katja
2017-04-01
Interpolation methods are used in many different geoscientific areas, such as soil physics, climatology and meteorology. Thereby, unknown values are calculated by applying statistical approaches to known values. So far, the majority of climatologists have been using computer languages such as FORTRAN or C++, but there is also an increasing number of climate scientists using R for data processing and visualization. Most of them, however, are still working with array- and vector-based data, which is often associated with complex R code structures. For the presented study, I have decided to convert the climate data into geodata and to perform the whole data processing using the raster package, gstat and similar packages, providing a much more comfortable way of data handling. A central goal of my approach is to create an easy-to-use, powerful and fast R script, implementing the entire geodata processing and visualization in a single, fully automated R-based procedure, which avoids the necessity of using other software packages such as ArcGIS or QGIS. Thus, large amounts of data with recurrent process sequences can be processed. The aim of the presented study, which is located in western Central Asia, is to interpolate wind data based on the European reanalysis data ERA-Interim, which are available as raster data with a resolution of 0.75° × 0.75°, to a finer grid. Therefore, various interpolation methods are used: inverse distance weighting, the geostatistical methods ordinary kriging and regression kriging, generalized additive models and the machine learning algorithms support vector machine and neural networks. Except for the first two mentioned methods, the methods are used with influencing factors, e.g. geopotential and topography.
Directory of Open Access Journals (Sweden)
F. Pappenberger
2005-01-01
Full Text Available The political pressure on the scientific community to provide medium to long term flood forecasts has increased in the light of recent flooding events in Europe. Such demands can be met by a system consisting of three different model components (weather forecast, rainfall-runoff forecast and flood inundation forecast) which are all liable to considerable uncertainty in the input, output and model parameters. Thus, an understanding of cascaded uncertainties is a necessary requirement to provide robust predictions. In this paper, 10-day ahead rainfall forecasts, consisting of one deterministic, one control and 50 ensemble forecasts, are fed into a rainfall-runoff model (LisFlood) for which parameter uncertainty is represented by six different parameter sets identified through a Generalised Likelihood Uncertainty Estimation (GLUE) analysis and functional hydrograph classification. The runoff of these 52 * 6 realisations forms the input to a flood inundation model (LisFlood-FP) which acknowledges uncertainty by utilising ten different sets of roughness coefficients identified using the same GLUE methodology. Likelihood measures for each parameter set computed on historical data are used to give uncertain predictions of flow hydrographs as well as spatial inundation extent. This analysis demonstrates that a full uncertainty analysis of such an integrated system is limited mainly by computer power as well as by how well the rainfall predictions represent potential future conditions. However, these restrictions may be overcome or lessened in the future and this paper establishes a computationally feasible methodological approach to the uncertainty cascade problem.
Muzy, Jean-François; Baïle, Rachel; Bacry, Emmanuel
2013-04-01
In this paper we propose a new model for volatility fluctuations in financial time series. This model relies on a nonstationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to continuous cascade models. These latter models are indeed well known to reproduce faithfully the main stylized facts of financial time series. However, it involves a large-scale parameter (the so-called “integral scale” where the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general deeply correlated to the overall length of the sample. This feature is precisely predicted by our model, which, as illustrated by various examples from daily stock index data, quantitatively reproduces the empirical observations.
BLOCK BASED NEWTON-LIKE BLENDING OSCULATORY RATIONAL INTERPOLATION
Institute of Scientific and Technical Information of China (English)
Shuo Tang; Le Zou; Chensheng Li
2010-01-01
With Newton's interpolating formula, we construct a kind of block-based Newton-like blending osculatory interpolation. The interpolation provides many flexible interpolation schemes to choose from, which include the expansive Newton's polynomial interpolation as a special case. A bivariate analogy is also discussed and numerical examples are given to show the effectiveness of the interpolation.
Optimized interpolations and nonlinearity in numerical studies of woodwind instruments
Skouroupathis, A
2005-01-01
We study the impedance spectra of woodwind instruments with arbitrary axisymmetric geometry. We perform piecewise interpolations of the instruments' profile, using interpolating functions amenable to analytic solutions of the Webster equation. Our algorithm optimizes on the choice of such functions, while ensuring compatibility of wavefronts at the joining points. Employing a standard mathematical model of a single-reed mouthpiece as well as the time-domain reflection function, which we derive from our impedance results, we solve the Schumacher equation for the pressure evolution in time. We make analytic checks that, despite the nonlinearity in the reed model and in the evolution equation, solutions are unique and singularity-free.
Estimating monthly temperature using point based interpolation techniques
Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi
2013-04-01
This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
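A multiquadric RBF interpolant of the kind compared above can be sketched in one dimension with only the standard library. The 1-D sample data, the shape parameter `eps` and the small dense solver are illustrative assumptions; the study's actual setting is two-dimensional station data, where only the distance computation changes.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Weights for the multiquadric kernel phi(r) = sqrt(r^2 + eps^2),
    obtained by solving the dense interpolation system A w = y."""
    phi = lambda r: math.sqrt(r * r + eps * eps)
    A = [[phi(xi - xj) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x, eps=1.0):
    """Evaluate the fitted interpolant at x."""
    return sum(wi * math.sqrt((x - xi) ** 2 + eps * eps)
               for wi, xi in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0], [3.0, 5.0, 4.0]
w = rbf_fit(xs, ys)
```

Swapping the multiquadric kernel for the thin plate spline kernel only changes `phi`; the fitting and evaluation machinery stays the same, which is why the two variants are easy to compare month by month.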
Cascades on clique-based graphs
Hackett, Adam
2013-01-01
We present an analytical approach to determining the expected cascade size in a broad range of dynamical models on the class of highly-clustered random graphs introduced in [J. P. Gleeson, Phys. Rev. E 80, 036107 (2009)]. A condition for the existence of global cascades is also derived. Applications of this approach include analyses of percolation, and Watts's model. We show how our techniques can be used to study the effects of in-group bias in cascades on social networks.
Directory of Open Access Journals (Sweden)
Zhenling Yao
2008-01-01
Full Text Available Corticosteroid (CS) effects on insulin resistance related genes in rat skeletal muscle were studied. In our acute study, adrenalectomized (ADX) rats were given single doses of 50 mg/kg methylprednisolone (MPL) intravenously. In our chronic study, ADX rats were implanted with Alzet mini-pumps giving zero-order release rates of 0.3 mg/kg/h MPL and sacrificed at various times up to 7 days. Total RNA was extracted from gastrocnemius muscles and hybridized to Affymetrix GeneChips. Data mining and literature searches identified 6 insulin resistance related genes which exhibited complex regulatory pathways. Insulin receptor substrate-1 (IRS-1), uncoupling protein 3 (UCP3), pyruvate dehydrogenase kinase isoenzyme 4 (PDK4), fatty acid translocase (FAT) and glycerol-3-phosphate acyltransferase (GPAT) dynamic profiles were modeled with mutual effects by calculated nuclear drug-receptor complex (DR(N)) and transcription factors. The oscillatory feature of endothelin-1 (ET-1) expression was depicted by a negative feedback loop. These integrated models provide testable quantitative hypotheses for these regulatory cascades.
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation based time reversal algorithms, and that it can provide higher imaging quality using significantly fewer measurement positions or scanning times.
Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin
2017-05-01
As an innovation, both black box and physically based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by the wavelet-based de-noising approach. The effect of the de-noised data on the performance of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models was evaluated. Wavelet transform coherence was employed for spatial clustering of the piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of the piezometers as interior conditions, the radial basis function, as a meshless method which solves the partial differential equations of groundwater flow and contaminant transport, was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the efficiency of the ANFIS-based spatiotemporal model exceeded that of the ANN-based model by up to 13%.
Efficient Interpolant Generation in Satisfiability Modulo Linear Integer Arithmetic
Griggio, Alberto; Sebastiani, Roberto
2010-01-01
The problem of computing Craig interpolants in SAT and SMT has recently received a lot of interest, mainly for its applications in formal verification. Efficient algorithms for interpolant generation have been presented for some theories of interest ---including that of equality and uninterpreted functions, linear arithmetic over the rationals, and their combination--- and they are successfully used within model checking tools. For the theory of linear arithmetic over the integers (LA(Z)), however, the problem of finding an interpolant is more challenging, and the task of developing efficient interpolant generators for the full theory LA(Z) is still the objective of ongoing research. In this paper we try to close this gap. We build on previous work and present a novel interpolation algorithm for SMT(LA(Z)), which exploits the full power of current state-of-the-art SMT(LA(Z)) solvers. We demonstrate the potential of our approach with an extensive experimental evaluation of our implementation of the proposed al...
Deriving global flood hazard maps of fluvial floods through a physical model cascade
Pappenberger, F.; Dutra, E.; Wetterhall, F.; Cloke, H.
2012-01-01
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteoro...
MODELING COLLISIONAL CASCADES IN DEBRIS DISKS: STEEP DUST-SIZE DISTRIBUTIONS
Energy Technology Data Exchange (ETDEWEB)
Gaspar, Andras; Psaltis, Dimitrios; Rieke, George H.; Oezel, Feryal, E-mail: agaspar@as.arizona.edu, E-mail: dpsaltis@as.arizona.edu, E-mail: grieke@as.arizona.edu, E-mail: fozel@as.arizona.edu [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States)
2012-07-20
We explore the evolution of the mass distribution of dust in collision-dominated debris disks, using the collisional code introduced in our previous paper. We analyze the equilibrium distribution and its dependence on model parameters by evolving over 100 models to 10 Gyr. With our numerical models, we confirm that systems reach collisional equilibrium with a mass distribution that is steeper than the traditional solution by Dohnanyi. Our model yields a quasi-steady-state slope of n(m) ≈ m^(-1.88) [n(a) ≈ a^(-3.65)] as a robust solution for a wide range of possible model parameters. We also show that a simple power-law function can be an appropriate approximation for the mass distribution of particles in certain regimes. The steeper solution has observable effects in the submillimeter and millimeter wavelength regimes of the electromagnetic spectrum. We assemble data for nine debris disks that have been observed at these wavelengths and, using a simplified absorption efficiency model, show that the predicted slope of the particle-mass distribution generates spectral energy distributions that are in agreement with the observed ones.
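The quoted mass- and size-distribution slopes are mutually consistent under the standard assumption of spherical grains of constant density; the quick check below (the substitution is ours, not from the paper) makes the conversion explicit.

```python
# If n(m) ∝ m^(-q) and m ∝ a^3 (constant grain density), then dm ∝ a^2 da and
# n(a) ∝ m^(-q) a^2 ∝ a^(-3q + 2): the size-distribution slope is 3q - 2.
q = 1.88                       # quasi-steady-state mass-distribution slope
size_slope = 3 * q - 2
print(round(size_slope, 2))    # 3.64, consistent with the quoted a^(-3.65)
```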
A Dynamic Wheel Model Based on Steady-state Interpolation Model
Institute of Scientific and Technical Information of China (English)
管欣; 段春光; 卢萍萍; 吴玉杰
2014-01-01
Aiming at the problem that the traditional steady-state tire model cannot precisely describe the friction force between tire and road surface in the low-speed zone, which leads to inaccurate simulation results in stopping conditions with residual vehicle speed, the State Key Laboratory of Automotive Simulation and Control at Jilin University has developed a dynamic wheel model. The model simplifies the tire crown as a rigid ring, which is connected to the wheel rim through six-direction spring-dampers representing the elasticity of the tire carcass. A modeling scheme is proposed in which the dynamic and static friction forces between the rigid ring and the road surface are calculated separately. With the dynamics subsystem of the rigid ring set up, a simulation program was developed in the C language and embedded into a sophisticated vehicle model, and a simulation was conducted. The results show that with this model the simulated vehicle can start smoothly and stop completely with no residual speed.
Three-dimensional tumor perfusion reconstruction using fractal interpolation functions.
Craciunescu, O I; Das, S K; Poulson, J M; Samulski, T V
2001-04-01
It has been shown that the perfusion of blood in tumor tissue can be approximated using the relative perfusion index determined from dynamic contrast-enhanced magnetic resonance imaging (DE-MRI) of the tumor blood pool. It was also concluded in a previous report that the blood perfusion in a two-dimensional (2-D) tumor vessel network has a fractal structure and that the evolution of the perfusion front can be characterized using invasion percolation. In this paper, the three-dimensional (3-D) tumor perfusion is reconstructed from the 2-D slices using fractal interpolation functions (FIF), i.e., the piecewise self-affine fractal interpolation model (PSAFIM) and the piecewise hidden-variable fractal interpolation model (PHVFIM). The fractal models are compared to classical interpolation techniques (linear, spline, polynomial) by determining the 2-D fractal dimension of the reconstructed slices. Using FIFs instead of classical interpolation techniques better conserves the fractal-like structure of the perfusion data. Of the two FIF methods, PHVFIM conserves the 3-D fractality better, owing to the cross-correlation that exists between the data in the 2-D slices and the data along the reconstructed direction. The 3-D structures resulting from PHVFIM have a fractal dimension within 3%-5% of the one reported in the literature for 3-D percolation. It is thus concluded that the reconstructed 3-D perfusion has a percolation-like scaling. As the perfusion term of the bio-heat equation is likely better described by reconstruction via fractal interpolation, a more suitable computation of the temperature field induced during hyperthermia treatments is expected.
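The piecewise self-affine construction can be illustrated with a classic one-dimensional fractal interpolation function sampled by the chaos game; the data points, vertical scaling factors, and iteration count below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def fif_chaos_game(xs, ys, d, n_iter=20000, seed=0):
    """Sample a piecewise self-affine fractal interpolation function through
    the data (xs[i], ys[i]) by the chaos game. d[i] are the vertical scaling
    factors (|d[i]| < 1 for contractivity)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    N = len(xs) - 1
    # Affine map w_i sends [x_0, x_N] onto [x_{i-1}, x_i] and interpolates the data:
    # w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i)
    a = (xs[1:] - xs[:-1]) / (xs[-1] - xs[0])
    e = (xs[-1] * xs[:-1] - xs[0] * xs[1:]) / (xs[-1] - xs[0])
    c = (ys[1:] - ys[:-1] - d * (ys[-1] - ys[0])) / (xs[-1] - xs[0])
    f = (xs[-1] * ys[:-1] - xs[0] * ys[1:] - d * (xs[-1] * ys[0] - xs[0] * ys[-1])) / (xs[-1] - xs[0])
    rng = np.random.default_rng(seed)
    pts = np.empty((n_iter, 2))
    x, y = xs[0], ys[0]
    for k in range(n_iter):
        i = rng.integers(N)                      # pick one of the N maps at random
        x, y = a[i] * x + e[i], c[i] * x + d[i] * y + f[i]
        pts[k] = x, y
    return pts

pts = fif_chaos_game([0.0, 0.5, 1.0], [0.0, 1.0, 0.0], d=np.array([0.3, -0.3]))
print(pts.shape)  # (20000, 2)
```

The hidden-variable variant (PHVFIM) adds an auxiliary coordinate to the same construction; the sketch above covers only the self-affine case.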
Jing, Zhao; Wu, Lixin; Ma, Xiaohui
2016-08-01
The authors regret that the Acknowledgements section in Jing et al. (2016) neglected to give proper credit to the model development team and to the intellectual work behind the model simulation and wish to add the following acknowledgements: We are very grateful to the developers of the coupled regional climate model (CRCM) used in this study. The CRCM was developed at Texas A&M University by Dr. Raffaele Montuoro under the direction of Dr. Ping Chang, with support from National Science Foundation Grants AGS-1067937 and AGS-1347808, Department of Energy Grant DE-SC0006824, as well as National Oceanic and Atmospheric Administration Grant NA11OAR4310154. The design of the reported CRCM simulations was led by Dr. Ping Chang and carried out by Dr. Xiaohui Ma as a part of her dissertation research under the supervision of Dr. Ping Chang, supported by National Science Foundation Grants AGS-1067937 and AGS-1347808. The authors would like to apologise for any inconvenience caused.
Energy cascades in the upper ocean
Institute of Scientific and Technical Information of China (English)
Ray Q.Lin; Scott Chubb
2006-01-01
Wave-wave interactions cause energy cascades. These are the most important processes in the upper ocean because they govern wave growth and dissipation. Through indirect cascades, wave energy is transferred from higher frequencies to lower frequencies, leading to wave growth. In direct cascades, energy is transferred from lower frequencies to higher frequencies, causing waves to break and wave energy to dissipate. However, the evolution and origin of energy cascade processes are still not fully understood. In particular, results from a recent theory (Kalmykov, 1998) suggest that class I wave-wave interactions (involving 4-, 6-, 8-, etc., even numbers of resonantly interacting waves) cause indirect cascades, and class II wave-wave interactions (involving 5-, 7-, 9-, etc., odd numbers of waves) cause direct cascades. In contrast to this theory, our model results indicate that 4-wave interactions can cause significant transfer of wave energy through both direct and indirect cascades. In most situations, 4-wave interactions provide the major source of energy transfer for both direct and indirect cascades, except when the wave steepness is larger than 0.28. Our model results agree well with wave measurements obtained from field buoy data (for example, Lin and Lin, 2002). In particular, those observations of asymmetrical wave-wave interactions found that direct and indirect cascades are both mainly due to 4-wave interactions when the wave steepness is less than 0.3.
Institute of Scientific and Technical Information of China (English)
[None listed]
2011-01-01
In areas with high groundwater pressure, a grout curtain is often adopted to reduce the water pressure on the tunnel lining. A series of model tests for the diversion tunnel of the Jinping Second Cascade Hydropower Station, China, was designed to study the effect of the grout curtain. The impacts of the thickness of the grout curtain, its permeability, the internal water pressure and the drainage inflow on the distribution of water pressure are discussed. The results indicate that under undrained conditions the water pressure equals the hydrostatic pressure whether or not a grout curtain is present; the water pressure under drained conditions is far less than under undrained conditions, so drainage in the tunnel can reduce the tunnel water pressure effectively. For the same inflow, both an increase in thickness and a decrease in hydraulic conductivity of the grout curtain reduce the water pressure effectively. For the same water pressure, the smaller the inflow through the grout curtain, the less water has to be discharged. The impact of the hydraulic conductivity of the grout curtain is more pronounced than that of its thickness. With increasing internal water pressure, the water pressure on the grout curtain also increases, nearly linearly. The proposed thickness of the grout curtain for the diversion tunnels is 16 m.
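The reported trade-off between curtain thickness and hydraulic conductivity follows directly from Darcy's law: for a fixed seepage flux, the head drop scales linearly with thickness but inversely with conductivity. The flux and parameter values below are illustrative assumptions, not the test conditions.

```python
# Head drop across a grout curtain for a fixed seepage flux q (1-D Darcy sketch):
# delta_h = q * L / K, with q the specific discharge [m/s], L the curtain
# thickness [m] and K its hydraulic conductivity [m/s].
q = 1e-7                                      # assumed seepage flux
cases = [(8, 1e-8), (16, 1e-8), (16, 1e-9)]   # (L, K) pairs, assumed values
drops = [q * L / K for L, K in cases]
# Doubling L doubles the head drop; reducing K tenfold multiplies it by ten,
# which is why conductivity dominates over thickness.
print([round(d) for d in drops])              # [80, 160, 1600]
```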
Jamali Mahabadi, S. E.; Hu, Yue; Talukder, Muhammad Anisuzzaman; Carruthers, Thomas F.; Menyuk, Curtis R.
2016-10-01
We have developed a comprehensive model of gain recovery due to unipolar electron transport after a short optical pulse in quantum cascade lasers (QCLs) that takes into account all the participating energy levels in a device, including the continuum. This work accounts for the incoherent scattering of electrons from one energy level to another and for quantum coherent tunneling from an injector level to an active region level or vice versa. In contrast to prior work, which considered transitions to and from only a limited number of bound levels, this work includes transitions between all bound levels and between the bound energy levels and the continuum. We simulated an experiment of S. Liu et al., in which 438-pJ femtosecond optical pulses at the device's lasing wavelength were injected into an In0.653Ga0.348As/In0.310Al0.690As QCL structure; we found that approximately 1% of the electrons in the bound energy levels will be excited into the continuum by a pulse and that the probability that these electrons will be scattered back into bound energy levels is negligible, ~10^-4. The gain recovery that is predicted is not consistent with the experiments, indicating that one or more phenomena besides unipolar electron transport in response to a short optical pulse play an important role in the observed gain recovery.
Development of a Higher Fidelity Model for the Cascade Distillation Subsystem (CDS)
Perry, Bruce; Anderson, Molly
2014-01-01
Significant improvements have been made to the ACM model of the CDS, enabling accurate predictions of dynamic operations with fewer assumptions. The model has been utilized to predict how CDS performance would be impacted by changing operating parameters, revealing performance trade-offs and possibilities for improvement. CDS efficiency is driven by the THP coefficient of performance, which in turn depends on heat transfer within the system. Based on the remaining limitations of the simulation, priorities for further model development include:
- relaxing the assumption of total condensation;
- incorporating dynamic simulation capability for the buildup of dissolved inert gases in condensers;
- examining CDS operation with more complex feeds;
- extending the heat transfer analysis to all surfaces.
Leveraging Structural Characteristics of Interdependent Networks to Model Non-Linear Cascading Risks
2013-04-01
jung.sourceforge.net/), and so forth. State of the program in our decision-theoretic DEC-MDP model captures the critical information at a specific point in time...the performance factor for 5/10-year time spans for all MDAPs, based on a composite metric (it may include the breach factors, %PAUC, funding delta...features to describe the state space of the program within the DEC-MDP model. The following is the list of features of interest: Feature 1
Interpolation of rational matrix functions
Ball, Joseph A; Rodman, Leiba
1990-01-01
This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...
Kriging Interpolating Cosmic Velocity Field
Yu, Yu; Jing, Yipeng; Zhang, Pengjie
2015-01-01
[abridged] Volume-weighted statistics of large-scale peculiar velocity are preferred in peculiar velocity cosmology, since they are free of the uncertainties of galaxy density bias entangled in mass-weighted statistics. However, measuring volume-weighted velocity statistics from galaxy (halo/simulation particle) velocity data is challenging. For the first time, we apply Kriging interpolation to obtain the volume-weighted velocity field. Kriging is a minimum variance estimator: it predicts the most likely velocity at each location based on the velocities at other locations. We test the performance of Kriging as quantified by the E-mode velocity power spectrum from simulations. The dependence on the variogram prior used in Kriging, the number $n_k$ of nearby particles used to interpolate, and the density $n_P$ of the observed sample is investigated. (1) We find that Kriging induces $1\%$ and $3\%$ systematics at $k\sim 0.1h{\rm Mpc}^{-1}$ when $n_P\sim 6\times 10^{-2} ({\rm Mpc}/h)^{-3}$ and $n_P\sim 6\times 10^{-3} ({\rm Mpc...
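The minimum-variance property of Kriging can be sketched with a minimal one-dimensional ordinary-kriging predictor; the exponential variogram and its range below are arbitrary assumptions, not the variogram prior studied in the paper.

```python
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new, variogram=lambda h: 1.0 - np.exp(-h / 0.5)):
    """Minimal 1-D ordinary kriging: solve the kriging system
    [Gamma 1; 1^T 0][w; mu] = [gamma; 1] and return the weighted predictions."""
    n = len(x_obs)
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = variogram(np.abs(x_obs[:, None] - x_obs[None, :]))
    G[n, n] = 0.0
    preds = []
    for x0 in np.atleast_1d(x_new):
        g = np.append(variogram(np.abs(x_obs - x0)), 1.0)
        w = np.linalg.solve(G, g)[:n]            # kriging weights (sum to 1)
        preds.append(w @ y_obs)
    return np.array(preds)

x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.array([0.0, 1.0, 4.0, 9.0])
# Kriging is an exact interpolator: it reproduces the observation at x = 1.0.
print(ordinary_kriging(x_obs, y_obs, [1.0]))
```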
Evaluation of various interpolants available in DICE
Energy Technology Data Exchange (ETDEWEB)
Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
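A common interpolant of the kind evaluated in such packages is Keys cubic convolution; the 1-D sketch below (our own example, not DICe code) shows how sub-pixel intensity values are obtained from four neighbouring samples.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys cubic-convolution kernel, a standard image interpolant."""
    s = np.abs(s)
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * s[m2] ** 3 - 5 * a * s[m2] ** 2 + 8 * a * s[m2] - 4 * a
    return out

def interp1d_cubic(f, x):
    """Evaluate discrete samples f[0..n-1] at the sub-pixel location x."""
    i = int(np.floor(x))
    support = np.arange(i - 1, i + 3)                    # four nearest samples
    idx = np.clip(support, 0, len(f) - 1)                # clamp at the borders
    return float(np.dot(f[idx], keys_kernel(x - support)))

f = np.arange(10, dtype=float)          # a linear intensity ramp
print(interp1d_cubic(f, 4.25))          # 4.25: cubic convolution reproduces linear data
```

In 2-D the same kernel is applied separably in x and y, and its analytic derivative gives the intensity gradients that drive the correlation optimization.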
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Heryudono, Alfa
2016-01-01
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3 and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
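The polyharmonic-spline building block of such schemes can be sketched in two dimensions with the thin-plate kernel r^2 log r plus an affine tail; the partition-of-unity blending and adaptive refinement are omitted, and the test data are our own.

```python
import numpy as np

def tps_fit(points, values):
    """Fit a 2-D polyharmonic (thin-plate) spline phi(r) = r^2 log r with an
    affine tail; returns the stacked coefficients [lambda_1..n, c0, cx, cy]."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d ** 2 * np.log(d), 0.0)     # phi(0) = 0
    P = np.hstack([np.ones((n, 1)), points])
    A = np.zeros((n + 3, n + 3))                         # saddle-point system
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, rhs)

def tps_eval(points, coef, x):
    d = np.linalg.norm(points - np.asarray(x), axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(d > 0, d ** 2 * np.log(d), 0.0)
    n = len(points)
    return float(phi @ coef[:n] + coef[n] + coef[n + 1:] @ np.asarray(x))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.4, 0.7]])
vals = pts[:, 0] + 2.0 * pts[:, 1]                       # a linear test function
coef = tps_fit(pts, vals)
print(round(tps_eval(pts, coef, [0.3, 0.3]), 6))         # 0.9: linear data reproduced
```

The appended affine polynomial is what guarantees exact reproduction of linear data; a partition of unity would stitch many such local fits together with compactly supported weights.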
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are surveyed first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic kernel interpolations demonstrate a strong advantage on the whole, especially the symmetrical cubic B-spline interpolation, although they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide a certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
Yousefvand, Hossein Reza
2017-07-01
In this paper, a self-consistent numerical approach to study the temperature- and bias-dependent characteristics of mid-infrared (mid-IR) quantum cascade lasers (QCLs) is presented that integrates a number of quantum mechanical models. The field-dependent laser parameters, including the nonradiative scattering times, the detuning and energy levels, the escape activation energy, the backfilling excitation energy and the dipole moment of the optical transition, are calculated for a wide range of applied electric fields by a self-consistent solution of the Schrödinger-Poisson equations. A detailed analysis of the performance of the obtained structure is carried out within a self-consistent solution of the subband population rate equations coupled with carrier coherent transport equations through sequential resonant tunneling, taking into account the temperature and bias dependence of the relevant parameters. Furthermore, the heat transfer equation is included in order to calculate the carrier temperature inside the active region levels. This leads to a compact predictive model for the temperature- and electric-field-dependent characteristics of mid-IR QCLs, such as the light-current (L-I), electric field-current (F-I) and core temperature-electric field (T-F) curves. For a typical mid-IR QCL, good agreement was found between the simulated temperature-dependent L-I characteristic and experimental data, which confirms the validity of the model. It is found that the main characteristics of the device, such as output power and turn-on delay time, are degraded by the interplay between temperature and Stark effects.
Information cascade on networks
Hisakado, Masato
2015-01-01
In this paper, we discuss a voting model on three different kinds of networks: a random graph, the Barabási-Albert (BA) model, and a fitness model. A voting model represents the way in which public perceptions are conveyed to voters. Our voting model is constructed using two types of voters, herders and independents, and two candidates. Independents vote based on their fundamental values; herders, on the other hand, base their voting on the number of previous votes. Hence, herders vote for the majority candidate and obtain information about previous votes from their networks. We discuss how the phases differ depending on the network. Two kinds of phase transitions, an information cascade transition and a super-normal transition, are identified. The first of these is a transition between a state in which most voters make the correct choice and a state in which most of them are wrong. The second is a transition of convergence speed. The information cascade t...
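The herder/independent mechanism can be sketched with a minimal sequential simulation on a fully mixed population (the network structure, which is the paper's focus, is deliberately omitted); all parameter values are illustrative assumptions.

```python
import random

def voting_sim(n_voters=10000, p_herder=0.6, p_correct=0.7, seed=1):
    """Sequential voting: independents vote correctly with probability
    p_correct; herders copy the current majority (or act as independents
    when the vote is tied). Returns the fraction of correct votes."""
    random.seed(seed)
    correct = wrong = 0
    for _ in range(n_voters):
        if random.random() < p_herder and correct != wrong:
            if correct > wrong:            # herders follow the majority so far
                correct += 1
            else:
                wrong += 1
        else:                              # independents use fundamental values
            if random.random() < p_correct:
                correct += 1
            else:
                wrong += 1
    return correct / n_voters

frac = voting_sim()
print(0.0 <= frac <= 1.0)  # True
```

An information cascade corresponds to the run locking into one candidate early; rerunning with different seeds shows both "correct" and "wrong" absorbing outcomes when the herder fraction is large.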
Laos Organization Name Recognition Using a Cascaded Model Based on SVM and CRF
Directory of Open Access Journals (Sweden)
Duan Shaopeng
2017-01-01
Full Text Available According to the characteristics of Laos organization names, this paper proposes a two-layer model based on conditional random fields (CRF) and support vector machines (SVM) for Laos organization name recognition. The first layer uses CRF to recognize simple organization names, and its results support the decisions of the second layer. Based on a driving method, the second layer uses SVM and CRF to recognize complicated organization names. Finally, the results of the two layers are combined, and a subsequent treatment corrects low-confidence recognition results. Open tests on real corpora show that this approach based on SVM and CRF is efficient in recognizing organization names, achieving a recall of 80.83% and a precision of 82.75%.
Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo
2013-04-01
that complex cascades of mass movement processes can realistically be modeled using different models and model parameters. The method to semi-automatically produce hazard maps is promising and should be applied in other case studies. Verification of model based results in the field remains an important requirement. Results from this study are important for the GLOF early warning system that is currently in an implementation phase, and for risk reduction efforts in general.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is of considerable reference value for image interpolation applications.
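The underlying GRBF interpolation is a dense linear solve followed by dense kernel evaluations, which is exactly what a CUDA implementation parallelises; the serial 1-D sketch below (our own example, with an assumed shape parameter) shows the method itself.

```python
import numpy as np

def grbf_interpolate(x_obs, y_obs, x_new, eps=1.0):
    """Gaussian radial basis function interpolation in 1-D.
    eps is the (assumed) shape parameter of the Gaussian kernel."""
    A = np.exp(-(eps * (x_obs[:, None] - x_obs[None, :])) ** 2)
    w = np.linalg.solve(A, y_obs)                       # interpolation weights
    B = np.exp(-(eps * (np.asarray(x_new)[:, None] - x_obs[None, :])) ** 2)
    return B @ w

x_obs = np.linspace(0.0, 3.0, 7)
y_obs = np.sin(x_obs)
# The Gaussian kernel matrix is positive definite, so the interpolant
# reproduces the data at the nodes.
print(np.allclose(grbf_interpolate(x_obs, y_obs, x_obs, eps=2.0), y_obs))  # True
```

The kernel-matrix assembly and the matrix-vector products in `grbf_interpolate` map naturally onto one CUDA thread per output sample, which is where the reported speedups come from.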
High degree interpolation polynomial in Newton form
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
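The divided-difference scheme and the Newton-form evaluation the abstract refers to can be sketched as follows; note that, per the abstract, numerical stability of this scheme depends on the ordering of the interpolation points, while this minimal example simply uses monotone ordering on exact data.

```python
def divided_differences(x, y):
    """Divided-difference coefficients for the Newton form, computed in place:
    coef[k] multiplies prod_{j<k}(t - x[j])."""
    n = len(x)
    coef = [float(v) for v in y]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - k])
    return coef

def newton_eval(x, coef, t):
    """Horner-style evaluation of the Newton interpolation polynomial."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (t - x[i]) + coef[i]
    return result

x = [0.0, 1.0, 2.0, 3.0]
coef = divided_differences(x, [0.0, 1.0, 4.0, 9.0])   # samples of t^2
print(newton_eval(x, coef, 1.5))                      # 2.25
```

For high-degree interpolation one would take the nodes at Chebyshev points and reorder them (e.g. by a Leja ordering) before building the table, which is the stability issue the paper analyses.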
COMPUTATION OF VECTOR VALUED BLENDING RATIONAL INTERPOLANTS
Institute of Scientific and Technical Information of China (English)
檀结庆
2003-01-01
As we know, Newton's interpolation polynomial is based on divided differences, which can be calculated recursively by the divided-difference scheme, while Thiele's interpolating continued fractions are geared towards determining a rational function, which can also be calculated recursively by so-called inverse differences. In this paper, both Newton's interpolation polynomial and Thiele's interpolating continued fractions are incorporated to yield a kind of bivariate vector-valued blending rational interpolant by means of the Samelson inverse. Blending differences are introduced to calculate the blending rational interpolants recursively; the algorithm and the matrix-valued case are discussed, and a numerical example is given to illustrate the efficiency of the algorithm.
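The inverse-difference scheme for the scalar (univariate) case underlying this construction can be sketched as follows; the data below are our own, and no safeguards are included for the known case where inverse differences break down (division by zero for some data orderings).

```python
def thiele_coefficients(x, y):
    """Inverse-difference scheme: returns the coefficients b[k] of Thiele's
    interpolating continued fraction
    R(t) = b0 + (t - x0)/(b1 + (t - x1)/(b2 + ...))."""
    n = len(x)
    phi = [list(map(float, y))]                  # phi[0][i] = y_i
    for k in range(1, n):
        prev = phi[-1]
        row = [0.0] * n
        for i in range(k, n):
            row[i] = (x[i] - x[k - 1]) / (prev[i] - prev[k - 1])
        phi.append(row)
    return [phi[k][k] for k in range(n)]

def thiele_eval(x, b, t):
    """Evaluate the continued fraction from the inside out."""
    val = b[-1]
    for k in range(len(b) - 2, -1, -1):
        val = b[k] + (t - x[k]) / val
    return val

x = [0.0, 1.0, 2.0]
b = thiele_coefficients(x, [1.0, 0.5, 1.0 / 3.0])    # samples of 1/(1+t)
print(round(thiele_eval(x, b, 3.0), 12))             # 0.25, i.e. 1/(1+3)
```

With three points the continued fraction is a degree-(1,1) rational function, so it reproduces 1/(1+t) exactly; the paper's blending construction replaces the scalar reciprocals with the Samelson (vector) inverse.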
Directory of Open Access Journals (Sweden)
Wassim M. Haddad
2014-07-01
Full Text Available Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a
Baer, Patrick; Huggel, Christian; Frey, Holger; Chisolm, Rachel; McKinney, Daene; McArdell, Brian; Portocarrero, Cesar; Cochachin, Alejo
2016-04-01
Huaraz as the largest city in Cordillera Blanca has faced a major disaster in 1941, when an outburst flood from Lake Palcacocha killed several thousand people and caused widespread destruction. Recent studies on glacial lake outburst flood (GLOF) modelling and early warning systems focussed on Lake Palcacocha which has regrown after the 1941 event, from a volume of half a million m3 in 1974 to a total volume of more than 17 million m3 today. However, little research has been conducted so far concerning the situation of other lakes in the Quillcay catchment, namely Lake Tullparaju (12 mill. m3) and Cuchillacocha (2.5 mill. m3), which both also pose a threat to the city of Huaraz. In this study, we modelled the cascading processes at Lake Tullparaju and Lake Cuchillacocha including rock/ice avalanches, flood wave propagation in the lake and the resulting outburst flood and debris flows. We used the 2D model RAMMS to simulate ice avalanches. Model output was used as input for analytical 2D and 3D calculations of impact waves in the lakes that allowed us to estimate dam overtopping wave height. Since the dimension of the hanging glaciers above all three lakes is comparable, the scenarios in this study have been defined similar to the previous study at Lake Palcacocha. The flow propagation model included sediment entrainment in the steeper parts of the catchment, adding up to 50% to the initial flow volume. The results for total travel time as well as for inundated areas and flow depth and velocity in the city of Huaraz are comparable to the previous studies at Lake Palcacocha. This underlines the importance of considering also these lakes within an integral hazard analysis for the city of Huaraz. A main challenge for modelling GLOFs in the Quillcay catchment using RAMMS is the long runout distance of over 22 km combined with the very low slope gradient of the river. Further studies could improve the process understanding and could focus on more detailed investigations
Autoregressive cascades on random networks
Iyer, Srikanth K.; Vaze, Rahul; Narasimha, Dheeraj
2016-04-01
A network cascade model that captures many real-life correlated node failures in large networks via load redistribution is studied. The model is well suited to networks in which physical quantities are transmitted, e.g., for studying large-scale outages in electrical power grids, gridlocks in road networks, and connectivity breakdown in communication networks. For this model, a phase transition is established, i.e., the existence of critical thresholds above or below which a small number of node failures lead, or do not lead, to a global cascade of network failures. Theoretical bounds are obtained on the critical capacity parameter that determines the threshold above and below which a cascade appears or disappears, respectively; these bounds are shown to closely follow numerical simulation results.
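The capacity-threshold phase transition can be illustrated with a toy mean-field load-redistribution cascade (not the paper's random-network model: here failed load is spread equally over all survivors, and all parameter values are illustrative assumptions).

```python
import random

def cascade_size(n=1000, capacity=1.5, initial_failures=5, seed=0):
    """Toy load-redistribution cascade: every node carries unit load; when a
    node fails, its load is spread equally over the surviving nodes, which
    fail in turn once their load exceeds `capacity`. Returns the final
    fraction of failed nodes."""
    load = [1.0] * n
    failed = set(random.Random(seed).sample(range(n), initial_failures))
    frontier = set(failed)
    while frontier:
        shed = sum(load[i] for i in frontier)    # load released by new failures
        for i in frontier:
            load[i] = 0.0
        alive = [i for i in range(n) if i not in failed]
        if not alive:
            break
        for i in alive:
            load[i] += shed / len(alive)         # equal redistribution
        frontier = {i for i in alive if load[i] > capacity}
        failed |= frontier
    return len(failed) / n

print(cascade_size(capacity=1.5))     # 0.005: subcritical, the cascade dies out
print(cascade_size(capacity=1.004))   # 1.0: supercritical, a global cascade
```

A tiny change in the capacity parameter flips the outcome from a handful of failures to total collapse, which is the phase-transition behaviour the paper bounds analytically.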
Deriving global flood hazard maps of fluvial floods through a physical model cascade
Pappenberger, Florian; Dutra, Emanuel; Wetterhall, Fredrik; Cloke, Hannah L.
2013-04-01
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979-2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25 × 25 km grid, which is then reprojected onto a 1 × 1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25 × 25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
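The Gumbel step of the post-processing chain above can be sketched in a few lines. The abstract does not state which fitting procedure was used; a simple method-of-moments fit is assumed here for illustration.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_fit(annual_maxima):
    """Method-of-moments fit of a Gumbel (EV1) distribution to annual
    maximum flows: returns (location mu, scale beta)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale from the variance
    mu = mean - EULER_GAMMA * beta          # location from the mean
    return mu, beta

def return_level(mu, beta, T):
    """Flow magnitude exceeded on average once every T years
    (inverse Gumbel CDF at probability 1 - 1/T)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

Applied to 30 years of simulated annual maxima per grid cell, `return_level(mu, beta, T)` for T between 2 and 500 yields exactly the kind of return-period maps the study presents.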
Deriving global flood hazard maps of fluvial floods through a physical model cascade
Directory of Open Access Journals (Sweden)
F. Pappenberger
2012-11-01
Full Text Available Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25 × 25 km grid, which is then reprojected onto a 1 × 1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25 × 25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
Institute of Scientific and Technical Information of China (English)
宁卫远
2013-01-01
Scientific, accurate, and timely analysis and prediction of building settlement is extremely important for construction and operation, and settlement monitoring results are an important means of checking design and construction. Owing to the variability of the natural environment, the complexity of construction sites, and other subjective and objective factors, monitoring data are sometimes interrupted. To analyse the settlement behaviour, the missing data must be interpolated. Taking into account the non-equal time intervals of settlement monitoring, a non-equidistant GM(1,1) model and a GM(1,1) model with optimized background values are established and applied to the interpolation of missing settlement-monitoring data, yielding satisfactory interpolated values for the interrupted records.
Sitnick, Stephanie L; Shaw, Daniel S; Hyde, Luke W
2014-02-01
This study examined developmentally salient risk and protective factors of adolescent substance use assessed during early childhood and early adolescence using a sample of 310 low-income boys. Child problem behavior and proximal family risk and protective factors (i.e., parenting and maternal depression) during early childhood, as well as child and family factors and peer deviant behavior during adolescence, were explored as potential precursors to later substance use during adolescence using structural equation modeling. Results revealed that early childhood risk and protective factors (i.e., child externalizing problems, mothers' depressive symptomatology, and nurturant parenting) were indirectly related to substance use at the age of 17 via risk and protective factors during early and middle adolescence (i.e., parental knowledge and externalizing problems). The implications of these findings for early prevention and intervention are discussed.
Calibration and field application of a Sierra Model 235 cascade impactor
Energy Technology Data Exchange (ETDEWEB)
Knuth, R.H.
1984-06-01
A Sierra Model 235 slotted impactor was used to measure the particle size distribution of ore dust in uranium concentrating mills. The impactor was calibrated at a flow rate of 0.21 m³/min, using solid monodisperse particles of methylene blue and an impaction surface of Whatman No. 41 filter paper soaked in mineral oil. The reduction from the impactor's design flow rate of 1.13 m³/min (40 cfm) to 0.21 m³/min (7.5 cfm), a necessary adjustment because of the anticipated large particle sizes of ore dust, increased the stage cut-off diameters by an average factor of 2.3. Evaluation of field test results revealed that the underestimation of mass median diameters, often caused by the rebound and reentrainment of solid particles from dry impaction surfaces, was virtually eliminated by using the oiled Whatman No. 41 impaction surface.
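The reported average factor of 2.3 is consistent with Stokes-number scaling: if each stage's 50% collection point occurs at a fixed Stokes number, the cut-off diameter scales as the inverse square root of the volumetric flow rate. A quick check of that relationship (an idealized scaling argument, not the calibration procedure itself):

```python
import math

def cutoff_scaling(q_design, q_reduced):
    """Factor by which stage cut-off diameters increase when the sampling
    flow rate is reduced, assuming the 50% collection point of each stage
    occurs at a fixed Stokes number, so that d50 ~ Q**-0.5."""
    return math.sqrt(q_design / q_reduced)

# design flow 1.13 m^3/min reduced to 0.21 m^3/min
factor = cutoff_scaling(1.13, 0.21)  # ~2.32, matching the measured 2.3
```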
The equal load-sharing model of cascade failures in power grids
Scala, Antonio
2015-01-01
Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
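In the spirit of the statistical-physics mapping mentioned above, the classical democratic (equal load-sharing) fiber-bundle calculation shows how a global breakdown threshold arises. This is a standard textbook model chosen for illustration, not necessarily the authors' exact formulation:

```python
import random

def critical_load(thresholds):
    """Per-element load at which an equal load-sharing bundle collapses.

    Sort thresholds ascending; after the weakest k elements have failed,
    each of the n - k survivors carries F / (n - k), so the bundle holds
    total load F while F / (n - k) stays below the k-th threshold.  The
    largest sustainable F is therefore max over k of t[k] * (n - k).
    """
    t = sorted(thresholds)
    n = len(t)
    return max(t[k] * (n - k) for k in range(n)) / n
```

For thresholds drawn uniformly from [0, 1] the per-element critical load converges to max of x(1-x) = 0.25 as the bundle grows, the sharp large-size breakdown the abstract alludes to.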
The equal load-sharing model of cascade failures in power grids
Scala, Antonio; De Sanctis Lucentini, Pier Giorgio
2016-11-01
Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing power demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
Survey: interpolation methods in medical image processing.
Lehmann, T M; Gönner, C; Spitzer, K
1999-11-01
Image interpolation techniques are often required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and for processing such as compression or resampling. Since the ideal interpolation function is spatially unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be among the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X-rays was used, selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods. The cubic 6 x 6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest
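The interpolator-versus-approximator distinction made in this survey can be demonstrated with a minimal 1D sketch (the kernels below are standard definitions; the test signal and error measure are illustrative choices, not the paper's benchmark):

```python
import math

def nearest(t):
    """Nearest-neighbour kernel (box of width 1)."""
    return 1.0 if -0.5 <= t < 0.5 else 0.0

def linear(t):
    """Linear (triangle) kernel, support [-1, 1] -- an interpolator."""
    return max(0.0, 1.0 - abs(t))

def cubic_bspline(t):
    """Cubic B-spline, support [-2, 2].  Applied directly to the samples
    it is an approximator (it smooths), not an interpolator."""
    a = abs(t)
    if a < 1.0:
        return (4.0 - 6.0 * a * a + 3.0 * a ** 3) / 6.0
    if a < 2.0:
        return (2.0 - a) ** 3 / 6.0
    return 0.0

def interp1(samples, x, kernel):
    """Reconstruct a unit-spaced signal at position x with a kernel."""
    return sum(s * kernel(x - i) for i, s in enumerate(samples))

# worst-case error of each kernel when resampling a sine wave
# at half-sample offsets (interior points only, to avoid edges)
samples = [math.sin(2.0 * math.pi * k / 16.0) for k in range(32)]

def max_err(kernel):
    return max(abs(interp1(samples, x, kernel)
                   - math.sin(2.0 * math.pi * x / 16.0))
               for x in (j + 0.5 for j in range(8, 24)))
```

Linear interpolation reproduces the samples exactly at integer positions, while the plain cubic B-spline does not, which is precisely why the survey classifies the latter as an approximator unless its coefficients are prefiltered.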
Spatial Sampling Strategies for the Effect of Interpolation Accuracy
Directory of Open Access Journals (Sweden)
Hairong Zhang
2015-12-01
Full Text Available Spatial interpolation methods are widely used in various fields, but they have mostly been studied with one or a few specific sampling datasets that do not reflect the complexity of spatial characteristics, leading to conclusions that cannot be widely applied. In this paper, three factors that affect interpolation accuracy are considered, i.e., sampling density, sampling mode, and sampling location. We studied the inverse distance weighted (IDW), regular spline (RS), and ordinary kriging (OK) interpolation methods using 162 DEM datasets covering six sampling densities, nine terrain complexities, and three sampling modes. The experimental results show that, in selective sampling and combined sampling, the maximum absolute errors of the interpolation methods rapidly increase and the estimated values are overestimated. In regular-grid sampling, the RS method has the highest interpolation accuracy and IDW the lowest. However, in both selective and combined sampling, the accuracy of the IDW method improves significantly while the RS method performs worse. The OK method does not change significantly between the three sampling modes. The following conclusions can be drawn from the above analysis: the combined sampling mode is recommended, and more sampling points should be added on ridges, in valleys, and in other complex terrain. The IDW method should not be used in the regular-grid sampling mode, but it performs well in the selective and combined sampling modes; the RS method shows the opposite behaviour. The sampling dataset should be analyzed before using the OK method, so that suitable models can be selected based on the analysis results.
Automatic Image Interpolation Using Homography
Directory of Open Access Journals (Sweden)
Chi-Tsung Liu
2010-01-01
Full Text Available While taking photographs, we often face the problem that unwanted foreground objects (e.g., vehicles, signs, and pedestrians) occlude the main subject(s). We propose to apply image interpolation (also known as inpainting) techniques to remove unwanted objects in photographs and to automatically patch the vacancy after the unwanted objects are removed. When only a single image is given, if the information lost after the unwanted objects are removed is too great, the patching results are usually unsatisfactory. The proposed inpainting techniques employ homographic constraints in geometry to incorporate multiple images taken from different viewpoints. Our experimental results showed that the proposed techniques can effectively reduce the effort of searching for potential patches in multiple input images and select the best patches for the missing regions.
Coarse graining approach to First principles modeling of radiation cascade in large Fe super-cells
Odbadrakh, Khorgolkhuu; Nicholson, Don; Rusanu, Aurelian; Wang, Yang; Stoller, Roger; Zhang, Xiaoguang; Stocks, George
2012-02-01
First principles techniques employed to understand systems at an atomistic level are not practical for large systems consisting of millions of atoms. We present an efficient coarse graining approach to bridge the first principles calculations of local electronic properties to classical Molecular Dynamics (MD) simulations of large structures. Local atomic magnetic moments in crystalline Fe are perturbed by radiation generated defects. The effects are most pronounced near the defect core and decay with distance. We develop a coarse grained technique based on the Locally Self-consistent Multiple Scattering (LSMS) method that exploits the near-sightedness of the electron Green function. The atomic positions were determined by MD with an embedded atom force field. The local moments in the neighborhood of the defect cores are calculated with first-principles based on full local structure information. Atoms in the rest of the system are modeled by representative atoms with approximated properties. This work was supported by the Center for Defect Physics, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences.
Unsteady transonic flow over cascade blades
Surampudi, S. P.; Adamczyk, J. J.
1986-01-01
An attempt is made to develop an efficient unsteady aerodynamics model for a staggered cascade of blades in the neighborhood of Mach 1, representing the blade row by a rectilinear two-dimensional cascade of thin, flat-plate airfoils. The equations of motion are derived on the basis of linearized transonic small perturbation theory, and an analytical solution is obtained by means of the Wiener-Hopf procedure. Making use of the transonic similarity law, the results obtained are compared with those of other linearized cascade analyses. A parametric study is conducted to find the effects of reduced frequency, stagger angle, solidity, and the location of the pitching axis on cascade stability.
Petersen, Marcell Elo; Maar, Marie; Larsen, Janus; Møller, Eva Friis; Hansen, Per Juel
2017-05-01
The aim of the study was to investigate the relative importance of bottom-up and top-down forcing on trophic cascades in the pelagic food-web and the implications for water quality indicators (summer phytoplankton biomass and winter nutrients) in relation to management. The 3D ecological model ERGOM was validated and applied in a local set-up of the Kattegat, Denmark, using the off-line Flexsem framework. The model scenarios were conducted by changing the forcing by ± 20% of nutrient inputs (bottom-up) and mesozooplankton mortality (top-down), and both types of forcing combined. The model results showed that cascading effects operated differently depending on the forcing type. In the single-forcing bottom-up scenarios, the cascade directions were in the same direction as the forcing. For scenarios involving top-down, there was a skipped-level-transmission in the trophic responses that was either attenuated or amplified at different trophic levels. On a seasonal scale, bottom-up forcing showed strongest response during winter-spring for DIN and Chl a concentrations, whereas top-down forcing had the highest cascade strength during summer for Chl a concentrations and microzooplankton biomass. On annual basis, the system was more bottom-up than top-down controlled. Microzooplankton was found to play an important role in the pelagic food web as mediator of nutrient and energy fluxes. This study demonstrated that the best scenario for improved water quality was a combined reduction in nutrient input and mesozooplankton mortality calling for the need of an integrated management of marine areas exploited by human activities.
Spatial interpolation approach based on IDW with anisotropic spatial structures
Li, Jia; Duan, Ping; Sheng, Yehua; Lv, Haiyang
2015-12-01
Among the many interpolation methods, inverse distance weighted (IDW) interpolation, with its simple interpolation principle, is one of the most common. Actual geographic phenomena often exhibit anisotropic spatial structures, which should be taken into account when IDW interpolation is used. Geostatistical theory provides tools for exploring such anisotropic spatial structures. In this paper, a spatial interpolation approach based on IDW with anisotropic spatial structures is proposed. DEM data are used to test the reliability of IDW interpolation that considers anisotropic spatial structures. Experimental results show that IDW interpolation considering anisotropic spatial structures can improve interpolation precision when the sampled data exhibit an anisotropic spatial structure.
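One common way to fold anisotropy into IDW is to stretch and rotate the coordinate system before measuring distances. The sketch below assumes that mechanism; the rotation angle and axis ratio are illustrative parameters, and the paper's geostatistical way of estimating the anisotropy may differ:

```python
import math

def idw_aniso(points, values, q, power=2.0, ratio=1.0, angle=0.0):
    """IDW with an anisotropic distance metric.

    Coordinates are rotated by `angle` (radians) and the cross-axis
    component stretched by `ratio` before the usual inverse-distance
    weighting; ratio=1 recovers isotropic IDW.
    """
    c, s = math.cos(angle), math.sin(angle)
    def dist(p):
        dx, dy = q[0] - p[0], q[1] - p[1]
        u = c * dx + s * dy              # along the major axis
        v = (-s * dx + c * dy) * ratio   # across it, penalized
        return math.hypot(u, v)
    num = den = 0.0
    for p, z in zip(points, values):
        d = dist(p)
        if d < 1e-12:
            return z                     # exact at a sample point
        w = d ** -power
        num += w * z
        den += w
    return num / den
```

With a large `ratio`, samples lying across the major axis are effectively pushed far away, so the estimate is dominated by samples aligned with the preferred direction.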
Study on Cascading Failure Model of Edges in Coupled Networks
Institute of Scientific and Technical Information of China (English)
王建伟; 蒋晨; 孙恩慧
2014-01-01
In order to deal with cascading failures in coupled networks, this study analyzes the dynamics mechanism of cascading failures and proposes a cascading failure model for edges in coupled networks. To improve the robustness of coupled networks against cascading failures, the study analyzes, from multiple perspectives and according to different measures, the correlation between the robustness of coupled networks with different link patterns and the parameters of the model. It then discusses the influence of the link patterns between coupled networks and of the underlying network model on cascading failures, and states overall protection strategies under the proposed model. The study finds that the assortative link pattern can enhance the robustness of coupled networks against cascading failures; that the more similar the topological structures of two interdependent networks, the stronger the robustness against cascading failures; that the robustness of coupled networks is positively correlated with the average degree; and that an appropriate increase in the number of symmetrical edges between the two networks can improve network robustness. Finally, the cascading failure model of edges is analyzed for a real coupled power grid.
Improving image registration by correspondence interpolation
DEFF Research Database (Denmark)
Ólafsdóttir, Hildur; Pedersen, Henrik; Hansen, Michael Sass;
2011-01-01
) quantitatively by registering downsampled brain data using two different interpolators and subsequently applying the deformation fields to the original data. The results show that the interpolator provides better gradient images and a more sharp cardiac atlas. Moreover, it provides better deformation fields...
Revisiting Veerman’s interpolation method
DEFF Research Database (Denmark)
Christiansen, Peter; Bay, Niels Oluf
2016-01-01
for comparison. Bulge testing and tensile testing of aluminium sheets containingelectro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material.The forming limit is determined using (a) Veerman’s interpolation method, (b) exact Lagrangian interpolation...
Differential Interpolation Effects in Free Recall
Petrusic, William M.; Jamieson, Donald G.
1978-01-01
Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…
Interpolation for weak Orlicz spaces with condition
Institute of Scientific and Technical Information of China (English)
JIAO Yong; PENG LiHua; LIU PeiDe
2008-01-01
An interpolation theorem for weak Orlicz spaces generalized by N-functions satisfying the MΔ condition is given. It is proved to be true for weak Orlicz martingale spaces by weak atomic decomposition of weak Hardy martingale spaces. Applying the interpolation theorem, we obtain some embedding relationships among weak Orlicz martingale spaces.
Interpolation theorems on weighted Lorentz martingale spaces
Institute of Scientific and Technical Information of China (English)
2007-01-01
In this paper several interpolation theorems on martingale Lorentz spaces are given.The proofs are based on the atomic decompositions of martingale Hardy spaces over weighted measure spaces.Applying the interpolation theorems,we obtain some inequalities on martingale transform operator.
Transition elements based on transfinite interpolation
Odabas, Onur R.; Sarigul-Klijn, Nesrin
1993-01-01
In this study the transfinite interpolation methodology, a 'blending-function' method in particular, is utilized for the formulation of transition elements. The method offers a formal way of meeting continuity requirements in a transition element. Element shape functions are derived by blending the continuity requirements of individual boundary segments. The blending directions are naturally orthogonal in rectangular domains therefore interpolation of the boundaries over rectangular 2D and 3D elements can be performed with minimal effort. In triangular domains, however, the choice of blending directions and interpolants is not straightforward. For that reason, two interpolation techniques are proposed for blending of the boundaries of triangular domains. A series of transition elements of various classes compatible with elements of different orders and dimensions is developed and the full potential of the transfinite interpolation, as it applies to element formulation, is explored.
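For the rectangular case described above, blending-function transfinite interpolation can be written down directly as a bilinearly blended Coons patch: the two blending directions interpolate the four boundary curves and a bilinear corner term removes the double-counted corners. A minimal sketch (generic Coons construction, not the element shape functions of the paper):

```python
def coons_patch(bottom, top, left, right):
    """Transfinite interpolation of four boundary curves over the unit
    square (bilinearly blended Coons patch).  Each argument maps a
    parameter in [0, 1] to a point tuple; corner values must agree,
    e.g. bottom(0) == left(0)."""
    def surface(u, v):
        return tuple(
            (1 - v) * bottom(u)[i] + v * top(u)[i]        # blend in v
            + (1 - u) * left(v)[i] + u * right(v)[i]      # blend in u
            - ((1 - u) * (1 - v) * bottom(0)[i]           # subtract the
               + u * (1 - v) * bottom(1)[i]               # bilinear
               + (1 - u) * v * top(0)[i]                  # corner term
               + u * v * top(1)[i])
            for i in range(len(bottom(0)))
        )
    return surface
```

By construction the patch reproduces each boundary curve exactly, which is the continuity property transition elements need; the triangular case discussed in the abstract requires a less obvious choice of blending directions.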
Cascaded-cladding-pumped cascaded Raman fiber amplifier.
Jiang, Huawei; Zhang, Lei; Feng, Yan
2015-06-01
The conversion efficiency of double-clad Raman fiber laser is limited by the cladding-to-core area ratio. To get high conversion efficiency, the inner-cladding-to-core area ratio has to be less than about 8, which limits the brightness enhancement. To overcome the problem, a cascaded-cladding-pumped cascaded Raman fiber laser with multiple-clad fiber as the Raman gain medium is proposed. A theoretical model of Raman fiber amplifier with multiple-clad fiber is developed, and numerical simulation proves that the proposed scheme can improve the conversion efficiency and brightness enhancement of cladding pumped Raman fiber laser.
Zhao, Qi; Yi, Ming; Liu, Yan
2011-10-01
The mitogen-activated protein kinase (MAPK) cascade plays a critical role in the control of cell growth. Deregulation of this pathway contributes to the development of many cancers. To better understand its signal transduction, we constructed a reaction-diffusion model for the MAPK pathway. We modeled the three layers of phosphorylation-dephosphorylation reactions and diffusion processes from the cell membrane to the nucleus. Based on different types of feedback in the MAPK cascade, four operation modes are introduced. For each of the four modes, spatial distributions and dose-response curves of active kinases (i.e. ppMAPK) are explored by numerical simulation. The effects of propagation length, diffusion coefficient and feedback strength on the pathway dynamics are investigated. We found that intrinsic bistability in the MAPK cascade can generate a traveling wave of ppMAPK with constant amplitude when the propagation length is short. ppMAPK in this mode of intrinsic bistability decays more slowly than it does in all other modes as the propagation length increases. Moreover, we examined the global and local responses to Ras-GTP of these four modes, and demonstrated how the shapes of these dose-response curves change as the propagation length increases. Also, we found that larger diffusion constant gives a higher response level on the zero-order regime and makes the ppMAPK profiles flatter under strong Ras-GTP stimulus. Furthermore, we observed that spatial responses of ppMAPK are more sensitive to negative feedback than to positive feedback in the broader signal range. Finally, we showed how oscillatory signals pass through the kinase cascade, and found that high frequency signals are damped faster than low frequency ones.
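The decay of ppMAPK with distance from the membrane in the non-bistable regimes can be illustrated by the textbook steady state of a diffusing species with linear degradation; all numbers here are illustrative, and the bistable regime described in the abstract instead supports constant-amplitude traveling waves, which this linear sketch does not capture:

```python
import math

def ppmapk_profile(c0, diff, k, x):
    """Steady-state level of an active kinase produced at the membrane
    (concentration c0 at x = 0) that diffuses with coefficient `diff`
    while being dephosphorylated at linear rate k:
        C(x) = c0 * exp(-x / lam),  lam = sqrt(diff / k).
    """
    lam = math.sqrt(diff / k)
    return c0 * math.exp(-x / lam)
```

With, say, diff = 10 (length^2/time) and k = 0.1 (1/time) the decay length is 10 length units, so the signal halves every lam * ln 2 from the membrane; increasing the propagation length beyond a few decay lengths leaves almost no active kinase at the nucleus, consistent with the attenuation the model reports.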
Kriging interpolating cosmic velocity field
Yu, Yu; Zhang, Jun; Jing, Yipeng; Zhang, Pengjie
2015-10-01
Volume-weighted statistics of large-scale peculiar velocity is preferred by peculiar velocity cosmology, since it is free of the uncertainties of galaxy density bias entangled in observed number density-weighted statistics. However, measuring the volume-weighted velocity statistics from galaxy (halo/simulation particle) velocity data is challenging. Therefore, the exploration of velocity assignment methods with well-controlled sampling artifacts is of great importance. For the first time, we apply the Kriging interpolation to obtain the volume-weighted velocity field. Kriging is a minimum variance estimator. It predicts the most likely velocity for each place based on the velocity at other places. We test the performance of Kriging quantified by the E-mode velocity power spectrum from simulations. Dependences on the variogram prior used in Kriging, the number n_k of nearby particles to interpolate, and the density n_P of the observed sample are investigated. First, we find that Kriging induces 1% and 3% systematics at k ~ 0.1 h Mpc^-1 when n_P ~ 6 × 10^-2 (h^-1 Mpc)^-3 and n_P ~ 6 × 10^-3 (h^-1 Mpc)^-3, respectively. The deviation increases for decreasing n_P and increasing k. When n_P ≲ 6 × 10^-4 (h^-1 Mpc)^-3, a smoothing effect dominates small scales, causing significant underestimation of the velocity power spectrum. Second, increasing n_k helps to recover small-scale power. However, for the n_P ≲ 6 × 10^-4 (h^-1 Mpc)^-3 cases, the recovery is limited. Finally, Kriging is more sensitive to the variogram prior for a lower sample density. The most straightforward application of Kriging on the cosmic velocity field does not show obvious advantages over the nearest-particle method [Y. Zheng, P. Zhang, Y. Jing, W. Lin, and J. Pan, Phys. Rev. D 88, 103510 (2013)] and could not be directly applied to cosmology so far. However, whether potential improvements may be achieved by more delicate versions of Kriging is worth further investigation.
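The Kriging estimator referred to above solves a small linear system per prediction point. A minimal 1D ordinary-kriging sketch with a linear variogram (an illustrative prior, not the simulation-calibrated variograms the paper investigates):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_kriging(xs, zs, x0, variogram=lambda h: h):
    """1D ordinary kriging: the weights solve the variogram system with
    a Lagrange multiplier enforcing sum(w) = 1 (unbiasedness)."""
    n = len(xs)
    A = [[variogram(abs(xs[i] - xs[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])   # unbiasedness constraint row
    b = [variogram(abs(x0 - xi)) for xi in xs] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, zs))
```

Kriging is an exact interpolator (it reproduces the data at the sample points), and with a linear variogram the midpoint estimate between two samples reduces to their average, which makes the minimum-variance machinery easy to sanity-check.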
Contingency Analysis of Cascading Line Outage Events
Energy Technology Data Exchange (ETDEWEB)
Thomas L Baldwin; Magdy S Tawfik; Miles McQueen
2011-03-01
As the US power systems continue to increase in size and complexity, including the growth of smart grids, larger blackouts due to cascading outages become more likely. Grid congestion is often associated with a cascading collapse leading to a major blackout. Such a collapse is characterized by a self-sustaining sequence of line outages followed by a topology breakup of the network. This paper addresses the implementation and testing of a process for N-k contingency analysis and sequential cascading outage simulation in order to identify potential cascading modes. A modeling approach described in this paper offers a unique capability to identify initiating events that may lead to cascading outages. It predicts the development of cascading events by identifying and visualizing potential cascading tiers. The proposed approach was implemented using a 328-bus simplified SERC power system network. The results of the study indicate that initiating events and possible cascading chains may be identified, ranked and visualized. This approach may be used to improve the reliability of a transmission grid and reduce its vulnerability to cascading outages.
Sathasivam, S; Grierson, A J; Shaw, P J
2005-10-01
There is increasing evidence that apoptosis or a similar programmed cell death pathway is the mechanism of cell death responsible for motor neurone degeneration in amyotrophic lateral sclerosis. Knowledge of the relative importance of different caspases in the cell death process is at present incomplete. In addition, there is little information on the critical point of the death pathway when the process of dying becomes irreversible. In this study, using the well-established NSC34 motor neurone-like cell line stably transfected with empty vector, normal or mutant human Cu-Zn superoxide dismutase (SOD1), we have characterized the activation of the caspase cascade in detail, revealing that the activation of caspases-9, -3 and -8 are important in motor neurone death and that the presence of mutant SOD1 causes increased activation of components of the apoptotic cascade under both basal culture conditions and following oxidative stress induced by serum withdrawal. Activation of the caspases identified in the cellular model has been confirmed in the G93A SOD1 transgenic mice. Furthermore, investigation of the effects of anti-apoptotic neuroprotective agents including specific caspase inhibitors, minocycline and nifedipine, have supported the importance of the mitochondrion-dependent apoptotic pathway in the death process and revealed that the upstream caspase cascade needs to be inhibited if useful neuro-protection is to be achieved.
MULTI-EPIPOLAR LINES MATCHING-BASED RAY-SPACE INTERPOLATION FOR FREE VIEWPOINT VIDEO SYSTEM
Institute of Scientific and Technical Information of China (English)
Fan Liangzhong; Jiang Gangyi; Yu Mei; Yong-deak Kim
2008-01-01
Ray-space based arbitrary viewpoint rendering without complex object segmentation or model construction is the main technology for realizing a Free Viewpoint Video (FVV) system for complex scenes. Ray-space interpolation and compression are two key techniques for this solution. In this paper, the correlation among multiple epipolar lines in ray-space data is analyzed, and a new method of ray-space interpolation based on multi-epipolar-line matching is proposed. Compared with the pixel-based matching interpolation method and the block-based matching interpolation method, the proposed method achieves higher Peak Signal to Noise Ratio (PSNR) in interpolating ray-space data and rendering arbitrary viewpoint images.
Knowledge base interpolation of path-dependent data using irregularly spaced natural neighbors
Energy Technology Data Exchange (ETDEWEB)
Hipp, J.; Keyser, R.; Young, C.; Shepard-Dombroski, E.; Chael, E.
1996-08-01
This paper summarizes the requirements for the interpolation scheme needed for the CTBT Knowledge Base and discusses interpolation issues relative to the requirements. Based on these requirements, a methodology for providing an accurate and robust interpolation scheme for the CTBT Knowledge Base is proposed. The method utilizes a Delaunay triangle tessellation to mesh the Earth's surface and employs the natural-neighbor interpolation technique to provide accurate evaluation of geophysical data that is important for CTBT verification. The natural-neighbor interpolation method is a local weighted average technique capable of modeling sparse irregular data sets as is commonly found in the geophysical sciences. This is particularly true of the data to be contained in the CTBT Knowledge Base. Furthermore, natural neighbor interpolation is first order continuous everywhere except at the data points. The non-linear form of the natural-neighbor interpolation method can provide continuous first and second order derivatives throughout the entire data domain. Since one of the primary support functions of the Knowledge Base is to provide event location capabilities, and the seismic event location algorithms typically require first and second order continuity, this is a prime requirement of any interpolation methodology chosen for use by the CTBT Knowledge Base.
Alexakis, A.
2009-04-01
Most astrophysical and planetary systems, e.g. solar convection and stellar winds, are in a turbulent state and coupled to magnetic fields. Understanding and quantifying the statistical properties of magneto-hydrodynamic (MHD) turbulence is crucial to explain the physical processes involved. Although the phenomenological theory of hydrodynamic (HD) turbulence has been verified up to small corrections, a similar statement cannot be made for MHD turbulence. Since the phenomenological description of hydrodynamic turbulence by Kolmogorov in 1941 there have been many attempts to derive a similar description for turbulence in conducting fluids (i.e. magneto-hydrodynamic turbulence). However, such a description is inevitably based on strong assumptions (typically borrowed from hydrodynamics) that do not necessarily apply to the MHD case. In this talk I will discuss some of the properties and differences of the energy and helicity cascades in turbulent MHD and HD flows. The investigation is based on the analysis of direct numerical simulations. The cascades in MHD turbulence appear to be a more non-local process (in scale space) than in hydrodynamics. Some implications of these results for turbulence modeling will be discussed.
Analysis of extrapolation cascadic multigrid method(EXCMG)
Institute of Scientific and Technical Information of China (English)
2008-01-01
Based on an asymptotic expansion of the finite element method, a new extrapolation formula and an extrapolation cascadic multigrid method (EXCMG) are proposed, in which the new extrapolation and quadratic interpolation are used to provide a better initial value on the refined grid. In the case of triple grids, the error of the new initial value is analyzed in detail. A larger-scale computation is completed on a PC.
A Note on General Frames for Bivariate Interpolation
Institute of Scientific and Technical Information of China (English)
TANG Shuo; ZOU Le
2009-01-01
Newton interpolation and Thiele-type continued fraction interpolation may be the most favoured linear and nonlinear interpolation methods, respectively, but these two interpolations cannot solve all interpolation problems. In this paper, several general frames are established by introducing multiple parameters; they are extensions and improvements of the general frames studied by Tan and Fang. Numerical examples are given to show the effectiveness of the results in this paper.
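As a reference point for the linear/nonlinear distinction above, here is a minimal sketch of classical Thiele-type continued fraction interpolation via inverse differences (not the paper's generalized frames; the nodes and data are illustrative):

```python
import numpy as np

def thiele_coeffs(x, y):
    """Inverse-difference table: a[k] = phi[x0, ..., xk], the coefficients
    of Thiele's interpolating continued fraction."""
    n = len(x)
    p = np.zeros((n, n))
    p[:, 0] = y
    for k in range(1, n):
        for i in range(k, n):
            p[i, k] = (x[i] - x[k - 1]) / (p[i, k - 1] - p[k - 1, k - 1])
    return np.diag(p)

def thiele_eval(x, a, t):
    """Evaluate a[0] + (t-x0)/(a[1] + (t-x1)/(...)) from the inside out."""
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (t - x[k]) / val
    return val

# A (1,1) rational function is reproduced exactly from three nodes,
# something no degree-2 polynomial (Newton) interpolant can do.
xs = [0.0, 1.0, 2.0]
a = thiele_coeffs(xs, [1.0, 0.5, 1.0 / 3.0])   # samples of f(x) = 1/(1+x)
print(thiele_eval(xs, a, 0.5))  # 0.666... = f(0.5)
```

The breakdown cases (vanishing inverse differences) are exactly the "interpolant problems" that such continued fractions cannot solve without modification.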
Accuracy of stream habitat interpolations across spatial scales
Sheehan, Kenneth R.; Welsh, Stuart
2013-01-01
Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting the insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence of data scaling on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 m cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a log-linear decay pattern in RMSE values for interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased but was maintained at up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
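One of the methods compared, inverse distance weighting, is simple enough to sketch directly (the corner "depth" samples below are made up for illustration, not the study's stream data):

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0):
    """Inverse distance weighting: a weighted mean with weights 1/d**power.
    A query that coincides with a sample returns that sample's value."""
    d = np.linalg.norm(sample_xy[:, None, :] - query_xy[None, :, :], axis=2)
    exact = d < 1e-12
    w = 1.0 / np.where(exact, 1.0, d) ** power   # dummy distance where exact
    est = (w * sample_vals[:, None]).sum(axis=0) / w.sum(axis=0)
    hit = exact.any(axis=0)
    est[hit] = sample_vals[exact.argmax(axis=0)[hit]]
    return est

# Four depth samples at the corners of a 1 m square.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depth = np.array([0.0, 1.0, 2.0, 3.0])
est = idw(xy, depth, np.array([[0.5, 0.5], [0.0, 0.0]]))
print(est)  # [1.5, 0.0]: equal weights at the centre; exact hit at a sample
```

Natural neighbor and kriging weight samples by geometry and spatial covariance respectively, which is why they can outperform this purely distance-based weighting.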
Conservative interpolation between general spherical meshes
Directory of Open Access Journals (Sweden)
E. Kritsikis
2015-06-01
Full Text Available An efficient, local, explicit, second-order, conservative interpolation algorithm between spherical meshes is presented. The cells composing the source and target meshes may be either spherical polygons or longitude–latitude quadrilaterals. Second-order accuracy is obtained by piecewise-linear finite volume reconstruction over the source mesh. Global conservation is achieved through the introduction of a supermesh, whose cells are all possible intersections of source and target cells. Areas and intersections are computed exactly to yield a geometrically exact method. The main efficiency bottleneck caused by the construction of the supermesh is overcome by adopting tree-based data structures and algorithms, from which the mesh connectivity can also be deduced efficiently. The theoretical second-order accuracy is verified using a smooth test function and pairs of meshes commonly used for atmospheric modelling. Experiments confirm that the most expensive operations, especially the supermesh construction, have O(N log N) computational cost. The method presented is meant to be incorporated in pre- or post-processing atmospheric modelling pipelines, or directly into models for flexible input/output. It could also serve as a basis for conservative coupling between model components, e.g. atmosphere and ocean.
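The supermesh idea is easiest to see in one dimension, where the "supermesh" cells are simply the overlaps of source and target intervals; the sketch below (synthetic meshes, first-order reconstruction only) demonstrates the conservation property:

```python
import numpy as np

def conservative_remap_1d(src_edges, src_vals, tgt_edges):
    """First-order conservative remap between 1D meshes: the supermesh is
    the set of all overlaps between source and target cells, and each
    target cell averages source values weighted by overlap length."""
    out = np.zeros(len(tgt_edges) - 1)
    for j in range(len(tgt_edges) - 1):
        t0, t1 = tgt_edges[j], tgt_edges[j + 1]
        acc = 0.0
        for i in range(len(src_edges) - 1):
            s0, s1 = src_edges[i], src_edges[i + 1]
            overlap = max(0.0, min(t1, s1) - max(t0, s0))
            acc += overlap * src_vals[i]
        out[j] = acc / (t1 - t0)
    return out

src_edges = np.linspace(0.0, 1.0, 6)       # 5 source cells
src_vals = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
tgt_edges = np.linspace(0.0, 1.0, 4)       # 3 target cells

remapped = conservative_remap_1d(src_edges, src_vals, tgt_edges)
# Global conservation: the integral over the domain is unchanged (both 3.0).
print(np.dot(remapped, np.diff(tgt_edges)), np.dot(src_vals, np.diff(src_edges)))
```

On the sphere the same weighting uses exact spherical-polygon intersection areas, and a piecewise-linear reconstruction replaces the cell constants to reach second order.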
Conservative interpolation between general spherical meshes
Kritsikis, Evaggelos; Aechtner, Matthias; Meurdesoif, Yann; Dubos, Thomas
2017-01-01
An efficient, local, explicit, second-order, conservative interpolation algorithm between spherical meshes is presented. The cells composing the source and target meshes may be either spherical polygons or latitude-longitude quadrilaterals. Second-order accuracy is obtained by piecewise-linear finite-volume reconstruction over the source mesh. Global conservation is achieved through the introduction of a supermesh, whose cells are all possible intersections of source and target cells. Areas and intersections are computed exactly to yield a geometrically exact method. The main efficiency bottleneck caused by the construction of the supermesh is overcome by adopting tree-based data structures and algorithms, from which the mesh connectivity can also be deduced efficiently. The theoretical second-order accuracy is verified using a smooth test function and pairs of meshes commonly used for atmospheric modelling. Experiments confirm that the most expensive operations, especially the supermesh construction, have O(N log N) computational cost. The method presented is meant to be incorporated in pre- or post-processing atmospheric modelling pipelines, or directly into models for flexible input/output. It could also serve as a basis for conservative coupling between model components, e.g., atmosphere and ocean.
ENO reconstruction and ENO interpolation are stable
Fjordholm, Ulrik S; Tadmor, Eitan
2011-01-01
We prove stability estimates for the ENO reconstruction and ENO interpolation procedures. In particular, we show that the jump of the reconstructed ENO point values at each cell interface has the same sign as the jump of the underlying cell averages across that interface. We also prove that the jump of the reconstructed values can be upper-bounded in terms of the jump of the underlying cell averages. Similar sign properties hold for the ENO interpolation procedure. These estimates, which are shown to hold for ENO reconstruction and interpolation of arbitrary order of accuracy and on non-uniform meshes, indicate a remarkable rigidity of the piecewise-polynomial ENO procedure.
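The sign property can be checked directly for a second-order ENO reconstruction; the sketch below uses the standard smaller-magnitude-slope stencil choice on synthetic step data (illustrative, not the paper's arbitrary-order proof):

```python
import numpy as np

def eno2_limits(v):
    """Second-order ENO: each cell takes the one-sided slope of smaller
    magnitude; returns the two one-sided limits at each interface i+1/2."""
    s = np.empty(len(v))
    s[0], s[-1] = v[1] - v[0], v[-1] - v[-2]
    for i in range(1, len(v) - 1):
        dl, dr = v[i] - v[i - 1], v[i + 1] - v[i]
        s[i] = dl if abs(dl) < abs(dr) else dr
    i = np.arange(len(v) - 1)
    left = v[i] + 0.5 * s[i]            # limit from cell i
    right = v[i + 1] - 0.5 * s[i + 1]   # limit from cell i+1
    return left, right

# Cell averages with a jump: the reconstructed jump at every interface has
# the same sign as the jump of the underlying averages (the sign property),
# and no interface value overshoots the data range.
v = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
left, right = eno2_limits(v)
print(np.all(np.sign(right - left) * np.sign(np.diff(v)) >= 0))  # True
```

Here the ENO stencil selection keeps every slope at zero near the discontinuity, so the reconstruction stays within [0, 1], unlike a central-slope reconstruction.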
Wang, Wen-Xu; Lai, Ying-Cheng; Armbruster, Dieter
2011-09-01
We study catastrophic behaviors in large networked systems in the paradigm of evolutionary games by incorporating a realistic "death" or "bankruptcy" mechanism. We find that a cascading bankruptcy process can arise when defection strategies exist and individuals are vulnerable to deficit. Strikingly, we observe that, after the catastrophic cascading process terminates, cooperators are the sole survivors, regardless of the game types and of the connection patterns among individuals as determined by the topology of the underlying network. It is necessary that individuals cooperate with each other to survive the catastrophic failures. Cooperation thus becomes the optimal strategy and absolutely outperforms defection in the game evolution with respect to the "death" mechanism. Our results can be useful for understanding large-scale catastrophe in real-world systems; in particular, they may yield insights into significant social and economic phenomena such as large-scale failures of financial institutions and corporations during an economic recession.
The Nonlocal p-Laplacian Evolution for Image Interpolation
Directory of Open Access Journals (Sweden)
Yi Zhan
2011-01-01
Full Text Available This paper presents an image interpolation model with nonlocal p-Laplacian regularization. The nonlocal p-Laplacian regularization overcomes the drawback of the partial differential equation (PDE) proposed by Belahmidi and Guichard (2004), namely that image density diffuses in the directions pointed by the local gradient. Under the control of the proposed model, the grey values of images diffuse along the image feature direction rather than the gradient direction, that is, with minimal smoothing in the directions across the image features and maximal smoothing in the directions along the image features. The total regularizer combines the advantages of nonlocal p-Laplacian regularization and total variation (TV) regularization (preserving discontinuities and 1D image structures). The derived model efficiently reconstructs the real image, leading to a natural interpolation with reduced blurring and staircase artifacts. We present experimental results that prove the potential and efficacy of the method.
Noda, H.; Nakatani, M.; Hori, T.
2012-12-01
Seismological observations [e.g., Abercrombie and Rice, 2005] suggest that a larger earthquake has larger fracture energy Gc. One way to realize such scaling is to assume a hierarchical patchy distribution of Gc on a fault: there are patches of different sizes with different Gc, such that a larger patch has larger Gc. Ide and Aochi [2005] conducted dynamic rupture simulations with such a distribution of the weakening distance Dc in a linear slip-weakening law, initiating ruptures on the smallest patch, which sometimes grow by cascading up to a larger scale. They suggested that the initial phase of a large earthquake is indistinguishable from that of a small earthquake. In the present study we simulate a similar multi-scale asperity model but following rate-and-state friction (RSF), where the stress and strength distribution resulting from the history of coseismic and aseismic slip influences the way of rupture initiation, growth, and arrest of a forthcoming earthquake. Multi-scale asperities were represented by a distribution of the state evolution distance dc in the aging version of the RSF evolution law. The numerical scheme adopted [Noda and Lapusta, 2010] is fully dynamic and 3D. We have modeled a circular rate-weakening patch, Patch L (radius R), which has a smaller patch, Patch S (radius r), in it by the rim. The ratio of the radii α = R/r is the amount of the gap between the two scales. Patch L and Patch S respectively have nucleation sizes Rc and rc. The same brittleness β = R/Rc = r/rc is assumed for simplicity. We shall call an earthquake which ruptures only Patch S an S-event, and one which ruptures Patch L an L-event. We have conducted a series of simulations with α from 2 to 5 while keeping β = 3 until the end of the 20th L-event. If Patch S was relatively large (α = 2 and 2.5), only L-events occurred, and they always dynamically cascaded up from a Patch S rupture following small quasi-static nucleation there. If Patch S was small enough (α = 5), in
Extended Lagrange interpolation in L1 spaces
Occorsio, Donatella; Russo, Maria Grazia
2016-10-01
Let w(x) = x^α e^{-x^β}, w̄(x) = x w(x), and denote by {p_m(w)}_m, {p_n(w̄)}_n the corresponding sequences of orthonormal polynomials. The zeros of the polynomial Q_{2m+1} = p_{m+1}(w) p_m(w̄) are simple and sufficiently far apart. Therefore it is possible to construct an interpolation process essentially based on the zeros of Q_{2m+1}, which is called "Extended Lagrange Interpolation". Here we study the convergence of this interpolation process in suitable weighted L^1 spaces. This study completes the results given by the authors in previous papers in weighted L^p_u((0,+∞)), for 1 ≤ p ≤ ∞. Moreover, an application of the proposed interpolation process to the construction of an efficient product quadrature scheme for weakly singular integrals is given.
Evaluation of various interpolants available in DICE.
Energy Technology Data Exchange (ETDEWEB)
Turner, Daniel Z.; Reu, Phillip L.; Crozier, Paul
2015-02-01
This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
Loop Subdivision Surface Based Progressive Interpolation
Institute of Scientific and Technical Information of China (English)
Fu-Hua (Frank) Cheng; Feng-Tao Fan; Shu-Hua Lai; Cong-Lin Huang; Jia-Xi Wang; Jun-Hai Yong
2009-01-01
A new method for constructing interpolating Loop subdivision surfaces is presented. The new method is an extension of the progressive interpolation technique for B-splines. Given a triangular mesh M, the idea is to iteratively update the vertices of M to generate a new control mesh M̄ such that the limit surface of M̄ interpolates M. It can be shown that the iterative process is convergent for Loop subdivision surfaces; hence, the method is well defined. The new method has the advantages of both a local method and a global method, i.e., it can handle meshes of any size and any topology while generating smooth interpolating subdivision surfaces that faithfully resemble the shape of the given meshes. The meshes considered here can be open or closed.
LINEAR SYSTEMS AND LINEAR INTERPOLATION I
Institute of Scientific and Technical Information of China (English)
丁立峰
2001-01-01
The linear interpolation of linear systems on a family of linear systems is introduced and discussed. Some results and examples on singly generated systems on a finite-dimensional vector space are given.
NOAA Optimum Interpolation (OI) SST V2
National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...
Generation of multivariate Hermite interpolating polynomials
Tavares, Santiago Alves
2005-01-01
Generation of Multivariate Hermite Interpolating Polynomials advances the study of approximate solutions to partial differential equations by presenting a novel approach that employs Hermite interpolating polynomials and by supplying algorithms useful in applying this approach. Organized into three sections, the book begins with a thorough examination of constrained numbers, which form the basis for constructing interpolating polynomials. The author develops their geometric representation in coordinate systems in several dimensions and presents generating algorithms for each level number. He then discusses their applications in computing the derivative of the product of functions of several variables and in the construction of expressions for n-dimensional natural numbers. Section II focuses on the construction of Hermite interpolating polynomials, from their characterizing properties and generating algorithms to a graphical analysis of their behavior. The final section of the book is dedicated to the applicatio...
Album of the month: Interpol "Antics". Records from the Lasering shop
2005-01-01
On the records: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"
Calculation of electromagnetic parameter based on interpolation algorithm
Energy Technology Data Exchange (ETDEWEB)
Zhang, Wenqiang, E-mail: zwqcau@gmail.com [College of Engineering, China Agricultural University, Beijing 100083 (China); Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China); Yuan, Liming; Zhang, Deyuan [Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China)
2015-11-01
Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixed in a paraffin base, this paper studies two different interpolation methods for electromagnetic parameters: Lagrange interpolation and Hermite interpolation. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with RL from experiment.
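The qualitative finding, that derivative-using Hermite interpolation can beat a single global Lagrange polynomial, is easy to reproduce with SciPy on a synthetic curve (Runge's function stands in for the measured parameters; this is not the paper's carbonyl-iron data):

```python
import numpy as np
from scipy.interpolate import lagrange, CubicHermiteSpline

# Runge's function as a stand-in for a smooth measured parameter curve.
f = lambda x: 1.0 / (1.0 + x ** 2)
df = lambda x: -2.0 * x / (1.0 + x ** 2) ** 2

x = np.linspace(-5.0, 5.0, 11)
xx = np.linspace(-5.0, 5.0, 401)

p_lag = lagrange(x, f(x))                   # one global polynomial
p_her = CubicHermiteSpline(x, f(x), df(x))  # piecewise, uses derivative data

err_lag = np.max(np.abs(p_lag(xx) - f(xx)))
err_her = np.max(np.abs(p_her(xx) - f(xx)))
print(err_her < err_lag)  # True: Hermite avoids the Runge oscillations
```

A global Lagrange polynomial on equispaced nodes oscillates badly near the interval ends, while the piecewise Hermite interpolant, which also matches slopes, stays close to the target everywhere.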
Wang, Yonggui; Zhang, Wanshun; Zhao, Yanxin; Peng, Hong; Shi, Yingyuan
2016-10-01
The effects of inter-basin water diversion projects and cascade reservoirs are typically complex and challenging to assess, owing to the uncertain temporal-spatial variation of both water quality and quantity. The purpose of this paper is to propose a 1D hydrodynamic model coupled with a water-quality model to analyze the effects of current and future inter-basin water diversion projects, i.e., the South-to-North Water Diversion Project (SNWD) and the Yangtze-Hanjiang Water Diversion Project (YHWD), and cascade reservoirs (CRS) on water quantity and quality in the middle-lower Hanjiang River. Considering water use and pollution contributions, the middle-lower Hanjiang River basin is generalized and divided into 18 land-use units with tributaries, reservoirs and water exchanges. Each unit is modeled with the processes of lateral inflow, point and non-point pollution loads, irrigation return flow, and stream-aquifer exchanges. The long-term time series of water quality and quantity from 1956 to 1998 with four engineering scenarios is collected. Validation shows that the relative errors between simulated and observed values at certain control sections are within 5% for water levels and 20% for water quality. After the SNWD, the water level will be decreased by 0.38-0.65 m (a decrease of 0.44-2.68%), the annual runoff will be significantly decreased by over 4 billion m3, and the water quality will be changed. As a compensation project, the YHWD partly offsets the negative effects of the SNWD on flow rate, but at the same time it raises the water level and reduces the flow velocity. This, together with the effect of the cascade reservoirs, leads to pollutant concentrations increasing and water quality deteriorating to Grade IV of the Chinese Surface Water Quality Criteria. The water resource reduction and water quality problems in the middle-lower Hanjiang River require attention after these projects.
NEVILLE-TYPE VECTOR VALUED RATIONAL INTERPOLANTS
Institute of Scientific and Technical Information of China (English)
陈之兵; 顾传青; 徐晨
2004-01-01
A new kind of vector-valued rational interpolant is established by means of the Samelson inverse, with scalar numerator and vector-valued denominator. It is essentially different from that of Graves-Morris (1983), where the interpolants are constructed by Thiele-type continued fractions with vector-valued numerator and scalar denominator. The new approach is more suitable for calculating the value of a vector-valued function at a given point. An error formula is also given and proved.
Duality, Tangential Interpolation, and Toeplitz Corona Problems
Raghupathi, Mrinal
2009-01-01
In this paper we extend a method of Arveson and McCullough to prove a tangential interpolation theorem for subalgebras of $H^\infty$. This tangential interpolation result implies a Toeplitz corona theorem. In particular, it is shown that the set of matrix positivity conditions is indexed by cyclic subspaces, which is analogous to the results obtained for the ball and the polydisk algebra by Trent-Wick and Douglas-Sarkar.
On the interpolation of univariate distributions
Dembinski, Hans P
2011-01-01
This note discusses an interpolation technique for univariate distributions. In other words, the question is how to obtain a good approximation for f(x|a) if a0 < a < a1 is a control variable and f(x|a0) and f(x|a1) are known. The technique presented here is based on the interpolation of the quantile function, i.e. the inverse of the cumulative distribution function.
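A minimal sketch of the technique, blending the empirical quantile functions of the two known distributions (the Gaussian endpoints and sample sizes are illustrative, not from the note):

```python
import numpy as np

def interp_quantiles(samples0, samples1, t, qs=np.linspace(0.01, 0.99, 99)):
    """Blend the two quantile functions: Q_t = (1 - t) Q_0 + t Q_1."""
    return (1.0 - t) * np.quantile(samples0, qs) + t * np.quantile(samples1, qs)

rng = np.random.default_rng(1)
a0 = rng.normal(0.0, 1.0, 100_000)   # f(x | a0): N(0, 1)
a1 = rng.normal(4.0, 2.0, 100_000)   # f(x | a1): N(4, 2)

# Halfway between the control values; for Gaussians the quantile blend is
# again Gaussian, here N(2, 1.5).
mid = interp_quantiles(a0, a1, 0.5)
print(np.median(mid))  # close to 2.0
```

Interpolating quantile functions shifts and reshapes the distribution smoothly, whereas naively blending the densities f(x|a0) and f(x|a1) would instead produce a bimodal mixture.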
Polynomial Interpolation in the Elliptic Curve Cryptosystem
Directory of Open Access Journals (Sweden)
Liew K. Jie
2011-01-01
Full Text Available Problem statement: In this research, we incorporate the polynomial interpolation method into a discrete-logarithm-based cryptosystem, the elliptic curve cryptosystem. Approach: The polynomial interpolation method focused on in this study is Lagrange polynomial interpolation, the simplest polynomial interpolation method. This method is incorporated into the encryption algorithm of the elliptic curve ElGamal cryptosystem. Results: The scheme modifies the elliptic curve ElGamal cryptosystem by adding a few steps to the encryption algorithm. Two polynomials are constructed from the encrypted points using Lagrange polynomial interpolation and encrypted for a second time using the proposed encryption method. We believe it is safe from the theoretical side as it still relies on the discrete logarithm problem of the elliptic curve. Conclusion/Recommendations: The modified scheme is expected to be more secure than the existing scheme as it offers double encryption. On top of the existing encryption algorithm, we encrypt one more time using the polynomial interpolation method. We also provide detailed examples based on the described algorithm.
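In cryptographic settings Lagrange interpolation is carried out over a finite field rather than the reals; a small illustrative sketch over GF(97) (not the paper's elliptic-curve ElGamal scheme) follows:

```python
def lagrange_mod_p(points, x, p):
    """Evaluate the Lagrange interpolating polynomial at x over GF(p).
    points: list of (xi, yi) with distinct xi mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p   # modular inverse
    return total

# The quadratic f(t) = 3t^2 + 2t + 5 over GF(97), recovered from 3 points:
p = 97
f = lambda t: (3 * t * t + 2 * t + 5) % p
pts = [(1, f(1)), (2, f(2)), (3, f(3))]
print(lagrange_mod_p(pts, 10, p))  # 34 = f(10) mod 97
```

The same formula underlies Shamir secret sharing; here it simply shows how the interpolation step the paper adds can be performed with field arithmetic.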
Analysis of radial basis function interpolation approach
Institute of Scientific and Technical Information of China (English)
Zou You-Long; Hu Fa-Long; Zhou Can-Can; Li Chao-Liu; Dunn Keh-Jim
2013-01-01
The radial basis function (RBF) interpolation approach proposed by Freedman is used to solve inverse problems encountered in well-logging and other petrophysical issues. The approach is to predict petrophysical properties in the laboratory on the basis of physical rock datasets, which include the formation factor, viscosity, permeability, and molecular composition. However, this approach does not consider the effect of the spatial distribution of the calibration data on the interpolation result. This study proposes a new RBF interpolation approach based on Freedman's RBF interpolation approach, in which the unit basis functions are uniformly populated in the space domain. The inverse results of the two approaches are comparatively analyzed using our datasets. We determine that although the interpolation effects of the two approaches are equivalent, the new approach is more flexible and beneficial for reducing the number of basis functions when the database is large, resulting in a simpler interpolation function expression. However, the predictions for central data are not sufficiently accurate when the data clusters are far apart.
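A generic RBF interpolation sketch with SciPy's `RBFInterpolator` (synthetic stand-in data, not Freedman's formulation or the petrophysical datasets):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(100, 2))     # calibration data locations
y = np.exp(-(X ** 2).sum(axis=1))             # smooth petrophysical stand-in

# Thin-plate-spline RBF interpolant (SciPy's default kernel); it matches
# the data exactly at the calibration points.
rbf = RBFInterpolator(X, y)

Xq = rng.uniform(-0.8, 0.8, size=(50, 2))
err = np.max(np.abs(rbf(Xq) - np.exp(-(Xq ** 2).sum(axis=1))))
print(err)  # small off-sample error for a smooth target
```

In the standard formulation one basis function is centred at every calibration point; the paper's variant instead spreads a fixed set of centres uniformly over the domain, which is what decouples the basis count from the database size.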
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
By using the method of eigenvectors, the atomic populations and emission spectrum are investigated in a system that consists of a cascade three-level atom resonantly interacting with a single-mode field in a Kerr-like medium. The atom and the field are assumed to be initially in the upper atomic state and the Fock state, respectively. Results for models with intensity-dependent coupling and with intensity-independent coupling are compared. It is found that both the population dynamics and the emission spectrum show no indication of atom-field decoupling in the strong-field limit if the intensity-dependent coupling is taken into account.
Institute of Scientific and Technical Information of China (English)
肖静
2014-01-01
In accurate three-dimensional modeling of coal seams, results are affected by underground faults, rock strata, undulating terrain and other factors, so modeling accuracy has been low. To improve the accuracy of three-dimensional coal-seam modeling, an algorithm based on pseudo-point elimination and four-domain spline interpolation is proposed. The modeling points are pre-processed to screen out and remove isolated (pseudo) points, and the remaining non-isolated points are used to establish the basic model and to rebuild the model by mapping. Four-domain spline interpolation is applied to the non-isolated points: the modeling domain is divided into a starting domain, an extension domain, a stable domain and a convergence domain, and in each domain the modeling points are given matched weighted interpolation. Combined filtering across the four domains realizes the four-domain spline interpolation and acts as a smoothing filter in the modeling, greatly improving the accuracy of the three-dimensional model. Simulation results show that the new algorithm is robust to faults, rock strata and undulating terrain, improves 3D modeling accuracy, and requires less computation.
Directory of Open Access Journals (Sweden)
Yingyi Chen
2016-01-01
Full Text Available Dissolved oxygen (DO) content is a significant aspect of water quality in aquaculture. Prediction of dissolved oxygen may timely avoid the financial loss caused by inappropriate dissolved oxygen content, and three-dimensional prediction can achieve more accurate and overall guidance. Therefore, this study presents a three-dimensional short-term prediction model of dissolved oxygen in crab aquaculture ponds based on a back propagation artificial neural network (BPANN) optimized by particle swarm optimization (PSO), coupled with the Kriging method. In this model, wavelet analysis is adopted for denoising, the BPANN optimized by PSO is utilized for data analysis and one-dimensional prediction, and the Kriging method is used for three-dimensional prediction. Compared with a traditional one-dimensional prediction model, the three-dimensional model more realistically reflects the dissolved oxygen content in the crab growth environment. In particular, the merits of PSO are evaluated against a genetic algorithm (GA). The root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) for the PSO model are 0.136445, 0.90534, and 0.15384, respectively, while for the GA model the values are 2.04184, 1.18316, and 0.21014, respectively. Furthermore, results of a cross-validation experiment show that the average error of this model is 0.0705 mg/L. Consequently, this study suggests that the prediction model operates in a satisfactory manner.
Synthetic turbulence, fractal interpolation, and large-eddy simulation.
Basu, Sukanta; Foufoula-Georgiou, Efi; Porté-Agel, Fernando
2004-08-01
Fractal interpolation has been proposed in the literature as an efficient way to construct closure models for the numerical solution of coarse-grained Navier-Stokes equations. It is based on synthetically generating a scale-invariant subgrid-scale field and analytically evaluating its effects on large resolved scales. In this paper, we propose an extension of previous work by developing a multiaffine fractal interpolation scheme and demonstrate that it preserves not only the fractal dimension but also the higher-order structure functions and the non-Gaussian probability density function of the velocity increments. Extensive a priori analyses of atmospheric boundary layer measurements further reveal that this multiaffine closure model has the potential for satisfactory performance in large-eddy simulations. The pertinence of this newly proposed methodology in the case of passive scalars is also discussed.
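The simplest flavour of fractal interpolation is midpoint displacement: each refinement level bisects every segment and perturbs the linear midpoint in proportion to the local vertical extent, producing a scale-invariant signal between the data points. A toy sketch (not the paper's multiaffine scheme; the anchor data and scaling factor are invented):

```python
import numpy as np

def fractal_refine(x, y, d=0.5, levels=8, rng=None):
    """Midpoint-displacement sketch of fractal interpolation: bisect every
    segment and offset the linear midpoint by a factor d of the segment's
    vertical extent."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(levels):
        xm = 0.5 * (x[:-1] + x[1:])
        ym = (0.5 * (y[:-1] + y[1:])
              + d * np.abs(np.diff(y)) * rng.uniform(-1.0, 1.0, len(xm)))
        x = np.insert(x, range(1, len(x)), xm)   # interleave the midpoints
        y = np.insert(y, range(1, len(y)), ym)
    return x, y

# Three coarse anchor points refined into a rough, scale-invariant curve.
x0 = np.array([0.0, 0.5, 1.0])
y0 = np.array([0.0, 1.0, 0.2])
x, y = fractal_refine(x0, y0)
print(len(x))  # 2 segments doubled 8 times: 513 points
```

The anchor points are preserved exactly, which is the interpolation property; a full fractal interpolation function would fix the vertical scaling factors to match a prescribed fractal dimension or structure-function exponents.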
Dynamic Stability Analysis Using High-Order Interpolation
Directory of Open Access Journals (Sweden)
Juarez-Toledo C.
2012-10-01
Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area, 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0
Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu
2013-05-01
We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue, PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are placed randomly on the circle of radius pT. Now they are placed on the circumference of an ellipse with semi-major and semi-minor axes of pT(1+δp) and pT(1−δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary. Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0. Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai. Program title: PACIAE version 2.1. Licensing provisions: none. Programming language: FORTRAN 77 or GFORTRAN. Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler. Operating system: Linux or Windows with a FORTRAN 77 or GFORTRAN compiler. RAM: ≈ 1 GB. Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model. Classification: 11.1, 17.8. Catalogue identifier of previous version: aeki_v1_0*. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes*. Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle of radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: The px and py components of the hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with semi-major and semi-minor axes of pT(1+δp) and pT(1−δp), respectively.
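The change from circle to ellipse sampling amounts to the following. PACIAE itself is FORTRAN; this is a hedged Python sketch of the two sampling rules, with δp as a free parameter and the function names chosen here for illustration.

```python
import math
import random

def sample_pxpy_circle(pT):
    """PYTHIA-style sampling: (px, py) uniform in angle on a circle of radius pT.

    The azimuthal angle is fully random, so any dynamically developed
    transverse-momentum anisotropy is washed out.
    """
    phi = random.uniform(0.0, 2.0 * math.pi)
    return pT * math.cos(phi), pT * math.sin(phi)

def sample_pxpy_ellipse(pT, delta_p):
    """PACIAE 2.1 modification: (px, py) on an ellipse with semi-major and
    semi-minor axes pT*(1+delta_p) and pT*(1-delta_p).

    delta_p > 0 biases px over py, preserving a tunable final-state
    transverse momentum anisotropy.
    """
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (pT * (1.0 + delta_p) * math.cos(phi),
            pT * (1.0 - delta_p) * math.sin(phi))
```

Setting delta_p = 0 recovers the original circle, so the modification is a strict generalization of the PYTHIA behaviour.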
Institute of Scientific and Technical Information of China (English)
CHEN Yunlong; SHAN Xiujuan; JIN Xianshi; YANG Tao; DAI Fangqun; YANG Dingtian
2016-01-01
Spatial interpolation is a common tool used in the study of fishery ecology, especially for the construction of ecosystem models. To develop an appropriate interpolation method of determining fishery resources density in the Yellow Sea, we tested four frequently used methods, including inverse distance weighted interpolation (IDW), global polynomial interpolation (GPI), local polynomial interpolation (LPI) and ordinary kriging (OK). A cross-validation diagnostic was used to analyze the efficacy of interpolation, and a visual examination was conducted to evaluate the spatial performance of the different methods. The results showed that the original data were not normally distributed. A log transformation was then used to make the data fit a normal distribution. During the four survey periods, an exponential model was shown to be the best semivariogram model in August and October 2014, while data from January and May 2015 exhibited the pure nugget effect. Using a paired-samples t-test, no significant differences (P > 0.05) between predicted and observed data were found for any of the four interpolation methods during the four survey periods. Results of the cross-validation diagnostic demonstrated that OK performed the best in August 2014, while IDW performed better during the other three survey periods. The GPI and LPI methods had relatively poor interpolation results compared to IDW and OK. With respect to the spatial distribution, OK was balanced and was neither as disconnected as IDW nor as overly smooth as GPI and LPI, although OK still produced a few "bull's-eye" patterns in some areas. However, the degree of autocorrelation sometimes limits the application of OK. Thus, OK is highly recommended if data are spatially autocorrelated. With respect to feasibility and accuracy, we recommend IDW as a routine interpolation method. IDW is more accurate than GPI and LPI and has a combination of desirable properties, such as easy accessibility and rapid processing.
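The recommended IDW method, together with the cross-validation diagnostic used to compare the methods, can be sketched as follows. This is a generic leave-one-out RMSE diagnostic under stated assumptions (power-2 weighting, Euclidean distance), not the study's exact implementation, and the toy station data is invented for illustration.

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted prediction at a single query point."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < eps):                 # query coincides with a sample point
        return z_known[np.argmin(d)]
    w = 1.0 / d ** power                # closer samples get larger weights
    return np.sum(w * z_known) / np.sum(w)

def loo_rmse(xy, z, power=2.0):
    """Leave-one-out cross-validation RMSE for IDW: hold out each sample,
    predict it from the rest, and accumulate the errors."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw_predict(xy[mask], z[mask], xy[i], power)
        errs.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy density samples at four stations (illustrative data only).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
center = idw_predict(xy, z, np.array([0.5, 0.5]))   # equidistant -> mean = 2.5
score = loo_rmse(xy, z)
```

The same leave-one-out loop works unchanged for GPI, LPI or OK by swapping the predictor, which is what makes it a fair basis for the comparison reported above.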