System identification application using Hammerstein model
Indian Academy of Sciences (India)
SABAN OZER; HASAN ZORLU; SELCUK METE
2016-06-01
In the literature, Hammerstein models generally use a memoryless polynomial nonlinear model for the nonlinear part and a finite impulse response (FIR) or infinite impulse response (IIR) model for the linear part. In this paper, system identification applications of a Hammerstein model formed as a cascade of a nonlinear second-order Volterra model and a linear FIR model are studied. A recursive least squares algorithm is used to identify the parameters of the proposed Hammerstein model. Furthermore, the results are compared against different model types to evaluate the success of the proposed Hammerstein model.
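The scheme in the abstract above, a static polynomial nonlinearity feeding an FIR block with parameters estimated by recursive least squares, can be sketched with the usual overparameterization trick: the products of nonlinear and linear coefficients enter a regressor that is linear in the unknowns. The code below is a minimal illustration of that idea under assumed model orders and a synthetic system, not the authors' implementation.

```python
import numpy as np

def hammerstein_rls(u, y, poly_order=2, fir_taps=3, lam=0.99):
    """RLS estimate of the overparameterized coefficients b_j * c_i of a
    Hammerstein model: x(t) = sum_i c_i u(t)^i, y(t) = sum_j b_j x(t-j)."""
    n_theta = poly_order * fir_taps
    theta = np.zeros(n_theta)
    P = 1e4 * np.eye(n_theta)               # large initial covariance
    for t in range(fir_taps - 1, len(u)):
        # regressor: powers u(t-j)^i of the delayed inputs
        phi = np.array([u[t - j] ** i
                        for j in range(fir_taps)
                        for i in range(1, poly_order + 1)])
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta += k * (y[t] - phi @ theta)    # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

# assumed synthetic system: x = u + 0.5 u^2, y = x(t) + 0.4 x(t-1) + 0.2 x(t-2)
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 2000)
x = u + 0.5 * u ** 2
y = x + 0.4 * np.roll(x, 1) + 0.2 * np.roll(x, 2)
theta = hammerstein_rls(u, y)   # approaches [b_j * c_i] = [1, .5, .4, .2, .2, .1]
```

The nonlinear coefficients c_i and linear taps b_j can afterwards be separated from the estimated products, e.g. by an SVD-based rank-one factorization, as several of the papers listed here do.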
An Envelope Hammerstein Model for Power Amplifiers
Institute of Scientific and Technical Information of China (English)
Hua-Dong Wang; Song-Bai He; Jing-Fu Bao; Zheng-De Wu
2007-01-01
In this paper, an envelope Hammerstein (EH) model is introduced to describe the dynamic input-output characteristics of RF power amplifiers. In the modeling approach, we use a new truncation method and an established nonlinear time series method to determine the model structure. Then, we discuss the process of model parameter extraction in detail. Finally, a 2 W WCDMA power amplifier is measured to verify the performance of the EH model, and the good agreement between model output and measurement results shows that our model can accurately predict the output characteristics of the power amplifier.
Hammerstein Model Based RLS Algorithm for Modeling the Intelligent Pneumatic Actuator (IPA) System
Directory of Open Access Journals (Sweden)
Siti Fatimah Sulaiman
2017-08-01
An Intelligent Pneumatic Actuator (IPA) system is highly nonlinear, and these nonlinearities make precise position control of the actuator difficult to achieve. It is therefore appropriate to model the system using a nonlinear approach, because a linear model is sometimes insufficient to represent the nonlinearity of the system in the real process. This study presents a new model of an IPA system using a Hammerstein model based on the Recursive Least Squares (RLS) algorithm. The Hammerstein model is one of the block-structured nonlinear models often used to model nonlinear systems; it consists of a static nonlinear block followed by a linear dynamic block. In this study, the static nonlinear block represents the dead-zone of the pneumatic valve, while the linear block represents the dynamic element of the IPA system. RLS is employed as the main algorithm to estimate the parameters of the Hammerstein model. The validity of the proposed model has been verified by a real-time experiment. The proposed Hammerstein model satisfied all criteria outlined in the system identification procedure: it provided a stable system, a higher best fit, a lower loss function, and a lower final prediction error than a previously developed linear model. The performance of the proposed Hammerstein model in controlling the IPA's positioning system is also good. This new Hammerstein model is therefore sufficient to represent the IPA system used in this study.
Fault Detection for Shipboard Monitoring – Volterra Kernel and Hammerstein Model Approaches
DEFF Research Database (Denmark)
Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam
2009-01-01
In this paper, nonlinear fault detection for in-service monitoring and decision support systems for ships is presented. The ship is described as a nonlinear system, and the stochastic wave elevation and the associated ship responses are conveniently modelled in the frequency domain...
Modelling and Estimation of Hammerstein System with Preload Nonlinearity
Directory of Open Access Journals (Sweden)
Khaled ELLEUCH
2010-12-01
This paper deals with modelling and parameter identification of nonlinear systems described by a Hammerstein model with an asymmetric static nonlinearity known as a preload nonlinearity characteristic. The simultaneous use of an easy decomposition technique and generalized orthonormal bases leads to a particular form of the Hammerstein model containing a minimal number of parameters. Using orthonormal bases to describe the linear dynamic block leads to a linear regressor model, so that least squares techniques can be used for parameter estimation. The Singular Value Decomposition (SVD) technique is applied to separate the coupled parameters. An illustrative example is included to demonstrate the feasibility of the identification method.
Multilinear Model of Heat Exchanger with Hammerstein Structure
Directory of Open Access Journals (Sweden)
Dragan Pršić
2016-01-01
The multilinear model control design approach is based on approximating the nonlinear model of a system by a set of linear models. This paper presents a method for creating a bank of linear models of a two-pass shell-and-tube heat exchanger. The nonlinear model is assumed to have a Hammerstein structure. The set of linear models is formed by decomposing the nonlinear steady-state characteristic using a modified Included Angle Dividing method. Two modifications of this method are proposed. The first is an addition to the decomposition algorithm that reduces the number of linear segments. The second concerns determination of the threshold value. The dependence between the decomposition of the nonlinear characteristic and the linear dynamics of the closed-loop system is established. The decoupling process is more formal and can easily be implemented using software tools. Due to its simplicity, the method is particularly suitable for complex systems such as heat exchanger networks.
Wiener-Hammerstein system identification - an evolutionary approach
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous, as no interaction with the user is needed during the optimum search. The performance of the proposed method is illustrated and compared to alternative methods on a well-established WH benchmark.
Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems
Directory of Open Access Journals (Sweden)
Man Hong
2013-01-01
To precisely track the reactor temperature over the entire working range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as the continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter is designed for constrained Hammerstein-Wiener systems. An output feedback control law for the linear subsystem is derived by state observation. The amount of reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by a multistep predictive approach. The actual control variables are computed, subject to the constraints of the optimal control problem over a finite horizon, through the receding horizon. A simulation example of the CSTR shows the effectiveness and feasibility of the proposed algorithm.
Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling
Wu, Xiaofang; Shi, Jianghong
In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are used as the static nonlinearities, since splines can represent hard nonlinearities accurately while avoiding numerical instability. According to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters eliminate the amplifier's memory effects. The identification algorithm of the Hammerstein predistorter is also discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to address this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter achieves higher linearization performance when the PA is near saturation and comparable performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
Energy Technology Data Exchange (ETDEWEB)
Li, Chun-Hua; Zhu, Xin-Jian; Cao, Guang-Yi; Sui, Sheng; Hu, Ming-Ruo [Fuel Cell Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China)
2008-01-03
This paper reports a Hammerstein modeling study of a proton exchange membrane fuel cell (PEMFC) stack using least squares support vector machines (LS-SVM). A PEMFC is a complex nonlinear, multi-input multi-output (MIMO) system that is hard to model by traditional methodologies. Because the generalization performance of LS-SVM is independent of the dimensionality of the input data, and because of the particularly simple structure of the Hammerstein model, a MIMO SVM-ARX (linear autoregression model with exogenous input) Hammerstein model is used to represent the PEMFC stack. The linear model parameters and the static nonlinearity are obtained simultaneously by solving a set of linear equations followed by singular value decomposition (SVD). Simulation tests demonstrate that the obtained SVM-ARX Hammerstein model can efficiently approximate the dynamic behavior of a PEMFC stack. Furthermore, based on the proposed SVM-ARX Hammerstein model, control strategies such as predictive control and robust control can be developed. (author)
Directory of Open Access Journals (Sweden)
Fatima Sbeity
2013-01-01
Sub- and ultraharmonic (SUH) ultrasound contrast imaging is an alternative to second harmonic imaging, since under specific conditions it can produce high-quality echographic images. This modality enhances the contrast of echographic images by using the SUH components present in the contrast agent response but absent from non-perfused tissue. For better access to the components generated by ultrasound contrast agents, nonlinear techniques based on the Hammerstein model are preferred. Since the major limitation of the Hammerstein model is that it can represent harmonic components only, this work proposes two methods for modeling SUH. These methods use several Hammerstein models to identify contrast agent signals containing SUH components and to separate these components from the harmonic ones. Applying the proposed methods to simulated contrast agent signals shows their efficiency in modeling the signals and separating the SUH components. The gain with respect to the standard Hammerstein model was 26.8 dB and 22.8 dB for the two proposed methods, respectively.
Institute of Scientific and Technical Information of China (English)
满红; 邵诚
2011-01-01
A model predictive control strategy based on neural networks is presented for the continuous stirred tank reactor (CSTR) widely used in chemical processes. A piecewise least squares support vector machine is used to identify the Hammerstein-Wiener model coefficients, and on this basis a nonlinear predictive controller is constructed as a linear autoregressive with exogenous input (ARX) structure in series with a Gaussian radial basis function neural network. A BP neural network is used to train the predictive control input sequence, and a quasi-Newton algorithm solves the nonlinear predictive control law, realizing a nonlinear neural network predictive control algorithm based on the support vector machine Hammerstein-Wiener identification model. Simulation results for the CSTR show that this approach tracks and controls the reactant concentration more effectively.
New identification method for Hammerstein models based on approximate least absolute deviation
Xu, Bao-Chang; Zhang, Ying-Dan
2016-07-01
Disorder and peak noises or large disturbances can deteriorate the identification of Hammerstein nonlinear models when the least squares (LS) method is used. The least absolute deviation technique can resolve this problem; however, the absolute value function does not meet the differentiability requirement of most algorithms. To improve robustness and resolve the non-differentiability problem, an approximate least absolute deviation (ALAD) objective function is established by introducing a deterministic function that exhibits the characteristics of the absolute value under certain conditions. A new identification method for Hammerstein models based on ALAD is thus developed in this paper. The basic idea of this method is to apply stochastic approximation theory when deriving the recursive equations. After identifying the parameter matrix of the Hammerstein model via the new algorithm, the product terms in the matrix are separated by calculating average values. Finally, convergence of the algorithm is proven using the ordinary differential equation method. The proposed algorithm is more robust than LS methods, particularly when abnormal points exist in the measured data; it is also easier to apply and converges faster. Simulation results demonstrate the efficacy of the proposed algorithm.
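The key device in the abstract above is replacing the non-differentiable |e| by a smooth surrogate. One common choice with the stated properties is sqrt(e² + δ); the paper's exact function is not given here, so this surrogate and the toy gradient fit below are assumptions for illustration of why such a criterion resists outliers where plain least squares does not.

```python
import numpy as np

def alad_grad(e, delta=1e-3):
    """Gradient of the smooth surrogate sqrt(e^2 + delta) for |e| (an assumed
    ALAD-style function): e / sqrt(e^2 + delta), which saturates at +/-1 for
    large errors instead of growing linearly like the LS gradient."""
    return e / np.sqrt(e ** 2 + delta)

# robust estimate of a scalar gain y = a*u with one gross outlier
rng = np.random.default_rng(1)
u = rng.uniform(1, 2, 200)
y = 3.0 * u
y[10] += 100.0                       # outlier that wrecks plain LS

a = 0.0
for _ in range(2000):                # gradient descent on sum sqrt(r^2 + delta)
    a += 0.01 * np.mean(alad_grad(y - a * u) * u)

a_ls = np.dot(u, y) / np.dot(u, u)   # plain LS, dragged away by the outlier
```

Because the surrogate's gradient is bounded, the single 100-unit outlier contributes a vanishing pull on `a`, while it biases `a_ls` visibly away from the true gain of 3.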
Modelling of ultrasonic motor with dead-zone based on Hammerstein model structure
Institute of Scientific and Technical Information of China (English)
(no author listed)
2008-01-01
The ultrasonic motor (USM) exhibits strong nonlinearities that vary with driving conditions, as well as load-dependent characteristics such as the dead-zone. In this paper, an identification method for the rotary travelling-wave ultrasonic motor (RTWUSM) with dead-zone is proposed based on a modified Hammerstein model structure. The effect of the driving voltage on the nonlinearities of the RTWUSM is transformed into a change of the dynamic parameters with respect to the driving voltage, and the dead-zone of the RTWUSM is identified on the basis of this transformation. Experimental results showed good agreement between the output of the proposed model and the actual measured output.
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline, or Lanczos interpolator, among others. The proposed interpolator is presented as a solution to the modeling of static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained for performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method for modeling Hammerstein nonlinearities and their inverse functions in an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
Identification of Nonlinear Dynamic Systems Using Hammerstein-Type Neural Network
Directory of Open Access Journals (Sweden)
Hongshan Yu
2014-01-01
The Hammerstein model has been widely applied to identify nonlinear systems. In this paper, a Hammerstein-type neural network (HTNN) is derived to formulate the well-known Hammerstein model. The HTNN consists of a nonlinear static gain in cascade with a linear dynamic part. First, the Lipschitz criterion for order determination is derived. Second, the backpropagation algorithm for updating the network weights is presented, and a stability analysis is given. Finally, simulation results show that the HTNN identification approach achieves good identification performance.
Nonlinear modeling of activated sludge process using the Hammerstein-Wiener structure
Directory of Open Access Journals (Sweden)
Frącz Paweł
2016-01-01
The paper concerns a physical model of the activated sludge process (ASP), which is part of wastewater treatment. The aim of the study was to describe the nitrogen transformation process and the demand for the chemical fractions involved in the ASP. Moreover, the nonlinear relationship between the flow of wastewater and the electrical energy consumed by the blowers was determined. Such analyses are important from economic and environmental points of view. Assuming the total power does not change, the blowers consume approximately 613 MWh of energy per year, which illustrates the scale of the energy demand of the biological aeration unit. The aim is to minimize energy consumption by first building a model of the ASP and then optimizing the overall process by modifying chosen parameters in numerical simulations. In this paper, example measurement and analysis results of nitrite and ammonium nitrogen concentrations in the aeration reactor, and of the active power consumed by the blowers for the aeration process, are presented. The ASP modeling procedure, which uses the Hammerstein-Wiener structure, and example verification results are also presented. Based on the achieved results, it is concluded that the developed set of methodologies may be used to improve and extend the overriding control system of a wastewater treatment plant.
2012-01-01
In this paper, a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters in the NURB network are estimated using a particle swarm optimization (PSO) procedure. ...
Research on an Identification Method for Hammerstein Models in the Frequency Domain
Institute of Scientific and Technical Information of China (English)
李振强; 黄杰; 邹丽蓉
2015-01-01
This paper proposes a method for estimating the parameters of the nonlinear block of a Hammerstein model from input-output data in the frequency domain. When the output is corrupted by random noise, frequency-domain identification methods usually yield only a nonparametric model and cannot estimate the corresponding parameters. For control of a nonlinear Hammerstein model, if the parameters of the nonlinear block are known, its inverse can be used as a controller, so that the nonlinear system can be controlled as a linear one. The proposed method can estimate the parameters of the nonlinear block, which aids controller design. Simulations show that the identification method is feasible and effective.
Adaptive control of Hammerstein-Wiener nonlinear systems
Zhang, Bi; Hong, Hyokchan; Mao, Zhizhong
2016-07-01
The Hammerstein-Wiener model is a block-oriented model with a linear dynamic block sandwiched between two static nonlinear blocks. This note develops an adaptive controller for a special form of Hammerstein-Wiener nonlinear systems parameterized by the key-term separation principle. The adaptive control law and the recursive parameter estimation are updated using estimates of the internal variables. By modeling the errors due to the estimation of internal variables, we establish convergence and stability properties. Theoretical results show that parameter estimation convergence and closed-loop stability can be guaranteed under a sufficient condition. From a qualitative analysis of this condition, we introduce an adaptive weighting factor to improve the performance of the adaptive controller. Numerical examples confirm the results in this paper.
Parameter Estimation of Hammerstein Model Based on Data in the Wavelet Domain
Institute of Scientific and Technical Information of China (English)
李振强
2012-01-01
For a discrete nonlinear Hammerstein model whose output is corrupted by random noise, a method is proposed to estimate the model parameters directly from input-output data in the wavelet domain. The least squares (LS) method is the main parameter estimation method in the time domain; with the development of wavelet theory, wavelets have come to play an important role in signal processing. After a wavelet transform, the signal is represented in the wavelet domain with both time and frequency characteristics, the signal-to-noise ratio is improved, and denoising is more effective than in the time or frequency domain. The model parameters are estimated by a wavelet least squares method. Compared with the time-domain least squares method, simulation results show that the wavelet-domain method is feasible and effective.
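Since the discrete wavelet transform is linear, it can be applied to the output and to each regressor column of a linear-in-parameters Hammerstein model, and least squares can then be solved on the transformed data. The sketch below uses a single-level Haar transform and an assumed polynomial-FIR Hammerstein regression; note that for white excitation an orthogonal transform is statistically neutral, and the denoising benefit described in the abstract arises when the noise concentrates in the discarded detail band.

```python
import numpy as np

def haar_approx(v):
    """One level of the Haar DWT, keeping only the low-pass (approximation)
    coefficients: pairwise sums scaled by 1/sqrt(2)."""
    v = v[: len(v) // 2 * 2]            # even length
    return (v[0::2] + v[1::2]) / np.sqrt(2)

# assumed Hammerstein regression: y = theta . [u, u^2, u(t-1), u(t-1)^2] + noise
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 4000)
phi = np.column_stack([u, u**2, np.roll(u, 1), np.roll(u, 1)**2])[1:]
theta_true = np.array([1.0, 0.5, 0.4, 0.2])
y = phi @ theta_true + 0.5 * rng.standard_normal(len(phi))

# LS on wavelet-domain data: the same linear transform is applied to y and to
# every regressor column, so the regression equations remain valid
yw = haar_approx(y)
phiw = np.column_stack([haar_approx(phi[:, j]) for j in range(phi.shape[1])])
theta_w, *_ = np.linalg.lstsq(phiw, yw, rcond=None)
```

The important detail is that the transform is applied to the regressor columns (the equations), not re-evaluated through the nonlinearity: haar_approx(u)**2 would not equal haar_approx(u**2).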
Exact controllability of generalized Hammerstein type integral equation and applications
Directory of Open Access Journals (Sweden)
Dimplekumar N. Chalishajar
2006-11-01
In this article, we study the exact controllability of an abstract model described by the controlled generalized Hammerstein type integral equation $$ x(t) = \int_0^t h(t,s)u(s)\,ds + \int_0^t k(t,s,x)f(s,x(s))\,ds, \quad 0 \le t \le T < \infty, $$ where the state $x(t)$ lies in a Hilbert space $H$ and the control $u(t)$ lies in another Hilbert space $V$ for each time $t \in I=[0,T]$, $T > 0$. We establish the controllability result under suitable assumptions on $h$, $k$ and $f$ using monotone operator theory.
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, its objective function can be optimized in a reproducing kernel Hilbert space to form kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) no explicit mapping function, which adds further computation when projecting a new sample into the low-dimensional subspace; and 3) an inability to obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome these weaknesses, this paper proposes a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP). The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, extracting the optimal discriminant vectors for DLPP without a large computational burden. Compared with some related classical methods, experimental results on face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
Directory of Open Access Journals (Sweden)
Ying-Ying Wang
2015-06-01
The identification difficulties for a dual-rate Hammerstein system lie in two aspects. First, the identification model of the system contains products of the parameters of the nonlinear block and the linear block, so a standard least squares method cannot be applied directly to the model; second, the traditional single-rate discrete-time Hammerstein model cannot be used as the identification model for the dual-rate sampled system. To solve these problems, this paper combines the polynomial transformation technique with the key variable separation technique to convert the Hammerstein system into a dual-rate linear regression model in all parameters (a linear-in-parameters model) and proposes a recursive least squares algorithm to estimate the parameters of the dual-rate system. Simulation results verify the effectiveness of the proposed algorithm.
Institute of Scientific and Technical Information of China (English)
Zhiyun Zou; Dandan Zhao; Xinghong Liu; Yuqing Guo; Chen Guan; Wenqiang Feng; Ning Guo
2015-01-01
By taking advantage of the separation of the nonlinear gain and the dynamic part inside a Hammerstein model, a novel pole placement self-tuning control scheme for nonlinear Hammerstein systems is put forward, based on the linear-system pole placement self-tuning control algorithm, and the nonlinear Hammerstein system pole placement self-tuning control (NL-PP-STC) algorithm is presented in detail. The identification ability of its parameter estimation algorithm is analyzed; the parameters are always identifiable in closed loop. Two practical problems, the selection of poles and the on-line estimation of model parameters, which may be met in applications of NL-PP-STC to real process control, are discussed. A control simulation of a strongly nonlinear pH neutralization process was carried out and good control performance was achieved.
Adaptive Hammerstein Predistorter Using the Recursive Prediction Error Method
Institute of Scientific and Technical Information of China (English)
LI Hui; WANG Desheng; CHEN Zhaowu
2008-01-01
The digital baseband predistorter is an effective technique to compensate for the nonlinearity of power amplifiers (PAs) with memory effects. However, most available adaptive predistorters based on direct learning architectures suffer from slow convergence. In this paper, the recursive prediction error method is used to construct an adaptive Hammerstein predistorter based on the direct learning architecture, which is used to linearize a Wiener PA model. The effectiveness of the scheme is demonstrated on a digital video broadcasting-terrestrial system. Simulation results show that the predistorter outperforms previous predistorters based on direct learning architectures in terms of convergence speed and linearization. A similar algorithm can be applied to estimate the Wiener PA model, achieving high model accuracy.
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Directory of Open Access Journals (Sweden)
Zongze Wu
2015-10-01
The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply MCC to develop a robust Hammerstein adaptive filter. Compared with traditional Hammerstein adaptive filters, which are usually derived from the well-known mean square error (MSE) criterion, the proposed algorithm achieves better convergence performance, especially in the presence of impulsive non-Gaussian (e.g., α-stable) noise. Some theoretical results concerning the convergence behavior are also obtained. Simulation examples confirm the superior performance of the new algorithm.
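The core of an MCC adaptive filter is a Gaussian-kernel-weighted update that shrinks the step size when the instantaneous error is an outlier. The sketch below shows this idea on a plain linear FIR identification problem for brevity (the paper's Hammerstein structure adds a nonlinear input stage); the filter length, kernel width, and synthetic data are assumptions for illustration.

```python
import numpy as np

def mcc_lms(u, d, taps=4, mu=0.05, sigma=1.0):
    """Maximum-correntropy LMS: the kernel weight exp(-e^2 / 2 sigma^2)
    multiplies the usual LMS update, so impulsive errors are ignored
    rather than amplified."""
    w = np.zeros(taps)
    for t in range(taps - 1, len(u)):
        x = u[t - taps + 1:t + 1][::-1]           # newest sample first
        e = d[t] - w @ x
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * x
    return w

# assumed synthetic FIR system with impulsive measurement outliers
rng = np.random.default_rng(3)
u = rng.standard_normal(5000)
w_true = np.array([0.8, 0.4, -0.2, 0.1])
d = np.convolve(u, w_true)[: len(u)]
idx = rng.choice(len(d), 50, replace=False)
d[idx] += 20 * rng.standard_normal(50)            # impulsive outliers
w = mcc_lms(u, d)
```

An MSE-based LMS would take full-size steps on the 20-sigma impulses and its weights would be repeatedly knocked away from `w_true`; the correntropy weight makes those updates nearly zero.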
Directory of Open Access Journals (Sweden)
Huiyi Hu
2013-01-01
speed of the stochastic gradient algorithm. The key term separation principle can simplify the identification model of the input nonlinear system, and the decomposition technique can enhance computational efficiencies of identification algorithms. The simulation results show that the proposed algorithm is effective for estimating the parameters of IN-CARAR systems.
Recursive Identification of Hammerstein Systems Using Polynomial Function Approximation
Institute of Scientific and Technical Information of China (English)
肖永生; 黄丽贞; 王建宏
2012-01-01
Nonlinear identification of the Hammerstein system was studied. The nonlinear static function in the Hammerstein model is approximated by a family of piecewise polynomial functions, which makes the model linear in its parameters. The basic idea of the method is to replace the unmeasurable noise terms in the information vector by their estimates, and to compute those noise estimates from the obtained parameter estimates. The identification procedure is divided into two steps. In step 1, an extended stochastic gradient algorithm is adopted to identify some of the unknown parameters. In step 2, based on singular value decomposition (SVD), a new method is proposed to identify the remaining unknown parameters. Simulation results demonstrate the validity of the proposed approach.
Directory of Open Access Journals (Sweden)
Ignacio Santamaría
2008-04-01
This paper treats the identification of nonlinear systems that consist of a cascade of a linear channel and a nonlinearity, such as the well-known Wiener and Hammerstein systems. In particular, we follow a supervised identification approach that simultaneously identifies both parts of the nonlinear system. Given the correct restrictions on the identification problem, we show how kernel canonical correlation analysis (KCCA) emerges as the logical solution. We then extend the proposed identification algorithm to an adaptive version that can deal with time-varying systems. To avoid overfitting, we discuss and compare three possible regularization techniques for both the batch and adaptive versions of the proposed algorithm. Simulations are included to demonstrate the effectiveness of the presented algorithm.
MULTIPLE POSITIVE SOLUTIONS TO A SYSTEM OF NONLINEAR HAMMERSTEIN TYPE INTEGRAL EQUATIONS
Institute of Scientific and Technical Information of China (English)
Wang Feng; Zhang Fang; Liu Chunhan
2009-01-01
In this paper, we use cone theory and a new method of computation of fixed point index to study a system of nonlinear Hammerstein type integral equations, and the existence of multiple positive solutions to the system is discussed.
Chatterjee, Shre Kumar; Das, Saptarshi; Manzella, Veronica; Vitaletti, Andrea; Masi, Elisa; Santopolo, Luisa; Mancuso, Stefano; Maharatna, Koushik
2014-01-01
In this paper, a system identification approach has been adopted to develop a novel dynamical model describing the relationship between light, as an environmental stimulus, and the electrical response, as the measured output, for a bay leaf (Laurus nobilis) plant. More specifically, the target is to predict the characteristics of the input light stimulus (in terms of on-off timing, duration and intensity) from the measured electrical response, leading to an inverse problem. We explored two major classes of system estimators, linear and nonlinear, and several of their variants for establishing a forward and also an inverse relationship between the light stimulus and the plant electrical response. The best class of models is given by the Nonlinear Hammerstein-Wiener (NLHW) estimator, showing good data fitting results over other linear and nonlinear estimators in a statistical sense. Consequently, a few sets of models using different functional variants of NLHW have been developed and their a...
Institute of Scientific and Technical Information of China (English)
刘冉冉; 潘天红; 李正明
2015-01-01
A hierarchical multi-innovation stochastic gradient identification algorithm is proposed for Hammerstein-Wiener (H-W) nonlinear systems with non-uniform sampling. The corresponding state-space model of the H-W system is derived using the lifting technique. Considering the causality constraints, the H-W system is first decomposed into two subsystems. The model parameters are then identified using the multi-innovation stochastic gradient algorithm with forgetting factors. In order to improve the convergence rate and disturbance rejection, a variable forgetting factor, determined online by a proposed modification function, is also presented. Simulation examples demonstrate that the proposed algorithm has fast convergence and is robust to noise.
Directory of Open Access Journals (Sweden)
Sohrab Bazm
2016-11-01
Full Text Available Alternative Legendre polynomials (ALPs) are used to approximate the solution of a class of nonlinear Volterra-Hammerstein integral equations. For this purpose, the operational matrices of integration and of the product for ALPs are derived. Then, using the collocation method, the considered problem is reduced to a set of nonlinear algebraic equations. An error analysis of the method is given, and its efficiency and accuracy are illustrated by applying the method to some examples.
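The collocation idea can be illustrated with a toy example. The sketch below replaces the ALP operational matrices with plain Gauss-Legendre quadrature and solves the resulting nonlinear algebraic system by Picard (fixed-point) iteration, so it is a simplified stand-in for the paper's method; the kernel, right-hand side, and exact solution are constructed purely for illustration.

```python
import numpy as np

# Toy Hammerstein integral equation
#   u(t) = f(t) + integral_0^1 k(t, s) u(s)^2 ds
# with k(t, s) = t*s and f(t) = 3t/4, whose exact solution is u(t) = t,
# because integral_0^1 t * s * s^2 ds = t/4.

n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)
s = 0.5 * (nodes + 1.0)    # map Gauss-Legendre nodes from [-1, 1] to [0, 1]
w = 0.5 * weights          # rescale the weights accordingly

f = 0.75 * s               # right-hand side at the collocation nodes
K = np.outer(s, s)         # kernel matrix k(t_i, s_j) = t_i * s_j

u = f.copy()               # initial guess
for _ in range(50):        # Picard iteration on the discretized equation
    u = f + K @ (w * u**2)

# u now approximates the exact solution u(t) = t at the nodes
```

For this kernel the discretized map is a contraction near the solution, so the iteration converges to the exact nodal values to machine precision; a Newton solver would replace the loop for stiffer kernels.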
Identification of Hammerstein Model Based on Quantum Genetic Algorithm
Directory of Open Access Journals (Sweden)
Zhang Hai Li
2013-07-01
Full Text Available Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented using the Quantum Genetic Algorithm (QGA). The nonlinear system identification problem is cast as function optimization over the parameter space, and the QGA is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm intelligence algorithm: its salient features are few algorithm parameters, a small population size, and the use of quantum gates to update the population, which greatly improve the speed and accuracy of the optimization. Simulation results show the effectiveness of the proposed method.
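The general approach (casting identification as function optimization over the parameter space and solving it with evolutionary search) can be sketched as follows. This uses a plain real-coded genetic algorithm rather than a quantum genetic algorithm, and the Hammerstein system, population size, and operator settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed unknown Hammerstein system: f(u) = u + a2*u^2 followed by a 2-tap FIR
true_theta = np.array([0.5, 1.0, 0.4])    # [a2, b0, b1]; a1 fixed to 1 for identifiability
u = rng.uniform(-1, 1, 500)

def simulate(theta, u):
    a2, b0, b1 = theta
    x = u + a2 * u**2
    return b0 * x + b1 * np.concatenate([[0.0], x[:-1]])

y = simulate(true_theta, u)               # recorded I/O data (noiseless here)

def mse(theta):
    return np.mean((y - simulate(theta, u))**2)

# Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation, elitism
pop = rng.uniform(-2, 2, (40, 3))
for gen in range(200):
    fit = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]               # keep the best half
    i, j = rng.integers(0, 20, (2, 40))
    alpha = rng.uniform(0, 1, (40, 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]   # blend crossover
    children += rng.normal(0, 0.05, children.shape)            # Gaussian mutation
    children[0] = parents[0]                                   # elitism: keep the best unchanged
    pop = children

best = pop[np.argmin([mse(ind) for ind in pop])]
```

With elitism the best fitness is monotone non-increasing, and on this small 3-parameter problem the search settles close to the true parameters; a QGA would replace the crossover/mutation operators with quantum-gate updates of a probabilistic population encoding.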
Identification of Hammerstein Model Based on Quantum Genetic Algorithm
Zhang Hai Li
2013-01-01
Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented using the Quantum Genetic Algorithm (QGA). The nonlinear system identification problem is cast as function optimization over the parameter space, and the QGA is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm intelligence algorith...
A Hammerstein Predistorter Based on a Parallel FIR Filterbank
Institute of Scientific and Technical Information of China (English)
佀秀杰; 韩丽茹; 曹振
2015-01-01
The key factor in compensating the nonlinear distortion (NLD) of a power amplifier (PA) with memory effects is the accuracy of predistorter (PD) modeling, especially the description of the inverse memory characteristics of the PA. To address the problem that existing predistorter models do not describe the inverse memory effects of the PA sufficiently, a lookup table (LUT) cascaded with a parallel Finite Impulse Response (FIR) filterbank is proposed as the realization form of the Hammerstein predistorter, improving the compensation performance of the traditional Hammerstein predistorter. A two-stage scheme is used to identify the Hammerstein predistorter. Simulation results demonstrate that the proposed PD can more efficiently compensate the nonlinear distortion of a power amplifier with strong memory effects.
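The static LUT stage of such a predistorter can be illustrated with a memoryless toy PA; the parallel FIR filterbank, which handles the memory effects, is omitted here. The cubic PA model, grid resolution, and Newton inversion below are assumptions made for the sketch, not the paper's setup.

```python
import numpy as np

# Toy power amplifier with mild memoryless cubic compression (for |x| <= 1):
def pa(x):
    return x - 0.1 * x**3

# Build the LUT for the static predistortion function by numerically inverting
# the PA characteristic on a grid of desired output levels.
grid_out = np.linspace(-0.9, 0.9, 1001)   # desired PA outputs (within the PA's range)
grid_in = np.zeros_like(grid_out)
for k, target in enumerate(grid_out):     # invert pa() by Newton's method
    x = target
    for _ in range(20):
        x -= (pa(x) - target) / (1.0 - 0.3 * x**2)   # derivative of pa is 1 - 0.3 x^2
    grid_in[k] = x

def predistort(u):
    # LUT with linear interpolation: maps desired output to the required PA input
    return np.interp(u, grid_out, grid_in)

u = np.linspace(-0.8, 0.8, 101)           # desired output levels
y = pa(predistort(u))                     # cascade: LUT predistorter -> PA
```

The cascade output tracks the desired signal almost exactly; in the paper's scheme this static stage is followed by parallel FIR branches so the cascade can also undo the PA's memory.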
Spatio-temporal modeling of nonlinear distributed parameter systems
Li, Han-Xiong
2011-01-01
The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification of the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s
SHAPE STABILITY OF OPTIMAL CONTROL PROBLEMS IN COEFFICIENTS FOR COUPLED SYSTEM OF HAMMERSTEIN TYPE
Directory of Open Access Journals (Sweden)
P. I. Kogut
2014-01-01
Full Text Available In this paper we consider an optimal control problem (OCP) for the coupled system of a nonlinear monotone Dirichlet problem with matrix-valued L∞(Ω;R^{N×N})-controls in the coefficients and a nonlinear equation of Hammerstein type, whose solution depends nonlinearly on the L∞-control. Since problems of this type have no solutions in general, we make a special assumption on the coefficients of the state equations and introduce the class of so-called solenoidal admissible controls. Using the direct method in the calculus of variations, we prove the existence of an optimal control. We also study the stability of the optimal control problem with respect to domain perturbation. In particular, we derive sufficient conditions for the Mosco-stability of the given class of OCPs.
A muscle model for hybrid muscle activation
Directory of Open Access Journals (Sweden)
Klauer Christian
2015-09-01
Full Text Available To develop model-based control strategies for Functional Electrical Stimulation (FES) in order to support weak voluntary muscle contractions, a hybrid model for describing joint motions induced by concurrent voluntary and FES-induced muscle activation is proposed. It is based on a Hammerstein model, as commonly used in feedback-controlled FES, and is exemplarily applied to describe the shoulder abduction joint angle. The main component of a Hammerstein muscle model is usually a static input nonlinearity depending on the stimulation intensity. To additionally incorporate voluntary contributions, we extended the static nonlinearity by a second input describing the intensity of the voluntary contribution, which is estimated from electromyography (EMG) measurements, even during active FES. An Artificial Neural Network (ANN) is used to describe the static input nonlinearity. The output of the ANN drives a second-order linear dynamical system that describes the combined muscle activation and joint angle dynamics. The tunable parameters are adapted to the individual subject by a system identification approach using previously recorded I/O data. The model has been validated in two healthy subjects, yielding RMS values for the joint angle error of 3.56° and 3.44°, respectively.
Design and experiment of data-driven modeling and flutter control of a prototype wing
Lum, Kai-Yew; Xu, Cai-Lin; Lu, Zhenbo; Lai, Kwok-Leung; Cui, Yongdong
2017-06-01
This paper presents an approach for data-driven modeling of aeroelasticity and its application to flutter control design of a wind-tunnel wing model. Modeling is centered on system identification of unsteady aerodynamic loads using computational fluid dynamics data, and adopts a nonlinear multivariable extension of the Hammerstein-Wiener system. The formulation is in modal coordinates of the elastic structure, and yields a reduced-order model of the aeroelastic feedback loop that is parametrized by airspeed. Flutter suppression is thus cast as a robust stabilization problem over uncertain airspeed, for which a low-order H∞ controller is computed. The paper discusses in detail parameter sensitivity and observability of the model, the former to justify the chosen model structure, and the latter to provide a criterion for physical sensor placement. Wind tunnel experiments confirm the validity of the modeling approach and the effectiveness of the control design.
A Novel Adaptive Structure for Hammerstein Predistorter
Institute of Scientific and Technical Information of China (English)
佀秀杰; 金明录
2011-01-01
Present adaptive predistortion structures hinder the use of efficient least-squares algorithms for directly updating the parameters of a Hammerstein predistorter. To solve this problem, a novel adaptive structure is proposed. Using this structure, the errors of the two subsystems of a Hammerstein predistorter can be obtained, so an efficient least-squares algorithm can update the Hammerstein predistorter directly; the effect on predistorter performance of structural error and imprecise subsystem errors is thus avoided. Computer simulation confirms that, with the proposed adaptive predistortion structure, a Hammerstein predistorter can quickly and efficiently compensate the nonlinear distortion of a power amplifier with memory effects.
Yan, Jun; Li, Bo; Guo, Gang; Zeng, Yonghua; Zhang, Meijun
2013-11-01
Electro-hydraulic control systems are nonlinear in nature and their mathematical models have unknown parameters. Existing research on modeling and identification of electro-hydraulic control systems is mainly based on theoretical state-space models, and parameter identification is hard due to its demand for internal state measurements. Moreover, there are also some hard-to-model nonlinearities in the theoretical model, which need to be overcome. Modeling and identification of the electro-hydraulic control system of an excavator arm based on block-oriented nonlinear (BONL) models is investigated. The nonlinear state-space model of the system is built first, and field tests are carried out to reveal the nonlinear characteristics of the system. Based on physical insight into the system, three BONL models are adopted to describe the highly nonlinear system. The Hammerstein model is composed of a two-segment polynomial nonlinearity followed by a linear dynamic subsystem. The Hammerstein-Wiener (H-W) model is represented by the Hammerstein model in cascade with another single polynomial nonlinearity. A novel Pseudo-Hammerstein-Wiener (P-H-W) model is developed by replacing the single polynomial of the H-W model with a non-smooth backlash function. The key-term separation principle is applied to simplify the BONL models into linear-in-parameters structures. Then, a modified recursive least squares algorithm (MRLSA) with iterative estimation of internal variables is developed to identify all the parameters simultaneously. The identification results demonstrate that the BONL models with two-segment polynomial nonlinearities are able to capture the system behavior, and the P-H-W model has the best prediction accuracy. Comparison experiments show that the velocity prediction error of the P-H-W model is reduced by 14%, 30% and 75% relative to the H-W model, the Hammerstein model, and the extended auto-regressive (ARX) model, respectively. This research is helpful in controller design, system
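The linear-in-parameters structure obtained by key-term separation makes recursive least squares straightforward. The sketch below identifies a much simpler Hammerstein system (polynomial nonlinearity plus first-order ARX dynamics) than the excavator-arm models of the paper; the system, noise level, and normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed Hammerstein system, linear in the parameters [a1, c1, c2]:
#   y(t) = -a1*y(t-1) + f(u(t-1)),  f(u) = c1*u + c2*u^2
# (the linear gain is normalized to 1, in the spirit of key-term separation)
a1, c1, c2 = 0.5, 1.0, 0.3
N = 1000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1 * y[t-1] + c1 * u[t-1] + c2 * u[t-1]**2 + 0.01 * rng.normal()

# Standard recursive least squares on the regressor phi(t) = [-y(t-1), u(t-1), u(t-1)^2]
theta = np.zeros(3)
P = np.eye(3) * 1e3                       # large initial covariance
for t in range(1, N):
    phi = np.array([-y[t-1], u[t-1], u[t-1]**2])
    k = P @ phi / (1.0 + phi @ P @ phi)   # gain vector
    theta = theta + k * (y[t] - phi @ theta)
    P = P - np.outer(k, phi @ P)
```

Because the model is linear in `theta`, consistency follows from standard RLS theory; the paper's MRLSA additionally iterates estimates of unmeasured internal variables, which this sketch avoids by construction.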
An Improved Adaptive Structure for Hammerstein Predistorter
Institute of Scientific and Technical Information of China (English)
侣秀杰; 金明录
2011-01-01
The existing adaptive structures are not suitable for updating the parameters of a Hammerstein predistorter, especially those of the linear subsystem. To solve this problem, an improved adaptive structure based on the indirect learning structure is proposed in this paper. This adaptive structure provides the error of the linear subsystem, so an efficient least-squares algorithm can be used to update the parameters of the linear subsystem of a Hammerstein predistorter directly. The linearization performance of the whole predistorter is thereby improved. Experimental results show that a Hammerstein predistorter using the proposed adaptive structure can efficiently compensate the nonlinear distortion of a PA with memory effects.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...
Khachatryan, Kh A.
2015-04-01
We study certain classes of non-linear Hammerstein integral equations on the semi-axis and the whole line. These classes of equations arise in the theory of radiative transfer in nuclear reactors, in the kinetic theory of gases, and for travelling waves in non-linear Ricker competition systems. By combining special iteration methods with methods for constructing invariant cone segments for the appropriate non-linear operator, we are able to prove constructive existence theorems for positive solutions in various function spaces. We give illustrative examples of equations satisfying all the hypotheses of our theorems.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. A case study following the MC-based modeling approach, with the background of one-kind-product machinery manufacturing enterprises, is illustrated. It shows that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.
Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption
Directory of Open Access Journals (Sweden)
Zheping Yan
2014-01-01
Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environment and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. To make the algorithm recursive, the propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
Energy Technology Data Exchange (ETDEWEB)
Zhou, Ping; Song, Heda; Wang, Hong; Chai, Tianyou
2017-09-01
The blast furnace (BF) in ironmaking is a nonlinear dynamic process with complicated physical-chemical reactions, in which multi-phase and multi-field coupling and large time delays occur during operation. In BF operation, the molten iron temperature (MIT) as well as the Si, P and S contents of the molten iron are the most essential molten iron quality (MIQ) indices, whose measurement, modeling and control have always been important issues in metallurgical engineering and the automation field. This paper develops a novel data-driven nonlinear state-space modeling approach for the prediction and control of multivariate MIQ indices by integrating hybrid modeling and control techniques. First, to improve modeling efficiency, a data-driven hybrid method combining canonical correlation analysis and correlation analysis is proposed to identify the most influential controllable variables, out of the multitudinous factors affecting the MIQ indices, as the modeling inputs. Then, a Hammerstein model for the prediction of the MIQ indices is established using the LS-SVM based nonlinear subspace identification method. This model is further simplified by using the piecewise cubic Hermite interpolating polynomial method to fit the complex nonlinear kernel function. Compared to the original Hammerstein model, the simplified model not only significantly reduces the computational complexity, but also has almost the same reliability and accuracy for a stable prediction of the MIQ indices. Finally, to verify the practicability of the developed model, it is applied in designing a genetic-algorithm-based nonlinear predictive controller for the multivariate MIQ indices by directly taking the established model as a predictor. Industrial experiments show the advantages and effectiveness of the proposed approach.
Measurements and Modeling of the Nonlinear Behavior of a Guitar Pickup at Low Frequencies †
Directory of Open Access Journals (Sweden)
Antonin Novak
2017-01-01
Full Text Available Description of the physical behavior of electric guitars is still not very widespread in the scientific literature. In particular, the physical models describing the nonlinear behavior of pickups still require some refinement. The study presented in this paper is focused on nonlinear modeling of pickups. Two main issues are raised. First, is the currently most used nonlinear model (a Hammerstein model) sufficient for the complex nonlinear behavior of the pickup? In other words, would a more complex model, such as a Generalized Hammerstein model that can deal better with nonlinear memory, yield better results? The second troublesome issue is how to measure the nonlinear behavior of a pickup correctly. A specific experimental set-up is proposed that allows driving the pickup in a controlled way (string displacement perpendicular to the pickup) and separating the nonlinear model of the pickup from other nonlinearities in the measurement chain. Thanks to this experimental set-up, a Generalized Hammerstein model of the pickup is estimated for the frequency range 15–500 Hz and the results are compared with a simple Hammerstein model. A comparison with experimental results shows that both models succeed in describing the pickup when used in realistic conditions.
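A Generalized Hammerstein model assigns a separate linear filter to each power of the input and sums the branch outputs, which is how it captures nonlinear memory that a simple Hammerstein model (one shared filter after a static nonlinearity) cannot. The sketch below builds such a model with short FIR branches and recovers the filters by ordinary least squares on synthetic data, rather than from the controlled pickup measurements of the paper; the filter lengths and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generalized Hammerstein model: branch k filters u^k with its own FIR filter h[k-1]
h = [np.array([1.0, 0.3]),      # filter applied to u
     np.array([0.2, -0.1]),     # filter applied to u^2
     np.array([0.05, 0.02])]    # filter applied to u^3

N = 2000
u = rng.uniform(-1, 1, N)
y = sum(np.convolve(u**(k+1), h[k])[:N] for k in range(3))   # sum of branch outputs

# Since the model is linear in the filter taps, identification is plain least squares
# on the stacked branch regressors (each power of u, at each delay).
cols = []
for k in range(3):
    for d in range(2):
        cols.append(np.concatenate([np.zeros(d), (u**(k+1))[:N-d]]))
phi = np.column_stack(cols)
h_hat = np.linalg.lstsq(phi, y, rcond=None)[0].reshape(3, 2)
```

On noiseless data the taps are recovered exactly; setting all three branch filters equal (up to gains) collapses this to a simple Hammerstein model, which is the comparison the paper makes.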
Hydraulic Modeling of Lock Approaches
2016-08-01
cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall ... develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences for the Army, the ... magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two
LP Approach to Statistical Modeling
Mukhopadhyay, Subhadeep; Parzen, Emanuel
2014-01-01
We present an approach to statistical data modeling and exploratory data analysis called 'LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the 'LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...
Approaches to Modeling of Recrystallization
Directory of Open Access Journals (Sweden)
Håkan Hallberg
2011-10-01
Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.
Validation of Modeling Flow Approaching Navigation Locks
2013-08-01
[List-of-figures residue: Figure 9, Tools and instrumentation, bracket attached to rail; Figure 10, Tools and instrumentation, direction vernier; Figure 11, Plan A lock approach, upstream approach.] Numerical model
Model Mapping Approach Based on Ontology Semantics
Directory of Open Access Journals (Sweden)
Jinkui Hou
2013-09-01
Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and principles of structure mapping are subsequently proposed. The approach is further illustrated by the mapping relations from the class model of an object-oriented modeling language to C programming code. The application research shows that the approach provides theoretical guidance for the realization of model mapping, and thus can effectively support model-driven software development.
Learning Action Models: Qualitative Approach
Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power; they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...
Geometrical approach to fluid models
Kuvshinov, B. N.; Schep, T. J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical
Geometrical approach to fluid models
Kuvshinov, B. N.; Schep, T. J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notio
Model based feature fusion approach
Schwering, P.B.W.
2001-01-01
In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
A POMDP approach to Affective Dialogue Modeling
Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.
2007-01-01
We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's
The chronic diseases modelling approach
Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM
1998-01-01
A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power; they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we......SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, Colliding Bodies Optimization (CBO). The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimization of the output MSE in the presence of outliers results in a consistently close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical information on the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches of similar type reported in recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme.
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S
2016-01-01
We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an \\emph{average scale length} can be defined \\emph{covariantly} which satisfies a 2d equation of motion driven from the \\emph{effective gravitational mass} (EGM) contained in the dust cloud. The contributions to the EGM are encoded to the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used in order to express the Sachs optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Matrix Model Approach to Cosmology
Chaney, A; Stern, A
2015-01-01
We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...
A new approach to adaptive data models
Directory of Open Access Journals (Sweden)
Ion LUNGU
2016-12-01
Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of data we collect, store and process. We are now aware of the increasing demand for real time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations accommodate on the fly and build dynamic capabilities to react in a dynamic environment.
Modeling software behavior a craftsman's approach
Jorgensen, Paul C
2009-01-01
A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
Model Oriented Approach for Industrial Software Development
Directory of Open Access Journals (Sweden)
P. D. Drobintsev
2015-01-01
Full Text Available The article considers the specifics of a model oriented approach to software development based on the usage of Model Driven Architecture (MDA, Model Driven Software Development (MDSD and Model Driven Development (MDD technologies. Benefits of this approach usage in the software development industry are described. The main emphasis is put on the system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific for industrial software systems development. These systems are characterized by different levels of abstraction, which is used on modeling and code development phases. The approach allows to detail the model to the level of the system code, at the same time store the verified model semantics and provide the checking of the whole detailed model. Steps of translating abstract data structures (including transactions, signals and their parameters into data structures used in detailed system implementation are presented. Also the grammar of a language for specifying rules of abstract model data structures transformation into real system detailed data structures is described. The results of applying the proposed method in the industrial technology are shown.The article is published in the authors’ wording.
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Empirical Modeling of Heating Element Power for the Czochralski Crystallization Process
Directory of Open Access Journals (Sweden)
Magnus Komperød
2010-01-01
Full Text Available The Czochralski (CZ crystallization process is used to produce monocrystalline silicon. Monocrystalline silicon is used in solar cell wafers and in computers and electronics. The CZ process is a batch process, where multicrystalline silicon is melted in a crucible and later solidifies on a monocrystalline seed crystal. The crucible is heated using a heating element where the power is manipulated using a triode for alternating current (TRIAC. As the electric resistance of the heating element increases by increased temperature, there are significant dynamics from the TRIAC input signal (control system output to the actual (measured heating element power. The present paper focuses on empirical modeling of these dynamics. The modeling is based on a dataset logged from a real-life CZ process. Initially the dataset is preprocessed by detrending and handling outliers. Next, linear ARX, ARMAX, and output error (OE models are identfied. As the linear models do not fully explain the process' behavior, nonlinear system identification is applied. The Hammerstein-Wiener (HW model structure is chosen. The final model identified is a Hammerstein model, i.e. a HW model with nonlinearity at the input, but not at the output. This model has only one more identified parameter than the linear OE model, but still improves the optimization criterion (mean squared ballistic simulation errors by a factor of six. As there is no nonlinearity at the output, the dynamics from the prediction error to the model output are linear, which allows a noise model to be added. Comparison of a Hammerstein model with noise model and the linear ARMAX model, both optimized for mean squared one-step-ahead prediction errors, shows that this optimization criterion is 42% lower for the Hammerstein model. Minimizing the number of parameters to be identified has been an important consideration throughout the modeling work.
A Study of Thermal Contact using Nonlinear System Identification Models
Directory of Open Access Journals (Sweden)
M. H. Shojaeefard
2008-01-01
Full Text Available One interesting application of system identification method is to identify and control the heat transfer from the exhaust valve to the seat to keep away the valve from being damaged. In this study, two co-axial cylindrical specimens are used as exhaust valve and its seat. Using the measured temperatures at different locations of the specimens and with a semi-analytical method, the temperature distribution of the specimens is calculated and consequently, the thermal contact conductance is calculated. By applying the system identification method and having the temperatures at both sides of the contact surface, the temperature transfer function is calculated. With regard to the fact that the thermal contact has nonlinear behavior, two nonlinear black-box models called nonlinear ARX and NLN Hammerstein-Wiener models are taken for accurate estimation. Results show that the NLN Hammerstein-Wiener models with wavelet network nonlinear estimator is the best.
Nonlinear State Space Modeling and System Identification for Electrohydraulic Control
Directory of Open Access Journals (Sweden)
Jun Yan
2013-01-01
Full Text Available The paper deals with nonlinear modeling and identification of an electrohydraulic control system for improving its tracking performance. We build the nonlinear state space model for analyzing the highly nonlinear system and then develop a Hammerstein-Wiener (H-W model which consists of a static input nonlinear block with two-segment polynomial nonlinearities, a linear time-invariant dynamic block, and a static output nonlinear block with single polynomial nonlinearity to describe it. We simplify the H-W model into a linear-in-parameters structure by using the key term separation principle and then use a modified recursive least square method with iterative estimation of internal variables to identify all the unknown parameters simultaneously. It is found that the proposed H-W model approximates the actual system better than the independent Hammerstein, Wiener, and ARX models. The prediction error of the H-W model is about 13%, 54%, and 58% less than the Hammerstein, Wiener, and ARX models, respectively.
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
Modeling diffuse pollution with a distributed approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2002-01-01
The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
MODULAR APPROACH WITH ROUGH DECISION MODELS
Directory of Open Access Journals (Sweden)
Ahmed T. Shawky
2012-09-01
Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications.However, rough decision models suffer the high computational complexity when dealing with datasets ofhuge size. In this research we propose a new rough decision model that allows making decisions based onmodularity mechanism. According to the proposed approach, large-size datasets can be divided intoarbitrary moderate-size datasets, then a group of rough decision models can be built as separate decisionmodules. The overall model decision is computed as the consensus decision of all decision modulesthrough some aggregation technique. This approach provides a flexible and a quick way for extractingdecision rules of large size information tables using rough decision models.
Modular Approach with Rough Decision Models
Directory of Open Access Journals (Sweden)
Ahmed T. Shawky
2012-10-01
Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications.However, rough decision models suffer the high computational complexity when dealing with datasets ofhuge size. In this research we propose a new rough decision model that allows making decisions based onmodularity mechanism. According to the proposed approach, large-size datasets can be divided intoarbitrary moderate-size datasets, then a group of rough decision models can be built as separate decisionmodules. The overall model decision is computed as the consensus decision of all decision modulesthrough some aggregation technique. This approach provides a flexible and a quick way for extractingdecision rules of large size information tables using rough decision models.
Modeling approach suitable for energy system
Energy Technology Data Exchange (ETDEWEB)
Goetschel, D. V.
1979-01-01
Recently increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion systems-modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.
Piecewise-polynomial and cascade models of predistorter for linearization of power amplifier
2012-01-01
To combat non-linear signal distortions in a power amplifier we suggest using predistorter with cascade structure in which first and second nodes have piecewise-polynomial and polynomial models. On example of linearizing the Winner–Hammerstein amplifier model we demonstrate that cascade structure of predistorter improves precision of amplifier’s linearization. To simplify predistorter’s synthesis the degree of polynomial model used in first node should be moderate, while precision should be i...
Simplified modeling and generalized predictive position control of an ultrasonic motor.
Bigdeli, Nooshin; Haeri, Mohammad
2005-04-01
Ultrasonic motors (USM's) possess heavy nonlinear and load dependent characteristics such as dead-zone and saturation reverse effects, which vary with driving conditions. In this paper, behavior of an ultrasonic motor is modeled using Hammerstein model structure and experimental measurements. Also, model predictive controllers are designed to obtain precise USM position control. Simulation results indicate improved performance of the motor for both set point tracking and disturbance rejection.
Stormwater infiltration trenches: a conceptual modelling approach.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2009-01-01
In recent years, limitations linked to traditional urban drainage schemes have been pointed out and new approaches are developing introducing more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage System and they include practices such as infiltration and storage tanks in order to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in literature, some gaps are still present about infiltration facilities mainly dependent on the complexity of the involved physical processes. In this study, a simplified conceptual modelling approach for the simulation of the infiltration trenches is presented. The model enables to assess the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as benchmark. The model performed better compared to other approaches considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon on such infiltration structures.
Challenges in structural approaches to cell modeling.
Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A
2016-07-31
Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.
Building Water Models, A Different Approach
Izadi, Saeed; Onufriev, Alexey V
2014-01-01
Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models - currently, the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in 2D parameter space of key lowest multipole moments of the model, to find best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...
Towards new approaches in phenological modelling
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages is based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, in phenological models still a number of parameters need to be optimised on observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). Limiting, for a fundamental improvement of these models, is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both, the validation of currently used phenological models as well as the development of mechanistic approaches. In order to develop this kind of models, changes of metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot-study on sweet cherries. Our suggested methodology is not only limited to the flowering of fruit trees, it can be also applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
Nonlinear Model Algorithmic Control of a pH Neutralization Process
Institute of Scientific and Technical Information of China (English)
ZOU Zhiyun; YU Meng; WANG Zhizhen; LIU Xinghong; GUO Yuqing; ZHANG Fengbo; GUO Ning
2013-01-01
Control of pH neutralization processes is challenging in the chemical process industry because of their inherent strong nonlinearity.In this paper,the model algorithmic control (MAC) strategy is extended to nonlinear processes using Hammerstein model that consists of a static nonlinear polynomial function followed in series by a linear impulse response dynamic element.A new nonlinear Hammerstein MAC algorithm (named NLH-MAC) is presented in detail.The simulation control results of a pH neutralization process show that NLH-MAC gives better control performance than linear MAC and the commonly used industrial nonlinear propotional plus integral plus derivative (PID) controller.Further simulation experiment demonstrates that NLH-MAC not only gives good control response,but also possesses good stability and robustness even with large modeling errors.
Modelling Coagulation Systems: A Stochastic Approach
Ryazanov, V V
2011-01-01
A general stochastic approach to the description of coagulating aerosol system is developed. As the object of description one can consider arbitrary mesoscopic values (number of aerosol clusters, their size etc). The birth-and-death formalism for a number of clusters can be regarded as a partial case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
A Multiple Model Approach to Modeling Based on LPF Algorithm
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
Input-output data fitting methods are often used for unknown-structure nonlinear system modeling. Based on model-on-demand tactics, a multiple model approach to modeling for nonlinear systems is presented. The basic idea is to find out, from vast historical system input-output data sets, some data sets matching with the current working point, then to develop a local model using Local Polynomial Fitting (LPF) algorithm. With the change of working points, multiple local models are built, which realize the exact modeling for the global system. By comparing to other methods, the simulation results show good performance for its simple, effective and reliable estimation.``
Towards a Multiscale Approach to Cybersecurity Modeling
Energy Technology Data Exchange (ETDEWEB)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties--- connectivity, distance, and centrality--- for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of \\emph{multiscale} graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
Post-16 Biology--Some Model Approaches?
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
Decomposition approach to model smart suspension struts
Song, Xubin
2008-10-01
Model and simulation study is the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology to build models for application of smart struts for vehicle suspension control development. The modeling approach is based on decomposition of the testing data. Per the strut functions, the data is dissected according to both control and physical variables. Then the data sets are characterized to represent different aspects of the strut working behaviors. Next different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, the model optimization can be facilitated in comparison to a traditional approach to find out a global optimum set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture macro-behaviors of these struts.
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and cross validation based on leave-one-out. This characteristic allowed more parsimonious models to be chosen and more GEI pattern retained on the first two components. The resulting model chosen by posterior distribution of singular value was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible interval for AMMI biplot plus the choice of AMMI model based on direct posterior
A Bayesian Shrinkage Approach for AMMI Models.
Directory of Open Access Journals (Sweden)
Carlos Pereira da Silva
Full Text Available Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces little shrinkage of the singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. Bayesian shrinkage AMMI models, on the other hand, have difficulty with the credible intervals for model parameters but produce stronger shrinkage of the principal components, converging to GE matrices with more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components, and resulted in selected models similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The model chosen from the posterior distribution of the singular values was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct
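The non-Bayesian core of the AMMI model above (additive main effects plus an SVD of the double-centered interaction residuals) can be sketched in a few lines; the Bayesian shrinkage variants discussed in the abstract act on the singular values of this same decomposition. The data and the number of retained terms below are illustrative, not from the paper:

```python
import numpy as np

def ammi_decompose(Y, k):
    """Classical AMMI: additive main effects plus SVD of the GEI residuals.

    Y : genotype-by-environment matrix of mean responses.
    k : number of multiplicative (principal component) terms to retain.
    Returns the fitted matrix and the singular values of the residuals.
    """
    mu = Y.mean()
    g = Y.mean(axis=1) - mu            # genotype main effects
    e = Y.mean(axis=0) - mu            # environment main effects
    additive = mu + g[:, None] + e[None, :]
    R = Y - additive                   # double-centered GEI residual matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    fitted = additive + (U[:, :k] * s[:k]) @ Vt[:k]
    return fitted, s

rng = np.random.default_rng(0)
Y = rng.normal(5.0, 1.0, size=(10, 6))   # 10 genotypes x 6 environments (toy data)
fit2, sv = ammi_decompose(Y, k=2)
```

Because R is double-centered, its rank is at most min(g, e) - 1, so retaining that many terms reproduces Y exactly; model selection amounts to choosing a smaller k, which is where the shrinkage criteria come in.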
Scientific Theories, Models and the Semantic Approach
Directory of Open Access Journals (Sweden)
Décio Krause
2007-12-01
Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen’s modal interpretation of quantum mechanics and Skolem’s relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory, and we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.
Multiscale Model Approach for Magnetization Dynamics Simulations
De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias
2016-01-01
Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate the spin wave transmission across the regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...
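The multiscale solver itself cannot be reconstructed from the abstract, but the equation it evaluates, the Landau-Lifshitz-Gilbert (LLG) equation, can be illustrated for a single macrospin in a static field. The dimensionless form, field, damping constant, and step size below are assumptions for illustration only:

```python
import numpy as np

def llg_step(m, H, alpha, dt):
    """One explicit Euler step of the dimensionless LLG equation
    dm/dt = -m x H - alpha * m x (m x H), renormalized so |m| = 1."""
    mxH = np.cross(m, H)
    dm = -mxH - alpha * np.cross(m, mxH)
    m = m + dt * dm
    return m / np.linalg.norm(m)

H = np.array([0.0, 0.0, 1.0])            # static field along z (assumed units)
m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
for _ in range(5000):
    m = llg_step(m, H, alpha=0.1, dt=0.01)
# Gilbert damping relaxes the magnetization toward the field direction
# while the precession term rotates it about the field.
```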
Continuum modeling an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuro imaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly...... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro...... a generalization theoretical framework centered around measures of model generalization error. - Only few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. - Exemplification of the proposed generalization theoretical framework...
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then,several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: Discusses mathematical models in the context of actual applications such as electrowetting Includes unique material on fluid flow near structured surfaces and phase change phenomena Shows readers how to solve modeling problems related to microscale multiphase flows Interfacial Fluid Me...
Systematic approach to MIS model creation
Directory of Open Access Journals (Sweden)
Macura Perica
2004-01-01
Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systematic approach), we formulate a model of a marketing information system. The research was based on the basic characteristics of the systematic approach and of the marketing system. The informational basis for managing the marketing system, i.e., the marketing instruments, was presented by listing the information most important for decision making under each individual marketing-mix instrument. In the projected model of the marketing information system, the information listed in this way forms the basis for establishing databases of product, price, distribution, and promotion. The paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explicating the elements of its structure (environment, databases, operators and analysts of the information system, and decision makers, i.e., managers), along with the input, process, output, feedback, and the relations between these elements that are necessary for its optimal functioning. In addition, the basic elements for implementing the model in a business system are given, as well as the conditions for its efficient functioning and development.
BLIND IDENTIFICATION OF A CLASS OF NONLINEAR SYSTEMS WITH CYCLOSTATIONARY INPUT
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
This letter deals with blind identification of a nonlinear discrete Hammerstein system whose input signal is cyclostationary. The first-order moment of this specific input, together with the inverse nonlinear mapping of the Hammerstein model, is used to establish a relationship between the system output and the system parameters, which yields an approach to identifying the system blindly. Simulation results demonstrate the effectiveness of this approach to blind identification of a class of nonlinear systems.
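The blind method above relies on cyclostationary input moments that the abstract does not specify, but the Hammerstein structure being identified (a static polynomial nonlinearity followed by an FIR filter) can be sketched with an ordinary, non-blind least-squares fit. The system coefficients, noise level, and over-parameterized regressor below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
u = rng.normal(size=N)

# True Hammerstein system: static polynomial followed by an FIR filter
x = u + 0.5 * u**2                      # nonlinear block (coefficients unknown to the fit)
h = np.array([1.0, 0.6, -0.3])          # FIR linear block
y = np.convolve(x, h)[:N] + 0.01 * rng.normal(size=N)

# Over-parameterized linear regression:
#   y(t) = sum over k, p of theta[k, p] * u(t - k) ** p
# Each column of Phi is a delayed power of the input.
K, P = 3, 2
cols = [np.concatenate([np.zeros(k), u[:N - k]]) ** p
        for k in range(K) for p in range(1, P + 1)]
Phi = np.column_stack(cols)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta
```

Each estimated coefficient is a product of an FIR tap and a polynomial coefficient (e.g. the lag-0, power-1 entry recovers h[0] * 1.0); the paper's recursive and blind variants estimate the same kind of parameterization from input statistics alone.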
Regularization of turbulence - a comprehensive modeling approach
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow, at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent manner. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling of turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain; the problem of the lack of data can then be mitigated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES
Directory of Open Access Journals (Sweden)
H. Sadeq
2016-06-01
Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain; the problem of the lack of data can then be mitigated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
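Per grid cell, and under Gaussian noise assumptions, the Bayesian merging idea above reduces to precision-weighted averaging of the input heights, with an optional prior term (such as the entropy-based smooth-roof estimate). The height grids and variances below are invented, and this is a minimal sketch rather than the paper's implementation:

```python
import numpy as np

def merge_dsm(z1, var1, z2, var2, z_prior=None, var_prior=None):
    """Per-cell Bayesian fusion of two DSM height grids under Gaussian noise.
    The posterior mean is the precision-weighted average of the inputs; an
    optional prior (e.g. a smoothness-based roof estimate) adds a third term."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    num, den = w1 * z1 + w2 * z2, w1 + w2
    if z_prior is not None:
        w0 = 1.0 / var_prior
        num, den = num + w0 * z_prior, den + w0
    return num / den, 1.0 / den        # posterior mean and posterior variance

z1 = np.array([[10.2, 10.4], [10.1, 10.3]])   # heights from sensor A (assumed var 0.4)
z2 = np.array([[10.0, 10.6], [10.2, 10.1]])   # heights from sensor B (assumed var 0.2)
z, var = merge_dsm(z1, 0.4, z2, 0.2)
```

The posterior variance is always smaller than either input variance, which is the formal sense in which merging "improves the quality of the DSMs".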
A new approach for Bayesian model averaging
Institute of Scientific and Technical Information of China (English)
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments with three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
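The key algorithmic idea, removing the sum-to-one constraint so that an unconstrained limited-memory quasi-Newton method can be applied, can be sketched with a softmax reparameterization of the weights. The Gaussian mixture form, the single shared spread parameter, and the synthetic two-member ensemble below are simplifying assumptions, not the paper's exact likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def bma_neg_loglik(params, forecasts, obs):
    """BMA negative log-likelihood with softmax-reparameterized weights,
    so the optimizer needs no sum-to-one constraint (cf. BMA-BFGS)."""
    k = forecasts.shape[1]
    w = np.exp(params[:k])
    w /= w.sum()                               # weights on the simplex
    sigma = np.exp(params[k])                  # shared member spread (assumption)
    # mixture of Gaussians centered on each member's forecast
    dens = np.exp(-0.5 * ((obs[:, None] - forecasts) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    return -np.sum(np.log(dens @ w + 1e-300))

rng = np.random.default_rng(2)
obs = rng.normal(size=300)
forecasts = np.column_stack([obs + rng.normal(0, 0.3, 300),   # accurate member
                             obs + rng.normal(0, 2.0, 300)])  # poor member
res = minimize(bma_neg_loglik, np.zeros(3), args=(forecasts, obs),
               method='L-BFGS-B')
w = np.exp(res.x[:2])
w /= w.sum()
```

With this parameterization the optimizer can move freely in all coordinates while the recovered weights still sum to one and favor the accurate ensemble member.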
AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING
MARCELO KRIEGER
1983-01-01
Despite wide recognition of the quality of forecasts obtained by applying ARIMA models to univariate time series, their use has remained restricted by the lack of automatic, computerized procedures. In this work this problem is discussed and an algorithm is proposed.
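The automation idea can be illustrated with a minimal order-selection loop: fit candidate AR(p) models by least squares and pick the order minimizing an information criterion such as AIC. This is a toy stand-in for full Box & Jenkins ARIMA identification, run on a simulated AR(2) series:

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns coefficients and residual variance."""
    Y = y[p:]
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    return coef, resid.var()

def auto_ar_order(y, max_p=6):
    """Automatic order choice: minimize AIC = n * log(sigma^2) + 2p over p."""
    n = len(y)
    aics = [n * np.log(fit_ar(y, p)[1]) + 2 * p for p in range(1, max_p + 1)]
    return 1 + int(np.argmin(aics))

rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(2, 500):                 # simulate a stable AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
```

On data like this, AIC typically recovers an order at or near the true value of 2; a production procedure would also automate differencing and MA-term selection, which this sketch omits.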
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to
Modeling for fairness: A Rawlsian approach.
Diekmann, Sven; Zwart, Sjoerd D
2014-06-01
In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, and this justification is consistent with each stakeholder's beliefs. For supporting fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that for reaching an overlapping design consensus, communication about properties of and values related to a model is required.
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the subject-features of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Offered are two steps for singling out a particular stage and the algorithm for developing an integrative model for it. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic predicting of pedagogical phenomena.
Nuclear level density: Shell-model approach
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
Modeling Social Annotation: a Bayesian Approach
Plangprasopchok, Anon
2008-01-01
Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make best use of socially-generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In a previous work, we introduced a simple probabilistic model that takes interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Energy Technology Data Exchange (ETDEWEB)
Carl D. Palmer; Robert W. Smith; Travis L. McLing
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations based on a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
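As a concrete point of comparison, one of the conventional correlation-based geothermometers mentioned above fits in a single line. The coefficients are the commonly quoted Fournier quartz (no steam loss) values, assumed here for illustration, and the second call shows how boiling-induced silica concentration biases the estimate high, one of the error modes the abstract quantifies:

```python
import math

def quartz_geothermometer(sio2):
    """Fournier-type quartz geothermometer (no steam loss correction).
    sio2 is dissolved silica in mg/kg; returns temperature in deg C.
    Coefficients are the widely quoted empirical values (assumed here)."""
    return 1309.0 / (5.19 - math.log10(sio2)) - 273.15

t_deep = quartz_geothermometer(300.0)          # unboiled reservoir fluid
# Boiling off a 20% steam fraction concentrates silica in the sampled
# liquid, so the same reservoir reads hotter than it is:
t_biased = quartz_geothermometer(300.0 / 0.8)
```

The multicomponent equilibrium approach in the abstract is designed precisely to expose and correct biases of this kind that single-solute correlations cannot see.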
A semiparametric approach to physiological flow models.
Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R
1989-08-01
By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (CT, total amount of drug in the tissue (T) divided by the volume of T) from the T-th such tissue is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral ∫0^∞ F(t) dt is the steady-state ratio of CT to CA, and the point F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to CA data, yielding predicted values ĈA, and, second, the convolution integral of ĈA with F(t) be fitted to CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
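The central relation, CT expressed as the convolution of CA with a disposition function F(t), is easy to sketch numerically. The arterial curve and monoexponential F below are hypothetical, chosen only so that the integral of F (the steady-state CT/CA ratio) has a known value:

```python
import numpy as np

dt = 0.1
t = np.arange(0, 20, dt)

# Hypothetical arterial concentration curve and disposition function
CA = np.exp(-0.3 * t) - np.exp(-3.0 * t)
F = 0.8 * np.exp(-0.4 * t)               # F(0) = clearance_in / V_T, units 1/time

# CT(t) = (CA * F)(t): discrete convolution as a Riemann sum
CT = np.convolve(CA, F)[:len(t)] * dt

# The integral of F approximates the steady-state CT/CA ratio
# (here 0.8 / 0.4 = 2.0 analytically).
ratio = np.sum(F) * dt
```

The empirical estimation step described in the abstract runs this in reverse: given measured CA and CT, F is recovered by deconvolution rather than assumed.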
Evaluating face trustworthiness: a model based approach.
Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N
2008-06-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic, strongest for faces at both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Full Text Available Building an intercultural society requires awareness in all social spheres, and education plays a central role. That role is fundamental: education must create spaces that form people with the virtues and capacities to live together in multicultural and socially diverse (and sometimes unequal) contexts in an increasingly globalized and interconnected world, and it must foster shared feelings of civic belonging to the neighborhood, city, region and country. It should also equip people with concern and critical judgment regarding marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, while motivating them to work for the welfare and transformation of these settings. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analyzing their impact on the educational contexts where they are applied.
Energy Technology Data Exchange (ETDEWEB)
Chandrasekhar Potluri; Madhavi Anugolu; Marco P. Schoen; D. Subbaram Naidu
2013-08-01
In this work, an array of three surface electromyography (sEMG) sensors is used to acquire muscle extension and contraction signals from 18 healthy test subjects. The skeletal muscle force is estimated from the acquired sEMG signals using a nonlinear Wiener-Hammerstein model that relates the two signals in a dynamic fashion. The model is obtained using a System Identification (SI) algorithm. The force models obtained for each sensor are fused using a proposed fuzzy logic concept, with the intent to improve the force estimation accuracy and resilience to sensor failure or misalignment. For the fuzzy logic inference system, the sEMG entropy, the relative error, and the correlation of the force signals are considered in defining the membership functions. The proposed fusion algorithm yields an average 92.49% correlation between the actual force and the overall estimated force output. In addition, the proposed fusion-based approach is implemented on a test platform. Experiments indicate an improvement in finger/hand force estimation.
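The paper's fuzzy inference system is not specified in enough detail to reproduce, but the underlying fusion idea, weighting each sensor's force model by a confidence measure such as correlation with a reference force signal, can be sketched crudely. The signals, noise levels, and weighting rule below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fuse_estimates(estimates, reference):
    """Confidence-weighted fusion of per-sensor force estimates.
    A crude stand-in for a fuzzy inference system: each channel's weight is
    its (clipped, normalized) correlation with a reference force signal."""
    weights = np.array([max(np.corrcoef(e, reference)[0, 1], 0.0)
                        for e in estimates])
    weights /= weights.sum()
    return weights @ np.vstack(estimates), weights

rng = np.random.default_rng(4)
force = np.sin(np.linspace(0, 6 * np.pi, 400)) ** 2          # true finger force
# Three per-sensor force estimates with increasing noise (a misaligned sensor
# would resemble the noisiest channel):
est = [force + rng.normal(0, s, 400) for s in (0.05, 0.1, 0.8)]
fused, w = fuse_estimates(est, force)
```

Because the noisy channel receives a small weight, the fused output degrades gracefully when one sensor fails, which is the resilience property the abstract claims for the fuzzy fusion.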
A Bayesian modeling approach for generalized semiparametric structural equation models.
Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing
2013-10-01
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types: continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
An integrated approach to permeability modeling using micro-models
Energy Technology Data Exchange (ETDEWEB)
Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)
2008-10-15
An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. Model predictions were in good agreement with the experimental data. 8 refs., 17 figs.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...
Behavioral modelling and predistortion of wideband wireless transmitters
Ghannouchi, Fadhel M; Helaoui, Mohamed
2015-01-01
Covers theoretical and practical aspects related to the behavioral modelling and predistortion of wireless transmitters and power amplifiers. It includes simulation software that enables the users to apply the theory presented in the book. In the first section, the reader is given the general background of nonlinear dynamic systems along with their behavioral modelling from all its aspects. In the second part, a comprehensive compilation of behavioral models formulations and structures is provided including memory polynomial based models, box oriented models such as Hammerstein-based and Wiene
Connectivity of channelized reservoirs: a modelling approach
Energy Technology Data Exchange (ETDEWEB)
Larue, David K. [ChevronTexaco, Bakersfield, CA (United States); Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)
2006-07-01
Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour between net:gross and connectivity defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as...
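The 'S-curve' relationship described above can be sketched with a logistic function; the midpoint and steepness values below are illustrative shape parameters, not values fitted in the study.

```python
import math

def connectivity(net_gross, midpoint=0.2, steepness=40.0):
    """Illustrative logistic 'S-curve' of 3D sandbody connectivity (%)
    versus net:gross ratio: near 0 well below the cascade zone, rising
    steeply around the midpoint, and approaching ~100% above roughly
    30% net:gross. midpoint and steepness are hypothetical."""
    return 100.0 / (1.0 + math.exp(-steepness * (net_gross - midpoint)))
```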
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows one to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
ALREST High Fidelity Modeling Program Approach
2011-05-18
Gases and mixtures of Redlich-Kwong and Peng-Robinson fluids; assumed-PDF model based on the k-ε-g model in the NASA/LaRC Vulcan code; level-set model... Potential attractiveness of liquid hydrocarbon engines for boost applications; propensity of hydrocarbon engines for combustion instability; air...
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
National Research Council Canada - National Science Library
Eser ÖRDEM
2013-01-01
Abstract This study intends to propose Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings...
A model-based multisensor data fusion knowledge management approach
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
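The support-or-refute idea above can be reduced to a toy tally: each observation either supports or refutes an element of the model with some weight, and the thesis is judged by the net score. This is a deliberately simple stand-in for the paper's mapping algorithms; the function and data are hypothetical.

```python
def evaluate_thesis(observations, threshold=0.0):
    """observations: list of (supports: bool, weight: float) pairs,
    one per sensor datum mapped onto the model. Returns the thesis
    verdict (True/False) and the net weighted support score."""
    score = sum(w if supports else -w for supports, w in observations)
    return score > threshold, score
```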
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f
Modelling the World Wool Market: A Hybrid Approach
2007-01-01
We present a model of the world wool market that merges two modelling traditions: the partial-equilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...
An algebraic approach to the Hubbard model
de Leeuw, Marius
2015-01-01
We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
MUHAMMAD ZAKA EMAD
2017-09-01
Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by Canadian hard-rock metal mines. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. In the end, results of a numerical model parametric study are shown and discussed.
Regularization of turbulence - a comprehensive modeling approach
Geurts, Bernard J.
2011-01-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl
Measuring equilibrium models: a multivariate approach
Directory of Open Access Journals (Sweden)
Nadji RAHMANIA
2011-04-01
This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
A graphical approach to analogue behavioural modelling
Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto
2007-01-01
In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...
A geometrical approach to structural change modeling
Stijepic, Denis
2013-01-01
We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
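Linguistic variables of the kind mentioned above are commonly represented with fuzzy membership functions; a minimal sketch (not taken from the article) is the triangular membership function, a standard building block for terms such as "about $20":

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] with peak at b:
    returns 0 outside the support and 1 exactly at x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```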
A New Approach for Magneto-Static Hysteresis Behavioral Modeling
DEFF Research Database (Denmark)
Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio
2016-01-01
In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems to be able to correctly reproduce all the possible B-H paths with low computational cost. By contrast, the approach proposed...... achieved when comparing the measured and simulated results....
Nucleon Spin Content in a Relativistic Quark Potential Model Approach
Institute of Scientific and Technical Information of China (English)
DONG YuBing; FENG QingGuo
2002-01-01
Based on a relativistic quark model approach with an effective potential U(r) = (ac/2)(1 + γ0)r2, the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to deal with the center of mass correction in the relativistic quark potential model approach are discussed.
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
An approach for activity-based DEVS model specification
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2016-01-01
activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
Random matrix model approach to chiral symmetry
Verbaarschot, J J M
1996-01-01
We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to the valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing $n$ levels on average is suppressed by a factor $(\log n)/\pi^2 n$. An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.
Machine Learning Approaches for Modeling Spammer Behavior
Islam, Md Saiful; Islam, Md Rafiqul
2010-01-01
Spam is commonly known as unsolicited or unwanted email messages on the Internet that pose a potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever-increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually be less successful at modeling spammer behavior, as spammers constantly change their tricks to circumvent these filters. The evasive tactics that the spammer uses are patterns, and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns with well-known classification algorithms such as the Naïve Bayesian classifier (Naïve Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, a considerable performance enhancement compared to similar spammer behavior modeling research.
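A minimal multinomial Naïve Bayes classifier with Laplace smoothing, one of the algorithms the abstract names, can be sketched as follows (toy data; not the paper's implementation):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label) pairs with label 'spam' or 'ham'.
    Returns per-class word counts and per-class document counts."""
    counts = {'spam': Counter(), 'ham': Counter()}
    ndocs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        ndocs[label] += 1
    return counts, ndocs

def classify(tokens, counts, ndocs):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    vocab = set(counts['spam']) | set(counts['ham'])
    scores = {}
    for label in ('spam', 'ham'):
        total = sum(counts[label].values())
        score = math.log(ndocs[label] / sum(ndocs.values()))
        for t in tokens:
            # Laplace (add-one) smoothing handles unseen words
            score += math.log((counts[label][t] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```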
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
Second Quantization Approach to Stochastic Epidemic Models
Mondaini, Leonardo
2015-01-01
We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.
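The time evolution of the mean compartment sizes in an SIR-type model with a chronic compartment can be sketched by Euler-integrating the mean-field equations; the rates and branching fraction below are illustrative, not the paper's fitted values:

```python
def simulate(s0, i0, beta, gamma, p_chronic, steps, dt=0.01):
    """Euler integration of mean-field SIR-type dynamics with a
    chronic compartment: infected individuals leave at rate gamma,
    a fraction p_chronic becoming chronic carriers and the rest
    recovering. All parameter values are hypothetical."""
    s, i, r, c = s0, i0, 0.0, 0.0
    n = s0 + i0
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # mean new infections
        leave = gamma * i * dt            # mean departures from I
        s -= new_inf
        i += new_inf - leave
        r += (1 - p_chronic) * leave
        c += p_chronic * leave
    return s, i, r, c
```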
Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA's R-LINE (a research dispersion modelling tool under development by the US EPA's Office of Research and Development) and CERC's ADMS model (ADMS-Urban). In R-LINE the physical features reveal
Flipped models in Trinification: A Comprehensive Approach
Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo
2016-01-01
By considering the 3-3-1 and the left-right symmetric models as low energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered, as are the left-right symmetric model and its two flipped variants. For these models, the couplings of the $Z'$ bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the $Z'$ mass and the mixing angle between $Z$ and the new gauge bosons $Z'$ are imposed. The general results call for very small mixing angles, of the order of $10^{-3}$ radians, and $M_{Z'} > 2.5$ TeV.
Lightweight approach to model traceability in a CASE tool
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
The term "model-driven" is by no means a new buzzword within the ranks of the system development community. Nevertheless, the ever-increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
Approaching models of nursing from a postmodernist perspective.
Lister, P
1991-02-01
This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.
Manufacturing Excellence Approach to Business Performance Model
Directory of Open Access Journals (Sweden)
Jesus Cruz Alvarez
2015-03-01
Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools for enhancing productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and to propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.
A Bayesian Model Committee Approach to Forecasting Global Solar Radiation
Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril
2012-01-01
This paper proposes to use a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, Autoregressive Moving Average (ARMA) and Neural Network (NN) models, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee. Hence, each model's predictions are weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one hour ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. First results show an improvement brought by this approach.
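The committee idea can be sketched as follows: each model receives a posterior-like probability from its past forecast errors (Gaussian likelihood with equal priors, a simplification of the paper's Bayesian inference), and forecasts are combined by these weights. All numbers are hypothetical.

```python
import math

def committee_weights(errors, sigma=1.0):
    """errors: one list of past forecast errors per model.
    Returns normalized posterior-like probabilities under a
    Gaussian likelihood with equal priors."""
    logliks = [-sum(e * e for e in errs) / (2 * sigma ** 2)
               for errs in errors]
    m = max(logliks)  # subtract max for numerical stability
    w = [math.exp(l - m) for l in logliks]
    s = sum(w)
    return [x / s for x in w]

def combine(forecasts, weights):
    """Probability-weighted committee forecast."""
    return sum(w * f for w, f in zip(weights, forecasts))
```

The model with smaller historical errors dominates the combination, without ever being assigned probability exactly one.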
MDA based-approach for UML Models Complete Comparison
Chaouni, Samia Benabdellah; Mouline, Salma
2011-01-01
If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various models. However, previous approaches have not correctly handled semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic and structural comparison aspects. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) correspondences extracted in the comparison phase. For implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models to the correspondence model.
A consortium approach to glass furnace modeling.
Energy Technology Data Exchange (ETDEWEB)
Chang, S.-L.; Golchert, B.; Petrick, M.
1999-04-20
Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.
Mixture modeling approach to flow cytometry data.
Boedigheimer, Michael J; Ferbas, John
2008-05-01
Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
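A toy analogue of such gate-free, model-based analysis is fitting a two-component Gaussian mixture to a single measurement channel by expectation-maximization. The sketch below keeps unit variances and equal mixing weights fixed for brevity; it is not the authors' procedure, and the data are hypothetical.

```python
import math

def em_2gauss(data, m1, m2, iters=100):
    """EM for a 1D two-component Gaussian mixture with unit variances
    and equal mixing weights: alternate soft assignment of points
    (E-step) and re-estimation of the two means (M-step)."""
    for _ in range(iters):
        resp = []
        for x in data:
            # log-likelihood difference between the two components
            d = ((x - m1) ** 2 - (x - m2) ** 2) / 2.0
            resp.append(1.0 / (1.0 + math.exp(d)))  # P(comp 1 | x)
        w = sum(resp)
        m1 = sum(r * x for r, x in zip(resp, data)) / w
        m2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - w)
    return m1, m2
```

Unlike a manual gate, every event contributes to both components with a soft weight, removing the analyst's subjective boundary.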
BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ
National Research Council Canada - National Science Library
Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono
2017-01-01
.... This study aims to identify company's business model using Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...
and Models: A Self-Similar Approach
Directory of Open Access Journals (Sweden)
José Antonio Belinchón
2013-01-01
equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).
Nonperturbative approach to the modified statistical model
Energy Technology Data Exchange (ETDEWEB)
Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)
1993-12-01
The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the (Q{bar Q}) and the non-self-conjugate (Q{bar q}) mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.
System Behavior Models: A Survey of Approaches
2016-06-01
A moving approach for the Vector Hysteron Model
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both the scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organization structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extant review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependency of the risks, which shall be helpful to managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
Quantum Machine and SR Approach: a Unified Model
Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro
2005-01-01
The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasoning and as a proof of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
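As a toy illustration of the ensemble idea (a sketch only; the reaction, parameter ranges and anchoring rule below are invented for the example, not taken from the project), one can sample many kinetic parameter sets that all reproduce the same reference steady state and then study how the members differ dynamically:

```python
import random

def sample_ensemble(n, s, x_ss, seed=0):
    """Ensemble modeling sketch: sample n kinetic parameter sets that
    all reproduce the same reference steady state x_ss of the toy model
        dx/dt = k1*s - k2*x
    by drawing k1 freely and anchoring k2 = k1*s/x_ss."""
    rng = random.Random(seed)
    models = []
    for _ in range(n):
        k1 = rng.uniform(0.1, 10.0)
        k2 = k1 * s / x_ss  # anchoring enforces the shared steady state
        models.append((k1, k2))
    return models

models = sample_ensemble(5, s=2.0, x_ss=1.0)
# Every member shares the steady state, yet relaxes at a different rate
# (1/k2), so the ensemble spans distinct dynamic behaviors.
relax_rates = [1.0 / k2 for _, k2 in models]
```

All members satisfy k1*s = k2*x_ss by construction; screening such an ensemble against dynamic data is what narrows the parameter space in the full approach.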
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body model.
A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING
ANTOULAS, AC; WILLEMS, JC
1993-01-01
The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re
A market model for stochastic smile: a conditional density approach
Zilber, A.
2005-01-01
The purpose of this paper is to introduce a new approach that allows us to construct no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate
Thermoplasmonics modeling: A Green's function approach
Baffou, Guillaume; Quidant, Romain; Girard, Christian
2010-10-01
We extend the discrete dipole approximation (DDA) and the Green's dyadic tensor (GDT) methods (previously dedicated to all-optical simulations) to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green's function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
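The steady-state temperature computation can be illustrated with the free-space thermal Green's function, under the simplifying assumption of point-like heat sources in a homogeneous medium (no substrate and no self-consistent coupling, unlike the full DDA/GDT method):

```python
import math

def temperature_rise(r, sources, kappa):
    """Steady-state temperature rise at point r (meters) from point-like
    heat sources [(position, power_in_watts), ...] in a homogeneous
    medium of thermal conductivity kappa (W/m/K), superposing the
    free-space thermal Green's function q / (4*pi*kappa*|r - r0|)."""
    dT = 0.0
    for pos, q in sources:
        dT += q / (4.0 * math.pi * kappa * math.dist(r, pos))
    return dT

# Two 100-nW nanoparticle heat sources 200 nm apart in water
# (kappa ~ 0.6 W/m/K); probe the midpoint between them.
sources = [((0.0, 0.0, 0.0), 100e-9), ((200e-9, 0.0, 0.0), 100e-9)]
dT = temperature_rise((100e-9, 0.0, 0.0), sources, kappa=0.6)
```

Summing contributions from many sources is exactly the collective-heating effect discussed in the abstract: each particle's neighbors raise its local temperature.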
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Full Text Available
Several research efforts have linked the academic program of Agricultural Business Management at the University De La Salle (Bogotá D.C.) to the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the "new rurality".
Coupling approaches used in atmospheric entry models
Gritsevich, M. I.
2012-09-01
While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with found meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, and what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How to couple both fragmentation and ablation effects in the model, taking the real size distribution of fragments into account? How to specify and speed up the recovery of recently fallen meteorites, without letting weathering affect the samples too much? How big is the ratio of the pre-atmospheric projectile to the terminal body in terms of their mass/volume? Which exact parameters besides initial mass define this ratio? More generally, how does an entering object affect the Earth's atmosphere and (if applicable) the Earth's surface? How to predict these impact consequences based on atmospheric trajectory data? How to describe atmospheric entry
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
Bayesian Approach to Neuro-Rough Models for Modelling HIV
Marwala, Tshilidzi
2007-01-01
This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov Chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian formulated rough set model trained using the Markov chain Monte Carlo method, and 62% obtained from a Bayesian formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high-resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than the original highly...
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations (i.e., immediate, unintentional assessments of the wrongness of actions or persons) play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure, the Moral Categorization Task, and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
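A minimal sketch of how a multinomial processing tree turns such parameters into predicted response probabilities is given below. The tree ordering and parameter names (I for Intentional Judgment, U for Unintentional Judgment, B for Response Bias) are a plausible reading of the abstract, not the authors' exact equations:

```python
def p_respond_wrong(I, U, B, prime_wrong, target_wrong):
    """Probability of responding 'morally wrong' on one trial of a
    hypothetical prime/target task, under a three-branch tree:
      1. with prob I, intentional judgment of the target drives the
         response (assumed accurate here);
      2. failing that, with prob U, the implicit evaluation of the
         prime leaks into the response;
      3. failing both, the participant guesses 'wrong' with bias B."""
    p = I * (1.0 if target_wrong else 0.0)
    p += (1 - I) * U * (1.0 if prime_wrong else 0.0)
    p += (1 - I) * (1 - U) * B
    return p

# A transgression prime followed by a neutral target: any 'wrong'
# response must come from prime leakage (U) or guessing (B).
p = p_respond_wrong(I=0.6, U=0.5, B=0.5, prime_wrong=True, target_wrong=False)
```

Fitting such a model means choosing (I, U, B) so the predicted probabilities match observed response frequencies across prime/target conditions.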
Continuous Molecular Fields Approach Applied to Structure-Activity Modeling
Baskin, Igor I
2013-01-01
The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.
A forward modeling approach for interpreting impeller flow logs.
Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T
2010-01-01
A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
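The core selection step (fit several candidate models, then pick the one that balances complexity against fit) can be sketched with Akaike's Information Criterion for least-squares fits. The candidate models and data here are toy stand-ins, not the paper's physical conductivity profiles:

```python
import math

def aic(rss, n, k):
    """AIC for a least-squares fit with k parameters:
    n*ln(RSS/n) + 2k (additive constants dropped)."""
    return n * math.log(rss / n) + 2 * k

def fit_constant(xs, ys):
    """Model 1: constant mean. Returns (RSS, parameter count)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys), 1

def fit_line(xs, ys):
    """Model 2: straight line by closed-form least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)), 2

# Noisy profile with a clear linear trend: the line should win on AIC.
xs = list(range(10))
ys = [2.0 * x + 1.0 + 0.1 * ((-1) ** x) for x in xs]
rss_c, k_c = fit_constant(xs, ys)
rss_l, k_l = fit_line(xs, ys)
best = "line" if aic(rss_l, len(xs), k_l) < aic(rss_c, len(xs), k_c) else "constant"
```

The paper's extra step, rejecting physically unrealistic candidates before the AIC comparison, would simply filter the candidate list first.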
An Adaptive Approach to Schema Classification for Data Warehouse Modeling
Institute of Scientific and Technical Information of China (English)
Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun
2007-01-01
Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with operational information systems structure and behavior. Existing DW modeling techniques suffer from the following major drawbacks: the data-driven approach requires high levels of expertise and neglects the requirements of end users, while the demand-driven approach lacks enterprise-wide vision and disregards existing models of underlying operational systems. In order to make up for those shortcomings, a method of classification of schema elements for DW modeling is proposed in this paper. We first put forward the vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into different subjects of the DW automatically. Benefiting from the result of the schema elements classification, designers can model and construct a DW more easily.
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
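Finding the energetically most favorable route through such a transition network is a standard shortest-path problem. The sketch below uses Dijkstra's algorithm over a hypothetical toy network, scoring a path by its summed barriers (a real analysis might instead minimize the highest single barrier along the path):

```python
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra over a transition network: nodes are intermediate
    states, edge weights are the barriers of the chemical steps
    linking them; returns (total_cost, path) of the cheapest route."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, barrier in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + barrier, nxt, path + [nxt]))
    return float("inf"), []

# Toy network: reactant R, intermediates I1/I2, product P
# (invented barrier heights, in kcal/mol).
network = {
    "R":  {"I1": 12.0, "I2": 18.0},
    "I1": {"P": 10.0},
    "I2": {"P": 2.0},
}
cost, route = cheapest_path(network, "R", "P")
```

Here the route through I2 wins despite its higher first barrier, which is exactly the kind of alternative mechanism the graph view makes easy to spot.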
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
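The search loop can be sketched as a minimal genetic algorithm over bitstrings, where each bit switches one candidate reaction on or off. The fitness function here is a toy stand-in; in the paper it would come from simulating the reaction-network dynamics:

```python
import random

def evolve(fitness, n_bits, pop_size=40, generations=60, seed=1):
    """Minimal genetic algorithm over bitstrings: tournament selection,
    one-point crossover, per-bit mutation. Each bit switches one
    candidate reaction on/off."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            child = [b ^ (rng.random() < 0.02) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in for "desired dynamics": reward reaction sets close to a
# hand-picked target subset (a real run would score simulated kinetics).
target = [1, 0, 1, 1, 0, 0, 1, 0]
score = lambda bits: -sum(b != t for b, t in zip(bits, target))
best = evolve(score, n_bits=len(target))
```

Swapping in a fitness that integrates the chemical rate equations and compares the resulting trajectories against the desired dynamical behavior recovers the paper's setup.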
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. A heterogeneous microbial... to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Similarly, PBM allow for a mathematical description of distributed cell properties within microbial populations. Cell total protein content distributions (a measure of cell mass) have been... This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight on the interplay between flow and the dynamics... ethanol and biomass throughout the reactor.
Hamiltonian approach to hybrid plasma models
Tronci, Cesare
2010-01-01
The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
Pattern-based approach for logical traffic isolation forensic modelling
CSIR Research Space (South Africa)
Dlamini, I
2009-08-01
Full Text Available The use of design patterns usually changes the approach of software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...
A semantic-web approach for modeling computing infrastructures
M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat
2013-01-01
This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
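The contrast between the two estimation approaches can be illustrated on a toy binomial risk model (invented data; the paper's decompression-sickness models are far more complex): the MLE is a single point, while a grid-approximated posterior yields a credible interval:

```python
import math

def binom_loglik(p, k, n):
    """Log-likelihood of k events in n trials at risk p."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

def posterior_grid(k, n, m=2000):
    """Grid-approximated posterior for a binomial risk parameter under
    a uniform prior; returns (grid, normalized posterior weights)."""
    grid = [(i + 0.5) / m for i in range(m)]
    w = [math.exp(binom_loglik(p, k, n)) for p in grid]
    z = sum(w)
    return grid, [wi / z for wi in w]

def credible_interval(grid, post, mass=0.95):
    """Central credible interval from the grid posterior."""
    lo_t, hi_t = (1 - mass) / 2, 1 - (1 - mass) / 2
    acc, lo, hi = 0.0, grid[0], grid[-1]
    for p, w in zip(grid, post):
        if acc < lo_t <= acc + w:
            lo = p
        if acc < hi_t <= acc + w:
            hi = p
        acc += w
    return lo, hi

k, n = 3, 100          # 3 DCS cases in 100 exposures (toy data)
mle = k / n            # maximum likelihood point estimate
lo, hi = credible_interval(*posterior_grid(k, n))
```

The interval [lo, hi] directly answers "with what probability does the risk parameter lie in this range", which is the kind of statement the abstract argues the maximum likelihood estimate alone cannot make.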
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
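A local level model, the structural time series building block named in the abstract (shown here without the seasonal component for brevity), can be filtered with a few lines of Kalman recursions; the monthly accident counts below are invented toy data:

```python
def local_level_filter(ys, q, r, a0=0.0, p0=1e6):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,      eps_t ~ N(0, r)
        mu_t = mu_{t-1} + xi_t,   xi_t  ~ N(0, q)
    Returns the filtered level estimates (diffuse prior via large p0)."""
    a, p = a0, p0
    levels = []
    for y in ys:
        p = p + q                  # predict the level variance
        k = p / (p + r)            # Kalman gain
        a = a + k * (y - a)        # update with the new observation
        p = (1 - k) * p
        levels.append(a)
    return levels

# Monthly accident counts drifting around a slowly moving level (toy data)
ys = [420, 433, 428, 441, 450, 447, 455, 462, 458, 470]
levels = local_level_filter(ys, q=4.0, r=25.0)
```

The variances q and r are the quantities a stepwise identification would estimate; adding the seasonal component means augmenting the state vector with seasonal terms in the same recursion.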
Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for a bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
An optimization approach to kinetic model reduction for combustion chemistry
Lebiedz, Dirk
2013-01-01
Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on the optimization of trajectories and show its applicability to realistic combustion models. Like most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...
Functional state modelling approach validation for yeast and bacteria cultivations.
Roeva, Olympia; Pencheva, Tania
2014-09-03
In this paper, the functional state modelling approach is validated for modelling the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This is a strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for a bacterial cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach.
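The genetic-algorithm parameter identification step described above can be illustrated with a minimal sketch: a real-coded GA fits a single specific growth rate to synthetic exponential-growth data. The data, parameter bounds and GA settings below are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(0)

# Hypothetical biomass measurements from an exponential growth phase,
# generated with a "true" specific growth rate mu = 0.4 (illustrative).
times = [0, 1, 2, 3, 4]
observed = [math.exp(0.4 * t) for t in times]

def sse(mu):
    """Sum of squared errors of the model x(t) = exp(mu * t); lower is better."""
    return sum((math.exp(mu * t) - y) ** 2 for t, y in zip(times, observed))

# Minimal real-coded genetic algorithm: truncation (elitist) selection
# plus Gaussian mutation, no crossover.
pop = [random.uniform(0.0, 1.0) for _ in range(30)]
for _ in range(60):
    pop.sort(key=sse)
    parents = pop[:10]                                   # keep the fittest third
    children = [p + random.gauss(0.0, 0.05) for p in parents for _ in range(2)]
    pop = parents + children

best = min(pop, key=sse)                                 # should approach 0.4
```

Because the parents survive each generation, the best estimate improves monotonically; real applications would add crossover and fit several kinetic parameters at once.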
Molecular Modeling Approach to Cardiovascular Disease Targeting
Directory of Open Access Journals (Sweden)
Chandra Sekhar Akula,
2010-05-01
Full Text Available Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart disease, including arteriosclerosis (clogging of the arteries), stroke and myocardial infarction or heart attack. Studies have associated obesity and other components of the metabolic syndrome, cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has made it possible to significantly improve the tolerability profile of this group of drugs while maintaining high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ArgusLab, and the drug molecule was validated for this protein based on QSAR properties and CAChe through CADD.
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
Full Text Available For years, the idea of virtue was unfavorable among researchers: virtues were traditionally considered culture-specific and relativistic, and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been taken seriously by organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees of Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources has the largest total impact (0.844) on the virtuous organization.
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
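The naive least squares baseline described above can be sketched for the immigration-death model: simulate realizations with the Gillespie algorithm, then fit the deterministic mean solution n(t) = (k/d)(1 - exp(-d t)) by grid-search least squares over the death rate d. All parameter values, the time grid and the fixed immigration rate are illustrative assumptions, not the paper's settings.

```python
import math
import random

random.seed(1)

# Immigration-death process: immigration rate k, per-capita death rate d.
# (Illustrative parameter values; not those used in the paper.)
k_true, d_true = 10.0, 0.5

def gillespie(k, d, t_end, dt_grid):
    """One stochastic realization, sampled on a regular time grid."""
    grid = [i * dt_grid for i in range(int(t_end / dt_grid) + 1)]
    t, n, gi, out = 0.0, 0, 0, []
    while gi < len(grid):
        rate = k + d * n                        # total event rate
        t_next = t + random.expovariate(rate)
        while gi < len(grid) and grid[gi] <= t_next:
            out.append(n)                       # record state on the grid
            gi += 1
        t = t_next
        n += 1 if random.random() < k / rate else -1   # birth vs. death
    return grid, out

def lsq_d(times, counts, k):
    """Naive least squares: fit n(t) = (k/d)(1 - exp(-d t)) by grid search."""
    def sse(d):
        return sum(((k / d) * (1 - math.exp(-d * t)) - y) ** 2
                   for t, y in zip(times, counts))
    return min((0.1 + 0.01 * i for i in range(200)), key=sse)

estimates = [lsq_d(*gillespie(k_true, d_true, 20.0, 0.5), k_true)
             for _ in range(20)]
mean_d = sum(estimates) / len(estimates)        # should lie near d_true = 0.5
```

Because the propensities are linear, the mean of the stochastic process follows the deterministic ODE exactly, which is why the naive fit performs reasonably here; the paper's point is to quantify when this stops being good enough.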
Modelling and Generating Ajax Applications: A Model-Driven Approach
Gharavi, V.; Mesbah, A.; Van Deursen, A.
2008-01-01
Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
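The diagnostic idea above — differences between modeled and measured variables indicate abnormality — can be sketched as a simple residual threshold check. The signal names, values and threshold are illustrative assumptions, not data from the paper.

```python
# Residual-based diagnosis sketch: compare a model's predicted signal with
# measurements and flag the sample indices whose residual exceeds a threshold.
def diagnose(predicted, measured, threshold):
    residuals = [abs(p - m) for p, m in zip(predicted, measured)]
    return [i for i, r in enumerate(residuals) if r > threshold]

model_hr   = [72, 73, 71, 72, 74]   # modeled heart rate (bpm), illustrative
patient_hr = [72, 74, 71, 95, 74]   # measured heart rate; sample 3 is anomalous
flags = diagnose(model_hr, patient_hr, threshold=5)   # -> [3]
```

In the paper's setting the predictions would come from the recurrent neural network trained on an individual's physiology, and the fused residuals across several sensors would drive the diagnosis.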
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to an optimal control strategy with perfect drug adherence, and also seeks viewpoints on the same issue from different angles with various mathematical models and computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling and helps them to explore a hypothetical method by examining its consequences in the form of mathematical modelling and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
Asteroid modeling for testing spacecraft approach and landing.
Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick
2014-01-01
Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining the assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Heuristic approaches to models and modeling in systems biology
MacLeod, Miles
2016-01-01
Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard models must m
A Model Management Approach for Co-Simulation Model Evaluation
Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2011-01-01
Simulating formal models is a common means of validating the correctness of a system design and reducing time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software
A New Detection Approach Based on the Maximum Entropy Model
Institute of Scientific and Technical Information of China (English)
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machines (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
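The minimal entropy partitioning step mentioned above can be sketched as follows: for a numeric attribute, choose the cut point that minimizes the class-weighted entropy of the two resulting intervals. The toy durations and labels are illustrative, not drawn from the KDD CUP 1999 data set.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a non-empty list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def best_split(values, labels):
    """Minimal-entropy partitioning: cut point on a numeric attribute that
    minimizes the weighted class entropy of the two halves."""
    pairs = sorted(zip(values, labels))
    best = None
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= cut]
        right = [l for v, l in pairs if v > cut]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if best is None or w < best[0]:
            best = (w, cut)
    return best[1]

# Toy data: connection duration vs. normal/attack label (illustrative,
# distinct values so both halves of every candidate cut are non-empty).
durations = [0.1, 0.2, 0.3, 5.0, 6.0, 7.0]
labels = ['normal'] * 3 + ['attack'] * 3
cut = best_split(durations, labels)   # a perfect split lies between 0.3 and 5.0
```

A full discretizer (e.g. Fayyad-Irani) would apply this recursively with an MDL stopping criterion; the single cut shown here is the core step.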
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
Directory of Open Access Journals (Sweden)
Eser ÖRDEM
2013-06-01
Full Text Available Abstract This study proposes the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language, so that this model can be used in classroom settings. The model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach is an alternative explanation to solve this problem. Unlike the grammar translation method, this approach supports the idea that language is not composed of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model, because the main purpose of the model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue
A Model-Driven Approach for Telecommunications Network Services Definition
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.
Child human model development: a hybrid validation approach
Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.
2008-01-01
The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Deng, Jeremiah D.; Purvis, Martin K.
2015-01-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of mode
Hybrid continuum-atomistic approach to model electrokinetics in nanofluidics
Energy Technology Data Exchange (ETDEWEB)
Amani, Ehsan, E-mail: eamani@aut.ac.ir; Movahed, Saeid, E-mail: smovahed@aut.ac.ir
2016-06-07
In this study, for the first time, a hybrid continuum-atomistic model is proposed for electrokinetics, electroosmosis and electrophoresis, through nanochannels. Although continuum based methods are accurate enough to model fluid flow and electric potential in nanofluidics (in dimensions larger than 4 nm), ionic concentration is too low in nanochannels for the continuum assumption to be valid. On the other hand, non-continuum based approaches are too time-consuming and are therefore limited to simple geometries in practice. Here, to obtain an efficient hybrid continuum-atomistic method of modelling electrokinetics in nanochannels, the fluid flow and electric potential are computed based on the continuum hypothesis, coupled with an atomistic Lagrangian approach for the ionic transport. The results of the model are compared to and validated by the results of the molecular dynamics technique for a couple of case studies. Then, the influences of bulk ionic concentration, external electric field, size of the nanochannel, and surface electric charge on the electrokinetic flow and ionic mass transfer are investigated carefully. The hybrid continuum-atomistic method is a promising approach to model more complicated geometries and investigate more details of the electrokinetics in nanofluidics. - Highlights: • A hybrid continuum-atomistic model is proposed for electrokinetics in nanochannels. • The model is validated by molecular dynamics. • This is a promising approach to model more complicated geometries and physics.
Modelling diversity in building occupant behaviour: a novel statistical approach
DEFF Research Database (Denmark)
Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm
2016-01-01
We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
Asteroid fragmentation approaches for modeling atmospheric energy deposition
Register, Paul J.; Mathias, Donovan L.; Wheeler, Lorien F.
2017-03-01
During asteroid entry, energy is deposited in the atmosphere through thermal ablation and momentum-loss due to aerodynamic drag. Analytic models of asteroid entry and breakup physics are used to compute the energy deposition, which can then be compared against measured light curves and used to estimate ground damage due to airburst events. This work assesses and compares energy deposition results from four existing approaches to asteroid breakup modeling, and presents a new model that combines key elements of those approaches. The existing approaches considered include a liquid drop or "pancake" model where the object is treated as a single deforming body, and a set of discrete fragment models where the object breaks progressively into individual fragments. The new model incorporates both independent fragments and aggregate debris clouds to represent a broader range of fragmentation behaviors and reproduce more detailed light curve features. All five models are used to estimate the energy deposition rate versus altitude for the Chelyabinsk meteor impact, and results are compared with an observationally derived energy deposition curve. Comparisons show that four of the five approaches are able to match the overall observed energy deposition profile, but the features of the combined model are needed to better replicate both the primary and secondary peaks of the Chelyabinsk curve.
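A minimal single-body sketch of the energy deposition calculation described above, under illustrative (not Chelyabinsk-fitted) parameter values: drag decelerates the body, ablation removes mass, and the kinetic energy lost per metre of descent is recorded as the deposited energy.

```python
import math

# Single-body entry sketch. All parameter values are illustrative assumptions.
rho0, H = 1.225, 7160.0            # sea-level air density (kg/m^3), scale height (m)
Cd, sigma = 1.0, 1e-8              # drag coefficient, ablation coefficient (s^2/m^2)
rho_m = 3300.0                     # meteoroid bulk density (kg/m^3)
theta = math.radians(18.0)         # entry angle below the horizontal

v, m, h = 19000.0, 1.2e7, 90000.0  # speed (m/s), mass (kg), altitude (m)
dt = 0.01                          # integration time step (s)
E = 0.5 * m * v * v                # current kinetic energy (J)
profile = []                       # (altitude, energy deposited per metre)

while h > 20000.0 and v > 1000.0:
    rho_air = rho0 * math.exp(-h / H)                       # exponential atmosphere
    r = (3.0 * m / (4.0 * math.pi * rho_m)) ** (1.0 / 3.0)  # spherical radius
    A = math.pi * r * r                                     # cross-sectional area
    dv = -Cd * rho_air * A * v * v / (2.0 * m) * dt         # drag deceleration
    dm = -sigma * rho_air * A * v ** 3 / 2.0 * dt           # ablative mass loss
    dh = -v * math.sin(theta) * dt                          # descent this step
    v, m, h = v + dv, m + dm, h + dh
    E_new = 0.5 * m * v * v
    profile.append((h, (E - E_new) / -dh))                  # J per metre of descent
    E = E_new

peak_alt = max(profile, key=lambda p: p[1])[0]              # altitude of peak deposition
```

For this intact single body the deposition simply grows with air density; the breakup models compared in the paper modify the mass and cross-section at fragmentation, which is what produces the burst peaks seen in observed light curves.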
A Bayesian Approach for Analyzing Longitudinal Structural Equation Models
Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum
2011-01-01
This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…
An Empirical-Mathematical Modelling Approach to Upper Secondary Physics
Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein
2008-01-01
In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…
An Alternative Approach for Nonlinear Latent Variable Models
Mooijaart, Ab; Bentler, Peter M.
2010-01-01
In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…
A multilevel approach to modeling of porous bioceramics
Mikushina, Valentina A.; Sidorenko, Yury N.
2015-10-01
The paper is devoted to a discussion of multiscale models of heterogeneous materials. The specificity of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the reinforcement structure of the material into account. Within the framework of such a model, different physical processes that influence the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that such an approach can be used to predict the value of the composite's macroscopic ultimate strength. As an example, the particular problem of studying the mechanical properties of a biocomposite representing a porous ceramic matrix filled with cortical bone tissue is discussed.
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
… of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented … The model in the present paper provides on-line information on overflow volumes, pumping capacities, and remaining storage capacities. A linear overflow relation is found, differing significantly from the traditional deterministic modeling approach. The linearity of the formulas is explained by the inertia …
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a review of the state of the art of multidimensional modeling design.
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Energy Technology Data Exchange (ETDEWEB)
Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Metamodelling Approach and Software Tools for Physical Modelling and Simulation
Directory of Open Access Journals (Sweden)
Vitaliy Mezhuyev
2015-02-01
Full Text Available In computer science, the metamodelling approach is becoming more and more popular for the purpose of software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is performed. This analysis reveals the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel. It is a system of geometrical objects, allowing one to build a spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.
Social learning in Models and Cases - an Interdisciplinary Approach
Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger
2016-04-01
Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research. We present theoretical understandings of social learning in techno-economic and agent-based modelling. We then elaborate on empirical research on social learning in transition case studies, and identify and synthesize key dimensions of social learning in those studies. Next, we bridge between the more formal, generalising modelling approaches to social learning processes and the more descriptive, individualising case study approaches by distilling the case study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then vary, in an exemplary way, functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach, both methodologically and empirically.
Learning the Task Management Space of an Aircraft Approach Model
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
Building enterprise reuse program--A model-based approach
Institute of Scientific and Technical Information of China (English)
梅宏; 杨芙清
2002-01-01
Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently the dominant approach to software reuse, and in this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different persons involved in the enterprise reuse program. Some component models for reuse already exist from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model--the FLP model--is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the time points for reusing components in the development process, and the means needed to present components in terms of abstraction level, logic granularity and presentation media. Being the basis on which management and technical decisions are made, our model will be used as the kernel model to initialize and normalize a systematic enterprise reuse program.
Current approaches to model extracellular electrical neural microstimulation
Directory of Open Access Journals (Sweden)
Sébastien eJoucla
2014-02-01
Full Text Available Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient details to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described. First, we describe the classical hybrid approach that combines finite element modeling of the potential field with the calculation of the neuron's response in a cable equation framework (compartmentalized neuron models). Then, we present a whole finite element approach that allows the simultaneous calculation of the extracellular and intracellular potentials by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the frame of neural recording, but had never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show on an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
Benchmarking novel approaches for modelling species range dynamics.
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E
2016-08-01
Increasing biodiversity loss due to climate change is one of the most pressing challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge of important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches
Application of the Interface Approach in Quantum Ising Models
Sen, Parongama
1997-01-01
We investigate phase transitions in the Ising model and the ANNNI model in transverse field using the interface approach. The exact result of the Ising chain in a transverse field is reproduced. We find that apart from the interfacial energy, there are two other response functions which show simple scaling behaviour. For the ANNNI model in a transverse field, the phase diagram can be fully studied in the region where a ferromagnetic to paramagnetic phase transition occurs. The other region ca...
A Variable Flow Modelling Approach To Military End Strength Planning
2016-12-01
A System Dynamics (SD) model is ideal for strategic analysis as it encompasses all the behaviours of a system and how the behaviours are influenced by... Markov Chain Models: Wang describes Markov chain theory as a mathematical tool used to investigate dynamic behaviours of a system in a discrete-time... A Variable Flow Modelling Approach to Military End Strength Planning, by Benjamin K. Grossi, December 2016. Thesis Advisor: Kenneth Doerr; Second Reader
New Approaches in Reusable Booster System Life Cycle Cost Modeling
2012-01-01
Lean NPD practices (many) • Lean Production & Operations Practices (many) • Supply Chain Operations Reference (SCOR) Model, Best Practices Make Deliver... New Approaches in Reusable Booster System Life Cycle Cost Modeling, Edgar Zapata, National Aeronautics and Space Administration, Kennedy Space Center... Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC
THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?
Ridley-Duff, R.
2015-01-01
This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach ...
A computational language approach to modeling prose recall in schizophrenia.
Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita
2014-06-01
Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
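The n-gram sequence feature described above can be made concrete with a minimal sketch (hypothetical helper names; the authors' actual pipeline is more elaborate and combines n-gram with LSA-based semantic features):

```python
def ngrams(tokens, n):
    """Return the set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(source, recall, n=2):
    """Fraction of the source passage's n-grams that reappear in the recall."""
    src = ngrams(source.lower().split(), n)
    rec = ngrams(recall.lower().split(), n)
    return len(src & rec) / len(src) if src else 0.0
```

A recall that preserves more of the source's word sequences scores higher, mimicking the sequential component of human scoring.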
Intelligent Transportation and Evacuation Planning A Modeling-Based Approach
Naser, Arab
2012-01-01
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling; Covers advanced methodologies in evacuation modeling and planning; Discusses principles a...
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach in which every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both treatment main effect and treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
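The partition-as-model idea is easy to make concrete: for k treatments, every way of grouping the treatment means into equal-valued clusters is one candidate model, so the model space has Bell(k) members. A minimal enumeration sketch (illustrative only, not the authors' code):

```python
def partitions(items):
    """Yield every partition of a list, each as a list of clusters."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing cluster in turn
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # or place `first` in a new cluster of its own
        yield [[first]] + part
```

For four treatments this yields Bell(4) = 15 models, one of which (all means in one cluster) plays the role of the ANOVA null hypothesis.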
Towards a whole-cell modeling approach for synthetic biology
Purcell, Oliver; Jain, Bonny; Karr, Jonathan R.; Covert, Markus W.; Lu, Timothy K.
2013-06-01
Despite rapid advances over the last decade, synthetic biology lacks the predictive tools needed to enable rational design. Unlike established engineering disciplines, the engineering of synthetic gene circuits still relies heavily on experimental trial-and-error, a time-consuming and inefficient process that slows down the biological design cycle. This reliance on experimental tuning is because current modeling approaches are unable to make reliable predictions about the in vivo behavior of synthetic circuits. A major reason for this lack of predictability is that current models view circuits in isolation, ignoring the vast number of complex cellular processes that impinge on the dynamics of the synthetic circuit and vice versa. To address this problem, we present a modeling approach for the design of synthetic circuits in the context of cellular networks. Using the recently published whole-cell model of Mycoplasma genitalium, we examined the effect of adding genes into the host genome. We also investigated how codon usage correlates with gene expression and found agreement with existing experimental results. Finally, we successfully implemented a synthetic Goodwin oscillator in the whole-cell model. We provide an updated software framework for the whole-cell model that lays the foundation for the integration of whole-cell models with synthetic gene circuit models. This software framework is made freely available to the community to enable future extensions. We envision that this approach will be critical to transforming the field of synthetic biology into a rational and predictive engineering discipline.
A transformation approach for collaboration based requirement models
Harbouche, Ahmed; Mokhtari, Aicha
2012-01-01
Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with this transformation process and suggests an approach to derive the behavior of a given system's components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (the requirements Meta-Model), the definition of the target Design Meta-Model, and the definition of the rules that govern the transformation during the derivation process. The derivation process transforms the global system requirements, described as UML activity diagrams (extended with collaborations), into system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).
A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS
Directory of Open Access Journals (Sweden)
Ahmed Harbouche
2012-02-01
Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).
An algebraic approach to modeling in software engineering
Energy Technology Data Exchange (ETDEWEB)
Loegel, G.J. [Superconducting Super Collider Lab., Dallas, TX (United States)]|[Michigan Univ., Ann Arbor, MI (United States); Ravishankar, C.V. [Michigan Univ., Ann Arbor, MI (United States)
1993-09-01
Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.
DISTRIBUTED APPROACH to WEB PAGE CATEGORIZATION USING MAPREDUCE PROGRAMMING MODEL
Directory of Open Access Journals (Sweden)
P.Malarvizhi
2011-12-01
Full Text Available The web is a large repository of information, and to facilitate the search and retrieval of pages from it, categorization of web documents is essential. An effective means to handle the complexity of information retrieval from the internet is through automatic classification of web pages. Although many automatic classification algorithms and systems have been presented, most of the existing approaches are computationally challenging. In order to overcome this challenge, we propose a parallel approach based on the MapReduce programming model to automatically categorize web pages. This approach incorporates three components: a web crawler, the MapReduce programming model, and the proposed web page categorization approach. Initially, we utilize the web crawler to mine the World Wide Web, and the crawled web pages are then given directly as input to the MapReduce programming model. Here the MapReduce programming model, adapted to our proposed web page categorization approach, finds the appropriate category of each web page according to its content. The experimental results show that our proposed parallel web page categorization approach achieves satisfactory results in finding the right category for any given web page.
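The map/reduce split described above can be sketched in a few lines. The category names and keyword rule below are invented placeholders standing in for the paper's content-based categorizer:

```python
from collections import defaultdict

# Hypothetical keyword rule; the actual categorizer analyzes full page content.
KEYWORDS = {"sports": {"match", "team"}, "finance": {"stock", "market"}}

def map_phase(pages):
    """Map step: emit (category, url) pairs for each crawled page."""
    for url, text in pages.items():
        words = set(text.lower().split())
        for cat, kws in KEYWORDS.items():
            if words & kws:
                yield cat, url

def reduce_phase(pairs):
    """Reduce step: group urls under each category key."""
    result = defaultdict(list)
    for cat, url in pairs:
        result[cat].append(url)
    return dict(result)
```

In a real deployment the map and reduce functions would run on a MapReduce framework across many machines; the pure-Python version only illustrates the data flow.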
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Directory of Open Access Journals (Sweden)
Jeremiah D. DENG
2015-04-01
Full Text Available Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching with strategies such as problem-solving, visualization, and the use of examples and simulations has been developed. Assessment of student learning outcomes indicates that the proposed course delivery approach succeeded in eliciting comparable and satisfactory performance from students of different educational backgrounds.
A Spatial Clustering Approach for Stochastic Fracture Network Modelling
Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.
2014-07-01
Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach
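The objective described above, minimizing the sum of the distances of the seismic points from the fitted fracture model, reduces in the two-dimensional line-segment case to point-to-segment distances. A minimal sketch with hypothetical helper names (the paper's full method wraps such a misfit inside a simulated-annealing modification scheme):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment ab in 2-D."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:                        # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # projection parameter, clamped to the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    cx, cy = ax + t * dx, ay + t * dy      # closest point on the segment
    return math.hypot(px - cx, py - cy)

def misfit(points, segments):
    """Sum over points of the distance to the nearest fracture segment."""
    return sum(min(point_segment_distance(p, a, b) for a, b in segments)
               for p in points)
```

A stochastic optimizer would then propose adding, deleting, or perturbing segments and accept moves that (usually) lower this misfit.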
Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach
Directory of Open Access Journals (Sweden)
Hongqiang Liu
2017-01-01
Full Text Available A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5% percentile of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.
Directory of Open Access Journals (Sweden)
Ramiro M. Irastorza
2015-01-01
Full Text Available Small diameter tissue-engineered arteries improve their mechanical and functional properties when they are mechanically stimulated. Applying a suitable stress and/or strain with or without a cycle to the scaffolds and cells during the culturing process resides in our ability to generate a suitable mechanical model. Collagen gel is one of the most used scaffolds in vascular tissue engineering, mainly because it is the principal constituent of the extracellular matrix for vascular cells in human. The mechanical modeling of such a material is not a trivial task, mainly for its viscoelastic nature. Computational and experimental methods for developing a suitable model for collagen gels are of primary importance for the field. In this research, we focused on mechanical properties of collagen gels under unconfined compression. First, mechanical viscoelastic models are discussed and framed in the control system theory. Second, models are fitted using system identification. Several models are evaluated and two nonlinear models are proposed: Mooney-Rivlin inspired and Hammerstein models. The results suggest that Mooney-Rivlin and Hammerstein models succeed in describing the mechanical behavior of collagen gels for cyclic tests on scaffolds (with best fitting parameters 58.3% and 75.8%, resp.). When Akaike criterion is used, the best is the Mooney-Rivlin inspired model.
Irastorza, Ramiro M; Drouin, Bernard; Blangino, Eugenia; Mantovani, Diego
2015-01-01
Small diameter tissue-engineered arteries improve their mechanical and functional properties when they are mechanically stimulated. Applying a suitable stress and/or strain with or without a cycle to the scaffolds and cells during the culturing process resides in our ability to generate a suitable mechanical model. Collagen gel is one of the most used scaffolds in vascular tissue engineering, mainly because it is the principal constituent of the extracellular matrix for vascular cells in human. The mechanical modeling of such a material is not a trivial task, mainly for its viscoelastic nature. Computational and experimental methods for developing a suitable model for collagen gels are of primary importance for the field. In this research, we focused on mechanical properties of collagen gels under unconfined compression. First, mechanical viscoelastic models are discussed and framed in the control system theory. Second, models are fitted using system identification. Several models are evaluated and two nonlinear models are proposed: Mooney-Rivlin inspired and Hammerstein models. The results suggest that Mooney-Rivlin and Hammerstein models succeed in describing the mechanical behavior of collagen gels for cyclic tests on scaffolds (with best fitting parameters 58.3% and 75.8%, resp.). When Akaike criterion is used, the best is the Mooney-Rivlin inspired model.
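The Hammerstein block structure referred to in these abstracts (a static nonlinearity feeding a linear dynamic block) can be sketched as follows. The polynomial nonlinearity and FIR linear block are illustrative choices, not the fitted collagen-gel model:

```python
def hammerstein(u, poly, fir):
    """Simulate a Hammerstein model: static polynomial block -> linear FIR block."""
    # static nonlinear block: x[k] = sum_i poly[i] * u[k]**(i+1)
    x = [sum(c * uk ** (i + 1) for i, c in enumerate(poly)) for uk in u]
    # linear FIR block: y[k] = sum_j fir[j] * x[k-j]
    y = []
    for k in range(len(x)):
        y.append(sum(fir[j] * x[k - j] for j in range(len(fir)) if k - j >= 0))
    return y
```

Because the nonlinearity is memoryless and the dynamics are linear, the model's parameters enter the output linearly once the polynomial basis is expanded, which is what makes recursive least squares identification (as in the works above) tractable.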
A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines
Institute of Scientific and Technical Information of China (English)
冯瑞; 张艳珠; 宋春林; 邵惠鹤
2003-01-01
A new multiple models (MM) approach was proposed to model complex industrial processes using Fuzzy Support Vector Machines (F-SVMs). By applying the proposed approach to a pH neutralization titration experiment, the F-SVM MM not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple modeling method and to single modeling based on standard SVMs.
Software sensors based on the grey-box modelling approach
DEFF Research Database (Denmark)
Carstensen, J.; Harremoës, P.; Strube, Rune
1996-01-01
In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box...
Environmental Radiation Effects on Mammals A Dynamical Modeling Approach
Smirnova, Olga A
2010-01-01
This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...
The standard data model approach to patient record transfer.
Canfield, K; Silva, M; Petrucci, K
1994-01-01
This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). This approach assumes that the AES is based upon a standard data model. The data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry standard tools and without excessive programming effort. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, which are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
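For reference, AIC as described above is simple to compute: AIC = 2k - 2 log L for a model with k estimated parameters and maximized log-likelihood log L, with smaller values indicating a better trade-off between fit and complexity. A minimal sketch (the least-squares form assumes Gaussian errors and drops an additive constant):

```python
import math

def aic(log_likelihood, k):
    """Akaike's Information Criterion: AIC = 2k - 2 log L. Smaller is better."""
    return 2 * k - 2 * log_likelihood

def aic_from_rss(rss, n, k):
    """AIC (up to an additive constant) for a least-squares fit with Gaussian
    errors: n observations, residual sum of squares rss, k parameters."""
    return n * math.log(rss / n) + 2 * k
```

Comparing candidate models then amounts to computing AIC for each and preferring the smallest, which penalizes extra parameters unless they buy enough likelihood.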
Real-space renormalization group approach to the Anderson model
Campbell, Eamonn
Many of the most interesting electronic behaviours currently being studied are associated with strong correlations. In addition, many of these materials are disordered either intrinsically or due to doping. Solving interacting systems exactly is extremely computationally expensive, and approximate techniques developed for strongly correlated systems are not easily adapted to include disorder. As a non-interacting disordered model, it makes sense to consider the Anderson model as a first step in developing an approximate method of solution to the interacting and disordered Anderson-Hubbard model. Our renormalization group (RG) approach is modeled on that proposed by Johri and Bhatt [23]. We found an error in their work which we have corrected in our procedure. After testing the execution of the RG, we benchmarked the density of states and inverse participation ratio results against exact diagonalization. Our approach is significantly faster than exact diagonalization and is most accurate in the limit of strong disorder.
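For context, the non-interacting Anderson model against which the RG is benchmarked can be diagonalized exactly for modest system sizes. A minimal 1D sketch of the Hamiltonian, exact diagonalization, and the inverse participation ratio mentioned in the abstract (lattice size and disorder strength are illustrative, not taken from the thesis):

```python
import numpy as np

def anderson_1d(n_sites, disorder_w, t=1.0, seed=0):
    """Tight-binding Hamiltonian of the 1D Anderson model: random on-site
    energies drawn uniformly from [-W/2, W/2], nearest-neighbour hopping t."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-disorder_w / 2, disorder_w / 2, n_sites)
    H = (np.diag(eps)
         + np.diag(np.full(n_sites - 1, -t), 1)
         + np.diag(np.full(n_sites - 1, -t), -1))
    return H

def inverse_participation_ratio(psi):
    """IPR = sum_i |psi_i|^4 (for normalized psi): ~1/N for extended
    states, O(1) for strongly localized states."""
    p = np.abs(psi) ** 2
    return np.sum(p ** 2) / np.sum(p) ** 2

H = anderson_1d(200, disorder_w=8.0)   # strong-disorder regime
energies, states = np.linalg.eigh(H)   # exact diagonalization benchmark
iprs = [inverse_participation_ratio(states[:, k]) for k in range(200)]
```

An approximate RG scheme would be validated by comparing its density of states and IPR distribution against exactly these quantities.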
Model Convolution: A Computational Approach to Digital Image Interpretation
Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.
2010-01-01
Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132
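The model-convolution idea, simulating the imaging of an assumed fluorophore distribution using measured blur and noise, can be sketched as follows; here a Gaussian PSF and Gaussian noise stand in for the experimentally measured point spread function and camera noise:

```python
import numpy as np

def model_convolve(true_positions, psf_sigma, noise_sd, grid_size=64, seed=0):
    """Simulate imaging: place point fluorophores on a grid, blur with a
    (here: Gaussian) PSF, and add pixel noise; compare the result
    statistically to real images instead of deconvolving them."""
    rng = np.random.default_rng(seed)
    img = np.zeros((grid_size, grid_size))
    for (x, y) in true_positions:
        img[x, y] += 1.0
    # Build a centered, normalized Gaussian PSF and convolve via FFT.
    ax = np.arange(grid_size) - grid_size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * psf_sigma ** 2))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(
        np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
    return blurred + rng.normal(0.0, noise_sd, img.shape)

# Two hypothetical fluorophores imaged with a 2-pixel PSF and weak noise:
simulated = model_convolve([(20, 20), (40, 44)], psf_sigma=2.0, noise_sd=0.001)
```

The simulated image can then be compared to experimental data directly, avoiding the ill-posed inversion step of deconvolution.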
MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION
Directory of Open Access Journals (Sweden)
Priyanka H U
2016-09-01
Full Text Available Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integrating heterogeneous clinical sources that have different representations and come from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize this information and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best single-model approach. By modelling the error of the predictive models we are able to choose a subset of models that yields accurate results. More information was modelled into the system by multi-level mining, which has resulted in enhanced predictive accuracy.
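One simple way to realize a multi-model combiner of the kind described is majority voting over fitted models; the paper's actual architecture additionally models prediction error to select a model subset. A minimal sketch with hypothetical rule-based risk classifiers (thresholds and field names invented for illustration):

```python
def ensemble_predict(models, x):
    """Combine multiple fitted models by majority vote: each model maps a
    record to a label; the most frequent label wins."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

# Three hypothetical single-factor risk classifiers that may disagree:
m1 = lambda x: "high" if x["age"] > 60 else "low"
m2 = lambda x: "high" if x["bp"] > 140 else "low"
m3 = lambda x: "high" if x["chol"] > 240 else "low"

pred = ensemble_predict([m1, m2, m3], {"age": 65, "bp": 150, "chol": 200})
```

A more faithful rendering of the paper's idea would weight or drop models according to their estimated error on held-out data rather than voting uniformly.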
A new approach of high speed cutting modelling: SPH method
LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc
2006-01-01
The purpose of this study is to introduce a new approach of high speed cutting numerical modelling. A lagrangian Smoothed Particle Hydrodynamics (SPH) based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed and SPH contact control permits a “natural” workpiece/chip separation. Estimated chip morphology and cutting forces are compared to machining dedicated code results and experimenta...
Schwinger boson approach to the fully screened Kondo model.
Rech, J; Coleman, P; Zarand, G; Parcollet, O
2006-01-13
We apply the Schwinger boson scheme to the fully screened Kondo model and generalize the method to include antiferromagnetic interactions between ions. Our approach captures the Kondo crossover from local moment behavior to a Fermi liquid with a nontrivial Wilson ratio. When applied to the two-impurity model, the mean-field theory describes the "Varma-Jones" quantum phase transition between a valence bond state and a heavy Fermi liquid.
Kallen Lehman approach to 3D Ising model
Canfora, F.
2007-03-01
A “Kallen-Lehman” approach to Ising model, inspired by quantum field theory à la Regge, is proposed. The analogy with the Kallen-Lehman representation leads to a formula for the free-energy of the 3D model with few free parameters which could be matched with the numerical data. The possible application of this scheme to the spin glass case is shortly discussed.
Modelling approaches in sedimentology: Introduction to the thematic issue
Joseph, Philippe; Teles, Vanessa; Weill, Pierre
2016-09-01
As an introduction to this thematic issue on "Modelling approaches in sedimentology", this paper gives an overview of the workshop held in Paris on 7 November 2013 during the 14th Congress of the French Association of Sedimentologists. A synthesis of the workshop in terms of concepts, spatial and temporal scales, constraining data, and scientific challenges is first presented, then a discussion on the possibility of coupling different models, the industrial needs, and the new potential domains of research is exposed.
Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach
DEFF Research Database (Denmark)
Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper
2017-01-01
We present a systematic investigation of the key components needed to model single chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects...... sampling. We show that a significant number of snapshots are needed to avoid artifacts in the calculated electronic circular dichroism parameters due to insufficient configurational sampling, thus highlighting the efficiency of the PE model....
Computational Models of Spreadsheet Development: Basis for Educational Approaches
Hodnigg, Karin; Mittermeir, Roland T
2008-01-01
Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues implied by moving cells, copy-paste operations, or recursion.
Modeling Water Shortage Management Using an Object-Oriented Approach
Wang, J.; Senarath, S.; Brion, L.; Niedzialek, J.; Novoa, R.; Obeysekera, J.
2007-12-01
As a result of the increasing global population and the resulting urbanization, water shortage issues have received increased attention throughout the world. Water supply has not been able to keep up with increased demand for water, especially during times of drought. The use of an object-oriented (OO) approach coupled with efficient mathematical models is an effective tool in addressing discrepancies between water supply and demand. Object-oriented modeling has been proven powerful and efficient in simulating natural behavior. This research presents a way to model water shortage management using the OO approach. Three groups of conceptual components using the OO approach are designed for the management model. The first group encompasses evaluation of natural behaviors and possible related management options. This evaluation includes assessing any discrepancy that might exist between water demand and supply. The second group is for decision making, which includes the determination of water use cutback amount and duration using established criteria. The third group is for implementation of the management options, which are restrictions of water usage at a local or regional scale. The loop is closed through a feedback mechanism where continuity in the time domain is established. As in many other regions, drought management is very important in south Florida. The Regional Simulation Model (RSM) is a finite volume, fully integrated hydrologic model used by the South Florida Water Management District to evaluate regional response to various planning alternatives including drought management. A trigger module was developed for RSM that encapsulates the OO approach to water shortage management. Rigorous testing of the module was performed using historical south Florida conditions. Keywords: Object-oriented, modeling, water shortage management, trigger module, Regional Simulation Model
Urban Modelling with Typological Approach. Case Study: Merida, Yucatan, Mexico
Rodriguez, A.
2017-08-01
In three-dimensional models of urban historical reconstruction, the lost contextual architecture is difficult to represent because, in contrast to the most important monuments, few written references to it survive. This is the case of Merida, Yucatan, Mexico during the Colonial Era (1542-1810), which has lost much of its heritage. An alternative that offers a hypothetical view of these elements is a typological-parametric definition that allows a 3D modeling approach to the most common features of this heritage evidence.
Comparative flood damage model assessment: towards a European approach
Directory of Open Access Journals (Sweden)
B. Jongman
2012-12-01
Full Text Available There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth–damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
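The depth–damage functions at the heart of such models map inundation depth to a damage fraction of the exposed asset value. A minimal piecewise-linear sketch (the curve points and asset value are hypothetical, not taken from any of the seven models compared):

```python
def damage_fraction(depth, curve):
    """Linear interpolation on a depth-damage curve given as
    [(depth_m, damage_fraction), ...]; clamped outside the curve range."""
    pts = sorted(curve)
    if depth <= pts[0][0]:
        return pts[0][1]
    for (d0, f0), (d1, f1) in zip(pts, pts[1:]):
        if depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)
    return pts[-1][1]

# Hypothetical residential curve; real models differ widely, which is
# exactly the sensitivity the comparison quantifies:
curve = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4), (2.0, 0.7), (4.0, 1.0)]
loss = 250_000 * damage_fraction(1.5, curve)  # damage at 1.5 m flood depth
```

Varying `curve` (vulnerability) versus the asset value (exposure) reproduces the two uncertainty sources the paper contrasts.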
Similarity transformation approach to identifiability analysis of nonlinear compartmental models.
Vajda, S; Godfrey, K R; Rabitz, H
1989-04-01
Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.
a Study of Urban Stormwater Modeling Approach in Singapore Catchment
Liew, S. C.; Liong, S. Y.; Vu, M. T.
2011-07-01
Urbanization has the direct effect of increasing the amount of surface runoff to be discharged through man-made drainage systems. Thus, Singapore's rapid urbanization has drawn great attention to flooding issues. In view of this, a proper stormwater modeling approach is necessary for the assessment, planning, design, and control of the storm and combined sewerage systems. Impacts of urbanization on surface runoff and catchment flooding in Singapore are studied in this paper. In this study, the application of SOBEK-urban 1D is introduced on model catchments and a hypothetical catchment model is created for simulation purposes. The stormwater modeling approach using SOBEK-urban offers a comprehensive modeling tool for simple or extensive urban drainage systems consisting of sewers and open channels, regardless of the size and complexity of the network. The findings from the present study show that stormwater modeling is able to identify flood areas and the impact of the anticipated sea level rise on the urban drainage network. Consequently, the performance of the urban drainage system can be improved and early prevention approaches can be carried out.
The Generalised Ecosystem Modelling Approach in Radiological Assessment
Energy Technology Data Exchange (ETDEWEB)
Klos, Richard
2008-03-15
An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment
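GEMA's modules exchange radionuclides between surface-environment compartments through water and solid-material fluxes. A generic donor-controlled compartment step of this kind can be sketched as follows (compartment names and rate constants are hypothetical, not GEMA's actual parameterization):

```python
def step_compartments(inv, fluxes, dt):
    """One explicit Euler step of a donor-controlled compartment model:
    fluxes[(src, dst)] is the transfer rate constant (1/time) from src to dst.
    Total inventory is conserved because every loss has a matching gain."""
    new = dict(inv)
    for (src, dst), k in fluxes.items():
        amount = inv[src] * k * dt   # donor-controlled: rate * donor inventory
        new[src] -= amount
        new[dst] += amount
    return new

# Hypothetical three-compartment landscape: soil -> water -> sediment.
inv = {"soil": 1.0, "water": 0.0, "sediment": 0.0}
fluxes = {("soil", "water"): 0.05, ("water", "sediment"): 0.2}
for _ in range(100):
    inv = step_compartments(inv, fluxes, dt=0.1)
```

Chaining such modules along a surface drainage network, and switching flux configurations over time, is the essence of the landscape-evolution capability described above.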
A vector relational data modeling approach to Insider threat intelligence
Kelly, Ryan F.; Anderson, Thomas S.
2016-05-01
We address the problem of detecting insider threats before they can do harm. In many cases, co-workers notice indications of suspicious activity prior to insider threat attacks. A partial solution to this problem requires an understanding of how information can better traverse the communication network between human intelligence and insider threat analysts. Our approach employs modern mobile communications technology and scale free network architecture to reduce the network distance between human sensors and analysts. In order to solve this problem, we propose a Vector Relational Data Modeling approach to integrate human "sensors," geo-location, and existing visual analytics tools. This integration problem is known to be difficult due to quadratic increases in cost associated with complex integration solutions. A scale free network integration approach using vector relational data modeling is proposed as a method for reducing network distance without increasing cost.
A discrete Lagrangian based direct approach to macroscopic modelling
Sarkar, Saikat; Nowruzpour, Mohsen; Reddy, J. N.; Srinivasa, A. R.
2017-01-01
A direct discrete Lagrangian based approach, designed at a length scale of interest, to characterize the response of a body is proposed. The main idea is to understand the dynamics of a deformable body via a Lagrangian corresponding to a coupled interaction of rigid particles in the reduced dimension. We argue that the usual practice of describing the laws of a deformable body in the continuum limit is redundant, because for most of the practical problems, analytical solutions are not available. Since continuum limit is not taken, the framework automatically relaxes the requirement of differentiability of field variables. The discrete Lagrangian based approach is illustrated by deriving an equivalent of the Euler-Bernoulli beam model. A few test examples are solved, which demonstrate that the derived non-local model predicts lower deflections in comparison to classical Euler-Bernoulli beam solutions. We have also included crack propagation in thin structures for isotropic and anisotropic cases using the Lagrangian based approach.
Reconciliation with oneself and with others: From approach to model
Directory of Open Access Journals (Sweden)
Nikolić-Ristanović Vesna
2010-01-01
Full Text Available This paper presents the approach to dealing with war and its consequences developed within the Victimology Society of Serbia over the last five years, in the framework of the Association Joint Action for Truth and Reconciliation (ZAIP). First, a short review of the Association and the process through which the ZAIP approach to dealing with the past was developed is presented. Then, a detailed description of the approach itself, identifying its most important specificities, is given. In conclusion, next steps are suggested, aimed at developing a model of reconciliation grounded in the ZAIP approach and appropriate to the social context of Serbia and its surroundings.
EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES
Directory of Open Access Journals (Sweden)
Slavko Arsovski
2009-03-01
Full Text Available The basic subject of this work is a model of the impact of the new approach directives on product quality and safety, and on the competitiveness of our companies. The work presents a working hypothesis based on experts' experience, given that the relevant infrastructure for applying new approach directives has not been examined until now: it is not known which products or industries of Serbia are covered by new approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate the existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increasing profit by meeting the requirements of the new approach directives.
Vibro-acoustics of porous materials - waveguide modeling approach
DEFF Research Database (Denmark)
Darula, Radoslav; Sorokin, Sergey V.
2016-01-01
The porous material is considered as a compound multi-layered waveguide (i.e. a fluid layer surrounded with elastic layers) with traction free boundary conditions. The attenuation of the vibro-acoustic waves in such a material is assessed. This approach is compared with a conventional Biot's model...... in porous materials....
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
textabstractWe present in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
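The Monte Carlo machinery underlying such models can be illustrated with a plain Euler-Maruyama simulation of a generic local volatility SDE (a deliberately simplified stand-in for the hybrid stochastic local volatility model of the paper; with flat volatility it reduces to Black-Scholes, and all parameter values are hypothetical):

```python
import numpy as np

def mc_local_vol(s0, r, sigma_loc, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama Monte Carlo for dS = r*S dt + sigma_loc(t, S)*S dW,
    where sigma_loc is a user-supplied local volatility function."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for i in range(n_steps):
        t = i * dt
        z = rng.standard_normal(n_paths)
        s += r * s * dt + sigma_loc(t, s) * s * np.sqrt(dt) * z
        s = np.maximum(s, 0.0)  # absorb paths at zero
    return s

# Flat local vol (20%) as a sanity check against Black-Scholes:
paths = mc_local_vol(100.0, r=0.01, sigma_loc=lambda t, s: 0.2, T=1.0,
                     n_steps=100, n_paths=50_000)
call = np.exp(-0.01) * np.mean(np.maximum(paths - 100.0, 0.0))
```

In the flat-volatility case the discounted mean payoff should land near the Black-Scholes at-the-money value (about 8.4 for these parameters), which is a standard check before introducing a genuinely state-dependent `sigma_loc`.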
Teaching Modeling with Partial Differential Equations: Several Successful Approaches
Myers, Joseph; Trubatch, David; Winkel, Brian
2008-01-01
We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…
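The difference-equation construction described here translates directly into code (or into spreadsheet cells). A minimal explicit scheme for the heat equation u_t = α u_xx with fixed boundary values; grid spacing, time step, and initial profile are hypothetical classroom values:

```python
import numpy as np

def heat_explicit(u0, alpha, dx, dt, steps):
    """Explicit finite-difference scheme for u_t = alpha * u_xx with
    fixed (Dirichlet) boundaries -- the same update a spreadsheet cell
    would apply to its row above."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "stability condition for the explicit scheme"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# A rod with a hot spot in the middle diffuses toward a flat profile:
u0 = [0.0] * 21
u0[10] = 1.0
u = heat_explicit(u0, alpha=1.0, dx=0.1, dt=0.004, steps=100)
```

The stability restriction r = α·dt/dx² ≤ 1/2 is exactly the kind of behavior students can discover experimentally by varying the time step in the spreadsheet.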
A Behavioral Decision Making Modeling Approach Towards Hedging Services
Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.
2003-01-01
This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by fi
A fuzzy approach to the Weighted Overlap Dominance model
DEFF Research Database (Denmark)
Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt
2013-01-01
in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures...
Methodological Approach for Modeling of Multienzyme in-pot Processes
DEFF Research Database (Denmark)
Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan;
2011-01-01
This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used...
Towards modeling future energy infrastructures - the ELECTRA system engineering approach
DEFF Research Database (Denmark)
Uslar, Mathias; Heussen, Kai
2016-01-01
Within this contribution, we provide an overview based on previous work conducted in the ELECTRA project to come up with a consistent method for modeling the ELECTRA WoC approach according to the methods established with the M/490 mandate of the European Commission. We will motivate the use of th...
Pruning Chinese trees : an experimental and modelling approach
Zeng, Bo
2002-01-01
Pruning of trees, in which some branches are removed from the lower crown of a tree, has been extensively used in China in silvicultural management for many purposes. With an experimental and modelling approach, the effects of pruning on tree growth and on the harvest of plant material were studied.
Evaluating Interventions with Multimethod Data: A Structural Equation Modeling Approach
Crayen, Claudia; Geiser, Christian; Scheithauer, Herbert; Eid, Michael
2011-01-01
In many intervention and evaluation studies, outcome variables are assessed using a multimethod approach comparing multiple groups over time. In this article, we show how evaluation data obtained from a complex multitrait-multimethod-multioccasion-multigroup design can be analyzed with structural equation models. In particular, we show how the…
A Metacognitive-Motivational Model of Surface Approach to Studying
Spada, Marcantonio M.; Moneta, Giovanni B.
2012-01-01
In this study, we put forward and tested a model of how surface approach to studying during examination preparation is influenced by the trait variables of motivation and metacognition and the state variables of avoidance coping and evaluation anxiety. A sample of 528 university students completed, one week before examinations, the following…
A New Approach for Testing the Rasch Model
Kubinger, Klaus D.; Rasch, Dieter; Yanagida, Takuya
2011-01-01
Though calibration of an achievement test within psychological and educational context is very often carried out by the Rasch model, data sampling is hardly designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009) recently suggested an approach for the determination of sample size according to a given Type I and…
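For reference, the Rasch model gives the probability of a correct response as a logistic function of the difference between person ability and item difficulty; sample-size planning of the kind suggested typically relies on simulating response matrices under the model. A minimal sketch (ability and difficulty values hypothetical):

```python
import math
import random

def rasch_prob(theta, b):
    """Rasch model: probability that a person with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_responses(abilities, difficulties, seed=1):
    """Simulate a persons x items 0/1 response matrix under the Rasch
    model, e.g. to estimate Type I/II error rates of a model test by
    repeated simulation."""
    rng = random.Random(seed)
    return [[int(rng.random() < rasch_prob(th, b)) for b in difficulties]
            for th in abilities]

matrix = simulate_responses([-1.0, 0.0, 1.0], [-0.5, 0.0, 0.5])
```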
Comparing State SAT Scores Using a Mixture Modeling Approach
Kim, YoungKoung Rachel
2009-01-01
Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…
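A mixture modeling approach of this kind typically fits a finite mixture, commonly of Gaussians, by expectation-maximization, with each component interpreted as a subpopulation. A minimal 1D two-component EM sketch on synthetic score-like data (not actual SAT data; the component means are invented):

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """EM for a two-component 1D Gaussian mixture: alternate between
    computing component responsibilities (E-step) and updating the
    weights, means, and standard deviations (M-step)."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each component for each point
        dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2)
                / (sd * np.sqrt(2 * np.pi)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sd

# Two synthetic subpopulations of test takers (hypothetical parameters):
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(450, 60, 600), rng.normal(620, 50, 400)])
w, mu, sd = em_two_gaussians(x)
```

In the study's setting, the recovered component parameters per state, rather than the raw state mean, are what make cross-state comparison meaningful.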
The Bipolar Approach: A Model for Interdisciplinary Art History Courses.
Calabrese, John A.
1993-01-01
Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)
Non-frontal model based approach to forensic face recognition
Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk
2012-01-01
In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance vie
Smeared crack modelling approach for corrosion-induced concrete damage
DEFF Research Database (Denmark)
Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik
2017-01-01
compared to experimental data obtained by digital image correlation and published in the literature. Excellent agreements between experimentally observed and numerically predicted crack patterns at the micro and macro scale indicate the capability of the modelling approach to accurately capture corrosion...
Towards modeling future energy infrastructures - the ELECTRA system engineering approach
DEFF Research Database (Denmark)
Uslar, Mathias; Heussen, Kai
2016-01-01
Within this contribution, we provide an overview based on previous work conducted in the ELECTRA project to come up with a consistent method for modeling the ELECTRA WoC approach according to the methods established with the M/490 mandate of the European Commission. We will motivate the use...
Atomistic approach for modeling metal-semiconductor interfaces
DEFF Research Database (Denmark)
Stradi, Daniele; Martinez, Umberto; Blom, Anders
2016-01-01
realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping — and bias — modifies the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces...
CFD Approaches for Modelling Bubble Entrainment by an Impinging Jet
Directory of Open Access Journals (Sweden)
Martin Schmidtke
2009-01-01
Full Text Available This contribution presents different approaches for the modeling of gas entrainment under water by a plunging jet. Since the generation of bubbles happens on a scale which is smaller than the bubbles, this process cannot be resolved in meso-scale simulations, which include the full length of the jet and its environment. This is why the gas entrainment has to be modeled in meso-scale simulations. In the frame of an Euler-Euler simulation, the local morphology of the phases has to be considered in the drag model. For example, the gas is a continuous phase above the water level but bubbly below the water level. Various drag models are tested and their influence on the gas void fraction below the water level is discussed. The algebraic interface area density (AIAD) model applies a drag coefficient for bubbles and a different drag coefficient for the free surface. If the AIAD model is used for the simulation of impinging jets, the gas entrainment depends on the free parameters included in this model. The calculated gas entrainment can be adapted via these parameters. Therefore, an advanced AIAD approach could be used in future for the implementation of models (e.g., correlations for the gas entrainment.
Approach for workflow modeling using π-calculus
Institute of Scientific and Technical Information of China (English)
杨东; 张申生
2003-01-01
As a variant of process algebra, π-calculus can describe the interactions between evolving processes. By modeling activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the LTS (Labeled Transition Semantics) semantics of π-calculus. The main advantage of the workflow model's formal semantic is that it allows for verification of the model's properties, such as deadlock-free and normal termination. Moreover, the equivalence of workflow models can be checked through weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.
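The verification properties mentioned (deadlock freedom, normal termination) are checked on the labeled transition system once it is extracted from the π-calculus model. A minimal sketch of that check over an explicit LTS, with a hypothetical order-processing workflow for illustration:

```python
from collections import deque

def reachable_states(initial, transitions):
    """Breadth-first exploration of the workflow's labeled transition
    system; transitions are (source, label, destination) triples."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for (src, label, dst) in transitions:
            if src == s and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

def deadlocks(initial, transitions, terminal):
    """Reachable states with no outgoing transition that are not
    designated normal-termination states."""
    reach = reachable_states(initial, transitions)
    has_out = {src for (src, _, _) in transitions}
    return {s for s in reach if s not in has_out and s not in terminal}

# Hypothetical workflow: 'reject' is a dead end unless declared terminal.
t = [("start", "receive", "check"), ("check", "ok", "ship"),
     ("check", "fail", "reject"), ("ship", "invoice", "done")]
bad = deadlocks("start", t, terminal={"done"})  # flags the 'reject' state
```

Real π-calculus tooling works on the process terms themselves (and uses weak bisimulation for equivalence), but the reachability check above is the core of the deadlock-freedom property the paper refers to.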
Multiphysics modeling using COMSOL a first principles approach
Pryor, Roger W
2011-01-01
Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.
Evaluation of Workflow Management Systems - A Meta Model Approach
Directory of Open Access Journals (Sweden)
Michael Rosemann
1998-11-01
Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.
A simplified GIS approach to modeling global leaf water isoscapes.
Directory of Open Access Journals (Sweden)
Jason B West
Full Text Available The stable hydrogen (δ2H) and oxygen (δ18O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ18O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water δ18O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation, and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment
Polynomial Chaos Expansion Approach to Interest Rate Models
Directory of Open Access Journals (Sweden)
Luca Di Persio
2015-01-01
Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
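The PCE idea can be made concrete for the Geometric Brownian Motion case mentioned in the abstract: the terminal value exp(μ + σξ) with ξ ~ N(0,1) has a known expansion in probabilists' Hermite polynomials with coefficients c_k = exp(μ + σ²/2)·σᵏ/k!. The sketch below is illustrative (the specific μ, σ, truncation order, and sample size are invented, not taken from the paper):

```python
import numpy as np
from math import factorial, exp

def hermite_he(k, x):
    """Probabilists' Hermite polynomial He_k via the recurrence
    He_0 = 1, He_1 = x, He_{n+1} = x*He_n - n*He_{n-1}."""
    h0, h1 = np.ones_like(x), x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def pce_lognormal(mu, sigma, order, xi):
    """Truncated PCE surrogate of exp(mu + sigma*xi), xi ~ N(0,1),
    using the closed-form coefficients c_k = e^{mu+sigma^2/2} sigma^k / k!."""
    return sum(exp(mu + sigma**2 / 2) * sigma**k / factorial(k)
               * hermite_he(k, xi)
               for k in range(order + 1))

rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)
exact = np.exp(0.0 + 0.2 * xi)                # mu = 0, sigma = 0.2
surrogate = pce_lognormal(0.0, 0.2, 6, xi)    # order-6 truncation

print(abs(surrogate.mean() - exp(0.02)))      # mean matches c_0 = e^{sigma^2/2}
print(np.max(np.abs(surrogate - exact)))      # small truncation error
```

For σ this small the order-6 surrogate already tracks the exact lognormal to roughly 1e-4, which is the kind of rapid convergence that makes PCE competitive with Monte Carlo in the comparison the abstract describes.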
Popularity Modeling for Mobile Apps: A Sequential Approach.
Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong
2015-07-01
The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
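The core computation behind any HMM-based popularity model such as the PHMM is the forward algorithm for sequence likelihood. A minimal sketch with an invented 2-state, 3-symbol model (these are illustrative parameters, not the paper's PHMM):

```python
import numpy as np

# Scaled forward algorithm for a discrete-observation HMM. Scaling the
# state distribution at each step avoids numerical underflow on long
# sequences while accumulating the exact log-likelihood.
def forward_loglik(A, B, pi0, obs):
    """Log-likelihood of obs under an HMM: A is the transition matrix,
    B the emission matrix, pi0 the initial state distribution."""
    alpha = pi0 * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

A = np.array([[0.9, 0.1],        # e.g. a "popular" state tends to persist
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],   # emission probabilities over 3 rank bins
              [0.1, 0.3, 0.6]])
pi0 = np.array([0.5, 0.5])
obs = [0, 0, 1, 2, 2]            # a short observed popularity sequence

ll = forward_loglik(A, B, pi0, obs)
print(ll)   # log-likelihood; exp(ll) is the sequence probability
```

The same recursion is what an EM-style fitting procedure (as used when learning PHMM parameters) evaluates repeatedly, which is why the paper's preclustering of observations to get good initial values matters for efficiency.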
On a Markovian approach for modeling passive solar devices
Energy Technology Data Exchange (ETDEWEB)
Bottazzi, F.; Liebling, T.M. (Chaire de Recherche Operationelle, Ecole Polytechnique Federale de Lausanne (Switzerland)); Scartezzini, J.L.; Nygaard-Ferguson, M. (Lab. d'Energie Solaire et de Physique du Batiment, Ecole Polytechnique Federale de Lausanne (Switzerland))
1991-01-01
Stochastic models for the analysis of the energy and thermal comfort performances of passive solar devices have been increasingly studied for over a decade. A new approach to thermal building modeling, based on Markov chains, is proposed here to combine both the accuracy of traditional dynamic simulation with the practical advantages of simplified methods. A main difficulty of the Markovian approach is the discretization of the system variables. Efficient procedures have been developed to carry out this discretization and several numerical experiments have been performed to analyze the possibilities and limitations of the Markovian model. Despite its restrictive assumptions, it will be shown that accurate results are indeed obtained by this method. However, due to discretization, computer memory requirements are more than inversely proportional to accuracy. (orig.).
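The Markovian idea of the abstract, discretizing a continuous variable into states and evolving a distribution with a transition matrix, can be sketched with an invented three-state temperature chain (the matrix below is illustrative only, not from the paper):

```python
import numpy as np

# Toy Markov-chain sketch: a room temperature discretized into three
# bands, with a row-stochastic transition matrix P over one time step.
P = np.array([[0.8, 0.2, 0.0],    # cold -> {cold, mild, warm}
              [0.1, 0.8, 0.1],    # mild
              [0.0, 0.3, 0.7]])   # warm

def stationary(P, tol=1e-12):
    """Stationary distribution of an irreducible, aperiodic chain
    by power iteration on the row-stochastic matrix P."""
    d = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = d @ P
        if np.abs(nxt - d).max() < tol:
            return nxt
        d = nxt

dist = stationary(P)
print(dist)   # long-run fraction of time spent in each temperature band
```

The memory cost the abstract mentions shows up here directly: a finer discretization means a larger P, and the matrix grows quadratically in the number of states per variable.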
Disturbed state concept as unified constitutive modeling approach
Directory of Open Access Journals (Sweden)
Chandrakant S. Desai
2016-06-01
Full Text Available A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. Few such unified models are available. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for determination of parameters based on triaxial, multiaxial and interface tests. Statements of DSC and validation at the specimen level and at the boundary value problem levels are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling behaviors of engineering materials and interfaces.
Disturbed state concept as unified constitutive modeling approach
Institute of Scientific and Technical Information of China (English)
Chandrakant S. Desai
2016-01-01
A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. Few such unified models are available. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for determination of parameters based on triaxial, multiaxial and interface tests. Statements of DSC and validation at the specimen level and at the boundary value problem levels are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling behaviors of engineering materials and interfaces.
Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization
Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad
2017-02-01
Malay Twitter messages present a special deviation from the original language. Malay Tweets are currently widely used by Twitter users, especially in the Malay Archipelago. Thus, it is important to build a normalization system which can translate Malay Tweet language into the standard Malay language. Some research in natural language processing has mainly focused on normalizing English Twitter messages, while few studies have been done to normalize Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. This approach normalizes noisy Malay Twitter messages such as colloquial language, novel words, and interjections into standard Malay language. This research will use a language model and an N-gram model.
DEFF Research Database (Denmark)
Simonsen, Kent Inge; Kristensen, Lars Michael
2013-01-01
Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model ... and implementation. Our approach has been developed in the context of the Coloured Petri Nets (CPNs) modelling language. We illustrate our approach by presenting a descriptive specification model of the Websocket protocol, which is currently under development by the Internet Engineering Task Force (IETF), and we show ...
ON SOME APPROACHES TO ECONOMICMATHEMATICAL MODELING OF SMALL BUSINESS
Directory of Open Access Journals (Sweden)
Orlov A. I.
2015-04-01
Full Text Available Small business is an important part of the modern Russian economy. We present a wide panorama, developed by us, of possible approaches to the construction of economic-mathematical models that may be useful for describing the dynamics of small businesses, as well as for their management. Since a variety of types of economic-mathematical and econometric models can be used to describe particular problems of small business, we found it useful to consider a fairly wide range of such models, which resulted in quite short descriptions of the specific models. The models are described at such a level that an experienced professional in the field of economic-mathematical modeling could, if necessary, develop his own specific model to the stage of design formulas and numerical results. Particular attention is paid to the use of statistical methods for non-numeric data, the most pressing at the moment. The problems of economic-mathematical modeling in solving problems of small business marketing are considered. We have accumulated some experience in applying the methodology of economic-mathematical modeling to practical problems in small business marketing, in particular in the field of consumer and industrial goods, educational services, as well as in the analysis and modeling of inflation, taxation and others. In marketing models of decision-making theory we apply rankings and ratings. The problem of comparing averages is considered. We present some models of the life cycle of small businesses: a project flow model, a niche-capture model, and a niche-selection model. We discuss the development of research on economic-mathematical modeling of small businesses
A validated approach for modeling collapse of steel structures
Saykin, Vitaliy Victorovich
A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as functions of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
GEOSPATIAL MODELLING APPROACH FOR 3D URBAN DENSIFICATION DEVELOPMENTS
Directory of Open Access Journals (Sweden)
O. Koziatek
2016-06-01
Full Text Available With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI’s CityEngine software and the Computer Generated Architecture (CGA) language.
Geospatial Modelling Approach for 3d Urban Densification Developments
Koziatek, O.; Dragićević, S.; Li, S.
2016-06-01
With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
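The fuzzy multi-criteria evaluation step mentioned in this record can be sketched as a weighted combination of fuzzified criterion layers. A minimal illustration, assuming linear membership functions; the criteria, values, and weights below are invented, not the paper's actual layers:

```python
import numpy as np

def fuzzify_linear(x, lo, hi):
    """Linear fuzzy membership: 0 below lo, 1 above hi, linear between."""
    return np.clip((np.asarray(x, float) - lo) / (hi - lo), 0.0, 1.0)

# Invented criterion values for three candidate cells:
transit_dist = np.array([100.0, 800.0, 2000.0])   # metres to transit
land_value = np.array([5.0, 3.0, 1.0])            # relative land value

memberships = np.stack([
    1.0 - fuzzify_linear(transit_dist, 0.0, 2000.0),  # invert: near = suitable
    fuzzify_linear(land_value, 0.0, 5.0),
])
weights = np.array([0.6, 0.4])         # invented criterion weights, sum to 1

suitability = weights @ memberships    # weighted linear combination per cell
print(suitability)                     # scores in [0, 1]; highest = best site
```

In the paper's workflow the per-cell suitability scores would drive where procedural 3D building generation is applied; here they simply rank three hypothetical cells.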
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.
2015-11-21
Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
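A common implementation of the global sensitivity analysis described above is the variance-based (Sobol) pick-and-freeze estimator. The sketch below uses a toy additive model as a stand-in for the morphogenesis model (model, inputs, and sample size are invented); for y = x1 + 2·x2 with independent U(0,1) inputs, the exact first-order indices are 0.2 and 0.8:

```python
import numpy as np

# First-order Sobol indices via the pick-and-freeze (Saltelli-style)
# estimator: S_i measures the share of output variance explained by
# input i alone.
def sobol_first_order(f, d, n, rng):
    A = rng.random((n, d))          # two independent sample matrices
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # "freeze" all inputs except column i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

model = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = sobol_first_order(model, d=2, n=200_000, rng=np.random.default_rng(1))
print(S)    # approaches the analytic values [0.2, 0.8]
```

For a model with interactions (such as the cellular Potts model in the abstract), the first-order indices would sum to less than one, and the gap quantifies exactly the parameter interactions the authors highlight.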
Energy Technology Data Exchange (ETDEWEB)
Marchi, Pierre A.; Coelho, Antonio A.R. [Santa Catarina Univ., Florianopolis, SC (Brazil)
1998-07-01
This paper presents three different design methodologies for the generalized predictive control strategy to deal with a nonlinear process. It covers the design and experimental study of the predictive controller for the horizontal balance plant. The effectiveness of the control schemes was evaluated under various operational conditions: setpoint changes and disturbances. The design philosophy of the predictive control was considered for a nonlinear model, as Hammerstein, and a linear model with a long range predictive identification. The overall performance of the predictive systems was studied and a comparison was made with the conventional linear predictive controller. (author)
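The Hammerstein-plus-recursive-least-squares theme of this record (and of the records in this collection generally) can be sketched numerically. This is a hedged illustration with an invented static nonlinearity v = u + 0.5u², an invented 2-tap FIR linear block, and noise-free data; over-parameterizing the cascade makes the output linear in the regressor, so plain RLS applies:

```python
import numpy as np

# Hammerstein system: static polynomial nonlinearity followed by an FIR
# filter. With regressor [u_t, u_t^2, u_{t-1}, u_{t-1}^2] the output is
# linear in the combined parameters b_j * c_k, which RLS can estimate.
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 500)
v = u + 0.5 * u**2                                    # "true" nonlinearity
y = 1.0 * v + np.concatenate([[0.0], 0.7 * v[:-1]])   # FIR taps [1.0, 0.7]

theta = np.zeros(4)             # estimates of [b0*c1, b0*c2, b1*c1, b1*c2]
P = np.eye(4) * 1000.0          # large initial covariance
lam = 1.0                       # forgetting factor (1.0 = no forgetting)
for t in range(1, len(u)):
    phi = np.array([u[t], u[t]**2, u[t-1], u[t-1]**2])
    k = P @ phi / (lam + phi @ P @ phi)               # gain vector
    theta += k * (y[t] - phi @ theta)                 # innovation update
    P = (P - np.outer(k, phi @ P)) / lam              # covariance update

print(theta)    # converges to the true products [1.0, 0.5, 0.7, 0.35]
```

In the noise-free case the estimates converge to the products of the FIR taps and polynomial coefficients; recovering the individual blocks from these products requires an extra factorization step not shown here.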
A systemic approach for modeling biological evolution using Parallel DEVS.
Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo
2015-08-01
A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Kinetic equations modelling wealth redistribution: a comparison of approaches.
Düring, Bertram; Matthes, Daniel; Toscani, Giuseppe
2008-11-01
Kinetic equations modelling the redistribution of wealth in simple market economies are among the major topics in the field of econophysics. We present a unifying approach to the qualitative study for a large variety of such models, which is based on a moment analysis in the related homogeneous Boltzmann equation, and on the use of suitable metrics for probability measures. In consequence, we are able to classify the most important feature of the steady wealth distribution, namely the fatness of the Pareto tail, and the dynamical stability of the latter in terms of the model parameters. Our results apply, e.g., to the market model with risky investments [S. Cordier, L. Pareschi, and G. Toscani, J. Stat. Phys. 120, 253 (2005)], and to the model with quenched saving propensities [A. Chatterjee, B. K. Chakrabarti, and S. S. Manna, Physica A 335, 155 (2004)]. Also, we present results from numerical experiments that confirm the theoretical predictions.
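The model with quenched saving propensities cited in the abstract can be simulated directly by Monte Carlo. A hedged sketch (agent count, step count, and initial conditions are invented); note that each pairwise trade conserves total wealth exactly:

```python
import numpy as np

# Kinetic wealth-exchange sketch in the spirit of the quenched-saving
# model: each agent i keeps a fixed fraction lam[i] of its wealth in a
# trade and splits the rest of the pooled amount randomly.
rng = np.random.default_rng(3)
N, steps = 1000, 100_000
w = np.ones(N)                       # everyone starts with unit wealth
lam = rng.uniform(0, 1, N)           # quenched (fixed) saving propensities

for _ in range(steps):
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    eps = rng.random()
    pot = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]   # amount on the table
    w[i] = lam[i] * w[i] + eps * pot
    w[j] = lam[j] * w[j] + (1 - eps) * pot

print(w.sum())    # total wealth is conserved: stays at N up to rounding
```

The moment analysis in the paper characterizes the stationary distribution such simulations relax to, in particular the Pareto (power-law) tail produced by the heterogeneous saving propensities.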
Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai
2014-01-01
Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it does not receive much attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.
Building Energy Modeling: A Data-Driven Approach
Cui, Can
Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, while most work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated computationally efficient and high-fidelity building energy modeling framework is proposed, concentrating on developing a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms others if amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on the building energy system, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out the noise and renders a more accurate energy forecast. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes 6 widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties, where data fusion on
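The on-line Kalman-filter calibration idea in the last part of the abstract can be sketched in one dimension. This is a hedged illustration with an invented random-walk load model and invented noise levels, not the study's actual building data:

```python
import numpy as np

# 1D Kalman filter fusing noisy meter readings with a running estimate
# of building load, under a random-walk state model x_t = x_{t-1} + w_t.
rng = np.random.default_rng(4)
T = 500
true = np.cumsum(rng.normal(0, 0.1, T)) + 50.0    # slowly drifting load (kW)
meas = true + rng.normal(0, 2.0, T)               # noisy meter readings

q, r = 0.1**2, 2.0**2       # process and measurement noise variances
x, p = meas[0], r           # initial state estimate and its variance
est = np.empty(T)
for t in range(T):
    p = p + q                          # predict step (random walk)
    k = p / (p + r)                    # Kalman gain
    x = x + k * (meas[t] - x)          # update with the new reading
    p = (1 - k) * p
    est[t] = x

rmse_raw = np.sqrt(np.mean((meas - true) ** 2))
rmse_kf = np.sqrt(np.mean((est - true) ** 2))
print(rmse_raw, rmse_kf)    # filtering cuts the error well below 2 kW
```

This is exactly the "filter out the noise" role the abstract assigns to data fusion: the filtered estimate tracks the slow drift while suppressing the 2 kW measurement noise.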
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
A modal approach to modeling spatially distributed vibration energy dissipation.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph
2010-08-01
The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
Validation of models with constant bias: an applied approach
Directory of Open Access Journals (Sweden)
Salvador Medina-Peralta
2014-06-01
Full Text Available Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese for when a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed, and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure, based on the presence of CB, to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than the hypothesis tests for the same purpose.
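A confidence bound for a quantile of the error distribution, the quantity the conclusions emphasize, can be obtained distribution-free from order statistics and the binomial distribution. The sketch below uses simulated errors, not the cattle data from the paper, and is one standard construction rather than necessarily the authors' exact procedure:

```python
import numpy as np
from math import comb

def quantile_upper_bound(errors, p=0.95, conf=0.95):
    """Distribution-free upper confidence bound for the p-quantile of
    |errors|: the smallest order statistic e_(k+1) whose rank satisfies
    P(Binomial(n, p) <= k) >= conf."""
    e = np.sort(np.abs(errors))
    n = len(e)
    cum = 0.0
    for k in range(n):
        cum += comb(n, k) * p**k * (1 - p) ** (n - k)
        if cum >= conf:
            return e[k]          # (k+1)-th order statistic, 0-based index k
    return e[-1]

rng = np.random.default_rng(5)
errors = rng.normal(0.2, 1.0, 200)    # simulated predictions with a CB of 0.2
errors = errors - errors.mean()       # remove the constant bias first
ub = quantile_upper_bound(errors, p=0.95, conf=0.95)
print(ub)   # with 95% confidence, the 0.95-quantile of |error| is below ub
```

As in the abstract, the bias is removed before bounding the error magnitude; the resulting bound plays the role of the paper's 0.575 kg maximum anticipated error.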
Hiemstra, Djoerd
2010-01-01
In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.
Approach to Organizational Structure Modelling in Construction Companies
Directory of Open Access Journals (Sweden)
Ilin Igor V.
2016-01-01
Full Text Available An effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time. Thus it is reasonable to take into account the project orientation of this kind of business while designing a construction company's management system, whose main components are the business process system and the organizational structure. The paper describes a management structure design approach based on the project-oriented nature of construction projects, and proposes a model of the organizational structure for a construction company. Application of the proposed approach will make it possible to assign responsibilities within the organizational structure of construction projects effectively, and thus to shorten the time for project allocation and to ensure smoother project execution. A practical case of using the approach is also provided in the paper.
An integrated modelling approach to estimate urban traffic emissions
Misra, Aarshabh; Roorda, Matthew J.; MacLean, Heather L.
2013-07-01
An integrated modelling approach is adopted to estimate microscale urban traffic emissions. The modelling framework consists of a traffic microsimulation model developed in PARAMICS, a microscopic emissions model (Comprehensive Modal Emissions Model), and two dispersion models, AERMOD and the Quick Urban and Industrial Complex (QUIC). This framework is applied to a traffic network in downtown Toronto, Canada, to evaluate summertime morning-peak traffic emissions of carbon monoxide (CO) and nitrogen oxides (NOx) during five weekdays at a traffic intersection. The model-predicted results are validated against sensor observations, with 100% of the AERMOD-modelled CO concentrations and 97.5% of the QUIC-modelled NOx concentrations within a factor of two of the corresponding observed concentrations. Availability of local estimates of ambient concentration is useful for accurate comparisons of predicted concentrations with observed concentrations. Predicted and sensor-measured concentrations are significantly lower than the hourly threshold Maximum Acceptable Levels for CO (31 ppm, ~90 times lower) and NO2 (0.4 mg/m3, ~12 times lower), within the National Ambient Air Quality Objectives established by Environment Canada.
A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.
Chang, Chia-Wen; Tao, Chin-Wang
2017-09-01
This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm yields a T-S fuzzy model in which a functional antecedent and a state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: one is that the FCRSM has a low computation load because only one input variable is considered in the antecedent part; the other is that the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
Diagnosing Hybrid Systems: a Bayesian Model Selection Approach
McIlraith, Sheila A.
2005-01-01
In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems: models of continuous behavior, interleaved by discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper, we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
A Bayesian Approach for Structural Learning with Hidden Markov Models
Directory of Open Access Journals (Sweden)
Cen Li
2002-01-01
Full Text Available Hidden Markov Models (HMM) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
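The K-means initialization step mentioned in the abstract above can be sketched for one-dimensional observations; the helper below is a generic illustration (its name and the restriction to 1-D data are assumptions, not the paper's code), and the resulting centroids would seed the HMM emission means before likelihood maximization.

```python
import random

def kmeans_init_means(data, k, iters=20, seed=0):
    """K-means clustering of 1-D observations; the cluster centroids
    serve as initial emission means for HMM parameter estimation."""
    rng = random.Random(seed)
    centers = rng.sample(list(data), k)
    for _ in range(iters):
        # Assign each observation to its nearest current center.
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: (x - centers[i]) ** 2)
            clusters[nearest].append(x)
        # Recompute centers; keep the old center if a cluster empties.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```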
Systematic approach to verification and validation: High explosive burn models
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in the electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
A Nonhydrostatic Model Based On A New Approach
Janjic, Z. I.
Considerable experience with nonhydrostatic models has been accumulated on the scales of convective clouds and storms. However, numerical weather prediction (NWP) deals with motions on a much wider range of temporal and spatial scales. Thus, difficulties that may not be significant on the small scales, may become important in NWP applications. Having in mind these considerations, a new approach has been proposed and applied in developing nonhydrostatic models intended for NWP applications. Namely, instead of extending the cloud models to synoptic scales, the hydrostatic approximation is relaxed in a hydrostatic NWP model. In this way the model validity is extended to nonhydrostatic motions, and at the same time favorable features of the hydrostatic formulation are preserved. In order to apply this approach, the system of nonhydrostatic equations is split into two parts: (a) the part that corresponds to the hydrostatic system, except for corrections due to vertical acceleration, and (b) the system of equations that allows computation of the corrections appearing in the first system. This procedure does not require any additional approximation. In the model, "isotropic" horizontal finite differencing is employed that conserves a number of basic and derived dynamical and quadratic quantities. The hybrid pressure-sigma vertical coordinate has been chosen as the primary option. The forward-backward scheme is used for horizontally propagating fast waves, and an implicit scheme is used for vertically propagating sound waves. The Adams-Bashforth scheme is applied for the advection of the basic dynamical variables and for the Coriolis terms. In real data runs, the nonhydrostatic dynamics does not require extra computational boundary conditions at the top. The philosophy of the physical package and possible future developments of physical parameterizations are also reviewed. A two-dimensional model based on the described approach successfully reproduced classical
Infiltration under snow cover: Modeling approaches and predictive uncertainty
Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel
2017-03-01
Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage have substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally dense point measurements or temporally limited, spatially dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically based energy balance approaches. While the gamut of snowmelt models is routinely used to aid in water resource management, a comparison of snowmelt models' predictive uncertainties had previously not been done. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, modified degree-day and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that mid-winter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than snowmelt models suggest over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance
Social model: a new approach of the disability theme.
Bampi, Luciana Neves da Silva; Guilhem, Dirce; Alves, Elioenai Dornelles
2010-01-01
The experience of disability is part of the daily lives of people who have a disease, lesion or corporal limitation. Disability is still understood as personal bad luck; moreover, from the social and political points of view, the disabled are seen as a minority. The aim of this study is to contribute to the knowledge about the experience of disability. The research presents a new approach to the theme: the social model. This approach appeared as an alternative to the medical model of disability, which sees the lesion as the primary cause of social inequality and of the disadvantages experienced by the disabled, ignoring the role of social structures in their oppression and marginalization. The study makes it possible to reflect on how the difficulties and barriers that society imposes on people considered different make disability a reality, and on how they portray the social injustice and the situation of vulnerability lived by excluded groups.
Lattice percolation approach to 3D modeling of tissue aging
Gorshkov, Vyacheslav; Privman, Vladimir; Libert, Sergiy
2016-11-01
We describe a 3D percolation-type approach to modeling of the processes of aging and certain other properties of tissues analyzed as systems consisting of interacting cells. Lattice sites are designated as regular (healthy) cells, senescent cells, or vacancies left by dead (apoptotic) cells. The system is then studied dynamically with the ongoing processes including regular cell dividing to fill vacant sites, healthy cells becoming senescent or dying, and senescent cells dying. Statistical-mechanics description can provide patterns of time dependence and snapshots of morphological system properties. The developed theoretical modeling approach is found not only to corroborate recent experimental findings that inhibition of senescence can lead to extended lifespan, but also to confirm that, unlike 2D, in 3D senescent cells can contribute to tissue's connectivity/mechanical stability. The latter effect occurs by senescent cells forming the second infinite cluster in the regime when the regular (healthy) cell's infinite cluster still exists.
Research on teacher education programs: logic model approach.
Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M
2013-02-01
Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program.
A Variational Approach to the Modeling of MIMO Systems
Directory of Open Access Journals (Sweden)
Jraifi A
2007-01-01
Full Text Available Motivated by the study of the optimization of the quality of service for multiple input multiple output (MIMO) systems in 3G (third generation), we develop a method for modeling the MIMO channel. This method, which uses a statistical approach, is based on a variational form of the usual channel equation, expressed in terms of a scalar variable. The minimum distance of received vectors is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system, as it captures the degree of interference between neighboring vectors. We then use this approach to compute numerically the total probability of error with respect to the signal-to-noise ratio (SNR) and to predict the number of antennas. By fixing the SNR variable to a specific value, we extract information on the optimal number of MIMO antennas.
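The final step described above, computing error probability as a function of SNR under minimum-distance detection, can be sketched by Monte Carlo simulation; the BPSK alphabet, identity channel and function name below are simplifying assumptions for illustration, not the paper's statistical channel model.

```python
import itertools
import random

def mimo_error_rate(snr_db, n_tx=2, trials=2000, seed=1):
    """Monte Carlo estimate of the vector error probability for a toy
    MIMO link: BPSK symbols, identity channel, AWGN, and
    minimum-distance (ML) detection over all candidate vectors."""
    rng = random.Random(seed)
    sigma = 10 ** (-snr_db / 20)  # noise std for unit-energy symbols
    alphabet = list(itertools.product([-1.0, 1.0], repeat=n_tx))
    errors = 0
    for _ in range(trials):
        s = rng.choice(alphabet)
        y = [si + rng.gauss(0.0, sigma) for si in s]  # received vector
        # Minimum-distance detection: pick the closest symbol vector.
        guess = min(alphabet,
                    key=lambda c: sum((yi - ci) ** 2
                                      for yi, ci in zip(y, c)))
        errors += guess != s
    return errors / trials
```

As expected, the estimated error probability falls as SNR grows, which is the curve the paper uses to size the antenna array.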
A relaxation-based approach to damage modeling
Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus
2017-01-01
Material models, including softening effects due to, for example, damage and localization, share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple the local behavior, described, for example, by internal variables, at a spatial level. This can take account of the gradient of the internal variable to yield mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: appropriate modifications of the relaxed (condensed) energy hold the same advantages as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that demonstrate empirically how the new approach works.
Coordination-theoretic approach to modelling grid service composition process
Institute of Scientific and Technical Information of China (English)
Meng Qian; Zhong Liu; Jing Wang; Li Yao; Weiming Zhang
2010-01-01
A grid service composition process is made up of complex coordinative activities. Developing an appropriate model of grid service coordinative activities is an important foundation for grid service composition. Following coordination theory, this paper elaborates the process of grid service composition using UML 2.0 and proposes an approach to modelling the grid service composition process based on coordination theory. This approach helps not only to analyze accurately the task activities and the dependencies among them, but also to improve the adaptability of the grid service orchestration, so as to further realize the connectivity, timeliness, appropriateness and expansibility of the grid service composition.
Innovation Networks New Approaches in Modelling and Analyzing
Pyka, Andreas
2009-01-01
The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.
Understanding complex urban systems multidisciplinary approaches to modeling
Gurr, Jens; Schmidt, J
2014-01-01
Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...
A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model
DEFF Research Database (Denmark)
Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen
2007-01-01
This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties...
CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach
Directory of Open Access Journals (Sweden)
S. Mimouni
2011-01-01
Full Text Available The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces compensate the surface tension force, and the droplets then slide over the wall and form a liquid film. This approach allows taking into account simultaneously the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and the compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN, Saclay). Computational results compare favorably with the experimental data, particularly for the helium and steam volume fractions.
An interdisciplinary approach for earthquake modelling and forecasting
Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.
2016-12-01
Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) becomes extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasts are still far from satisfactory. In most cases, the precursory phenomena were studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
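The conditional intensity described above, a background rate plus a self-exciting term over past events and an external excitation term over past non-seismic observations, can be sketched with exponential kernels; the kernel shapes and parameter values below are assumptions for illustration.

```python
import math

def conditional_intensity(t, event_times, ext_times, mu=0.1,
                          alpha=0.5, beta=1.0, gamma=0.2, delta=1.0):
    """lambda(t) = background rate
                 + self-exciting term over past seismic events
                 + external excitation over past non-seismic observations.
    Exponential decay kernels are assumed for both excitation terms."""
    self_term = sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)
    ext_term = sum(gamma * math.exp(-delta * (t - tj))
                   for tj in ext_times if tj < t)
    return mu + self_term + ext_term
```

With no past events the intensity reduces to the background rate; each recent event or external observation raises it transiently.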
A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION
Ju, H.
2012-01-01
Bridge areas present difficulties for orthophoto generation, and to avoid “collapsed” bridges in the orthoimage, operator assistance is required to create a precise DBM (Digital Bridge Model), which is subsequently used for orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a co...
A Conditional Approach to Panel Data Models with Common Shocks
Directory of Open Access Journals (Sweden)
Giovanni Forchini
2016-01-01
Full Text Available This paper studies the effects of common shocks on the OLS estimators of the slopes’ parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on using results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.
Modeling software with finite state machines a practical approach
Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter
2006-01-01
Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
Ionization coefficient approach to modeling breakdown in nonuniform geometries.
Energy Technology Data Exchange (ETDEWEB)
Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.
2003-11-01
This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.
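At its core, the ionization coefficient approach integrates the coefficient along a field line of the nonuniform gap and compares the result against a critical value; the sketch below is a generic numerical version of that integral (the function names and the trapezoidal rule are assumptions, not the report's code).

```python
def avalanche_integral(alpha, field, a, b, n=1000):
    """Trapezoidal integral of the ionization coefficient alpha(E)
    along a field line x in [a, b], with E = field(x). Breakdown is
    typically predicted when this exceeds a critical value K
    (electron multiplication ~ e**K); in nonuniform geometries the
    criterion helps distinguish corona onset from spark."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [alpha(field(x)) for x in xs]
    return h * (ys[0] / 2.0 + sum(ys[1:-1]) + ys[-1] / 2.0)
```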
A Data Mining Approach to Modelling of Water Supply Assets
DEFF Research Database (Denmark)
Babovic, V.; Drecourt, J.; Keijzer, M.
2002-01-01
supply assets are mainly situated underground, and therefore not visible and under the influence of various highly unpredictable forces. This paper proposes the use of advanced data mining methods in order to determine the risks of pipe bursts. For example, analysis of the database of already occurred...... with the choice of pipes to be replaced, the outlined approach opens completely new avenues in asset modelling. The condition of an asset such as a water supply network deteriorates with age. With reliable risk models, addressing the evolution of risk with aging asset, it is now possible to plan optimal...
AN APPROACH IN MODELING TWO-DIMENSIONAL PARTIALLY CAVITATING FLOW
Institute of Scientific and Technical Information of China (English)
[No author listed]
2002-01-01
An approach to modeling viscous, unsteady, partially cavitating flows around lifting bodies is presented. By employing a one-fluid Navier-Stokes solver, the algorithm is proved to be able to handle two-dimensional laminar cavitating flows at moderate Reynolds number. Based on the state equation of the water-vapor mixture, the constitutive relations between densities and pressures are established. To numerically simulate the cavity wall, different pseudo-transition density models are assumed. The finite-volume method is adopted, and the algorithm can be extended to three-dimensional cavitating flows.
A transformation approach to modelling multi-modal diffusions
DEFF Research Database (Denmark)
Forman, Julie Lyng; Sørensen, Michael
2014-01-01
when the diffusion is observed with additional measurement error. The new approach is applied to molecular dynamics data in the form of a reaction coordinate of the small Trp-zipper protein, from which the folding and unfolding rates of the protein are estimated. Because the diffusion coefficient...... is state-dependent, the new models provide a better fit to this type of protein folding data than the previous models with a constant diffusion coefficient, particularly when the effect of errors with a short time-scale is taken into account....
THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS
Directory of Open Access Journals (Sweden)
O. Chernyak
2016-12-01
Full Text Available The paper considers and presents a synthesis of theoretical models of balance of payment crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of a balance of payment crisis, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance of payments crisis was defined using the signal approach. With the help of a minimization function, threshold values of the indicators were selected, the crossing of which signals an increase in the probability of a balance of payment crisis.
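The two steps described above, an Exchange Market Pressure index followed by threshold-based crisis signals, can be sketched as follows; the unit weights and the fixed multiplier `k` are simplifying assumptions, whereas the paper selects thresholds by minimizing a loss function.

```python
from statistics import mean, stdev

def emp_index(d_exchange, d_reserves, d_interest, w=(1.0, 1.0, 1.0)):
    """One common EMP formulation: depreciation and interest-rate
    rises raise pressure, reserve gains lower it. Weights are often
    inverse component variances; unit weights are assumed here."""
    return [w[0] * e - w[1] * r + w[2] * i
            for e, r, i in zip(d_exchange, d_reserves, d_interest)]

def crisis_signals(emp, k=1.5):
    """Signal approach: flag periods where pressure exceeds its mean
    by k standard deviations (k fixed here for illustration)."""
    mu, sd = mean(emp), stdev(emp)
    return [x > mu + k * sd for x in emp]
```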
Laser modeling a numerical approach with algebra and calculus
Csele, Mark Steven
2014-01-01
Offering a fresh take on laser engineering, Laser Modeling: A Numerical Approach with Algebra and Calculus presents algebraic models and traditional calculus-based methods in tandem to make concepts easier to digest and apply in the real world. Each technique is introduced alongside a practical, solved example based on a commercial laser. Assuming some knowledge of the nature of light, emission of radiation, and basic atomic physics, the text: explains how to formulate an accurate gain threshold equation as well as determine small-signal gain; discusses gain saturation and introduces a novel pass
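A gain threshold calculation of the kind described can be sketched for a simple two-mirror cavity using the standard relation g_th = α + ln(1/(R1·R2))/(2L), where round-trip gain balances distributed loss plus mirror out-coupling; the function and parameter names are illustrative, not taken from the book.

```python
import math

def threshold_gain(alpha, length, r1, r2):
    """Gain threshold (1/m) for a two-mirror laser cavity:
    distributed loss alpha (1/m) plus the out-coupling loss of
    mirrors with reflectivities r1, r2 over cavity length (m)."""
    return alpha + math.log(1.0 / (r1 * r2)) / (2.0 * length)
```

For example, a lossless 0.5 m cavity with a perfect rear mirror and a 90% output coupler needs a gain of about 0.105 per metre to reach threshold.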
Noether symmetry approach in f(R)-tachyon model
Energy Technology Data Exchange (ETDEWEB)
Jamil, Mubasher, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), H-12, Islamabad (Pakistan); Mahomed, F.M., E-mail: Fazal.Mahomed@wits.ac.za [Centre for Differential Equations, Continuum Mechanics and Applications, School of Computational and Applied Mathematics, University of the Witwatersrand, Wits 2050 (South Africa); Momeni, D., E-mail: d.momeni@yahoo.com [Department of Physics, Faculty of Sciences, Tarbiat Moa' llem University, Tehran (Iran, Islamic Republic of)
2011-08-26
In this Letter by utilizing the Noether symmetry approach in cosmology, we attempt to find the tachyon potential via the application of this kind of symmetry to a flat Friedmann-Robertson-Walker (FRW) metric. We reduce the system of equations to simpler ones and obtain the general class of the tachyon's potential function and f(R) functions. We have found that the Noether symmetric model results in a power law f(R) and an inverse fourth power potential for the tachyonic field. Further we investigate numerically the cosmological evolution of our model and show explicitly the behavior of the equation of state crossing the cosmological constant boundary.
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer wind drag, bottom drag, and internal mixing coefficients.
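The surrogate-plus-Bayesian-update workflow described in the talk can be sketched on a toy problem. Everything below is an assumption for illustration: a cheap polynomial regression stands in for the spectral surrogate, a one-dimensional stand-in replaces the ocean model, and a random-walk Metropolis sampler performs the MCMC update.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(c):
    """Stand-in for an ocean model run with parameter c (assumed)."""
    return np.sin(c) + 0.5 * c

# 1) Non-intrusive surrogate: fit a cubic polynomial to a few "model runs".
train_c = np.linspace(0.0, 2.0, 8)
coeffs = np.polyfit(train_c, expensive_model(train_c), 3)

def surrogate(c):
    return np.polyval(coeffs, c)

# 2) Bayesian update via random-walk Metropolis, evaluated on the surrogate.
c_true = 1.3
obs = expensive_model(c_true) + rng.normal(0, 0.05, size=20)

def log_post(c):
    if not 0.0 <= c <= 2.0:            # uniform prior on [0, 2]
        return -np.inf
    return -0.5 * np.sum((obs - surrogate(c)) ** 2) / 0.05 ** 2

chain, c = [], 0.5
lp = log_post(c)
for _ in range(4000):
    prop = c + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        c, lp = prop, lp_prop
    chain.append(c)
posterior_mean = float(np.mean(chain[1000:]))
```

The point of the construction is that all 4000 posterior evaluations hit the cheap surrogate; the expensive model is run only 8 times to train it.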
Modeling fabrication of nuclear components: An integrative approach
Energy Technology Data Exchange (ETDEWEB)
Hench, K.W.
1996-08-01
Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
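The first-stage formulation (facility layout as a quadratic assignment problem) can be illustrated with a toy instance. The flow and distance matrices below are invented, and a simple best-pairwise-swap descent stands in for the evolutionary heuristic used in the dissertation.

```python
import itertools

# Toy data (assumed): flow[i][j] = exposure-weighted traffic between
# operations i and j, dist[a][b] = distance between candidate locations.
flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]

def cost(perm):
    """QAP objective for an assignment perm[i] = location of operation i:
    total flow-weighted travel (a proxy for exposure and material movement)."""
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(len(perm)) for j in range(len(perm)))

def swap_descent(perm):
    """Repeatedly apply improving pairwise swaps until none remains."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            cand = perm.copy()
            cand[i], cand[j] = cand[j], cand[i]
            if cost(cand) < cost(perm):
                perm, improved = cand, True
    return perm

best = swap_descent([0, 1, 2])
```

At realistic problem sizes the swap neighbourhood gets trapped in local optima, which is why population-based heuristics such as the evolutionary method mentioned in the abstract are preferred.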
Injury prevention risk communication: A mental models approach
DEFF Research Database (Denmark)
Austin, Laurel Cecelia; Fischhoff, Baruch
2012-01-01
Individuals' decisions and behaviour can play a critical role in determining both the probability and severity of injury. Behavioural decision research studies peoples' decision-making processes in terms comparable to scientific models of optimal choices, providing a basis for focusing...... interventions on the most critical opportunities to reduce risks. That research often seeks to identify the ‘mental models’ that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people...... and create an expert model of the risk situation, interviewing lay people to elicit their comparable mental models, and developing and evaluating communication interventions designed to close the gaps between lay people and experts. This paper reviews the theory and method behind this research stream...
An Integrated Approach to Flexible Modelling and Animated Simulation
Institute of Scientific and Technical Information of China (English)
Li Shuliang; Wu Zhenye
1994-01-01
Based on the software support of SIMAN/CINEMA, this paper presents an integrated approach to flexible modelling and simulation with animation. The methodology provides a structured way of integrating mathematical and logical models, statistical experimentation, and statistical analysis with computer animation. Within this methodology, an animated simulation study is separated into six different activities: simulation objectives identification, system model development, simulation experiment specification, animation layout construction, real-time simulation and animation run, and output data analysis. These six activities are objectives driven, relatively independent, and integrated through software organization and simulation files. The key ideas behind this methodology are objectives orientation, modelling flexibility, simulation and animation integration, and application tailorability. Though the methodology is closely related to SIMAN/CINEMA, it can be extended to other software environments.
A reservoir simulation approach for modeling of naturally fractured reservoirs
Directory of Open Access Journals (Sweden)
H. Mohammadi
2012-12-01
Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. The main rock properties of each gridblock were defined for two different types of gridblocks, called matrix and fracture gridblocks. These two gridblock types differed in their porosity and permeability values, both of which were higher for fracture gridblocks than for matrix gridblocks. The model was solved using the implicit finite difference method. Results showed an improvement over the Warren and Root model, especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point, as predicted by other investigators. The effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.
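A minimal 1-D sketch of the gridblock idea (all values assumed, not from the paper): alternate high-permeability "fracture" cells with low-permeability "matrix" cells and step the single-phase pressure equation with an implicit finite-difference scheme, with a producing well pinned at one end.

```python
import numpy as np

n, dx, dt = 20, 1.0, 0.1
cells = np.arange(n)
k = np.where(cells % 5 == 0, 100.0, 1.0)        # fracture vs matrix permeability
phi_ct = np.where(cells % 5 == 0, 0.3, 0.1)     # storage coefficient (assumed)

p = np.full(n, 3000.0)                          # initial reservoir pressure
p_well = 1000.0                                 # producing well at cell 0

for _ in range(50):
    A = np.zeros((n, n))
    b = phi_ct / dt * p                         # accumulation from old pressure
    for i in range(n):
        A[i, i] += phi_ct[i] / dt
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                # harmonic-mean transmissibility between neighbouring cells
                t = 2.0 / (dx / k[i] + dx / k[j]) / dx
                A[i, i] += t
                A[i, j] -= t
    A[0, 0] += 1e6                              # large penalty pins cell 0
    b[0] += 1e6 * p_well                        # at the well pressure
    p = np.linalg.solve(A, b)                   # implicit step
```

The implicit step is unconditionally stable, which is why it is the standard choice for the strongly contrasting matrix/fracture diffusivities described in the abstract.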
Model-based approach for elevator performance estimation
Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.
2016-02-01
In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach to monitoring employs a Kalman filter as an observer, which estimates the elevator car acceleration, the quantity that determines ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated from the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
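The observer idea can be sketched with a bare constant-acceleration Kalman filter that recovers car acceleration from a noisy position encoder alone. The paper's full electromechanical model and field-oriented control are omitted, and all noise covariances and ride parameters below are assumptions.

```python
import numpy as np

dt = 0.01
F = np.array([[1, dt, 0.5 * dt**2],    # state: [position, velocity, acceleration]
              [0, 1, dt],
              [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0]])        # only position (encoder) is measured
Q = 1e-4 * np.eye(3)                   # process noise (assumed)
R = np.array([[1e-6]])                 # encoder noise (assumed)

# Synthetic "ride": constant 0.8 m/s^2 acceleration plus encoder noise.
rng = np.random.default_rng(1)
true_acc = 0.8
t = np.arange(0.0, 2.0, dt)
z = 0.5 * true_acc * t**2 + rng.normal(0, 1e-3, t.size)

x, P = np.zeros(3), np.eye(3)
for zk in z:
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (np.array([zk]) - H @ x)       # update with encoder sample
    P = (np.eye(3) - K @ H) @ P
est_acc = float(x[2])
```

Ride-quality indicators (peak acceleration, jerk, vibration levels) can then be computed from the estimated acceleration series instead of a dedicated accelerometer.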
A Model Independent Approach to (p)Reheating
Özsoy, Ogan; Sinha, Kuver; Watson, Scott
2015-01-01
In this note we propose a model independent framework for inflationary (p)reheating. Our approach is analogous to the Effective Field Theory of Inflation, however here the inflaton oscillations provide an additional source of (discrete) symmetry breaking. Using the Goldstone field that non-linearly realizes time diffeomorphism invariance we construct a model independent action for both the inflaton and reheating sectors. Utilizing the hierarchy of scales present during the reheating process we are able to recover known results in the literature in a simpler fashion, including the presence of oscillations in the primordial power spectrum. We also construct a class of models where the shift symmetry of the inflaton is preserved during reheating, which helps alleviate past criticisms of (p)reheating in models of Natural Inflation. Extensions of our framework suggest the possibility of analytically investigating non-linear effects (such as rescattering and back-reaction) during thermalization without resorting t...
A model-based approach to human identification using ECG
Homer, Mark; Irvine, John M.; Wendelken, Suzanne
2009-05-01
Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also provide a unique biometric signature. Current techniques for identification using ECG rely on empirical methods for extracting features from the ECG signal. This paper presents an alternative approach based on a time-domain model of the ECG trace. Because Auto-Regressive Integrated Moving Average (ARIMA) models form a rich class of descriptors for representing the structure of periodic time series data, they are well-suited to characterizing the ECG signal. We present a method for modeling the ECG, extracting features from the model representation, and identifying individuals using these features.
Computer Modeling of Violent Intent: A Content Analysis Approach
Energy Technology Data Exchange (ETDEWEB)
Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.
2014-01-03
We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
A Model-based Prognostics Approach Applied to Pneumatic Valves
Directory of Open Access Journals (Sweden)
Matthew J. Daigle
2011-01-01
Full Text Available Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
A Model-Based Prognostics Approach Applied to Pneumatic Valves
Daigle, Matthew J.; Goebel, Kai
2011-01-01
Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
Systems pharmacology modeling: an approach to improving drug safety.
Bai, Jane P F; Fontana, Robert J; Price, Nathan D; Sangar, Vineet
2014-01-01
Advances in systems biology in conjunction with the expansion in knowledge of drug effects and diseases present an unprecedented opportunity to extend traditional pharmacokinetic and pharmacodynamic modeling/analysis to conduct systems pharmacology modeling. Many drugs that cause liver injury and myopathies have been studied extensively. Mitochondrion-centric systems pharmacology modeling is important since drug toxicity across a large number of pharmacological classes converges to mitochondrial injury and death. Approaches to systems pharmacology modeling of drug effects need to consider drug exposure, organelle and cellular phenotypes across all key cell types of human organs, organ-specific clinical biomarkers/phenotypes, gene-drug interaction and immune responses. Systems modeling approaches, that leverage the knowledge base constructed from curating a selected list of drugs across a wide range of pharmacological classes, will provide a critically needed blueprint for making informed decisions to reduce the rate of attrition for drugs in development and increase the number of drugs with an acceptable benefit/risk ratio.
Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach
Directory of Open Access Journals (Sweden)
Alistair McNair Senior
2016-01-01
Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
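A pure-Python toy of the pipeline, with assumed contest dynamics (better-nourished agents tend to win, and wins feed back into nutritional state) and a deliberately simple network metric, the proportion of contests won, standing in for richer social-network measures:

```python
import random

random.seed(3)
# Each agent has a nutritional state; in pairwise contests over food the
# better-nourished agent wins with higher probability (assumed dynamics).
state = {a: random.uniform(0.4, 0.6) for a in "ABCDEF"}
wins = {a: {b: 0 for b in state} for a in state}

for _ in range(300):
    a, b = random.sample(list(state), 2)
    p_a = state[a] / (state[a] + state[b])
    winner, loser = (a, b) if random.random() < p_a else (b, a)
    wins[winner][loser] += 1                 # directed edge winner -> loser
    state[winner] += 0.01                    # winner monopolises the food

def dominance(agent):
    """Proportion of contests won, a crude dominance-network metric."""
    won = sum(wins[agent].values())
    lost = sum(wins[other][agent] for other in wins)
    return won / (won + lost) if won + lost else 0.0

ranking = sorted(state, key=dominance, reverse=True)
```

In the full framework the `wins` matrix would be analysed as a weighted directed network (e.g. David's scores, centrality), and those metrics compared between simulated and empirical hierarchies.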
THE MODEL OF EXTERNSHIP ORGANIZATION FOR FUTURE TEACHERS: QUALIMETRIC APPROACH
Directory of Open Access Journals (Sweden)
Taisiya A. Isaeva
2015-01-01
Full Text Available The aim of the paper is to present the author's externship model for bachelors – future teachers of vocational training. The model is developed from the standpoint of the qualimetric approach and provides pedagogical training. Methods. The work is based on an analysis of the literature on externship organization for students in higher education and applies SWOT-analysis techniques to pedagogical training. The method of group expert evaluation is the main method of pedagogical qualimetry. Structural components of the professional pedagogical competency of students – future teachers – are defined, which makes it possible to determine a development level and assessment criteria for mastering the programme «Vocational training (branch-wise)». Results. The article interprets the concept «pedagogical training» and states its basic organization principles during students' practice. The methods of expert group formation, self-assessment and personal data, are presented. Scientific novelty. An externship organization model for future teachers is developed. This model is based on pedagogical training, using the qualimetric approach and SWOT-analysis techniques. The proposed criterion-based assessment procedures make it possible to determine the development levels of professional and pedagogical competency. Practical significance. The model has been introduced into the pedagogical training component of the educational process at Kalashnikov Izhevsk State Technical University, and can be used in other similar educational establishments.
Spatiotemporal infectious disease modeling: a BME-SIR approach.
Angulo, Jose; Yu, Hwa-Lung; Langousis, Andrea; Kolovos, Alexander; Wang, Jinfeng; Madrid, Ana Esther; Christakos, George
2013-01-01
This paper is concerned with the modeling of infectious disease spread in a composite space-time domain under conditions of uncertainty. We focus on stochastic modeling that accounts for basic mechanisms of disease distribution and multi-sourced in situ uncertainties. Starting from the general formulation of population migration dynamics and the specification of transmission and recovery rates, the model studies the functional formulation of the evolution of the fractions of susceptible-infected-recovered individuals. The suggested approach is capable of: a) modeling population dynamics within and across localities, b) integrating the disease representation (i.e. susceptible-infected-recovered individuals) with observation time series at different geographical locations and other sources of information (e.g. hard and soft data, empirical relationships, secondary information), and c) generating predictions of disease spread and associated parameters in real time, while considering model and observation uncertainties. Key aspects of the proposed approach are illustrated by means of simulations (i.e. synthetic studies), and a real-world application using hand-foot-mouth disease (HFMD) data from China.
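Stripped of the BME machinery for space and uncertainty, the deterministic susceptible-infected-recovered core of such models reduces to three coupled rates. A minimal Euler integration, with all parameter values assumed, is:

```python
def sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    """Euler-integrate the classic SIR fractions:
    ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i, dr/dt = gamma*i."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i * dt     # transmission
        new_rec = gamma * i * dt        # recovery
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Assumed rates: R0 = beta/gamma = 2.5, 1% initially infected.
s, i, r = sir(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, r0=0.0)
```

The paper's contribution sits on top of this skeleton: making the transmission and recovery rates spatially varying and uncertain, and conditioning the trajectories on multi-sourced observations.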
Approaching the other: Investigation of a descriptive belief revision model
Directory of Open Access Journals (Sweden)
Spyridon Stelios
2016-12-01
Full Text Available When an individual (a hearer) is confronted with an opinion expressed by another individual (a speaker) that differs from her own only in its degree of belief, how will she react? In trying to answer that question, this paper reintroduces and investigates a descriptive belief revision model designed to measure approaches. Parameters of the model are the hearer's credibility account of the speaker, the initial difference between the hearer's and speaker's degrees of belief, and the hearer's resistance to change. Within an interdisciplinary framework, two empirical studies were conducted, and a comparison was carried out between empirically recorded revisions and the revisions predicted by the model. Results showed that the theoretical model is strongly supported. An interesting finding is the measurement of an “unexplainable behaviour” that is classified neither as repulsion nor as approach. At a second level of analysis, the model is compared to the Bayesian framework of inference. Structural differences, and evidence for the superior descriptive adequacy of the former, were highlighted.
Generalized linear models with coarsened covariates: a practical Bayesian approach.
Johnson, Timothy R; Wiest, Michelle M
2014-06-01
Coarsened covariates are a common and sometimes unavoidable phenomenon encountered in statistical modeling. Covariates are coarsened when their values or categories have been grouped. This may be done to protect privacy or to simplify data collection or analysis when researchers are not aware of their drawbacks. Analyses with coarsened covariates based on ad hoc methods can compromise the validity of inferences. One valid method for accounting for a coarsened covariate is to use a marginal likelihood derived by summing or integrating over the unknown realizations of the covariate. However, algorithms for estimation based on this approach can be tedious to program and can be computationally expensive. These are significant obstacles to their use in practice. To overcome these limitations, we show that when expressed as a Bayesian probability model, a generalized linear model with a coarsened covariate can be posed as a tractable missing data problem where the missing data are due to censoring. We also show that this model is amenable to widely available general-purpose software for simulation-based inference for Bayesian probability models, providing researchers a very practical approach for dealing with coarsened covariates.
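The marginal-likelihood idea can be sketched for a toy logistic model in which the covariate is reported only as a unit interval. The grid integral below stands in for the sum/integral over the unobserved covariate values; all names, the simulation setup, and the crude grid search are illustrative, not the paper's Bayesian machinery.

```python
import math
import random

def logistic(t):
    return 1.0 / (1.0 + math.exp(-t))

def marg_loglik(b, data, grid=20):
    """Marginal log-likelihood: each record (y, lo, hi) contributes the
    average likelihood over a uniform grid of x values inside [lo, hi)."""
    ll = 0.0
    for y, lo, hi in data:
        xs = [lo + (hi - lo) * (k + 0.5) / grid for k in range(grid)]
        p = sum(logistic(b * x) if y else 1 - logistic(b * x) for x in xs) / grid
        ll += math.log(p)
    return ll

# Simulate y ~ Bernoulli(logistic(1.5 * x)), then coarsen x to unit intervals.
random.seed(4)
data = []
for _ in range(500):
    x = random.uniform(-2, 2)
    y = 1 if random.random() < logistic(1.5 * x) else 0
    lo = math.floor(x)
    data.append((y, lo, lo + 1))

# Crude grid search over the slope; a real analysis would sample or optimize.
b_hat = max((b / 10 for b in range(0, 40)), key=lambda b: marg_loglik(b, data))
```

Because x is uniform within each reported interval here, the grid average is the correct marginalization; naively plugging in interval midpoints would instead attenuate the slope.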
Object-Oriented Approach to Modeling Units of Pneumatic Systems
Directory of Open Access Journals (Sweden)
Yu. V. Kyurdzhiev
2014-01-01
Full Text Available The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on an analysis of the calculation schemes of pneumatic system aggregates, two basic objects, namely a flow cavity and a material point, were identified. The basic interactions of these objects are defined. Cavity-cavity interaction: exchange of matter and energy through mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and displacement intervals. The authors have developed mathematical models of the basic objects and interactions; the models and element interactions are implemented in object-oriented code. The mathematical models of the elements of a PU design scheme are implemented in classes derived from the base class. These classes implement models of a flow cavity, piston, diaphragm, short channel, diaphragm opened according to a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the PU design scheme element models is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. the calculation of one coefficient of the method. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list, and every fourth iteration advances to the next integration step. The calculation stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the proposed method offers easy enhancement, code reuse, and high reliability
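The object scheme the abstract describes can be sketched as follows. The class names mirror the abstract, but the dynamics, parameter values, and stroke limit are invented for illustration; for brevity each object performs a whole RK4 step per tact rather than one coefficient at a time.

```python
class Element:
    """Base class: holds state, supplies RK4 integration and a shutdown flag."""
    def __init__(self, state):
        self.state, self.shutdown = state, False

    def deriv(self, state, t):
        raise NotImplementedError

    def rk4_step(self, t, dt):
        s = self.state
        k1 = self.deriv(s, t)
        k2 = self.deriv([x + dt / 2 * k for x, k in zip(s, k1)], t + dt / 2)
        k3 = self.deriv([x + dt / 2 * k for x, k in zip(s, k2)], t + dt / 2)
        k4 = self.deriv([x + dt * k for x, k in zip(s, k3)], t + dt)
        self.state = [x + dt / 6 * (a + 2 * b + 2 * c + d)
                      for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

class FlowCavity(Element):
    """Cavity pressure relaxing toward a 6 bar supply (assumed dynamics)."""
    def deriv(self, state, t):
        return [(6e5 - state[0]) / 0.5]

class Piston(Element):
    """Material point [position, velocity] driven by the cavity pressure."""
    def __init__(self, cavity):
        super().__init__([0.0, 0.0])
        self.cavity = cavity                       # cavity-point interaction

    def deriv(self, state, t):
        x, v = state
        force = self.cavity.state[0] * 1e-4 - 5.0  # p*A minus load (assumed)
        return [v, force / 2.0]                    # mass 2 kg (assumed)

    def rk4_step(self, t, dt):
        super().rk4_step(t, dt)
        if self.state[0] >= 0.1:                   # stroke limit reached
            self.state[0], self.shutdown = 0.1, True

cavity = FlowCavity([1e5])
elements = [cavity, Piston(cavity)]                # unidirectional object list
t, dt = 0.0, 1e-4
while not any(e.shutdown for e in elements) and t < 1.0:
    for e in elements:                             # one tact per object
        e.rk4_step(t, dt)
    t += dt
```

New element types (diaphragm, spring, friction, ...) drop in by subclassing `Element`, which is exactly the extensibility benefit the abstract claims.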
Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches
Directory of Open Access Journals (Sweden)
Sudin eBhattacharya
2012-12-01
Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of ‘toxicity pathways’ is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity), a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understanding the perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of the hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of the intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsymTM) of drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.
A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model
Energy Technology Data Exchange (ETDEWEB)
Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-11
This manuscript briefly describes a statistical approach to generating synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a purely stochastic approach.
ABOUT COMPLEX APPROACH TO MODELLING OF TECHNOLOGICAL MACHINES FUNCTIONING
Directory of Open Access Journals (Sweden)
A. A. Honcharov
2015-01-01
Full Text Available Problems arise in the process of designing, producing and investigating a complicated technological machine. These problems concern not only the properties of particular types of equipment but also the regularities governing the functioning of the control object as a whole. A technological machine is treated as a technological complex in which a control system (or controlling device) and a controlled object can be distinguished. The paper analyzes a number of existing approaches to constructing models of controlling devices and their functioning. A complex model of technological machine operation is proposed, that is, a model of the functioning of the controlling device and the controlled object of the technological machine. In this case the models of the controlling device and the controlled object can be represented as a combination of aggregates (elements of these models). The paper describes a conception for realizing the complex model of a technological machine as a model of the interaction of units (elements) in the controlling device and the controlled object. When a control action is applied to the controlling device of the technological machine, its modelling is executed at an algorithmic or logic level, the obtained output signals are interpreted as events, and information about them is transferred to the executive mechanisms. The proposed scheme of aggregate integration treats element models as object classes, and the integration scheme is presented as a combination of object property values (a set of input and output contacts) and a combination of object interactions (in the form of an integration operator). Spawning descendants of a parent object of the technological machine model and creating their copies in various parts of a project is one of the most important means of distributed technological machine modelling that makes it possible to develop complicated models of
Quantitative versus qualitative modeling: a complementary approach in ecosystem study.
Bondavalli, C; Favilla, S; Bodini, A
2009-02-01
Natural disturbance or human perturbation act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interaction. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form that can be used in qualitative analysis is described in this paper, and it takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative positions and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
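The quantitative counterpart of loop analysis can be illustrated with a standard press-perturbation calculation: for a quantified community (interaction) matrix A, the responses of all species to sustained inputs are given by -inv(A), and once the magnitudes are specified the signs are unambiguous. The three-species chain below is an assumed example, not taken from the paper.

```python
import numpy as np

# A[i][j] = per-capita effect of species j on the growth of species i,
# for an assumed resource -> consumer -> predator chain.
A = np.array([
    [-1.0, -0.5,  0.0],   # resource: self-damped, consumed by the consumer
    [ 0.4, -0.2, -0.6],   # consumer: eats resource, eaten by predator
    [ 0.0,  0.5, -0.1],   # predator: eats consumer, self-damped
])

# Column j of -inv(A): change in every species' equilibrium abundance
# when species j receives a sustained positive input (a "press").
response = -np.linalg.inv(A)
signs = np.sign(response)
```

With the magnitudes fixed, the classic trophic-cascade signs emerge unambiguously: enriching the resource boosts the predator, and pressing the predator raises the resource while depressing the consumer.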
A multi-model approach to X-ray pulsars
Directory of Open Access Journals (Sweden)
Schönherr G.
2014-01-01
Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad band continuum, the formation of cyclotron resonance scattering features (CRSFs) or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems’ overall geometry are unconstrained, and different models are often based on different fundamental assumptions, while everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry) and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry of the light bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and radiation transfer calculations of CRSFs, as well as a General Relativity (GR) light bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a
A participatory modelling approach to developing a numerical sediment dynamics model
Jones, Nicholas; McEwen, Lindsey; Parker, Chris; Staddon, Chad
2016-04-01
Fluvial geomorphology is recognised as an important consideration in policy and legislation in the management of river catchments. Despite this recognition, limited knowledge exchange occurs between scientific researchers and river management practitioners. An example of this can be found within the limited uptake of numerical models of sediment dynamics by river management practitioners in the United Kingdom. The uptake of these models amongst the applied community is important as they have the potential to articulate how, at the catchment-scale, the impacts of management strategies of land-use change affect sediment dynamics and resulting channel quality. This paper describes and evaluates a new approach which involves river management stakeholders in an iterative and reflexive participatory modelling process. The aim of this approach was to create an environment for knowledge exchange between the stakeholders and the research team in the process of co-constructing a model. This process adopted a multiple case study approach, involving four groups of river catchment stakeholders in the United Kingdom. These stakeholder groups were involved in several stages of the participatory modelling process including: requirements analysis, model design, model development, and model evaluation. Stakeholders have provided input into a number of aspects of the modelling process, such as: data requirements, user interface, modelled processes, model assumptions, model applications, and model outputs. This paper will reflect on this process, in particular: the innovative methods used, data generated, and lessons learnt.
An implicit approach to model plant infestation by insect pests.
Lopes, Christelle; Spataro, Thierry; Doursat, Christophe; Lapchin, Laurent; Arditi, Roger
2007-09-07
Various spatial approaches were developed to study the effect of spatial heterogeneities on population dynamics. We present in this paper a flux-based model to describe an aphid-parasitoid system in a closed and spatially structured environment, i.e. a greenhouse. Derived from previous work and adapted to host-parasitoid interactions, our model represents the level of plant infestation as a continuous variable corresponding to the number of plants bearing a given density of pests at a given time. The variation of this variable is described by a partial differential equation. It is coupled to an ordinary differential equation and a delay-differential equation that describe the parasitized host population and the parasitoid population, respectively. We have applied our approach to the pest Aphis gossypii and to one of its parasitoids, Lysiphlebus testaceipes, in a melon greenhouse. Numerical simulations showed that, regardless of the number and distribution of hosts in the greenhouse, the aphid population is slightly larger if parasitoids display a type III rather than a type II functional response. However, the population dynamics depend on the initial distribution of hosts and the initial density of parasitoids released, which is interesting for biological control strategies. Sensitivity analysis showed that the delay in the parasitoid equation and the growth rate of the pest population are crucial parameters for predicting the dynamics. We demonstrate here that such a flux-based approach generates relevant predictions with a more synthetic formalism than a common plant-by-plant model. We also explain how this approach can be better adapted to test different management strategies and to manage crops of several greenhouses.
A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model
2007-06-01
12th ICCRTS "Adapting C2 to the 21st Century". Lenahan identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process models.
Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces
Energy Technology Data Exchange (ETDEWEB)
Lomov, I; Antoun, T; Vorobiev, O
2009-12-16
Accurate representation of discontinuities such as joints and faults is a key ingredient for high-fidelity modeling of shock propagation in geologic media. The following study was done to improve the treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes can suffer from tangling and decreased time-step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment, the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L, which uses an explicit treatment of joints via common-plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. In the
Fugacity superposition: a new approach to dynamic multimedia fate modeling.
Hertwich, E G
2001-08-01
The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial condition problems in dynamic models, the initial inventories can be separated, e.g. by a compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial conditions problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
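The superposition property described above holds for any linear time-invariant compartment model and is easy to demonstrate numerically. The sketch below uses an invented two-compartment transfer matrix (not taken from any specific Mackay-type model) and shows that the response to a combined initial release equals the sum of the single-compartment responses:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-compartment (air, water) first-order transfer/transformation
# matrix K (1/day); values are invented for illustration only.
K = np.array([[-0.30,  0.02],
              [ 0.10, -0.05]])

def inventory(x0, t):
    """Inventory at time t for initial condition x0, no emissions:
    x(t) = expm(K * t) @ x0 (level IV initial-condition problem)."""
    return expm(K * t) @ x0

t = 5.0
release_air = np.array([100.0, 0.0])    # pulse released to air only
release_water = np.array([0.0, 50.0])   # pulse released to water only

# Superposition: the response to the combined release equals the sum of
# the separate single-compartment responses, so the differential
# equations only ever need to be solved once per compartment.
combined = inventory(release_air + release_water, t)
summed = inventory(release_air, t) + inventory(release_water, t)
```

For time-varying emissions, the same single-compartment solutions would be superimposed through a convolution integral rather than a plain sum.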
A novel approach to modeling spacecraft spectral reflectance
Willison, Alexander; Bédard, Donald
2016-10-01
Simulated spectrometric observations of unresolved resident space objects are required for the interpretation of quantities measured by optical telescopes. This allows for their characterization as part of regular space surveillance activity. A peer-reviewed spacecraft reflectance model is necessary to help improve the understanding of characterization measurements. With this objective in mind, a novel approach to model spacecraft spectral reflectance as an overall spectral bidirectional reflectance distribution function (sBRDF) is presented. A spacecraft's overall sBRDF is determined using its triangular-faceted computer-aided design (CAD) model and the empirical sBRDF of its homogeneous materials. The CAD model is used to determine the proportional contribution of each homogeneous material to the overall reflectance. Each empirical sBRDF is contained in look-up tables developed from measurements made over a range of illumination and reflection geometries using simple interpolation and extrapolation techniques. A demonstration of the spacecraft reflectance model is provided through simulation of an optical ground truth characterization using the Canadian Advanced Nanospace eXperiment-1 Engineering Model nanosatellite as the subject. Validation of the reflectance model is achieved through a qualitative comparison of simulated and measured quantities.
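The contribution-weighted combination at the core of this approach can be sketched as follows. The material curves, weights, and wavelength grid below are invented for illustration (they are not CanX-1 data), and simple linear interpolation stands in for the look-up-table queries:

```python
import numpy as np

# Invented spectral reflectance curves for two hypothetical materials.
wavelengths = np.linspace(400.0, 1000.0, 7)                     # nm
sbrdf_mli   = np.array([0.90, 0.80, 0.70, 0.60, 0.50, 0.45, 0.40])  # 1/sr
sbrdf_solar = np.array([0.10, 0.12, 0.15, 0.20, 0.30, 0.35, 0.40])  # 1/sr

def overall_sbrdf(materials, weights):
    """Combine material sBRDFs, each weighted by its proportional
    contribution (as would be derived from a faceted CAD model)."""
    weights = np.asarray(weights)
    assert np.isclose(weights.sum(), 1.0)
    return sum(w * m for w, m in zip(weights, materials))

# Hypothetical contributions: 65% multi-layer insulation, 35% solar cell.
total = overall_sbrdf([sbrdf_mli, sbrdf_solar], [0.65, 0.35])

# Query at an arbitrary wavelength via linear interpolation of the table.
value_550 = np.interp(550.0, wavelengths, total)
```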
Cancer systems biology and modeling: microscopic scale and multiscale approaches.
Masoudi-Nejad, Ali; Bidkhori, Gholamreza; Hosseini Ashtiani, Saman; Najafi, Ali; Bozorgmehr, Joseph H; Wang, Edwin
2015-02-01
Cancer has become known as a complex and systematic disease on macroscopic, mesoscopic and microscopic scales. Systems biology employs state-of-the-art computational theories and high-throughput experimental data to model and simulate complex biological procedures such as cancer, which involves genetic and epigenetic, in addition to intracellular and extracellular complex interaction networks. In this paper, different systems biology modeling techniques such as systems of differential equations, stochastic methods, Boolean networks, Petri nets, cellular automata methods and agent-based systems are concisely discussed. We have compared the mentioned formalisms and tried to address the span of applicability they can bear on emerging cancer modeling and simulation approaches. Different scales of cancer modeling, namely, microscopic, mesoscopic and macroscopic scales are explained followed by an illustration of angiogenesis in microscopic scale of the cancer modeling. Then, the modeling of cancer cell proliferation and survival are examined on a microscopic scale and the modeling of multiscale tumor growth is explained along with its advantages.
Modeling the crop transpiration using an optimality-based approach
Institute of Scientific and Technical Information of China (English)
Stanislaus J. Schymanski; Murugesu Sivapalan
2008-01-01
Evapotranspiration constitutes more than 80% of the long-term water balance in Northern China. In this area, crop transpiration due to large areas of agriculture and irrigation is responsible for the majority of evapotranspiration. A model for crop transpiration is therefore essential for estimating the agricultural water consumption and understanding its feedback to the environment. However, most existing hydrological models usually calculate transpiration by relying on parameter calibration against local observations, and do not take into account crop feedback to the ambient environment. This study presents an optimality-based ecohydrology model that couples an ecological hypothesis, the photosynthetic process, stomatal movement, water balance, root water uptake and crop senescence, with the aim of predicting crop characteristics, CO2 assimilation and water balance based only on given meteorological data. Field experiments were conducted in the Weishan Irrigation District of Northern China to evaluate performance of the model. Agreement between simulation and measurement was achieved for CO2 assimilation, evapotranspiration and soil moisture content. The vegetation optimality was proven valid for crops and the model was applicable for both C3 and C4 plants. Due to the simple scheme of the optimality-based approach as well as its capability for modeling dynamic interactions between crops and the water cycle without prior vegetation information, this methodology is potentially useful to couple with the distributed hydrological model for application at the watershed scale.
A DYNAMICAL SYSTEM APPROACH IN MODELING TECHNOLOGY TRANSFER
Directory of Open Access Journals (Sweden)
Hennie Husniah
2016-05-01
Full Text Available In this paper we discuss a mathematical model of two-party technology transfer from a leader to a follower. The model is reconstructed via a dynamical-system approach from the known standard Raz and Assa model, and we find some important conclusions that were not discussed in the original model. The model assumes that, in the absence of technology transfer from the leader to the follower, both the leader and the follower have the capability to grow independently, each with a known upper limit of development. We obtain a rich mathematical structure for the steady-state solution of the model. We discuss a special situation in which the upper limit of the technological development of the follower is higher than that of the leader, but the leader has started implementing the technology earlier than the follower. In this case we show that a paradox can appear whenever the transfer rate is sufficiently high: the follower becomes unable to reach its original upper limit of technological development. We propose a new model to increase realism, so that any technology transfer rate can only have a positive effect in accelerating the growth of the follower towards its original upper limit of development.
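A minimal numerical sketch of such a leader-follower system is given below. The two-logistic form with a gap-proportional transfer term is an assumption for illustration, not the exact Raz and Assa equations or the authors' modification; it nonetheless reproduces the paradox that a strong transfer rate can pin the follower below its own upper limit:

```python
# Illustrative leader-follower dynamics (assumed form, not the paper's):
#   dL/dt = rL * L * (1 - L/KL)                     logistic leader
#   dF/dt = rF * F * (1 - F/KF) + g * (L - F)       follower + transfer term
def simulate(rL=0.5, rF=0.4, KL=1.0, KF=1.5, g=0.0, L0=0.5, F0=0.05,
             dt=0.01, T=100.0):
    L, F = L0, F0
    for _ in range(int(T / dt)):        # forward Euler integration
        dL = rL * L * (1 - L / KL)
        dF = rF * F * (1 - F / KF) + g * (L - F)
        L += dt * dL
        F += dt * dF
    return L, F

L_end, F_end = simulate(g=0.0)   # no transfer: each grows to its own limit
L_hi, F_hi = simulate(g=5.0)     # strong transfer drags F towards L < KF
```

With `g = 0` the follower reaches its own limit `KF = 1.5`; with a large `g` it is locked near the leader's limit `KL = 1.0`, below its independent potential.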
Inverse modeling approach to allogenic karst system characterization.
Dörfliger, N; Fleury, P; Ladouche, B
2009-01-01
Allogenic karst systems function in a particular way that is influenced by the type of water infiltrating through river water losses, by karstification processes, and by water quality. Management of this system requires a good knowledge of its structure and functioning, for which a new methodology based on an inverse modeling approach appears to be well suited. This approach requires both spring and river inflow discharge measurements and a continuous record of chemical parameters in the river and at the spring. The inverse model calculates unit hydrographs and the impulse responses of fluxes from rainfall hydraulic head at the spring or rainfall flux data, the purpose of which is hydrograph separation. Hydrograph reconstruction is done using rainfall and river inflow data as model input and enables definition at each time step of the ratio of each component. Using chemical data, representing event and pre-event water, as input, it is possible to determine the origin of spring water (either fast flow through the epikarstic zone or slow flow through the saturated zone). This study made it possible to improve a conceptual model of allogenic karst system functioning. The methodology is used to study the Bas-Agly and the Cent Font karst systems, two allogenic karst systems in Southern France.
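The unit-hydrograph idea underlying this kind of hydrograph separation can be sketched numerically. The impulse responses and input series below are invented; in the inverse-modeling approach itself they would be computed from measured rainfall, river-inflow, and spring data rather than assumed:

```python
import numpy as np

n = 60
t = np.arange(n, dtype=float)                          # days
rain = np.zeros(n); rain[5] = 20.0; rain[25] = 10.0    # rainfall pulses
river = np.full(n, 2.0)                                # steady river inflow

# Invented unit hydrographs: a fast (epikarstic) and a slow (saturated
# zone) impulse response, each normalised to unit volume.
uh_rain = np.exp(-t / 3.0);  uh_rain /= uh_rain.sum()
uh_river = np.exp(-t / 12.0); uh_river /= uh_river.sum()

# Spring discharge reconstructed as the sum of each input convolved
# with its impulse response; the ratio of the components at each time
# step is the hydrograph separation.
q_rain = np.convolve(rain, uh_rain)[:n]
q_river = np.convolve(river, uh_river)[:n]
q_spring = q_rain + q_river
frac_rain = q_rain / q_spring
```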
A secured e-tendering modeling using misuse case approach
Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman
2016-08-01
Major risk factors relating to electronic transactions may lead to destructive impacts on trust and transparency in the tendering process. Currently, electronic tendering (e-tendering) systems still remain uncertain in issues relating to legal and security compliance and, most importantly, have an unclear security framework. In particular, the available systems are lacking in addressing integrity, confidentiality, authentication, and non-repudiation in e-tendering requirements. Thus, one of the challenges in developing an e-tendering system is to ensure the system requirements include the functions for a secured and trusted environment. Therefore, this paper aims to model a secured e-tendering system using the misuse case approach. The modeling process begins with identifying the e-tendering process, which is based on the Australian Standard Code of Tendering (AS 4120-1994). It is followed by identifying security threats and their countermeasures. Then, the e-tendering process was modelled using the misuse case approach. The model can contribute to e-tendering developers and also to other researchers or experts in the e-tendering domain.
Multiple comparisons in genetic association studies: a hierarchical modeling approach.
Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel
2014-02-01
Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem by substantially reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
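The shrinkage idea can be illustrated with a generic normal-prior (ridge-type) stand-in. This is not the BhGLM algorithm itself, and the trace-of-hat-matrix estimate of the effective number of effects used below is a common convention assumed here for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated genotypes (n subjects x m variants) and a phenotype driven
# by only the first variant; purely illustrative data.
n, m = 200, 50
X = rng.standard_normal((n, m))
y = 1.5 * X[:, 0] + rng.standard_normal(n)

# Normal prior on effects is equivalent to ridge shrinkage:
#   beta = (X'X + lam*I)^-1 X'y, fitting all variants simultaneously.
lam = 10.0
A = X.T @ X + lam * np.eye(m)
beta = np.linalg.solve(A, X.T @ y)

# Effective number of parameters via the trace of the hat matrix
# H = X (X'X + lam*I)^-1 X'; shrinkage makes it smaller than m.
H = X @ np.linalg.solve(A, X.T)
m_eff = np.trace(H)

# A Bonferroni-style threshold based on m_eff is less severe than one
# based on the raw count m.
alpha = 0.05
threshold = alpha / m_eff
```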
Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.
Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J
2016-01-01
Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs), show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
Cavity approach for modeling and fitting polymer stretching
Massucci, Francesco Alessandro; Vicente, Conrad J Pérez
2014-01-01
The mechanical properties of molecules are today captured by single-molecule manipulation experiments, so that polymer features are tested at a nanometric scale. Yet devising mathematical models to get further insight beyond the commonly studied force-elongation relation is typically hard. Here we draw from techniques developed in the context of disordered systems to solve models for single- and double-stranded DNA stretching in the limit of a long polymeric chain. Since we directly derive the marginals for the molecule's local orientation, our approach allows us to readily calculate the experimental elongation as well as other observables at wish. As an example, we evaluate the correlation length as a function of the stretching force. Furthermore, we are able to fit our solution successfully to real experimental data. Although the model is admittedly phenomenological, our findings are very sound. For single-stranded DNA our solution yields the correct (monomer) scale and, yet more importantly, the right pers...
Autonomous Cleaning of Corrupted Scanned Documents - A Generative Modeling Approach
Dai, Zhenwen
2012-01-01
We study the task of cleaning scanned text documents that are strongly corrupted by dirt such as manual line strokes, spilled ink etc. We aim at autonomously removing dirt from a single letter-size page based only on the information the page contains. Our approach, therefore, has to learn character representations without supervision and requires a mechanism to distinguish learned representations from irregular patterns. To learn character representations, we use a probabilistic generative model parameterizing pattern features, feature variances, the features' planar arrangements, and pattern frequencies. The latent variables of the model describe pattern class, pattern position, and the presence or absence of individual pattern features. The model parameters are optimized using a novel variational EM approximation. After learning, the parameters represent, independent of their absolute position, planar feature arrangements and their variances. A quality measure defined based on the learned representation the...
Oscillation threshold of a clarinet model: a numerical continuation approach
Karkar, Sami; Cochelin, Bruno; 10.1121/1.3651231
2012-01-01
This paper focuses on the oscillation threshold of single-reed instruments. Several characteristics, such as blowing pressure at threshold, regime selection, and playing frequency, are known to change radically when taking into account the reed dynamics and the flow induced by the reed motion. Previous works have shown interesting tendencies, using analytical expressions with simplified models. In the present study, a more elaborate physical model is considered. The influence of several parameters, depending on the reed properties, the design of the instrument, or the control operated by the player, is studied. Previous results on the influence of the reed resonance frequency are confirmed. New results concerning the simultaneous influence of two model parameters on oscillation threshold, regime selection and playing frequency are presented and discussed. The authors use a numerical continuation approach. Numerical continuation consists in following a given solution of a set of equations when a parameter varie...
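The core loop of a natural-parameter numerical continuation is simple to sketch: solve f(x, p) = 0 by Newton's method for each value of the parameter p, seeding each solve with the solution found at the previous value. The scalar function below is a toy stand-in, not a reed-instrument model:

```python
import numpy as np

def f(x, p):
    return x**3 - x + p          # toy equilibrium equation with a fold

def dfdx(x, p):
    return 3 * x**2 - 1

def newton(x, p, tol=1e-12, itmax=50):
    """Newton's method for f(x, p) = 0 at fixed p, starting from x."""
    for _ in range(itmax):
        step = f(x, p) / dfdx(x, p)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration failed to converge")

# Follow the solution branch as p increases: the previous solution is
# the initial guess for the next solve, which keeps Newton on-branch.
branch = []
x = 1.0                          # known solution at p = 0
for p in np.linspace(0.0, 0.3, 31):
    x = newton(x, p)
    branch.append((p, x))
```

More robust schemes (pseudo-arclength continuation, as used in continuation software) also follow the branch around folds, where this naive parameterization fails.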
Forecasting wind-driven wildfires using an inverse modelling approach
Directory of Open Access Journals (Sweden)
O. Rios
2013-12-01
Full Text Available A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the on-going fire. The article at hand presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate-of-spread theory with a perimeter expansion model based on Huygens' principle, and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the high capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event). This work opens the door to further advances in the framework and to more sophisticated models, while keeping the computational time suitable for operational use.
Lightning Modelling: From 3D to Circuit Approach
Moussa, H.; Abdi, M.; Issac, F.; Prost, D.
2012-05-01
The topic of this study is the electromagnetic environment and electromagnetic interference (EMI) effects, specifically the modelling of lightning indirect effects [1] on aircraft electrical systems present on deported and highly exposed equipment, such as the nose landing gear (NLG) and nacelle, through a circuit approach. The main goal of the presented work, funded by a French national project, PREFACE, is to propose a simple equivalent electrical circuit to represent a geometrical structure, taking into account mutual and self-inductances and resistances, which play a fundamental role in the lightning current distribution. This model is then intended to be coupled to a functional one, describing a power-train chain composed of a converter, a shielded power harness, and a motor or a set of resistors used as a load for the converter. The novelty here is to provide a pre-sizing qualitative approach allowing integration choices to be explored in pre-design phases. This tool intends to offer a user-friendly way of responding rapidly to calls for tender, taking into account the lightning constraints. Two cases are analysed: first, an NLG that is composed of tubular pieces that can be easily approximated by equivalent cylindrical straight conductors. Therefore, passive R, L, M elements of the structure can be extracted through analytical engineering formulas such as those implemented in the partial element equivalent circuit (PEEC) [2] technique. Second, the same approach is intended to be applied to an electrical de-icing nacelle sub-system.
A Random Matrix Approach for Quantifying Model-Form Uncertainties in Turbulence Modeling
Xiao, Heng; Ghanem, Roger G
2016-01-01
With the ever-increasing use of Reynolds-Averaged Navier--Stokes (RANS) simulations in mission-critical applications, the quantification of model-form uncertainty in RANS models has attracted attention in the turbulence modeling community. Recently, a physics-based, nonparametric approach for quantifying model-form uncertainty in RANS simulations has been proposed, where Reynolds stresses are projected to physically meaningful dimensions and perturbations are introduced only in the physically realizable limits. However, a challenge associated with this approach is to assess the amount of information introduced in the prior distribution and to avoid imposing unwarranted constraints. In this work we propose a random matrix approach for quantifying model-form uncertainties in RANS simulations with the realizability of the Reynolds stress guaranteed. Furthermore, the maximum entropy principle is used to identify the probability distribution that satisfies the constraints from available information but without int...
Value Delivery Architecture Modeling – A New Approach for Business Modeling
Directory of Open Access Journals (Sweden)
Joachim Metzger
2015-08-01
Full Text Available Complexity and uncertainty have evolved as important challenges for entrepreneurship in many industries. Value Delivery Architecture Modeling (VDAM is a proposal for a new approach for business modeling to conquer these challenges. In addition to the creation of transparency and clarity, our approach supports the operationalization of business model ideas. VDAM is based on the combination of a new business modeling language called VDML, ontology building, and the implementation of a level of cross-company abstraction. The application of our new approach in the area of electric mobility in Germany, an industry sector with high levels of uncertainty and a lack of common understanding, shows several promising results: VDAM enables the development of an unambiguous and unbiased view on value creation. Additionally it allows for several applications leading to a more informed decision towards the implementation of new business models.
Bayesian network approach for modeling local failure in lung cancer
Oh, Jung Hun; Craft, Jeffrey; Al-Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; Naqa, Issam El
2011-01-01
Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, there was no reported significant improvement in their application prospectively. Based on recent studies of biomarker proteins' role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors with a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and is comprised of clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients. PMID:21335651
Cheng, C. M.; Peng, Z. K.; Zhang, W. M.; Meng, G.
2017-03-01
Nonlinear problems have drawn great interest and extensive attention from engineers, physicists, mathematicians and many other scientists, because most real systems are inherently nonlinear in nature. To model and analyze nonlinear systems, many mathematical theories and methods have been developed, including the Volterra series. In this paper, the basic definition of the Volterra series is recapitulated, together with some frequency-domain concepts derived from it, including the general frequency response function (GFRF), the nonlinear output frequency response function (NOFRF), the output frequency response function (OFRF) and the associated frequency response function (AFRF). The relationships between the Volterra series and other nonlinear system models and nonlinear problem-solving methods are discussed, including the Taylor series, Wiener series, NARMAX model, Hammerstein model, Wiener model, Wiener-Hammerstein model, harmonic balance method, perturbation method and Adomian decomposition. The challenging problems and the state of the art in the study of series convergence and kernel identification are comprehensively introduced. In addition, a detailed review is given of the applications of the Volterra series in mechanical engineering, aeroelasticity, control engineering, and electronic and electrical engineering.
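As a small illustration of the connection between the Volterra series and the Hammerstein model mentioned above, the sketch below evaluates a discrete-time Volterra series truncated at second order. A diagonal second-order kernel is exactly a Hammerstein structure (a static quadratic nonlinearity followed by an FIR filter); the kernels and input are invented for demonstration:

```python
import numpy as np

M = 3                                    # kernel memory length
h1 = np.array([1.0, 0.5, 0.25])          # first-order (linear) kernel
h2 = 0.1 * np.diag(h1)                   # diagonal second-order kernel:
                                         # the Hammerstein special case

def volterra_output(u):
    """y(n) = sum_k h1(k) u(n-k) + sum_{k1,k2} h2(k1,k2) u(n-k1) u(n-k2)."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        # input history, zero-padded before t = 0
        hist = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = h1 @ hist + hist @ h2 @ hist
    return y

u = np.sin(0.3 * np.arange(50))
y = volterra_output(u)

# Equivalent Hammerstein form: static nonlinearity g(u) = u + 0.1*u^2
# followed by the FIR filter h1.
y_hammerstein = np.convolve(u + 0.1 * u**2, h1)[:len(u)]
```

A full (non-diagonal) second-order kernel cannot be factored this way, which is one reason block-structured models like the Hammerstein model are attractive low-parameter approximations of the general series.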
Effective Model Approach to the Dense State of QCD Matter
Fukushima, Kenji
2010-01-01
The first-principle approach to the dense state of QCD matter, i.e. the lattice-QCD simulation at finite baryon density, is not under theoretical control for the moment. The effective model study based on QCD symmetries is a practical alternative. However, the model parameters that are fixed by hadronic properties in the vacuum may have unknown dependence on the baryon chemical potential. We propose a new prescription to constrain the effective model parameters by the matching condition with the thermal Statistical Model. In the transitional region where thermal quantities blow up in the Statistical Model, deconfined quarks and gluons should smoothly take over the relevant degrees of freedom from hadrons and resonances. We use the Polyakov-loop coupled Nambu--Jona-Lasinio (PNJL) model as an effective description on the quark side and show how the matching condition is satisfied by a simple ansatz on the Polyakov loop potential. Our results favor a phase diagram with the chiral phase transition located at sligh...
Exploring a type-theoretic approach to accessibility constraint modelling
Pogodalla, Sylvain
2008-01-01
The type-theoretic modelling of DRT proposed by [degroote06] features continuations for the management of the context in which a clause is to be interpreted. This approach, while keeping the standard definitions of quantifier scope, translates the rules of the accessibility constraints of discourse referents into the semantic recipes. In this paper, we deal with additional rules for these accessibility constraints, in particular for discourse referents introduced by proper nouns, which negation does not block, and for rhetorical relations that structure discourses. We show how this continuation-based approach applies to those accessibility constraints and how we can handle the parallel management of various principles.
Multiscale approach to modeling intrinsic dissipation in solids
Kunal, K.; Aluru, N. R.
2016-08-01
In this paper, we develop a multiscale approach to model intrinsic dissipation under high-frequency vibrations in solids. For vibrations with a timescale comparable to the phonon relaxation time, the local phonon distribution deviates from the equilibrium distribution. We extend the quasiharmonic (QHM) method to describe the dynamics under such a condition. The local deviation from the equilibrium state is characterized using a nonequilibrium stress tensor. A constitutive relation for the time evolution of the stress component is obtained. We then parametrize the evolution equation using the QHM method and a stochastic sampling approach. The stress relaxation dynamics is obtained using mode Langevin dynamics. Methods to obtain the input variables for the Langevin dynamics are discussed. The proposed methodology is used to obtain the dissipation rate E_dissip for different cases. Frequency and size effects on E_dissip are studied. The results are compared with those obtained using nonequilibrium molecular dynamics (MD).
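The mode Langevin dynamics mentioned above can be illustrated with a minimal Euler-Maruyama integration of a single damped mode; the single-mode form and the parameters (omega, gamma, kT) are assumptions for illustration, not the paper's parametrization.

```python
import math
import random

def langevin_relaxation(x0, v0, omega, gamma, kT, dt, steps, seed=0):
    """Integrate one Langevin mode with a semi-implicit Euler scheme:

        dx = v dt
        dv = (-omega^2 x - gamma v) dt + sqrt(2 gamma kT) dW

    A toy stand-in for the mode Langevin dynamics of the abstract;
    all parameter values are hypothetical.
    """
    rng = random.Random(seed)
    x, v = x0, v0
    noise = math.sqrt(2.0 * gamma * kT * dt)
    for _ in range(steps):
        x += v * dt                      # position update
        v += (-omega ** 2 * x - gamma * v) * dt + noise * rng.gauss(0.0, 1.0)
    return x, v
```

With kT = 0 the noise vanishes and the mode simply relaxes, which is the deterministic limit against which the stochastic runs can be checked.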
Model predictive control approach for a CPAP-device
Directory of Open Access Journals (Sweden)
Scheel Mathias
2017-09-01
Full Text Available The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard to treat the OSAS is continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP-devices cannot maintain the pressure reference [1]. In this work a model predictive control approach is provided. This control approach has the possibility to include the patient’s breathing effort in the calculation of the control variable. Therefore a patient-individualized control strategy can be developed.
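A minimal sketch of the one-step predictive control idea, using an assumed first-order pressure model with the breathing effort entering as a disturbance; the model structure and all coefficients here are hypothetical, not taken from the paper.

```python
def mpc_pressure_step(p, ref, disturbance, a=0.9, b=0.1, r=0.01):
    """One-step model predictive control for a toy CPAP pressure model

        p[k+1] = a*p[k] + b*u[k] + disturbance[k]

    where `disturbance` stands in for the patient's breathing effort.
    Minimizes (p[k+1]-ref)^2 + r*u^2 over the control input u.
    """
    # Unconstrained quadratic cost -> closed-form optimum:
    # d/du [(a*p + b*u + d - ref)^2 + r*u^2] = 0.
    u = b * (ref - a * p - disturbance) / (b * b + r)
    p_next = a * p + b * u + disturbance
    return u, p_next
```

Because the breathing effort appears explicitly in the prediction, the controller compensates for it in the same step, which is the point the abstract makes about including the patient's effort in the control calculation.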
Anomalous superconductivity in the tJ model; moment approach
DEFF Research Database (Denmark)
Sørensen, Mads Peter; Rodriguez-Nunez, J.J.
1997-01-01
By extending the moment approach of Nolting (Z. Phys. 225 (1972) 25) in the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimension. We propose that both the diagonal and the off-diagonal spectral functions... Hartree shift, which in the end enlarges the bandwidth of the free carriers, allowing us to take relatively high values of J/t and allowing superconductivity to live in the T_c-ρ phase diagram, in agreement with numerical calculations in a cluster. We have calculated the static spin susceptibility..., χ(T), and the specific heat, C_v(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at T_c in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recently published literature, showing a generic...
Directory of Open Access Journals (Sweden)
Xiao-meng SONG
2013-01-01
Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long time and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters’ sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
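The Morris screening step can be sketched as follows: a bare one-at-a-time elementary-effects estimate on the unit cube, giving the mean absolute effect (mu*) per parameter. This is only the qualitative-ranking stage, not the paper's RSMSobol implementation.

```python
import random

def morris_mu_star(f, dim, delta=0.1, trajectories=50, seed=1):
    """Morris screening sketch: for each random base point, perturb one
    parameter at a time by `delta` and record the elementary effect
    EE_i = (f(x + delta*e_i) - f(x)) / delta.  Returns the mean of
    |EE_i| per parameter, the usual mu* ranking statistic.
    """
    rng = random.Random(seed)
    mu_star = [0.0] * dim
    for _ in range(trajectories):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        base = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            ee = (f(xp) - base) / delta
            mu_star[i] += abs(ee)
    return [m / trajectories for m in mu_star]
```

Parameters with large mu* dominate the output and would then be passed on to the quantitative variance-based stage.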
Spintronic device modeling and evaluation using modular approach to spintronics
Ganguly, Samiran
Spintronics technology finds itself in an exciting stage today. Riding on the backs of rapid growth and impressive advances in materials and phenomena, it has started to make headway in the memory industry as solid state magnetic memories (STT-MRAM) and is considered a possible candidate to replace the CMOS when its scaling reaches physical limits. It is necessary to bring all these advances together in a coherent fashion to explore and evaluate the potential of spintronic devices. This work creates a framework for this exploration and evaluation based on Modular Approach to Spintronics, which encapsulate the physics of transport of charge and spin through materials and the phenomenology of magnetic dynamics and interaction in benchmarked elemental modules. These modules can then be combined together to form spin-circuit models of complex spintronic devices and structures which can be simulated using SPICE like circuit simulators. In this work we demonstrate how Modular Approach to Spintronics can be used to build spin-circuit models of functional spintronic devices of all types: memory, logic, and oscillators. We then show how Modular Approach to Spintronics can help identify critical factors behind static and dynamic dissipation in spintronic devices and provide remedies by exploring the use of various alternative materials and phenomena. Lastly, we show the use of Modular Approach to Spintronics in exploring new paradigms of computing enabled by the inherent physics of spintronic devices. We hope that this work will encourage more research and experiments that will establish spintronics as a viable technology for continued advancement of electronics.
The two capacitor problem revisited: simple harmonic oscillator model approach
Lee, Keeyung
2012-01-01
The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that exactly half the work done by a constant applied force is dissipated, irrespective of the form of the dissipation mechanism, when the system comes to a new equilibrium after a constant force is abruptly applied. This model is then applied to the energy loss mechanism in the capacitor charging problem or the two-capacitor problem. This approach allows a simple explanation of the energy dissipation mechanism in these problems and shows that the dissipated energy should always be exactly half the supplied energy, whether that is caused by Joule heat or by radiation. This paper, which provides a simple treatment of the energy dissipation mechanism in the two-capacitor problem, is suitable for all undergraduate...
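The half-energy result is easy to verify numerically. The following sketch integrates the two-capacitor circuit with an assumed series resistance R and shows that the dissipated fraction tends to 1/2 regardless of the value of R, exactly as the abstract argues.

```python
def two_capacitor_loss(C=1.0, R=1.0, q0=1.0, dt=1e-4, steps=200000):
    """Connect a capacitor charged to q0 to an identical uncharged one
    through a resistor R, integrate the charge flow, and return the
    Joule-dissipated energy as a fraction of the initial stored energy.
    The fraction approaches 0.5 for any R > 0.
    """
    q1, q2, dissipated = q0, 0.0, 0.0
    for _ in range(steps):
        i = (q1 / C - q2 / C) / R       # current driven by the voltage gap
        q1 -= i * dt
        q2 += i * dt
        dissipated += i * i * R * dt    # Joule heating in the resistor
    e_initial = q0 * q0 / (2.0 * C)
    return dissipated / e_initial
```

Varying R changes only how fast the charge equalizes, not the dissipated fraction, which mirrors the paper's point that the mechanism of dissipation is irrelevant to the amount.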
A model-based approach to selection of tag SNPs
Directory of Open Access Journals (Sweden)
Sun Fengzhu
2006-06-01
Full Text Available Abstract Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphism found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for the prediction of tagged SNPs and thereby a way to assess the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. Besides, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
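The entropy-maximization strategy can be illustrated with a greedy selection over empirical haplotype counts. This toy sketch uses plain empirical joint entropy rather than the Li and Stephens model the abstract advocates; it only shows the shape of the objective.

```python
import math
from collections import Counter

def joint_entropy(haplotypes, snps):
    """Empirical joint entropy (bits) of the given SNP columns over a
    sample of haplotypes (each haplotype a tuple of 0/1 alleles)."""
    counts = Counter(tuple(h[i] for i in snps) for h in haplotypes)
    n = len(haplotypes)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def greedy_tag_snps(haplotypes, k):
    """Greedy entropy-maximizing tag SNP selection: repeatedly add the
    SNP that most increases the joint entropy of the tag set."""
    candidates = set(range(len(haplotypes[0])))
    tags = []
    for _ in range(k):
        best = max(candidates,
                   key=lambda s: joint_entropy(haplotypes, tags + [s]))
        tags.append(best)
        candidates.discard(best)
    return tags
```

A monomorphic SNP contributes zero entropy and is never chosen, while two SNPs that jointly distinguish all haplotypes maximize the objective, matching the compression analogy in the abstract.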
Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.
2002-01-01
A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright © 2002 John Wiley and Sons, Ltd.
CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach
DEFF Research Database (Denmark)
Sabaka, T.; Olsen, Nils; Tyler, Robert
2014-01-01
We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Level...
Interdependence: a new model for the global approach to disability
Directory of Open Access Journals (Sweden)
Nathan Grills
2015-01-01
Full Text Available Disability affects over 1 billion people and the WHO estimates that over 80% of individuals with disability live in low and middle income countries, where access to health and social services to respond to disability are limited [1]. Compounding this poverty is that medical and technological approaches to disability, however needed, are usually very expensive. Yet, much can be done at low cost to increase the wellbeing of people with disability, and the church and Christians need to take a lead. The WHO’s definition of disability highlights the challenge to us in global health. It has been defined by the WHO as “the interaction between a person’s impairments and the attitudinal and environmental barriers that hinder their full and effective participation in society on an equal basis with others” [2]. This understanding of disability requires us to go beyond mere healing and towards inclusion in our response to chronic diseases and disability. This is known as the social model and requires societal attitudinal change and modification of disabling environments in order to facilitate those with disability to be included in our community and churches. These are good responses but the church needs to consider alternative models to those that are currently promoted which strive for independence as the ultimate endpoint. In this paper I introduce some disability-related articles in this issue and outline an approach that goes beyond the Social Model towards an Interdependence Model which I think is a more Biblical model of disability and one which we Christians and churches in global health should consider. This model would go beyond changing society to accommodate for people with disabilities towards acknowledging they play an important part in our community and indeed in our church. We need those people with disability to contribute, love and bless those with and without disabilities. And of course those with disability need the love, care and
The CONRAD approach to biokinetic modeling of DTPA decorporation therapy.
Breustedt, Bastian; Blanchardon, Eric; Bérard, Philippe; Fritsch, Paul; Giussani, Augusto; Lopez, Maria Antonia; Luciani, Andrea; Nosske, Dietmar; Piechowski, Jean; Schimmelpfeng, Jutta; Sérandour, Anne-Laure
2010-10-01
Diethylene Triamine Pentaacetic Acid (DTPA) is used for decorporation of plutonium because it is known to be able to enhance its urinary excretion for several days after treatment by forming stable Pu-DTPA complexes. The decorporation prevents accumulation in organs and results in a dosimetric benefit, which is difficult to quantify from bioassay data using existing models. The development of a biokinetic model describing the mechanisms of actinide decorporation by administration of DTPA was initiated as a task in the European COordinated Network on RAdiation Dosimetry (CONRAD). The systemic biokinetic model from Leggett et al. and the biokinetic model for DTPA compounds of International Commission on Radiological Protection Publication 53 were the starting points. A new model for biokinetics of administered DTPA based on physiological interpretation of 14C-labeled DTPA studies from literature was proposed by the group. Plutonium and DTPA biokinetics were modeled separately. The systems were connected by means of a second order kinetics process describing the chelation process of plutonium atoms and DTPA molecules to Pu-DTPA complexes. It was assumed that chelation only occurs in the blood and in systemic compartment ST0 (representing rapid turnover soft tissues), and that Pu-DTPA complexes and administered forms of DTPA share the same biokinetic behavior. First applications of the CONRAD approach showed that the enhancement of plutonium urinary excretion after administration of DTPA was strongly influenced by the chelation rate constant. Setting it to a high value resulted in a good fit to the observed data. However, the model was not yet satisfactory since the effects of repeated DTPA administration in a short time period cannot be predicted in a realistic way. In order to introduce more physiological knowledge into the model several questions still have to be answered. Further detailed studies of human contamination cases and experimental data will be needed in
New Approaches in Reusable Booster System Life Cycle Cost Modeling
Zapata, Edgar
2013-01-01
This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs were a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model
Approaches to Computer Modeling of Phosphate Hide-Out.
1984-06-28
Phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties are... ionization of dihydrogen phosphate: H2PO4- = H+ + HPO4--, K (B-7); H+ + OH- = H2O, 1/Kw (B-8); H2PO4- + OH- = HPO4-- + H2O, K/Kw (B-9). Such zero heat... (NRL Memorandum Report 5361, Approaches to Computer Modeling of Phosphate Hide-Out, K. A. S. Hardy and J. C...)
A motivic approach to phase transitions in Potts models
Aluffi, Paolo; Marcolli, Matilde
2013-01-01
We describe an approach to the study of phase transitions in Potts models based on an estimate of the complexity of the locus of real zeros of the partition function, computed in terms of the classes in the Grothendieck ring of the affine algebraic varieties defined by the vanishing of the multivariate Tutte polynomial. We give completely explicit calculations for the examples of the chains of linked polygons and of the graphs obtained by replacing the polygons with their dual graphs. These are based on a deletion-contraction formula for the Grothendieck classes and on generating functions for splitting and doubling edges.
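The multivariate Tutte polynomial mentioned above, in its Potts partition-function form Z(q, v) = sum over edge subsets A of q^k(A) v^|A|, can be evaluated directly by summing over edge subsets. The brute-force sketch below (exponential in the number of edges, illustration only) uses a single common edge weight v rather than per-edge weights.

```python
from itertools import combinations

def components(vertices, edges):
    """Number of connected components via union-find with path halving."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in vertices})

def multivariate_tutte(vertices, edges, q, v):
    """Z(q, v) = sum over edge subsets A of q^k(A) * v^|A|, the
    multivariate Tutte polynomial with one common edge weight v.
    Exponential in |edges|; for small example graphs only.
    """
    n = len(edges)
    total = 0.0
    for r in range(n + 1):
        for subset in combinations(edges, r):
            total += q ** components(vertices, list(subset)) * v ** r
    return total
```

Setting v = e^(beta*J) - 1 recovers the q-state Potts partition function, whose real zeros are the locus the paper studies.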
Quiver Approach to Massive Gauge Bosons Beyond the Standard Model
Frampton, Paul Howard
2013-01-01
We address the question of the possible existence of massive gauge bosons beyond the $W^{\\pm}$ and $Z^{0}$ of the standard model. Our intuitive and aesthetic approach is based on quiver theory. Examples thereof arise, for example, from compactification of the type IIB superstring on $AdS_5 \\times S_5/ Z_n$ orbifolds. We explore the quiver theory framework more generally than string theory. The practical question is what gauge bosons to look for at the upgraded LHC, in terms of color and electric charge, and of their couplings to quarks and leptons. Axigluons and bileptons are favored.
Data mining approach to model the diagnostic service management.
Lee, Sun-Mi; Lee, Ae-Kyung; Park, Il-Su
2006-01-01
Korea has a National Health Insurance Program operated by the government-owned National Health Insurance Corporation, and diagnostic services are provided every two years for the insured and their family members. Developing a customer relationship management (CRM) system using data mining technology would be useful for improving the performance of diagnostic service programs. Under these circumstances, this study developed a model for diagnostic service management that takes into account the characteristics of subjects, using a data mining approach. This study could be further used to develop an automated CRM system, contributing to an increase in the rate of receiving diagnostic services.
Conceptual modelling approach of mechanical products based on functional surface
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
A modelling framework based on functional surface is presented to support conceptual design of mechanical products. The framework organizes product information in an abstract and multilevel manner. It consists of two mapping processes: a function decomposition process and a form reconstitution process. The steady mapping relationship from function to form (function-functional surface-form) is realized by taking the functional surface as the middle layer. It greatly reduces the possibility of combinatorial explosion during function decomposition and form reconstitution. Finally, CAD tools are developed and an auto-bender machine is applied to demonstrate the proposed approach.
Design of Multithreaded Software The Entity-Life Modeling Approach
Sandén, Bo I
2011-01-01
This book assumes familiarity with threads (in a language such as Ada, C#, or Java) and introduces the entity-life modeling (ELM) design approach for certain kinds of multithreaded software. ELM focuses on "reactive systems," which continuously interact with the problem environment. These "reactive systems" include embedded systems, as well as such interactive systems as cruise controllers and automated teller machines.Part I covers two fundamentals: program-language thread support and state diagramming. These are necessary for understanding ELM and are provided primarily for reference. P
Algebraic approach to small-world network models
Rudolph-Lilith, Michelle; Muller, Lyle E.
2014-01-01
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
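For comparison with the analytic treatment above, here is a minimal numerical sketch of a Watts-Strogatz graph and its average clustering coefficient; note the paper treats the directed variant, while this sketch uses the simpler undirected one, and all parameter choices are assumptions for illustration.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Undirected Watts-Strogatz graph: ring lattice with n nodes, each
    joined to its k nearest neighbours on each side, then each original
    edge rewired with probability p.  Returns adjacency sets per node.
    """
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = [t for t in range(n) if t != i and t not in adj[i]]
                if choices and old in adj[i]:
                    new = rng.choice(choices)
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def clustering_coefficient(adj):
    """Average local clustering coefficient of an adjacency-set graph."""
    total = 0.0
    for i, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)
```

At p = 0 the ring lattice with k = 2 has clustering coefficient exactly 3(k-1)/(2(2k-1)) = 0.5, a useful sanity check against the finite-size analytic expressions the paper derives.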
Modelling Based Approach for Reconstructing Evidence of VOIP Malicious Attacks
Directory of Open Access Journals (Sweden)
Mohammed Ibrahim
2015-05-01
Full Text Available Voice over Internet Protocol (VoIP) is a new communication technology that uses the internet protocol to provide phone services. VoIP provides various benefits such as a low monthly fee and cheaper rates for long distance and international calls. However, VoIP is accompanied by novel security threats. Criminals often take advantage of such security threats and commit illicit activities. These activities require digital forensic experts to acquire, analyse, reconstruct and provide digital evidence. Meanwhile, there are various methodologies and models proposed for detecting, analysing and providing digital evidence in VoIP forensics. However, at the time of writing this paper, there is no formalized model for the reconstruction of VoIP malicious attacks. Reconstruction of an attack scenario is an important technique for exposing unknown criminal acts. Hence, this paper strives to address that gap. We propose a model for reconstructing VoIP malicious attacks. To achieve that, a formal logic approach called Secure Temporal Logic of Action (S-TLA+) was adopted in rebuilding the attack scenario. The expected result of this model is to generate additional related evidence, whose consistency with the existing evidence can be determined by means of the S-TLA+ model checker.
MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS
Institute of Scientific and Technical Information of China (English)
Li Xin; Mi Zhengkun; Meng Xudong
2004-01-01
Claimed as the next generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide system performance measures such as the platform's response time to a mobile agent. Further investigation follows on the determination of model parameters. Finally, a comparison is made between the model-based simulation results and the measurement-based real performance of mobile agent systems. The results show that the proposed model and designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.
Modelling hybrid stars in quark-hadron approaches
Energy Technology Data Exchange (ETDEWEB)
Schramm, S. [FIAS, Frankfurt am Main (Germany); Dexheimer, V. [Kent State University, Department of Physics, Kent, OH (United States); Negreiros, R. [Federal Fluminense University, Gragoata, Niteroi (Brazil)
2016-01-15
The density in the core of neutron stars can reach values of about 5 to 10 times nuclear matter saturation density. It is, therefore, a natural assumption that hadrons may have dissolved into quarks under such conditions, forming a hybrid star. This star will have an outer region of hadronic matter and a core of quark matter or even a mixed state of hadrons and quarks. In order to investigate such phases, we discuss different model approaches that can be used in the study of compact stars as well as being applicable to a wider range of temperatures and densities. One major model ingredient, the role of quark interactions in the stability of massive hybrid stars, is discussed. In this context, possible conflicts with lattice QCD simulations are investigated. (orig.)
Biogas Production Modelling: A Control System Engineering Approach
Stollenwerk, D.; Rieke, C.; Dahmen, M.; Pieper, M.
2016-03-01
Under Germany's Renewable Energy Act, it is planned to increase the share of renewable energy carriers up to 60%. One of the main problems is the fluctuating supply of wind and solar energy. Here biogas plants provide a solution, because a demand-driven supply is possible. Before running such a plant, it is necessary to simulate and optimize the feeding strategy of the process. Current simulation models are either very detailed, like the ADM 1, which leads to very long optimization runtimes, or not accurate enough to capture the biogas production kinetics. Therefore this paper provides a new model of a biogas plant, which is easy to parametrize but also has the accuracy needed for output prediction. It is based on the control system approach of system identification and is validated with laboratory results from a real biogas production testing facility.
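The system-identification approach can be illustrated with a least-squares fit of a discrete first-order model to input/output data; the model structure below is a hypothetical minimal stand-in, not the paper's actual biogas model.

```python
def identify_first_order(u, y):
    """Least-squares fit of a discrete first-order model

        y[k+1] = a*y[k] + b*u[k]

    from input/output records, solving the 2x2 normal equations by hand.
    A minimal instance of black-box system identification; the real
    plant model would have more states and nonlinear kinetics.
    """
    m = len(y) - 1
    s_yy = sum(y[k] * y[k] for k in range(m))
    s_yu = sum(y[k] * u[k] for k in range(m))
    s_uu = sum(u[k] * u[k] for k in range(m))
    r_y = sum(y[k + 1] * y[k] for k in range(m))
    r_u = sum(y[k + 1] * u[k] for k in range(m))
    det = s_yy * s_uu - s_yu * s_yu     # assumes informative (exciting) input
    a = (r_y * s_uu - r_u * s_yu) / det
    b = (r_u * s_yy - r_y * s_yu) / det
    return a, b
```

On noise-free data generated by such a model the fit recovers the parameters exactly; validation against independent laboratory data, as in the abstract, is what distinguishes a usable model from an overfit one.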
The shell model approach: Key to hadron structure
Energy Technology Data Exchange (ETDEWEB)
Lipkin, H.J. (Weizmann Inst. of Science, Rehovoth (Israel). Dept. of Nuclear Physics)
1989-08-14
A shell model approach leads to a simple constituent quark model for hadron structure in which mesons and baryons consist only of constituent quarks. Hadron masses are the sums of the constituent quark effective masses and a hyperfine interaction inversely proportional to the product of these same masses. Hadron masses and magnetic moments are related by the assumption that the same effective mass parameter appears in the additive mass term, the hyperfine interaction, and the quark magnetic moment, both in mesons and baryons. The analysis pinpoints the physical assumptions needed for each relation and gives two new mass relations. Application to weak decays and recent polarized EMC data confirms conclusions previously obtained that the current quark contribution to the spin structure of the proton vanishes, but without need for the questionable assumption of SU(3) symmetry relating hyperon decays and proton structure. SU(3) symmetry breaking is clarified. 24 refs.
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Together they produce a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error property. Some examples illustrate the results.
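The motivation for zero-variance-style estimators can be seen in a toy importance-sampling sketch: crude Monte Carlo almost never observes a rare event, while sampling under a tilted probability and reweighting by the likelihood ratio keeps the relative error bounded. All names and numbers below are invented for illustration; a true zero-variance change of measure would make every sample carry exactly the target probability.

```python
import random

def crude_mc(p, n, rng):
    # Crude Monte Carlo: indicator sampling has huge relative error
    # when the event probability p is tiny.
    hits = sum(rng.random() < p for _ in range(n))
    return hits / n

def importance_sampling(p, q, n, rng):
    # Sample the event under a tilted probability q >> p and reweight
    # each hit by the likelihood ratio p/q; the estimator stays unbiased.
    est = 0.0
    for _ in range(n):
        if rng.random() < q:
            est += p / q
    return est / n

rng = random.Random(0)
p = 1e-6
naive = crude_mc(p, 10_000, rng)              # almost surely 0.0
tilted = importance_sampling(p, 0.5, 10_000, rng)
```

With 10,000 samples the crude estimator essentially never sees the event, while the tilted estimator is within a few percent of the true probability.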
Tunneling approach and thermality in dispersive models of analogue gravity
Belgiorno, F; Piazza, F Dalla
2014-01-01
We set up a tunneling approach to the analogue Hawking effect in the case of models of analogue gravity which are affected by dispersive effects. An effective Schroedinger-like equation for the basic scattering phenomenon IN->P+N*, where IN is the incident mode, P is the positive-norm reflected mode, and N* is the negative-norm one, signalling particle creation, is derived, aimed at an approximate description of the phenomenon. Horizons and barrier penetration manifestly play a key role in giving rise to pair creation. The non-dispersive limit is also correctly recovered. Drawbacks of the model are pointed out and a possible ad hoc solution is suggested.
Ordered LOGIT Model approach for the determination of financial distress.
Kinay, B
2010-01-01
Nowadays, as a result of global competition, numerous companies run into financial distress. Predicting such problems and taking proactive measures against them is quite important. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios of 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is composed by scaling the level of risk by means of Altman's Z-score. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
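The risk scaling via Altman's Z-score can be sketched as follows, using the classic 1968 coefficients and the conventional distress/grey-zone cut-offs of 1.81 and 2.99; the ratio values in the example call are invented, and the integer class labels are only one possible encoding for an ordered-logit target:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    # Altman (1968) Z-score for publicly traded manufacturing firms.
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def risk_level(z):
    # Ordered risk classes suitable as an ordered-logit dependent variable.
    if z < 1.81:
        return 2  # distress zone
    if z < 2.99:
        return 1  # grey zone
    return 0      # safe zone

z = altman_z(0.25, 0.30, 0.15, 1.10, 1.40)   # hypothetical firm ratios
level = risk_level(z)
```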
Performance optimization of Jatropha biodiesel engine model using Taguchi approach
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, T.; Murugesan, K.; Gakkhar, R.P. [Mechanical and Industrial Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247 667 (India)
2009-11-15
This paper proposes a methodology for thermodynamic model analysis of a Jatropha biodiesel engine in combination with Taguchi's optimization approach to determine the optimum engine design and operating parameters. A thermodynamic model based on a two-zone Weibe heat release function has been employed to simulate the Jatropha biodiesel engine performance. Among the important engine design and operating parameters, 10 critical parameters were selected, assuming interactions between pairs of parameters. Using linear graph theory and the Taguchi method, an L{sub 16} orthogonal array has been utilized to determine the layout of the engine test trials. In order to maximize the performance of the Jatropha biodiesel engine, the signal-to-noise ratio (SNR) related to the higher-the-better (HTB) quality characteristic has been used. The present methodology correctly predicted the compression ratio, Weibe's heat release constants and combustion zone duration as the critical parameters that affect the performance of the engine compared to other parameters. (author)
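The higher-the-better SNR used in Taguchi analysis is the standard SNR = -10 log10((1/n) sum(1/y_i^2)); a minimal sketch, with the two sets of trial responses invented for illustration:

```python
import math

def snr_higher_the_better(y):
    # Taguchi signal-to-noise ratio for higher-the-better responses:
    # rewards both high mean output and low variability.
    return -10.0 * math.log10(sum(1.0 / yi**2 for yi in y) / len(y))

# Two hypothetical trial result sets: the higher, more consistent one wins.
good = snr_higher_the_better([40.0, 41.0, 39.5])
poor = snr_higher_the_better([20.0, 35.0, 18.0])
```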
Wind Turbine Noise Propagation Modelling: An Unsteady Approach
Barlas, E.; Zhu, W. J.; Shen, W. Z.; Andersen, S. J.
2016-09-01
Wind turbine sound generation and propagation phenomena are inherently time-dependent, hence tools that incorporate the dynamic nature of these two issues are needed for accurate modelling. In this paper, we investigate the sound propagation from a wind turbine by considering the effects of unsteady flow around it and time-dependent source characteristics. For the acoustics modelling we employ the Parabolic Equation (PE) method, while Large Eddy Simulation (LES) as well as synthetically generated turbulence fields are used to generate the medium flow upon which sound propagates. Unsteady acoustic simulations are carried out for three incoming wind shear profiles and various turbulence intensities, using a moving source approach to mimic the rotating turbine blades. The focus of the present paper is to study the near- and far-field amplitude modulation characteristics and the time evolution of the Sound Pressure Level (SPL).
New modeling approach for bounding flight in birds.
Sachs, Gottfried; Lenz, Jakob
2011-12-01
A new modeling approach is presented which accounts for the unsteady motion features and dynamics characteristics of bounding flight. For this purpose, a realistic mathematical model is developed to describe the flight dynamics of a bird with regard to a motion which comprises flapping and bound phases involving acceleration and deceleration as well as, simultaneously, pull-up and push-down maneuvers. Furthermore, a mathematical optimization method is used for determining that bounding flight mode which yields the minimum energy expenditure per range. Thus, it can be shown to what extent bounding flight is aerodynamically superior to continuous flapping flight, yielding a reduction in the energy expenditure in the speed range practically above the maximum range speed. Moreover, the role of the body lift for the efficiency of bounding flight is identified and quantified. Introducing an appropriate non-dimensionalization of the relations describing the bird's flight dynamics, results of generally valid nature are derived for the addressed items.
Kinetics approach to modeling of polymer additive degradation in lubricants
Institute of Scientific and Technical Information of China (English)
Ilya I. KUDISH; Ruben G. AIRAPETYAN; Michael J. COVITCH
2001-01-01
A kinetics problem for a degrading polymer additive dissolved in a base stock is studied. The polymer degradation may be caused by the combination of such lubricant flow parameters as pressure, elongational strain rate, and temperature as well as lubricant viscosity and the polymer characteristics (dissociation energy, bead radius, bond length, etc.). A fundamental approach to the problem of modeling mechanically induced polymer degradation is proposed. The polymer degradation is modeled on the basis of a kinetic equation for the density of the statistical distribution of polymer molecules as a function of their molecular weight. The integrodifferential kinetic equation for polymer degradation is solved numerically. The effects of pressure, elongational strain rate, temperature, and lubricant viscosity on the process of lubricant degradation are considered. The increase of pressure promotes fast degradation while the increase of temperature delays degradation. A comparison of a numerically calculated molecular weight distribution with an experimental one obtained in bench tests showed that they are in excellent agreement with each other.
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
Peyrounette, M.; Sylvie, L.; Davit, Y.; Quintard, M.
2014-12-01
We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e. distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problem is analogous to modeling problems encountered in geological media, e.g., in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fracture/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain-specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54 : 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.
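The "well model" coupling borrowed from petroleum engineering [3] boils down to a transmissibility linking the discrete source to the coarse grid block it sits in. A sketch of Peaceman's classic well index for an isotropic square block follows; the equivalent radius r_o of roughly 0.2*dx is Peaceman's result, while every numerical parameter value here is invented and only loosely scaled to a capillary setting:

```python
import math

def peaceman_well_index(k, h, dx, r_w):
    # Peaceman (1978): equivalent block radius r_o is about 0.2*dx for an
    # isotropic square grid block. The well index WI couples source and
    # block pressures via q = WI * (p_block - p_well) (viscosity omitted).
    r_o = 0.2 * dx
    return 2.0 * math.pi * k * h / math.log(r_o / r_w)

# Illustrative values (all invented): permeability, thickness, block
# size, and "well" (vessel) radius.
wi = peaceman_well_index(k=1e-13, h=1e-3, dx=1e-4, r_w=3e-6)
```

A larger vessel radius shrinks the logarithm and therefore increases the coupling strength, which matches the physical intuition.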
A Workflow-Oriented Approach To Propagation Models In Heliophysics
Directory of Open Access Journals (Sweden)
Gabriele Pierantoni
2014-01-01
Full Text Available The Sun is responsible for the eruption of billions of tons of plasma and the generation of near light-speed particles that propagate throughout the solar system and beyond. If directed towards Earth, these events can be damaging to our technological infrastructure. Hence there is an effort to understand the cause of the eruptive events and how they propagate from Sun to Earth. However, the physics governing their propagation is not well understood, so there is a need to develop a theoretical description of their propagation, known as a Propagation Model, in order to predict when they may impact Earth. It is often difficult to define a single propagation model that correctly describes the physics of solar eruptive events, and even more difficult to implement models capable of catering for all these complexities and to validate them using real observational data. In this paper, we envisage that workflows offer both a theoretical and practical framework for a novel approach to propagation models. We define a mathematical framework that aims at encompassing the different modalities with which workflows can be used, and provide a set of generic building blocks written in the TAVERNA workflow language that users can use to build their own propagation models. Finally, we test both the theoretical model and the composite building blocks of the workflow with a real Science Use Case that was discussed during the 4th CDAW (Coordinated Data Analysis Workshop) event held by the HELIO project. We show that generic workflow building blocks can be used to construct a propagation model that successfully describes the transit of solar eruptive events toward Earth and predicts a correct Earth-impact time.
Drifting model approach to modeling based on weighted support vector machines
Institute of Scientific and Technical Information of China (English)
冯瑞; 宋春林; 邵惠鹤
2004-01-01
This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVM algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then propose the DM method using this algorithm. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), the simulation results show that the performance of the proposed approach is superior to a global modeling method based on standard SVMs.
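The core idea of weighting samples for local learning can be illustrated with a locally weighted least-squares fit. This is a deliberately simple stand-in for the paper's W_SVMs, not its actual algorithm; the Gaussian kernel, bandwidth, and data are all invented:

```python
import numpy as np

def local_fit(x_train, y_train, x0, bandwidth):
    # Gaussian proximity weights centred on the query point x0:
    # nearby samples dominate the fit, distant ones are ignored.
    w = np.exp(-((x_train - x0) ** 2) / (2 * bandwidth**2))
    # Weighted least-squares line fit: scale rows by sqrt(w).
    X = np.column_stack([np.ones_like(x_train), x_train])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y_train * sw, rcond=None)
    return beta[0] + beta[1] * x0

x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x)                  # nonlinear target a single global line cannot fit
pred = local_fit(x, y, x0=1.0, bandwidth=0.2)
```

A single global line fit of a sine wave would be badly wrong at x0 = 1.0, while the locally weighted fit tracks the curve closely.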
A quality risk management model approach for cell therapy manufacturing.
Lopez, Fabio; Di Bartolo, Chiara; Piazza, Tommaso; Passannanti, Antonino; Gerlach, Jörg C; Gridelli, Bruno; Triolo, Fabio
2010-12-01
International regulatory authorities view risk management as an essential production need for the development of innovative, somatic cell-based therapies in regenerative medicine. The available risk management guidelines, however, provide little guidance on specific risk analysis approaches and procedures applicable in clinical cell therapy manufacturing. This raises a number of problems. Cell manufacturing is a poorly automated process, prone to operator-introduced variations, and affected by heterogeneity of the processed organs/tissues and lot-dependent variability of reagent (e.g., collagenase) efficiency. In this study, the principal challenges faced in a cell-based product manufacturing context (i.e., high dependence on human intervention and absence of reference standards for acceptable risk levels) are identified and addressed, and a risk management model approach applicable to manufacturing of cells for clinical use is described for the first time. The use of the heuristic and pseudo-quantitative failure mode and effect analysis/failure mode and critical effect analysis risk analysis technique associated with direct estimation of severity, occurrence, and detection is, in this specific context, as effective as, but more efficient than, the analytic hierarchy process. Moreover, a severity/occurrence matrix and Pareto analysis can be successfully adopted to identify priority failure modes on which to act to mitigate risks. The application of this approach to clinical cell therapy manufacturing in regenerative medicine is also discussed.
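The severity/occurrence/detection scoring and Pareto-style prioritization described above can be sketched as follows; the failure modes and their scores are invented for illustration only:

```python
def rpn(severity, occurrence, detection):
    # Risk Priority Number used in FMEA/FMECA-style analyses:
    # direct product of the three estimated scores (each typically 1-10).
    return severity * occurrence * detection

# Hypothetical failure modes in a cell-manufacturing step: (S, O, D).
failure_modes = {
    "operator pipetting error": (8, 6, 5),
    "collagenase lot variability": (7, 7, 4),
    "incubator temperature drift": (9, 2, 3),
}

# Pareto-style ranking: act first on the modes with the highest RPN.
ranked = sorted(failure_modes, key=lambda m: rpn(*failure_modes[m]),
                reverse=True)
```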
THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?
Directory of Open Access Journals (Sweden)
Rory James Ridley-Duff
2015-07-01
Full Text Available This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach are examined to provide a conceptual framework for examining FairShares as a case study. In the second part, findings are scrutinised in terms of the ethical values and principles that are activated when FairShares is applied in practice. The paper contributes to knowledge by giving an example of the way open-source technology (Loomio) has been used to translate 'espoused theories' into 'theories in use' to advance social enterprise development. The review of FairShares using the conceptual framework suggests there is a fourth approach based on multi-stakeholder co-operation to create 'associative democracy' in the workplace.
An approach to model based testing of multiagent systems.
Ur Rehman, Shafiq; Nadeem, Aamer
2015-01-01
Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
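The conversion of a protocol diagram into a graph and the derivation of test paths under a coverage criterion can be sketched as follows. The protocol below is a tiny hypothetical percept/message/action interaction, not one from the paper, and simple path enumeration over an acyclic graph stands in for the tool's coverage machinery:

```python
# Protocol graph: nodes are interaction states, edges are percepts,
# inter-agent messages, and actions (all names hypothetical).
graph = {
    "start": ["percept_received"],
    "percept_received": ["msg_agent_a", "action_done"],
    "msg_agent_a": ["msg_agent_b"],
    "msg_agent_b": ["action_done"],
    "action_done": [],
}

def all_paths(graph, start):
    # Depth-first enumeration of start-to-sink paths; for this acyclic
    # protocol graph the full path set covers every edge (edge coverage).
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        successors = graph[path[-1]]
        if not successors:
            paths.append(path)
        for node in successors:
            stack.append(path + [node])
    return paths

test_paths = all_paths(graph, "start")
```

Each returned path is one test scenario: a sequence of interactions the prototype tool would exercise against the agents.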
An Approach to Model Based Testing of Multiagent Systems
Directory of Open Access Journals (Sweden)
Shafiq Ur Rehman
2015-01-01
Full Text Available Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
Right approach to 3D modeling using CAD tools
Baddam, Mounica Reddy
The thesis provides a step-by-step methodology to enable an instructor dealing with CAD tools to optimally guide his/her students through an understandable 3D modeling approach, which will not only enhance their knowledge of the tool's usage but also enable them to achieve their desired result in comparatively less time. In the practical field, there is very little information available on applying CAD skills to formal beginners' training sessions. Additionally, the advent of new software in the 3D domain makes staying up to date a more difficult task. Keeping up with the industry's advanced requirements emphasizes the importance of more skilled hands in the field of CAD development, rather than just prioritizing manufacturing in terms of complex software features. The thesis analyses different 3D modeling approaches specific to the variety of CAD tools currently available in the market. Utilizing performance-time databases, learning curves have been generated to measure performance time, feature count, etc. Based on the results, improvement parameters have also been provided (Asperl, 2005).
A New Approach in Regression Analysis for Modeling Adsorption Isotherms
Directory of Open Access Journals (Sweden)
Dana D. Marković
2014-01-01
Full Text Available Numerous regression approaches to isotherm parameter estimation appear in the literature. Real insight into the proper modeling pattern can be achieved only by testing methods on a very large number of cases. Experimentally, this cannot be done in a reasonable time, so the Monte Carlo simulation method was applied. The objective of this paper is to introduce and compare numerical approaches that involve different levels of knowledge about the noise structure of the analytical method used for initial and equilibrium concentration determination. Six levels of homoscedastic noise and five types of heteroscedastic noise precision models were considered. Performance of the methods was statistically evaluated based on median percentage error and mean absolute relative error in parameter estimates. The present study showed a clear distinction between two cases. When equilibrium experiments are performed only once, for the homoscedastic case the winning error function is ordinary least squares, while for the case of heteroscedastic noise the use of orthogonal distance regression or Marquardt's percent standard deviation is suggested. It was found that when experiments are repeated three times, the simple method of weighted least squares performed as well as the more complicated orthogonal distance regression method.
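The distinction between ordinary and weighted least squares under heteroscedastic noise can be illustrated on a linear fit of the kind obtained after linearizing an isotherm. The data, the proportional-error noise model, and all parameter values below are synthetic inventions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 40)
y_true = 2.0 + 0.5 * x
sigma = 0.05 * y_true                  # heteroscedastic: error grows with y
y = y_true + rng.normal(0.0, sigma)    # per-point noise levels

X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: every point weighted equally.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares: scale each row by 1/sigma, so noisy points
# (large y) count less; this is the statistically efficient choice here.
w = 1.0 / sigma
b_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
```

With only 5% proportional noise both fits land near the true intercept 2.0 and slope 0.5; the difference in estimator variance becomes visible over many Monte Carlo repetitions, which is exactly what the paper automates.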
A Variational Approach to the Modeling of MIMO Systems
Directory of Open Access Journals (Sweden)
A. Jraifi
2007-05-01
Full Text Available Motivated by the study of the optimization of the quality of service for multiple input multiple output (MIMO) systems in 3G (third generation), we develop a method for modeling the MIMO channel ℋ. This method, which uses a statistical approach, is based on a variational form of the usual channel equation. The proposed equation is given by δ² = ⟨δR|ℋ|δE⟩ + ⟨δR|(δℋ)|E⟩ with scalar variable δ = ‖δR‖. The minimum distance δmin of received vectors |R⟩ is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system as it captures the degree of interference between neighboring vectors. Then, we use this approach to compute numerically the total probability of errors with respect to the signal-to-noise ratio (SNR) and then predict the number of antennas. By fixing the SNR to a specific value, we extract information on the optimal number of MIMO antennas.
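The minimum-distance random variable can be estimated by drawing a fading channel matrix and computing the smallest separation between distinct received codewords. This is a toy sketch only: the Rayleigh channel, the QPSK-style codebook, and the 2x2 dimensions are assumptions for illustration, not the paper's statistical model:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
nt, nr = 2, 2                          # transmit / receive antennas
# Rayleigh-fading channel: i.i.d. unit-variance complex Gaussian entries.
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

# Small QPSK-like codebook of transmit vectors (one symbol per antenna).
symbols = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
codebook = [np.array(v) for v in itertools.product(symbols, repeat=nt)]

# delta_min: smallest distance between distinct received vectors H @ x;
# small values signal strong interference between neighboring vectors.
delta_min = min(
    np.linalg.norm(H @ (a - b))
    for a, b in itertools.combinations(codebook, 2)
)
```

Repeating this over many channel draws yields the empirical distribution of δmin, from which error probabilities versus SNR can be estimated.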
A Simplified Approach to Multivariable Model Predictive Control
Directory of Open Access Journals (Sweden)
Michael Short
2015-01-01
Full Text Available The benefits of applying the range of technologies generally known as Model Predictive Control (MPC) to the control of industrial processes have been well documented in recent years. One of the principal drawbacks of MPC schemes is the relatively high on-line computational burden when used with adaptive, constrained and/or multivariable processes, which has prompted some researchers and practitioners to seek simplified approaches to its implementation. To date, several schemes have been proposed based around a simplified 1-norm formulation of multivariable MPC, which is solved online using the simplex algorithm in both the unconstrained and constrained cases. In this paper a 2-norm approach to simplified multivariable MPC is formulated, which is solved online using a vector-matrix product or a simple iterative coordinate descent algorithm for the unconstrained and constrained cases respectively. A CARIMA model is employed to ensure offset-free control, and a simple scheme to produce the optimal predictions is described. A small simulation study and further discussions help to illustrate that this quadratic formulation performs well and can be considered a useful adjunct to its linear counterpart, and still retains beneficial features such as ease of computer-based implementation.
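In the unconstrained 2-norm case the online work reduces to a single linear-algebra step: with prediction matrix G, free response f, setpoint trajectory w, and move-suppression weight λ, the control moves are Δu = (GᵀG + λI)⁻¹Gᵀ(w − f), and only the first move is applied. The sketch below uses a generic GPC-style formulation with an invented step-response plant, not the paper's exact CARIMA-based scheme:

```python
import numpy as np

# Hypothetical step-response coefficients of a stable first-order plant,
# prediction horizon N = 4 (all numbers invented for illustration).
g = np.array([0.2, 0.36, 0.488, 0.59])
N = len(g)
G = np.zeros((N, N))
for i in range(N):
    G[i, : i + 1] = g[i::-1]      # lower-triangular Toeplitz prediction matrix

w = np.ones(N)                    # setpoint trajectory (unit step)
f = np.zeros(N)                   # free response (plant initially at rest)
lam = 0.1                         # move-suppression weight

# Unconstrained 2-norm MPC: one linear solve per sample; in a receding
# horizon implementation only du[0] is applied to the plant.
du = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (w - f))
```

Because (GᵀG + λI)⁻¹Gᵀ can be precomputed offline, the online cost is just the vector-matrix product the abstract refers to.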
Thin inclusion approach for modelling of heterogeneous conducting materials
Energy Technology Data Exchange (ETDEWEB)
Lavrov, Nikolay [Davenport University, 4801 Oakman Boulevard, Dearborn, MI 48126 (United States); Smirnova, Alevtina; Gorgun, Haluk; Sammes, Nigel [University of Connecticut, Department of Materials Science and Engineering, Connecticut Global Fuel Center, 44 Weaver Road, Unit 5233, Storrs, CT 06269 (United States)
2006-04-21
Experimental data show that the heterogeneous nanostructure of solid oxide and polymer electrolyte fuel cells can be approximated as an infinite set of fiber-like or penny-shaped inclusions in a continuous medium. Inclusions can be arranged in a cluster mode and in regular or random order. In the newly proposed theoretical model of nanostructured material, most attention is paid to the small aspect ratio of structural elements as well as to some model problems of electrostatics. The proposed integral equation for the electric potential caused by the charge distributed over a single circular or elliptic cylindrical conductor of finite length, as a single unit of a nanostructured material, has been asymptotically simplified for the small aspect ratio and solved numerically. The result demonstrates that the surface density changes slightly in the middle part of the thin domain and has boundary layers localized near the edges. It is anticipated that the contribution of the boundary layer solution to the surface density is significant and cannot be governed by the classic equation for a smooth linear charge. The role of the cross-section shape is also investigated. The proposed approach is sufficiently simple and robust, and allows extension to either regular or irregular systems of various inclusions. This approach can be used for the development of the system of conducting inclusions, which are commonly present in nanostructured materials used for solid oxide and polymer electrolyte fuel cell (PEMFC) materials. (author)
A Modeling Approach for Plastic-Metal Laser Direct Joining
Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca
2017-09-01
Laser processing has been identified as a feasible approach to direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work sees development of a modeling approach for conduction and transmission laser direct joining of these materials based on multi-layer optical propagation theory and numerical heat flow simulation. The scope of this methodology is to predict process outcomes based on the calculated joint interface and upper surface temperatures. Three representative cases are considered for model verification, including conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments are performed on black PBT and 6082 anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic layer optical absorption on process feasibility.
Modelling the Heat Consumption in District Heating Systems using a Grey-box approach
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Madsen, Henrik
2006-01-01
identification of an overall model structure followed by data-based modelling, whereby the details of the model are identified. This approach is sometimes called grey-box modelling, but the specific approach used here does not require states to be specified. Overall, the paper demonstrates the power of the grey-box approach. (c) 2005 Elsevier B.V. All rights reserved.
Lithium battery aging model based on Dakin's degradation approach
Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel
2016-09-01
This paper proposes and validates a calendar and power cycling aging model for two different lithium battery technologies. The model development is based on previous SIMCAL and SIMSTOCK project data. In these previous projects, the effect of the battery state of charge, temperature and current magnitude on aging was studied on a large panel of different battery chemistries. In this work, data are analyzed using Dakin's degradation approach. In fact, the logarithms of battery capacity fade and of the increase in resistance evolve linearly with aging. The slopes identified from these straight lines correspond to battery aging rates. Thus, a battery aging rate expression as a function of aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using Taylor series was consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of the current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases strongly with decreasing temperature in the range -5 °C to 25 °C and with increasing temperature in the range 25 °C to 60 °C.
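Dakin's approach amounts to fitting a straight line to the logarithm of the degradation measure and reading the aging rate off the slope; an Arrhenius/Eyring-style rate law then relates that rate to temperature. A sketch with synthetic data follows; the fade magnitude, noise level, and activation energy are all invented:

```python
import numpy as np

# Synthetic capacity-fade data: log(fade) grows linearly with time,
# as Dakin's degradation approach assumes.
t = np.linspace(0.0, 1000.0, 20)                 # aging time, hours
true_rate = 2.0e-3
noise = 0.01 * np.random.default_rng(2).normal(size=t.size)
log_fade = np.log(0.01) + true_rate * t + noise

# The slope of log(degradation) versus time is the aging rate.
rate, intercept = np.polyfit(t, log_fade, 1)

def thermal_rate(A, Ea, T, kB=8.617e-5):
    # Arrhenius/Eyring-style thermal dependence of the aging rate
    # (Ea in eV, T in kelvin); a simplified stand-in for the paper's law.
    return A * np.exp(-Ea / (kB * T))
```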
An inverse problem approach to modelling coastal effluent plumes
Lam, D. C. L.; Murthy, C. R.; Miners, K. C.
Formulated as an inverse problem, the diffusion parameters associated with length-scale dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal-zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporating observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model K_y = 0.075 σ_y^1.2 described by Murthy and Kenney.
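The inverse-problem machinery reduces to minimizing an error function between modelled and observed concentrations over the diffusion parameters. A toy sketch for the Fickian case follows, using a grid search; the plume-width forward model, the "observations", and all units are invented for illustration:

```python
import math

def plume_width(sigma0, K, t):
    # Toy Fickian growth of plume variance: sigma^2 = sigma0^2 + 2*K*t.
    return math.sqrt(sigma0**2 + 2.0 * K * t)

# "Observed" widths generated here with K = 0.5 (hypothetical units);
# in practice these would come from dye-plume measurements.
times = [10.0, 20.0, 40.0, 80.0]
observed = [plume_width(5.0, 0.5, t) for t in times]

def error(K):
    # Sum-of-squares error function against the observations.
    return sum((plume_width(5.0, K, t) - obs) ** 2
               for t, obs in zip(times, observed))

# Grid search over candidate diffusivities; the optimum recovers K = 0.5.
K_best = min((0.1 * i for i in range(1, 21)), key=error)
```

Real applications would replace the grid search with a proper optimizer and the toy forward model with the mass conservation equation, but the diagnostic structure is the same.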
A fuzzy approach to the Weighted Overlap Dominance model
DEFF Research Database (Denmark)
Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt
2013-01-01
Decision support models are required to handle the various aspects of multi-criteria decision problems in order to help the individual understand its possible solutions. In this sense, such models have to be capable of aggregating and exploiting different types of measurements and evaluations in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures are introduced for characterizing the type of uncertainty being expressed by intervals, examining at the same time how the WOD model handles both non-interval as well as interval data, and secondly, relevance degrees are proposed for obtaining a ranking over the alternatives. Hence, a complete methodology is presented for ordering and identifying the best alternatives under an interactive procedure that takes into account the natural imprecision and relevance of information.
Replacement model of city bus: A dynamic programming approach
Arifin, Dadang; Yusuf, Edhi
2017-06-01
This paper aims to develop a replacement model for city bus vehicles operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model propounds two policy alternatives: first, to maintain or keep the vehicles; second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was heuristically executed using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of the bus is approximately 20 years, while the economic life averages nine years. This means that after a bus has been operated for nine years, managers should consider the policy of rejuvenation.
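The keep-or-replace decision can be written as a textbook backward-recursion DP over vehicle age. The cost structure below (revenue, operating cost, salvage as functions of age, and the acquisition price) is entirely hypothetical; only the recursion mirrors the approach described:

```python
def replacement_policy(horizon, max_age, revenue, op_cost, salvage, price):
    """
    Backward-recursion DP: value[t][age] is the best profit from year t onward
    when entering year t with a bus of the given age; policy records keep/replace.
    """
    value = [[0.0] * (max_age + 1) for _ in range(horizon + 1)]
    policy = [[None] * (max_age + 1) for _ in range(horizon)]
    for age in range(max_age + 1):
        value[horizon][age] = salvage(age)  # sell whatever remains at the end
    for t in range(horizon - 1, -1, -1):
        for age in range(max_age + 1):
            keep = float("-inf")
            if age < max_age:  # a bus cannot be kept past its technical life
                keep = revenue(age) - op_cost(age) + value[t + 1][age + 1]
            replace = salvage(age) - price + revenue(0) - op_cost(0) + value[t + 1][1]
            value[t][age], policy[t][age] = max((keep, "keep"), (replace, "replace"))
    return value, policy

# Illustrative (hypothetical) age profiles: revenue falls, operating cost rises.
val, pol = replacement_policy(
    horizon=20, max_age=20,
    revenue=lambda a: 100 - 2 * a,
    op_cost=lambda a: 40 + 6 * a,
    salvage=lambda a: max(5, 60 - 5 * a),
    price=200,
)
print(pol[0][0])  # a new bus is kept in year 0 under this cost structure
```

The age at which the optimal policy flips from "keep" to "replace" plays the role of the economic life identified in the study.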
Directory of Open Access Journals (Sweden)
Merler Stefano
2010-06-01
Background: In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increasing frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches. The age
A computational toy model for shallow landslides: Molecular dynamics approach
Martelloni, Gianluca; Bagnoli, Franco; Massaro, Emanuele
2013-09-01
The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We use a molecular dynamics (MD) approach, similar to the discrete element method (DEM), which is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. The triggering is thus governed by the two following conditions: (a) a threshold speed of the particles and (b) a condition on the static friction, between the particles and the slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. The interaction force between particles is modeled, in the absence of experimental data, by means of a potential similar to the Lennard-Jones one. Viscosity is also introduced in the model, and for a large range of values of the model's parameters we observe a characteristic velocity pattern, with acceleration increments, typical of real landslides. The results of the simulations are quite promising: the energy and triggering-time distributions of local avalanches follow a power law, analogous to the observed Gutenberg-Richter and Omori power law distributions for earthquakes. Finally, it is possible to apply the method of the inverse surface displacement velocity [4] for predicting the failure time.
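A minimal sketch of the two ingredients named above, assuming a Lennard-Jones-style pair force and a rainfall-driven decay of the friction angle; the decay law and all parameter values are invented for illustration, not taken from the paper:

```python
import math

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of a Lennard-Jones-like pair force: positive = repulsive,
    negative = attractive, as a function of inter-particle distance r."""
    return 24.0 * epsilon * (2.0 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma

def mohr_coulomb_triggered(normal_stress, shear_stress, cohesion, phi_deg):
    """Failure when shear stress exceeds the Mohr-Coulomb strength envelope."""
    strength = cohesion + normal_stress * math.tan(math.radians(phi_deg))
    return shear_stress > strength

# Rainfall infiltration modeled (hypothetically) as exponential decay of the
# friction angle with cumulative rainfall.
phi0, k = 35.0, 0.05
def phi(rain):
    return phi0 * math.exp(-k * rain)

print(lj_force(1.0) > 0, lj_force(1.5) < 0)  # repulsive up close, attractive farther
states = [mohr_coulomb_triggered(10.0, 6.0, 1.0, phi(r)) for r in (0, 10, 40)]
print(states)  # friction degrades with rainfall until the slope fails
```

In a full MD/DEM simulation the pair force would enter the equations of motion for every particle, with the Mohr-Coulomb check deciding when particles detach from the static sliding surface.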
Predicting future glacial lakes in Austria using different modelling approaches
Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus
2017-04-01
Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on the geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach
Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.
2010-12-01
Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice mass such as the Antarctic Ice Sheet, or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue, East Antarctica), or feed large ice shelves (as is the case for, e.g., the Siple Coast and the Ross Ice Shelf, West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective, accounting for the fact that currently ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic Ocean basin during former glacial periods. Modeled Arctic Ocean ice shelf configurations are compared with geological spatial
An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model
Institute of Scientific and Technical Information of China (English)
LIANG Bin; SHI Wenchang; SUN Yufang; SUN Bo
2004-01-01
Using one security model to enforce another is a prospective solution for multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model within the role-based access control (RBAC) model is proposed. An enforcement construction with great feasibility is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, the corresponding relations among users, transformation procedures, and constrained data items are strengthened, and the concepts of task and subtask are introduced to enhance support for least privilege. The proposed approach widens the applicability of RBAC. A theoretical foundation for adopting the Clark-Wilson model in an RBAC system at small cost is offered, meeting the requirements of multi-policy support and policy flexibility.
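The core enforcement idea, Clark-Wilson access triples granted through roles, can be sketched in a few lines; the role names, users and data items below are hypothetical, not from the paper:

```python
# Hypothetical sketch: Clark-Wilson access triples enforced via RBAC roles.
# A user may run a transformation procedure (TP) on a constrained data item
# (CDI) only if some role they hold grants exactly that (TP, CDI) pair.

roles = {
    "clerk":   {("record_payment", "ledger")},
    "auditor": {("verify_ledger", "ledger")},
}
assignments = {"alice": {"clerk"}, "bob": {"auditor"}}

def authorized(user, tp, cdi):
    """Check the user's roles for the (TP, CDI) access triple."""
    return any((tp, cdi) in roles[r] for r in assignments.get(user, ()))

print(authorized("alice", "record_payment", "ledger"))  # True
print(authorized("alice", "verify_ledger", "ledger"))   # False: separation of duty
```

Keeping the recording and verification triples in disjoint roles is what gives the separation-of-duty property the Clark-Wilson model requires.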
Modeling quasi-static poroelastic propagation using an asymptotic approach
Energy Technology Data Exchange (ETDEWEB)
Vasco, D.W.
2007-11-01
Since the formulation of poroelasticity (Biot 1941) and its reformulation (Rice & Cleary 1976), there have been many efforts to solve the coupled system of equations. Perhaps because of the complexity of the governing equations, most of the work has been directed towards finding numerical solutions. For example, Lewis and co-workers published early papers (Lewis & Schrefler 1978; Lewis et al. 1991) concerned with finite-element methods for computing consolidation and subsidence, and examining the importance of coupling. Other early work dealt with flow in a deformable fractured medium (Narasimhan & Witherspoon 1976; Noorishad et al. 1984). This effort eventually evolved into a general numerical approach for modeling fluid flow and deformation (Rutqvist et al. 2002). As a result of this and other work, numerous coupled, computer-based algorithms have emerged, typically falling into one of three categories: one-way coupling, loose coupling, and full coupling (Minkoff et al. 2003). In one-way coupling the fluid flow is modeled using a conventional numerical simulator and the resulting change in fluid pressures simply drives the deformation. In loosely coupled modeling, distinct geomechanical and fluid flow simulators are run for a sequence of time steps, and at the conclusion of each step information is passed between the simulators. In full coupling, the fluid flow and geomechanics equations are solved simultaneously at each time step (Lewis & Sukirman 1993; Lewis & Ghafouri 1997; Gutierrez & Lewis 2002). One disadvantage of a purely numerical approach to solving the governing equations of poroelasticity is that it is not clear how the various parameters interact and influence the solution. Analytic solutions have an advantage in that respect; the relationship between the medium and fluid properties is clear from the form of the
Modeling healthcare authorization and claim submissions using the openEHR dual-model approach
Directory of Open Access Journals (Sweden)
Freire Sergio M
2011-10-01
Background: The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models, as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods: Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results: The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems, and the current tools must be adapted to deal with it. Conclusions: Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS, but its results may be extended to other billing
Hubbard Model Approach to X-ray Spectroscopy
Ahmed, Towfiq
We have implemented a Hubbard-model-based first-principles approach for real-space calculations of x-ray spectroscopy, which allows one to study the excited-state electronic structure of correlated systems. Theoretical understanding of many electronic features in d- and f-electron systems remains beyond the scope of conventional density functional theory (DFT). In this work our main effort is to go beyond the local density approximation (LDA) by incorporating the Hubbard model within the real-space multiple-scattering Green's function (RSGF) formalism. Historically, the first theoretical description of correlated systems was published by Sir Nevill Mott and others in 1937. They realized that the insulating gap and antiferromagnetism in the transition metal oxides are mainly caused by the strong on-site Coulomb interaction of the localized unfilled 3d orbitals. Even with the recent progress of first-principles methods (e.g., DFT) and model Hamiltonian approaches (e.g., the Hubbard-Anderson model), the electronic description of many of these systems remains a non-trivial combination of both. X-ray absorption near-edge spectra (XANES) and x-ray emission spectra (XES) are very powerful spectroscopic probes of many electronic features near the Fermi energy (EF) that are caused by the on-site Coulomb interaction of localized electrons. In this work we focus on three different cases of many-body effects due to the interaction of localized d electrons. Here, for the first time, we have applied the Hubbard model in the real-space multiple-scattering (RSGF) formalism to the calculation of x-ray spectra of Mott insulators (e.g., NiO and MnO). Secondly, we have implemented in our RSGF approach a doping-dependent self-energy constructed from a single-band Hubbard model for the over-doped high-Tc cuprate La2-xSrxCuO4. Finally, our RSGF calculation of XANES uses the spectral function from Lee and Hedin's charge-transfer satellite model. For all these cases our
A Dynamic Approach to Modeling Dependence Between Human Failure Events
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Laboratory
2015-09-01
In practice, most HRA methods use direct dependence from THERP: the notion that error begets error, and that one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. Three key concepts play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs; rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
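One hypothetical way to encode lag and latency is as a time window on a PSF multiplier: no effect until the lag elapses, then an effect that decays over the latency period. The shape and all numbers below are illustrative assumptions, not the paper's model:

```python
def psf_effect(t, t_activate, lag, latency, multiplier):
    """
    Hypothetical lag/latency shape: the PSF has no effect until `lag` time units
    after activation, then its multiplier decays linearly back to 1.0 over
    `latency` time units.
    """
    dt = t - (t_activate + lag)
    if dt < 0 or dt > latency:
        return 1.0
    return 1.0 + (multiplier - 1.0) * (1.0 - dt / latency)

base_hep = 0.01  # nominal human error probability (illustrative)
# A stress PSF activated at t=0: no effect until t=2, fading out by t=10.
heps = [min(1.0, base_hep * psf_effect(t, 0.0, 2.0, 8.0, 10.0)) for t in range(12)]
print(heps[0], heps[2], heps[11])  # nominal before lag, peaked at t=2, nominal after
```

Dependence between two HFEs then emerges whenever their time windows overlap a shared PSF, rather than from one HFE directly conditioning the next.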
Modeling of movement-related potentials using a fractal approach.
Uşakli, Ali Bülent
2010-06-01
In bio-signal applications, classification performance depends greatly on feature extraction, which is also the case for electroencephalogram (EEG) based applications. Feature extraction, and consequently classification, of EEG signals is not an easy task due to their inherent low signal-to-noise ratios and artifacts. EEG signals can be treated as the output of a non-linear dynamical (chaotic) system in the human brain, and therefore they can be modeled by their dimension values. In this study, the variance fractal dimension technique is suggested for the modeling of movement-related potentials (MRPs). The experimental data sets consist of EEG signals recorded during movements of the right foot, lip pursing, and a simultaneous execution of these two tasks. The experimental results and performance tests show that the proposed modeling method can efficiently be applied to MRPs, especially in binary-approach brain-computer interface applications aiming to assist severely disabled people, such as amyotrophic lateral sclerosis patients, in communication and/or controlling devices.
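A minimal sketch of the variance fractal dimension: estimate the Hurst exponent H from the log-log slope of increment variance versus lag, then take D = 2 - H for a time series. The lag set and the Brownian test signal are illustrative choices, not the study's EEG data:

```python
import math, random

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16)):
    """
    Variance fractal dimension of a 1-D signal: Var(x[t+k] - x[t]) ~ k^(2H),
    with H estimated from the log-log slope and D = 2 - H for a time series.
    """
    log_k, log_v = [], []
    for k in lags:
        diffs = [x[i + k] - x[i] for i in range(len(x) - k)]
        mean = sum(diffs) / len(diffs)
        var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
        log_k.append(math.log(k))
        log_v.append(math.log(var))
    n = len(lags)
    kbar, vbar = sum(log_k) / n, sum(log_v) / n
    slope = sum((a - kbar) * (b - vbar) for a, b in zip(log_k, log_v)) / \
            sum((a - kbar) ** 2 for a in log_k)
    return 2.0 - slope / 2.0

# Brownian motion has H = 0.5, hence an expected dimension near 1.5.
random.seed(0)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
d = variance_fractal_dimension(walk)
print(round(d, 2))
```

In an MRP application, D computed over sliding windows of the EEG would serve as the feature fed to a binary classifier.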
Hybrid empirical-theoretical approach to modeling uranium adsorption
Energy Technology Data Exchange (ETDEWEB)
Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W
2004-05-01
An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated with sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
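The Freundlich fit reported here is, in practice, a log-log linear regression of sorbed mass against concentration, q = K_f·C^n. A sketch on noise-free synthetic data (hypothetical units and parameter values, not the SDA measurements):

```python
import math

def fit_freundlich(c, q):
    """Linearize q = Kf * C**n as log q = log Kf + n log C, then least-squares fit."""
    lx = [math.log(v) for v in c]
    ly = [math.log(v) for v in q]
    m = len(lx)
    xbar, ybar = sum(lx) / m, sum(ly) / m
    n = sum((a - xbar) * (b - ybar) for a, b in zip(lx, ly)) / \
        sum((a - xbar) ** 2 for a in lx)
    kf = math.exp(ybar - n * xbar)
    return kf, n

# Synthetic isotherm generated with Kf = 2.0 and n = 0.7 (hypothetical units).
conc = [0.1, 0.5, 1.0, 5.0, 10.0]
sorbed = [2.0 * c ** 0.7 for c in conc]
kf, n = fit_freundlich(conc, sorbed)
print(round(kf, 3), round(n, 3))  # → 2.0 0.7
```

With a shared n across samples, the per-sample fit reduces to estimating K_f, which is exactly what makes the surface-area correlation so useful for a spatially variable transport model.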
Sulfur Deactivation of NOx Storage Catalysts: A Multiscale Modeling Approach
Directory of Open Access Journals (Sweden)
Rankovic N.
2013-09-01
Lean NOx Trap (LNT) catalysts, a promising solution for reducing the noxious nitrogen oxide emissions from lean-burn and Diesel engines, are technologically limited by the presence of sulfur in the exhaust gas stream. Sulfur stemming from both fuels and lubricating oils is oxidized during the combustion event and mainly exists as SOx (SO2 and SO3) in the exhaust. Sulfur oxides interact strongly with the NOx trapping material of an LNT to form thermodynamically favored sulfate species, consequently blocking NOx sorption sites and altering the catalyst operation. Molecular and kinetic modeling represent a valuable tool for predicting system behavior and evaluating catalytic performance. The present paper demonstrates how fundamental ab initio calculations can be used as a valuable source for designing kinetic models developed in the IFP Exhaust library, intended for vehicle simulations. The concrete example we chose to illustrate our approach is SO3 adsorption on the model NOx storage material, BaO. SO3 adsorption was described for various sites (terraces, surface steps, kinks and bulk) for a closer description of a real storage material. Additional rate and sensitivity analyses provided a deeper understanding of the poisoning phenomena.
Mobile phone use while driving: a hybrid modeling approach.
Márquez, Luis; Cantillo, Víctor; Arellana, Julián
2015-05-01
The analysis of the effects that mobile phone use produces while driving is a topic of great interest for the scientific community. There is consensus that using a mobile phone while driving increases the risk of exposure to traffic accidents. The purpose of this research is to evaluate the drivers' behavior when they decide whether or not to use a mobile phone while driving. For that, a hybrid modeling approach that integrates a choice model with the latent variable "risk perception" was used. It was found that workers and individuals with the highest education level are more prone to use a mobile phone while driving than others. Also, "risk perception" is higher among individuals who have been previously fined and people who have been in an accident or almost been in an accident. It was also found that the tendency to use mobile phones while driving increases when the traffic speed reduces, but it decreases when the fine increases. Even though the urgency of the phone call is the most important explanatory variable in the choice model, the cost of the fine is an important attribute in order to control mobile phone use while driving.
A New Approach to Model Verification, Falsification and Selection
Directory of Open Access Journals (Sweden)
Andrew J. Buck
2015-06-01
This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model's validity, both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern's entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As a hypothesized structural sign pattern's entropy decreases, it becomes more subject to type 1 error and less subject to type 2 error. Three cases illustrate the approach taken here.
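The qualitative-analysis idea can be illustrated by Monte Carlo: sample structural coefficients with the hypothesized signs, form the reduced form, and record which sign patterns are attainable; an estimated reduced form whose sign pattern lies outside that set falsifies the structure. The 2x2 system below is invented for illustration, not one of the paper's three cases:

```python
import random

def attainable_reduced_form_signs(sgn_B, sgn_G, draws=2000, seed=1):
    """
    Sample structural magnitudes consistent with hypothesized sign patterns for
    B (2x2) and Gamma (2x1), form the reduced form Pi = -B^-1 Gamma, and record
    which sign patterns of Pi the hypothesized structure can generate.
    """
    rng = random.Random(seed)
    seen = set()
    for _ in range(draws):
        B = [[s * rng.uniform(0.1, 10.0) for s in row] for row in sgn_B]
        G = [s * rng.uniform(0.1, 10.0) for s in sgn_G]
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        if abs(det) < 1e-9:
            continue
        pi1 = -(B[1][1] * G[0] - B[0][1] * G[1]) / det
        pi2 = -(-B[1][0] * G[0] + B[0][0] * G[1]) / det
        seen.add((1 if pi1 > 0 else -1, 1 if pi2 > 0 else -1))
    return seen

# Hypothesized (illustrative) structure: B = [[+, 0], [-, +]], Gamma = [+, +].
patterns = attainable_reduced_form_signs([[1, 0], [-1, 1]], [1, 1])
print(sorted(patterns))  # only [(-1, -1)]: any other estimated pattern falsifies it
```

Note how the zero restriction in B is itself testable here: relaxing it enlarges the attainable set, so an estimated sign pattern outside the restricted set but inside the relaxed one falsifies the restriction specifically.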
Modeling tropical river runoff: A time-dependent approach
Institute of Scientific and Technical Information of China (English)
Rashmi Nigam; Sudhir Nigam; Sushil K.Mittal
2014-01-01
Forecasting of rainfall and subsequent river runoff is important for many operational problems and applications related to hydrology. Modeling river runoff often requires rigorous mathematical analysis of vast historical data to arrive at reasonable conclusions. In this paper, we apply a stochastic method to characterize and predict the runoff of the perennial Kulfo River in southern Ethiopia. The time-series-based auto regressive integrated moving average (ARIMA) approach is applied to mean monthly runoff data with 10- and 20-year spans. The varying length of the input runoff data is shown to influence the forecasting efficiency of the stochastic process. Preprocessing of the runoff time series indicated that the data do not follow a seasonal pattern. Our forecasts were made using parsimonious non-seasonal ARIMA models, and the results were compared to actual 10-year and 20-year mean monthly runoff data of the Kulfo River. Our results indicate that river runoff forecasts based on the 10-year data are more accurate and efficient than those from the model based on the 20-year time series.
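The autoregressive backbone of the ARIMA approach can be sketched with an ordinary least-squares fit and iterated forecasts; the synthetic "runoff" series and all parameters below are illustrative, not the Kulfo River data:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) fit: x[t] ~ c + a1*x[t-1] + ... + ap*x[t-p]."""
    n = len(series)
    X = np.array([[1.0] + list(series[t - p:t][::-1]) for t in range(p, n)])
    y = np.asarray(series[p:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [c, a1, ..., ap]

def forecast(series, coef, steps):
    """Iterated one-step-ahead forecasts, as used for monthly runoff prediction."""
    p = len(coef) - 1
    hist = list(series)
    for _ in range(steps):
        hist.append(coef[0] + sum(coef[k + 1] * hist[-1 - k] for k in range(p)))
    return hist[len(series):]

# Synthetic "runoff" from a known AR(1) process: x[t] = 5 + 0.8*x[t-1] + noise.
rng = np.random.default_rng(0)
x = [25.0]
for _ in range(500):
    x.append(5.0 + 0.8 * x[-1] + rng.normal(0.0, 1.0))
coef = fit_ar(x, 1)
pred = forecast(x, coef, 12)  # twelve months ahead
print(round(float(coef[1]), 2))  # close to the true AR coefficient 0.8
```

A full ARIMA treatment would add differencing (the "I") and moving-average terms; the sensitivity of the fitted coefficients to the length of the input series is the effect the paper examines with the 10- and 20-year spans.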
A message passing approach for general epidemic models
Karrer, Brian
2010-01-01
In most models of the spread of disease over contact networks it is assumed that the probabilities of disease transmission and recovery from disease are constant in time. In real life, however, this is far from true. In many diseases, for instance, recovery occurs at about the same time after infection for all individuals, rather than at a constant rate. In this paper, we study a generalized version of the SIR (susceptible-infected-recovered) model of epidemic disease that allows for arbitrary nonuniform distributions of transmission and recovery times. Standard differential equation approaches cannot be used for this generalized model, but we show that the problem can be reformulated as a time-dependent message passing calculation on the appropriate contact network. The calculation is exact on trees (i.e., loopless networks) or locally tree-like networks (such as random graphs) in the large system size limit. On non-tree-like networks we show that the calculation gives a rigorous bound on the size of disease...
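The paper's central point, that the recovery-time distribution matters even at fixed mean, already shows up in the per-edge transmissibility T; a sketch using standard expressions for a constant transmission rate beta, plus a toy cascade on a tree (where message passing is exact). The branching setup is illustrative:

```python
import math

def transmissibility_exponential(beta, mean_tau):
    """Constant recovery rate (classic SIR): T = beta / (beta + 1/mean_tau)."""
    return beta / (beta + 1.0 / mean_tau)

def transmissibility_fixed(beta, tau):
    """Everyone recovers exactly tau after infection: T = 1 - exp(-beta*tau)."""
    return 1.0 - math.exp(-beta * tau)

def expected_infections(b, T, generations):
    """Expected cascade size from one seed on a b-ary tree: sum of (b*T)^k."""
    return sum((b * T) ** k for k in range(generations + 1))

beta, mean_tau = 0.3, 1.0
T_exp = transmissibility_exponential(beta, mean_tau)
T_fix = transmissibility_fixed(beta, mean_tau)
print(round(T_exp, 3), round(T_fix, 3))  # → 0.231 0.259: same mean, different T
```

Two recovery-time distributions with identical means give different transmissibilities, and hence different epidemic sizes, which is why the standard constant-rate differential equations cannot capture the generalized model.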
Masked areas in shear peak statistics. A forward modeling approach
Energy Technology Data Exchange (ETDEWEB)
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-03-09
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
DEFF Research Database (Denmark)
Panduro, Toke Emil; Thorsen, Bo Jellesmark
2014-01-01
We evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive...
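The VIF-based stepwise reduction can be sketched as: repeatedly drop the regressor with the largest variance inflation factor until all fall below a threshold. The variable names and data below are invented for illustration, not the Danish dataset:

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column: 1 / (1 - R^2) against the others."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / max(1e-12, 1.0 - r2))
    return out

def prune_collinear(X, names, threshold=10.0):
    """Stepwise reduction: drop the highest-VIF regressor until all are below threshold."""
    names = list(names)
    while X.shape[1] > 1:
        v = vif(X)
        worst = int(np.argmax(v))
        if v[worst] < threshold:
            break
        X = np.delete(X, worst, axis=1)
        names.pop(worst)
    return X, names

rng = np.random.default_rng(2)
a = rng.normal(size=200)
b = rng.normal(size=200)
X = np.column_stack([a, b, a + 0.01 * rng.normal(size=200)])  # col 3 nearly duplicates col 1
_, kept = prune_collinear(X, ["dist_forest", "lot_size", "dist_forest_alt"])
print(kept)  # the near-duplicate regressor has been dropped
```

The PCA alternative would instead replace all 86 variables with a handful of orthogonal components, trading interpretability of individual implicit prices for stability of the estimates.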
Modelling pathways to Rubisco degradation: a structural equation network modelling approach.
Directory of Open Access Journals (Sweden)
Catherine Tétard-Jones
'Omics analysis (transcriptomics, proteomics) quantifies changes in gene/protein expression, providing a snapshot of changes in biochemical pathways over time. Although tools such as modelling that are needed to investigate the relationships between genes/proteins already exist, they are rarely utilised. We consider the potential for using Structural Equation Modelling to investigate protein-protein interactions in a proposed Rubisco protein degradation pathway, using previously published data from 2D electrophoresis and mass spectrometry proteome analysis. These informed the development of a prior model that hypothesised a pathway of Rubisco Large Subunit and Small Subunit degradation, producing both primary and secondary degradation products. While some of the putative pathways were confirmed by the modelling approach, the model also demonstrated features that had not been originally hypothesised. We used Bayesian analysis based on Markov Chain Monte Carlo simulation to generate output statistics suggesting that the model had replicated the variation in the observed data due to protein-protein interactions. This study represents an early step in the development of approaches that seek to enable the full utilisation of information regarding the dynamics of biochemical pathways contained within proteomics data. As these approaches gain attention, they will guide the design and conduct of experiments that enable 'Omics modelling to become a commonplace practice within molecular biology.
A Multiscale Approach for Modeling Oxygen Production by Adsorption
Directory of Open Access Journals (Sweden)
Pavone D.
2013-10-01
Oxygen production processes using adsorbents for application to CCS (Carbon Capture and Storage) technologies offer potential cost benefits over classical cryogenics. In order to model adsorption processes, an approach using three size scales has been developed. This work is being conducted in the framework of the DECARBit European research project. The first scale is that of the oxygen adsorption bed, modelled as a vertical cylinder filled with pellets. Its length is 0.2 m (scale 10^-1 m). The bed is homogeneous in the transversal direction, so the problem is 1D (independent variables t, x). The physics in the process includes convection and dispersion of the gas species concentrations Cbk(t, x), thermal convection and conduction of the temperature T(t, x), and hydrodynamics v(t, x). The gas constituents involved are N2, O2, CO2 and H2O. The second scale is that of the pellets that fill the adsorber, which are assumed to be spherical with a typical radius of 5 mm (scale 10^-3 m). The independent variable for the pellets is the radius "rp". At a given height x down the adsorber, all the pellets are identical and are surrounded by the same gas composition, but inside the pellets the concentrations may vary. The state variables for the inner part of the pellets are the gas concentrations Cpk(t, x, rp). The pellets are so small that they are assumed to have a uniform temperature. This leads to a 2D transient model for the pellets linked to the 1D transient model for the bulk. The third scale looks into the detailed structure of the pellets, which are made of perovskite crystallites. The latter are assumed to be spherical. Oxygen adsorption occurs in the crystallites, which have a radius of about 0.5 µm (scale 10^-7 m). All the crystallites at the same radius in a pellet are supposed to behave identically and, because they are spherical, the only independent variable for a crystallite located at (x, rp) is its radius "rc". The state variables for the crystallites
An approach to model validation and model-based prediction -- polyurethane foam case study.
Energy Technology Data Exchange (ETDEWEB)
Dowding, Kevin J.; Rutherford, Brian Milne
2003-07-01
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical
METHODOLOGICAL APPROACH AND MODEL ANALYSIS FOR IDENTIFICATION OF TOURIST TRENDS
Directory of Open Access Journals (Sweden)
Neven Šerić
2015-06-01
Full Text Available The draw and diversity of the destination’s offer is an antecedent of growth in tourism visits. The destination supply is differentiated through new, specialised tourism products. The usual approach consists of forming specialised tourism products in accordance with the existing tourism destination image. Another approach, prevalent in the practice of developed tourism destinations, is based on innovating the destination supply in accordance with global tourism trends. For this purpose, it is advisable to choose a method for monitoring and analysing tourism trends. The goal is to determine the actual trends governing target markets, distinguishing whims from trends during the tourism preseason. When considering the return on investment, modifying the destination’s tourism offer on the basis of a tourism whim is a risky endeavour indeed. Adapting the destination’s supply to tourism whims can result in a shifted image, one that is unable to ensure long-term interest and growth in tourist visits. With regard to tourism trend research and based on the research conducted, a model for evaluating tourism phenomena is proposed, one that determines whether a tourism phenomenon is a tourism trend or a tourism whim.
A dynamic appearance descriptor approach to facial actions temporal modeling.
Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja
2014-02-01
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset in database-dependent experiments, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
A Gaussian graphical model approach to climate networks
Energy Technology Data Exchange (ETDEWEB)
Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
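The distinction between Pearson and partial correlations that underlies the GGM construction can be illustrated on a toy chain of dependencies. The three-variable example, coefficients, and thresholds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Chain: x1 -> x2 -> x3. x1 and x3 are only *indirectly* dependent.
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)
x3 = 0.8 * x2 + 0.6 * rng.standard_normal(n)
data = np.column_stack([x1, x2, x3])

# Pearson correlation links x1 and x3 even though the dependence is indirect.
pearson_13 = np.corrcoef(data, rowvar=False)[0, 2]

# Partial correlations come from the precision (inverse covariance) matrix:
# rho_ij.rest = -P_ij / sqrt(P_ii * P_jj)
prec = np.linalg.inv(np.cov(data, rowvar=False))
partial_13 = -prec[0, 2] / np.sqrt(prec[0, 0] * prec[2, 2])

print(pearson_13, partial_13)  # strong vs. near-zero
```

A GGM-style network would keep edges (1,2) and (2,3) but drop (1,3), whereas a Pearson-threshold network would include all three.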
A Unified Approach to Model-Based Planning and Execution
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.
A model-data based systems approach to process intensification
DEFF Research Database (Denmark)
Gani, Rafiqul
In recent years process intensification (PI) has attracted much interest as a potential means of process improvement to meet the demands, such as, for sustainable production. A variety of intensified equipment are being developed that potentially creates options to meet these demands...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model....... Here, established procedures for computer aided molecular design is adopted since combination of phenomena to form unit operations with desired objectives is, in principle, similar to combining atoms to form molecules with desired properties. The concept of the phenomena-based synthesis/design method...
Agents, Bayes, and Climatic Risks - a modular modelling approach
Directory of Open Access Journals (Sweden)
A. Haas
2005-01-01
Full Text Available When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.
Benchmarking of computer codes and approaches for modeling exposure scenarios
Energy Technology Data Exchange (ETDEWEB)
Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
New Cutting Force Modeling Approach for Flat End Mill
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
A new mechanistic cutting force model for flat end milling using the instantaneous cutting force coefficients is proposed. An in-depth analysis shows that the total cutting forces can be separated into two terms: a nominal component independent of the runout and a perturbation component induced by the runout. The instantaneous value of the nominal component is used to calibrate the cutting force coefficients. With the help of the perturbation component and the cutting force coefficients obtained above, the cutter runout is identified. Based on simulation and experimental results, the validity of the identification approach is demonstrated. The advantage of the proposed method lies in that the calibration performed with data of one cutting test under a specific regime can be applied over a wide range of cutting conditions.
Correlations in a generalized elastic model: fractional Langevin equation approach.
Taloni, Alessandro; Chechkin, Aleksei; Klafter, Joseph
2010-12-01
The generalized elastic model (GEM) provides the evolution equation which governs the stochastic motion of several many-body systems in nature, such as polymers, membranes, and growing interfaces. On the other hand a probe (tracer) particle in these systems performs a fractional Brownian motion due to the spatial interactions with the other system's components. The tracer's anomalous dynamics can be described by a fractional Langevin equation (FLE) with a space-time correlated noise. We demonstrate that the description given in terms of GEM coincides with that furnished by the relative FLE, by showing that the correlation functions of the stochastic field obtained within the FLE framework agree with the corresponding quantities calculated from the GEM. Furthermore we show that the Fox H-function formalism appears to be very convenient to describe the correlation properties within the FLE approach.
A Dynamic Linear Modeling Approach to Public Policy Change
DEFF Research Database (Denmark)
Loftis, Matthew; Mortensen, Peter Bjerre
2017-01-01
Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing...... them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public...... policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how...
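A minimal illustration of the kind of time-varying relationship a DLM captures is a regression whose slope follows a random walk, estimated with a Kalman filter. The data, noise variances, and variable names below are invented for the sketch and are not taken from the defense-policy analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Synthetic data: the effect of x on y drifts from 1.0 to 2.0 over time.
x = rng.standard_normal(T)
beta_true = 1.0 + np.linspace(0.0, 1.0, T)
y = beta_true * x + 0.5 * rng.standard_normal(T)

# DLM: y_t = beta_t * x_t + eps_t,  beta_t = beta_{t-1} + w_t (random walk).
Q, R = 0.01, 0.25            # state and observation noise variances (assumed)
b, P = 0.0, 1.0              # prior mean and variance for beta_0
b_hat = np.empty(T)
for t in range(T):
    P = P + Q                           # predict: random-walk state
    K = P * x[t] / (x[t] ** 2 * P + R)  # Kalman gain
    b = b + K * (y[t] - x[t] * b)       # update with observation t
    P = (1.0 - K * x[t]) * P
    b_hat[t] = b

print(b_hat[-1])  # tracks the drifting coefficient, beta_true[-1] = 2.0
```

A static OLS regression would return a single averaged slope and miss the drift entirely, which is exactly the inadequacy of standard tools the abstract points to.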
Systems approaches to computational modeling of the oral microbiome
Directory of Open Access Journals (Sweden)
Dimiter V. Dimitrov
2013-07-01
Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain on diet–oral microbiome–host mucosal transcriptome interactions. In particular we focus on systems biology of Porphyromonas gingivalis in the context of high throughput computational methods tightly integrated with translational systems medicine. These approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders.
Modeling Educational Content: The Cognitive Approach of the PALO Language
Directory of Open Access Journals (Sweden)
M. Felisa Verdejo Maíllo
2004-01-01
Full Text Available This paper presents a reference framework to describe educational material. It introduces the PALO Language as a cognitive based approach to Educational Modeling Languages (EML. In accordance with recent trends for reusability and interoperability in Learning Technologies, EML constitutes an evolution of the current content-centered specifications of learning material, involving the description of learning processes and methods from a pedagogical and instructional perspective. The PALO Language, thus, provides a layer of abstraction for the description of learning material, including the description of learning activities, structure and scheduling. The framework makes use of domain and pedagogical ontologies as a reusable and maintainable way to represent and store instructional content, and to provide a pedagogical level of abstraction in the authoring process.
IONONEST—A Bayesian approach to modeling the lower ionosphere
Martin, Poppy L.; Scaife, Anna M. M.; McKay, Derek; McCrea, Ian
2016-08-01
Obtaining high-resolution electron density height profiles for the D region of the ionosphere as a well-sampled function of time is difficult for most methods of ionospheric measurement. Here we present a new method of using multifrequency riometry data for producing D region height profiles via inverse methods. To obtain these profiles, we use the nested sampling technique, implemented through our code, IONONEST. We demonstrate this approach using new data from the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA) instrument and consider two electron density models. We compare the recovered height profiles from the KAIRA data with those from incoherent scatter radar using data from the European Incoherent Scatter Facility (EISCAT) instrument and find that there is good agreement between the two techniques, allowing for instrumental differences.
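Nested sampling itself can be demonstrated in a few lines on a toy one-dimensional problem. The uniform prior, Gaussian likelihood, and live-point count here are illustrative choices, not those used in IONONEST:

```python
import numpy as np

rng = np.random.default_rng(2)

def loglike(x):
    # Toy Gaussian likelihood centred at 0 with unit width.
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

# Uniform prior on [-5, 5]; the true evidence Z is ~0.1.
N, steps = 100, 600
live = rng.uniform(-5, 5, N)
logL = loglike(live)
Z, X_prev = 0.0, 1.0
for j in range(1, steps + 1):
    worst = np.argmin(logL)          # lowest-likelihood live point
    X = np.exp(-j / N)               # deterministic prior-volume shrinkage
    Z += np.exp(logL[worst]) * (X_prev - X)
    X_prev = X
    # Replace the worst point by a prior draw above the likelihood floor
    # (simple rejection sampling; real codes use smarter constrained moves).
    Lstar = logL[worst]
    while True:
        cand = rng.uniform(-5, 5)
        if loglike(cand) > Lstar:
            break
    live[worst], logL[worst] = cand, loglike(cand)
Z += X_prev * np.mean(np.exp(logL))  # contribution of remaining live points
print(Z)  # close to 0.1
```

The same accumulation of (likelihood × shrinking prior volume) is what lets nested-sampling codes return both evidence values for model comparison and posterior samples for the height profiles.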
Using graph approach for managing connectivity in integrative landscape modelling
Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger
2013-04-01
In cultivated landscapes, a lot of landscape elements such as field boundaries, ditches or banks strongly impact water flows, mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes) a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connection and up/downstream connection - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and graph evolution during simulation is possible (connections or elements modifications). This graph approach allows a better genericity of landscape representation, a management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers or modelers, and several graph tools are available such as graph traversal algorithms or graph displays. Graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format or ii) by using methods of the OpenFLUID-landr library, which is an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). Open
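The two connection types described above, parent-child and up/downstream, can be mimicked with an ordinary typed adjacency structure. The tiny three-unit catchment below is a made-up illustration, not OpenFLUID or OpenFLUID-landr code:

```python
# Hypothetical landscape graph: nodes are spatial units, edges carry
# either a "downstream" (flux) or a "child" (hierarchy) relation.
from collections import defaultdict

edges = defaultdict(list)

def connect(src, dst, kind):
    edges[src].append((dst, kind))

# Two fields drain into a ditch, which drains to the outlet;
# the ditch is also a child of a higher-level "reach" unit.
connect("field_A", "ditch_1", "downstream")
connect("field_B", "ditch_1", "downstream")
connect("ditch_1", "outlet", "downstream")
connect("reach_1", "ditch_1", "child")

def downstream_path(node):
    """Follow only flux (downstream) connections from a unit."""
    path = [node]
    while True:
        nxt = [d for d, k in edges[path[-1]] if k == "downstream"]
        if not nxt:
            return path
        path.append(nxt[0])

print(downstream_path("field_A"))  # ['field_A', 'ditch_1', 'outlet']
```

Typing the edges is what lets a single graph answer both hierarchical queries (which units belong to a reach) and routing queries (where a flux ends up), as in the platform described.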
A Neural Model of Face Recognition: a Comprehensive Approach
Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina
Visual recognition of faces is an essential behavior of humans: we have optimal performance in everyday life, and just such a performance makes us able to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers belonging to different fields, and especially among designers of biometric identification systems able to recognize the features of persons' faces in a background. Due to the interdisciplinary nature of this topic, in this contribution we deal with face recognition through a comprehensive approach, with the purpose of reproducing some features of human performance, as evidenced by studies in psychophysics and neuroscience, relevant to face recognition. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason our model of face recognition has been based on a computational system implemented through an artificial neural network. This synergy between neuroscience and engineering efforts allowed us to implement a model that had biological plausibility, performed the same tasks as human subjects, and gave a possible account of human face perception and recognition. In this regard the paper reports on an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to the ability to learn to discriminate different faces, and to the ability to recognize a face already encountered in the training phase, when presented in a pose or with an expression differing from the one present in the training context.
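The self-organizing map (SOM) at the core of such a model reduces to a grid of weight vectors, each pulled toward the inputs it wins, with a neighbourhood that shrinks over training. The 1-D map and random 2-D inputs below are a toy stand-in for face feature vectors, not the network of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: two well-separated 2-D clusters standing in for face features.
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(1.0, 0.1, (100, 2))])

# 1-D SOM with 10 units, weights initialised at random.
n_units = 10
w = rng.uniform(0, 1, (n_units, 2))
idx = np.arange(n_units)

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)           # decaying learning rate
    sigma = 3.0 * (1 - epoch / 50) + 0.5  # decaying neighbourhood width
    for x in rng.permutation(data):
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)    # pull BMU and neighbours toward x

# After training, each input should lie near some unit (low quantization error).
qe = np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data])
print(qe)
```

After training, inputs from the same cluster map to nearby units, which is the property that lets a SOM group different views of the same face.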
Multi-Model approach to reconstruct the Mediterranean Freshwater Evolution
Simon, Dirk; Marzocchi, Alice; Flecker, Rachel; Lunt, Dan; Hilgen, Frits; Meijer, Paul
2016-04-01
Today the Mediterranean Sea is isolated from the global ocean by the Strait of Gibraltar. This restricted nature causes the Mediterranean basin to react more sensitively to climatic and tectonic phenomena than the global ocean. Not just eustatic sea level and regional river run-off, but also gateway tectonics and connectivity between sub-basins leave an enhanced fingerprint in its geological record. To understand its evolution, it is crucial to understand how these different effects are coupled. The Miocene-Pliocene sedimentary record of the Mediterranean shows alternations in composition and colour and has been astronomically tuned. Around the Miocene-Pliocene Boundary the most extreme changes occur in the Mediterranean Sea. About 6% of the salt in the global ocean was deposited in the Mediterranean region, forming an approximately 2 km thick salt layer, which is still present today. This extreme event is named the Messinian Salinity Crisis (MSC, 5.97-5.33 Ma). The gateway and climate evolution is not well constrained for this time, which makes it difficult to distinguish which of the above-mentioned drivers might have triggered the MSC. We, therefore, decided to tackle this problem via a multi-model approach: (1) We calculate the Mediterranean freshwater evolution via 30 atmosphere-ocean-vegetation simulations (using HadCM3L), to which we fitted a function using a regression model. This allows us to directly relate the orbital curves to evaporation, precipitation and run-off. The resulting freshwater evolution can be directly correlated to other sedimentary and proxy records in the late Miocene. (2) By feeding the new freshwater evolution curve into a box/budget model we can predict the salinity and strontium evolution of the Mediterranean for a certain Atlantic-Mediterranean gateway. (3) By comparing these results to the known salinity thresholds of gypsum and halite saturation of sea water, but also to the late Miocene Mediterranean strontium
A model comparison approach shows stronger support for economic models of fertility decline.
Shenk, Mary K; Towner, Mary C; Kress, Howard C; Alam, Nurul
2013-05-14
The demographic transition is an ongoing global phenomenon in which high fertility and mortality rates are replaced by low fertility and mortality. Despite intense interest in the causes of the transition, especially with respect to decreasing fertility rates, the underlying mechanisms motivating it are still subject to much debate. The literature is crowded with competing theories, including causal models that emphasize (i) mortality and extrinsic risk, (ii) the economic costs and benefits of investing in self and children, and (iii) the cultural transmission of low-fertility social norms. Distinguishing between models, however, requires more comprehensive, better-controlled studies than have been published to date. We use detailed demographic data from recent fieldwork to determine which models produce the most robust explanation of the rapid, recent demographic transition in rural Bangladesh. To rigorously compare models, we use an evidence-based statistical approach using model selection techniques derived from likelihood theory. This approach allows us to quantify the relative evidence the data give to alternative models, even when model predictions are not mutually exclusive. Results indicate that fertility, measured as either total fertility or surviving children, is best explained by models emphasizing economic factors and related motivations for parental investment. Our results also suggest important synergies between models, implicating multiple causal pathways in the rapidity and degree of recent demographic transitions.
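The likelihood-based model selection step can be sketched with AIC and Akaike weights on synthetic data. The two candidate "models" below are generic regressions on invented covariates, not the demographic models of the study:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Synthetic outcome actually driven by an "economic" covariate.
econ = rng.standard_normal(n)
other = rng.standard_normal(n)
y = 1.5 * econ + rng.standard_normal(n)

def gaussian_aic(y, X):
    """AIC of an OLS fit with Gaussian errors (k counts slopes + intercept + variance)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1
    return 2 * k - 2 * loglik

aics = np.array([gaussian_aic(y, econ), gaussian_aic(y, other)])
# Akaike weights: relative evidence for each candidate model,
# meaningful even when the models' predictions overlap.
delta = aics - aics.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
print(weights)  # the first (economic) model carries nearly all the weight
```

The weights quantify relative support rather than issuing a binary accept/reject verdict, which is the advantage of this approach when competing theories make partially overlapping predictions.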
Economic modelling approaches to cost estimates for the control of carbon dioxide emissions
Zhang, Z.X.; Folmer, H.
1998-01-01
This article gives an assessment of the relative strengths and weaknesses of a variety of economic modelling approaches commonly used for cost estimates for limiting carbon emissions, including the ad hoc approach, dynamic optimization approach, input-output approach, macroeconomic approach, computa
National Research Council Canada - National Science Library
Vijay Nehra
2014-01-01
.... The present paper addresses different approaches used to derive mathematical models of first and second order system, developing MATLAB script implementation and building a corresponding Simulink model...
A mechanism-based approach to modeling ductile fracture.
Energy Technology Data Exchange (ETDEWEB)
Bammann, Douglas J.; Hammi, Youssef; Antoun, Bonnie R.; Klein, Patrick A.; Foulk, James W., III; McFadden, Sam X.
2004-01-01
Ductile fracture in metals has been observed to result from the nucleation, growth, and coalescence of voids. The evolution of this damage is inherently history dependent, affected by how time-varying stresses drive the formation of defect structures in the material. At some critically damaged state, the softening response of the material leads to strain localization across a surface that, under continued loading, becomes the faces of a crack in the material. Modeling localization of strain requires introduction of a length scale to make the energy dissipated in the localized zone well-defined. In this work, a cohesive zone approach is used to describe the post-bifurcation evolution of material within the localized zone. The relations are developed within a thermodynamically consistent framework that incorporates temperature and rate-dependent evolution relationships motivated by dislocation mechanics. As such, we do not prescribe the evolution of tractions with opening displacements across the localized zone a priori. The evolution of tractions is itself an outcome of the solution of particular initial boundary value problems. The stress and internal state of the material at the point of bifurcation provides the initial conditions for the subsequent evolution of the cohesive zone. The models we develop are motivated by in-situ scanning electron microscopy of three-point bending experiments using 6061-T6 aluminum and 304L stainless steel. The in-situ observations of the initiation and evolution of fracture zones reveal the scale over which the failure mechanisms act. In addition, these observations are essential for motivating the micromechanically-based models of the decohesion process that incorporate the effects of loading mode mixity, temperature, and loading rate. The response of these new cohesive zone relations is demonstrated by modeling the three-point bending configuration used for the experiments. In addition, we survey other methods with the potential
Teaching EFL Writing: An Approach Based on the Learner's Context Model
Lin, Zheng
2017-01-01
This study aims to examine qualitatively a new approach to teaching English as a foreign language (EFL) writing based on the learner's context model. It investigates the context model-based approach in class and identifies key characteristics of the approach delivered through a four-phase teaching and learning cycle. The model collects research…
Monte Carlo path sampling approach to modeling aeolian sediment transport
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains feeds back on itself and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
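The threshold behaviour described above, no transport until the shear velocity exceeds a critical value and a strongly nonlinear flux beyond it, can be sketched with Bagnold's classic threshold and a simple cubic flux law. The constants are textbook order-of-magnitude assumptions, not values fitted in this work:

```python
import math

def threshold_shear_velocity(d, rho_s=2650.0, rho_f=1.225, A=0.1, g=9.81):
    """Bagnold-style fluid threshold u*_t for a grain of diameter d (m)."""
    return A * math.sqrt((rho_s - rho_f) / rho_f * g * d)

def saltation_flux(u_star, d):
    """Toy transport law: zero below threshold, roughly ~u*^3 above it."""
    u_t = threshold_shear_velocity(d)
    if u_star <= u_t:
        return 0.0
    rho_f, g, C = 1.225, 9.81, 1.0  # C is an illustrative tuning constant
    return C * (rho_f / g) * u_star ** 3 * (1.0 - (u_t / u_star) ** 2)

d = 250e-6  # 250-micron sand grain; u*_t works out to roughly 0.2 m/s
print(threshold_shear_velocity(d), saltation_flux(0.1, d), saltation_flux(0.6, d))
```

The cubic dependence on shear velocity is why modest wind-speed differences produce order-of-magnitude differences in transported mass, and why the feedback between airborne grains and wind speed makes the full problem nonlinear.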
Do recommender systems benefit users? a modeling approach
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to one's own preferences. Nevertheless, with sufficient information about user preferences, recommendations become accurate, and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as a warning to operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
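The contrast between taste-blind and taste-informed recommendation can be illustrated with a toy simulation. This is not the paper's model; the setup (50 items, a user who likes 10 of them) and all names are illustrative assumptions.

```python
import random

random.seed(0)
N_ITEMS, N_TRIALS = 50, 2000
taste = set(random.sample(range(N_ITEMS), 10))  # items this user likes

def hit_rate(recommend):
    """Fraction of recommendations that match the user's taste."""
    hits = sum(recommend() in taste for _ in range(N_TRIALS))
    return hits / N_TRIALS

random_rec = lambda: random.randrange(N_ITEMS)       # ignores taste entirely
informed_rec = lambda: random.choice(sorted(taste))  # knows the preferences

print(hit_rate(random_rec))    # ~0.2, i.e., a random draw (10/50)
print(hit_rate(informed_rec))  # 1.0
```

A user who only ever accepts taste-blind recommendations sees the random-draw hit rate; information about preferences is what separates the two regimes.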
A Systematic Approach to Modelling Change Processes in Construction Projects
Directory of Open Access Journals (Sweden)
Ibrahim Motawa
2012-11-01
Full Text Available Modelling change processes within construction projects is essential to implement changes efficiently. Incomplete information on the project variables at the early stages of projects leads to inadequate knowledge of future states and imprecision arising from ambiguity in project parameters. This lack of knowledge is considered among the main sources of changes in construction. Change identification and evaluation, in addition to predicting its impacts on project parameters, can help in minimising the disruptive effects of changes. This paper presents a systematic approach to modelling change processes within construction projects that helps improve change identification and evaluation. The approach represents the key decisions required to implement changes. The requirements of an effective change process are presented first. The variables defined for efficient change assessment and diagnosis are then presented. Assessment of construction changes requires an analysis of the project characteristics that lead to change and also an analysis of the relationship between change causes and effects. The paper concludes that, at the early stages of a project, projects with a high likelihood of change occurrence should have a control mechanism over the project characteristics that have high influence on the project. It also concludes that, for the relationship between change causes and effects, the multiple causes of change should be modelled in a way that enables evaluating the change effects more accurately. The proposed approach is a framework for tackling such conclusions and can be used for evaluating change cases depending on the information available at the early stages of construction projects.
Toward the design of sustainable biofuel landscapes: A modeling approach
Izaurralde, R. C.; Zhang, X.; Manowitz, D. H.; Sahajpal, R.
2011-12-01
Biofuel crops have emerged as promising feedstocks for advanced bioenergy production in the form of cellulosic ethanol and biodiesel. However, large-scale deployment of biofuel crops for energy production has the potential to conflict with food production and generate a myriad of environmental outcomes related to land and water resources (e.g., decreases in soil carbon storage, increased erosion, altered runoff, deterioration in water quality). In order to anticipate the possible impacts of biofuel crop production on food production systems and the environment, and to contribute to the design of sustainable biofuel landscapes, we developed a spatially-explicit integrated modeling framework (SEIMF) aimed at understanding, among other objectives, the complex interactions among land, water, and energy. The framework is a research effort of the DOE Great Lakes Bioenergy Research Center. The SEIMF has three components: (1) a GIS-based data analysis system, (2) the biogeochemical model EPIC (Environmental Policy Integrated Climate), and (3) an evolutionary multi-objective optimization algorithm for examining trade-offs between biofuel energy production and ecosystem responses. The SEIMF was applied at the biorefinery scale to simulate biofuel production scenarios, and the yield and environmental results were used in trade-off, economic, and life-cycle analyses. The SEIMF approach was also applied to test the hypothesis that growing perennial herbaceous species on marginal lands can satisfy a significant fraction of targeted demands while avoiding competition with food systems and maintaining ecosystem services.
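The trade-off idea behind component (3) can be sketched as a Pareto-dominance filter: given candidate cropping scenarios scored on energy produced (to maximise) and soil erosion (to minimise), keep only the non-dominated ones. The scenario names and scores below are illustrative assumptions, not SEIMF outputs.

```python
# (energy produced, soil erosion) per scenario -- illustrative values
scenarios = {
    "continuous corn":   (9.0, 8.0),
    "corn-soy rotation": (7.5, 5.0),
    "switchgrass":       (6.0, 1.5),
    "mixed prairie":     (5.5, 1.0),
    "fallow":            (0.5, 1.2),
}

def dominates(a, b):
    """a dominates b: no worse on either objective and not identical."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

# Keep scenarios that no other scenario dominates (the Pareto front).
pareto = {name for name, s in scenarios.items()
          if not any(dominates(t, s) for t in scenarios.values())}
print(sorted(pareto))
```

Here "fallow" is dominated (mixed prairie yields more energy with less erosion), so it drops out; an evolutionary multi-objective optimizer searches for such a front over a vastly larger space of land allocations.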
Modelling and simulating retail management practices: a first approach
Siebers, Peer-Olaf; Celia, Helen; Clegg, Chris
2010-01-01
Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Although we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK's top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested our first version of a retail branch agent-based simulation model, where we have focused on how we can simulate the effects of people management practices on customer satisfaction and sales. In our experiments we hav...
Forecasting wind-driven wildfires using an inverse modelling approach
Directory of Open Access Journals (Sweden)
O. Rios
2014-06-01
Full Text Available A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the ongoing fire. This paper presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate-of-spread theory with a perimeter expansion model based on Huygens' principle, and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event) on the order of 10 min for a spatial scale of 100 m. The greatest strengths of our method are lightness, speed and flexibility. We specifically tailor the forecast to be efficient and computationally cheap so it can be used in mobile systems for field deployment and operational use. Thus, we put emphasis on producing a positive lead time and the means to maximise it.
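A Huygens-type perimeter expansion advances every point on the fire front along the local outward normal at the local rate of spread. The sketch below shows the simplest uniform (circular-spread) case with an explicit polygon front; it is an illustration of the principle, not the paper's implementation, and the square front and rate-of-spread value are assumed for demonstration.

```python
import math

def expand_front(front, ros, dt):
    """Advance each vertex of a closed, counter-clockwise polygon
    outward along its estimated normal by ros * dt."""
    n = len(front)
    new_front = []
    for i, (x, y) in enumerate(front):
        # tangent estimated from the two neighbouring vertices
        (x0, y0), (x1, y1) = front[i - 1], front[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0
        norm = math.hypot(tx, ty)
        # rotating the tangent by -90 deg gives the outward normal (CCW polygon)
        nx, ny = ty / norm, -tx / norm
        new_front.append((x + ros * dt * nx, y + ros * dt * ny))
    return new_front

# A square front spreading at 0.1 m/s for 60 s: each vertex moves 6 m outward.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
grown = expand_front(square, ros=0.1, dt=60)
```

In wind-driven conditions the local expansion becomes an ellipse oriented with the wind rather than a circle, and data assimilation adjusts the spread parameters so the forecast front tracks observations.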
A mechanism-based approach for absorption modeling: the Gastro-Intestinal Transit Time (GITT) model.
Hénin, Emilie; Bergstrand, Martin; Standing, Joseph F; Karlsson, Mats O
2012-06-01
Absorption models used in the estimation of pharmacokinetic drug characteristics from plasma concentration data are generally empirical and simple, utilizing no prior information on gastro-intestinal (GI) transit patterns. Our aim was to develop and evaluate an estimation strategy based on a mechanism-based model for drug absorption, which takes into account tablet movement through the GI tract. This work is an extension of a previous model utilizing tablet movement characteristics derived from magnetic marker monitoring (MMM) and pharmacokinetic data. The new approach, which replaces MMM data with a GI transit model, was evaluated in data sets where MMM data were available (felodipine) or not available (diclofenac). Pharmacokinetic profiles in both datasets were well described by the model according to goodness-of-fit plots. Visual predictive checks showed the model to have superior simulation properties compared with a standard empirical approach (first-order absorption rate + lag time). This model represents a step towards an integrated mechanism-based NLME model, where the use of physiological knowledge and in vitro–in vivo correlation helps fully characterize PK and generate hypotheses for new formulations or specific populations.
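The two absorption structures being contrasted can be sketched side by side: the standard empirical model (first-order absorption with a lag time, one-compartment disposition) versus a chain of transit compartments that mimics movement through the GI tract. All parameter values below are illustrative assumptions, not the paper's estimates.

```python
import math

def conc_first_order(t, dose=100.0, ka=1.2, ke=0.2, tlag=0.5, v=10.0):
    """First-order absorption with lag time (Bateman equation)."""
    if t <= tlag:
        return 0.0
    tt = t - tlag
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * tt) - math.exp(-ka * tt))

def conc_transit(t, dose=100.0, n=5, ktr=3.0, ke=0.2, v=10.0, dt=0.001):
    """Transit-compartment absorption, integrated with explicit Euler:
    dose flows through n+1 gut compartments (rate ktr) into a central
    compartment, which is eliminated at rate ke."""
    a = [dose] + [0.0] * n + [0.0]   # gut chain + central compartment
    for _ in range(int(t / dt)):
        flow = [ktr * a[i] for i in range(n + 1)]
        for i in range(n + 1):
            a[i] -= flow[i] * dt
            a[i + 1] += flow[i] * dt
        a[-1] -= ke * a[-1] * dt     # elimination from central
    return a[-1] / v

# The empirical model is zero until the lag time, then rises abruptly;
# the transit chain produces a smooth delayed onset instead.
```

The transit chain replaces the single discrete lag with a distribution of arrival times, which is what gives the mechanistic model its smoother, more physiological profiles.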
Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model
Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela
2014-05-01
The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling of organic carbon stocks under anticipated climate change, or under different land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments across a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool-size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Due to this complexity, we searched for new ways to simplify and accelerate this process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of unique inputs of land use, management and soil data for each simulation unit separately. We modelled 1617 simulation units on a 1x1 km grid over the territory of the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach represents the creation of groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results of modelling with the average input values for the whole group. Tests of the equilibrium model for intervals of 5 t.ha-1 of initial SOC stock showed minimal differences in results compared with the result for the average value of the whole interval. Management input data on plant residues and farmyard manure for modelling of carbon turnover were also the same for multiple simulation units. Combining these groups (intervals of initial
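The pool structure that makes initialization necessary can be sketched as first-order decay of RothC's active pools. The rate constants below are the commonly published RothC values (per year); the sketch deliberately omits the transfers of decomposed carbon to CO2, BIO and HUM and fixes the rate-modifying factor (temperature, moisture, cover) at 1, so it is a simplified illustration, not the RothC algorithm.

```python
import math

# Decomposition rate constants (1/year) for RothC's active pools;
# IOM (inert organic matter) does not decompose.
K = {"DPM": 10.0, "RPM": 0.3, "BIO": 0.66, "HUM": 0.02}

def decompose(pools, years, rmf=1.0):
    """Decay each active pool exponentially; pools without a rate
    constant (IOM) are left unchanged."""
    out = {}
    for name, stock in pools.items():
        k = K.get(name, 0.0)
        out[name] = stock * math.exp(-k * rmf * years)
    return out

start = {"DPM": 1.0, "RPM": 10.0, "BIO": 2.0, "HUM": 30.0, "IOM": 3.0}
after = decompose(start, years=1.0)
# DPM turns over almost completely within a year; HUM and IOM barely change.
```

The slow HUM pool is why equilibrium initialization runs are so long: reaching a steady state takes on the order of the slowest pool's turnover time, which motivates the grouping shortcut evaluated in the paper.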
Genetic and Modeling Approaches Reveal Distinct Components of Impulsive Behavior.
Nautiyal, Katherine M; Wall, Melanie M; Wang, Shuai; Magalong, Valerie M; Ahmari, Susanne E; Balsam, Peter D; Blanco, Carlos; Hen, René
2017-01-18
Impulsivity is an endophenotype found in many psychiatric disorders including substance use disorders, pathological gambling, and attention deficit hyperactivity disorder. Two behavioral features often considered in impulsive behavior are behavioral inhibition (impulsive action) and delayed gratification (impulsive choice). However, the extent to which these behavioral constructs represent distinct facets of behavior with discrete biological bases is unclear. To test the hypothesis that impulsive action and impulsive choice represent statistically independent behavioral constructs in mice, we collected behavioral measures of impulsivity in a single cohort of mice using well-validated operant behavioral paradigms. Mice with manipulation of serotonin 1B receptor (5-HT1BR) expression were included as a model of disordered impulsivity. A factor analysis was used to characterize correlations between the measures of impulsivity and to identify covariates. Using two approaches, we dissociated impulsive action from impulsive choice. First, the absence of 5-HT1BRs caused increased impulsive action, but not impulsive choice. Second, based on an exploratory factor analysis, a two-factor model described the data well, with measures of impulsive action and choice separating into two independent factors. A multiple-indicator multiple-causes analysis showed that 5-HT1BR expression and sex were significant covariates of impulsivity. Males displayed increased impulsivity in both dimensions, whereas 5-HT1BR expression was a predictor of increased impulsive action only. These data support the conclusion that impulsive action and impulsive choice are distinct behavioral phenotypes with dissociable biological influences that can be modeled in mice. Our work may help inform better classification, diagnosis, and treatment of psychiatric disorders, which present with disordered impulsivity. Neuropsychopharmacology advance online publication, 18 January 2017; doi:10.1038/npp.2016.277.