Two adaptive radiative transfer schemes for numerical weather prediction models
Directory of Open Access Journals (Sweden)
V. Venema
2007-11-01
Full Text Available Radiative transfer calculations in atmospheric models are computationally expensive, even if based on simplifications such as the δ-two-stream approximation. In most weather prediction models these parameterisation schemes are therefore called infrequently, accepting additional model error due to the persistence assumption between calls. This paper presents two so-called adaptive parameterisation schemes for radiative transfer in a limited area model: A perturbation scheme that exploits temporal correlations and a local-search scheme that mainly takes advantage of spatial correlations. Utilising these correlations and with similar computational resources, the schemes are able to predict the surface net radiative fluxes more accurately than a scheme based on the persistence assumption. An important property of these adaptive schemes is that their accuracy does not decrease much in case of strong reductions in the number of calls to the δ-two-stream scheme. It is hypothesised that the core idea can also be employed in parameterisation schemes for other processes and in other dynamical models.
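The calling strategy described above — spend the expensive calls where the atmospheric state has changed most and persist the rest — can be sketched as a toy in Python. This is an illustration of the core idea only, not the authors' scheme: `radiative_flux` is a hypothetical stand-in for the δ-two-stream calculation, and the drift-based column selection is an assumption.

```python
import numpy as np

def radiative_flux(state):
    """Hypothetical stand-in for the expensive delta-two-stream calculation."""
    return 0.5 * state + np.sin(state)

class AdaptiveRadiation:
    """Call the expensive scheme only for the columns whose state has drifted
    most since their last exact update; persist the fluxes of the rest."""

    def __init__(self, states):
        self.ref_states = states.astype(float).copy()
        self.fluxes = radiative_flux(self.ref_states)

    def step(self, states, budget):
        drift = np.abs(states - self.ref_states)
        worst = np.argsort(drift)[-budget:]            # columns most in error
        self.fluxes[worst] = radiative_flux(states[worst])
        self.ref_states[worst] = states[worst]
        return self.fluxes
```

Shrinking `budget` reduces the number of calls per step; accuracy degrades gracefully because the persisted columns are exactly those that changed least.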
Adaptive Packet Combining Scheme in Three State Channel Model
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
Two popular packet-combining error-correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC but suffers from a higher packet error rate. Because the state of a wireless channel varies randomly over time, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired level of throughput. Better throughput can be achieved if the transmission scheme is chosen according to the channel condition. Based on this approach, an adaptive packet combining scheme is proposed that carries out transmission using the PC, APC, or SR ARQ scheme depending on the channel condition. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes applied individually.
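The switching idea above can be sketched in a few lines. The error-rate thresholds and the three-state mapping below are illustrative assumptions, not values from the paper; the majority-vote combiner shows the APC-style combining of multiple received copies.

```python
def select_scheme(packet_error_rate):
    """Pick a transmission scheme from an estimated channel state.
    The thresholds are hypothetical, for illustration only."""
    if packet_error_rate < 0.01:
        return "SR-ARQ"   # good channel: plain retransmission suffices
    elif packet_error_rate < 0.1:
        return "PC"       # moderate channel: combine two copies
    else:
        return "APC"      # bad channel: aggressive combining of three copies

def majority_combine(copies):
    """APC-style bit-wise majority voting over an odd number of copies."""
    n = len(copies)
    return [1 if sum(bits) > n // 2 else 0 for bits in zip(*copies)]
```

For example, three corrupted copies of the bit pattern `[1, 0, 0]` can still be recovered as long as each bit is correct in a majority of the copies.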
Directory of Open Access Journals (Sweden)
R. Sitharthan
2016-09-01
Full Text Available This paper models an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling renewable energy sources electronically. The proposed adaptive protection scheme protects the microgrid under various fault conditions irrespective of its operating mode, namely grid-connected mode or islanded mode. The outstanding aspect of the developed scheme is that it monitors the microgrid and instantly updates the relay fault current according to variations that occur in the system. The scheme also employs auto-reclosers, through which it recovers faster from faults and thereby increases the reliability of the microgrid. The effectiveness of the proposed adaptive protection is studied through time-domain simulations carried out in the PSCAD/EMTDC software environment.
Algorithms for Optimal Model Distributions in Adaptive Switching Control Schemes
Directory of Open Access Journals (Sweden)
Debarghya Ghosh
2016-03-01
Full Text Available Several multiple model adaptive control architectures have been proposed in the literature. Despite many advances in theory, the crucial question of how to synthesize the model/controller pairs in a structurally optimal way is to a large extent unaddressed. In particular, it is not clear how to place the model/controller pairs in such a way that the properties of the switching algorithm (e.g., number of switches, learning transient, final performance) are optimal with respect to some criteria. In this work, we focus on the so-called multi-model unfalsified adaptive supervisory switching control (MUASSC) scheme; we define a suitable structural optimality criterion and develop algorithms for synthesizing the model/controller pairs in such a way that they are optimal with respect to the structural optimality criterion we defined. The peculiarity of the proposed optimality criterion and algorithms is that the optimization is carried out so as to optimize the entire behavior of the adaptive algorithm, i.e., both the learning transient and the steady-state response. A comparison is made with the model distribution of the robust multiple model adaptive control (RMMAC), where the optimization considers only the steady-state ideal response and neglects any learning transient.
Adaptive multiresolution WENO schemes for multi-species kinematic flow models
International Nuclear Information System (INIS)
Buerger, Raimund; Kozakevicius, Alice
2007-01-01
Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high resolution schemes. A fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic and sedimentation models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme
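The fifth-order WENO reconstruction at the heart of the scheme above can be illustrated for a single cell interface. The sketch below uses the standard Jiang-Shu smoothness indicators and ideal weights for the left-biased value at x_{i+1/2}; the multiresolution/SPR machinery of the paper is not reproduced here.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO reconstruction of the interface value v_{i+1/2}
    from five cell averages v = (v[i-2], ..., v[i+2])."""
    v0, v1, v2, v3, v4 = v
    # Smoothness indicators of the three candidate stencils (Jiang & Shu).
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
    # Nonlinear weights built from the ideal weights (0.1, 0.6, 0.3).
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    # Third-order candidate reconstructions on each stencil.
    p = np.array([(2*v0 - 7*v1 + 11*v2) / 6,
                  (-v1 + 5*v2 + 2*v3) / 6,
                  (2*v2 + 5*v3 - v4) / 6])
    return float(w @ p)
```

On smooth data all three candidate stencils agree, and near a kinematic shock the weights automatically suppress the stencil that crosses it, which is what keeps the scheme non-oscillatory.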
Jha, Pradeep Kumar
Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used to model the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here by the GRI-Mech 3.0 scheme; this detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow
Efficient adaptive fuzzy control scheme
Papp, Z.; Driessen, B.J.F.
1995-01-01
The paper presents an adaptive nonlinear (state-) feedback control structure, where the nonlinearities are implemented as smooth fuzzy mappings defined as rule sets. The fine tuning and adaptation of the controller is realized by an indirect adaptive scheme, which modifies the parameters of the fuzzy
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
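The hybrid EnKF-OI idea — blending a low-rank ensemble covariance with a static background covariance to counter under-sampling — can be sketched as a Kalman-gain computation. This is a generic hybrid-covariance sketch under linear observation operator H; the blending weight `alpha` and the matrix forms are assumptions, not the paper's exact formulation.

```python
import numpy as np

def hybrid_gain(ensemble, B_static, H, R, alpha=0.5):
    """Kalman gain with a hybrid background covariance
    P = alpha * P_ensemble + (1 - alpha) * B_static (EnKF-OI blend).
    `ensemble` has one state vector per column."""
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P_ens = anomalies @ anomalies.T / (ensemble.shape[1] - 1)
    P = alpha * P_ens + (1 - alpha) * B_static   # hybrid covariance
    S = H @ P @ H.T + R                          # innovation covariance
    return P @ H.T @ np.linalg.inv(S)
```

With `alpha = 1` this reduces to a plain stochastic EnKF gain; with `alpha = 0` it is pure OI, so the blend interpolates between the two limits.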
Adaptive Optics Metrics & QC Scheme
Girard, Julien H.
2017-09-01
"There are many Adaptive Optics (AO) fed instruments on Paranal and more to come. To monitor their performances and assess the quality of the scientific data, we have developed a scheme and a set of tools and metrics adapted to each flavour of AO and each data product. Our decisions to repeat observations or not depends heavily on this immediate quality control "zero" (QC0). Atmospheric parameters monitoring can also help predict performances . At the end of the chain, the user must be able to find the data that correspond to his/her needs. In Particular, we address the special case of SPHERE."
Model Reference Adaptive Scheme for Multi-drug Infusion for Blood Pressure Control
Enbiya, Saleh; Mahieddine, Fatima; Hossain, Alamgir
2011-01-01
Using multiple interacting drugs to control both the mean arterial pressure (MAP) and cardiac output (CO) of patients with different sensitivities to drugs is a challenging task which this paper attempts to address. A multivariable model reference adaptive control (MRAC) algorithm is developed using a two-input, two-output patient model. The control objective is to maintain the hemodynamic variables MAP and CO at normal values by simultaneously administering two drugs; sodium nitroprusside ...
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated using the adaptive control method. Unlike some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can easily be solved with the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
A new adaptive control scheme based on the interacting multiple model (IMM) estimation
International Nuclear Information System (INIS)
Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid
2016-01-01
In this paper, an interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Because the forward velocity of the unmanned vehicle varies, its aerodynamic derivatives are constantly changing. In order to stabilize the vehicle and achieve the control objectives across in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes, each representing particular dynamics at a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) operating in parallel. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the current mode based on the stored models and in-flight input-output measurements. The LQR design likewise provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted sum of the individual controllers, where the weights are the mode probabilities of each operating mode.
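The final blending step — a mode-probability-weighted sum of per-mode LQR laws — is simple enough to sketch directly. The gain matrices and probabilities below are placeholders; designing each K_j (e.g. by solving a Riccati equation per mode) is outside this sketch.

```python
import numpy as np

def imm_control(x, gains, mode_probs):
    """Probability-weighted blend of per-mode LQR laws,
    u = sum_j mu_j * (-K_j x), as in the weighted-summation step above."""
    u = np.zeros(gains[0].shape[0])
    for K, mu in zip(gains, mode_probs):
        u += mu * (-K @ x)   # each K_j is the LQR gain of one flight mode
    return u
```

When the IMM filter is confident (one mode probability near 1), the blended law collapses to that mode's LQR controller; during mode transitions it interpolates smoothly between the candidate controllers.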
Directory of Open Access Journals (Sweden)
Xiangyang Zhou
2016-01-01
Full Text Available This paper describes a method to suppress the effect of nonlinear and time-varying mass unbalance torque disturbance on the dynamic performance of an aerial inertially stabilized platform (ISP. To improve the tracking accuracy and robustness of the ISP, a compound control scheme based on both model reference adaptive control (MRAC and PID control methods is proposed. The dynamic model is first developed, revealing an unbalance torque disturbance that is nonlinear and time-varying. Then, the MRAC/PID compound controller is designed, in which the PID parameters are adaptively adjusted based on the output errors between the reference model and the actual system. In this way, the position errors caused by the prominent unbalance torque disturbance are corrected in real time, improving the tracking accuracy. To verify the method, both simulations and experiments are carried out. The results show that the compound scheme rejects the mass unbalance disturbance well, giving the system higher stability accuracy than the PID method alone.
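The "adjust parameters from the model-plant output error" mechanism above is the classic gradient (MIT-rule) form of MRAC, which can be shown on a deliberately tiny example. The static plant with gain 2 and unit reference-model output below are assumptions for illustration; the paper's compound MRAC/PID controller is more elaborate.

```python
def mit_rule(theta, y_plant, y_model, gamma):
    """One MIT-rule gradient step on an adjustable gain theta:
    theta <- theta - gamma * e * y_model, with e = y_plant - y_model."""
    e = y_plant - y_model
    return theta - gamma * e * y_model

# Toy loop: plant output is 2*theta (assumed gain 2), reference output is 1.
# The adjustable gain converges to 0.5, where plant and model outputs match.
theta = 0.0
for _ in range(500):
    theta = mit_rule(theta, y_plant=2 * theta, y_model=1.0, gamma=0.1)
```

The adaptation rate `gamma` trades convergence speed against noise sensitivity, which is why the compound scheme pairs the adaptive layer with a conventional PID loop.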
Application of stable adaptive schemes to nuclear reactor systems, (1)
International Nuclear Information System (INIS)
Fukuda, Toshio
1978-01-01
Parameter identification and adaptive control schemes are presented for a point reactor with internal feedbacks that make the overall system nonlinear. Both are shown to be stable with a new representation of the system, corresponding to the nonminimal system representation, in the vein of the Model Reference Adaptive System (MRAS) via Lyapunov's method. For parameter identification, the model parameters can be adjusted adaptively as soon as measurements start; for adaptive control, the plant parameters can likewise be compensated adaptively through the control input to reduce the output error between the model and the plant. Adaptive control schemes are presented for two cases: the decay constant of the delayed neutrons unknown, and the decay constant known. The scheme for the latter case is shown to be much simpler than that for the former. Furthermore, when the plant parameters vary slowly with time, computer simulations show that the proposed adaptive control scheme stabilizes an unstable reactor satisfactorily, and does so even in the presence of noise with small variance. (auth.)
Energy Technology Data Exchange (ETDEWEB)
Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)
2007-10-15
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Adaptive Scheme for the CLIC Orbit Feedback
Pfingstner, J; Hofbaur, M
2010-01-01
One of the major challenges of the CLIC main linac is the preservation of the ultra-low beam emittance. The dynamic effects of ground motion would lead to a rapid emittance increase and have to be counteracted by orbit feedback (FB) systems. Such FB systems have to be optimized to attenuate ground motion (the disturbance) as well as possible, in spite of drifts of accelerator parameters (imperfect system knowledge). This paper presents a new FB strategy for the main linac of CLIC that addresses the above-mentioned issues with the help of an adaptive control scheme. The first part of this system is a system identification unit, which delivers an estimate of the time-varying system behaviour. The second part is a control algorithm that uses the most recent system estimate from the identification unit and applies H2 control theory to deliver an optimal prediction of the ground motion. This approach takes into account the frequency and spatial properties of the ground motion, as well as their impact on the emittance growth.
Adaptable Iterative and Recursive Kalman Filter Schemes
Zanetti, Renato
2014-01-01
Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where the number of recursions N is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
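The iterated-update idea can be sketched in its classical IKF (Gauss-Newton) form: re-linearize the measurement function about the current iterate and redo the update N times. This is the textbook IKF, not the RUF variant or the paper's adaptation rule for N; `h` and `H_jac` are user-supplied measurement function and Jacobian.

```python
import numpy as np

def iterated_update(x0, P, z, h, H_jac, R, n_iter=5):
    """Iterated Kalman measurement update (Gauss-Newton form of the IKF):
    re-linearize h about the current iterate x and repeat the update."""
    x = x0.copy()
    for _ in range(n_iter):
        H = H_jac(x)                               # Jacobian at the iterate
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x0 + K @ (z - h(x) - H @ (x0 - x))     # IKF iterate
    H = H_jac(x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x, P_new
```

For a linear measurement model the iteration converges after one step and reproduces the standard Kalman update, which makes a convenient sanity check.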
Adaptive transmission schemes for MISO spectrum sharing systems
Bouida, Zied
2013-06-01
We propose three adaptive transmission techniques aiming to maximize the capacity of a multiple-input-single-output (MISO) secondary system under the scenario of an underlay cognitive radio network. In the first scheme, namely the best antenna selection (BAS) scheme, the antenna maximizing the capacity of the secondary link is used for transmission. We then propose an orthogonal space-time block code (OSTBC) transmission scheme using the Alamouti scheme with transmit antenna selection (TAS), namely the TAS/STBC scheme. The performance improvement offered by this scheme comes at the expense of increased complexity and delay compared to the BAS scheme. As a compromise between these schemes, we propose a hybrid scheme using BAS when only one antenna satisfies the interference condition and TAS/STBC when two or more antennas are eligible for communication. We first derive closed-form expressions for the statistics of the received signal-to-interference-and-noise ratio (SINR) at the secondary receiver (SR). These results are then used to analyze the performance of the proposed techniques in terms of the average spectral efficiency, the average number of transmit antennas, and the average bit error rate (BER). This performance is then illustrated via selected numerical examples. © 2013 IEEE.
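The hybrid selection rule lends itself to a short sketch: count the antennas whose interference to the primary user stays under the limit and switch accordingly. The tie-breaking by secondary-link gain and the "idle" fallback with no eligible antenna are assumptions for illustration.

```python
def choose_scheme(antenna_gains, interference, limit):
    """Hybrid rule: BAS with exactly one eligible antenna, TAS/STBC
    (Alamouti pair) with two or more, idle with none.
    `interference[i]` is antenna i's interference at the primary user."""
    eligible = [i for i, q in enumerate(interference) if q <= limit]
    if not eligible:
        return "idle", []
    if len(eligible) == 1:
        return "BAS", eligible
    # Pick the two eligible antennas with the best secondary-link gains.
    best_two = sorted(eligible, key=lambda i: antenna_gains[i], reverse=True)[:2]
    return "TAS/STBC", sorted(best_two)
```

As the interference limit tightens, the scheme naturally degrades from TAS/STBC to BAS to silence, which is the compromise behaviour described above.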
Semantic HyperMultimedia Adaptation Schemes and Applications
Bieliková, Mária; Mylonas, Phivos; Tsapatsoulis, Nicolas
2013-01-01
Nowadays, more and more users are witnessing the impact of Hypermedia/Multimedia as well as the penetration of social applications in their lives. Parallel to the evolution of the Internet and the Web, several Hypermedia/Multimedia schemes and technologies bring semantic-based intelligent, personalized and adaptive services to end users. More and more techniques are applied in media systems in order to be user/group-centric, adapting to the different content and context features of a single user or a community of users. With respect to all the above, researchers need to explore and study the plethora of challenges that emergent personalisation and adaptation technologies bring to the new era. This edited volume aims to increase the awareness of researchers in this area. All contributions provide an in-depth investigation of research and deployment issues regarding already introduced schemes and applications in Semantic Hyper/Multimedia and Social Media Adaptation. Moreover, the authors provide survey-based articles, so as p...
Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme
DEFF Research Database (Denmark)
Li, Changgang; Zhang, Yaping; Zhang, Hengxu
2017-01-01
Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter estimation model with an adaptive data selection scheme based on measured data. The data selection scheme, defined by a time window and a number of data points, is introduced into the estimation model as additional variables to optimize. The data selection scheme is adaptively adjusted to minimize the relative … of the proposed model. Some 500 kV transmission lines from a provincial power system of China are estimated to demonstrate the applicability of the presented model. The superiority of the proposed model over fixed data selection schemes is also verified.
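A minimal version of measurement-based line parameter estimation over a data window can be sketched with a short-line model. This is a simplifying assumption (series impedance only, shunt branches of the pi-section neglected) and not the paper's multi-point model; it shows the least-squares fit over a window of synchronized phasor samples that the adaptive scheme would select.

```python
import numpy as np

def estimate_line_impedance(Vs, Vr, I):
    """Least-squares estimate of the series impedance Z in Vs = Vr + Z*I
    from a window of synchronized complex phasor samples.
    Closed form: Z = sum(conj(I) * (Vs - Vr)) / sum(|I|^2)."""
    I = np.asarray(I)
    dV = np.asarray(Vs) - np.asarray(Vr)
    return np.vdot(I, dV) / np.vdot(I, I)   # vdot conjugates its first argument
```

Choosing which samples enter the window (its length and number of points) is exactly the degree of freedom the adaptive data selection scheme optimizes.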
Design of an adaptive overcurrent protection scheme for microgrids ...
African Journals Online (AJOL)
Microgrid is a new phenomenon regarded to Distributed Generation (DG) penetration in the existing distribution systems. In this paper adaptive over current (OC) protection technique for a distribution system with DG penetration is proposed. This scheme takes into account general protection requirements, impacts of DG on ...
International Nuclear Information System (INIS)
Ren, Quansheng; He, Mingli; Yu, Xiaoqian; Long, Qiufeng; Zhao, Jianye
2014-01-01
In the paper, we applied an adaptive principle to three kinds of complex networks as well as a random network within the context of the Kuramoto model. We found that the adaptive scheme can suppress the negative effect of heterogeneity in the networks, and phase synchronization is obviously enhanced. The paper mainly investigates the adaptive coupling scheme in the small-world network, the scale-free network, and the modular network. Compared with other weighted or unweighted static coupling schemes, the adaptive coupling scheme performs better in synchronization and communication efficiency, and provides a more realistic picture of synchronization in complex networks.
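The setting above can be made concrete with the Kuramoto model plus an adaptive coupling rule. The Euler step below is the standard model; the specific adaptation rule (strengthen links between out-of-phase oscillators) is a hypothetical stand-in for the paper's scheme, included only to show where adaptivity enters.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model with coupling matrix K:
    dtheta_i/dt = omega_i + sum_j K_ij * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))

def adapt_coupling(K, theta, eta=0.1):
    """Hypothetical adaptive rule: grow weights where phases disagree,
    so heterogeneous nodes get pulled toward the synchronized cluster."""
    diff = theta[None, :] - theta[:, None]
    return K + eta * K * (1 - np.cos(diff))
```

Alternating `kuramoto_step` and `adapt_coupling` reshapes the weight matrix while the phases evolve; on a heterogeneous network this is the mechanism by which an adaptive scheme can outperform a static weighting.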
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2018-04-01
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
Adaptive PCA based fault diagnosis scheme in imperial smelting process.
Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin
2014-09-01
In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms caused by normal process changes in a real process. A fault isolation approach is further developed based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), with which offset and scaling faults can easily be isolated via an explicit offset fault direction and a scaling fault classification. The identification of offset and scaling faults is also addressed. The complete PCA-based fault diagnosis procedure is presented. The proposed scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
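The non-recursive core of PCA-based monitoring — fit a PCA model on normal data and flag new samples whose Hotelling T² exceeds a limit — can be sketched briefly. The recursive model updating and GLR/SVD isolation of the paper are not shown, and the choice of `t2_limit` (typically a chi-square or F-distribution quantile) is left to the caller.

```python
import numpy as np

def pca_monitor(X_train, x_new, n_comp, t2_limit):
    """Fit PCA on normal operating data and test a new sample:
    returns (T2 statistic, alarm flag)."""
    mu = X_train.mean(axis=0)
    Xc = X_train - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
    P = Vt[:n_comp].T                                   # retained loadings
    var = (s[:n_comp] ** 2) / (len(X_train) - 1)        # component variances
    t = (x_new - mu) @ P                                # scores of the sample
    t2 = float(((t ** 2) / var).sum())                  # Hotelling T^2
    return t2, t2 > t2_limit
```

A recursive variant would update `mu` and the loadings as new normal data arrive, which is what lets the adaptive scheme track slow, legitimate process changes instead of alarming on them.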
An Adaptive Lossless Data Compression Scheme for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Jonathan Gana Kolo
2012-01-01
Full Text Available Energy is an important consideration in the design and deployment of wireless sensor networks (WSNs) since sensor nodes are typically powered by batteries with limited capacity. Since the communication unit on a wireless sensor node is the major power consumer, data compression is one of the techniques that can help reduce the amount of data exchanged between wireless sensor nodes, resulting in power savings. However, wireless sensor networks possess significant limitations in communication, processing, storage, bandwidth, and power. Thus, any data compression scheme proposed for WSNs must be lightweight. In this paper, we present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. Our proposed ALDC scheme performs lossless compression using multiple code options. Adaptive compression schemes allow the compression to adjust dynamically to a changing source. The data sequence to be compressed is partitioned into blocks, and the optimal compression scheme is applied to each block. Using various real-world sensor datasets, we demonstrate the merits of our proposed compression algorithm in comparison with other recently proposed lossless compression algorithms for WSNs.
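The core idea of block-adaptive lossless coding with multiple code options can be sketched as follows. This is a generic Golomb-Rice illustration under assumed parameters, not the ALDC codec itself: each block of (zigzag-mapped) residuals is encoded with whichever Rice parameter k yields the fewest bits for that block.

```python
def zigzag(x):            # map signed residual to a non-negative integer
    return 2 * x if x >= 0 else -2 * x - 1

def unzigzag(u):
    return u // 2 if u % 2 == 0 else -(u + 1) // 2

def rice_encode(u, k):    # unary quotient, stop bit, k-bit remainder
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, i, k):
    q = 0
    while bits[i] == "1":
        q, i = q + 1, i + 1
    i += 1                # skip the stop bit
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r, i + k

def best_k(block, options=(0, 1, 2, 3, 4)):
    # Adaptive step: pick the cheapest code option for this block.
    return min(options,
               key=lambda k: sum(len(rice_encode(zigzag(v), k)) for v in block))

def compress(residuals, block_size=8):
    """Split into blocks and return (k, bitstring) pairs, one per block."""
    out = []
    for i in range(0, len(residuals), block_size):
        block = residuals[i:i + block_size]
        k = best_k(block)
        out.append((k, "".join(rice_encode(zigzag(v), k) for v in block)))
    return out

def decompress(chunks):
    res = []
    for k, bits in chunks:
        i = 0
        while i < len(bits):
            u, i = rice_decode(bits, i, k)
            res.append(unzigzag(u))
    return res

# Quiet block followed by a noisy block: each gets its own best k.
residuals = [0, 1, -1, 2, 0, -2, 1, 0, 25, -30, 18, -22, 27, -19, 31, -24]
assert decompress(compress(residuals)) == residuals   # exactly lossless
```

The round trip is exact, which is the defining property of a lossless scheme; the quiet block selects a small k while the noisy block selects a larger one, which is precisely the per-block adaptation the abstract describes.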
An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery
Directory of Open Access Journals (Sweden)
Xiangguang Leng
2016-08-01
Full Text Available With the rapid development of spaceborne synthetic aperture radar (SAR and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way.
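Adaptive candidate detection in SAR imagery is commonly built on constant false alarm rate (CFAR) thresholding, where the detection threshold adapts to the local clutter level. The paper's scheme is considerably more elaborate (resolution-dependent strategies plus a discrimination stage); the sketch below shows only a generic one-dimensional cell-averaging CFAR with illustrative parameters.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Flag cells whose power exceeds `scale` times the mean of the
    surrounding training cells (guard cells excluded)."""
    hits = np.zeros(len(power), dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        hits[i] = power[i] > scale * np.concatenate([left, right]).mean()
    return hits

rng = np.random.default_rng(1)
scene = rng.exponential(1.0, size=200)   # sea-clutter-like background power
scene[100] += 40.0                       # bright, ship-like scatterer
hits = ca_cfar(scene)
print(hits[100])
```

Because the threshold is estimated locally, the same detector works across scenes with different clutter levels; in 2-D SAR processing the training cells become a ring around the cell under test.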
Bouida, Zied
2012-09-01
Under the scenario of an underlay cognitive radio network, we propose in this paper an adaptive scheme using transmit power adaptation, switched transmit diversity, and adaptive modulation in order to improve the performance of existing switching efficient schemes (SES) and bandwidth efficient schemes (BES). Taking advantage of the channel reciprocity principle, we assume that the channel state information (CSI) of the interference link is available to the secondary transmitter. This information is then used by the secondary transmitter to adapt its transmit power, modulation constellation size, and the transmit branch used. The goal of this joint adaptation is to minimize the average number of switched branches and the average system delay given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver. We analyze the proposed scheme in terms of the average number of branch switches and the average delay, and we provide a closed-form expression for the average bit error rate (BER). We demonstrate through numerical examples that the proposed scheme provides a compromise between the SES and BES schemes. © 2012 IEEE.
Towards Adaptive High-Resolution Images Retrieval Schemes
Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.
2016-10-01
Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of images have been proposed, differing mainly in the type of features extracted. As these features are supposed to represent the query image efficiently, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is somewhat or very structured, a shape feature will be somewhat or very effective, while if the image is composed of a single texture, a parameter reflecting the texture of the image will prove more efficient. This motivates the use of adaptive schemes. For this purpose, we propose to investigate this idea by adapting the retrieval scheme to the nature of the image. This is achieved through some preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on the creation of bags of visual words using SIFT (Scale Invariant Feature Transform) descriptors and those based on multiscale feature extraction using wavelets and steerable pyramids.
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Adaptation of failure scenario based resilience schemes toward availability guarantees
Scheffel, Matthias
2006-07-01
Various resilience schemes have been proposed to allow for fault-tolerant transport networks. Their common aim is to survive certain failure patterns such as node or span failures by providing alternative transmission paths. However, network operators guarantee the resulting network reliability in terms of service availability to their business customers. A maximum duration of service disruption per year must not be exceeded. We investigate an optimal design of resilient network configurations that adapts to end-to-end availability requirements. We formulate an integer linear program that minimizes the resource utilization and investigate a case study.
An Adaptive Motion Estimation Scheme for Video Coding
Directory of Open Access Journals (Sweden)
Pengyu Liu
2014-01-01
Full Text Available The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of the motion vector (MV) distribution. Second, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.
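The principle of predictor-centered adaptive search, as opposed to an exhaustive scan of a large window, can be sketched in a few lines. This toy example is neither UMHexagonS nor the proposed scheme; it simply shows how a predicted MV narrows the search to a small region, with the block size, search radius, and motion model assumed for illustration.

```python
import numpy as np

def sad(a, b):  # sum-of-absolute-differences block-matching cost
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def block_match(ref, cur, top, left, bs, center, radius):
    """Search for the best MV only around the predicted `center`."""
    block = cur[top:top + bs, left:left + bs]
    best_mv, best_cost = None, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + center[0] + dy, left + center[1] + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = sad(ref[y:y + bs, x:x + bs], block)
                if best_cost is None or cost < best_cost:
                    best_mv, best_cost = (center[0] + dy, center[1] + dx), cost
    return best_mv, best_cost

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(3, 5), axis=(0, 1))  # frame shifted by (3, 5)

# An MV predicted from neighbouring blocks steers a small search window;
# here the predictor is assumed accurate.
predicted = (-3, -5)   # cur block at (y, x) matches ref at (y - 3, x - 5)
mv, cost = block_match(ref, cur, 16, 16, 8, predicted, radius=2)
print(mv, cost)
```

With a radius of only 2 around the predicted MV, the search visits 25 candidates instead of the hundreds a full search over a comparable displacement range would require, yet still recovers the true displacement exactly.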
Adaptive dynamic capacity allocation scheme for voice and data transmission
Yu, Yonglin; Wang, Gang; Lei, Daocheng; Ma, Runnian
2011-12-01
Based on the theory of adaptive modulation, the compressed format is introduced into voice and data transmission, and a novel adaptive dynamic capacity allocation algorithm is presented. In the given transmission system model, according to the channel state information (CSI) provided by channel estimation, the transmitter can adaptively select the modulation mode and shrink the voice symbol duration to improve the throughput of data transmission. Simulation results show that the novel algorithm can effectively evaluate the percentage of a frame occupied by data bits and improve the data throughput.
Optimal model distributions in supervisory adaptive control
Ghosh, D.; Baldi, S.
2017-01-01
Several classes of multi-model adaptive control schemes have been proposed in the literature: instead of one single parameter-varying controller, this adaptive methodology utilises multiple fixed-parameter controllers for different operating regimes (i.e. different models). Despite advances in
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory is required for the electronic structure calculation. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition-of-unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
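The flavour of adaptive node refinement driven by a local error estimate can be shown in one dimension. This sketch uses piecewise-linear interpolation rather than polyharmonic splines with a partition of unity, and the test function and tolerance are illustrative: intervals are subdivided only where the interpolant disagrees with a freshly sampled midpoint (a real implementation would cache and reuse these evaluations).

```python
import numpy as np

def f(x):                       # stand-in for an expensive PES evaluation
    return np.exp(-x**2) * np.cos(5 * x)

def adaptive_nodes(a, b, tol=1e-3, max_iter=30):
    """Refine only intervals whose midpoint is poorly predicted by
    linear interpolation of its endpoints (the local error estimate)."""
    xs = list(np.linspace(a, b, 5))
    for _ in range(max_iter):
        ys = [f(x) for x in xs]
        new = []
        for i in range(len(xs) - 1):
            mid = 0.5 * (xs[i] + xs[i + 1])
            pred = 0.5 * (ys[i] + ys[i + 1])
            if abs(pred - f(mid)) > tol:   # local error estimate
                new.append(mid)
        if not new:                        # every interval meets tolerance
            break
        xs = sorted(xs + new)
    return np.array(xs)

nodes = adaptive_nodes(-3.0, 3.0)
print(len(nodes))
```

The node density ends up high where the function oscillates strongly (near x = 0) and stays coarse in the flat tails, which is exactly the sample-point saving the adaptive refinement is meant to deliver.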
An Adaptive Estimation Scheme for Open-Circuit Voltage of Power Lithium-Ion Battery
Directory of Open Access Journals (Sweden)
Yun Zhang
2013-01-01
Full Text Available Open-circuit voltage (OCV) is one of the most important parameters in determining the state of charge (SoC) of a power battery. Its direct measurement is costly and time consuming. This paper describes an adaptive scheme that can be used to derive the OCV of a power battery. The scheme only uses the measurable input (terminal current) and measurable output (terminal voltage) signals of the battery system and is simple enough to enable online implementation. First, an equivalent circuit model is employed to describe the polarization characteristic and the dynamic behavior of the lithium-ion battery, and the state-space representation of the electrical performance of the battery is obtained from this model. The implementation procedure of the adaptive scheme is then given, and the asymptotic convergence of the observer error and the boundedness of all parameter estimates are proven. Finally, experiments are carried out, and the effectiveness of the adaptive estimation scheme is validated by the experimental results.
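A stripped-down version of such an adaptive OCV estimator can be sketched with a first-order Thevenin equivalent circuit. Unlike the paper's observer, this sketch assumes the circuit parameters (R0, R1, and the polarization pole) are known and adapts only the OCV estimate from the measurable current and terminal voltage; all numerical values are illustrative.

```python
import numpy as np

# First-order Thevenin model: v = ocv - R0*i - v1,
# with polarization state v1[k+1] = a*v1[k] + R1*(1 - a)*i[k].
R0, R1, a = 0.05, 0.03, 0.95
ocv_true = 3.7

rng = np.random.default_rng(3)
current = 1.0 + 0.5 * np.sin(0.05 * np.arange(2000))  # measurable input

# Simulate the "real" battery terminal voltage (measurable output).
v1, voltage = 0.0, []
for i in current:
    voltage.append(ocv_true - R0 * i - v1 + 1e-3 * rng.normal())
    v1 = a * v1 + R1 * (1 - a) * i

# Adaptive estimator: propagate the model's polarization state and
# correct the OCV estimate with a small gradient step on the error.
ocv_hat, v1_hat, gain = 3.0, 0.0, 0.05
for i, v in zip(current, voltage):
    v_pred = ocv_hat - R0 * i - v1_hat
    ocv_hat += gain * (v - v_pred)      # error-driven adaptation
    v1_hat = a * v1_hat + R1 * (1 - a) * i

print(round(ocv_hat, 2))
```

Because the prediction error is driven entirely by the OCV mismatch (plus measurement noise), the estimate converges geometrically to the true value using only terminal measurements, which is the essence of the online scheme described above.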
Adaptive Decision-Making Scheme for Cognitive Radio Networks
Alqerm, Ismail
2014-05-01
Radio resource management is becoming an important aspect of current wireless networks because of spectrum scarcity and application heterogeneity. Cognitive radio is a potential candidate for resource management because of its capability to satisfy the growing wireless demand and improve network efficiency. Decision-making is the main function of the radio resource management process, as it determines the radio parameters that control the use of these resources. In this paper, we propose an adaptive decision-making scheme (ADMS) for radio resource management for different types of network applications, including power-consuming, emergency, multimedia, and spectrum-sharing applications. ADMS exploits a genetic algorithm (GA) as an optimization tool for decision-making. It incorporates several objective functions for the decision-making process, such as minimizing power consumption, packet error rate (PER), delay, and interference, while maximizing throughput and spectral efficiency. Simulation results and testbed evaluation demonstrate ADMS functionality and efficiency.
Directory of Open Access Journals (Sweden)
A.M. Ibrahim
2016-09-01
Full Text Available This paper presents an adaptive protection coordination scheme for optimal coordination of directional overcurrent relays (DOCRs) in interconnected power networks under the impact of distributed generation (DG); the coordination technique used is the Artificial Bee Colony (ABC) algorithm. The scheme adapts to system changes: new relay settings are obtained as the generation level or system topology changes. The developed adaptive scheme is applied to the IEEE 30-bus test system for both single- and multi-DG cases, and the results are shown and discussed.
Design and Analysis of Schemes for Adapting Migration Intervals in Parallel Evolutionary Algorithms.
Mambrini, Andrea; Sudholt, Dirk
2015-01-01
The migration interval is one of the fundamental parameters governing the dynamic behaviour of island models. Yet, there is little understanding on how this parameter affects performance, and how to optimally set it given a problem in hand. We propose schemes for adapting the migration interval according to whether fitness improvements have been found. As long as no improvement is found, the migration interval is increased to minimise communication. Once the best fitness has improved, the migration interval is decreased to spread new best solutions more quickly. We provide a method for obtaining upper bounds on the expected running time and the communication effort, defined as the expected number of migrants sent. Example applications of this method to common example functions show that our adaptive schemes are able to compete with, or even outperform, the optimal fixed choice of the migration interval, with regard to running time and communication effort.
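The adaptation rule described above (back off while stagnating, migrate quickly after an improvement) can be sketched for a toy island model on OneMax. The complete-broadcast migration topology, the doubling cap, and all parameters are illustrative choices, not the authors' exact schemes.

```python
import random

random.seed(4)

def onemax(bits):
    return sum(bits)

def mutate(bits):
    child = bits[:]
    child[random.randrange(len(child))] ^= 1
    return child

n, n_islands = 30, 4
pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_islands)]
best = max(map(onemax, pop))
tau, since, migrations = 1, 0, 0   # tau is the adaptive migration interval

while best < n:
    for j in range(n_islands):     # one (1+1) EA step per island
        child = mutate(pop[j])
        if onemax(child) >= onemax(pop[j]):
            pop[j] = child
    new_best = max(map(onemax, pop))
    if new_best > best:
        best, tau = new_best, 1    # improvement found: migrate soon
    else:
        tau = min(2 * tau, 64)     # stagnation: reduce communication
    since += 1
    if since >= tau:               # interval elapsed: migrate the champion
        champ = max(pop, key=onemax)
        pop = [champ[:] for _ in pop]
        migrations, since = migrations + 1, 0

print(best, migrations)
```

Counting `migrations` corresponds to the communication-effort measure analysed in the paper: doubling the interval during stagnation keeps that count low, while resetting it after an improvement spreads new best solutions quickly.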
Adaptive lifting scheme with sparse criteria for image coding
Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe
2012-12-01
Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters, and vice versa. In light of this, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
An adaptive nonlinear solution scheme for reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Lett, G.S. [Scientific Software - Intercomp, Inc., Denver, CO (United States)
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradient-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
Adaptive multi-objective Optimization scheme for cognitive radio resource management
Alqerm, Ismail
2014-12-01
Cognitive radio is an intelligent software-defined radio that is capable of altering its transmission parameters according to predefined objectives and wireless environment conditions. The cognitive engine is the actuator that performs radio parameter configuration by exploiting optimization and machine learning techniques. In this paper, we propose an Adaptive Multi-objective Optimization Scheme (AMOS) for cognitive radio resource management to improve spectrum operation and network performance. The optimization relies on adapting radio transmission parameters to environment conditions using constrained optimization models, called fitness functions, in an iterative manner. These functions include minimizing power consumption, bit error rate, delay and interference, while maximizing throughput and spectral efficiency. Cross-layer optimization is exploited to access environmental parameters from all TCP/IP stack layers. AMOS uses a genetic algorithm, adaptive in its parameters and objective weights, as the vehicle of optimization. The proposed scheme has demonstrated quick response and efficiency in three different scenarios compared to other schemes. In addition, it shows its capability to optimize the performance of the TCP/IP layers as a whole, not only the physical layer.
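The GA-driven decision core of such a scheme can be sketched with a toy fitness that trades throughput against power. The candidate parameter sets, the rate model, and the fixed objective weights are all illustrative assumptions (AMOS additionally adapts the GA's own parameters and the objective weights, which this sketch omits).

```python
import random

random.seed(5)

powers = [1, 2, 4, 8, 16]   # candidate transmit power levels (mW, assumed)
mods = [2, 4, 16, 64]       # candidate constellation sizes

def fitness(ind, w_thr=0.6, w_pow=0.4):
    """Weighted scalarization of two objectives (toy rate/power model)."""
    p, m = ind
    throughput = (m.bit_length() - 1) * p / (p + 1)  # bits/symbol x SNR factor
    return w_thr * throughput - w_pow * p / max(powers)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

pop = [(random.choice(powers), random.choice(mods)) for _ in range(20)]
for _ in range(40):  # generations
    nxt = []
    for _ in range(len(pop)):
        child = (tournament(pop)[0], tournament(pop)[1])  # uniform crossover
        if random.random() < 0.2:                         # mutate power gene
            child = (random.choice(powers), child[1])
        if random.random() < 0.2:                         # mutate mod gene
            child = (child[0], random.choice(mods))
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(best)
```

Under this toy model the GA settles on the largest constellation together with an intermediate power level, since the power penalty eventually outweighs the diminishing throughput gain; changing the weights shifts that trade-off, which is exactly the lever an adaptive scheme would tune.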
Energy Technology Data Exchange (ETDEWEB)
Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA
2017-11-01
Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires the cumulus scheme to adapt to higher resolutions than it was originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and the time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
Adaptive Test Schemes for Control of Paratuberculosis in Dairy Cows
DEFF Research Database (Denmark)
Kirkeby, Carsten Thure; Græsbøll, Kaare; Nielsen, Søren Saxmose
2016-01-01
... through a variety of test-strategies, but are challenged by the lack of perfect tests. Frequent testing increases the sensitivity, but the costs of testing are a cause of concern for farmers. Here, we used a herd simulation model using milk ELISA tests to evaluate the epidemiological and economic consequences of continuously adapting the sampling interval in response to the estimated true prevalence in the herd. The key results were that the true prevalence was greatly affected by the hygiene level and to some extent by the test-frequency. Furthermore, the choice of prevalence that will be tolerated in a control scenario had a major impact on the true prevalence in the normal hygiene setting, but less so when the hygiene was poor. The net revenue is not greatly affected by the test-strategy, because of the general variation in net revenues between farms. An exception to this is the low hygiene herd, where...
A Reconfiguration Control Scheme for a Quadrotor Helicopter via Combined Multiple Models
Directory of Open Access Journals (Sweden)
Fuyang Chen
2014-08-01
Full Text Available In this paper, an optimal reconfiguration control scheme is proposed for a quadrotor helicopter with actuator faults via adaptive control and combined multiple models. The combined models set contains several fixed models, an adaptive model and a reinitialized adaptive model. The fixed models and the adaptive model can describe the failure system under different fault conditions. Moreover, the proposed reinitialized adaptive model refers to the closest model of the current system and can improve the speed of convergence effectively. In addition, the reference model is designed in consideration of an optimal control performance index and the principle of the minimum cost to achieve perfect tracking performance. Finally, some simulation results demonstrate the effectiveness of the proposed reconfiguration control scheme for faulty cases.
A Rate Adaptation Scheme According to Channel Conditions in Wireless LANs
Numoto, Daisuke; Inai, Hiroshi
Rate adaptation in wireless LANs is the automatic selection of the most suitable transmission rate according to channel conditions. If the channel condition is good, a station can choose a higher transmission rate; otherwise, it should choose a lower but more noise-resistant transmission rate. Since IEEE 802.11 does not specify any rate adaptation scheme, several schemes have been proposed. However, those schemes provide low throughput or unfair transmission opportunities among stations, especially as the number of stations increases. In this paper, we propose a rate adaptation scheme under which the transmission rate quickly converges to, and then stays around, an optimal rate even in the presence of a large number of stations. Simulations show that our scheme provides higher throughput than existing ones and almost equal fairness.
Adaptive Single-Pole Autoreclosure Scheme Based on Wavelet ...
African Journals Online (AJOL)
Adaptive autoreclosing is a fast-emerging technology for improving power system marginal stability during faults. It avoids reclosing onto permanent faults ... the latter predicts optimal reclosure times. Keywords: Adaptive autoreclosure, Artificial neural networks, Autoreclosure, Signal processing, Stability, Wavelet transform ...
Adaptive nonseparable vector lifting scheme for digital holographic data compression.
Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric
2015-01-01
Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction.
Performance analysis of joint diversity combining, adaptive modulation, and power control schemes
Qaraqe, Khalid A.
2011-01-01
Adaptive modulation and diversity combining represent very important adaptive solutions for future generations of wireless communication systems. Indeed, in order to improve the performance and the efficiency of these systems, these two techniques have recently been used jointly in new schemes named joint adaptive modulation and diversity combining (JAMDC) schemes. Considering the problem of finding low hardware complexity, bandwidth-efficient, and processing-power efficient transmission schemes for a downlink scenario and capitalizing on some of these recently proposed JAMDC schemes, we propose and analyze in this paper three joint adaptive modulation, diversity combining, and power control (JAMDCPC) schemes where a constant-power variable-rate adaptive modulation technique is used with an adaptive diversity combining scheme and a common power control process. More specifically, the modulation constellation size, the number of combined diversity paths, and the needed power level are jointly determined to achieve the highest spectral efficiency with the lowest possible processing power consumption quantified in terms of the average number of combined paths, given the fading channel conditions and the required bit error rate (BER) performance. In this paper, the performance of these three JAMDCPC schemes is analyzed in terms of their spectral efficiency, processing power consumption, and error-rate performance. Selected numerical examples show that these schemes considerably increase the spectral efficiency of the existing JAMDC schemes with a slight increase in the average number of combined paths for the low signal-to-noise ratio range, while maintaining compliance with the BER performance and a low radiated power, which yields a substantial decrease in interference to co-existing users and systems. © 2011 IEEE.
A spectrally efficient detect-and-forward scheme with two-tier adaptive cooperation
Benjillali, Mustapha
2011-09-01
We propose a simple relay-based adaptive cooperation scheme to improve the spectral efficiency of "Detect-and-Forward" (DetF) half-duplex relaying in fading channels. In a new common framework, we show that the proposed scheme offers considerable gains in terms of the achievable information rates compared to conventional DetF relaying schemes for both orthogonal and non-orthogonal source/relay transmissions. The analysis leads to a general adaptive cooperation strategy based on the maximization of information rates at the destination, which needs to observe only the average signal-to-noise ratios of the links. © 2006 IEEE.
An adaptive sampling scheme for deep-penetration calculation
International Nuclear Information System (INIS)
Wang, Ruihong; Ji, Zhicheng; Pei, Lucheng
2013-01-01
The deep-penetration problem has long been one of the important and difficult problems in shielding calculations with the Monte Carlo method. In this paper, an adaptive Monte Carlo method that uses the emission point as a sampling station for shielding calculation is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the underestimation problem that easily occurs in deep-penetration calculations.
Li, Wei; Huang, Zhitong; Li, Haoyue; Ji, Yuefeng
2018-04-01
Visible light communication (VLC) is a promising candidate for short-range broadband access due to its integration of the advantages of both optical and wireless communication, but multi-user access is a key problem because of intra-cell and inter-cell interference. In addition, the non-flat channel effect results in higher losses for users in high frequency bands, which leads to unfair service quality. To solve those issues, we propose a power adaptive multi-filter carrierless amplitude and phase access (PA-MF-CAPA) scheme. In the first step of this scheme, the MF-CAPA scheme, utilizing multiple filters as different CAP dimensions, is used to realize multi-user access. The orthogonality among the filters in different dimensions mitigates the effect of intra-cell and inter-cell interference. Moreover, the MF-CAPA scheme provides different channels modulated on the same frequency bands, which further increases the transmission rate. Then, a power adaptive procedure based on the MF-CAPA scheme is presented to realize quality fairness. As demonstrated in our experiments, the MF-CAPA scheme yields an improved throughput compared with the multi-band CAP access scheme, and the PA-MF-CAPA scheme enhances quality fairness and further improves throughput compared with the MF-CAPA scheme.
A novel perceptually adaptive image watermarking scheme by ...
African Journals Online (AJOL)
Threshold and modification value were selected adaptively for each image block, which improved robustness and transparency. The proposed algorithm was able to withstand a variety of attacks and image processing operations such as rotation, cropping, noise addition, resizing and lossy compression. The experimental ...
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, and lamination. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance under changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated by matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine that mimics many features of an industrial R2R machine, including trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
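The abstract describes the model-reference idea only at a high level. The classic textbook instance of that mechanism is the MIT rule, sketched below on a first-order plant with a single adaptive feedforward gain; this is not the authors' PI tension controller, and the plant constant, adaptation gain gamma, and reference signal are our assumptions:

```python
def simulate(k_plant=2.0, gamma=0.5, dt=0.001, t_end=50.0):
    """Model-reference adaptation via the MIT rule: adapt gain theta online
    so the plant output tracks a chosen reference model."""
    y = ym = theta = 0.0
    t = 0.0
    while t < t_end:
        r = 1.0 if int(t) % 2 == 0 else -1.0    # square-wave reference
        u = theta * r                            # adaptive feedforward control
        y += dt * (-y + k_plant * u)             # plant:  y' = -y + k*u
        ym += dt * (-ym + r)                     # model:  ym' = -ym + r
        theta += dt * (-gamma * (y - ym) * ym)   # MIT rule update
        t += dt
    return theta, abs(y - ym)

theta, err = simulate()
print(theta, err)   # theta should approach 1/k_plant = 0.5
```

The matching condition here is theta* = 1/k_plant, at which point the closed loop reproduces the reference model exactly; the paper's scheme applies the same matching idea to both PI gains.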
On the feedback error compensation for adaptive modulation and coding scheme
Choi, Seyeong
2011-11-25
In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Shunfu Jin
2013-01-01
Full Text Available In cognitive radio networks, if all the secondary user (SU) packets join the system without any restriction, the average latency of the SU packets increases, especially when the traffic load of the system is high. To address this, we propose an adaptive admission control scheme with a system access probability for the SU packets. We suppose the system access probability is inversely proportional to the total number of packets in the system and introduce an Adaptive Factor to adjust it. Accordingly, we build a discrete-time preemptive queueing model with an adjustable joining rate. In order to obtain the steady-state distribution of the queueing model exactly, we construct a two-dimensional Markov chain. Moreover, we derive formulas for the blocking rate, the throughput, and the average latency of the SU packets. We then provide numerical results to investigate the influence of the Adaptive Factor on the different performance measures, and give the individually optimal strategy and the socially optimal strategy from the standpoint of the SU packets. Finally, we provide a pricing mechanism to coordinate the two optimal strategies.
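The abstract states only that the access probability is inversely proportional to the number of packets in the system, tuned by an Adaptive Factor. One plausible concrete form, together with a toy discrete-time simulation of its effect on the blocking rate, is sketched below (the functional form and all constants are our assumptions, not the paper's):

```python
import random

def access_probability(n_in_system, alpha):
    """Access probability inversely proportional to queue occupancy,
    scaled by the Adaptive Factor alpha (one plausible form only)."""
    return min(1.0, alpha / (n_in_system + 1))

def blocking_rate(alpha, p_arrive=0.6, p_serve=0.5, steps=20_000, seed=1):
    """Toy discrete-time queue: arriving SU packets join with the adaptive
    access probability; rejected packets are counted as blocked."""
    rng = random.Random(seed)
    n = blocked = arrivals = 0
    for _ in range(steps):
        if rng.random() < p_arrive:
            arrivals += 1
            if rng.random() < access_probability(n, alpha):
                n += 1
            else:
                blocked += 1
        if n > 0 and rng.random() < p_serve:
            n -= 1
    return blocked / arrivals

print(blocking_rate(0.5), blocking_rate(2.0))  # larger alpha blocks fewer packets
```

A larger Adaptive Factor admits more packets (lower blocking rate) at the cost of longer queues, which is exactly the trade-off the paper's optimal strategies balance.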
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux
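The core filter step, dissipation applied only where a detector flags it, can be illustrated with a much simpler Jameson-type switch than the ACM and wavelet sensors used in the paper; this one-dimensional sketch is ours, and the dissipation coefficient kappa is an assumption:

```python
def sensor(u, i, eps=1e-12):
    """Normalized second difference: near 1 at a jump, ~0 in smooth data
    (a crude stand-in for the paper's ACM/wavelet detectors)."""
    return abs(u[i+1] - 2*u[i] + u[i-1]) / (abs(u[i+1]) + 2*abs(u[i]) + abs(u[i-1]) + eps)

def filtered(u, kappa=0.5):
    """Apply second-order dissipation only where the sensor flags it,
    leaving smooth regions free of numerical dissipation."""
    out = list(u)
    for i in range(1, len(u) - 1):
        out[i] = u[i] + kappa * sensor(u, i) * (u[i+1] - 2*u[i] + u[i-1])
    return out

step = [0.0] * 10 + [1.0] * 10
linear = [0.01 * i for i in range(20)]
print(filtered(step)[9], filtered(step)[10])                      # jump gets smoothed
print(max(abs(a - b) for a, b in zip(filtered(linear), linear)))  # smooth data untouched
```

The step data is dissipated only around the discontinuity, while the linear profile passes through essentially unchanged, which is the "leave the rest of the region free of numerical dissipation contamination" property the abstract describes.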
Islanding detection scheme based on adaptive identifier signal estimation method.
Bakhshi, M; Noroozian, R; Gharehpetian, G B
2017-11-01
This paper proposes a novel passive anti-islanding method for both inverter- and synchronous-machine-based distributed generation (DG) units. When the active/reactive power mismatch is near zero, most passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on exponentially damped signal estimation. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, and can detect islanding conditions with near-zero active power imbalance. The main advantage of the adaptive identifier over other signal estimation methods is its small sampling window. The adaptive-identifier-based islanding detection method introduces a new detection index, termed the decision signal, obtained by estimating the oscillation frequency of the PCC frequency. In islanding conditions this oscillation frequency approaches zero, so setting a threshold for the decision signal is not a tedious job. Non-islanding transient events that can cause significant deviations in the PCC frequency are considered in the simulations, including different types of faults, load changes, capacitor bank switching, and motor starting. Furthermore, for islanding events, the capability of the proposed method is verified for near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
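The decision signal is the estimated oscillation frequency of the PCC frequency deviation. A far simpler estimator than the paper's adaptive identifier, zero-crossing counting, already shows why thresholding that quantity is easy; the synthetic signals, sampling step, and threshold below are our assumptions:

```python
import math

def oscillation_frequency(samples, dt):
    """Estimate the dominant oscillation frequency by counting sign changes
    (two per cycle); a crude stand-in for the adaptive identifier."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / (2.0 * dt * (len(samples) - 1))

dt = 0.001
# Grid-connected case: the PCC frequency deviation oscillates (here ~3 Hz).
connected = [0.2 * math.sin(2 * math.pi * 3.0 * k * dt) for k in range(2000)]
# Islanded case: the oscillation dies out and the deviation goes flat.
islanded = [0.0] * 2000

threshold = 0.5   # hypothetical threshold on the decision signal
for name, sig in (("connected", connected), ("islanded", islanded)):
    print(name, "islanding flagged:", oscillation_frequency(sig, dt) < threshold)
```

Because the decision signal collapses toward zero under islanding and stays well above zero otherwise, the threshold does not need careful tuning, which is the abstract's point.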
A low order adaptive control scheme for hydraulic servo systems
DEFF Research Database (Denmark)
Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Bech, Michael Møller
2015-01-01
......system were constructed and linearized. Controllers are implemented and tested on the manipulator. Pressure feedback was found to greatly improve system stability margins. Passive gain feedforward shows improved tracking performance for small changes in load pressure. For large changes in load pressure, active gain feedforward shows a slightly improved performance. Computed-Torque Control shows better performance, but requires a well described system for best performance. A novel Adaptive Inverse Dynamics Controller was tested and the performance was found to be similar to that of Computed-Torque Control.
Fromang, S.; Hennebelle, P.; Teyssier, R.
2006-01-01
In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. The algorithm is based on a previous work in which the MUSCL--Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Through a series of test problems, we illustrate the performances of this new code using two diffe...
Provisioning of adaptability to variable topologies for routing schemes in MANETs
DEFF Research Database (Denmark)
Jiang, Shengming; Liu, Yaoda; Jiang, Yuming
2004-01-01
Frequent changes in network topologies caused by mobility in mobile ad hoc networks (MANETs) impose great challenges to designing routing schemes for such networks. Various routing schemes each aiming at a particular type of MANET (e.g., flat or clustered MANETs) with different mobility degrees (e...... in the dynamic source routing protocol to provide the adaptability to variable topologies caused by mobility through computer simulation in NS-2....
Estimation of Stator Winding Faults in Induction Motors using an Adaptive Observer Scheme
DEFF Research Database (Denmark)
Kallesøe, C. S.; Vadstrup, P.; Rasmussen, Henrik
2004-01-01
This paper addresses the subject of inter-turn short circuit estimation in the stator of an induction motor. In the paper an adaptive observer scheme is proposed. The proposed observer is capable of simultaneously estimating the speed of the motor, the number of turns involved in the short circuit...... and an expression of the current in the short circuit. Moreover the states of the motor are estimated, meaning that the magnetizing currents are made available even though a fault has happened in the motor. To be able to develop this observer, a model particularly suitable for the chosen observer design is also...... derived. The efficiency of the proposed observer is demonstrated by tests performed on a test setup with a custom-designed induction motor. With this motor it is possible to simulate inter-turn short circuit faults....
Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids
Buse, Gerrit
2014-01-01
In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated grids, both with and without boundary grid points. Similar to the implicit data structures proposed in Feuersänger (Dünngitterverfahren für hochdimensionale elliptische partielle Differentialgleichungen. Diploma Thesis, Institut für Numerische Simulation, Universität Bonn, 2005) and Murarasu et al. (Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming. Cambridge University Press, New York, 2011, pp. 25–34), we also define a bijective mapping from the multi-dimensional space of grid points to a contiguous index, such that the grid data can be stored in a simple array without overhead. Our approach is especially well-suited to exploit all levels of current commodity hardware, including cache-levels and vector extensions. Furthermore, this kind of data structure is extremely attractive for today’s real-time applications, as it gives direct access to the hierarchical structure of the grids, while outperforming other common sparse grid structures (hash maps, etc.), which do not map to modern compute platforms as well. For dimensionality d ≤ 10 we achieve good speedups on a 12 core Intel Westmere-EP NUMA platform compared to the results presented in Murarasu et al. (Proceedings of the International Conference on Computational Science—ICCS 2012. Procedia Computer Science, 2012). As we show, this also holds for the results obtained on Nvidia Fermi GPUs, for which we observe speedups over our own CPU implementation of up to 4.5 when dealing with moderate dimensionality. In high-dimensional settings, in the order of tens to hundreds of dimensions, our sparse grid evaluation kernels on the CPU outperform any other known implementation.
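The bijective mapping from hierarchical grid points to a contiguous index can be sketched for a regular (boundaryless) sparse grid as follows; the enumeration order and offset layout here are our own simplification, not the chapter's cache-optimised layout:

```python
from itertools import product

def level_vectors(d, n):
    """Level multi-indices l (entries >= 1) of the regular sparse grid of
    level n in d dimensions: |l|_1 <= n + d - 1."""
    return [l for l in product(range(1, n + 1), repeat=d) if sum(l) <= n + d - 1]

def build_offsets(d, n):
    """Assign each level vector a contiguous block; the block size is the
    number of hierarchical (odd) indices, prod_j 2^(l_j - 1)."""
    offsets, pos = {}, 0
    for l in level_vectors(d, n):
        offsets[l] = pos
        pos += 1 << sum(lj - 1 for lj in l)
    return offsets, pos

def flat_index(offsets, l, i):
    """Bijection (l, i) -> array position: mixed-radix digits (i_j - 1)/2
    inside the block assigned to l. Each i_j is odd, 1 <= i_j < 2^l_j."""
    idx = 0
    for lj, ij in zip(l, i):
        idx = idx * (1 << (lj - 1)) + (ij - 1) // 2
    return offsets[l] + idx

offsets, total = build_offsets(d=2, n=3)
print(total)   # -> 17 grid points for the level-3 regular sparse grid in 2D
```

Because the mapping is bijective onto 0..total-1, the hierarchical coefficients can live in one flat array with no hash map and no per-point overhead, which is what makes vectorised evaluation straightforward.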
Analysis of an Adaptive P-Persistent MAC Scheme for WLAN Providing Delay Fairness
Yen, Chih-Ming; Chang, Chung-Ju; Chen, Yih-Shen; Huang, Ching Yao
The paper proposes and analyzes an adaptive p-persistent-based (APP) medium access control (MAC) scheme for IEEE 802.11 WLAN. The APP MAC scheme intends to support delay fairness, i.e., a small delay variance, for every station in each access. It differentiates the transmission permission probabilities of stations that have incurred different packet delays. This permission probability is designed as a function of the numbers of retransmissions and re-backoffs, so that stations with larger packet delay are endowed with higher permission probability. The scheme is analyzed by a Markov-chain analysis, from which the collision probability, the system throughput, and the average delay are obtained. Numerical results show that the proposed APP MAC scheme attains lower mean delay and higher mean throughput. Meanwhile, simulation results justify the validity of the analysis and show that the APP MAC scheme achieves more delay fairness than conventional algorithms.
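The permission probability is described only qualitatively: it increases with the numbers of retransmissions and re-backoffs a station has suffered. One hypothetical function with that property (the paper's exact form, and the constants p0 and step, are not given in this summary) is:

```python
def permission_probability(retransmissions, rebackoffs, p0=0.5, step=0.125):
    """Hypothetical APP-style rule: stations that have already suffered more
    retransmissions/re-backoffs (i.e. larger accumulated delay) are granted
    a higher permission probability, capped at 1."""
    return min(1.0, p0 + step * (retransmissions + rebackoffs))

# A freshly contending station vs. one delayed by 2 retransmissions + 1 re-backoff:
print(permission_probability(0, 0), permission_probability(2, 1))  # -> 0.5 0.875
```

Giving delayed stations priority in this way shrinks the delay variance across stations, which is the fairness property the Markov-chain analysis quantifies.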
Neural network models of learning and adaptation
Denker, John S.
1986-10-01
Recent work has applied ideas from many fields including biology, physics and computer science, in order to understand how a highly interconnected network of simple processing elements can perform useful computation. Such networks can be used as associative memories, or as analog computers to solve optimization problems. This article reviews the workings of a standard model with particular emphasis on various schemes for learning and adaptation.
Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems
Qaraqe, Marwa
2012-12-01
In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.
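The switched-diversity selection, scanning users sequentially and stopping at the first acceptable one, can be sketched as below; the tuple layout, thresholds, and the fallback to the best compliant user are our assumptions rather than details from the paper:

```python
def select_user(users, snr_min, interference_max):
    """Scan users in order; pick the first that satisfies both the primary's
    interference constraint and the SNR threshold (switched diversity).
    If none qualifies, fall back to the best compliant user, if any.
    users: list of (snr_db, interference_at_primary) tuples."""
    compliant = [u for u in users if u[1] <= interference_max]
    for u in compliant:
        if u[0] >= snr_min:
            return u                       # first acceptable user: stop scanning
    return max(compliant, default=None)    # fallback: best SNR among compliant

users = [(5.0, 0.10), (12.0, 0.50), (9.0, 0.05), (14.0, 0.08)]
print(select_user(users, snr_min=8.0, interference_max=0.2))   # -> (9.0, 0.05)
print(select_user(users, snr_min=20.0, interference_max=0.2))  # -> (14.0, 0.08)
```

Note that the scan stops at the first acceptable user even though a stronger compliant user exists later in the list; that early stop is precisely what reduces the feedback load relative to best-user selection.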
A chaos detectable and time step-size adaptive numerical scheme for nonlinear dynamical systems
Chen, Yung-Wei; Liu, Chein-Shan; Chang, Jiang-Ren
2007-02-01
The first step in investigation the dynamics of a continuous time system described by ordinary differential equations is to integrate them to obtain trajectories. In this paper, we convert the group-preserving scheme (GPS) developed by Liu [International Journal of Non-Linear Mechanics 36 (2001) 1047-1068] to a time step-size adaptive scheme, x=x+hf(x,t), where x∈R is the system variables we are concerned with, and f(x,t)∈R is a time-varying vector field. The scheme has the form similar to the Euler scheme, x=x+Δtf(x,t), but our step-size h is adaptive automatically. Very interestingly, the ratio h/Δt, which we call the adaptive factor, can forecast the appearance of chaos if the considered dynamical system becomes chaotical. The numerical examples of the Duffing equation, the Lorenz equation and the Rossler equation, which may exhibit chaotic behaviors under certain parameters values, are used to demonstrate these phenomena. Two other non-chaotic examples are included to compare the performance of the GPS and the adaptive one.
SYNTHESIS OF VISCOELASTIC MATERIAL MODELS (SCHEMES
Directory of Open Access Journals (Sweden)
V. Bogomolov
2014-10-01
Full Text Available The principles of structural viscoelastic schemes construction for materials with linear viscoelastic properties in accordance with the given experimental data on creep tests are analyzed. It is shown that there can be only four types of materials with linear visco-elastic properties.
Adaptive numerical algorithms in space weather modeling
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
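The block-adaptive idea, refining the mesh only where the solution demands it, reduces in one dimension to a criterion like the toy one below; BATL's actual refinement criteria, block shapes, and load balancing are far richer than this sketch:

```python
def refine(blocks, field, threshold):
    """Toy 1-D block-AMR pass: split any block across which the field value
    changes by more than `threshold`. blocks: list of (x0, x1) intervals."""
    out = []
    for x0, x1 in blocks:
        xm = 0.5 * (x0 + x1)
        jump = max(abs(field(xm) - field(x0)), abs(field(x1) - field(xm)))
        if jump > threshold:
            out += [(x0, xm), (xm, x1)]    # split: two child blocks
        else:
            out.append((x0, x1))           # keep: coarse block suffices
    return out

def step_profile(x):
    """Discontinuity at x = 0.5 (e.g. a shock front)."""
    return 0.0 if x < 0.5 else 1.0

blocks = [(0.0, 1.0)]
for _ in range(4):                          # four adaptation passes
    blocks = refine(blocks, step_profile, threshold=0.1)
print(len(blocks), min(x1 - x0 for x0, x1 in blocks))  # -> 5 0.0625
```

After four passes the fine blocks cluster around the discontinuity while the smooth regions keep one coarse block each, which is how a block-adaptive grid resolves disparate spatial scales at modest cost.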
Adaptive numerical algorithms in space weather modeling
International Nuclear Information System (INIS)
Tóth, Gábor; Holst, Bart van der; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-01-01
Space weather describes the various processes in the Sun–Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
Adaptive Numerical Algorithms in Space Weather Modeling
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical
Modeling and Simulation of Downlink Subcarrier Allocation Schemes in LTE
DEFF Research Database (Denmark)
Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars
2012-01-01
The efficient utilization of the air interface in the LTE standard is achieved through a combination of subcarrier allocation schemes, adaptive modulation and coding, and transmission power allotment. The scheduler in the base station has a major role in achieving the required QoS and the overall...
What Drives Business Model Adaptation?
DEFF Research Database (Denmark)
Saebi, Tina; Lien, Lasse B.; Foss, Nicolai Juul
2017-01-01
Business models change as managers not only innovate business models, but also engage in more mundane adaptation in response to external changes, such as changes in the level or composition of demand. However, little is known about what causes such business model adaptation. We employ threat-rigidity as well as prospect theory to examine business model adaptation in response to external threats and opportunities. Additionally, drawing on the behavioural theory of the firm, we argue that the past strategic orientation of a firm creates path dependencies that influence the propensity of the firm...... to adapt its business model. We test our hypotheses on a sample of 1196 Norwegian companies, and find that firms are more likely to adapt their business model under conditions of perceived threats than opportunities, and that strategic orientation geared towards market development is more conducive......
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Directory of Open Access Journals (Sweden)
Ali Safa Sadiq
2014-01-01
Full Text Available We propose an adaptive handover prediction (AHP) scheme for seamless mobility in wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and used as inputs of the fuzzy decision making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively and applied as statistical information to adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed independently in each MN once the RSS, direction toward the APs, and AP load are known. Finally, performance evaluation of the proposed scheme shows its superiority over representative prediction approaches.
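The adaptive-coefficient idea, normalising each metric by the mean and standard deviation observed across the currently visible APs before combining them, can be shown with a crisp (non-fuzzy) stand-in for the paper's inference system; the metric values, weights, and sign conventions below are our assumptions:

```python
import statistics

def handover_scores(aps, weights):
    """Crisp stand-in for the fuzzy decision system: z-score each metric
    across the visible APs (the adaptive mean/std idea), flip the sign of
    'load' (lower is better), and combine with weights.
    aps: {name: (rss_dbm, direction_alignment, load)}."""
    names = list(aps)
    signs = (1.0, 1.0, -1.0)
    columns = list(zip(*aps.values()))
    z = []
    for col in columns:
        mu = statistics.mean(col)
        sd = statistics.pstdev(col) or 1.0    # guard: all-equal metric
        z.append([(v - mu) / sd for v in col])
    return {name: sum(w * s * z[m][k] for m, (w, s) in enumerate(zip(weights, signs)))
            for k, name in enumerate(names)}

aps = {"AP1": (-60.0, 0.9, 0.7), "AP2": (-75.0, 0.2, 0.3), "AP3": (-62.0, 0.8, 0.2)}
scores = handover_scores(aps, weights=(0.5, 0.3, 0.2))
print(max(scores, key=scores.get))   # -> AP3 (good signal and alignment, low load)
```

Because the normalisation is recomputed from whatever APs are currently visible, the effective decision boundaries adapt to the environment rather than relying on fixed thresholds.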
An adaptive short-term prediction scheme for wind energy storage management
International Nuclear Information System (INIS)
Blonbou, Ruddy; Monjoly, Stephanie; Dorville, Jean-Francois
2011-01-01
Research highlights: → We develop a real-time algorithm for grid-connected wind energy storage management. → The method aims to guarantee, within a ±5% error margin, the power sent to the grid. → Dynamic scheduling of energy storage is based on short-term energy prediction. → Accurate predictions reduce the need for storage capacity. -- Abstract: An efficient forecasting scheme that includes some information on the likelihood of the forecast, based on better knowledge of the characteristics of wind variations and their influence on power output variation, is of key importance for the optimal integration of wind energy in an island's power system. In the Guadeloupean archipelago (French West Indies), with a total wind power capacity of 25 MW, wind energy can represent up to 5% of the instantaneous electricity production. At this level, the wind energy contribution can be equivalent to the current network primary control reserve, which makes balancing difficult. The share of wind energy is due to grow even further, since the objective is set to reach 118 MW by 2020. For the network operator it is evident that, due to security concerns of the electrical grid, the share of wind generation should not increase unless the prediction problem is solved. The University of the French West Indies and Guyana has developed a short-term wind energy prediction scheme that uses artificial neural networks and adaptive learning procedures based on a Bayesian approach and Gaussian approximation. This paper reports the results of the evaluation of the proposed approach; the improvement with respect to the simple persistence prediction model was globally good. A discussion of how such a tool, combined with energy storage capacity, could help smooth the wind power variation and improve the wind energy penetration rate into the island utility network is also proposed.
Multiple model adaptive control with mixing
Kuipers, Matthew
Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely-tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full-suite of linear time invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real-time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed off line, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
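The mixing mechanism, blending precomputed candidate controllers according to the current parameter estimate instead of hard-switching between them, can be illustrated with a simple kernel-based mixer; the thesis derives its mixing signal from certainty equivalence, so the Gaussian kernel, design points, and gains below are purely our illustration:

```python
import math

def mixing_weights(theta_hat, design_points, width=1.0):
    """Illustrative mixing signal: a Gaussian kernel centred on each
    candidate controller's design point, normalised to sum to one."""
    raw = [math.exp(-((theta_hat - p) / width) ** 2) for p in design_points]
    total = sum(raw)
    return [r / total for r in raw]

def blended_gain(theta_hat, design_points, gains, width=1.0):
    """Mix the finely-tuned candidate gains according to the current
    parameter estimate -- smooth, with no controller switching."""
    w = mixing_weights(theta_hat, design_points, width)
    return sum(wi * ki for wi, ki in zip(w, gains))

points = [-2.0, 0.0, 2.0]     # parameter-space design points of 3 candidates
gains = [5.0, 1.0, 0.2]       # candidate gains, computed off-line
print(blended_gain(-2.0, points, gains))   # dominated by the first candidate
print(blended_gain(0.0, points, gains))    # dominated by the second
```

As the estimate moves through parameter space the blended gain varies smoothly, avoiding the chattering that hard switching between candidates can cause.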
An adaptive scaling and biasing scheme for OFDM-based visible light communication systems.
Wang, Zhaocheng; Wang, Qi; Chen, Sheng; Hanzo, Lajos
2014-05-19
Orthogonal frequency-division multiplexing (OFDM) has been widely used in visible light communication systems to achieve high-rate data transmission. Due to the nonlinear transfer characteristics of light emitting diodes (LEDs) and owing to the high peak-to-average-power ratio of OFDM signals, the transmitted signal has to be scaled and biased before modulating the LEDs. In this contribution, an adaptive scaling and biasing scheme is proposed for OFDM-based visible light communication systems, which fully exploits the dynamic range of the LEDs and improves the achievable system performance. Specifically, the proposed scheme calculates near-optimal scaling and biasing factors for each specific OFDM symbol according to the distribution of the signals, which strikes an attractive trade-off between the effective signal power and the clipping-distortion power. Our simulation results demonstrate that the proposed scheme significantly improves the performance without changing the LED's emitted power, while maintaining the same receiver structure.
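The per-symbol trade-off described above can be sketched with a simple grid search: bias the symbol to the middle of the LED dynamic range, then pick the scaling factor that maximizes effective signal power over clipping distortion. The dynamic range, noise floor, and search grid are illustrative assumptions; the paper derives near-optimal factors rather than searching.

```python
import numpy as np

def scale_and_bias(x, i_min=0.1, i_max=1.0, noise=1e-3, n_grid=50):
    """Grid-search stand-in for the near-optimal per-symbol factors: bias the
    OFDM symbol to the middle of the LED dynamic range, then pick the scaling
    that maximizes effective signal power over clipping distortion (plus an
    assumed receiver noise floor). i_min, i_max, and noise are illustrative."""
    bias = 0.5 * (i_min + i_max)
    best_alpha, best_score = None, -1.0
    for alpha in np.linspace(0.05, i_max - i_min, n_grid):
        y = alpha * x + bias
        y_clip = np.clip(y, i_min, i_max)        # LED limits the drive signal
        distortion = np.mean((y - y_clip) ** 2)  # clipping-distortion power
        signal = np.mean((y_clip - bias) ** 2)   # effective signal power
        score = signal / (distortion + noise)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha, bias
```

A larger scaling raises the effective power until clipping distortion dominates, which is exactly the trade-off the scheme balances symbol by symbol.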
Directory of Open Access Journals (Sweden)
Chuan Zhu
2014-01-01
Full Text Available This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets toward it in a timely manner by multihop relay. Considering that the data packets generated from different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less transmission delay while balancing energy consumption among nodes.
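A time-location formula of the kind described above might look like the following sketch, where a node uses only its loosely synchronized local clock to predict the sink's position. The square patrol path, side length, and speed are assumptions for illustration; the paper's formulas depend on the actual sink trajectory.

```python
def sink_location(t, side=100.0, speed=5.0):
    """Predict the mobile sink's position from local time t, assuming the sink
    traverses the perimeter of a square of the given side at constant speed,
    starting from corner (0, 0) and moving counter-clockwise. This is an
    illustrative stand-in for the paper's time-location formulas."""
    perimeter = 4 * side
    s = (speed * t) % perimeter          # distance travelled along the perimeter
    if s < side:
        return (s, 0.0)                  # bottom edge
    if s < 2 * side:
        return (side, s - side)          # right edge
    if s < 3 * side:
        return (side - (s - 2 * side), side)  # top edge
    return (0.0, side - (s - 3 * side))  # left edge
```

With such a formula, any node can route packets toward the sink's predicted position without per-hop sink announcements, which is what keeps the delivery delay low.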
Directory of Open Access Journals (Sweden)
Cuthbert Laurie
2011-01-01
Full Text Available Abstract A downlink adaptive distributed precoding scheme is proposed for coordinated multi-point (CoMP) transmission systems. The serving base station (BS) obtains the optimal precoding vector via user feedback. Meanwhile, the precoding vector of each coordinated BS is determined by adaptive gradient iteration according to the perturbation vector and the adjustment factor based on the vector perturbation method. In each transmission frame, the CoMP user feeds the precoding matrix index back to the serving BS, and feeds the adjustment factor index back to the coordinated BSs, which reduces the uplink feedback overhead. The selected adjustment factor for each coordinated BS is obtained via the precoding vector of the coordinated BS used in the previous frame and the preferred precoding vector of the serving BS in this frame. The proposed scheme takes advantage of the spatial non-correlation and temporal correlation of the distributed MIMO channel. The design of the adjustment factor set is given and the channel feedback delay is considered. The system performance of the proposed scheme is verified with and without feedback delay, and the system feedback overhead is analyzed. Simulation results show that the proposed scheme achieves a good trade-off between system performance and the control information overhead on the feedback channel.
International Nuclear Information System (INIS)
Jing, Wang; Zhen-Yu, Tan; Xi-Kui, Ma; Jin-Feng, Gao
2009-01-01
A novel adaptive observer-based control scheme is presented for synchronization and suppression of a class of uncertain chaotic systems. First, an adaptive observer based on an orthogonal neural network is designed. Subsequently, sliding mode controllers based on the proposed adaptive observer are developed for synchronization and suppression of the uncertain chaotic systems. Theoretical analysis and numerical simulation show the effectiveness of the proposed scheme. (general)
User Behavior Prediction based Adaptive Policy Pre-fetching Scheme for Efficient Network Management
Yuanlong Cao; Jianfeng Guan; Wei Quan; Jia Zhao; Changqiao Xu; Hongke Zhang
2013-01-01
In recent years, network management has commonly been regarded as an essential and promising function for managing and improving the security of network infrastructures. However, as networks get faster and network-centric applications get more complex, there is still significant ongoing work addressing many challenges of network management. Traditional passive network censoring systems lack an adaptive policy pre-fetching scheme; as a result, preventing malicious behavior (such as hacker, malwa...
Auzinger, Winfried
2016-07-28
We present a number of new contributions to the topic of constructing efficient higher-order splitting methods for the numerical integration of evolution equations. Particular schemes are constructed via setup and solution of polynomial systems for the splitting coefficients. To this end we use and modify a recent approach for generating these systems for a large class of splittings. In particular, various types of pairs of schemes intended for use in adaptive integrators are constructed.
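The pairs of schemes mentioned above are higher-order splittings obtained from polynomial order conditions; a classical low-order pair illustrates how such a pair drives step-size adaptivity. Below, a Lie-Trotter (order 1) and Strang (order 2) splitting of the harmonic oscillator form an embedded pair whose difference serves as the local error estimate. The controller constants are conventional assumptions, not the paper's coefficients.

```python
import math

def lie_step(x, v, h):
    """First-order (Lie-Trotter) splitting for x' = v, v' = -x: kick, then drift."""
    v = v - h * x
    x = x + h * v
    return x, v

def strang_step(x, v, h):
    """Second-order (Strang) splitting: half kick, full drift, half kick."""
    v = v - 0.5 * h * x
    x = x + h * v
    v = v - 0.5 * h * x
    return x, v

def adaptive_split(x, v, t_end, h=0.1, tol=1e-6):
    """Embedded pair: advance with the Strang result, estimate the local error
    from the Lie-Trotter result, and adapt the step size accordingly."""
    t = 0.0
    while t < t_end:
        h = min(h, t_end - t)
        x_hi, v_hi = strang_step(x, v, h)
        x_lo, v_lo = lie_step(x, v, h)
        err = abs(x_hi - x_lo) + abs(v_hi - v_lo)
        if err <= tol:                       # accept the higher-order result
            t, x, v = t + h, x_hi, v_hi
        # standard step-size controller with growth/shrink limits
        h *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / (err + 1e-16))))
    return x, v
```

The higher-order pairs constructed in the work play the same role as this order-1/2 pair, but deliver the error estimate at far larger step sizes.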
Adaptive transmission schemes for MISO spectrum sharing systems: Tradeoffs and performance analysis
Bouida, Zied
2014-10-01
In this paper, we propose a number of adaptive transmission techniques in order to improve the performance of the secondary link in a spectrum sharing system. We first introduce the concept of minimum-selection maximum ratio transmission (MS-MRT) as an adaptive variation of the existing maximum ratio transmission (MRT) technique. While in MRT all available antennas are used for transmission, MS-MRT uses the minimum subset of antennas verifying both the interference constraint (IC) to the primary user and the bit error rate (BER) requirements. Similar to MRT, MS-MRT assumes that perfect channel state information (CSI) is available at the secondary transmitter (ST), which makes this scheme challenging from a practical point of view. To overcome this challenge, we propose another transmission technique based on orthogonal space-time block codes with transmit antenna selection (TAS). This technique uses the full-rate full-diversity Alamouti scheme in order to maximize the secondary link's transmission rate. The performance of these techniques is analyzed in terms of the average spectral efficiency (ASE), average number of transmit antennas, average delay, average BER, and outage performance. In order to give the motivation behind these analytical results, the tradeoffs offered by the proposed schemes are summarized and then demonstrated through several numerical examples.
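The MS-MRT selection logic can be sketched as follows: try antenna subsets of growing size (strongest secondary-channel gains first), back off the transmit power to satisfy the interference constraint, and stop at the smallest subset that still meets an SNR threshold standing in for the BER requirement. All parameter names and the SNR-threshold simplification are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ms_mrt(h, g, p_max, q_int, snr_min):
    """MS-MRT sketch. h: secondary-link channel gains per antenna; g: gains to
    the primary user; p_max: transmit power budget; q_int: interference
    constraint at the primary user; snr_min: BER-equivalent SNR threshold.
    Returns the selected antenna indices and the transmit power, or (None, 0)
    if no subset works."""
    order = np.argsort(-np.abs(h))           # strongest antennas first
    for n in range(1, len(h) + 1):
        idx = order[:n]
        w = np.conj(h[idx]) / np.linalg.norm(h[idx])     # MRT beamformer
        leak = np.abs(np.dot(w, g[idx])) ** 2            # gain toward primary
        p = min(p_max, q_int / (leak + 1e-12))           # respect the IC
        snr = p * np.linalg.norm(h[idx]) ** 2            # MRT output SNR
        if snr >= snr_min:
            return idx, p
    return None, 0.0
```

Stopping at the first feasible subset is what keeps the average number of active antennas (one of the metrics analyzed above) low.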
A Modification of the Fuzzy Logic Based DASH Adaptation Scheme for Performance Improvement
Directory of Open Access Journals (Sweden)
Hyun Jun Kim
2018-01-01
Full Text Available We propose a modification of the fuzzy logic based DASH adaptation scheme (FDASH) for seamless media service in time-varying network conditions. The proposed scheme (mFDASH) selects a more appropriate bit-rate for the next segment by modifying the Fuzzy Logic Controller (FLC) and estimates the available bandwidth more accurately than the FDASH scheme by using History-Based TCP Throughput Estimation. Moreover, mFDASH reduces the number of video bit-rate changes by applying a Segment Bit-Rate Filtering Module (SBFM) and employs a Start Mechanism for clients to provide high-quality video in the very beginning stage of the streaming service. Lastly, a Sleeping Mechanism is applied to avoid any expected buffer overflow. We then use the NS-3 Network Simulator to verify the performance of mFDASH. From the experimental results, mFDASH shows no buffer overflow within the limited buffer size, which is not guaranteed in FDASH. Also, we confirm that mFDASH provides the highest QoE to DASH clients among the three schemes (mFDASH, FDASH, and SVAA) in Point-to-Point, Wi-Fi, and LTE networks, respectively.
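An SBFM-style filter that suppresses bit-rate oscillations might look like the sketch below: a newly proposed bit-rate is only committed after it has been proposed for several consecutive segments. The hold count and the interface are assumptions; the actual module sits between the FLC and the segment requester.

```python
class SegmentBitrateFilter:
    """Sketch of a segment bit-rate filter: only commit to a new bit-rate after
    it has been proposed for `hold` consecutive segments, suppressing the rapid
    switches a raw controller can produce. `hold` is an assumed knob."""
    def __init__(self, initial, hold=2):
        self.current, self.hold = initial, hold
        self.candidate, self.count = initial, 0

    def update(self, proposed):
        if proposed == self.current:
            self.candidate, self.count = proposed, 0
        elif proposed == self.candidate:
            self.count += 1
            if self.count >= self.hold:
                self.current, self.count = proposed, 0
        else:
            self.candidate, self.count = proposed, 1
        return self.current
```

Fewer bit-rate changes at the cost of a short confirmation delay is the trade-off such a module makes; the reduced switch count is one of the gains reported for mFDASH.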
An operator model-based filtering scheme
International Nuclear Information System (INIS)
Sawhney, R.S.; Dodds, H.L.; Schryer, J.C.
1990-01-01
This paper presents a diagnostic model developed at Oak Ridge National Laboratory (ORNL) for off-normal nuclear power plant events. The diagnostic model is intended to serve as an embedded module of a cognitive model of the human operator, one application of which could be to assist control room operators in correctly responding to off-normal events by providing a rapid and accurate assessment of alarm patterns and parameter trends. The sequential filter model is comprised of two distinct subsystems --- an alarm analysis followed by an analysis of interpreted plant signals. During the alarm analysis phase, the alarm pattern is evaluated to generate hypotheses of possible initiating events in order of likelihood of occurrence. Each hypothesis is further evaluated through analysis of the current trends of state variables in order to validate/reject (in the form of increased/decreased certainty factor) the given hypothesis. 7 refs., 4 figs
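The alarm-analysis phase described above can be sketched as a signature-matching step that orders candidate initiating events by likelihood. The event names, signatures, and Jaccard scoring below are illustrative assumptions; the ORNL model's actual knowledge base and certainty-factor updates are not reproduced here.

```python
def rank_hypotheses(alarm_pattern, signatures):
    """Score each candidate initiating event by the overlap between the
    observed alarm pattern and the event's expected alarm signature, and
    return the hypotheses ordered by likelihood (Jaccard similarity)."""
    scores = {}
    for event, sig in signatures.items():
        hit = len(alarm_pattern & sig)
        scores[event] = hit / len(sig | alarm_pattern)
    return sorted(scores, key=scores.get, reverse=True)
```

In the sequential filter, each hypothesis on this ranked list would then be confirmed or rejected against the trends of the interpreted plant signals.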
Decentralized & Adaptive Load-Frequency Control Scheme of Variable Speed Wind Turbines
DEFF Research Database (Denmark)
Hoseinzadeh, Bakhtyar; Silva, Filipe Miguel Faria da; Bak, Claus Leth
2014-01-01
In power systems with high penetration of Wind Power (WP), transferring a part of the Load Frequency Control (LFC) burden to variable speed Wind Turbines (WTs) is inevitable. The conventional LFC schemes merely rely on frequency information, and since frequency is a common variable throughout the network ... and therefore determining the contribution factor of each individual WT to gain an adaptive LFC approach. The Electrical Distance (ED) concept confirms that the locally measured voltage decay is a proper criterion of closeness to the disturbance place. Numerical simulations carried out in DigSilent Power...
Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.
Hu, Sudeng; Wang, Hanli; Kwong, Sam
2012-04-01
In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (QP) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal QP clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed QP clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
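A minimal QP-clip sketch along these lines: confine the rate controller's target QP to a window around the previous frame's QP, and tighten the window on one side when the encoder buffer approaches overflow or underflow. The window width and buffer thresholds are assumed values, not the optimal range derived in the paper.

```python
def clip_qp(qp_target, qp_prev, buffer_fullness, window=2, low=0.2, high=0.8):
    """Clip the rate controller's target QP to [qp_prev - window, qp_prev + window]
    for smooth quality, but never let QP drop when the buffer is nearly full
    (overflow risk) nor rise when it is nearly empty (underflow risk)."""
    lo, hi = qp_prev - window, qp_prev + window
    if buffer_fullness > high:    # near overflow: spend fewer bits, QP >= prev
        lo = qp_prev
    elif buffer_fullness < low:   # near underflow: spend more bits, QP <= prev
        hi = qp_prev
    return max(lo, min(hi, qp_target))
```

The narrow window is what keeps the quality smooth; the one-sided tightening is the buffer-safety constraint the scheme optimizes against.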
Iteration schemes for parallelizing models of superconductivity
Energy Technology Data Exchange (ETDEWEB)
Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
Analysis of Adaptive Control Scheme in IEEE 802.11 and IEEE 802.11e Wireless LANs
Lee, Bih-Hwang; Lai, Hui-Cheng
In order to achieve the prioritized quality of service (QoS) guarantee, the IEEE 802.11e EDCAF (the enhanced distributed channel access function) provides distinguished services by configuring different QoS parameters for different access categories (ACs). An admission control scheme is needed to maximize the utilization of the wireless channel. Most papers study throughput improvement by solving a complicated multidimensional Markov-chain model. In this paper, we introduce a back-off model to study the transmission probability under different arbitration interframe space numbers (AIFSN) and minimum contention window sizes (CWmin). We propose an adaptive control scheme (ACS) to dynamically update AIFSN and CWmin based on periodical monitoring of the current channel status and QoS requirements, to achieve the specified service differentiation at access points (APs). This paper provides an effective tuning mechanism for improving QoS in WLANs. Analytical and simulation results show that the proposed scheme outperforms the basic EDCAF in terms of throughput and service differentiation, especially at high collision rates.
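A periodic EDCA-parameter update of the kind described might be sketched as follows: back off (larger CWmin, larger AIFSN) when the monitored collision rate exceeds a target, and become more aggressive when the channel is underused. The target rate and the bounds are assumed values, not those derived from the back-off model above.

```python
def adapt_edca(aifsn, cw_min, collision_rate, target=0.1,
               aifsn_bounds=(2, 7), cw_bounds=(15, 255)):
    """One periodic update of the EDCA parameters for an access category:
    double CWmin (in the usual 2^k - 1 pattern) and raise AIFSN when collisions
    are too frequent; halve CWmin and lower AIFSN when the channel is idle."""
    if collision_rate > target:
        cw_min = min(cw_bounds[1], 2 * cw_min + 1)
        aifsn = min(aifsn_bounds[1], aifsn + 1)
    elif collision_rate < target / 2:
        cw_min = max(cw_bounds[0], (cw_min - 1) // 2)
        aifsn = max(aifsn_bounds[0], aifsn - 1)
    return aifsn, cw_min
```

Applying a stricter target (or wider bounds) to the voice AC than to best-effort traffic is how such a loop realizes the service differentiation the abstract targets.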
Directory of Open Access Journals (Sweden)
Sivakumar Dakshinamurthy
2010-07-01
Full Text Available A non-identifier-based adaptive PI controller is designed using a gradient approach to improve the performance of a control system when device aging and environmental factors degrade the efficiency of the process. The design approach is based on the model reference adaptive control technique. The controller drives the difference (error between the process response and desired model output to zero asymptotically at a rate constrained by the desired characteristics of the model. The tuning rules are designed and justified for a non-linear process with dominant dynamics of second order. The advantage of this method for tracking and regulation compared to adaptive MIT control was validated in real time by conducting experiments on a laboratory air flow control system using the dSPACE interface in the SIMULINK software. The experimental results show that the process with adaptive PI controller has better dynamic performance and robustness than that with traditional adaptive MIT controller.
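The MIT (gradient) rule used above as the comparison baseline can be shown on the simplest possible case, a static process gain: the rule d(theta)/dt = -gamma * e * y_m drives the model-following error to zero. The gains and the static-gain simplification are illustrative; the paper's tuning rules target a second-order process.

```python
def mit_adaptation(k_p=2.0, u_c=1.0, gamma=1.0, dt=0.01, steps=2000):
    """MIT-rule sketch for a static gain: the plant output y = k_p*theta*u_c
    must track the reference model y_m = u_c. The gradient rule
    d(theta)/dt = -gamma * e * y_m drives e = y - y_m to zero, so the
    feedforward gain theta converges to 1/k_p (here 0.5)."""
    theta = 0.0
    for _ in range(steps):
        y_m = u_c                  # reference model output
        y = k_p * theta * u_c      # plant output with current gain
        e = y - y_m                # model-following error
        theta -= gamma * e * y_m * dt
    return theta
```

The convergence rate is set by gamma, which is exactly the sensitivity-to-speed trade-off the adaptive PI design above improves on in its real-time experiments.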
Adaptive rate selection scheme for video transmission to resolve IEEE 802.11 performance anomaly
Tang, Guijin; Zhu, Xiuchang
2011-10-01
Multi-rate transmission may lead to performance anomaly in an IEEE 802.11 network, decreasing the throughputs of all the higher-rate stations. This paper proposes an adaptive rate selection scheme for video service when performance anomaly occurs. Considering that video is tolerant to packet loss, we actively drop several packets so as to select rates as high as possible for transmitting the remaining packets. Experiments show that our algorithm can decrease the delay and jitter of video and improve the system throughput as well.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
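The multiplicative structure of SPPT discussed above can be sketched in one line of arithmetic: the parametrised tendency T is replaced by (1 + r) T, with r a temporally correlated random pattern. The AR(1) coefficients below are illustrative assumptions; the operational scheme additionally correlates r in space across several scales.

```python
import numpy as np

def sppt_tendency(tendency, r_prev, phi=0.95, sigma=0.2, rng=None):
    """SPPT-style multiplicative perturbation: return (1 + r) * tendency, where
    r evolves as a bounded AR(1) process so perturbations are correlated in
    time. Note the multiplicative form leaves zero tendencies untouched."""
    if rng is None:
        rng = np.random.default_rng(0)
    r = phi * r_prev + sigma * np.sqrt(1 - phi ** 2) * rng.standard_normal()
    r = np.clip(r, -1.0, 1.0)   # keep the perturbed tendency sign-preserving
    return (1.0 + r) * tendency, r
```

That zeros stay zero and signs are preserved is the physically motivated property whose justification the coarse-grained high-resolution tendencies are used to test.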
Multi-model ensemble schemes for predicting northeast monsoon ...
Indian Academy of Sciences (India)
An attempt has been made to improve the accuracy of predicted rainfall using three different multi-model ensemble (MME) schemes, viz., simple arithmetic mean of models (EM), principal component regression (PCR) and singular value decomposition based multiple linear regressions (SVD). It is found out that among ...
Inflationary gravitational waves in collapse scheme models
Energy Technology Data Exchange (ETDEWEB)
Mariani, Mauro, E-mail: mariani@carina.fcaglp.unlp.edu.ar [Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, 1900 La Plata (Argentina); Bengochea, Gabriel R., E-mail: gabriel@iafe.uba.ar [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC 67, Suc. 28, 1428 Buenos Aires (Argentina); León, Gabriel, E-mail: gleon@df.uba.ar [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria – Pab. I, 1428 Buenos Aires (Argentina)
2016-01-10
The inflationary paradigm is an important cornerstone of the concordance cosmological model. However, standard inflation cannot fully address the transition from an early homogeneous and isotropic stage, to another one lacking such symmetries corresponding to our present universe. In previous works, a self-induced collapse of the wave function has been suggested as the missing ingredient of inflation. Most of the analysis regarding the collapse hypothesis has been solely focused on the characteristics of the spectrum associated to scalar perturbations, and within a semiclassical gravity framework. In this Letter, working in terms of a joint metric-matter quantization for inflation, we calculate, for the first time, the tensor power spectrum and the tensor-to-scalar ratio corresponding to the amplitude of primordial gravitational waves resulting from considering a generic self-induced collapse.
An Adaptive Fault-Tolerant Communication Scheme for Body Sensor Networks
Directory of Open Access Journals (Sweden)
Zichuan Xu
2010-10-01
Full Text Available A high degree of reliability for critical data transmission is required in body sensor networks (BSNs). However, BSNs are usually vulnerable to channel impairments due to the body fading effect and RF interference, which may potentially cause data transmission to be unreliable. In this paper, an adaptive and flexible fault-tolerant communication scheme for BSNs, namely AFTCS, is proposed. AFTCS adopts a channel bandwidth reservation strategy to provide reliable data transmission when channel impairments occur. In order to fulfill the reliability requirements of critical sensors, fault-tolerant priority and queue are employed to adaptively adjust the channel bandwidth allocation. Simulation results show that AFTCS can alleviate the effect of channel impairments, while yielding lower packet loss rate and latency for critical sensors at runtime.
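A priority-weighted bandwidth reservation of the kind AFTCS uses might be sketched as below: critical sensors get slots first, in proportion to their fault-tolerant priority and up to a reservation cap, and non-critical traffic shares the remainder. The field names, the 80% cap, and the slot model are illustrative assumptions.

```python
def allocate_slots(total_slots, sensors):
    """Reserve channel slots for critical sensors first, scaling each critical
    sensor's share by its priority (capped at 80% of the frame), then split the
    spare slots evenly among non-critical sensors."""
    critical = [s for s in sensors if s["critical"]]
    weight = sum(s["priority"] for s in critical) or 1
    alloc, used = {}, 0
    for s in critical:
        n = int(total_slots * s["priority"] / weight * 0.8)  # reservation cap
        alloc[s["id"]], used = n, used + n
    for s in sensors:
        if not s["critical"]:
            alloc.setdefault(s["id"], 0)
    noncrit = [s for s in sensors if not s["critical"]]
    spare = total_slots - used
    for s in noncrit:
        alloc[s["id"]] += spare // max(len(noncrit), 1)
    return alloc
```

Raising a sensor's priority when its packet loss crosses a threshold would make the reservation adaptive, which is the runtime behaviour the simulations above evaluate.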
Hybrid Adaptive Flight Control with Model Inversion Adaptation
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
Adaptive Fault-Tolerant Control for Flight Systems with Input Saturation and Model Mismatch
Directory of Open Access Journals (Sweden)
Man Wang
2013-01-01
the original reference model may not be appropriate. Under this circumstance, an adaptive reference model which can also provide satisfactory performance is designed. Simulations of a flight control example are given to illustrate the effectiveness of the proposed scheme.
An Adaptive Window-setting Scheme for Segmentation of Bladder Tumor Surface via MR Cystography
Duan, Chaijie; Liu, Fanghua; Xiao, Ping; Lv, Guoqing
2012-01-01
This paper proposes an adaptive window-setting scheme for non-invasive detection and segmentation of the bladder tumor surface in T1-weighted magnetic resonance (MR) images. The inner border of the bladder wall is first covered by a group of ball-shaped detecting windows with different radii. By extracting the candidate tumor windows and excluding the false positive (FP) candidates, the entire bladder tumor surface is detected and segmented by the remaining windows. Different from previous bladder tumor detection methods, which mostly focus on the existence of a tumor, this paper emphasizes segmenting the entire tumor surface in addition to detecting the presence of the tumor. The presented scheme was validated on 10 clinical T1-weighted MR image datasets (5 volunteers and 5 patients). The bladder tumor surfaces and the normal bladder wall inner borders in the ten datasets were covered by 223 and 10491 windows, respectively. Such a large number of detecting windows makes the validation statistically meaningful. In the FP reduction step, the best feature combination was obtained by using receiver operating characteristic (ROC) analysis. The validation results demonstrated the potential of the presented scheme in segmenting the entire tumor surface with high sensitivity and low FP rate. This work inherits our previous results on automatic segmentation of the bladder wall and will be an important element in our MR-based virtual cystoscopy or MR cystography system. PMID:22645274
Directory of Open Access Journals (Sweden)
Gil Gye-Tae
2010-01-01
Full Text Available We deal with a cost-based adaptive handover hysteresis scheme for horizontal handover decision strategies, as one of the self-optimization techniques that can minimize the handover failure rate (HFR) in the 3rd generation partnership project (3GPP) long-term evolution (LTE) system based on network-controlled hard handover. In particular, for real-time operation, we propose an adaptive hysteresis scheme with a simplified cost function considering some dominant factors closely related to HFR performance, such as the load difference between the target and serving cells, the velocity of the user equipment (UE), and the service type. With the proposed scheme, a proper hysteresis value based on the dominant factors is easily obtained, so that the handover parameter optimization for minimizing the HFR can be effectively achieved. Simulation results show that the proposed scheme supports better HFR performance than the conventional schemes.
Peano—A Traversal and Storage Scheme for Octree-Like Adaptive Cartesian Multiscale Grids
Weinzierl, Tobias
2011-01-01
Almost all approaches to solving partial differential equations (PDEs) are based upon a spatial discretization of the computational domain: a grid. This paper presents an algorithm to generate, store, and traverse a hierarchy of d-dimensional Cartesian grids represented by a (k = 3)-spacetree, a generalization of the well-known octree concept, and it also shows the correctness of the approach. These grids may change their adaptive structure throughout the traversal. The algorithm uses 2d + 4 stacks as data structures for both cells and vertices, and the storage requirements for the pure grid reduce to one bit per vertex for both the complete grid connectivity structure and the multilevel grid relations. Since the traversal algorithm uses only stacks, the algorithm's cache hit rate is continually higher than 99.9 percent, and the runtime per vertex remains almost constant; i.e., it does not depend on the overall number of vertices or the adaptivity pattern. We use the algorithmic approach as the fundamental concept for mesh management for d-dimensional PDEs and for a matrix-free PDE solver represented by a compact discrete 3^d-point operator. In the latter case, one can implement a Jacobi smoother, a Krylov solver, or a geometric multigrid scheme within the presented traversal scheme, which inherits the low memory requirements and the good memory access characteristics directly. © 2011 Society for Industrial and Applied Mathematics.
Introducing a moisture scheme to a nonhydrostatic sigma coordinate model
CSIR Research Space (South Africa)
Bopape, Mary-Jane M
2011-09-01
Full Text Available and precipitation in mid-latitude cyclones. VII: A model for the 'seeder-feeder' process in warm-frontal rainbands. Journal of the Atmospheric Sciences, 40, 1185-1206. Stensrud DJ, 2007: Parameterization schemes. Keys to understanding numerical weather...
Acharya Nachiketa Multi-model ensemble schemes for predicting ...
Indian Academy of Sciences (India)
AUTHOR INDEX. Acharya Nachiketa. Multi-model ensemble schemes for predicting northeast monsoon rainfall over peninsular India. 795.
A new parallelization algorithm of ocean model with explicit scheme
Fu, X. D.
2017-08-01
This paper focuses on the parallelization of an ocean model with an explicit scheme, which is one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple, and that the value at a given grid point depends only on grid points at the previous time step, which means that one doesn't need to solve sparse linear equations in the process of solving the governing equations of the ocean model. Aiming at these characteristics of the explicit scheme, this paper designs a parallel algorithm named halo cells update, which requires only tiny modifications of the original ocean model and little change to its space step and time step, and which parallelizes the ocean model by designing a transmission module between sub-domains. This paper takes GRGO (Global Reduced Gravity Ocean model) as an example to implement its parallelization with halo update. The results demonstrate that higher speedups can be achieved at different problem sizes.
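The halo-cell idea can be sketched in one dimension: each sub-domain carries one ghost cell per side, filled from its neighbour's boundary value before the explicit stencil update. This serial sketch mimics the transmission module between sub-domains (an MPI implementation would exchange the same values in parallel); the diffusion stencil and boundary treatment are illustrative assumptions.

```python
import numpy as np

def halo_exchange_1d(subdomains):
    """Fill each sub-domain's ghost cells from its neighbours' boundary
    interior values; outer boundaries copy their nearest interior cell."""
    for i, u in enumerate(subdomains):
        u[0] = subdomains[i - 1][-2] if i > 0 else u[1]                     # left ghost
        u[-1] = subdomains[i + 1][1] if i < len(subdomains) - 1 else u[-2]  # right ghost

def explicit_step(u, alpha=0.1):
    """Explicit diffusion update on interior cells (ghosts already filled)."""
    u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
```

Because the explicit stencil only reads the previous time step, one ghost-cell exchange per step is the entire communication cost, which is why the scheme parallelizes with such small modifications.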
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motors (IM) for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The operation of MBC before and after load transitions is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
M. Louta
2014-01-01
Full Text Available WiMAX (Worldwide Interoperability for Microwave Access) constitutes a candidate networking technology towards the 4G vision realization. By adopting the Orthogonal Frequency Division Multiple Access (OFDMA) technique, the latest IEEE 802.16x amendments manage to provide QoS-aware access services with full mobility support. A number of interesting scheduling and mapping schemes have been proposed in the research literature. However, they neglect a considerable asset of OFDMA-based wireless systems: the dynamic adjustment of the downlink-to-uplink width ratio. In order to fully exploit the supported mobile WiMAX features, we design, develop, and evaluate a rigorous adaptive model, which inherits its main aspects from the reinforcement learning field. The proposed model endeavours to efficiently determine the downlink-to-uplink width ratio, on a frame-by-frame basis, taking into account both the downlink and uplink traffic in the Base Station (BS). Extensive evaluation results indicate that the proposed model succeeds in providing quite accurate estimations, keeping the average error rate below 15% with respect to the optimal sub-frame configurations. Additionally, it presents improved performance compared to other learning methods (e.g., learning automata) and notable improvements compared to static schemes that maintain a fixed predefined ratio, in terms of service ratio and resource utilization.
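For intuition on the learning-based baseline the abstract mentions, a linear reward-inaction learning automaton choosing a downlink-to-uplink ratio each frame can be sketched as follows; the candidate ratios, reward signal, and learning rate are invented for the example, and the paper's own model is a richer reinforcement-learning design.

```python
import random

RATIOS = [1.0, 1.5, 2.0, 3.0]          # candidate DL:UL width ratios (illustrative)

def choose(probs, rng):
    """Sample an action index according to the current probabilities."""
    return rng.choices(range(len(RATIOS)), weights=probs)[0]

def update(probs, action, reward, lr=0.1):
    """L_RI rule: on reward, shift probability mass toward the chosen
    action; on failure, leave the probabilities unchanged."""
    if reward:
        for i in range(len(probs)):
            if i == action:
                probs[i] += lr * (1.0 - probs[i])
            else:
                probs[i] *= (1.0 - lr)
```

Against an environment that rewards only one ratio, the probability vector concentrates on that ratio while its sum stays exactly 1, so the automaton converges to the traffic-matched sub-frame split.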
An integrated urban drainage system model for assessing renovation scheme.
Dong, X; Zeng, S; Chen, J; Zhao, D
2012-01-01
Due to sustained economic growth in China over the last three decades, urbanization has been on a rapidly expanding track. In recent years, regional industrial relocations have also accelerated across the country from the east coast to the west inland. These changes have led to a large-scale redesign of urban infrastructures, including the drainage system. To steer the reconstructed infrastructures toward better sustainability, a tool is required for assessing the efficiency and environmental performance of different renovation schemes. This paper developed an integrated dynamic modeling tool, which consists of three models describing the sewer, the wastewater treatment plant (WWTP), and the receiving water body, respectively. Three auxiliary modules were also incorporated to conceptualize the model, calibrate the simulations, and analyze the results. The developed integrated modeling tool was applied to a case study in Shenzhen City, one of the most dynamic cities in China, which faces considerable challenges of environmental degradation. The renovation scheme proposed to improve the environmental performance of Shenzhen City's urban drainage system was modeled and evaluated. The simulation results supplied some suggestions for further improvement of the renovation scheme.
Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD
Yee, H. C.; Sjoegreen, B.
2005-01-01
The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical-problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the numerical error in the divergence of the magnetic field (Div(B)) for high order shock-capturing methods poses extra requirements for the considered type of CPU-intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features by distinct sensors that signal the appropriate type and amount of numerical dissipation/filter where needed, leaving the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations, and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multiresolution wavelets (WAV) (for the above types of flow features). These filters also provide a natural and efficient way to minimize the Div(B) numerical error.
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Energy Technology Data Exchange (ETDEWEB)
Klassen, Mikhail; Pudritz, Ralph E. [Department of Physics and Astronomy, McMaster University 1280 Main Street W, Hamilton, ON L8S 4M1 (Canada); Kuiper, Rolf [Max Planck Institute for Astronomy Königstuhl 17, D-69117 Heidelberg (Germany); Peters, Thomas [Institut für Computergestützte Wissenschaften, Universität Zürich Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Banerjee, Robi; Buntemeyer, Lars, E-mail: klassm@mcmaster.ca [Hamburger Sternwarte, Universität Hamburg Gojenbergsweg 112, D-21029 Hamburg (Germany)
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Directory of Open Access Journals (Sweden)
Xiaoyi Zhou
2018-01-01
Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) by training on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual in the first place of each generation goes directly into the next generation, and the best individual in the second position participates in the crossover and the mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)) on the watermarked image, the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high quality in imperceptibility and robustness, and hence is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.
International Nuclear Information System (INIS)
Handa, Himesh; Sharma, B.B.
2016-01-01
Highlights: • New adaptive control design strategy to address chaotic system synchronization in master-slave configuration. • To derive control structure using model reference adaptive control like approach. • Extension of results to address general case with known and unknown system parameters. • Application of proposed strategy to chaotic systems. - Abstract: In this paper, a new adaptive feedback control design technique for the synchronization of a class of chaotic systems in master–slave configuration is proposed. The controller parameters are assumed to be unknown and are evolved using adaptation laws so as to achieve synchronization. To replicate real system operation, uncertainties are considered in both master as well as slave system parameters, and adaptation laws for uncertain parameters are analytically derived using Lyapunov stability theory. The proposed strategy is derived by mimicking a model reference adaptive control like structure for the synchronization problem. To validate the methodology, two Genesio–Tesi systems and two Rossler's Prototype-4 systems are considered in master–slave configuration for synchronization. The analysis is done first with known system parameters, and then uncertainties in system parameters are considered. Finally, detailed simulation results are provided to illustrate the effectiveness of the proposed results.
Goldberg, Niels; Ospald, Felix; Schneider, Matti
2017-10-01
In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.
Model building by Coset Space Dimensional Reduction scheme
Jittoh, Toshifumi; Koike, Masafumi; Nomura, Takaaki; Sato, Joe; Shimomura, Takashi
2009-04-01
We investigate the gauge-Higgs unification models within the scheme of the coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime whose extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidates for the coset space, the gauge group in fourteen dimensions, and the fermion representation. Of the seventeen, ten models led to SO(10)(×U(1)) GUT-like models after dimensional reduction, three models led to SU(5)×U(1) GUT-like models, and four to SU(3)×SU(2)×U(1)×U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion contents of such models are listed.
Patre, Parag; Joshi, Suresh M.
2011-01-01
Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has the knowledge of all reference model states. The mismatch is estimated online for each subsystem, and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.
Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2008-01-07
Models involving stochastic differential equations (SDEs) play a prominent role in a wide range of applications where systems are not at the thermodynamic limit, for example, biological population dynamics. Therefore there is a need for numerical schemes that are capable of accurately and efficiently integrating systems of SDEs. In this work we introduce a variable step-size algorithm and apply it to systems of stiff SDEs with multiple multiplicative noise. The algorithm is validated using a subclass of SDEs called chemical Langevin equations that appear in the description of dilute chemical kinetics models, with important applications mainly in biology. Three representative examples are used to test and report on the behavior of the proposed scheme. We demonstrate the advantages and disadvantages of the proposed method over fixed time step integration schemes, showing that the adaptive time step method is considerably more stable than fixed step methods, with no excessive additional computational overhead.
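To illustrate the general idea, here is a minimal adaptive-step Euler-Maruyama integrator whose step shrinks where the drift is large (stiff regions) and relaxes toward a maximum elsewhere; this simple drift-based rule is an invented stand-in, not the error-control strategy of the paper.

```python
import math
import random

def euler_maruyama_adaptive(f, g, x0, t_end, tol=1e-2, dt_max=0.1, seed=0):
    """Integrate dx = f(x) dt + g(x) dW from t = 0 to t_end with a
    heuristic step rule dt ~ tol / |f(x)|, capped at dt_max."""
    rng = random.Random(seed)
    t, x, steps = 0.0, x0, 0
    while t < t_end:
        # small steps where the drift is stiff, large steps elsewhere
        dt = min(dt_max, tol / (abs(f(x)) + 1e-12), t_end - t)
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x += f(x) * dt + g(x) * dW
        t += dt
        steps += 1
    return x, steps
```

With the noise switched off (g = 0) the integrator reduces to adaptive explicit Euler and tracks the exact exponential decay of a stiff linear test problem, which is a convenient sanity check before adding multiplicative noise.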
A High-Capacity Image Data Hiding Scheme Using Adaptive LSB Substitution
Directory of Open Access Journals (Sweden)
H. Yang
2009-12-01
Full Text Available Many existing steganographic methods hide more secret data in edged areas than in smooth areas of the host image, which does not differentiate textures from edges and causes serious degradation in actual edge areas. To avoid abrupt changes in image edge areas, as well as to achieve better quality of the stego-image, a novel image data hiding technique by adaptive Least Significant Bit (LSB) substitution is proposed in this paper. The scheme exploits the brightness, edges, and texture masking of the host image to estimate the number k of LSBs for data hiding. Pixels in noise-non-sensitive regions are embedded by a k-bit LSB substitution with a larger value of k than that of the pixels in noise-sensitive regions. Moreover, an optimal pixel adjustment process is used to enhance the stego-image visual quality obtained by the simple LSB substitution method. To ensure that the adaptive number k of LSBs remains unchanged after pixel modification, the LSB number is computed from the high-order bits rather than all the bits of the image pixel value. The theoretical analyses and experimental results show that the proposed method achieves higher embedding capacity and better stego-image quality compared with some existing LSB methods.
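The last point, deriving k from the high-order bits only so that the embedder and extractor agree on k even after the low bits change, can be sketched as follows; the three-level capacity rule is invented for the example, whereas the paper derives k from brightness, edge, and texture masking.

```python
def capacity(pixel):
    """Number of LSBs to use, computed from the 4 high-order bits only,
    which are unaffected by any k <= 4 embedding."""
    high = pixel >> 4
    if high >= 12:
        return 4          # bright/busy region: hide more
    if high >= 4:
        return 3
    return 2              # dark/smooth region: hide less

def embed(pixels, bits):
    out, i = [], 0
    for p in pixels:
        k = capacity(p)
        chunk = bits[i:i + k].ljust(k, "0")          # pad the final chunk
        out.append((p & ~((1 << k) - 1)) | int(chunk, 2))
        i += k
    return out

def extract(stego, nbits):
    bits = "".join(
        format(p & ((1 << capacity(p)) - 1), "0{}b".format(capacity(p)))
        for p in stego)
    return bits[:nbits]
```

Because embedding rewrites at most 4 low bits, `pixel >> 4` is identical before and after, so both sides compute the same k without any side channel.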
Fuzzy adaptive integration scheme for low-cost SINS/GPS navigation system
Nourmohammadi, Hossein; Keighobadi, Jafar
2018-01-01
Due to weak stand-alone accuracy as well as poor run-to-run stability of micro-electro mechanical system (MEMS)-based inertial sensors, special approaches are required to integrate low-cost strap-down inertial navigation system (SINS) with global positioning system (GPS), particularly in long-term applications. This paper aims to enhance long-term performance of conventional SINS/GPS navigation systems using a fuzzy adaptive integration scheme. The main concept behind the proposed adaptive integration is the good performance of attitude-heading reference system (AHRS) in low-accelerated motions and its degradation in maneuvered or accelerated motions. Depending on vehicle maneuvers, gravity-based attitude angles can be intelligently utilized to improve orientation estimation in the SINS. Knowledge-based fuzzy inference system is developed for decision-making between the AHRS and the SINS according to vehicle maneuvering conditions. Inertial measurements are the main input data of the fuzzy system to determine the maneuvering level during the vehicle motions. Accordingly, appropriate weighting coefficients are produced to combine the SINS/GPS and the AHRS, efficiently. The assessment of the proposed integrated navigation system is conducted via real data in airborne tests.
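A toy sketch of the decision idea: blend the AHRS (gravity-based) and SINS attitude estimates with a weight that falls off as the sensed specific force departs from 1 g. The piecewise-linear membership below is a crude stand-in for the paper's fuzzy inference system, and the thresholds are invented.

```python
G = 9.81  # gravitational acceleration, m/s^2

def ahrs_weight(accel_norm, low=0.05, high=0.5):
    """Weight of the AHRS estimate: 1 near rest (gravity direction is
    trustworthy), 0 under strong maneuver (trust the SINS instead)."""
    dev = abs(accel_norm - G) / G        # relative deviation from 1 g
    if dev <= low:
        return 1.0
    if dev >= high:
        return 0.0
    return (high - dev) / (high - low)   # linear transition region

def fuse(att_sins, att_ahrs, accel_norm):
    """Weighted combination of the two attitude estimates (scalar demo)."""
    w = ahrs_weight(accel_norm)
    return w * att_ahrs + (1.0 - w) * att_sins
```

A full implementation would apply the weight per attitude channel and feed several inertial features into the fuzzy system, but the maneuver-dependent switching behaviour is already visible in this scalar form.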
An Industrial Model Based Disturbance Feedback Control Scheme
DEFF Research Database (Denmark)
Kawai, Fukiko; Nakazawa, Chikashi; Vinther, Kasper
2014-01-01
This paper presents a model based disturbance feedback control scheme. Industrial process systems have traditionally been controlled by using relay and PID controllers. However, these controllers are affected by disturbances and model errors, and these effects degrade control performance. The authors propose a new control method that can decrease the negative impact of disturbances and model errors. The control method is motivated by industrial practice at Fuji Electric. Simulation tests are examined with a conventional PID controller and the disturbance feedback control. The simulation results demonstrate the effectiveness of the proposed method compared with the conventional PID controller.
Generalized Roe's numerical scheme for a two-fluid model
International Nuclear Information System (INIS)
Toumi, I.; Raymond, P.
1993-01-01
This paper is devoted to a mathematical and numerical study of a six-equation two-fluid model. We prove that the model is strictly hyperbolic due to the inclusion of the virtual mass force term in the phasic momentum equations. The two-fluid model is naturally written in a nonconservative form. To solve the nonlinear Riemann problem for this nonconservative hyperbolic system, a generalized Roe approximate Riemann solver is used, based on a linearization of the nonconservative terms. A Godunov-type numerical scheme is built using this approximate Riemann solver. 10 refs., 5 figs.
Directory of Open Access Journals (Sweden)
Yu Ya-Huei
2007-01-01
Full Text Available Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a pre-encoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of the content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying
2014-05-01
A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We introduce a fusion scheme to gain a better understanding of, and a fusion method for, a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basic functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with the FVC2004-DB2a datasets. The experimental results demonstrate that, as well as achieving more powerful local Gabor features of the multimodalities and obtaining better recognition performance through their fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
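The temporal part of the downward coupling can be sketched in a few lines: the parent model produces heads at coarse times t0 and t1, and the child model, running several smaller steps in between, obtains its boundary heads at each sub-step by linear interpolation (function and variable names here are illustrative, not from the paper).

```python
def child_boundary_heads(h_parent_t0, h_parent_t1, t0, t1, t):
    """Linearly interpolate parent-scale heads at the child boundary
    nodes onto a child sub-step time t0 <= t <= t1."""
    w = (t - t0) / (t1 - t0)
    return [(1.0 - w) * a + w * b for a, b in zip(h_parent_t0, h_parent_t1)]

# one parent step split into 4 child sub-steps at two boundary nodes
h0, h1 = [10.0, 12.0], [11.0, 16.0]
subs = [child_boundary_heads(h0, h1, 0.0, 1.0, t) for t in (0.25, 0.5, 0.75)]
```

Higher-order temporal interpolation, or the adaptive local time-stepping the abstract mentions, would replace this linear rule while keeping the same one-way (parent to child) data flow.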
Bouida, Zied
2012-12-01
Under the scenario of an underlay cognitive radio network, we propose in this paper two adaptive schemes using switched transmit diversity and adaptive modulation in order to increase the spectral efficiency of the secondary link and maintain a desired performance for the primary link. The proposed switching efficient scheme (SES) and bandwidth efficient scheme (BES) use the scan and wait combining technique (SWC) where a transmission occurs only when a branch with an acceptable performance is found, otherwise data is buffered. In these schemes, the modulation constellation size and the used transmit branch are determined to minimize the average number of switched branches and to achieve the highest spectral efficiency given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver (PR). For delay-sensitive applications, we also propose two variations of the SES and BES schemes using power control (SES-PC and BES-PC) where the secondary transmitter (ST) starts sending data using a nominal power level which is selected in order to minimize the average delay introduced by the SWC technique. We demonstrate through numerical examples that the BES scheme increases the capacity of the secondary link when compared to the SES scheme. This spectral efficiency improvement comes at the expense of an increased average number of switched branches and thus an increased average delay. We also show that the SES-PC and the BES-PC schemes minimize the average delay while satisfying the same spectral efficiency as the SES and BES schemes, respectively. © 2012 IEEE.
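An illustrative sketch of the SES idea: scan the transmit branches in order and stop at the first one whose SNR supports some constellation from the adaptive-modulation set; if none qualifies, buffer the data and wait. The SNR thresholds below are invented round numbers, not the paper's values, and a BES-style variant would instead search all branches for the largest feasible constellation.

```python
# (constellation size, minimum SNR in dB) for the adaptive-modulation set
THRESHOLDS = [(16, 20.0), (8, 14.0), (4, 8.0)]

def scan_and_select(branch_snrs):
    """Return (branch index, constellation size) for the first acceptable
    branch, or None to buffer the data (the 'wait' in scan-and-wait)."""
    for idx, snr in enumerate(branch_snrs):      # first acceptable branch wins
        for m, thr in THRESHOLDS:                # try largest constellation first
            if snr >= thr:
                return idx, m
    return None                                  # no branch acceptable: wait
```

Stopping at the first acceptable branch minimizes the average number of switched branches, which is exactly the trade-off against BES described in the abstract: fewer switches, but a possibly smaller constellation than the best branch would allow.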
Directory of Open Access Journals (Sweden)
Marie Ramon
2009-01-01
Full Text Available Systematic lossy error protection (SLEP is a robust error resilient mechanism based on principles of Wyner-Ziv (WZ coding for video transmission over error-prone networks. In an SLEP scheme, the video bitstream is separated into two parts: a systematic part consisting of a video sequence transmitted without channel coding, and additional information consisting of a WZ supplementary stream. This paper presents an adaptive SLEP scheme in which the WZ stream is obtained by frequency filtering in the transform domain. Additionally, error resilience varies adaptively depending on the characteristics of compressed video. We show that the proposed SLEP architecture achieves graceful degradation of reconstructed video quality in the presence of increasing transmission errors. Moreover, it provides good performances in terms of error protection as well as reconstructed video quality if compared to solutions based on coarser quantization, while offering an interesting embedded scheme to apply digital video format conversion.
Multivariable robust adaptive controller using reduced-order model
Directory of Open Access Journals (Sweden)
Wei Wang
1990-04-01
Full Text Available In this paper a multivariable robust adaptive controller is presented for a plant with bounded disturbances and unmodeled dynamics due to plant-model order mismatches. The robust stability of the closed-loop system is achieved by using the normalization technique and the least squares parameter estimation scheme with dead zones. The weighting polynomial matrices are incorporated into the control law, so that the open-loop unstable or/and nonminimum phase plants can be handled.
International Nuclear Information System (INIS)
Muhammad, K.; Jan, Z.; Khan, Z.
2015-01-01
Wireless Sensor Networks (WSNs) are memory and bandwidth limited networks whose main goals are to maximize the network lifetime and minimize the energy consumption and transmission cost. To achieve these goals, different techniques of compression and clustering have been used. However, security is an open and major issue in WSNs for which different approaches are used, both in centralized and distributed WSNs' environments. This paper presents an adaptive cryptographic scheme for secure transmission of various sensitive parameters, sensed by wireless sensors to the fusion center for further processing in WSNs such as military networks. The proposed method encrypts the sensitive captured data of sensor nodes using various encryption procedures (bitxor operation, bits shuffling, and secret key based encryption) and then sends it to the fusion center. At the fusion center, the received encrypted data is decrypted for taking further necessary actions. The experimental results with complexity analysis, validate the effectiveness and feasibility of the proposed method in terms of security in WSNs. (author)
Reinharz, Vladimir; Dahari, Harel; Barash, Danny
2018-03-15
Age-structured PDE models have been developed to study viral infection and treatment. However, they are notoriously difficult to solve. Here, we investigate the numerical solutions of an age-based multiscale model of hepatitis C virus (HCV) dynamics during antiviral therapy and compare them with an analytical approximation, namely its long-term approximation. First, starting from a simple yet flexible numerical solution that also considers an integral approximated over previous iterations, we show that the long-term approximation is an underestimate of the PDE model solution as expected since some infection events are being ignored. We then argue for the importance of having a numerical solution that takes into account previous iterations for the associated integral, making problematic the use of canned solvers. Second, we demonstrate that the governing differential equations are stiff and the stability of the numerical scheme should be considered. Third, we show that considerable gain in efficiency can be achieved by using adaptive stepsize methods over fixed stepsize methods for simulating realistic scenarios when solving multiscale models numerically. Finally, we compare between several numerical schemes for the solution of the equations and demonstrate the use of a numerical optimization scheme for the parameter estimation performed directly from the equations. Copyright © 2018 Elsevier Inc. All rights reserved.
An integration scheme for stiff solid-gas reactor models
Directory of Open Access Journals (Sweden)
Bjarne A. Foss
2001-04-01
Full Text Available Many dynamic models encounter numerical integration problems because of a large span in the dynamic modes. In this paper we develop a numerical integration scheme for systems that include a gas phase, and solid and liquid phases, such as a gas-solid reactor. The method is based on neglecting fast dynamic modes and exploiting the structure of the algebraic equations. The integration method is suitable for a large class of industrially relevant systems. The methodology has proven remarkably efficient. In practice it has performed excellently and has been a key factor in the success of the industrial simulator for electrochemical furnaces for ferro-alloy production.
Zhu, Zhen-Cai; Li, Xiang; Shen, Gang; Zhu, Wei-Dong
2018-01-01
This paper concerns wire rope tension control of a double-rope winding hoisting system (DRWHS), which consists of a hoisting system employed to realize a transportation function and an electro-hydraulic servo system utilized to adjust wire rope tensions. A dynamic model of the DRWHS is developed in which parameter uncertainties and external disturbances are considered. A comparison between simulation results using the dynamic model and experimental results using a double-rope winding hoisting experimental system is given in order to demonstrate accuracy of the dynamic model. In order to improve the wire rope tension coordination control performance of the DRWHS, a robust nonlinear adaptive backstepping controller (RNABC) combined with a nonlinear disturbance observer (NDO) is proposed. Main features of the proposed combined controller are: (1) using the RNABC to adjust wire rope tensions with consideration of parameter uncertainties, whose parameters are designed online by adaptive laws derived from Lyapunov stability theory to guarantee the control performance and stability of the closed-loop system; and (2) introducing the NDO to deal with uncertain external disturbances. In order to demonstrate feasibility and effectiveness of the proposed controller, experimental studies have been conducted on the DRWHS controlled by an xPC rapid prototyping system. Experimental results verify that the proposed controller exhibits excellent performance on wire rope tension coordination control compared with a conventional proportional-integral (PI) controller and adaptive backstepping controller. Copyright © 2017 ISA. All rights reserved.
Subjective quality assessment of an adaptive video streaming model
Tavakoli, Samira; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Shahid, Muhammad; Garcia, Narciso
2014-01-01
With the recent increased popularity and high usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area, generally focused on technical enhancement of HAS technology and applications. However, the lack of a common HAS standard led to multiple proprietary approaches developed by major Internet companies. The emerging MPEG-DASH standard standardizes the packaging of the video content and the HTTP syntax, but all details of the adaptation behavior are left to the client implementation. Nevertheless, to design an adaptation algorithm that optimizes the viewing experience of the end user, multimedia service providers need to know the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment was carried out through a subjective study of end-user responses to various possible client behaviors for changing the video quality, taking different QoE influence factors into account. The experimental conclusions provide good insight into the QoE of different adaptation schemes, which can be exploited by HAS clients when designing adaptation algorithms.
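The client-side adaptation logic that MPEG-DASH leaves unspecified can be illustrated with a minimal throughput-based rule. This is a sketch under stated assumptions: the function name, the safety margin, and the lowest-bitrate fallback are illustrative, not any standardized or tested algorithm.

```python
# Minimal sketch of a throughput-based HAS adaptation rule (hypothetical
# client logic; real clients also consider buffer level, quality churn, etc.).
def select_representation(bitrates, throughput_estimate, safety=0.8):
    """Pick the highest bitrate below a safety margin of the estimated
    throughput; fall back to the lowest representation otherwise."""
    feasible = [b for b in sorted(bitrates) if b <= safety * throughput_estimate]
    return feasible[-1] if feasible else min(bitrates)
```

For example, with representations of 300, 750, 1500, and 3000 kbit/s and an estimated 2000 kbit/s of throughput, the rule selects the 1500 kbit/s stream; how aggressively a client switches is exactly the behavioral degree of freedom the subjective study above evaluates.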
Model and Adaptive Operations of an Adaptive Component
Wei, Le; Zhao, Qiuyun; Shu, Hongping
To keep up with the dynamic and open Internet environment, an adaptive component model based on an event mechanism and policy binding is proposed. Components of the model can sense external changes and give an explicit description of the external environment. According to a preset policy, a component can also take adaptive operations such as adding, deleting, replacing, and updating when necessary, and adjust the behavior and structure of the internetware to provide better services.
A dual adaptive watermarking scheme in contourlet domain for DICOM images
Directory of Open Access Journals (Sweden)
Rabbani Hossein
2011-06-01
Full Text Available Abstract Background Nowadays, medical imaging equipment produces digital medical images. In a modern health care environment, new systems such as PACS (picture archiving and communication systems also use the digital form of medical images. The digital form of a medical image has many advantages over its analog form, such as ease of storage and transmission. Medical images in digital form must be stored in a secured environment to preserve patient privacy. It is also important to detect modifications of the image. These objectives are achieved by watermarking the medical image. Methods In this paper, we present a dual and oblivious (blind watermarking scheme in the contourlet domain. Because the ROI (region of interest matters more in interpretation by medical doctors than the RONI (region of non-interest, we propose an adaptive dual watermarking scheme with different embedding strengths in the ROI and RONI. We embed watermark bits in the singular value vectors of the embedded blocks within the lowpass subband in the contourlet domain. Results The values of PSNR (peak signal-to-noise ratio and SSIM (structural similarity index of the ROI for the DICOM (digital imaging and communications in medicine images in this paper are larger than 64 and 0.997, respectively. These values confirm that our algorithm has good transparency. Because of the different embedding strengths, the BER (bit error rate values of the signature watermark are less than those of the caption watermark. Our results show that watermarked images in the contourlet domain have greater robustness against attacks than in the wavelet domain. In addition, a qualitative analysis of our method shows it has good invisibility. Conclusions The proposed contourlet-based watermarking algorithm selects the ROI automatically and embeds the watermark in the singular values of contourlet subbands, which makes the algorithm more efficient and robust against noise attacks than other transform domains.
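The singular-value embedding idea can be sketched as follows. For illustration, a plain pixel block and a simple parity quantizer stand in for the paper's contourlet lowpass-subband blocks and its ROI/RONI embedding-strength logic; the function names and the quantization step are assumptions.

```python
import numpy as np

# Hedged sketch: embed one watermark bit in the largest singular value of a
# block by quantizing it to an even/odd multiple of `step` (QIM-style).
def embed_bit(block, bit, step=8.0):
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    q = np.floor(s[0] / step)
    if int(q) % 2 != bit:        # force parity of the quantized value to `bit`
        q += 1
    s[0] = q * step + step / 2   # recenter inside the quantization cell
    return U @ np.diag(s) @ Vt

def extract_bit(block, step=8.0):
    # blind extraction: only the quantization step is needed, not the original
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.floor(s[0] / step)) % 2
```

A larger `step` would correspond to a stronger embedding (more robust, less transparent), which is the knob the adaptive ROI/RONI scheme above varies.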
Fast Proton Titration Scheme for Multiscale Modeling of Protein Solutions.
Teixeira, Andre Azevedo Reis; Lund, Mikael; da Silva, Fernando Luís Barroso
2010-10-12
Proton exchange between titratable amino acid residues and the surrounding solution gives rise to exciting electric processes in proteins. We present a proton titration scheme for studying acid-base equilibria in Metropolis Monte Carlo simulations where salt is treated at the Debye-Hückel level. The method, rooted in the Kirkwood model of impenetrable spheres, is applied on the three milk proteins α-lactalbumin, β-lactoglobulin, and lactoferrin, for which we investigate the net-charge, molecular dipole moment, and charge capacitance. Over a wide range of pH and salt conditions, excellent agreement is found with more elaborate simulations where salt is explicitly included. The implicit salt scheme is orders of magnitude faster than the explicit analog and allows for transparent interpretation of physical mechanisms. It is shown how the method can be expanded to multiscale modeling of aqueous salt solutions of many biomolecules with nonstatic charge distributions. Important examples are protein-protein aggregation, protein-polyelectrolyte complexation, and protein-membrane association.
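A single titration move of the kind described, with salt handled implicitly at the Debye-Hückel level, might look like this minimal two-site sketch. The geometry, the unit charge on protonated sites, and all names are assumptions for illustration, not the authors' implementation.

```python
import math

LB = 7.0                 # Bjerrum length in water at 298 K (angstrom)
LN10 = math.log(10.0)

def dh_energy(q1, q2, r, kappa):
    """Screened Coulomb pair energy in units of kT (Debye-Hueckel)."""
    return LB * q1 * q2 * math.exp(-kappa * r) / r

def titration_move(state, site, pka, ph, r, kappa, rng):
    """Attempt to flip the protonation state of `site` (0/1, equal to its
    charge in this toy model) with Metropolis acceptance."""
    trial = list(state)
    trial[site] = 1 - trial[site]
    dprot = trial[site] - state[site]          # +1 protonate, -1 deprotonate
    other = 1 - site
    d_elec = dh_energy(trial[site], state[other], r, kappa) \
           - dh_energy(state[site], state[other], r, kappa)
    d_chem = dprot * LN10 * (ph - pka[site])   # intrinsic titration free energy
    if rng() < math.exp(-(d_chem + d_elec)):
        return trial
    return state
```

Because the salt enters only through the screening constant kappa, no explicit ion moves are needed, which is what makes the implicit scheme orders of magnitude faster than its explicit analog.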
Study on noise prediction model and control schemes for substation.
Chen, Chuanmin; Gao, Yang; Liu, Songtao
2014-01-01
With growing governmental emphasis on the environmental impact of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation negatively affects the ambient environment. This paper focuses on using acoustic software for simulation and calculation to control substation noise. According to the characteristics of substation noise and the techniques of noise reduction, a substation acoustic field model was established with the SoundPLAN software to predict the scope of substation noise. On this basis, four noise control schemes were proposed to provide helpful references for noise control during the design and construction of new substations. The feasibility and application effect of these control schemes were verified by simulation modeling. The simulation results show that the substation always has a problem of excessive noise at its boundary under conventional measures. The excess noise can be efficiently reduced by taking the corresponding noise reduction methods.
Thermal Error Modeling of a Machine Tool Using Data Mining Scheme
Wang, Kun-Chieh; Tseng, Pai-Chang
In this paper the knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at the spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
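The KM + LR steps of the KRL pipeline can be sketched as below; the rough-set reduction is omitted, and a toy 1-D k-means over the sensors' mean temperature ascents stands in for the full clustering. All names and the representative-selection rule are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the KM + LR stages of a KRL-style thermal error model.
def kmeans_1d(values, k, iters=50):
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

def fit_thermal_model(temps, deform, k=2):
    """temps: (n_samples, n_sensors) ascents; deform: (n_samples,) drift.
    Cluster sensors by mean ascent, keep one representative per cluster,
    then fit a linear error model by least squares."""
    labels = kmeans_1d(temps.mean(axis=0), k)
    reps = [np.flatnonzero(labels == j)[0] for j in range(k) if np.any(labels == j)]
    X = np.column_stack([temps[:, reps], np.ones(len(deform))])
    coef, *_ = np.linalg.lstsq(X, deform, rcond=None)
    return reps, coef
```

Keeping one sensor per cluster is what makes the resulting linear model compact and transparent, in contrast to a black-box ANFIS comparison model.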
Directory of Open Access Journals (Sweden)
Ho-Nien Shou
2012-02-01
Full Text Available This paper presents a genetic-based control scheme that not only utilizes evolutionary characteristics to find the signal acquisition parameters, but also employs an adaptive scheme to control the search space and avoid the genetic search converging to a local optimum, so as to acquire the desired signal precisely and rapidly. Simulations and experimental results show that the proposed method can improve the precision of the signal parameters and takes less signal acquisition time than traditional serial search methods for global navigation satellite system (GNSS) signals.
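The core idea of a genetic search whose search space is adaptively narrowed can be sketched as a generic 1-D toy. This is not the authors' GNSS acquisition code: the shrink factor, population settings, and elitist selection are all assumptions.

```python
import random

# Minimal GA sketch: keep the best half of the population, and mutate around
# survivors within a range that shrinks each generation (a crude stand-in for
# the paper's adaptive search-space control).
def ga_maximize(fitness, lo, hi, pop_size=20, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    span = hi - lo
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        span *= 0.9                      # adaptively narrow the search space
        pop = elite + [min(hi, max(lo, rng.choice(elite) + rng.uniform(-span, span)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

In acquisition, `fitness` would be the correlation power over code phase or Doppler; shrinking the mutation range too fast risks exactly the premature local convergence the adaptive scheme is designed to avoid.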
Plant adaptive behaviour in hydrological models (Invited)
van der Ploeg, M. J.; Teuling, R.
2013-12-01
Models that will be able to cope with future precipitation and evaporation regimes need a solid base that describes the essence of the processes involved [1]. Micro-behaviour in the soil-vegetation-atmosphere system may have a large impact on patterns emerging at larger scales. A complicating factor in the micro-behaviour is the constant interaction between vegetation and geology, in which water plays a key role. The resilience of the coupled vegetation-soil system critically depends on its sensitivity to environmental changes. As a result of environmental changes vegetation may wither and die, but such changes may also trigger gene adaptation. Constant exposure to environmental stresses, biotic or abiotic, influences plant physiology, gene adaptations, and flexibility in gene adaptation [2-6]. Gene expression under different environmental conditions may profoundly impact drought responses across the same plant species. Differences in response to an environmental stress have consequences for the way species are currently treated in models (single plant to global scale). In particular, model parameters that control root water uptake and plant transpiration are generally assumed to be a property of the plant functional type. Assigning plant functional types does not allow local plant adaptation to be reflected in the model parameters, nor does it allow for correlations that might exist between root parameters and soil type. Models potentially provide a means to link root water uptake and transport to large-scale processes (e.g. Rosnay and Polcher 1998, Feddes et al. 2001, Jung 2010), especially when powered with an integrated hydrological, ecological and physiological base. We explore the experimental evidence from natural vegetation to formulate possible alternative modeling concepts. [1] Seibert, J. 2000. Multi-criteria calibration of a conceptual runoff model using a genetic algorithm. Hydrology and Earth System Sciences 4(2): 215
An intracloud lightning parameterization scheme for a storm electrification model
Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.
1992-01-01
The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.
Dynamics Model Abstraction Scheme Using Radial Basis Functions
Directory of Open Access Journals (Sweden)
Silvia Tolu
2012-01-01
Full Text Available This paper presents a control model for object manipulation. Properties of objects and environmental conditions influence motor control and learning. System dynamics depend on an unobserved external context, for example, the work load of a robot manipulator. The dynamics of a robot arm change as it manipulates objects with different physical properties, for example, the mass, shape, or mass distribution. We address active sensing strategies to acquire object dynamical models with a radial basis function neural network (RBF). Experiments are done using a real robot arm, and trajectory data are gathered during various trials manipulating different objects. Biped robots do not have high-force joint servos, and the control system can hardly compensate for all the inertia variations of the adjacent joints and disturbance torques in dynamic gait control. In order to achieve smoother control and lead to more reliable sensorimotor complexes, we evaluate and compare a sparse velocity-driven versus a dense position-driven control scheme.
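Fitting a dynamics map with an RBF network reduces, in the simplest batch setting, to linear least squares over Gaussian features. Fixed centers and a shared width are a common simplification and an assumption here; the paper's active sensing strategy is not modeled.

```python
import numpy as np

# Sketch of an RBF model y ~ sum_i w_i * phi(|x - c_i|) with fixed Gaussian
# centers/width and weights solved by linear least squares.
def rbf_features(x, centers, width):
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf(x, y, centers, width):
    Phi = rbf_features(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(x, centers, width, w):
    return rbf_features(x, centers, width) @ w
```

Because the model is linear in the weights, it can also be updated recursively online, which is what makes RBF networks attractive for learning changing arm-plus-object dynamics during manipulation.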
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
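The two-step estimation can be sketched with a hand-rolled logistic model for occurrence followed by least squares for wet-day amounts; these are stand-ins for the paper's (C)MLR/LWP regressions, and the wet-day threshold, learning rate, and names are assumptions.

```python
import numpy as np

# Step 1: logistic regression (plain gradient ascent) for wet/dry occurrence.
def fit_logistic(X, y, lr=0.1, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Step 2: linear least squares for amounts, fitted on wet days only.
def fit_two_step(X, precip, wet_threshold=0.1):
    wet = (precip > wet_threshold).astype(float)
    w_occ = fit_logistic(X, wet)
    w_amt, *_ = np.linalg.lstsq(X[wet == 1], precip[wet == 1], rcond=None)
    return w_occ, w_amt

def estimate(X, w_occ, w_amt, p_cut=0.5):
    occ = 1.0 / (1.0 + np.exp(-X @ w_occ)) > p_cut
    return np.where(occ, X @ w_amt, 0.0)
```

Separating occurrence from amount avoids the drizzle-everywhere bias of a single regression, which is one way the two-step inputs can preserve spatial variability better than direct interpolation.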
Directory of Open Access Journals (Sweden)
Tomoki Murakami
2017-01-01
Full Text Available This paper introduces a network-assisted interference suppression scheme using beam-tilt switching per frame for wireless local area network systems and demonstrates its effectiveness in an actual indoor environment. In the proposed scheme, two access points simultaneously transmit to their own desired stations by adjusting the beam-tilt angle and transmit power, assisted by a network server, to improve system throughput. It is widely known from previous research that beam-tilt is effective for ICI suppression in outdoor scenarios. However, the indoor effectiveness of beam-tilt for ICI suppression had not yet been shown experimentally. Thus, this paper demonstrates the effectiveness of the proposed scheme by analyzing multiple-input multiple-output channel matrices from experimental measurements in an office environment. The experimental results clearly show that the proposed scheme offers higher system throughput than the conventional scheme using transmit power control alone.
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes are used, namely an iterative scheme based on duality theory and a majorization-minimization algorithm (MMA). To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying a trial-and-error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
Two nonlinear control schemes contrasted on a hydrodynamiclike model
Keefe, Laurence R.
1993-01-01
The principles of two flow control strategies, those of Huebler (Luescher and Huebler, 1989) and of Ott et al. (1990) are discussed, and the two schemes are compared for their ability to control shear flow, using fully developed and transitional solutions of the Ginzburg-Landau equation as models for such flows. It was found that the effectiveness of both methods in obtaining control of fully developed flows depended strongly on the 'distance' in state space between the uncontrolled flow and goal dynamics. There were conceptual difficulties in applying the Ott et al. method to transitional convectively unstable flows. On the other hand, the Huebler method worked well, within certain limitations, although at a large cost in energy terms.
Radiolytic oxidation of propane: computer modeling of the reaction scheme
International Nuclear Information System (INIS)
Gupta, A.K.; Hanrahan, R.J.
1991-01-01
The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25 °C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases. (author)
Multiple Model Adaptive Control Using Dual Youla-Kucera Factorisation
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, Klaus
2012-01-01
We propose a multi-model adaptive control scheme for uncertain linear plants based on the concept of model unfalsification. The approach relies on examining the ability of a pre-computed set of plant-controller candidates and choosing the one that is best able to reproduce observed input and output signal samples. The ability to reproduce observations is measured as an easily computable signal norm. Compared to other related approaches, our procedure is designed to handle significant measurement noise and closed-loop correlations between output measurements and control signals.
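The unfalsification test reduces, in sketch form, to scoring each candidate by how well it reproduces the observed samples and keeping the smallest residual norm. This is a simplified stand-in for the paper's dual Youla-Kucera machinery; the names and the plain Euclidean norm are assumptions.

```python
import numpy as np

# Sketch of the falsification test: each candidate model predicts the observed
# output from the recorded input; the candidate with the smallest residual
# norm is the least falsified by the data.
def least_unfalsified(candidates, u, y):
    """candidates: list of callables mapping input samples to predicted output."""
    errors = [np.linalg.norm(y - model(u)) for model in candidates]
    return int(np.argmin(errors))
```

In the closed-loop setting described above, the subtlety is precisely that `u` is correlated with `y` through feedback, which is why a naive residual comparison needs the noise-robust formulation the paper develops.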
OMEGA: The operational multiscale environment model with grid adaptivity
International Nuclear Information System (INIS)
Bacon, D.P.
1995-01-01
This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because the wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was additionally used to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including the manipulation of huge databases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.
Liu Yue; Zhou Shuo
2016-01-01
To improve the dynamic performance of a permanent magnet synchronous motor (PMSM) drive system, an adaptive nonsingular terminal sliding mode control (NTSMC) strategy was proposed. The proposed control strategy presents an adaptive variable-rate exponential reaching law in which the L1 norm of the state variables is introduced. The exponential and constant approach speeds can adaptively adjust according to the state variables' distance from the equilibrium position. The proposed scheme can shorten the reaching...
Qaraqe, Marwa
2014-04-01
This paper focuses on the development of multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, two scheduling schemes are proposed for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme focuses on optimizing the average spectral efficiency by selecting the user that reports the best channel quality. In order to alleviate the relatively high feedback required by the first scheme, a second scheme based on the concept of switched diversity is proposed, where the base station (BS) scans the secondary users in a sequential manner until a user whose channel quality is above an acceptable predetermined threshold is found. We develop expressions for the statistics of the signal-to-interference and noise ratio as well as the average spectral efficiency, average feedback load, and the delay at the secondary BS. We then present numerical results for the effect of the number of users and the interference constraint on the optimal switching threshold and the system performance and show that our analysis results are in perfect agreement with the numerical results. © 2014 John Wiley & Sons, Ltd.
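The switched-diversity scan of the second scheme can be sketched as follows. The fallback to the best user seen so far and the returned feedback count are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of the switched-diversity scheduler: the BS scans secondary users in
# order and grants the channel to the first whose SNR clears the switching
# threshold, falling back to the best scanned user if none does.
def switched_scheduler(snrs, threshold):
    best = 0
    for i, snr in enumerate(snrs):
        if snr >= threshold:
            return i, i + 1              # (selected user, feedback messages used)
        if snr > snrs[best]:
            best = i
    return best, len(snrs)
```

The trade-off analyzed in the paper is visible here: a lower switching threshold terminates the scan earlier (less feedback and delay) but accepts a worse channel, while full-feedback best-user selection corresponds to always scanning all users.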
Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan
2018-05-01
A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (the Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmospheric Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate through a series of simulations with a simplified standalone radiative transfer model, the atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that the Baum-Yang scheme yields generally weaker effects of ice cloud on temperature profiles in both the shortwave and longwave spectrum. CAM5 simulations indicate that the Baum-Yang scheme in place of the Mitchell/Fu scheme tends to cool the upper atmosphere and strengthen the thermodynamic instability in low and mid-latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, the reduced downward longwave flux to the surface in the Baum-Yang scheme mitigates the ice-albedo feedback in the Arctic as well as the water vapor and cloud feedbacks in low and mid-latitudes, resulting in an overall temperature decrease of 3.0/1.4 °C globally compared with the Mitchell/Fu schemes. The radiative effects and climate feedbacks of the three ice cloud optical schemes documented in this study can serve as a reference for future improvements to ice cloud simulation in CAM5.
Unobtrusive user modeling for adaptive hypermedia
Holz, H.J.; Hofmann, K.; Reed, C.; Uchyigit, G.; Ma, M.Y.
2008-01-01
We propose a technique for user modeling in Adaptive Hypermedia (AH) that is unobtrusive at both the level of observable behavior and that of cognition. Unobtrusive user modeling is complementary to transparent user modeling. Unobtrusive user modeling induces user models appropriate for Educational
Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods
Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard
2013-04-01
Thermal and chemical evolution of Earth's deep mantle can be studied by modeling vigorous convection in a chemically heterogeneous fluid. Numerical modeling of such a system poses several computational challenges. Dominance of heat advection over the diffusive heat transport, and a negligible amount of chemical diffusion, results in sharp gradients of the thermal and chemical fields. The exponential dependence of the viscosity of mantle materials on temperature also leads to high gradients of the velocity field. The accuracy of many numerical advection schemes degrades quickly with increasing gradient of the solution, while the computational effort, in terms of scheme complexity and required resolution, grows. Additional numerical challenges arise due to the large range of length-scales characteristic of a thermochemical convection system with highly variable viscosity. To exemplify, the thickness of the stem of a rising thermal plume may be a few percent of the mantle thickness. An even thinner filament of an anomalous material that is entrained by that plume may constitute less than a tenth of a percent of the mantle thickness. We have developed a two-dimensional FEM code to model thermochemical convection in a hollow cylinder domain, with a depth- and temperature-dependent viscosity representative of the mantle (Steinberger and Calderwood, 2006). We use the marker-in-cell method for advection of the chemical and thermal fields. The main advantage of performing advection using markers is the absence of numerical diffusion during the advection step, as opposed to the more diffusive field-methods. However, in the common implementation of marker-methods, the solution of the momentum and energy equations takes place on a computational grid, and nodes do not generally coincide with the positions of the markers. Transferring velocity, temperature, and chemistry information between nodes and markers introduces errors inherent to inter- and extrapolation. In the numerical scheme
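The marker advection step itself, which avoids numerical diffusion of the advected fields, can be sketched with a midpoint (RK2) position update; the node-marker transfer steps discussed above are omitted, and the interface is an assumption.

```python
import numpy as np

# Minimal marker-in-cell advection sketch: markers carry their chemistry and
# temperature unchanged (no numerical diffusion), while their positions are
# advanced through a given velocity field with a midpoint (RK2) step.
def advect_markers(pos, velocity, dt):
    """pos: (n, 2) marker coordinates; velocity: callable (n, 2) -> (n, 2)."""
    mid = pos + 0.5 * dt * velocity(pos)
    return pos + dt * velocity(mid)
```

The interpolation errors mentioned in the abstract enter not here but in evaluating `velocity` at marker positions from grid nodes and in projecting marker properties back onto the grid.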
Modeling adaptive and non-adaptive responses to environmental change
DEFF Research Database (Denmark)
Coulson, Tim; Kendall, Bruce E; Barthold, Julia A.
2017-01-01
Understanding how the natural world will be impacted by environmental change over the coming decades is one of the most pressing challenges facing humanity. Addressing this challenge is difficult because environmental change can generate both population-level plastic and evolutionary responses… We construct a number of example models to demonstrate that evolutionary responses to environmental change over the short term will be considerably slower than plastic responses, and that the rate of adaptive evolution to a new environment depends upon whether plastic responses are adaptive or non-adaptive… whether the machinery of the evolutionarily explicit models we develop will be needed to predict responses to environmental change, or whether simpler non-evolutionary models that are now widely constructed may be sufficient.
Radiolytic oxidation of propane: Computer modeling of the reaction scheme
Gupta, Avinash K.; Hanrahan, Robert J.
The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25°C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. Minor products include i-butyl alcohol, t-amyl alcohol, n-butyl alcohol, n-amyl alcohol, and i-amyl alcohol. Small yields of i-hexyl alcohol and n-hexyl alcohol were also observed. There was no apparent difference in the G-values at pressures of 50, 100 and 150 torr. When the oxygen concentration was decreased below 5%, the yields of acetone, i-propyl alcohol, and n-propyl alcohol increased, the propionaldehyde yield decreased, and the yields of other products remained constant. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases.
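The peroxyl/alkoxyl chain described above can be condensed into a toy rate-equation integration: R• + O2 -> RO2•, RO2• + RO2• -> 2 RO•, RO• + RO• -> carbonyl + alcohol. The rate constants, the fixed radical source, and the three-reaction skeleton are illustrative, not the paper's 28-reaction mechanism.

```python
# Toy forward-Euler integration of a skeleton radical-chain scheme.
# At steady state the product formation rate approaches src / 2.
def integrate(steps=20000, dt=1e-4, src=1.0, o2=5.0, k1=10.0, k2=5.0, k3=5.0):
    r = ro2 = ro = prod = 0.0
    for _ in range(steps):
        dr = src - k1 * r * o2                 # radiolytic source vs O2 capture
        dro2 = k1 * r * o2 - 2 * k2 * ro2 * ro2
        dro = 2 * k2 * ro2 * ro2 - 2 * k3 * ro * ro
        dprod = k3 * ro * ro                   # carbonyl + alcohol formation
        r += dt * dr
        ro2 += dt * dro2
        ro += dt * dro
        prod += dt * dprod
    return r, ro2, ro, prod
```

Even this skeleton reproduces the qualitative behavior the computer model exploits: at steady state each reaction's flux balances the radiolytic source, so product yields are set by the branching of the alkoxyl self-reaction.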
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
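A minimal example of the grid-point transport schemes discussed in the second lecture is the first-order upwind step on a periodic 1-D grid; real chemical transport models use higher-order flux-limited (shock-capturing) variants, so this sketch illustrates the basics only.

```python
import numpy as np

# First-order upwind transport step on a periodic 1-D grid, for constant
# advection speed u > 0.
def upwind_step(q, u, dt, dx):
    c = u * dt / dx                      # Courant number, must satisfy c <= 1
    return q - c * (q - np.roll(q, 1))
```

With Courant number c = 1 the update shifts the field exactly one cell; for c < 1 it is stable but numerically diffusive, which is what motivates flux-limited schemes and the resolution sensitivity noted for polar processes.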
Nitrogen and Phosphorus Biomass-Kinetic Model for Chlorella vulgaris in a Biofuel Production Scheme
2010-03-01
Thesis by William M. Rowley, Major, USMC, Air Force Institute of Technology (AFIT/GES/ENV/10-M04).
Armand J, K. M.
2017-12-01
In this study, version 4 of the regional climate model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC) and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that, regardless of period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that using BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell
2012-01-01
The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b)…
On usage of CABARET scheme for tracer transport in INM ocean model
International Nuclear Information System (INIS)
Diansky, Nikolay; Kostrykin, Sergey; Gusev, Anatoly; Salnikov, Nikolay
2010-01-01
The contemporary state of ocean numerical modelling sets some requirements for the numerical advection schemes used in ocean general circulation models (OGCMs). The most important requirements are conservation, monotonicity and numerical efficiency, including good parallelization properties. Investigation of several advection schemes shows that one of the best schemes satisfying these criteria is the CABARET scheme. A 3D modification of the CABARET scheme was used to develop a new transport module (for temperature and salinity) for the Institute of Numerical Mathematics ocean model (INMOM). Testing of this module on some common benchmarks shows high accuracy in comparison with the second-order advection scheme used in the INMOM. The new module was incorporated into the INMOM, and experiments with the modified model showed a better simulation of oceanic circulation than with its previous version.
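For context, the baseline such transport modules are judged against can be written in a few lines: a first-order, flux-form upwind step, which is conservative and monotone but diffusive. This is not the CABARET scheme itself (CABARET carries separate cell and face variables over two time levels); the grid size and Courant number are illustrative:

```python
# First-order flux-form upwind advection on a periodic 1D grid: the
# conservative, monotone baseline that low-dissipation schemes such as
# CABARET are compared against. n and the Courant number c are arbitrary.
n, c = 100, 0.5                 # cells, Courant number u*dt/dx (<= 1 for stability)
q = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]   # square tracer pulse
for _ in range(50):
    # flux through the left face of cell i; u > 0, so take the upwind cell
    flux = [c * q[i - 1] for i in range(n)]            # periodic via index i-1
    q = [q[i] - (flux[(i + 1) % n] - flux[i]) for i in range(n)]
print(sum(q))                   # total tracer is conserved by construction
```

Because fluxes telescope over the periodic grid, the tracer integral is preserved to round-off, and the scheme never produces new extrema; its drawback is strong numerical diffusion, which is what higher-order schemes aim to cure.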
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block-diagram-based schemes are proposed in this paper for modeling and simulating fractional-order control systems. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, an integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed in the examples; the computation results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs.
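A fractional operator block of this kind typically rests on a discrete approximation of the fractional derivative. The sketch below uses the Grünwald-Letnikov formula with recursively computed binomial weights and checks it against the known half-derivative of f(t) = t; the step size h is an arbitrary choice:

```python
import math

# Gruenwald-Letnikov approximation of the fractional derivative of
# order alpha (coincides with the Caputo/Riemann-Liouville derivatives
# here, since f(0) = 0):
#   D^a f(t) ~= h^(-a) * sum_{j=0..n} w_j * f(t - j*h),  w_j = (-1)^j C(a, j)
def gl_frac_derivative(f, t, alpha, h=1e-3):
    n = int(t / h)
    w, acc = 1.0, 0.0
    for j in range(n + 1):
        acc += w * f(t - j * h)
        w *= (j - alpha) / (j + 1)   # recursion: w_{j+1} = w_j * (j - a)/(j + 1)
    return acc / h ** alpha

# sanity check: the half-derivative of f(t) = t at t = 1 is 2/sqrt(pi)
approx = gl_frac_derivative(lambda t: t, 1.0, 0.5)
exact = 2.0 / math.sqrt(math.pi)
print(approx, exact)
```

The weight recursion avoids evaluating binomial coefficients of a real order directly, which is also how fixed-step fractional operator blocks are commonly implemented.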
Incompressible Turbulent Flow Simulation Using the κ-ε Model and Upwind Schemes
Directory of Open Access Journals (Sweden)
V. G. Ferreira
2007-01-01
Full Text Available In the computation of turbulent flows via turbulence modeling, the treatment of the convective terms is a key issue. In the present work, we present a numerical technique for simulating two-dimensional incompressible turbulent flows. In particular, the performance of the high-Reynolds-number κ-ε model and a new high-order upwind scheme (adaptive QUICKEST, by Kaibara et al. (2005)) is assessed for 2D confined and free-surface incompressible turbulent flows. The model equations are solved with the fractional-step projection method in primitive variables. Solutions are obtained by using an adaptation of the front-tracking GENSMAC methodology (Tomé and McKee (1994)) for calculating fluid flows at high Reynolds numbers. The calculations are performed using the 2D version of the Freeflow simulation system (Castello et al. (2000)). A specific way of implementing wall functions is also tested and assessed. The numerical procedure is tested by solving three fluid flow problems, namely, turbulent flow over a backward-facing step, turbulent boundary layer over a flat plate under zero pressure gradient, and a turbulent free jet impinging onto a flat surface. The numerical method is then applied to solve the flow of a horizontal jet penetrating a quiescent fluid from an entry port beneath the free surface.
Ilik, Semih C.; Arsoy, Aysen B.
2017-07-01
Integration of distributed generation (DG), such as renewable energy sources, into electrical networks has become more prevalent in recent years. Grid connection of DG affects load flow directions, voltage profile, short-circuit power and, especially, protection selectivity. Applying a traditional overcurrent protection scheme is inconvenient when system reliability and sustainability are considered. If a fault occurs in a DG-connected network, the short-circuit contribution of the DG creates an additional branch element feeding the fault current, which compels the use of a directional overcurrent (OC) protection scheme. Protection coordination may be lost under changing working conditions when DG sources are connected. Directional overcurrent relay parameters are determined for downstream and upstream relays for different combinations of DG, connected singly or in groups, on a radial test system. With the help of the proposed flow chart, relay parameters are updated and coordination between relays is sustained for different working conditions in the DigSILENT PowerFactory program.
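The grading that such a coordination flow chart maintains can be illustrated with the IEC standard-inverse characteristic. The pickup currents, time multiplier settings (TMS) and fault current below are made-up illustrative settings, not the paper's test system:

```python
# IEC 60255 standard-inverse operating time: t = TMS * 0.14 / ((I/Is)^0.02 - 1).
# For coordination, the upstream relay is typically graded to trip a
# margin (~0.3 s or more) later than the downstream relay at the same
# fault current. All settings below are illustrative assumptions.
def op_time(i_fault, i_pickup, tms, k=0.14, a=0.02):
    return tms * k / ((i_fault / i_pickup) ** a - 1.0)

i_fault = 2000.0                        # A, assumed fault current
t_down = op_time(i_fault, 400.0, 0.10)  # downstream relay: low pickup, low TMS
t_up   = op_time(i_fault, 600.0, 0.25)  # upstream relay: higher pickup and TMS
print(t_down, t_up)                     # upstream trips later, preserving selectivity
```

When DG changes the fault current seen by each relay, a coordination procedure recomputes these times and adjusts TMS values so the grading margin still holds.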
Fully Adaptive Radar Modeling and Simulation Development
2017-04-01
AFRL-RY-WP-TR-2017-0074. Kristine L. Bell and Anthony Kellems, Metron, Inc. Small Business Innovation Research (SBIR) Phase I report, contract FA8650-16-M-1774. Approved for public release; distribution unlimited.
BOT schemes as financial model of hydro power projects
International Nuclear Information System (INIS)
Grausam, A.
1997-01-01
Build-operate-transfer (BOT) schemes are the latest methods adopted in developing infrastructure projects. This paper outlines project financing through BOT schemes and briefly focuses on the factors particularly relevant to hydro power projects. Hydro power development not only provides one of the best ways to produce electricity, it can also solve problems in other fields, such as navigation in the case of run-of-the-river plants, ground water management and flood control. These benefits do not make hydro power plant (HPP) projects cheaper, but hydro energy is clean and renewable, and the worldwide hydro potential will play a major role in meeting increased demand in the future. 5 figs
Directory of Open Access Journals (Sweden)
Guilin Zheng
2011-03-01
Full Text Available Fire hazard monitoring and evacuation for building environments is a novel application area for the deployment of wireless sensor networks. In this context, adaptive routing is essential in order to ensure safe and timely data delivery in building evacuation and fire fighting resource applications. Existing routing mechanisms for wireless sensor networks are not well suited for building fires, especially as they do not consider critical and dynamic network scenarios. In this paper, an emergency-adaptive, real-time and robust routing protocol is presented for emergency situations such as building fire hazard applications. The protocol adapts to handle dynamic emergency scenarios and works well with the routing hole problem. Theoretical analysis and simulation results indicate that our protocol provides a real-time routing mechanism that is well suited for dynamic emergency scenarios in building fires when compared with other related work.
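One simple way to realise hazard-aware routing of this kind is a shortest-path search with link costs inflated by each node's sensed hazard level, so routes bend around the fire. The topology, hazard values and weighting factor `alpha` below are invented for illustration; the paper's protocol is distributed and real-time rather than a centralised Dijkstra:

```python
import heapq

edges = {                     # node -> neighbours (made-up building topology)
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
hazard = {"A": 0.0, "B": 0.9, "C": 0.1, "D": 0.1, "E": 0.0}  # sensed fire level

def route(src, dst, alpha=10.0):
    # link cost = 1 hop + alpha * hazard level of the next node
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == dst:
            break
        if d > dist.get(n, float("inf")):
            continue                       # stale queue entry
        for m in edges[n]:
            nd = d + 1.0 + alpha * hazard[m]
            if nd < dist.get(m, float("inf")):
                dist[m], prev[m] = nd, n
                heapq.heappush(pq, (nd, m))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

print(route("A", "E"))   # detours around the burning node B
```

With these hazard values the route from A to E avoids B even though the B path has the same hop count, which is the behaviour an emergency-adaptive protocol seeks to reproduce without global knowledge.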
Directory of Open Access Journals (Sweden)
Yuanyuan Zeng
2010-06-01
Unstructured mesh adaptivity for urban flooding modelling
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process; for example, high-resolution meshes around buildings and steep regions are introduced when the flood water reaches those regions. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results at a low computational cost.
Directory of Open Access Journals (Sweden)
Arturo Torres-González
2014-04-01
Full Text Available This work is motivated by robot-sensor network cooperation techniques in which sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration, balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods, with a lower computational burden (16%) and similar beacon energy consumption.
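The heart of such a RO-SLAM filter is the EKF update for a single range measurement. The sketch below estimates only one 2D beacon position from a known robot position; the state layout, noise values and geometry are illustrative assumptions, not the PF-EKF SLAM implementation:

```python
import math

# EKF update for one range-only beacon measurement: the state is the 2D
# beacon position b with covariance P, and the robot position is assumed
# known for this sketch (full SLAM would also carry the robot pose).
def ekf_range_update(b, P, robot, z, r_var):
    dx, dy = b[0] - robot[0], b[1] - robot[1]
    pred = math.hypot(dx, dy)                     # predicted range
    H = [dx / pred, dy / pred]                    # Jacobian of range w.r.t. beacon
    PHt = [P[0][0]*H[0] + P[0][1]*H[1],           # P H^T (P is symmetric)
           P[1][0]*H[0] + P[1][1]*H[1]]
    S = H[0]*PHt[0] + H[1]*PHt[1] + r_var         # scalar innovation covariance
    K = [PHt[0] / S, PHt[1] / S]                  # Kalman gain
    y = z - pred                                  # innovation
    b = (b[0] + K[0]*y, b[1] + K[1]*y)
    P = [[P[0][0] - K[0]*PHt[0], P[0][1] - K[0]*PHt[1]],
         [P[1][0] - K[1]*PHt[0], P[1][1] - K[1]*PHt[1]]]
    return b, P

b, P = (4.0, 1.0), [[1.0, 0.0], [0.0, 1.0]]       # prior estimate and covariance
b, P = ekf_range_update(b, P, robot=(0.0, 0.0), z=5.0, r_var=0.01)
print(b, P[0][0])                                 # pulled toward the 5 m range circle
```

A single range fixes the beacon only up to a circle, which is why schemes like the one above pair the EKF with particle filters for initialization and with inter-beacon measurements for faster convergence.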
A Distributed Taxation Based Rank Adaptation Scheme for 5G Small Cells
DEFF Research Database (Denmark)
Catania, Davide; Cattoni, Andrea Fabio; Mahmood, Nurul Huda
2015-01-01
The further densification of small cells imposes high and undesirable levels of inter-cell interference. Multiple Input Multiple Output (MIMO) systems, along with advanced receiver techniques, provide extra degrees of freedom to combat this problem. With such tools, rank adaptation algorit...
Analyses of models for promotion schemes and ownership arrangements
DEFF Research Database (Denmark)
Hansen, Lise-Lotte Pade; Schröder, Sascha Thorsten; Münster, Marie
2011-01-01
as increase the national competitiveness. The stationary fuel cell technology is still in a rather early stage of development and faces a long list of challenges and barriers of which some are linked directly to the technology through the need of cost decrease and reliability improvements. Others are linked...... countries should opt to support stationary fuel cells, we find that in Denmark it would be promising to apply the net metering based support scheme for households with an electricity consumption exceeding the electricity production from the fuel cell. In France and Portugal the most promising support scheme...... is price premium when the fuel cell is run as a part of a virtual power plant. From a system perspective, it appears that it is more important which kind of energy system (represented by country) the FC’s are implemented in, rather than which operation strategy is used. In an energy system with lots...
Adaptive Partially Hidden Markov Models
DEFF Research Database (Denmark)
Forchhammer, Søren Otto; Rasmussen, Tage
1996-01-01
Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple-token version. The different versions of the PHMM are applied to bi-level image coding.
The behaviour of adaptive bone-remodeling simulation models
Weinans, H.; Huiskes, R.; Grootenboer, H.J.
1992-01-01
The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule applied to
An adaptive stochastic model for financial markets
International Nuclear Information System (INIS)
Hernández, Juan Antonio; Benito, Rosa María; Losada, Juan Carlos
2012-01-01
An adaptive stochastic model is introduced to simulate the behavior of real asset markets. The model adapts itself by changing its parameters automatically on the basis of the recent historical data. The basic idea underlying the model is that a random variable uniformly distributed within an interval with variable extremes can replicate the histograms of asset returns. These extremes are calculated according to the arrival of new market information. This adaptive model is applied to the daily returns of three well-known indices: Ibex35, Dow Jones and Nikkei, for three complete years. The model reproduces the histograms of the studied indices as well as their autocorrelation structures. It produces the same fat tails and the same power laws, with exactly the same exponents, as in the real indices. In addition, the model shows a great adaptation capability, anticipating the volatility evolution and showing the same volatility clusters observed in the assets. This approach provides a novel way to model asset markets with internal dynamics which changes quickly with time, making it impossible to define a fixed model to fit the empirical observations.
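The core idea, drawing each return uniformly from an interval whose extremes adapt to recent history, can be sketched directly. The sliding-window min/max rule, the window length `w` and the synthetic "observed" series below are illustrative stand-ins for the paper's actual update rule:

```python
import random

# Sketch of the adaptive uniform-interval idea: each simulated return is
# drawn uniformly between extremes set by the recent observed history.
# The min/max-of-window rule and the synthetic observed series are
# illustrative assumptions, not the paper's calibrated scheme.
random.seed(7)
w = 20                                            # assumed window length
observed = [random.gauss(0.0, 0.01 * (1 + (i % 100) / 50.0))
            for i in range(1000)]                 # volatility varies over time
simulated = []
for i in range(w, len(observed)):
    lo = min(observed[i - w:i])                   # adaptive lower extreme
    hi = max(observed[i - w:i])                   # adaptive upper extreme
    simulated.append(random.uniform(lo, hi))
print(len(simulated), min(simulated), max(simulated))
```

Because the extremes track the recent window, the simulated series widens and narrows with the local volatility of the observed series, which is the mechanism behind the volatility clustering the model reproduces.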
Unconditionally energy stable numerical schemes for phase-field vesicle membrane model
Guillén-González, F.; Tierra, G.
2018-02-01
Numerical schemes to simulate the deformation of vesicle membranes via minimizing the bending energy have been widely studied in recent times due to their connection with many biologically motivated problems. In this work we propose a new unconditionally energy stable numerical scheme for a vesicle membrane model that satisfies exactly the conservation of volume constraint and penalizes the surface area constraint. Moreover, we extend these ideas to present an unconditionally energy stable splitting scheme decoupling the interaction of the vesicle with a surrounding fluid. Finally, the good behavior of the proposed schemes is illustrated through several computational experiments.
A New Key Predistribution Scheme for Multiphase Sensor Networks Using a New Deployment Model
Directory of Open Access Journals (Sweden)
Boqing Zhou
2014-01-01
Full Text Available During the lifecycle of sensor networks, when the existing key predistribution schemes that use deployment knowledge are applied for pairwise key establishment and authentication between nodes, a new challenge arises: either the resilience against node capture attacks or the global connectivity will significantly decrease with time. In this paper, a new deployment model is developed for multiphase deployment sensor networks, and a new key management scheme is then proposed. Compared with the existing schemes using deployment knowledge, our scheme has better performance in global connectivity and in resilience against node capture attacks throughout the network lifecycle.
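For background, the connectivity such key predistribution schemes trade off can be computed from the classic random key ring analysis (Eschenauer-Gligor), which deployment-knowledge schemes refine: the probability that two nodes share at least one key when each draws a ring of k keys from a pool of P. The pool and ring sizes below are illustrative:

```python
from math import comb

# Eschenauer-Gligor key-sharing probability: each node draws a ring of
# k distinct keys from a common pool of P; two neighbours can establish
# a pairwise key iff their rings intersect. P and k are illustrative.
def share_prob(P, k):
    return 1.0 - comb(P - k, k) / comb(P, k)

p = share_prob(10000, 100)
print(p)   # roughly 0.64 with these sizes
```

Deployment knowledge lets neighbouring deployment groups draw from overlapping sub-pools, raising this probability for nodes that are actually likely to be neighbours without enlarging every ring.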
New Identity-Based Blind Signature and Blind Decryption Scheme in the Standard Model
Phong, Le Trieu; Ogata, Wakaha
We explicitly describe and analyse blind hierarchical identity-based encryption (blind HIBE) schemes, which are natural generalizations of blind IBE schemes [20]. We then use the blind HIBE schemes to construct: (1) an identity-based blind signature scheme secure in the standard model, under the computational Diffie-Hellman (CDH) assumption, and with much shorter signature size and lower communication cost compared to existing proposals; (2) a new mechanism supporting a user buying digital information over the Internet without revealing what he/she has bought, while protecting the providers from cheating users.
Directory of Open Access Journals (Sweden)
Fuqing Zhao
2016-01-01
Full Text Available A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, whose operators are static during the evolutionary process; this prevents the algorithm from fully exploiting the search space and makes it easy to become trapped in local optima. In this paper, a SPEA2 algorithm based on adaptively selected evolution operators (AOSPEA) is proposed. The proposed algorithm can adaptively select the simulated binary crossover, polynomial mutation, and differential evolution operators during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence performance of the proposed algorithm is analyzed with a Markov chain. Simulation results on the standard benchmark functions reveal that the proposed algorithm outperforms the other classical multiobjective evolutionary algorithms.
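Contribution-based adaptive operator selection can be sketched as roulette-wheel selection over per-operator credit scores. The archive is replaced here by a made-up Bernoulli reward per operator; the operator names, success rates and credit update are illustrative assumptions, not the AOSPEA internals:

```python
import random

# Sketch of contribution-based adaptive operator selection: operators
# whose offspring enter the archive more often accumulate more credit
# and are therefore picked more often. The per-operator success rates
# below are an invented stand-in for the real SPEA2 archive feedback.
random.seed(1)
ops = ["sbx_crossover", "polynomial_mutation", "differential_evolution"]
score = {op: 1.0 for op in ops}                  # optimistic initial credit
true_rate = {"sbx_crossover": 0.2, "polynomial_mutation": 0.1,
             "differential_evolution": 0.5}      # assumed archive-entry rates

for _ in range(5000):
    total = sum(score.values())
    r, acc = random.uniform(0.0, total), 0.0
    for op in ops:                               # roulette-wheel selection
        acc += score[op]
        if r <= acc:
            break
    if random.random() < true_rate[op]:          # did the offspring enter the archive?
        score[op] += 1.0                         # reward the operator that produced it
print(max(score, key=score.get))
```

Over time the selection probabilities concentrate on the operator that contributes most, which is the adaptive behaviour the abstract describes.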
Decentralized & Adaptive Load-Frequency Control Scheme of Variable Speed Wind Turbines
DEFF Research Database (Denmark)
Hoseinzadeh, Bakhtyar; Silva, Filipe Miguel Faria da; Bak, Claus Leth
2014-01-01
and therefore determining the contribution factor of each individual WT to gain an adaptive LFC approach. The Electrical Distance (ED) concept confirms that the locally measured voltage decay is a proper criterion of closeness to the disturbance place. Numerical simulations carried out in DigSilent PowerFactory...... software confirm the efficiency of proposed methodology to stabilize the power system after a severe contingency....
High-performance adaptive intelligent Direct Torque Control schemes for induction motor drives
Directory of Open Access Journals (Sweden)
Vasudevan M.
2005-01-01
Full Text Available This paper presents a detailed comparison between viable adaptive intelligent torque control strategies for induction motors, emphasizing their advantages and disadvantages. The scope of this paper is to choose an adaptive intelligent controller for an induction motor drive proposed for high-performance applications. Induction motors are characterized by complex, highly nonlinear, time-varying dynamics and inaccessibility of some states and outputs for measurement, and hence can be considered a challenging engineering problem. The advent of torque and flux control techniques has partially solved induction motor control problems, but these techniques are sensitive to drive parameter variations, and performance may deteriorate if conventional controllers are used. Intelligent controllers are considered potential candidates for such an application. In this paper, the performance of various sensorless intelligent Direct Torque Control (DTC) techniques for induction motors, namely neural network, fuzzy and genetic algorithm based torque controllers, is evaluated. Adaptive intelligent techniques are applied to achieve high-performance decoupled flux and torque control. This paper contributes: (i) development of a neural network algorithm for state selection in DTC; (ii) development of a new algorithm for state selection using the genetic algorithm principle; and (iii) development of fuzzy based DTC. Simulations have been performed using the trained state-selector neural network and the fuzzy controller in place of the conventional DTC controller. The results show agreement with those of conventional DTC.
An improved snow scheme for the ECMWF land surface model: Description and offline validation
Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder
2010-01-01
A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...
Eliseev, A. V.; Coumou, D.; Chernokulsky, A. V.; Petoukhov, V.; Petri, S.
2013-01-01
In this study we present a scheme for calculating the characteristics of multi-layer cloudiness and precipitation for Earth system models of intermediate complexity (EMICs). This scheme considers three-layer stratiform cloudiness and single-column convective clouds. It distinguishes between ice and
Road Sign Recognition with Fuzzy Adaptive Pre-Processing Models
Directory of Open Access Journals (Sweden)
Ming-Shi Wang
2012-05-01
Full Text Available A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes has been proposed. The first fuzzy inference scheme checks changes in light illumination and in rich red color within designated checking areas of a frame image. The other checks the variation of the vehicle's speed and steering-wheel angle to select an adaptive size and position for the detection area. The Adaboost classifier was employed to detect road sign candidates in an image, and the support vector machine technique was employed to recognize the content of the candidates. Prohibitory and warning road traffic signs are the processing targets of this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system not only overcomes problems of low illumination and rich red color around road signs but also offers high detection rates and high computing performance.
Advanced radar detection schemes under mismatched signal models
Bandiera, Francesco
2009-01-01
Adaptive detection of signals embedded in correlated Gaussian noise has been an active field of research in the last decades. This topic is important in many areas of signal processing such as, just to give some examples, radar, sonar, communications, and hyperspectral imaging. Most of the existing adaptive algorithms have been designed following the lead of the derivation of Kelly's detector which assumes perfect knowledge of the target steering vector. However, in realistic scenarios, mismatches are likely to occur due to both environmental and instrumental factors. When a mismatched signal
Enhanced Physics-Based Numerical Schemes for Two Classes of Turbulence Models
Directory of Open Access Journals (Sweden)
Leo G. Rebholz
2009-01-01
Full Text Available We present enhanced physics-based finite element schemes for two families of turbulence models, the NS-α models and the Stolz-Adams approximate deconvolution models. These schemes are delicate extensions of a method created for the Navier-Stokes equations in Rebholz (2007), and achieve high physical fidelity by admitting balances of both energy and helicity that match the true physics. The schemes' development requires carefully chosen discrete curl, discrete Laplacian, and discrete filtering operators in order to permit the necessary differential operator commutations.
An Adaptive Medium Access Parameter Prediction Scheme for IEEE 802.11 Real-Time Applications
Directory of Open Access Journals (Sweden)
Estefanía Coronado
2017-01-01
Full Text Available Multimedia communications have experienced unprecedented growth due mainly to the increase in content quality and the emergence of smart devices. The demand for these contents is tending towards wireless technologies. However, these transmissions are quite sensitive to network delays; therefore, ensuring an optimum QoS level becomes of great importance. The IEEE 802.11e amendment was released to address the lack of QoS capabilities in the original IEEE 802.11 standard. Accordingly, the Enhanced Distributed Channel Access (EDCA) function was introduced, allowing traffic streams to be differentiated through a group of Medium Access Control (MAC) parameters. Although EDCA recommends a default configuration for these parameters, it has been proved not to be optimum in many scenarios. In this work a dynamic prediction scheme for these parameters is presented. This approach ensures appropriate traffic differentiation while maintaining compatibility with stations without QoS support. As the APs are the only devices that use this algorithm, no changes are required to current network cards. The results show that, compared with default EDCA, the proposal improves both voice and video transmissions as well as the QoS level of the network.
Adaptive Control of MEMS Gyroscope Based on T-S Fuzzy Model
Directory of Open Access Journals (Sweden)
Yunmei Fang
2015-01-01
Full Text Available A multi-input multi-output (MIMO) Takagi-Sugeno (T-S) fuzzy model is built on the basis of a nonlinear model of a MEMS gyroscope. A reference model is adjusted so that a local linear state feedback controller can be designed for each T-S fuzzy submodel based on a parallel distributed compensation (PDC) method. A parameter estimation scheme for updating the parameters of the T-S fuzzy models is designed and analyzed based on Lyapunov theory. A new adaptive law can be selected as the former adaptive law plus a nonnegative variable, to guarantee that the derivative of the Lyapunov function is smaller than zero. The controller output is implemented on the nonlinear model and the T-S fuzzy model, respectively, for the purpose of comparison. Numerical simulations are investigated to verify the effectiveness of the proposed control scheme and the correctness of the T-S fuzzy model.
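The defining T-S construction, blending local linear models through normalized membership functions, can be shown on a scalar system. The membership shapes and local gains below are invented for illustration and are not the identified MEMS gyroscope submodels:

```python
import math

# T-S fuzzy blending of two local linear models x' = a_i * x through
# normalized membership functions of x. Memberships and gains are
# illustrative assumptions, not the paper's identified submodels.
def membership(x):
    m1 = math.exp(-x * x)            # rule 1: "x is near zero"
    m2 = 1.0 - m1                    # rule 2: "x is far from zero"
    s = m1 + m2
    return m1 / s, m2 / s            # normalized firing strengths

a = (-1.0, -3.0)                     # local dynamics, both stable
x, dt = 2.0, 0.01
for _ in range(500):                 # simulate 5 time units with explicit Euler
    w1, w2 = membership(x)
    xdot = (w1 * a[0] + w2 * a[1]) * x   # T-S blended dynamics
    x += dt * xdot
print(abs(x))                        # state converges toward the origin
```

Because the blended dynamics interpolate smoothly between stable local models, the trajectory decays to zero; PDC extends this by blending local state-feedback gains with the same firing strengths.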
Godunov-type schemes for hydrodynamic and magnetohydrodynamic modeling
International Nuclear Information System (INIS)
Vides-Higueros, Jeaniffer
2014-01-01
The main objective of this thesis concerns the study, design and numerical implementation of finite volume schemes based on the so-called Godunov-type solvers for hyperbolic systems of nonlinear conservation laws, with special attention given to the Euler equations and the ideal MHD equations. First, we derive a simple and genuinely two-dimensional Riemann solver for general conservation laws that can be regarded as an actual 2D generalization of the HLL approach, relying heavily on consistency with the integral formulation and on the proper use of Rankine-Hugoniot relations to yield expressions that are simple enough to be applied in both structured and unstructured contexts. Then, a comparison between two methods aiming to numerically maintain the divergence constraint of the magnetic field for the ideal MHD equations is performed, and we show how the 2D Riemann solver can be employed to obtain robust divergence-free simulations. Next, we derive a relaxation scheme that incorporates gravity source terms derived from a potential into the hydrodynamic equations, an important problem in astrophysics. Finally, we review the design of finite volume approximations in curvilinear coordinates, providing a fresher view on an alternative discretization approach. Throughout this thesis, numerous numerical results are shown. (author) [fr]
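The 1D HLL construction that the thesis generalizes to two dimensions can be sketched for the Euler equations with simple Davis wave-speed estimates; gamma and the Sod-like interface states are illustrative:

```python
# One-dimensional HLL flux for the Euler equations with Davis wave-speed
# estimates; a 2D HLL solver generalizes this single-interface building
# block. gamma and the test states are illustrative choices.
gamma = 1.4

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
    return (mom, mom * u + p, (E + p) * u)

def hll_flux(UL, UR):
    def speed(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
        return u, (gamma * p / rho) ** 0.5        # velocity and sound speed
    uL, aL = speed(UL)
    uR, aR = speed(UR)
    SL = min(uL - aL, uR - aR)                    # Davis left/right wave speeds
    SR = max(uL + aL, uR + aR)
    FL, FR = flux(UL), flux(UR)
    if SL >= 0.0:
        return FL                                 # supersonic to the right
    if SR <= 0.0:
        return FR                                 # supersonic to the left
    return tuple((SR * FL[i] - SL * FR[i] + SL * SR * (UR[i] - UL[i]))
                 / (SR - SL) for i in range(3))

# Sod-like interface: high-pressure left state, low-pressure right state.
UL = (1.0, 0.0, 1.0 / (gamma - 1.0))       # rho=1, u=0, p=1
UR = (0.125, 0.0, 0.1 / (gamma - 1.0))     # rho=0.125, u=0, p=0.1
print(hll_flux(UL, UR))
```

The subsonic branch is the single intermediate-state average that makes HLL consistent with the integral form of the conservation law, which is exactly the property the thesis exploits when extending the construction to genuinely two-dimensional interactions.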
Adaptive regression for modeling nonlinear relationships
Knafl, George J
2016-01-01
This book presents methods for investigating whether relationships are linear or nonlinear and for adaptively fitting appropriate models when they are nonlinear. Data analysts will learn how to incorporate nonlinearity in one or more predictor variables into regression models for different types of outcome variables. Such nonlinear dependence is often not considered in applied research, yet nonlinear relationships are common and so need to be addressed. A standard linear analysis can produce misleading conclusions, while a nonlinear analysis can provide novel insights into data, not otherwise possible. A variety of examples of the benefits of modeling nonlinear relationships are presented throughout the book. Methods are covered using what are called fractional polynomials based on real-valued power transformations of primary predictor variables combined with model selection based on likelihood cross-validation. The book covers how to formulate and conduct such adaptive fractional polynomial modeling in the s...
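First-degree fractional polynomial selection can be sketched as fitting y on x^p for each power in the standard FP set and keeping the best fit. Plain least squares stands in here for the book's likelihood cross-validation, and the synthetic data are generated with a square-root relationship so the right power is known:

```python
import math, random

# Sketch of first-degree fractional polynomial selection: fit
# y ~ b0 + b1 * x^p for each candidate power p and keep the best
# least-squares fit (the book pairs this with likelihood
# cross-validation; plain SSE is a simplified stand-in).
random.seed(0)
xs = [0.5 + 0.01 * i for i in range(200)]
ys = [2.0 + 3.0 * math.sqrt(x) + random.gauss(0.0, 0.01) for x in xs]

def fp(x, p):
    return math.log(x) if p == 0 else x ** p   # FP convention: p = 0 means log

def sse_for_power(p):
    t = [fp(x, p) for x in xs]
    mt, my = sum(t) / len(t), sum(ys) / len(ys)
    b1 = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, ys))
          / sum((ti - mt) ** 2 for ti in t))
    b0 = my - b1 * mt
    return sum((yi - b0 - b1 * ti) ** 2 for ti, yi in zip(t, ys))

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]       # the standard FP power set
best = min(powers, key=sse_for_power)
print(best)   # data were generated with sqrt(x), so 0.5 should win
```

Second-degree fractional polynomials extend this by searching over pairs of powers, and model selection then compares the fitted alternatives rather than assuming linearity.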
Modeling Adaptive Behavior for Systems Design
DEFF Research Database (Denmark)
Rasmussen, Jens
1994-01-01
Field studies in modern work systems and analysis of recent major accidents have pointed to a need for better models of the adaptive behavior of individuals and organizations operating in a dynamic and highly competitive environment. The paper presents a discussion of some key characteristics of the predictive models required for the design of work support systems, that is, information systems serving as the human-work interface. Three basic issues are in focus: 1) some fundamental problems in the analysis and modeling of modern dynamic work systems caused by the adaptive nature of human behavior; 2) the basic difference between the models of system functions used in engineering and design and those evolving from basic research within the various academic disciplines; and finally 3) the models and methods required for closed-loop, feedback system design.
Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments
Energy Technology Data Exchange (ETDEWEB)
Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao
2009-05-20
Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab-initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.
An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model
Directory of Open Access Journals (Sweden)
Guomin Zhou
2017-01-01
Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). Most existing schemes rest on number-theoretic problems; while these problems are still secure at the time of this research, the situation could change with advances in quantum computing. There is thus a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis for constructing other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.
Error estimation and adaptive chemical transport modeling
Directory of Open Access Journals (Sweden)
Malte Braack
2014-09-01
Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information on where to use more accurate models. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems, which carry sensitivity information. The concept is demonstrated by means of ozone formation and pollution emission.
Semantic models for adaptive interactive systems
Hussein, Tim; Lukosch, Stephan; Ziegler, Jürgen; Calvary, Gaëlle
2013-01-01
Providing insights into methodologies for designing adaptive systems based on semantic data, and introducing semantic models that can be used for building interactive systems, this book showcases many of the applications made possible by the use of semantic models. Ontologies may enhance the functional coverage of an interactive system as well as its visualization and interaction capabilities in various ways. Semantic models can also contribute to bridging gaps; for example, between user models, context-aware interfaces, and model-driven UI generation. There is considerable potential for using
An explanatory model of underwater adaptation
Directory of Open Access Journals (Sweden)
Joaquín Colodro
Full Text Available The underwater environment is an extreme environment that requires a process of human adaptation with specific psychophysiological demands to ensure survival and productive activity. From the standpoint of existing models of intelligence, personality and performance, in this explanatory study we have analyzed the contribution of individual differences in explaining the adaptation of military personnel in a stressful environment. Structural equation analysis was employed to verify a model representing the direct effects of psychological variables on individual adaptation to an adverse environment, and we have been able to confirm, during basic military diving courses, the structural relationships among these variables and their ability to predict a third of the variance of a criterion that has been studied very little to date. In this way, we have confirmed in a sample of professionals (N = 575 the direct relationship of emotional adjustment, conscientiousness and general mental ability with underwater adaptation, as well as the inverse relationship of emotional reactivity. These constructs are the psychological basis for working under water, contributing to an improved adaptation to this environment and promoting risk prevention and safety in diving activities.
Directory of Open Access Journals (Sweden)
Thang M. Luong
2018-01-01
Full Text Available A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10s to 100s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
Luong, Thang
2018-01-22
A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10s to 100s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
1997-01-01
A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
2001-01-01
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
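The item-selection step described in both records (maximize test information at the current ability estimate subject to content constraints) can be sketched with a greedy rule; note that the papers assemble a full "shadow test" by 0-1 programming rather than picking items greedily, and the Rasch information function and quota constraints below are simplifying assumptions:

```python
import math

def rasch_info(b, theta):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def pick_next_item(items, theta, counts, limits, administered):
    """Greedy constrained selection: most informative unused item whose
    content category still has room under its quota.

    items: list of (difficulty, category); counts: items given per category;
    limits: category quotas; administered: set of already-used item indices."""
    best, best_info = None, -1.0
    for idx, (b, cat) in enumerate(items):
        if idx in administered or counts.get(cat, 0) >= limits[cat]:
            continue
        info = rasch_info(b, theta)
        if info > best_info:
            best, best_info = idx, info
    return best
```

A quota that is already filled forces the selection onto the next-best category, which is the essence of constrained adaptive testing.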
SEMPATH Ontology: modeling multidisciplinary treatment schemes utilizing semantics.
Alexandrou, Dimitrios Al; Pardalis, Konstantinos V; Bouras, Thanassis D; Karakitsos, Petros; Mentzas, Gregoris N
2012-03-01
A dramatic increase in the demand for treatment quality has occurred during the last decades. The main challenge to be confronted in order to increase treatment quality is the personalization of treatment, since each patient constitutes a unique case. Healthcare provision is a complex environment, since healthcare provision organizations are highly multidisciplinary. In this paper, we present a conceptualization of the domain of clinical pathways (CP). The SEMPATH (SEMantic PATHways) Ontology comprises three main parts: 1) the CP part; 2) the business and finance part; and 3) the quality assurance part. Our implementation achieves the conceptualization of the multidisciplinary domain of healthcare provision, to be further utilized for the implementation of a Semantic Web Rule Language (SWRL) rules repository. Finally, the SEMPATH Ontology is utilized for the definition of a set of SWRL rules for the human papillomavirus (HPV) disease and its treatment scheme. © 2012 IEEE
Kannan, Kidambi S.; Dasgupta, Abhijit
1998-04-01
Deformation control of smart structures and damage detection in smart composites by magneto-mechanical tagging are just a few of the increasing number of applications of polydomain, polycrystalline magnetostrictive materials currently being researched. Robust computational models of bulk magnetostriction would be of great assistance to designers of smart structures for performance optimization and the development of control strategies. This paper discusses the limitations of existing tools and reports on the work of the authors in developing a 3D nonlinear continuum finite element scheme for magnetostrictive structures, based on an appropriate Galerkin variational principle and incremental constitutive relations. The unique problems posed by the form of the equations governing magneto-mechanical interactions, as well as their impact on the proper choice of variational and finite element discretization schemes, are discussed. An adaptation of vectorial edge functions for interpolation of the magnetic field in hexahedral elements is outlined. The differences between the proposed finite element scheme and available formulations are also discussed in this paper. Computational results obtained from the newly proposed scheme will be presented in a future paper.
Soft rotator model and ²⁴⁶Cm low-lying level scheme
Energy Technology Data Exchange (ETDEWEB)
Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)
1997-03-01
A non-axial soft-rotator nuclear model is suggested as a self-consistent approach for the interpretation of level schemes, γ-transition probabilities and neutron interactions with even-even nuclei. (author)
International Nuclear Information System (INIS)
Mortazavi, S.M.J.; Mozdarani, H.
2000-01-01
Human lymphocytes exposed to low doses of X-rays become less susceptible to the induction of chromosome aberrations by subsequent exposure to high doses of X-rays. This has been termed the radioadaptive response. One of the most important questions in adaptive response studies has been the possible existence of an optimum adapting dose. Early experiments indicated that this response could be induced by low doses of X-rays from 1 cGy to 20 cGy. Recently, it has been shown that the time scheme of exposure to the adapting and challenge doses plays an important role in determining the magnitude of the induced adaptive response. In this study, using the optimum irradiation time scheme (24-48 h), we monitored chromosome aberrations as a cytogenetic endpoint to assess the magnitude of adaptation to ionizing radiation in cultured human lymphocytes. Lymphocytes were pre-exposed to an adapting dose of 1-20 cGy at 24 hours, before an acute challenge dose of 1 or 2 Gy at 48 hours. Cells were fixed at 54 hours. Lymphocytes pretreated with adapting doses of 5 or 10 cGy had significantly fewer chromosome aberrations. Although the lymphocytes of some of our blood donors pre-treated with 1 or 20 cGy adapting doses showed an adaptive response, the pooled data (all donors) indicated that no induction of the adaptive response can be observed at these doses. The overall pattern of the induced adaptive response indicated that in human lymphocytes (at least under the above-mentioned irradiation scheme), 5 cGy and 10 cGy are the optimum adapting doses. (author)
Adaptive numerical modeling of dynamic crack propagation
International Nuclear Information System (INIS)
Adouani, H.; Tie, B.; Berdin, C.; Aubry, D.
2006-01-01
We propose an adaptive numerical strategy that aims at developing reliable and efficient numerical tools to model dynamic crack propagation and crack arrest. We use the cohesive zone theory as the behavior of interface-type elements to model the crack. Since the crack path is generally unknown beforehand, adaptive meshing is proposed to model the dynamic crack propagation. The dynamic study requires the development of specific solvers for time integration. As both the geometry and the finite element mesh of the studied structure evolve in time during the transient analysis, the stability of the dynamic solver becomes a major concern. For this purpose, we use the space-time discontinuous Galerkin finite element method, well known to provide a natural framework to manage meshes that evolve in time. As an important result, we prove that the space-time discontinuous Galerkin solver is unconditionally stable when the dynamic crack propagation is modeled by the cohesive zone theory, which is highly non-linear. (authors)
Directory of Open Access Journals (Sweden)
Ku David N
2010-07-01
Full Text Available Abstract Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m^2/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140% and 116% was demonstrated between the experimental
Carroll, Gráinne T; Devereux, Paul D; Ku, David N; McGloughlin, Timothy M; Walsh, Michael T
2010-07-19
The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m^2/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140% and 116% was demonstrated between the experimental
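The scheme sensitivity reported above is exactly what the cell Peclet number predicts: at Pe >> 1, the false diffusion of the First-Order Upwind scheme can dwarf the physical diffusivity. A back-of-the-envelope check, using the study's diffusivity but hypothetical velocity and mesh spacing:

```python
def cell_peclet(velocity, dx, diffusivity):
    """Cell Peclet number Pe = u * dx / D; convection dominates for Pe >> 1."""
    return velocity * dx / diffusivity

def upwind_false_diffusion(velocity, dx):
    """Leading-order numerical diffusivity of 1D first-order upwind: u * dx / 2."""
    return 0.5 * velocity * dx

# Diffusivity from the study; velocity and mesh spacing are assumed values.
D_phys = 3.125e-10          # m^2/s
u, dx = 0.1, 1.0e-4         # m/s, m (hypothetical)
Pe = cell_peclet(u, dx, D_phys)
ratio = upwind_false_diffusion(u, dx) / D_phys  # false vs physical diffusion
```

With these numbers the numerical diffusivity exceeds the physical one by four orders of magnitude, which is why the first-order and higher-order schemes give such different concentration fields.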
Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.
Banks, R F; Baldasano, J M
2016-12-01
Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well for daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperatures closest to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against urban, suburban, and rural background network stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m^-3) with respect to surface ozone at urban stations, while the YSU scheme performed best for simulated nitrogen dioxide (-6.48 μg m^-3). The poorest results were for simulated particulate matter, with similar results found for all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
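The headline score used above is the mean bias; a minimal version of that metric and its usual companion (assuming paired model/observation samples) is:

```python
def mean_bias(model, obs):
    """Mean bias MB = mean(model - obs); negative means underprediction."""
    assert len(model) == len(obs) and model, "paired, non-empty samples"
    return sum(m - o for m, o in zip(model, obs)) / len(model)

def rmse(model, obs):
    """Root-mean-square error, a common companion score to the mean bias."""
    assert len(model) == len(obs) and model, "paired, non-empty samples"
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(model)) ** 0.5
```

A mean bias near zero can hide compensating errors, which is why bias is reported alongside RMSE when ranking schemes.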
Model reference adaptive systems some examples.
Landau, I. D.; Sinner, E.; Courtiol, B.
1972-01-01
A direct design method is derived for several single-input single-output model reference adaptive systems (M.R.A.S.). The approach used helps to clarify the various steps involved in a design that utilizes the hyperstability concept. An example of a multi-input, multi-output M.R.A.S. is also discussed. Attention is given to the problem of a series compensator. It is pointed out that a series compensator which contains derivative terms must generally be introduced in the adaptation mechanism in order to assure asymptotic hyperstability. Results obtained by simulation of an M.R.A.S. on an analog computer are also presented.
Inference for Optimal Dynamic Treatment Regimes using an Adaptive m-out-of-n Bootstrap Scheme
Chakraborty, Bibhas; Laber, Eric B.; Zhao, Yingqi
2013-01-01
Summary A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches to inference, like the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing this same theoretical property. We provide an extensive simulation study to compare the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R-package and have been made available on the Comprehensive R-Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example. PMID:23845276
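The m-out-of-n idea can be sketched as follows; this version uses a fixed m and a percentile-type interval for the sample mean, whereas the paper's contribution is an adaptive, data-driven choice of m for the nonregular Q-learning estimand:

```python
import math
import random
import statistics

def m_out_of_n_ci(data, m, stat=statistics.mean, B=2000, alpha=0.05, seed=0):
    """Percentile-type m-out-of-n bootstrap interval for stat(data).

    Resamples of size m < n are drawn with replacement; the roots
    sqrt(m) * (theta*_m - theta_hat) approximate the sampling law of
    sqrt(n) * (theta_hat - theta) even in nonregular settings. Here m
    is fixed by the caller; the paper's adaptive rule for choosing m
    (driven by an estimated degree of nonregularity) is not reproduced."""
    rng = random.Random(seed)
    n = len(data)
    theta_hat = stat(data)
    roots = sorted(
        math.sqrt(m) * (stat([data[rng.randrange(n)] for _ in range(m)]) - theta_hat)
        for _ in range(B)
    )
    lo_root = roots[int(alpha / 2 * B)]
    hi_root = roots[int((1 - alpha / 2) * B)]
    # Invert the root to get a confidence interval for theta.
    return theta_hat - hi_root / math.sqrt(n), theta_hat - lo_root / math.sqrt(n)
```

Setting m = n recovers the ordinary bootstrap; shrinking m trades interval width for validity under nonsmoothness.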
A Schelling model with adaptive tolerance.
Urselmans, Linda; Phelps, Steve
2018-01-01
We introduce a Schelling model in which people are modelled as agents following simple behavioural rules which dictate their tolerance to others, their corresponding preference for particular locations, and in turn their movement through a geographic or social space. Our innovation over previous work is to allow agents to adapt their tolerance to others in response to their local environment, in line with contemporary theories from social psychology. We show that adaptive tolerance leads to a polarization in tolerance levels, with distinct modes at either extreme of the distribution. Moreover, agents self-organize into communities of like-tolerance, just as they congregate with those of same colour. Our results are robust not only to variations in free parameters, but also experimental treatments in which migrants are dynamically introduced into the native population. We argue that this model provides one possible parsimonious explanation of the political landscape circa 2016.
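One way to sketch the adaptive-tolerance mechanism is on a 1D ring, with agents hardening their tolerance after an unhappy move and relaxing it when content; the update rule and parameters below are illustrative assumptions, not the paper's exact specification:

```python
import random

def step(grid, tol, d_tol=0.05, rng=random.Random(1)):
    """One sweep of a 1D Schelling ring with adaptive tolerance.

    grid[i] is 0 (empty) or an agent colour (1 or 2); tol[i] is that
    agent's tolerance for unlike neighbours. Unhappy agents move to a
    random empty cell and harden (lower) their tolerance; happy agents
    relax (raise) it."""
    n = len(grid)
    for i in range(n):
        if grid[i] == 0:
            continue
        nbrs = [grid[(i + k) % n] for k in (-1, 1) if grid[(i + k) % n] != 0]
        unlike = sum(1 for c in nbrs if c != grid[i])
        frac = unlike / len(nbrs) if nbrs else 0.0
        if frac > tol[i]:                       # unhappy: harden and move
            tol[i] = max(0.0, tol[i] - d_tol)
            empties = [j for j in range(n) if grid[j] == 0]
            if empties:
                j = rng.choice(empties)
                grid[j], tol[j] = grid[i], tol[i]
                grid[i], tol[i] = 0, 0.0
        else:                                   # happy: relax
            tol[i] = min(1.0, tol[i] + d_tol)
    return grid, tol
```

Iterating this rule lets tolerance co-evolve with segregation, the feedback loop behind the polarization the paper reports.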
A hybrid convection scheme for use in non-hydrostatic numerical weather prediction models
Directory of Open Access Journals (Sweden)
Volker Kuell
2008-12-01
Full Text Available The correct representation of convection in numerical weather prediction (NWP) models is essential for quantitative precipitation forecasts. Due to its small horizontal scale, convection usually has to be parameterized, e.g. by mass flux convection schemes. Classical schemes originally developed for use in coarse-grid NWP models assume zero net convective mass flux, because the whole circulation of a convective cell is confined to the local grid column and all convective mass fluxes cancel out. However, in contemporary NWP models with grid sizes of a few kilometers this assumption becomes questionable, because here convection is partially resolved on the grid. To overcome this conceptual problem we propose a hybrid mass flux convection scheme (HYMACS) in which only the convective updrafts and downdrafts are parameterized. The generation of the larger-scale environmental subsidence, which may cover several grid columns, is transferred to the grid-scale equations. This means that the convection scheme now has to generate a net convective mass flux exerting a direct dynamical forcing on the grid-scale model via pressure gradient forces. The hybrid convection scheme implemented into the COSMO model of the Deutscher Wetterdienst (DWD) is tested in an idealized simulation of a sea breeze circulation initiating convection in a realistic manner. The results are compared with analogous simulations using the classical Tiedtke and Kain-Fritsch convection schemes.
Automated adaptive sliding mode control scheme for a class of real ...
Indian Academy of Sciences (India)
A class of real complicated systems, including chemical reactions, biological systems, information processing, laser systems, electrical circuits, information exchange, brain activities modelling, secure communication and other related ones can be presented through nonlinear and non-identical hyper-chaotic systems.
Transfer Scheme Evaluation Model for a Transportation Hub based on Vectorial Angle Cosine
Directory of Open Access Journals (Sweden)
Li-Ya Yao
2014-07-01
Full Text Available As the most important nodes in a public transport network, transport hubs determine the efficiency of the entire network. In order to put forward effective transfer schemes, a comprehensive evaluation index system for the transfer efficiency of urban transport hubs was built, evaluation indexes were quantified, and a multi-objective decision evaluation model for hub transfer schemes was established based on the vectorial angle cosine. A qualitative and quantitative analysis of the factors affecting transfer efficiency is conducted, covering passenger satisfaction, transfer coordination, transfer efficiency, smoothness, economy, etc. Thus, a new approach to transfer scheme evaluation is proposed.
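The core computation, scoring a scheme's quantified index vector by its angular closeness to an ideal vector, can be sketched as follows (scheme names and index values are hypothetical):

```python
import math

def angle_cosine(u, v):
    """Cosine of the angle between two index vectors (1 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def rank_schemes(schemes, ideal):
    """Rank (name, index-vector) pairs by angular closeness to the ideal vector."""
    return sorted(schemes, key=lambda kv: angle_cosine(kv[1], ideal), reverse=True)

# Hypothetical indexes: satisfaction, coordination, efficiency, smoothness, economy.
ideal = [1.0, 1.0, 1.0, 1.0, 1.0]
schemes = [("bus-metro", [0.9, 0.8, 0.9, 0.7, 0.8]),
           ("walk-transfer", [0.9, 0.1, 0.2, 0.3, 0.9])]
```

Because the cosine measures direction rather than magnitude, a scheme that is uniformly good across all indexes outranks one that excels on a few indexes but fails on others.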
Energy Technology Data Exchange (ETDEWEB)
Silva, Filipe da, E-mail: tanatos@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Pinto, Martin Campos, E-mail: campos@ann.jussieu.fr [CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Després, Bruno, E-mail: despres@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Heuraux, Stéphane, E-mail: stephane.heuraux@univ-lorraine.fr [Institut Jean Lamour, UMR 7198, CNRS – University Lorraine, Vandoeuvre (France)
2015-08-15
This work analyzes the stability of the Yee scheme for the non-stationary Maxwell equations coupled with a linear current model with density fluctuations. We show that the usual procedure may yield an unstable scheme for physical situations that correspond to strongly magnetized plasmas in X-mode (TE) polarization. We propose to use a first-order clustered discretization of the vectorial product, which restores a stable coupling. We validate the schemes on test cases representative of direct numerical simulations of X-mode in a magnetic fusion plasma, including turbulence.
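For context, the uncoupled Yee update that the paper starts from is stable in vacuum under the usual Courant condition; a bare 1D sketch in normalized units (grid size, source, and Courant number are illustrative):

```python
import numpy as np

def yee_1d(nx=200, nt=300, courant=0.99):
    """Bare 1D Yee (FDTD) update for vacuum Maxwell's equations in
    normalized units (c = dx = 1): E and H live on staggered grids and
    leapfrog in time. Stable for Courant number <= 1; the paper's point
    is that naively coupling a linear current term can break this
    stability in magnetized X-mode plasmas."""
    dt = courant                    # dt = courant * dx / c with dx = c = 1
    ez = np.zeros(nx)               # E at integer nodes (fixed ends: PEC walls)
    hy = np.zeros(nx - 1)           # H at half-integer nodes
    for n in range(nt):
        hy += dt * (ez[1:] - ez[:-1])
        ez[1:-1] += dt * (hy[1:] - hy[:-1])
        ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez
```

Pushing the Courant number above 1 makes the fields blow up exponentially, which is the baseline stability boundary the coupled scheme must preserve.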
Adapting virtual camera behaviour through player modelling
DEFF Research Database (Denmark)
Burelli, Paolo; Yannakakis, Georgios N.
2015-01-01
Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player's preferences on virtual camera movements and we employ the resulting models to tailor...
Modeling stable orographic precipitation at small scales. The impact of the autoconversion scheme
Energy Technology Data Exchange (ETDEWEB)
Zaengl, Guenther; Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Wobrock, Wolfram [Clermont Univ., Univ. Blaise Pascal, Lab. de Meteorologie Physique, Clermont-Ferrand (France); CNRS, INSU, UMR, LaMP, Aubiere (France)
2010-10-15
This study presents idealized numerical simulations of moist airflow over a narrow isolated mountain in order to investigate the impact of the autoconversion scheme on simulated precipitation. The default setup generates an isolated water cloud over the mountain, implying that autoconversion of cloud water into rain is the only process capable of initiating precipitation. For comparison, a set of sensitivity experiments considers the classical seeder-feeder configuration, which means that ambient precipitation generated by large-scale lifting is intensified within the orographic cloud. Most simulations have been performed with the nonhydrostatic COSMO model developed at the German Weather Service (DWD), comparing three different autoconversion schemes of varying sophistication. For reference, a subset of experiments has also been performed with a spectral (bin) microphysics model. While precipitation enhancement via the seeder-feeder mechanism turns out to be relatively insensitive against the autoconversion scheme because accretion is the leading process in this case, simulated precipitation amounts can vary by 1-2 orders of magnitude for purely orographic precipitation. By comparison to the reference experiments conducted with the bin model, the Seifert-Beheng autoconversion scheme (which is the default in the COSMO model) and the Berry-Reinhardt scheme are found to represent the nonlinear behaviour of orographic precipitation reasonably well, whereas the linear approach of the Kessler scheme appears to be less adequate. (orig.)
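Of the schemes compared, the Kessler approach is the simplest: autoconversion is a linear ramp in cloud water above a threshold. A sketch with commonly quoted (but here assumed) parameter values:

```python
def kessler_autoconversion(qc, k=1.0e-3, qc_crit=5.0e-4):
    """Kessler-type autoconversion rate (kg/kg/s): zero below a critical
    cloud-water mixing ratio qc_crit, linear in the excess above it.
    k (1/s) and qc_crit (kg/kg) are commonly quoted textbook values,
    assumed here; the Seifert-Beheng and Berry-Reinhardt schemes replace
    this linear ramp with strongly nonlinear dependencies on qc."""
    return k * max(qc - qc_crit, 0.0)
```

The study's finding, order-of-magnitude spread in purely orographic precipitation across schemes, traces back to how differently such rate functions behave near the onset of rain formation.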
Estimation of Stator winding faults in induction motors using an adaptive observer scheme
DEFF Research Database (Denmark)
Kallesøe, C. S.; Vadstrup, P.; Rasmussen, Henrik
2004-01-01
and an expression of the current in the short circuit. Moreover, the states of the motor are estimated, meaning that the magnetizing currents are made available even though a fault has happened in the motor. To be able to develop this observer, a model particularly suitable for the chosen observer design is also...... derived. The efficiency of the proposed observer is demonstrated by tests performed on a test setup with a custom-designed induction motor. With this motor it is possible to simulate inter-turn short circuit faults....
Post-processing scheme for modelling the lithospheric magnetic field
Directory of Open Access Journals (Sweden)
V. Lesur
2013-03-01
Full Text Available We investigated how the noise in satellite magnetic data affects magnetic lithospheric field models derived from these data in the special case where this noise is correlated along satellite orbit tracks. To do so, we describe the satellite data noise as a perturbation magnetic field scaled independently for each orbit, where the scaling factor is a random variable, normally distributed with zero mean. Under this assumption, we have been able to derive a model for errors in lithospheric models generated by the correlated satellite data noise. Unless the perturbation field is known, estimating the noise in the lithospheric field model is a non-linear inverse problem. We therefore propose an iterative post-processing technique to estimate both the lithospheric field model and its associated noise model. The technique has been successfully applied to derive a lithospheric field model from CHAMP satellite data up to spherical harmonic degree 120. The model is in agreement with other existing models. The technique can, in principle, be extended to all sorts of potential field data with "along-track" correlated errors.
Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd
2012-09-01
The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports for the years 1972-2010. The claims comprise the amounts from all seven benefits offered by SOCSO: Invalidity Pension, Invalidity Grant, Survivors Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Future claims of the Invalidity Pension Scheme will be predicted using univariate forecasting models, to project the future claims among the workforce in Malaysia.
ADAPTATION MODEL FOR REDUCING THE MANAGERIAL STRESS
Directory of Open Access Journals (Sweden)
VIOLETA GLIGOROVSKI
2017-12-01
Full Text Available Changes are an inseparable component of the company's life cycle and can contribute to its essential growth in the future. The purpose of this paper is to explain the managerial stress caused by the implementation of changes and to create an adaptation model for decreasing it. How successfully a manager leads a change-implementation project, and how well they amortize stress among employees, depends largely on their expertise, knowledge and skill in accurately and comprehensively informing and integrating the employees in the overall process. The adaptation model is a new approach and a recommendation for managers dealing with stress when changes are implemented. Methodology: the data were collected through a questionnaire submitted to 61 respondents (managers) and measured on a Likert scale from 1 to 7. With the help of the Likert scale, stress was quantified in relation to the variables identified as most important for the researched issues. The adaptation model (a new approach for amortizing changes) was created using the DIA Diagram application, to show the relations between the manager and the relevant amortization approaches.
A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model
Directory of Open Access Journals (Sweden)
Lili Zhang
2014-01-01
Full Text Available A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of the original signer without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number-theoretic problems such as the discrete logarithm and bilinear pairings. Unfortunately, these underlying problems will be solvable in the post-quantum era. Lattice-based cryptography is enjoying great interest these days, due to implementation simplicity and provable security reductions. Moreover, lattice-based cryptography is believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven to be strongly unforgeable under the standard hardness assumptions of the short integer solution problem (SIS) and the inhomogeneous small integer solution problem (ISIS). Furthermore, the secret key size and the signature length of our scheme are invariant and much shorter than those of the previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.
A Scratchpad Memory Allocation Scheme for Dataflow Models
2008-08-25
perform via static analysis of C/C++. We use the heterochronous dataflow (HDF) model of computation [16, 39] in Ptolemy II [11] as a means to specify the ... buffer data) as the key memory requirements [9]. 4.1 Structure of an HDF Model: We use Ptolemy II's graphical interface and the HDF domain to specify the ... algorithm. The allocation algorithm was implemented in Ptolemy II [11], a Java-based framework for studying modeling, simulation and design of concurrent
A seawater desalination scheme for global hydrological models
Hanasaki, Naota; Yoshikawa, Sayaka; Kakinuma, Kaoru; Kanae, Shinjiro
2016-10-01
Seawater desalination is a practical technology for providing fresh water to coastal arid regions. Indeed, the use of desalination is rapidly increasing due to growing water demand in these areas and decreases in production costs due to technological advances. In this study, we developed a model to estimate the areas where seawater desalination is likely to be used as a major water source and the likely volume of production. The model was designed to be incorporated into global hydrological models (GHMs) that explicitly include human water usage. The model requires spatially detailed information on climate, income levels, and industrial and municipal water use, which represent standard input/output data in GHMs. The model was applied to a specific historical year (2005) and showed fairly good reproduction of the present geographical distribution and national production of desalinated water in the world. The model was applied globally to two periods in the future (2011-2040 and 2041-2070) under three distinct socioeconomic conditions, i.e., SSP (shared socioeconomic pathway) 1, SSP2, and SSP3. The results indicate that the usage of seawater desalination will have expanded considerably in geographical extent, and that production will have increased by 1.4-2.1-fold in 2011-2040 compared to the present (from 2.8 × 10⁹ m³ yr⁻¹ in 2005 to 4.0-6.0 × 10⁹ m³ yr⁻¹), and 6.7-17.3-fold in 2041-2070 (from 18.7 to 48.6 × 10⁹ m³ yr⁻¹). The estimated global costs for production for each period are USD 1.1-10.6 × 10⁹ (0.002-0.019 % of the total global GDP), USD 1.6-22.8 × 10⁹ (0.001-0.020 %), and USD 7.5-183.9 × 10⁹ (0.002-0.100 %), respectively. The large spreads in these projections are primarily attributable to variations within the socioeconomic scenarios.
Directory of Open Access Journals (Sweden)
Jenhui Chen
2015-01-01
Full Text Available This paper deals with the problem of triggering the handoff procedure at an appropriate point of time to reduce the ping-pong effect problem in the long-term evolution advanced (LTE-A) network. In the meantime, we have also studied a dynamic handoff threshold scheme, named adaptive measurement report period and handoff threshold (AMPHT), based on the user equipment’s (UE’s) reference signal received quality (RSRQ) variation and the moving velocity of the UE. AMPHT reduces the probability of unnecessarily premature handoff decision making and also avoids the problem of handoff failure due to too-late handoff decision making when the moving velocity of the UE is high. AMPHT is achieved by two critical parameters: (1) a dynamic RSRQ threshold for handoff making; (2) a dynamic interval of time for the UE’s RSRQ reporting. The performance of AMPHT is validated by comparing numerical experiments (MATLAB) with simulation results (the ns-3 LENA module). Our experiments show that AMPHT reduces the premature handoff probability by 34% at most at a low moving velocity and reduces the handoff failure probability by 25% at a high moving velocity. Additionally, AMPHT can reduce a large number of unnecessary handoff overheads and can be easily implemented because it uses the original control messages of 3GPP E-UTRA.
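The core idea of AMPHT (relax the RSRQ handoff threshold and shorten the reporting interval as the UE moves faster) can be sketched as follows; the linear interpolation and all numeric constants are hypothetical placeholders, since the paper's actual formulas are not reproduced here:

```python
def amph_parameters(velocity_mps, rsrq_db):
    """Illustrative velocity-adaptive handoff parameters in the spirit of
    AMPHT; the constants and the linear interpolation are hypothetical,
    not taken from the paper. A faster UE gets a more lenient RSRQ
    threshold (earlier trigger) and a shorter reporting interval."""
    v = min(max(velocity_mps, 0.0), 40.0)       # clamp to 0-40 m/s
    threshold_db = -18.0 + 4.0 * (v / 40.0)     # -18 dB (slow) .. -14 dB (fast)
    report_ms = 480.0 - 400.0 * (v / 40.0)      # 480 ms (slow) .. 80 ms (fast)
    trigger = rsrq_db < threshold_db
    return threshold_db, report_ms, trigger

slow = amph_parameters(1.0, -17.0)    # pedestrian UE: no handoff yet
fast = amph_parameters(30.0, -17.0)   # vehicular UE: handoff triggered
```

At the same RSRQ of -17 dB, the slow UE stays on its serving cell while the fast UE triggers handoff earlier and reports more frequently, which is the qualitative behaviour the scheme targets.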
Extension of the time-average model to Candu refueling schemes involving reshuffling
International Nuclear Information System (INIS)
Rouben, Benjamin; Nichita, Eleodor
2008-01-01
Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)
A novel interacting multiple model based network intrusion detection scheme
Xin, Ruichi; Venkatasubramanian, Vijay; Leung, Henry
2006-04-01
In today's information age, information and network security are of primary importance to any organization. Network intrusion is a serious threat to the security of computers and data networks. In internet protocol (IP) based networks, intrusions originate in different kinds of packets/messages contained in the open system interconnection (OSI) layer 3 or higher layers. Network intrusion detection and prevention systems observe the layer 3 packets (or layer 4 to 7 messages) to screen for intrusions and security threats. Signature-based methods use a pre-existing database that documents intrusion patterns as perceived in the layer 3 to 7 protocol traffic and match the incoming traffic for potential intrusion attacks. Alternatively, network traffic data can be modeled and any large anomaly relative to the established traffic pattern can be detected as network intrusion. The latter method, also known as anomaly-based detection, is gaining popularity for its versatility in learning new patterns and discovering new attacks. It is apparent that for reliable performance, an accurate model of the network data needs to be established. In this paper, we illustrate using collected data that network traffic is seldom stationary. We propose the use of multiple models to accurately represent the traffic data. The improvement in reliability of the proposed model is verified by measuring the detection and false alarm rates on several datasets.
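The multiple-model idea, scoring traffic against several learned models instead of a single stationary one, can be sketched in a much-simplified form; the two Gaussian "regimes" and the threshold are hypothetical stand-ins for the paper's interacting-multiple-model machinery:

```python
import math

# Two hypothetical traffic regimes (e.g. busy-hour vs. off-peak packet
# rates), each modeled as a Gaussian over packets per second.
MODELS = [(1000.0, 50.0), (200.0, 30.0)]   # (mean, std) per regime

def gaussian_loglik(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def is_intrusion(x, threshold=-10.0):
    """Flag x as anomalous only if it is unlikely under *every* regime
    model; a single-model detector would false-alarm whenever the
    traffic legitimately switches regime."""
    best = max(gaussian_loglik(x, mu, s) for mu, s in MODELS)
    return best < threshold

normal_flag = is_intrusion(980.0)    # near the busy-hour regime
attack_flag = is_intrusion(5000.0)   # far from both regimes
```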
Combining modelling tools to evaluate a goose management scheme
Baveco, Hans; Bergjord, Anne Kari; Bjerke, Jarle W.; Chudzińska, Magda E.; Pellissier, Loïc; Simonsen, Caroline E.; Madsen, Jesper; Tombre, Ingunn M.; Nolet, Bart A.
2017-01-01
Many goose species feed on agricultural land, and with growing goose numbers, conflicts with agriculture are increasing. One possible solution is to designate refuge areas where farmers are paid to leave geese undisturbed. Here, we present a generic modelling tool that can be used to designate the
Ensemble-based data assimilation schemes for atmospheric chemistry models
Barbu, A.L.
2010-01-01
The atmosphere is a complex system which includes physical, chemical and biological processes. Many of these processes affecting the atmosphere are subject to various interactions and can be highly nonlinear. This complexity makes it necessary to apply computer models in order to understand the
Multi-model ensemble schemes for predicting northeast monsoon ...
Indian Academy of Sciences (India)
Keywords: Northeast monsoon; multi-model ensemble; rainfall; prediction; principal component regression; singular value decomposition. J. Earth Syst. Sci. 120, No. 5, October 2011, pp. 795–805. © Indian Academy of Sciences.
Prudhomme, Serge
2015-01-07
The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We focus in particular on the accurate estimation of evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.
Central upwind scheme for a compressible two-phase flow model.
Ahmed, Munshoor; Saleem, M Rehan; Zia, Saqib; Qamar, Shamsul
2015-01-01
In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme.
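To make the "no Riemann solver" point concrete, here is a first-order central upwind (Kurganov-type) flux applied to the scalar Burgers equation, a deliberately simple stand-in for the five-equation two-phase model:

```python
import numpy as np

def cu_flux(uL, uR):
    """First-order central upwind (Kurganov-type) flux for Burgers'
    equation f(u) = u^2/2; only local wave speeds are needed, no
    Riemann solver."""
    f = lambda u: 0.5 * u * u
    ap = max(uL, uR, 0.0)               # rightmost local speed, f'(u) = u
    am = min(uL, uR, 0.0)               # leftmost local speed
    if ap - am < 1e-14:
        return f(uL)
    return ((ap * f(uL) - am * f(uR)) / (ap - am)
            + ap * am / (ap - am) * (uR - uL))

# Riemann data u = 1 (x < 0), u = 0 (x > 0): a shock moving at speed 1/2.
N, L, T = 200, 2.0, 0.5
dx = L / N
x = np.linspace(-L / 2 + dx / 2, L / 2 - dx / 2, N)
u = np.where(x < 0.0, 1.0, 0.0)
t = 0.0
while t < T:
    dt = min(0.4 * dx / max(abs(u).max(), 1e-12), T - t)   # CFL 0.4
    F = np.array([cu_flux(u[i], u[i + 1]) for i in range(N - 1)])
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])
    t += dt
```

The scheme needs only local one-sided wave-speed estimates at each interface, which is what makes the central upwind family attractive for systems whose Riemann problem is expensive or, as in the non-conservative two-phase model, not even well defined in the classical sense.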
Internal validation of risk models in clustered data: a comparison of bootstrap schemes
Bouwmeester, W.; Moons, K.G.M.; Kappen, T.H.; van Klei, W.A.; Twisk, J.W.R.; Eijkemans, M.J.C.; Vergouwe, Y.
2013-01-01
Internal validity of a risk model can be studied efficiently with bootstrapping to assess possible optimism in model performance. Assumptions of the regular bootstrap are violated when the development data are clustered. We compared alternative resampling schemes in clustered data for the estimation
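The clustering issue can be made concrete with a toy sketch: the regular bootstrap resamples individual observations and breaks within-cluster correlation, whereas a cluster bootstrap resamples whole clusters. The data below are invented, and the abstract does not specify which alternative schemes were actually compared:

```python
import random

def cluster_bootstrap_means(clusters, n_boot=1000, seed=42):
    """Cluster bootstrap: resample whole clusters with replacement so the
    within-cluster correlation structure is preserved (the regular
    bootstrap would resample individual observations instead)."""
    rng = random.Random(seed)
    ids = list(clusters)
    means = []
    for _ in range(n_boot):
        resampled = [x for cid in rng.choices(ids, k=len(ids))
                     for x in clusters[cid]]
        means.append(sum(resampled) / len(resampled))
    return means

# Invented toy data: patients clustered by hospital.
data = {"A": [1.0, 1.2, 0.9], "B": [2.1, 2.0], "C": [1.5, 1.4, 1.6, 1.5]}
boot_means = cluster_bootstrap_means(data)
```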
The two-dimensional Godunov scheme and what it means for macroscopic pedestrian flow models
Van Wageningen-Kessels, F.L.M.; Daamen, W.; Hoogendoorn, S.P.
2015-01-01
An efficient simulation method for two-dimensional continuum pedestrian flow models is introduced. It is a two-dimensional and multi-class extension of the Godunov scheme for one-dimensional road traffic flow models introduced in the mid-1990s. The method can be applied to continuum pedestrian
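The one-dimensional building block that the method extends can be sketched as a Godunov scheme for the LWR road traffic model in demand/supply form (Greenshields flux, normalized units); this is a generic illustration, not the authors' two-dimensional multi-class extension:

```python
import numpy as np

def godunov_flux(rL, rR, r_crit=0.5):
    """Godunov flux in demand/supply form for the LWR model with the
    (concave) Greenshields flux f(rho) = rho*(1 - rho), normalized units."""
    f = lambda r: r * (1.0 - r)
    demand = f(min(rL, r_crit))   # what the upstream cell can send
    supply = f(max(rR, r_crit))   # what the downstream cell can absorb
    return min(demand, supply)

# rho_t + f(rho)_x = 0: dense platoon (0.8) upstream of light traffic (0.2).
N = 100
dx, dt, T = 1.0 / N, 0.4 / N, 0.2          # CFL 0.4 with max |f'| = 1
rho = np.where(np.linspace(0.0, 1.0, N) < 0.5, 0.8, 0.2)
for _ in range(int(round(T / dt))):
    F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(N - 1)])
    rho[1:-1] -= dt / dx * (F[1:] - F[:-1])
```

Because the scheme is monotone under the CFL condition, densities remain bounded between the initial upstream and downstream values as the rarefaction fan develops.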
A low-bias simulation scheme for the SABR stochastic volatility model
B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide
2012-01-01
The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)
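For orientation, a plain full-truncation Euler discretization of the SABR dynamics looks as follows; this is precisely the kind of naive scheme whose bias near zero forward rates motivates a low-bias alternative, and all parameter values are arbitrary illustrations:

```python
import numpy as np

def sabr_euler_paths(F0=0.05, alpha0=0.3, beta=0.5, nu=0.4, rho=-0.3,
                     T=1.0, n_steps=100, n_paths=20000, seed=0):
    """Naive full-truncation Euler scheme for the SABR dynamics
        dF = alpha * F^beta dW1,   dalpha = nu * alpha dW2,
    with corr(W1, W2) = rho. The volatility step is exact (lognormal);
    the forward step is plain Euler with truncation at zero, which is
    exactly the kind of biased treatment of the F = 0 boundary that a
    low-bias scheme is designed to avoid. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F = np.full(n_paths, F0)
    a = np.full(n_paths, alpha0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        F = np.maximum(F + a * np.maximum(F, 0.0) ** beta * np.sqrt(dt) * z1, 0.0)
        a *= np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu ** 2 * dt)
    return F

F_T = sabr_euler_paths()   # E[F_T] should stay near F0 (martingale property)
```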
A dynamic neutral fluid model for the PIC scheme
Wu, Alan; Lieberman, Michael; Verboncoeur, John
2010-11-01
Fluid diffusion is an important aspect of plasma simulation. A new dynamic model is implemented using the continuity and boundary equations in OOPD1, an object-oriented one-dimensional particle-in-cell code developed at UC Berkeley. The model is described and compared with analytical methods given in [1]. A boundary absorption parameter can be adjusted from ideal absorption to ideal reflection. Simulations exhibit good agreement with analytic time-dependent solutions for the two ideal cases, as well as steady-state solutions for mixed cases. For the next step, fluid sources and sinks due to particle-particle or particle-fluid collisions within the simulation volume and to surface reactions resulting in emission or absorption of fluid species will be implemented. The resulting dynamic interaction between particle and fluid species will be an improvement over the static fluid in the existing code. As the final step in the development, diffusion for multiple fluid species will be implemented. [1] M.A. Lieberman and A.J. Lichtenberg, Principles of Plasma Discharges and Materials Processing, 2nd Ed, Wiley, 2005.
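A generic sketch of the core ingredient, an explicit finite-difference diffusion solver with a boundary parameter tunable from ideal reflection to ideal absorption, is given below; it is not the OOPD1 implementation, and all numbers are illustrative:

```python
import numpy as np

def diffuse(n0, D=1.0, dx=0.1, dt=0.004, steps=500, absorb=1.0):
    """Explicit FTCS solver for the 1-D diffusion equation n_t = D n_xx.
    `absorb` interpolates the wall condition between ideal reflection
    (0: zero-gradient ghost cells) and ideal absorption (1: zero ghost
    cells). Not the OOPD1 scheme, just a generic illustration."""
    assert D * dt / dx ** 2 <= 0.5          # FTCS stability limit
    n = n0.copy()
    for _ in range(steps):
        ghost_l = (1.0 - absorb) * n[0]
        ghost_r = (1.0 - absorb) * n[-1]
        padded = np.concatenate(([ghost_l], n, [ghost_r]))
        n = n + D * dt / dx ** 2 * (padded[2:] - 2.0 * n + padded[:-2])
    return n

n0 = np.exp(-((np.linspace(0.0, 5.0, 51) - 2.5) ** 2))  # Gaussian profile
n_reflect = diffuse(n0, absorb=0.0)   # reflecting walls: mass conserved
n_absorb = diffuse(n0, absorb=1.0)    # absorbing walls: mass decays
```

With reflecting walls the telescoping of the discrete Laplacian conserves total density exactly, while the absorbing setting drains mass through the boundaries, the two ideal limits mentioned in the abstract.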
Modelling of Substrate Noise and Mitigation Schemes for UWB Systems
DEFF Research Database (Denmark)
Shen, Ming; Mikkelsen, Jan H.; Larsen, Torben
2012-01-01
-mode designs, digital switching noise is an ever-present problem that needs to be taken into consideration. This is of particular importance when low cost implementation technologies, e.g. lightly doped substrates, are aimed for. For traditional narrow-band designs much of the issue can be mitigated using...... tuned elements in the signal paths. However, for UWB designs this is not a viable option and other means are therefore required. Moreover, owing to the ultra-wideband nature and low power spectral density of the signal, UWB mixed-signal integrated circuits are more sensitive to substrate noise compared...... with narrow-band circuits. This chapter presents a study on the modeling and mitigation of substrate noise in mixed-signal integrated circuits (ICs), focusing on UWB system/circuit designs. Experimental impact evaluation of substrate noise on UWB circuits is presented. It shows how a wide-band circuit can...
Model reference adaptive control and adaptive stability augmentation
DEFF Research Database (Denmark)
Henningsen, Arne; Ravn, Ole
1993-01-01
A comparison of the standard concepts in MRAC design suggests that a combination of the implicit and the explicit design techniques may lead to an improvement of the overall system performance in the presence of unmodelled dynamics. Using the ideas of adaptive stability augmentation a combined...
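A minimal explicit MRAC example in the spirit of the classical MIT rule (a standard textbook setup, not the combined implicit/explicit design proposed here): a first-order plant with unknown gain k is driven to follow a reference model by adapting a single feedforward gain.

```python
def mrac_mit(k=2.0, k0=1.0, gamma=0.5, r=1.0, dt=0.01, T=50.0):
    """Model-reference adaptive control of the plant y' = -y + k*theta*r
    with the MIT rule theta' = -gamma * e * y_m, where
    y_m' = -y_m + k0*r is the reference model and e = y - y_m.
    The plant gain k is unknown to the controller; theta should
    converge to k0/k. All values are illustrative."""
    y = y_m = theta = 0.0
    for _ in range(int(T / dt)):
        u = theta * r                      # explicit adaptive feedforward
        y += dt * (-y + k * u)             # plant (forward Euler)
        y_m += dt * (-y_m + k0 * r)        # reference model
        e = y - y_m
        theta += dt * (-gamma * e * y_m)   # MIT-rule gradient step
    return theta, e

theta, e = mrac_mit()   # theta -> k0/k = 0.5, tracking error -> 0
```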
A gradient stable scheme for a phase field model for the moving contact line problem
Gao, Min
2012-02-01
In this paper, an efficient numerical scheme is designed for a phase field model for the moving contact line problem, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,4]. The nonlinear version of the scheme is semi-implicit in time and is based on a convex splitting of the Cahn-Hilliard free energy (including the boundary energy) together with a projection method for the Navier-Stokes equations. We show, under certain conditions, the scheme has the total energy decaying property and is unconditionally stable. The linearized scheme is easy to implement and introduces only mild CFL time constraint. Numerical tests are carried out to verify the accuracy and stability of the scheme. The behavior of the solution near the contact line is examined. It is verified that, when the interface intersects with the boundary, the consistent splitting scheme [21,22] for the Navier Stokes equations has the better accuracy for pressure. © 2011 Elsevier Inc.
A method of LED free-form tilted lens rapid modeling based on scheme language
Dai, Yidan
2017-10-01
According to nonimaging optical principle and traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed. And a method of rapid modeling based on Scheme language was proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and the Scheme language programming are used to generate lens model respectively. With the help of optical simulation software, a light source with the size of 1mm*1mm*1mm in volume is used in experiment, and the lateral migration distance of illumination area is 0.5m, in which total one million rays are computed. We could acquire the simulated results of both models. The simulated output result shows that the Scheme language can prevent the model deformation problems caused by the process of the model transfer, and the degree of illumination uniformity is reached to 82%, and the offset angle is 26°. Also, the efficiency of modeling process is greatly increased by using Scheme language.
Evaluation of nourishment schemes based on long-term morphological modeling
DEFF Research Database (Denmark)
Grunnet, Nicholas; Kristensen, Sten Esbjørn; Drønen, Nils
2012-01-01
A recently developed long-term morphological modeling concept is applied to evaluate the impact of nourishment schemes. The concept combines detailed two-dimensional morphological models and simple one-line models for the coastline evolution and is particularly well suited for long-term simulatio...... site. This study strongly indicates that the hybrid model may be used as an engineering tool to predict shoreline response following the implementation of a nourishment project....
Difference schemes for numerical solutions of lagging models of heat conduction
Cabrera Sánchez, Jesús; Castro López, María Ángeles; Rodríguez Mateo, Francisco; Martín Alustiza, José Antonio
2013-01-01
Non-Fourier models of heat conduction are increasingly being considered in the modeling of microscale heat transfer in engineering and biomedical heat transfer problems. The dual-phase-lagging model, incorporating time lags in the heat flux and the temperature gradient, and some of its particular cases and approximations result in heat conduction modeling equations in the form of delayed or hyperbolic partial differential equations. In this work, the application of difference schemes for the...
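As a concrete instance of the hyperbolic case, the single-phase-lag (Cattaneo) equation tau*u_tt + u_t = alpha*u_xx, a particular case of the dual-phase-lag family, can be advanced with an explicit central-difference scheme; coefficients and grid below are illustrative:

```python
import numpy as np

# Explicit central differences in time and space for
#   tau * u_tt + u_t = alpha * u_xx
# with Dirichlet ends and a Gaussian initial temperature pulse.
alpha, tau = 1.0, 0.5
dx, dt, steps = 0.05, 0.02, 200
x = np.arange(0.0, 2.0 + dx / 2, dx)
u_prev = np.exp(-100.0 * (x - 1.0) ** 2)   # u(x, 0)
u = u_prev.copy()                           # u_t(x, 0) = 0
assert alpha * dt ** 2 / (tau * dx ** 2) <= 1.0  # CFL, wave speed sqrt(alpha/tau)
A = tau / dt ** 2 + 1.0 / (2.0 * dt)
B = tau / dt ** 2 - 1.0 / (2.0 * dt)
for _ in range(steps):
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    u_new = u.copy()                        # ends (u = 0) stay fixed
    u_new[1:-1] = (alpha * lap + 2.0 * tau / dt ** 2 * u[1:-1] - B * u_prev[1:-1]) / A
    u_prev, u = u, u_new
```

Unlike the parabolic Fourier case, the pulse propagates at the finite speed sqrt(alpha/tau) while the first-order time-derivative term damps it.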
SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES
Directory of Open Access Journals (Sweden)
S.ZIBAEI
2016-12-01
Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors in the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
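For the classical integer-order predator-prey system, a Mickens-type NSFD scheme uses a nonlocal discretization that stays explicit and preserves positivity for any step size; this is a sketch of the NSFD idea, not the paper's fractional-order construction:

```python
import numpy as np

def nsfd_lotka_volterra(x0=1.0, y0=0.5, a=1.0, b=1.0, c=1.0, d=1.0,
                        h=0.05, steps=1000):
    """Mickens-type NSFD scheme for x' = x*(a - b*y), y' = y*(-c + d*x).
    The nonlocal discretization
        x[n+1] = x[n]*(1 + h*a) / (1 + h*b*y[n])
        y[n+1] = y[n]*(1 + h*d*x[n+1]) / (1 + h*c)
    is explicit, consistent with the ODEs, and keeps both populations
    positive for any step size h (unlike forward Euler)."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        x = x * (1.0 + h * a) / (1.0 + h * b * y)
        y = y * (1.0 + h * d * x) / (1.0 + h * c)   # uses the updated x
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = nsfd_lotka_volterra()
```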
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
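The tridiagonal systems at the heart of each ADI sweep can be solved in O(n) by the Thomas algorithm; below is a generic sketch (not the authors' implementation) applied to a 1-D implicit heat step of the kind solved along each grid line:

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm: O(n) solve of a tridiagonal system with
    sub-diagonal a, diagonal b, super-diagonal c, right-hand side d."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]     # forward elimination
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Implicit 1-D heat step (I - r*Laplacian) u_new = u_old, the kind of
# system solved along each grid line in an ADI sweep.
n, r = 50, 0.5
a = np.full(n, -r)                # a[0] is unused
b_diag = np.full(n, 1.0 + 2.0 * r)
c = np.full(n, -r)                # c[-1] is unused
u_old = np.ones(n)
u_new = thomas(a, b_diag, c, u_old)
```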
Energy Technology Data Exchange (ETDEWEB)
Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)
2010-12-15
This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to the design of an objective scheme easily applicable
White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
Calibration of highly-parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred-parameter-values scheme or preferred relations between parameters, such as the preferred-equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly-parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
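The preferred-parameter-values scheme can be sketched as an augmented least-squares problem, minimizing ||Ax - b||^2 + lambda*||x - x_pref||^2; the toy problem below is invented and is not the SEAWAT calibration of the study:

```python
import numpy as np

def tikhonov_preferred(A, b, x_pref, lam=1.0):
    """Preferred-parameter-values (Tikhonov) regularization: minimize
    ||A x - b||^2 + lam * ||x - x_pref||^2 by stacking sqrt(lam)*I rows
    onto the least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.sqrt(lam) * x_pref])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Invented underdetermined toy problem: one observation, two parameters.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_pref = np.array([0.5, 0.5])          # regularization pulls toward 0.5
x = tikhonov_preferred(A, b, x_pref, lam=0.1)
```

Without the regularization rows the problem has infinitely many solutions; the preferred values select a unique one, at the cost of a small measurement misfit, which is exactly the trade-off the Pareto analysis in the study characterizes.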
An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited
International Nuclear Information System (INIS)
Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P
2017-01-01
An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly by using the scheme that only involves a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example, the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, of which one set of the results is compared to that obtained from Braak's recently proposed exact solution. It is shown that the derivative of the entanglement measure defined in terms of the reduced von Neumann entropy with respect to the coupling parameter does reach the maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as that in the quantum Rabi model. (paper)
Directory of Open Access Journals (Sweden)
B. Ervens
2012-07-01
Full Text Available Ice nucleation in clouds is often observed at temperatures >235 K, pointing to heterogeneous freezing as a predominant mechanism. Many models deterministically predict the number concentration of ice particles as a function of temperature and/or supersaturation. Several laboratory experiments, at constant temperature and/or supersaturation, report heterogeneous freezing as a stochastic, time-dependent process that follows classical nucleation theory; this might appear to contradict deterministic models that predict singular freezing behavior.
We explore the extent to which the choice of nucleation scheme (deterministic/stochastic, single/multiple contact angles θ) affects the prediction of the fraction of frozen ice nuclei (IN) and cloud evolution for a predetermined maximum IN concentration. A box model with constant temperature and supersaturation is used to mimic published laboratory experiments of immersion freezing of monodisperse (800 nm) kaolinite particles (~243 K) and to evaluate the fitness of different nucleation schemes. Sensitivity studies show that agreement of all five schemes is restricted to the narrow parameter range (time, temperature, IN diameter) in the original laboratory studies, and that model results diverge for a wider range of conditions.
The schemes are implemented in an adiabatic parcel model that includes feedbacks of the formation and growth of drops and ice particles on supersaturation during ascent. Model results for the monodisperse IN population (800 nm show that these feedbacks limit ice nucleation events, often leading to smaller differences in number concentration of ice particles and ice water content (IWC between stochastic and deterministic approaches than expected from the box model studies. However, because the different parameterizations of θ distributions and time-dependencies are highly sensitive to IN size, simulations using polydisperse IN result in great differences in predicted ice number
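The stochastic/deterministic contrast at the heart of the comparison can be sketched minimally. The nucleation rate coefficient below is an arbitrary placeholder, not a fit to the kaolinite data; only the 800 nm diameter follows the monodisperse box-model setup.

```python
import math

# Illustrative numbers only: J_het is NOT fitted to the kaolinite data.
J_het = 1.0e8                  # heterogeneous nucleation rate coeff. [m^-2 s^-1]
d_p = 800e-9                   # IN diameter [m]
A = math.pi * d_p ** 2         # particle surface area [m^2]

def frozen_fraction_stochastic(t):
    """Time-dependent (stochastic) picture: f(t) = 1 - exp(-J_het * A * t)."""
    return 1.0 - math.exp(-J_het * A * t)

def frozen_fraction_singular(n_s):
    """Deterministic ('singular') picture: f depends on temperature only,
    through an active-site density n_s(T) [m^-2]; no time dependence."""
    return 1.0 - math.exp(-n_s * A)

# At a single (t, T) point the two pictures can be made to coincide
# (choose n_s = J_het * t) ...
t = 100.0
assert abs(frozen_fraction_stochastic(t)
           - frozen_fraction_singular(J_het * t)) < 1e-12
# ... but only the stochastic fraction keeps growing if the particles are
# held longer at the same temperature; this is the divergence the box model probes.
print(f"f(t)  = {frozen_fraction_stochastic(t):.3f}")
print(f"f(10t)= {frozen_fraction_stochastic(10 * t):.3f}")
```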
Data-Adaptable Modeling and Optimization for Runtime Adaptable Systems
2016-06-08
…often encounter situations in which it is unable to retrieve video or GPS data in remote areas. A data-adaptable approach should enable such an…
Change in Farm Production Structure Within Different CAP Schemes – an LP Modelling Approach
Directory of Open Access Journals (Sweden)
Jaka ŽGAJNAR
2008-01-01
Full Text Available After accession to the European Union in 2004, direct payments became a very important income source also for farmers in Slovenia. But the agricultural policy in place at accession changed significantly in 2007 as a result of CAP reform implementation. The objective of this study was to evaluate the decision-making impacts of the direct payments scheme implemented with the reform: a regional or, more likely, a hybrid scheme. The change in farm production structure was simulated with a model, applying gross margin maximisation, based on a static linear programming approach. The model has been developed in a spreadsheet framework on the MS Excel platform. A hypothetical farm has been chosen to analyse different scenarios and specializations. The focus of the analysis was on the cattle sector, since it is expected that decoupling is going to have a significant influence on its optimal production structure. The reason is the high level of direct payments, which could in the pre-reform scheme rise up to 70 % of total gross margin. Model results confirm that the reform should have unfavourable impacts on cattle farms with intensive production practice. The results show that the hybrid scheme has minor negative impacts in all cattle specializations, while the regional scheme would be a better option for a sheep-specialized farm. The analysis has also shown the growing importance of CAP pillar II payments, among them particularly agri-environmental measures. In all three schemes budgetary payments enable farmers to improve financial results, and in both reform schemes they alleviate the economic impacts of the CAP reform.
Performance of the Goddard multiscale modeling framework with Goddard ice microphysical schemes
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L. F.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-03-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
Modeling Adaptable Business Service for Enterprise Collaboration
Boukadi, Khouloud; Vincent, Lucien; Burlat, Patrick
Nowadays, a Service Oriented Architecture (SOA) seems to be one of the most promising paradigms for leveraging enterprise information systems. SOA creates opportunities for enterprises to provide value-added services tailored for on-demand enterprise collaboration. With the emergence and rapid development of Web services technologies, SOA is receiving increasing attention and has become widespread. In spite of the popularity of SOA, a standardized framework for modeling and implementing business services is still a work in progress. For the purpose of supporting these service-oriented solutions, we adopt a model-driven development approach. This paper outlines the Contextual Service Oriented Modeling and Analysis (CSOMA) methodology and presents UML profiles for PIM-level service-oriented architectural modeling, as well as its corresponding meta-models. The proposed PIM (Platform Independent Model) describes the business SOA at a high level of abstraction, regardless of the techniques involved in the application deployment. In addition, all essential service-specific concerns required for delivering quality and context-aware services are covered. Among the advantages of this approach are that it is generic, and thus not closely tied to Web service technology, and that it specifically treats service adaptability during the design stage.
Chern, J.; Tao, W.; Lang, S. E.; Matsui, T.
2012-12-01
The accurate representation of clouds and cloud processes in atmospheric general circulation models (GCMs) with relatively coarse resolution (~100 km) has been a long-standing challenge. With the rapid advancement in computational technology, a new breed of GCMs that are capable of explicitly resolving clouds has been developed. Though still computationally very expensive, global cloud-resolving models (GCRMs) with horizontal resolutions of 3.5 to 14 km are already being run in an exploratory manner. Another less computationally demanding approach is the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the GEOS global model. In recent years a few new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns. It is important to evaluate these microphysical schemes for global applications such as the MMFs and GCRMs. Two-year (2007-2008) MMF sensitivity experiments have been carried out with different cloud microphysical schemes. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water contents in different geographic locations and climate regimes are evaluated against TRMM, CloudSat, and CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to evaluate the performance of different cloud microphysical schemes. We will assess the strengths and deficiencies of these microphysics schemes and provide guidance on how to improve
European upper mantle tomography: adaptively parameterized models
Schäfer, J.; Boschi, L.
2009-04-01
We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e. the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage in using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, parameterization is everywhere optimal, both in terms of its computational cost, and of model resolution. This is especially important for data sets with inhomogeneous data coverage, as is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35 s to 150 s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of resolution higher than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for the uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625°, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810,000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. We validate our implementation of adaptive
Fuzzy Multiple Criteria Decision Making Model with Fuzzy Time Weight Scheme
Chin-Yao Low; Sung-Nung Lin
2013-01-01
In this study, we propose a common fuzzy multiple criteria decision making model. A new concept, the fuzzy time-weighted scheme, is adopted in the model to establish a fuzzy multiple criteria decision making with time weight (FMCDMTW) model. A real case of a fuzzy multiple criteria decision making (FMCDM) problem is considered in this study: the performance evaluation of auction websites based on the criteria proposed in the related literature. Obviously, the problem under in...
End-point parametrization and guaranteed stability for a model predictive control scheme
Weiland, Siep; Stoorvogel, Antonie Arij; Tiagounov, Andrei A.
2001-01-01
In this paper we consider the closed-loop asymptotic stability of the model predictive control scheme which involves the minimization of a quadratic criterion with a varying weight on the end-point state. In particular, we investigate the stability properties of the (MPC-) controlled system as
DEFF Research Database (Denmark)
Hyun, Jaeyub; Kook, Junghwan; Wang, Semyung
2015-01-01
and basis vectors for use according to the target system. The proposed model reduction scheme is applied to the numerical simulation of the simple mass-damping-spring system and the acoustic metamaterial systems (i.e., acoustic lens and acoustic cloaking device) for the first time. Through these numerical...
RELAP5 two-phase fluid model and numerical scheme for economic LWR system simulation
International Nuclear Information System (INIS)
Ransom, V.H.; Wagner, R.J.; Trapp, J.A.
1981-01-01
The RELAP5 two-phase fluid model and the associated numerical scheme are summarized. The experience accrued in the development of a fast-running light water reactor system transient analysis code is reviewed and examples of the code application are given
Adaptive control using a hybrid-neural model: application to a polymerisation reactor
Directory of Open Access Journals (Sweden)
Cubillos F.
2001-01-01
Full Text Available This work presents the use of a hybrid-neural model for predictive control of a plug flow polymerisation reactor. The hybrid-neural model (HNM) is based on fundamental conservation laws associated with a neural network (NN) used to model the uncertain parameters. By simulations, the performance of this approach was studied for a peroxide-initiated styrene tubular reactor. The HNM was synthesised for a CSTR reactor with a radial basis function neural net (RBFN) used to estimate the reaction rates recursively. The adaptive HNM was incorporated in two model predictive control strategies, a direct synthesis scheme and an optimum steady-state scheme. Tests for servo and regulator control showed excellent behaviour in following different setpoint variations and rejecting perturbations. The good generalisation and training capacities of hybrid models, associated with the simplicity and robustness characteristics of the MPC formulations, make an attractive combination for the control of a polymerisation reactor.
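The hybrid idea, a first-principles balance closed by a neural estimate of the uncertain kinetics, can be sketched with a toy radial basis function net. The centres, weights, and balance equation below are illustrative, not the published reactor model.

```python
import numpy as np

# Toy hybrid model: a conservation-law balance dC/dt = -r(C) closed by a
# small radial-basis-function net for the uncertain rate r. The centres,
# weights, and width are illustrative, not identified from reactor data.
centres = np.linspace(0.0, 1.0, 5)
weights = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
width = 0.25

def rbf_rate(c):
    """RBF-net estimate of the reaction rate r(c) >= 0."""
    phi = np.exp(-((c - centres) / width) ** 2)
    return float(weights @ phi)

def step(c, dt=0.01):
    """Explicit Euler step of the first-principles balance."""
    return c - dt * rbf_rate(c)

c = 1.0
for _ in range(100):          # integrate one time unit
    c = step(c)
print(f"concentration after 1 time unit: {c:.3f}")
```

In the adaptive version described in the abstract, the net's weights would be re-estimated recursively from plant measurements while the conservation-law skeleton stays fixed.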
Automated adaptive inference of phenomenological dynamical models
Daniels, Bryan
Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.
A hybrid scheme for absorbing edge reflections in numerical modeling of wave propagation
Liu, Yang
2010-03-01
We propose an efficient scheme to absorb reflections from the model boundaries in numerical solutions of wave equations. This scheme divides the computational domain into boundary, transition, and inner areas. The wavefields within the inner and boundary areas are computed by the wave equation and the one-way wave equation, respectively. The wavefields within the transition area are determined by a weighted combination of the wavefields computed by the wave equation and the one-way wave equation to obtain a smooth variation from the inner area to the boundary via the transition zone. The results from our finite-difference numerical modeling tests of the 2D acoustic wave equation show that the absorption enforced by this scheme gradually increases with increasing width of the transition area. We obtain equally good performance using pseudospectral and finite-element modeling with the same scheme. Our numerical experiments demonstrate that use of 10 grid points for absorbing edge reflections attains nearly perfect absorption. © 2010 Society of Exploration Geophysicists.
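The core of the scheme is the weighted blend across the transition area. Below is a minimal sketch of just that blending step; the grid sizes and wavefields are stand-ins, and the actual time-stepping of the two wave equations is omitted.

```python
import numpy as np

# Sketch of the blending step only (hypothetical sizes): u_two is the field
# advanced by the full (two-way) wave equation, u_one the field advanced by
# the one-way wave equation near the boundary.
nx = 200
n_boundary = 10      # points governed purely by the one-way equation
n_transition = 30    # points where the two solutions are blended

u_two = np.random.default_rng(1).normal(size=nx)   # stand-in wavefields
u_one = np.zeros(nx)                               # one-way (absorbing) field

w = np.ones(nx)                       # weight of the two-way solution
w[:n_boundary] = 0.0                  # boundary area: one-way only
ramp = np.linspace(0.0, 1.0, n_transition)
w[n_boundary:n_boundary + n_transition] = ramp     # transition area

u = w * u_two + (1.0 - w) * u_one     # smooth variation inner -> boundary
print(f"boundary value {u[0]:.1f}, interior value preserved: {u[-1] == u_two[-1]}")
```

Consistent with the abstract, widening `n_transition` makes the weight ramp gentler, which is what gradually strengthens the absorption in the reported tests.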
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks
DEFF Research Database (Denmark)
Hagen, Espen; Dahmen, David; Stavrinou, Maria L
2016-01-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...
Korpusik, Adam
2017-02-01
We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
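The flavor of a nonstandard finite difference (NSFD) scheme can be shown on the logistic equation rather than on the immune-response model itself: a nonlocal discretization of the quadratic term preserves positivity and the equilibria for any step size, where explicit Euler fails.

```python
def nsfd_step(x, h):
    """Nonstandard step for dx/dt = x(1 - x): the x^2 term is approximated
    nonlocally as x_new * x_old, which keeps x >= 0 and the equilibria
    x = 0 and x = 1 exactly, for ANY step size h > 0."""
    return (1.0 + h) * x / (1.0 + h * x)

def euler_step(x, h):
    """Standard explicit Euler, for comparison."""
    return x + h * x * (1.0 - x)

h = 5.0                      # deliberately huge step size
x_nsfd = 0.1
for _ in range(20):
    x_nsfd = nsfd_step(x_nsfd, h)

x_euler, steps_to_fail = 0.1, 0
while x_euler >= 0.0:        # explicit Euler soon leaves the region x >= 0
    x_euler = euler_step(x_euler, h)
    steps_to_fail += 1

print(f"NSFD after 20 steps: {x_nsfd:.6f}")   # stays in (0, 1], approaches 1
print(f"Euler goes negative after {steps_to_fail} steps")
```

This step-size independence of the qualitative behavior (non-negativity, boundedness, equilibria) is exactly the property the abstract claims for the cellular immune response model.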
A new numerical scheme for bounding acceleration in the LWR model
Leclercq, L.
2005-01-01
This paper deals with the numerical resolution of bounded-acceleration extensions of the LWR model. Two different ways of bounding acceleration in the LWR model are presented: introducing a moving boundary condition in front of an accelerating flow, or defining a field of constraints on the maximum allowed speed in the (x,t) plane. Both extensions lead to the same solutions if the declining branch of the fundamental diagram is linear. The existing numerical scheme for the latter exte...
Additive operator-difference schemes splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
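The splitting-into-physical-processes idea can be illustrated on a toy decay-plus-source equation: each substep is solved exactly, yet the composition (Lie splitting) carries a first-order error because the two operators do not commute.

```python
import math

# Toy splitting into physical processes for du/dt = -u + 1:
#   substep 1: decay  du/dt = -u  (solved exactly)
#   substep 2: source du/dt = 1   (solved exactly)
def lie_split(u0, t_end, n):
    h = t_end / n
    u = u0
    for _ in range(n):
        u = u * math.exp(-h)   # exact decay substep
        u = u + h              # exact source substep
    return u

exact = 1.0 - math.exp(-1.0)   # u(1) for u(0) = 0
err_h = abs(lie_split(0.0, 1.0, 10) - exact)
err_h2 = abs(lie_split(0.0, 1.0, 20) - exact)
print(f"error ratio when halving h: {err_h / err_h2:.2f}")  # ~2 => first order
```

Symmetrizing the composition (Strang splitting) would raise this to second order; the book's additive schemes generalize the same principle to multi-dimensional PDE operators.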
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that the forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method. The results we have at this point are somewhat
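The verification statistics mentioned are standard 2×2 contingency-table scores applied to thresholded wind speeds. A sketch with made-up counts (the threshold and numbers below are illustrative, not from the Eta experiments):

```python
# Standard contingency-table scores for "wind speed > threshold" events,
# counted over grid points of forecast vs. analysis. Counts are invented.
def equitable_threat_score(hits, misses, false_alarms, total):
    """ETS (Gilbert skill score): threat score corrected for random hits."""
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

def bias_score(hits, misses, false_alarms):
    """Frequency bias: forecast event count over observed event count."""
    return (hits + false_alarms) / (hits + misses)

# e.g. grid points exceeding a hypothetical 45 m/s upper-tropospheric threshold
ets = equitable_threat_score(hits=120, misses=40, false_alarms=30, total=1000)
bias = bias_score(hits=120, misses=40, false_alarms=30)
print(f"ETS = {ets:.3f}, bias = {bias:.3f}")
```

Bias adjustment of the ETS (correcting for over- or under-forecasting the event frequency) is then applied before comparing the two RCM versions against the driver forecast.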
A study of the spreading scheme for viral marketing based on a complex network model
Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong
2010-02-01
Buzzword-based viral marketing, also known as digital word-of-mouth marketing, is a marketing mode attached to some carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing actually uses a pre-existing social network where, however, the scale of the pre-existing network is believed to be so large and so random that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to the traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling into the research of general viral marketing and develops a specific spreading scheme for viral marketing and an approach to designing the scheme based on a real complex network on the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
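A minimal stand-in for a spreading scheme on a contact graph is an SI-type forwarding process; the tiny graph and the forwarding probability below are invented for illustration, not the QQ network.

```python
import random

# SI-type spreading sketch: a seeded message is forwarded along contacts
# with probability p_forward. Graph and probability are illustrative.
random.seed(42)
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
p_forward = 0.6                  # chance a contact forwards the message

infected = {0}                   # seed user who starts the campaign
frontier = [0]
while frontier:
    nxt = []
    for node in frontier:
        for nb in graph[node]:
            if nb not in infected and random.random() < p_forward:
                infected.add(nb)
                nxt.append(nb)
    frontier = nxt
print(f"reached {len(infected)} of {len(graph)} users")
```

On a real instant-messaging network the design questions are where to seed and how the empirical degree distribution shapes the reach, which is what the complex-network analysis in the paper addresses.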
Directory of Open Access Journals (Sweden)
Chang-bae Moon
2010-12-01
Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a human co-existing real environment. Reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, the semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.
Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik
2010-06-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.
A gas dynamics scheme for a two moments model of radiative transfer
International Nuclear Information System (INIS)
Buet, Ch.; Despres, B.
2007-01-01
We address the discretization of Levermore's two-moment entropy model of the radiative transfer equation. We present a new approach for the discretization of this model: first we rewrite the moment equations as compressible gas dynamics equations by introducing an additional quantity that plays the role of a density; after that we discretize using a Lagrange-projection scheme. The Lagrange-projection scheme permits us to incorporate the source terms in the fluxes of an acoustic solver in the Lagrange step, using the well-known piecewise steady approximation, and thus to capture correctly the diffusion regime. Moreover we show that the discretization is entropic and preserves the flux-limited property of the moment model. Numerical examples illustrate the feasibility of our approach. (authors)
Modeling of environmental adaptation versus pollution mitigation
YATSENKO, Yuri; HRITONENKO, Natali; BRECHET, Thierry
2014-01-01
The paper combines analytic and numeric tools to investigate a nonlinear optimal control problem relevant to the economics of climate change. The problem describes optimal investments into pollution mitigation and environmental adaptation at a macroeconomic level. The steady-state analysis of this problem focuses on the optimal ratio between adaptation and mitigation. In particular, we analytically prove that the long- term investments into adaptation are profitable only for economies above c...
Model reference adaptive control and adaptive stability augmentation
DEFF Research Database (Denmark)
Henningsen, Arne; Ravn, Ole
1993-01-01
stability augmented model reference design is proposed. By utilizing the closed-loop control error, a simple auxiliary controller is tuned, using a normalized MIT rule for the parameter adjustment. The MIT adjustment is protected against the effects of unmodelled dynamics by lowpass filtering...
Model Building by Coset Space Dimensional Reduction Scheme Using Ten-Dimensional Coset Spaces
Jittoh, T.; Koike, M.; Nomura, T.; Sato, J.; Shimomura, T.
2008-12-01
We investigate the gauge-Higgs unification models within the scheme of the coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime where extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search for the candidates of the coset spaces, the gauge group in fourteen dimension, and fermion representation. Of the seventeen, ten models led to {SO}(10) (× {U}(1)) GUT-like models after dimensional reduction, three models led to {SU}(5) × {U}(1) GUT-like models, and four to {SU}(3) × {SU}(2) × {U}(1) × {U}(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion contents of such models are listed.
The ADAPT design model : towards instructional control of transfer
Jelsma, Otto; van Merrienboer, Jeroen J.G.; van Merrienboer, J.J.G.; Bijlstra, Jim P.; Bijlstra, J.P.
1990-01-01
This paper presents a detailed description of the ADAPT (Apply Delayed Automatization for Positive Transfer) design model. ADAPT is based upon production system models of learning and provides guidelines for developing instructional systems that offer transfer of leamed skills. The model suggests
Nazarova, G.; Ivashkina, E.; Ivanchina, E.; Kiseleva, S.; Stebeneva, V.
2015-11-01
The issue of improving the energy and resource efficiency of advanced petroleum processing can be solved by the development of adequate mathematical model based on physical and chemical regularities of process reactions with a high predictive potential in the advanced petroleum refining. In this work, the development of formalized hydrocarbon conversion scheme of catalytic cracking was performed using thermodynamic parameters of reaction defined by the Density Functional Theory. The list of reaction was compiled according to the results of feedstock structural-group composition definition, which was done by the n-d-m-method, the Hazelvuda method, qualitative composition of feedstock defined by gas chromatography-mass spectrometry and individual composition of catalytic cracking gasoline fraction. Formalized hydrocarbon conversion scheme of catalytic cracking will become the basis for the development of the catalytic cracking kinetic model.
A New Repeating Color Watermarking Scheme Based on Human Visual Model
Directory of Open Access Journals (Sweden)
Chang Chin-Chen
2004-01-01
Full Text Available This paper proposes a human-visual-model-based scheme that effectively protects the intellectual copyright of digital images. In the proposed method, the theory of the visual secret sharing scheme is used to create a master watermark share and a secret watermark share. The watermark share is kept secret by the owner. The master watermark share is embedded into the host image to generate a watermarked image based on the human visual model. The proposed method conforms to all necessary conditions of an image watermarking technique. After the watermarked image is put under various attacks such as lossy compression, rotating, sharpening, blurring, and cropping, the experimental results show that the extracted digital watermark from the attacked watermarked images can still be robustly detected using the proposed method.
Parametric modeling and optimization for adaptive architecture
Turrin, M.; Von Buelow, P.; Kilian, A.; Stouffs, R.M.F.
2011-01-01
In this paper we address performance oriented design applied to adaptive architecture in order to satisfy the performance requirements for changing contextual conditions. The domain of adaptive architecture is defined and specific focus is given to form-active architecture, in which geometric
DEFF Research Database (Denmark)
Lee, Hyewon; Hwang, Min; Muljadi, Eduard
2017-01-01
In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG...... demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.......) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power...
Model-based fault diagnosis techniques design schemes, algorithms, and tools
Ding, Steven
2008-01-01
The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.
Adaptable Authentication Model: Exploring Security with Weaker Attacker Models
DEFF Research Database (Denmark)
Ahmed, Naveed; Jensen, Christian D.
2011-01-01
Most methods for protocol analysis classify protocols as “broken” if they are vulnerable to attacks from a strong attacker, e.g., assuming the Dolev-Yao attacker model. In many cases, however, exploitation of existing vulnerabilities may not be practical and, moreover, not all applications may......; for each fine level authentication goal, we determine the “least strongest-attacker” for which the authentication goal can be satisfied. We demonstrate that this model can be used to reason about the security of supposedly insecure protocols. Such adaptability is particularly useful in those applications...
Circuit QED scheme for realization of the Lipkin-Meshkov-Glick model
Larson, Jonas
2010-01-01
We propose a scheme in which the Lipkin-Meshkov-Glick model is realized within a circuit QED system. An array of N superconducting qubits interacts with a driven cavity mode. In the dispersive regime, the cavity mode is adiabatically eliminated generating an effective model for the qubits alone. The characteristic long-range order of the Lipkin-Meshkov-Glick model is here mediated by the cavity field. For a closed qubit system, the inherent second order phase transition of the qubits is refle...
Aerosol model selection and uncertainty modelling by adaptive MCMC technique
Directory of Open Access Journals (Sweden)
M. Laine
2008-12-01
Full Text Available We present a new technique for model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in Bayesian way.
The algorithm developed here is called Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ. It uses Markov chain Monte Carlo (MCMC technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in wide area of applications since late 1990's. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.
We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
Implementation of a gust front head collapse scheme in the WRF numerical model
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2018-05-01
Gust fronts are thunderstorm-related phenomena usually associated with severe winds which are of great importance in theoretical meteorology, weather forecasting, cloud dynamics and precipitation, and wind engineering. An important feature of gust fronts demonstrated through both theoretical and observational studies is the periodic collapse and rebuild of the gust front head. This cyclic behavior of gust fronts results in periodic forcing of vertical velocity ahead of the parent thunderstorm, which consequently influences the storm dynamics and microphysics. This paper introduces the first gust front pulsation parameterization scheme in the WRF-ARW model (Weather Research and Forecasting-Advanced Research WRF). The influence of this new scheme on model performances is tested through investigation of the characteristics of an idealized supercell cumulonimbus cloud, as well as studying a real case of thunderstorms above the United Arab Emirates. In the ideal case, WRF with the gust front scheme produced more precipitation and showed different time evolution of mixing ratios of cloud water and rain, whereas the mixing ratios of ice and graupel are almost unchanged when compared to the default WRF run without the parameterization of gust front pulsation. The included parameterization did not disturb the general characteristics of thunderstorm cloud, such as the location of updraft and downdrafts, and the overall shape of the cloud. New cloud cells in front of the parent thunderstorm are also evident in both ideal and real cases due to the included forcing of vertical velocity caused by the periodic collapse of the gust front head. Despite some differences between the two WRF simulations and satellite observations, the inclusion of the gust front parameterization scheme produced more cumuliform clouds and seem to match better with real observations. Both WRF simulations gave poor results when it comes to matching the maximum composite radar reflectivity from radar
A general scheme for training and optimization of the Grenander deformable template model
DEFF Research Database (Denmark)
Fisker, Rune; Schultz, Nette; Duta, N.
2000-01-01
parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using...
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
Analyzing numerics of bulk microphysics schemes in community models: warm rain processes
Directory of Open Access Journals (Sweden)
I. Sednev
2012-08-01
Full Text Available Implementation of bulk cloud microphysics (BLK parameterizations in atmospheric models of different scales has gained momentum in the last two decades. Utilization of these parameterizations in cloud-resolving models when timesteps used for the host model integration are a few seconds or less is justified from the point of view of cloud physics. However, mechanistic extrapolation of the applicability of BLK schemes to the regional or global scales and the utilization of timesteps of hundreds up to thousands of seconds affect both physics and numerics.
We focus on the mathematical aspects of BLK schemes, such as stability and positive-definiteness. We provide a strict mathematical definition for the problem of warm rain formation. We also derive a general analytical condition (SM-criterion that remains valid regardless of parameterizations for warm rain processes in an explicit Eulerian time integration framework used to advanced finite-difference equations, which govern warm rain formation processes in microphysics packages in the Community Atmosphere Model and the Weather Research and Forecasting model. The SM-criterion allows for the existence of a unique positive-definite stable mass-conserving numerical solution, imposes an additional constraint on the timestep permitted due to the microphysics (like the Courant-Friedrichs-Lewy condition for the advection equation, and prohibits use of any additional assumptions not included in the strict mathematical definition of the problem under consideration.
By analyzing the numerics of warm rain processes in source codes of BLK schemes implemented in community models we provide general guidelines regarding the appropriate choice of time steps in these models.
Wilburn, Brenton K.
This dissertation presents the design, development, and simulation testing of an adaptive trajectory tracking algorithm capable of compensating for various aircraft subsystem failures and upset conditions. A comprehensive adaptive control framework, here within referred to as the immune model reference adaptive control (IMRAC) algorithm, is developed by synergistically merging core concepts from the biologically- inspired artificial immune system (AIS) paradigm with more traditional optimal and adaptive control techniques. In particular, a model reference adaptive control (MRAC) algorithm is enhanced with the detection and learning capabilities of a novel, artificial neural network augmented AIS scheme. With the given modifications, the MRAC scheme is capable of detecting and identifying a given failure or upset condition, learning how to adapt to the problem, responding in a manner specific to the given failure condition, and retaining the learning parameters for quicker adaptation to subsequent failures of the same nature. The IMRAC algorithm developed in this dissertation is applicable to a wide range of control problems. However, the proposed methodology is demonstrated in simulation for an unmanned aerial vehicle. The results presented show that the IMRAC algorithm is an effective and valuable extension to traditional optimal and adaptive control techniques. The implementation of this methodology can potentially have significant impacts on the operational safety of many complex systems.
La Malfa, Giampaolo; Lassi, Stefano; Bertelli, Marco; Albertini, Giorgio; Dosen, Anton
2009-01-01
The importance of emotional aspects in developing cognitive and social abilities has already been underlined by many authors even if there is no unanimous agreement on the factors constituting adaptive abilities, nor is there any on the way to measure them or on the relation between adaptive ability and cognitive level. The purposes of this study…
Directory of Open Access Journals (Sweden)
Mohammad Iranmanesh
2014-12-01
Full Text Available Many standard brands sell products under the volume discount scheme (VDS as more and more consumers are fond of purchasing products under this scheme. Despite volume discount being commonly practiced, there is a dearth of research, both conceptual and empirical, focusing on purchase characteristics factors and consumer internal evaluation concerning the purchase of products under VDS. To attempt to fill this void, this article develops a conceptual model on VDS with the intention of delineating the influence of the purchase characteristics factors on the consumer intention to purchase products under VDS and provides an explanation of their effects through consumer internal evaluation. Finally, the authors discuss the managerial implications of their research and offer guidelines for future empirical research.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
International Nuclear Information System (INIS)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-01-01
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonilizable problem with linear characteristic variables and stiff source terms with the attractive feature that neither Riemann solvers nor characteristic decompositions are in need. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given on a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers
Relaxation approximations to second-order traffic flow models by high-resolution schemes
Energy Technology Data Exchange (ETDEWEB)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M. [School of Production Engineering and Management, Technical University of Crete, University Campus, Chania 73100, Crete (Greece)
2015-03-10
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonilizable problem with linear characteristic variables and stiff source terms with the attractive feature that neither Riemann solvers nor characteristic decompositions are in need. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given on a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
Conti, Costanza; Romani, Lucia
2010-09-01
Univariate subdivision schemes are efficient iterative methods to generate smooth limit curves starting from a sequence of arbitrary points. Aim of this paper is to present and investigate a new family of 6-point interpolatory non-stationary subdivision schemes capable of reproducing important curves of great interest in geometric modeling and engineering applications, if starting from uniformly spaced initial samples. This new family can reproduce conic sections since it is obtained by a parameter depending affine combination of the cubic exponential B-spline symbol generating functions in the space V4,γ = {1,x,etx,e-tx} with t∈{0,s,is|s>0}. Moreover, the free parameter can be chosen to reproduce also other interesting analytic curves by imposing the algebraic conditions for the reproduction of an additional pair of exponential polynomials giving rise to different extensions of the space V4,γ.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models
van Elburg, R.A.J.; van Ooyen, A.
2009-01-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on
Boosting flood warning schemes with fast emulator of detailed hydrodynamic models
Bellos, V.; Carbajal, J. P.; Leitao, J. P.
2017-12-01
Floods are among the most destructive catastrophic events and their frequency has incremented over the last decades. To reduce flood impact and risks, flood warning schemes are installed in flood prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need of accurate predictions in urban environments or in floodplains hinders the use of detailed simulators. This sets the difficulty, we need fast predictions that meet the accuracy requirements. Most physics based detailed simulators although accurate, will not fulfill the speed demand. Even if High Performance Computing techniques are used (the magnitude of required simulation time is minutes/hours). As a consequence, most flood warning schemes are based in coarse ad-hoc approximations that cannot take advantage a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using an Gaussian Processes based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) offline stage to build the emulator; 2) online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and the specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried on using the emulator to predict the relevant observables quickly, and the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain given by the emulator allows also to quantify uncertainty in predictions using ensemble methods. The above methodology is applied in real
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
L M WANG
2017-08-16
Aug 16, 2017 ... A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) ... the neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized ..... following optimization parameters are needed: ⎧.
Directory of Open Access Journals (Sweden)
Liu Yue
2016-01-01
Full Text Available To improve the dynamic performance of permanent magnet synchronous motor(PMSM drive system, a adaptive nonsingular terminal sliding model control((NTSMC strategy was proposed. The proposed control strategy presents an adaptive variable-rated exponential reaching law which the L1 norm of state variables is introduced. Exponential and constant approach speed can adaptively adjust according to the state variables’ distance to the equilibrium position.The proposed scheme can shorten the reaching time and weaken system chatting. The method was applied to the PMSM speed servo system, and compared with the traditional terminal-sliding-mode regulator and PI regulator. Simulation results show that the proposed control strategy can improve dynamic, steady performance and robustness.
International Nuclear Information System (INIS)
Silva, R.S.; Galeao, A.C.; Carmo, E.G.D. do
1989-07-01
In this paper a new finite element model is constructed combining an r- refinement scheme with the CCAU method. The new formulation gives better approximation for boundary and internal layers compared to the standard CCAU, without increasing computer codes. (author) [pt
SMAFS, Steady-state analysis Model for Advanced Fuel cycle Schemes
International Nuclear Information System (INIS)
LEE, Kwang-Seok
2006-01-01
1 - Description of program or function: The model was developed as a part of the study, 'Advanced Fuel Cycles and Waste Management', which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee in the OECD/NEA. The model was designed for an efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent and offers users the capability to track down the cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study including mass flows, waste generation, cost data, and other data such as activities, decay heat and neutron sources of spent fuel and high-level waste along time are included in the model and can be displayed. The user can modify easily the values of mass flows and/or cost parameters and see the corresponding changes in the results. The model calculates: front-end fuel cycle mass flows such as requirements of enrichment and conversion services and natural uranium; mass of waste based on the waste generation parameters and the mass flow; and all costs. It performs Monte Carlo simulations with changing the values of all unit costs within their respective ranges (from lower to upper bounds). 2 - Methods: In Monte Carlo simulation, it is assumed that all unit costs follow a triangular probability distribution function, i.e., the probability that the unit cost has a value increases linearly from its lower bound to the nominal value and then decreases linearly to its upper bound. 3 - Restrictions on the complexity of the problem: The limit for the Monte Carlo iterations is the one of an Excel worksheet, i.e. 65,536
ADAPTIVE MODEL REFINEMENT FOR THE IONOSPHERE AND THERMOSPHERE
National Aeronautics and Space Administration — ADAPTIVE MODEL REFINEMENT FOR THE IONOSPHERE AND THERMOSPHERE ANTHONY M. D’AMATO∗, AARON J. RIDLEY∗∗, AND DENNIS S. BERNSTEIN∗∗∗ Abstract. Mathematical models of...
Efficient ECG Signal Compression Using Adaptive Heart Model
National Research Council Canada - National Science Library
Szilagyi, S
2001-01-01
This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering the waves from the signal are localized and the model's parameters are determined...
Recursive Gaussian Process Regression Model for Adaptive Quality Monitoring in Batch Processes
Directory of Open Access Journals (Sweden)
Le Zhou
2015-01-01
Full Text Available In chemical batch processes with slow responses and a long duration, it is time-consuming and expensive to obtain sufficient normal data for statistical analysis. With the persistent accumulation of the newly evolving data, the modelling becomes adequate gradually and the subsequent batches will change slightly owing to the slow time-varying behavior. To efficiently make use of the small amount of initial data and the newly evolving data sets, an adaptive monitoring scheme based on the recursive Gaussian process (RGP model is designed in this paper. Based on the initial data, a Gaussian process model and the corresponding SPE statistic are constructed at first. When the new batches of data are included, a strategy based on the RGP model is used to choose the proper data for model updating. The performance of the proposed method is finally demonstrated by a penicillin fermentation batch process and the result indicates that the proposed monitoring scheme is effective for adaptive modelling and online monitoring.
Directory of Open Access Journals (Sweden)
Hyewon Lee
2017-04-01
Full Text Available In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. The simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study
Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak
2017-04-01
In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted due to transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits and hence it may still contain useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in transmitted packet, in which two received copies are XORed to obtain the bit location of erroneous bits. Thereafter, the packet is corrected by bit inversion of bit located as erroneous. Aggressive packet combining (APC) is a logic extension of PC primarily designed for wireless communication with objective of correcting error with low latency. PC offers higher throughput than APC, but PC does not correct double bit errors if occur in same bit location of erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitation of both. In the proposed technique, applications of APC-PC on Gilbert two state model has been studied. The simulation results show that the proposed technique offers better throughput than the conventional APC and lesser packet error rate than PC scheme.
Energy Technology Data Exchange (ETDEWEB)
Zubov, V.A.; Rozanov, E.V. [Main Geophysical Observatory, St.Petersburg (Russian Federation); Schlesinger, M.E.; Andronova, N.G. [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Atmospheric Sciences
1997-12-31
The problems of ozone depletion, climate change and atmospheric pollution strongly depend on the processes of production, destruction and transport of chemical species. A hybrid transport scheme was developed, consisting of a semi-Lagrangian scheme for horizontal advection and the Prather scheme for vertical transport, which has been used in the Atmospheric Chemical Transport model to calculate the distributions of different chemical species. The performance of the new hybrid scheme has been evaluated in comparison with other transport schemes on the basis of specially designed tests. The seasonal cycle of the distribution of N{sub 2}O simulated by the model, as well as the dispersion of NO{sub x} exhausted from subsonic aircraft, are in good agreement with published data. (author) 8 refs.
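The departure-point idea behind semi-Lagrangian advection can be illustrated with a one-dimensional sketch; the linear interpolation and periodic grid here are simplifying assumptions, not the model's actual discretization.

```python
# One step of 1D semi-Lagrangian advection on a periodic grid: trace each grid
# point's trajectory back to its departure point and interpolate the field
# there. (Illustrative only; the hybrid scheme above is more sophisticated.)
import math

def semi_lagrangian_step(field, u, dt, dx):
    n = len(field)
    new = []
    for i in range(n):
        x_dep = i - u * dt / dx          # departure point, in grid units
        j = math.floor(x_dep)            # left neighbour of the departure point
        frac = x_dep - j
        # Linear interpolation between the two neighbouring grid values.
        new.append((1 - frac) * field[j % n] + frac * field[(j + 1) % n])
    return new

# Advecting a spike with u*dt/dx = 1 shifts it by exactly one cell:
f = [0.0, 0.0, 1.0, 0.0, 0.0]
f1 = semi_lagrangian_step(f, u=1.0, dt=1.0, dx=1.0)
```

Because the interpolation is done at the departure point, the step remains stable even for Courant numbers above one, which is the main appeal of semi-Lagrangian transport.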
The Application of Adaptive Behaviour Models: A Systematic Review
Directory of Open Access Journals (Sweden)
Jessica A. Price
2018-01-01
Full Text Available Adaptive behaviour has been viewed broadly as an individual’s ability to meet the standards of social responsibility and independence; however, this definition has been a source of debate amongst researchers and clinicians. Based on the rich history and the importance of the construct of adaptive behaviour, the current study aimed to provide a comprehensive overview of the application of adaptive behaviour models to assessment tools, through a systematic review. A plethora of assessment measures for adaptive behaviour have been developed in order to adequately assess the construct; however, it appears that the only definition on which authors seem to agree is that adaptive behaviour is what adaptive behaviour scales measure. The importance of the construct for diagnosis, intervention and planning has been highlighted throughout the literature. It is recommended that researchers and clinicians critically review which measures of adaptive behaviour they are utilising, and it is suggested that the definition and theory be revisited.
Energy Technology Data Exchange (ETDEWEB)
Mengelkamp, H.T.; Warrach, K.; Raschke, E. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik
1997-12-31
A soil-vegetation-atmosphere-transfer scheme is presented here which solves the coupled system of the Surface Energy and Water Balance (SEWAB) equations considering partly vegetated surfaces. It is based on the one-layer concept for vegetation. In the soil the diffusion equations for heat and moisture are solved on a multi-layer grid. SEWAB has been developed to serve as a land-surface scheme for atmospheric circulation models. Being forced with atmospheric data from either simulations or measurements, it calculates surface and subsurface runoff that can serve as input to hydrologic models. The model has been validated with field data from the FIFE experiment and has participated in the PILPS project for intercomparison of land-surface parameterization schemes. From these experiments we feel that SEWAB reasonably well partitions the radiation and precipitation into sensible and latent heat fluxes as well as into runoff and soil moisture storage. (orig.)
Impact of an improved shortwave radiation scheme in the MAECHAM5 General Circulation Model
Directory of Open Access Journals (Sweden)
J. J. Morcrette
2007-05-01
Full Text Available In order to improve the representation of ozone absorption in the stratosphere of the MAECHAM5 general circulation model, the spectral resolution of the shortwave radiation parameterization used in the model has been increased from 4 to 6 bands. Two 20-year simulations with the general circulation model have been performed, one with the standard and one with the newly introduced parameterization, to evaluate the temperature and dynamical changes arising from the two different representations of the shortwave radiative transfer. In the simulation with the increased spectral resolution in the radiation parameterization, a significant warming of almost the entire model domain is reported. At the summer stratopause the temperature increase is about 6 K and alleviates the cold bias present in the model when the standard radiation scheme is used. These general circulation model results are consistent both with previous validation of the radiation scheme and with the offline clear-sky comparison performed in the current work with a discrete-ordinate 4-stream scattering line-by-line radiative transfer model. The offline validation shows a substantial reduction of the daily averaged shortwave heating rate bias (1–2 K/day cooling) that occurs for the standard radiation parameterization in the upper stratosphere, present under a range of atmospheric conditions. Therefore, the 6-band shortwave radiation parameterization is considered to be better suited for the representation of ozone absorption in the stratosphere than the 4-band parameterization. Concerning the dynamical response in the general circulation model, it is found that the reported warming at the summer stratopause induces stronger zonal mean zonal winds in the middle atmosphere. These stronger zonal mean zonal winds thereafter appear to produce a dynamical feedback that results in a dynamical warming (cooling) of the polar winter (summer) mesosphere, caused by an
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
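The granule-description generation can be sketched with the standard library; the element names and resource-ID layout below are simplified illustrations rather than the exact SPASE schema.

```python
# Sketch of generating a SPASE-like Granule description: an individual data
# file is tied to a "parent" resource ID and its access URL. (Element names
# are simplified; the real SPASE schema is richer.)
import xml.etree.ElementTree as ET

def make_granule(parent_id, file_name, url):
    granule = ET.Element("Granule")
    ET.SubElement(granule, "ResourceID").text = parent_id + "/" + file_name
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(granule, encoding="unicode")

xml_text = make_granule(
    "spase://Example/NumericalData/MissionA/Instrument1",
    "data_20150101.cdf",
    "https://example.gov/data/data_20150101.cdf",
)
```

Running such a generator over a nightly file listing and diffing the results is one way to keep granule records in step with new, modified, or deleted files.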
Directory of Open Access Journals (Sweden)
Shun-Yuan Wang
2015-03-01
Full Text Available This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, an integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. Three intelligent control schemes—the adaptive supervisory sliding FCMAC, the adaptive sliding FCMAC, and the adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes.
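The RMSE performance index used to compare the three schemes can be sketched as follows; the sampled speed values are made up for illustration.

```python
# RMSE as a tracking performance index: lower values mean the controller
# followed the speed command more closely. (Sample values are hypothetical.)
import math

def rmse(reference, measured):
    n = len(reference)
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured)) / n)

ref = [100.0, 100.0, 100.0, 100.0]   # speed command (rpm)
fcmac = [99.0, 100.5, 100.2, 99.8]   # hypothetical response, scheme A
cmac = [97.0, 102.0, 101.5, 98.5]    # hypothetical response, scheme B
better = rmse(ref, fcmac) < rmse(ref, cmac)   # scheme A tracks better
```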
Riaz, Faisal; Niazi, Muaz A
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: firstly, it has been simulated using NetLogo, a standard agent-based modeling tool, and secondly, at a practical level, using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies. Practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with the IEEE 802.11n-based existing state-of-the-art mirroring-neuron-based collision avoidance scheme.
A design of mathematical modelling for the mudharabah scheme in shariah insurance
Cahyandari, R.; Mayaningsih, D.; Sukono
2017-01-01
The Indonesian Shariah Insurance Association (AASI) believes that 2014 was the year of Indonesian shariah insurance, since its growth was above that of conventional insurance. In December 2013, 43% growth was recorded for shariah insurance, while conventional insurance only hit 20%. This means that shariah insurance has tremendous potential to keep growing in the future. In addition, the growth can be predicted from the number of conventional insurance companies who open a sharia division, along with the development of Islamic banking, which automatically demands the role of shariah insurance to protect assets and banking transactions. The development of shariah insurance should be accompanied by the development of premium fund management mechanisms, in order to create innovation in shariah insurance products that benefit society. The development of premium fund management models shows positive progress through the emergence of Mudharabah, Wakala, Hybrid (Mudharabah-Wakala), and Wakala-Waqf. However, the term ‘model’ in this paper refers to an operational model in the form of a scheme of the management mechanism. Therefore, this paper describes a mathematical model for a premium fund management scheme, especially for the Mudharabah concept. Mathematical modeling is required for an analysis process that can be used to predict risks the company could face in the future, so that the company can take precautionary policies to minimize those risks.
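The mudharabah premium-fund flow can be sketched as simple arithmetic; every ratio and amount below is a hypothetical illustration, not a value from the paper.

```python
# Toy mudharabah flow: split the premium into a tabarru' (mutual risk) fund
# and an invested fund, then share the investment profit between participant
# and operator by the agreed mudharabah ratio. All numbers are hypothetical.

def mudharabah_split(premium, tabarru_rate, investment_return,
                     participant_share):
    tabarru = premium * tabarru_rate                  # goes to the risk fund
    invested = premium - tabarru                      # managed by the operator
    profit = invested * investment_return
    participant_profit = profit * participant_share   # mudharabah ratio
    operator_profit = profit - participant_profit
    return tabarru, invested, participant_profit, operator_profit

t, inv, pp, op = mudharabah_split(
    premium=1000.0, tabarru_rate=0.1,
    investment_return=0.08, participant_share=0.6)
```

A risk analysis of the kind the paper aims at would then stress-test such a scheme against adverse values of the investment return and claims drawn from the tabarru' fund.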
Adaptation dynamics of the quasispecies model
Indian Academy of Sciences (India)
2015-11-27
We study the adaptation dynamics of an initially maladapted population evolving via the elementary processes of mutation and selection. The evolution occurs on rugged fitness landscapes which are defined on the multi-dimensional genotypic space and have many local peaks separated by low fitness ...
Modeling Family Adaptation to Fragile X Syndrome
Raspa, Melissa; Bailey, Donald, Jr.; Bann, Carla; Bishop, Ellen
2014-01-01
Using data from a survey of 1,099 families who have a child with Fragile X syndrome, we examined adaptation across 7 dimensions of family life: parenting knowledge, social support, social life, financial impact, well-being, quality of life, and overall impact. Results illustrate that although families report a high quality of life, they struggle…
Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.
2017-12-01
Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
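The buffering effect of floodplain storage on peak discharge can be illustrated with a single linear reservoir; the parameters and inflow pulse are illustrative, and CaMa-Flood itself is far more elaborate.

```python
# A single linear reservoir (dS/dt = I - Q, with Q = S/k) routed explicitly:
# added storage both lowers and delays the discharge peak, the qualitative
# behaviour attributed above to floodplain reservoirs. Parameters are
# illustrative only.

def route_linear_reservoir(inflow, k=5.0, dt=1.0):
    storage, outflow = 0.0, []
    for i in inflow:
        storage += dt * (i - storage / k)   # explicit Euler update of storage
        outflow.append(storage / k)
    return outflow

inflow = [0.0] * 3 + [10.0] * 3 + [0.0] * 14   # a runoff pulse
outflow = route_linear_reservoir(inflow)
peak_in, t_in = max(inflow), inflow.index(max(inflow))
peak_out, t_out = max(outflow), outflow.index(max(outflow))
```

The routed peak is both smaller and later than the input peak, mirroring the two-to-three-week delay and peak reduction reported for CaMa-Flood at basin scale.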
Histogram Equalization to Model Adaptation for Robust Speech Recognition
Directory of Open Access Journals (Sweden)
Suh Youngjoo
2010-01-01
Full Text Available We propose a new model adaptation method based on the histogram equalization technique for providing robustness in noisy environments. The trained acoustic mean models of a speech recognizer are adapted into environmentally matched conditions by using the histogram equalization algorithm on a single utterance basis. For more robust speech recognition in the heavily noisy conditions, trained acoustic covariance models are efficiently adapted by the signal-to-noise ratio-dependent linear interpolation between trained covariance models and utterance-level sample covariance models. Speech recognition experiments on both the digit-based Aurora2 task and the large vocabulary-based task showed that the proposed model adaptation approach provides significant performance improvements compared to the baseline speech recognizer trained on the clean speech data.
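The quantile-matching idea behind histogram equalization can be sketched as follows; the paper applies the same principle to acoustic *model* means on a per-utterance basis, whereas this toy maps feature values directly.

```python
# Histogram (quantile) equalization sketch: map test values so their empirical
# distribution matches a reference (training) distribution. Each test value is
# sent to the reference value at the same quantile.

def equalize(test_values, reference_values):
    ref_sorted = sorted(reference_values)
    test_sorted = sorted(test_values)
    n = len(test_values)
    mapped = []
    for x in test_values:
        rank = test_sorted.index(x)                        # empirical CDF rank
        j = min(len(ref_sorted) - 1, rank * len(ref_sorted) // n)
        mapped.append(ref_sorted[j])                       # same quantile in ref
    return mapped

clean = [0.0, 1.0, 2.0, 3.0]       # reference (training) distribution
noisy = [10.0, 11.0, 12.0, 13.0]   # same shape, shifted by noise
restored = equalize(noisy, clean)
```

Because only ranks matter, any monotonic distortion of the features (here a constant shift) is undone exactly.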
Development and evaluation of a building energy model integrated in the TEB scheme
Directory of Open Access Journals (Sweden)
B. Bueno
2012-03-01
Full Text Available The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. BEM allows for previously unavailable sophistication in the modelling of air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.
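The single-thermal-zone idea can be sketched as a one-capacity energy balance; all coefficients below are illustrative assumptions, not BEM's actual parameters.

```python
# Minimal single-zone sketch in the spirit of BEM: one generic thermal mass,
# conduction plus infiltration losses, solar/internal gains, and an HVAC term.
# All coefficients are illustrative assumptions.

def step_zone(t_in, t_out, dt_h=1.0,
              ua=200.0,         # enclosure conductance, W/K
              infil=50.0,       # infiltration conductance, W/K
              gains=500.0,      # solar + internal heat gains, W
              hvac=0.0,         # cooling (<0) or heating (>0), W
              capacity=5.0e6):  # generic thermal mass, J/K
    q = (ua + infil) * (t_out - t_in) + gains + hvac
    return t_in + q * dt_h * 3600.0 / capacity

# A free-floating zone on a hot day warms toward (and, because of internal
# gains, slightly past) the outdoor temperature:
t = 20.0
for _ in range(24):
    t = step_zone(t, t_out=30.0)
```

With the illustrative numbers the free-floating equilibrium is outdoor temperature plus gains divided by total conductance, i.e. 30 + 500/250 = 32 degrees, which the zone approaches over the day.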
Shirmin, G. I.
1980-08-01
In the present paper, an averaging on the basis of Fatou's (1931) scheme is obtained within the framework of a version of the doubly restricted problem of four bodies. A proof is obtained for the existence of particular solutions that are analogous to the Eulerian and Lagrangian solutions. The solutions are applied to an analysis of first-order secular disturbances in the positions of libration points, caused by the influence of a body whose attraction is neglected in the classical model of the restricted three-body problem. These disturbances are shown to lead to continuous displacements of the libration points.
A Certificateless Ring Signature Scheme with High Efficiency in the Random Oracle Model
Directory of Open Access Journals (Sweden)
Yingying Zhang
2017-01-01
Full Text Available Ring signature is a kind of digital signature which can protect the identity of the signer. Certificateless public key cryptography not only overcomes the key escrow problem but also retains the advantages of identity-based cryptography. A certificateless ring signature integrates ring signatures with certificateless public key cryptography. In this paper, we propose an efficient certificateless ring signature scheme; it requires only three bilinear pairing operations in the verify algorithm. The scheme is proved to be unforgeable in the random oracle model.
Energy Technology Data Exchange (ETDEWEB)
Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-02-01
An orthogonal 2-dimensional numerical code has been developed. The present code contains 9 turbulence models that are widely used: a standard k-{epsilon} model and 8 low-Reynolds-number ones. It also includes 6 numerical schemes: 5 low-order schemes and 1 high-order scheme, QUICK. To verify the present numerical code, pipe flow, channel flow and expansion pipe flow are solved by this code with various options of turbulence models and numerical schemes, and the calculated outputs are compared to experimental data. Furthermore, the discretization error that originates from the use of the standard k-{epsilon} turbulence model with a wall function is greatly reduced by introducing a new grid system in place of the conventional one in the present code. 23 refs., 58 figs., 6 tabs. (Author)
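The QUICK scheme named above interpolates a convected face value from two upstream nodes and one downstream node; the standard uniform-grid textbook form is sketched below (the report's own implementation details are not given in the abstract).

```python
# QUICK face interpolation on a uniform grid, for positive flow direction:
#   phi_face = 6/8*phi_C + 3/8*phi_D - 1/8*phi_U
# with U, C, D the far-upstream, upstream (central) and downstream nodes.

def quick_face(phi_u, phi_c, phi_d):
    return 0.75 * phi_c + 0.375 * phi_d - 0.125 * phi_u

# QUICK fits a quadratic through the three nodes, so it is exact for
# quadratic profiles: phi(x) = x**2 at nodes x = 0, 1, 2 gives the face
# value at x = 1.5, i.e. 1.5**2 = 2.25.
face = quick_face(0.0, 1.0, 4.0)
```

This third-order face value is what distinguishes QUICK from the low-order (e.g. upwind and central) schemes also available in the code.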
ADAPTIVE LEARNING OF HIDDEN MARKOV MODELS FOR EMOTIONAL SPEECH
Directory of Open Access Journals (Sweden)
A. V. Tkachenia
2014-01-01
Full Text Available An on-line unsupervised algorithm for estimating the hidden Markov model (HMM) parameters is presented, which solves the problem of adapting hidden Markov models to emotional speech. To increase the reliability of the estimated HMM parameters, a mechanism of forgetting and updating is proposed. A functional block diagram of the adaptation algorithm is also provided, together with the obtained results, which improve the efficiency of emotional speech recognition.
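The forgetting-and-updating mechanism can be sketched for a single Gaussian state mean; the sufficient-statistics recursion and forgetting factor below are generic assumptions, and the paper's exact formulas may differ.

```python
# On-line adaptation of one HMM state's Gaussian mean: exponentially forget
# old sufficient statistics, add the new frame weighted by its state
# responsibility, and re-estimate the mean. (Generic sketch, not the paper's
# exact recursion.)

def update_mean(mean, observation, responsibility, stats, forget=0.98):
    s0, s1 = stats                       # zeroth/first-order statistics
    s0 = forget * s0 + responsibility
    s1 = forget * s1 + responsibility * observation
    return (s1 / s0 if s0 > 0 else mean), (s0, s1)

mean, stats = 0.0, (1.0, 0.0)   # prior pseudo-count of 1 at mean 0
for frame in [5.0] * 50:         # a stream of frames emitted near 5.0
    mean, stats = update_mean(mean, frame, responsibility=1.0, stats=stats)
```

The forgetting factor trades stability for adaptation speed: values near 1 keep estimates reliable under noise, while smaller values track emotional-speech drift faster.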
Comprehending isospin breaking effects of X(3872) in a Friedrichs-model-like scheme
Zhou, Zhi-Yong; Xiao, Zhiguang
2018-02-01
Recently, we have shown that the X(3872) state can be naturally generated as a bound state by incorporating the hadron interactions into the Godfrey-Isgur quark model using a Friedrichs-like model combined with the quark pair creation model, in which the wave function for the X(3872) as a combination of the bare cc̄ state and the continuum states can also be obtained. Under this scheme, we now investigate the isospin-breaking effect of X(3872) in its decays to J/ψ π+π− and J/ψ π+π−π0. By coupling its dominant continuum parts to J/ψ ρ and J/ψ ω through the quark rearrangement process, one could obtain the reasonable ratio B(X(3872)→J/ψ π+π−π0)/B(X(3872)→J/ψ π+π−) ≃ 0.58–0.92. It is also shown that the D̄D* invariant mass distributions in the B→D̄D*K decays could be understood qualitatively at the same time. This scheme may provide more insight into the enigmatic nature of the X(3872) state.
Koster, Rindal D.; Milly, P. C. D.
1997-01-01
The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) has shown that different land surface models (LSMs) driven by the same meteorological forcing can produce markedly different surface energy and water budgets, even when certain critical aspects of the LSMs (vegetation cover, albedo, turbulent drag coefficient, and snow cover) are carefully controlled. To help explain these differences, the authors devised a monthly water balance model that successfully reproduces the annual and seasonal water balances of the different PILPS schemes. Analysis of this model leads to the identification of two quantities that characterize an LSM's formulation of soil water balance dynamics: (1) the efficiency of the soil's evaporation sink integrated over the active soil moisture range, and (2) the fraction of this range over which runoff is generated. Regardless of the LSM's complexity, the combination of these two derived parameters with rates of interception loss, potential evaporation, and precipitation provides a reasonable estimate for the LSM's simulated annual water balance. The two derived parameters shed light on how evaporation and runoff formulations interact in an LSM, and the analysis as a whole underscores the need for compatibility in these formulations.
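The two derived parameters can be illustrated with a toy bucket model; the evaporation-efficiency and runoff-fraction values below are hypothetical, not fitted to any PILPS scheme.

```python
# Toy bucket model in the spirit of the two derived parameters: an
# evaporation-sink efficiency over the active soil-moisture range, and the
# fraction of that range over which runoff is generated. All amounts are in
# units of the active range (capacity = 1); numbers are illustrative.

def annual_balance(precip, pot_evap, evap_eff=0.6, runoff_frac=0.3,
                   w=0.5, steps=365):
    """w is the current soil moisture as a fraction of the active range."""
    evap_total = runoff_total = 0.0
    p, ep = precip / steps, pot_evap / steps
    for _ in range(steps):
        evap = min(evap_eff * w * ep, w)               # moisture-limited sink
        runoff = p if w > 1.0 - runoff_frac else 0.0   # runoff-generating zone
        w = min(1.0, w + p - evap - runoff)
        evap_total += evap
        runoff_total += runoff
    return evap_total, runoff_total

evap, runoff = annual_balance(precip=1.0, pot_evap=1.2)
```

Varying only the two parameters repartitions the same forcing between evaporation and runoff, which is exactly the sensitivity the PILPS analysis attributes to differing LSM formulations.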
New aspects of the adaptive synchronization and hyperchaos suppression of a financial model
International Nuclear Information System (INIS)
Jajarmi, Amin; Hajipour, Mojtaba; Baleanu, Dumitru
2017-01-01
This paper mainly focuses on the analysis of a hyperchaotic financial system as well as its chaos control and synchronization. The phase diagrams of the above system are plotted and its dynamical behaviours like equilibrium points, stability, hyperchaotic attractors and Lyapunov exponents are investigated. In order to control the hyperchaos, an efficient optimal controller based on the Pontryagin’s maximum principle is designed and an adaptive controller established by the Lyapunov stability theory is also implemented. Furthermore, two identical financial models are globally synchronized by using an interesting adaptive control scheme. Finally, a fractional economic model is introduced which can also generate hyperchaotic attractors. In this case, a linear state feedback controller together with an active control technique are used in order to control the hyperchaos and realize the synchronization, respectively. Numerical simulations verifying the theoretical analysis are included.
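The adaptive synchronization idea can be sketched with a scalar stand-in system; the dynamics f(x) = a·x − x³ and the adaptation law below are illustrative choices, not the paper's financial model.

```python
# Sketch of Lyapunov-style adaptive synchronization: a slave system tracks a
# master via feedback u = -k*e, where the gain adapts as k_dot = gamma*e**2.
# The scalar dynamics f is only a stand-in for the financial system.

def f(x, a=1.0):
    return a * x - x ** 3

x, y, k = 1.0, -1.0, 0.0      # master state, slave state, adaptive gain
dt, gamma = 0.005, 2.0
for _ in range(20000):        # explicit Euler integration
    e = y - x                 # synchronization error
    x += dt * f(x)
    y += dt * (f(y) - k * e)  # slave with adaptive feedback
    k += dt * gamma * e ** 2  # gain grows until the error dies out
```

The gain increases only while an error persists, so it settles at a finite value once the two trajectories coincide, which is the usual behaviour of such adaptive laws.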
Adaptive multiresolution methods
Directory of Open Access Journals (Sweden)
Schneider Kai
2011-12-01
Full Text Available These lecture notes present adaptive multiresolution schemes for evolutionary PDEs in Cartesian geometries. The discretization schemes are based either on finite volume or finite difference schemes. The concept of multiresolution analyses, including Harten’s approach for point and cell averages, is described in some detail. Then the sparse point representation method is discussed. Different strategies for adaptive time-stepping, such as local scale-dependent time stepping and time step control, are presented. Numerous numerical examples in one, two and three space dimensions validate the adaptive schemes and illustrate the accuracy and the gain in computational efficiency in terms of CPU time and memory requirements. As a further aspect, the modeling of turbulent flows using multiresolution decompositions, the so-called Coherent Vortex Simulation approach, is also described, and examples are given for computations of three-dimensional weakly compressible mixing layers. Most of the material concerning applications to PDEs is assembled and adapted from previous publications [27, 31, 32, 34, 67, 69].
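The core of the multiresolution machinery described above can be illustrated in a few lines. The sketch below is a minimal, one-level version of Harten's transform for cell averages with first-order (constant) prediction and illustrative data; the lecture notes use higher-order prediction operators and store only the non-redundant details, so this is a simplified analogue, not the schemes themselves.

```python
import numpy as np

def harten_decompose(u_fine):
    """One level of Harten's multiresolution transform for cell averages:
    exact coarsening plus details (errors of a constant prediction)."""
    u_coarse = 0.5 * (u_fine[0::2] + u_fine[1::2])  # exact coarse averages
    details = u_fine - np.repeat(u_coarse, 2)       # prediction errors
    return u_coarse, details

def harten_reconstruct(u_coarse, details):
    return np.repeat(u_coarse, 2) + details

u = np.sin(np.linspace(0.0, np.pi, 16))             # fine-grid cell averages
uc, d = harten_decompose(u)
assert np.allclose(harten_reconstruct(uc, d), u)    # lossless round trip
# Adaptivity: thresholding small details gives a sparse representation,
# keeping resolution only where the solution has small-scale structure.
d_sparse = np.where(np.abs(d) > 1e-3, d, 0.0)
```

Thresholding the details is exactly what drives the grid adaptation: cells whose details fall below the tolerance are coarsened away.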
A self-organized internal models architecture for coding sensory-motor schemes
Directory of Open Access Journals (Sweden)
Esaú eEscobar Juárez
2016-04-01
Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach addresses integrally the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
Discrete Model Reference Adaptive Control System for Automatic Profiling Machine
Directory of Open Access Journals (Sweden)
Peng Song
2012-01-01
Full Text Available The automatic profiling machine is a motion system with a high degree of parameter variation and frequent transients, and it requires accurate and timely control. In this paper, the discrete model reference adaptive control system of the automatic profiling machine is discussed. Firstly, the model of the automatic profiling machine is presented according to the parameters of the DC motor. Then the design of the discrete model reference adaptive controller is proposed, and the control rules are proven. The simulation results show that the adaptive control system has favorable dynamic performance.
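The discrete model-reference adaptive idea summarized above can be sketched for a first-order plant. Everything below — the plant and reference-model coefficients, the MIT-rule-style gradient update, and the adaptation gain — is an illustrative assumption, not the profiling-machine design from the paper.

```python
# Discrete model-reference adaptive control, minimal sketch:
# plant y[k+1] = a*y[k] + b*u[k] with a, b treated as unknown by the
# controller; the adjustable gain theta is adapted so that y tracks
# the reference model ym.
a, b = 0.9, 0.5        # "unknown" plant parameters (used only to simulate)
am, bm = 0.6, 0.4      # reference model: ym[k+1] = am*ym[k] + bm*r[k]
gamma = 0.05           # adaptation gain (assumed)
theta = 0.0            # adjustable feedforward gain
y = ym = 0.0
errs = []
for k in range(400):
    r = 1.0                      # unit-step reference
    u = theta * r                # adjustable control law
    e = y - ym                   # tracking error
    theta -= gamma * e * ym      # gradient (MIT-rule-style) update
    y = a * y + b * u            # plant step
    ym = am * ym + bm * r        # reference-model step
    errs.append(abs(e))
# theta settles so the closed loop matches the reference model's
# steady state, and the tracking error decays toward zero.
```

A production design would in addition prove stability of the adaptation law (e.g. via Lyapunov arguments, as the paper does for its control rules) rather than rely on a small gain.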
Improved Gaussian Mixture Models for Adaptive Foreground Segmentation
DEFF Research Database (Denmark)
Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua
2016-01-01
Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic...
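The per-pixel mixture update at the heart of Stauffer & Grimson's algorithm can be sketched for a single scalar pixel. This is a simplified analogue: the learning rate is held constant instead of being likelihood-weighted, intensities are scalar rather than RGB, and all parameter values are illustrative assumptions.

```python
import numpy as np

K, alpha, match_thresh = 3, 0.05, 2.5   # components, learning rate, match in std-devs
w = np.array([1.0, 0.0, 0.0])           # component weights
mu = np.zeros(K)                        # component means
var = np.full(K, 30.0)                  # component variances

def update(x):
    """Online mixture update for one pixel value x; returns True if x is
    explained by an existing component (background-like), else False."""
    global w, mu, var
    d = np.abs(x - mu) / np.sqrt(var)
    k = int(np.argmin(d))
    if d[k] < match_thresh:             # matched: adapt that component
        w = (1 - alpha) * w
        w[k] += alpha
        mu[k] += alpha * (x - mu[k])    # simplified constant learning rate
        var[k] += alpha * ((x - mu[k]) ** 2 - var[k])
        matched = True
    else:                               # no match: replace weakest component
        k = int(np.argmin(w))
        w[k], mu[k], var[k] = alpha, x, 30.0
        matched = False
    w /= w.sum()
    return matched

rng = np.random.default_rng(0)
for _ in range(200):                    # learn a background around intensity 100
    update(100.0 + rng.normal())
```

After training, values near the learned background match an existing component, while a far-off value (a foreground object) does not.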
Directory of Open Access Journals (Sweden)
Tao Chen
2017-05-01
Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR and is compared with inverse distance weighting (IDW and multiple linear regression (MLR interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, a hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
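Of the interpolation schemes compared above, IDW is the simplest baseline and fits in a few lines. The sketch below assumes the common power p = 2; the gauge locations and values are illustrative, not the Fuhe River data.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, p=2.0, eps=1e-12):
    """Interpolate z at query points as inverse-distance-weighted means
    of the observations (power parameter p assumed, typically 2)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    wgt = 1.0 / (d + eps) ** p          # eps avoids division by zero at gauges
    return (wgt * z_obs).sum(axis=1) / wgt.sum(axis=1)

# Four hypothetical rain gauges at the corners of a unit square (mm):
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
z_center = idw(xy, z, np.array([[0.5, 0.5]]))  # equidistant point -> plain mean 25.0
```

Because IDW weights depend only on distance, it cannot correct for terrain effects between gauges — which is exactly the weakness the PCRR scheme in the study is designed to address.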
Validity of tests under covariate-adaptive biased coin randomization and generalized linear models.
Shao, Jun; Yu, Xinxin
2013-12-01
Some covariate-adaptive randomization methods have been used in clinical trials for a long time, but little theoretical work had been done about testing hypotheses under covariate-adaptive randomization until Shao et al. (2010), who provided a theory with detailed discussion for responses under linear models. In this article, we establish some asymptotic results for covariate-adaptive biased coin randomization under generalized linear models with possibly unknown link functions. We show that the simple t-test without using any covariate is conservative under covariate-adaptive biased coin randomization in terms of its Type I error rate, and that a valid test using the bootstrap can be constructed. This bootstrap test, utilizing covariates in the randomization scheme, is shown to be asymptotically as efficient as Wald's test correctly using covariates in the analysis. Thus, the efficiency loss due to not using covariates in the analysis can be recovered by utilizing covariates in covariate-adaptive biased coin randomization. Our theory is illustrated with the two most popular types of discrete outcomes, binary responses and event counts under the Poisson model, and with exponentially distributed continuous responses. We also show that an alternative simple test without using any covariate under the Poisson model has an inflated Type I error rate under simple randomization, but is valid under covariate-adaptive biased coin randomization. Effects on the validity of tests due to model misspecification are also discussed. Simulation studies of the Type I errors and powers of several tests are presented for both discrete and continuous responses. © 2013, The International Biometric Society.
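A covariate-adaptive biased coin can be sketched as follows: within each covariate stratum, the coin is biased toward the currently under-represented arm. The bias probability p = 0.75, the two binary covariates, and the Efron-style within-stratum rule are illustrative assumptions, not the exact designs analyzed in the paper.

```python
import random

def assign(history, covariates, p=0.75, rng=random):
    """Biased-coin assignment: within the patient's covariate stratum,
    favor the under-represented arm with probability p (Efron-type coin)."""
    stratum = tuple(covariates)
    n = [0, 0]
    for cov, arm in history:
        if cov == stratum:
            n[arm] += 1
    if n[0] == n[1]:
        return int(rng.random() < 0.5)          # balanced: fair coin
    under = 0 if n[0] < n[1] else 1
    return under if rng.random() < p else 1 - under

rng = random.Random(0)
history = []
for i in range(200):
    cov = (i % 2, (i // 2) % 2)                 # two binary covariates -> 4 strata
    history.append((cov, assign(history, cov, rng=rng)))

# Per-stratum treatment counts stay nearly balanced:
imbalance = {}
for cov, arm in history:
    imbalance.setdefault(cov, [0, 0])[arm] += 1
```

It is this forced within-stratum balance that makes the naive t-test conservative: the two arms are more alike in their covariates than simple randomization would produce.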
Self Adaptive Hypermedia Navigation Based On Learner Model Characters
Vassileva, Dessislava; Bontchev, Boyan
2006-01-01
Dessislava Vassileva, Boyan Bontchev "Self Adaptive Hypermedia Navigation Based On Learner Model Characters", IADAT-e2006, 3rd International Conference on Education, Barcelona (Spain), July 12-14, 2006, ISBN: 84-933971-9-9
Adaptive meshes in ecosystem modelling: a way forward?
Popova, E. E.; Ham, D. A.; Srokosz, M. A.; Piggott, M. D.
2009-04-01
The need to resolve physical processes occurring on many different length scales has led to the development of ocean flow models based on unstructured and adaptive meshes. However, thus far, models of biological processes have been based on fixed, structured grids, which lack the ability to dynamically focus resolution on areas of developing small-scale structure. Here we present the initial results of coupling a four-component biological model to the 3D non-hydrostatic, finite element, adaptive grid ocean model ICOM (the Imperial College Ocean Model). Mesh adaptivity automatically resolves fine-scale physical or biological features as they develop, optimising computational cost by reducing resolution where it is not required. Experiments are carried out within the framework of a horizontally uniform water column. The vertical physical processes in the top 500 m are represented by a two-equation turbulence model. The physical model is coupled to a four-component biological model, which includes generic phytoplankton, zooplankton, nitrate and particulate organic matter (detritus). The physical and biological model is set up to represent idealised oligotrophic conditions, typical of subtropical gyres. A stable annual cycle is achieved after a number of years of integration. We compare results obtained on a fully adaptive mesh with ones using a high-resolution static mesh. We assess the computational efficiency of the adaptive approach for modelling of ecosystem processes such as the dynamics of the phytoplankton spring bloom, formation of the subsurface chlorophyll maximum and nutrient supply to the photic zone.
Modeling adaptation of carbon use efficiency in microbial communities
Directory of Open Access Journals (Sweden)
Steven D Allison
2014-10-01
Full Text Available In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e., become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.
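The "adaptation offsets part of the CUE decline" idea above can be made concrete with a toy linear model. The reference CUE, the decline per degree of warming, and the 5 °C warming scenario are all assumed illustrative values, not the paper's fitted parameters.

```python
# Toy CUE-adaptation calculation: CUE declines linearly with warming,
# and `adaptation` in [0, 1] is the fraction of that decline offset by
# acclimation (0.5 corresponds to the 50% offset of the analytical model).
cue_ref, slope, t_ref = 0.31, 0.016, 15.0   # reference CUE, decline per deg C (assumed)

def cue(temp, adaptation=0.0):
    """CUE at temperature temp under a given degree of adaptation."""
    return cue_ref - (1.0 - adaptation) * slope * (temp - t_ref)

warming = 5.0
no_adapt = cue(t_ref + warming)                    # full decline: ~0.23
half_adapt = cue(t_ref + warming, adaptation=0.5)  # 50% offset:   ~0.27
```

Because microbial biomass scales with CUE in these models, the gap between the two values is what determines whether warming drives a biomass (and hence enzyme-production) decline.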
A system identification model for adaptive nonlinear control
Linse, Dennis J.; Stengel, Robert F.
1991-01-01
A system identification model that combines generalized-spline function approximation with a nonlinear control system is described. The complete control system contains three main elements: a nonlinear-inverse-dynamic control law that depends on a comprehensive model of the plant, a state estimator whose outputs drive the control law, and a function approximation scheme that models the system dynamics. The system-identification task, which combines an extended Kalman filter with a function approximator modeled as an artificial neural network, is considered. The results of an application of the identification techniques to a nonlinear transport aircraft model are presented.
Modeling of processes of an adaptive business management
Directory of Open Access Journals (Sweden)
Karev Dmitry Vladimirovich
2011-04-01
Full Text Available On the basis of an analysis of adaptive business management systems, an original version of a real adaptive management system is proposed, whose basis is a dynamic recursive model of cash-flow forecasts and real data. Definitions and a simulation of the scales and intervals of model time in the control system are proposed, as well as observation thresholds and the conditions for changing (correcting) administrative decisions. The process of adaptive management is illustrated on the basis of a business development scenario proposed by the author.
Development of a Multi-Model Ensemble Scheme for the Tropical Cyclone Forecast
Jun, S.; Lee, W. J.; Kang, K.; Shin, D. H.
2015-12-01
A Multi-Model Ensemble (MME) prediction scheme based on model selection and weighting was developed and evaluated for tropical cyclone forecasting. The analyzed tropical cyclone track and intensity data set provided by the Korea Meteorological Administration and 11 numerical model outputs - GDAPS, GEPS, GFS (data resolution: 50 and 100 km), GFES, HWRF, IFS (data resolution: 50 and 100 km), IFS EPS, JGSM, and TEPS - during 2011-2014 were used for this study. The procedure suggested in this study was divided into two stages: a selecting and a weighting process. First, several numerical models were chosen in the selecting stage based on their past performance. Next, weights, referred to as regression coefficients, for each model forecast were calculated in the weighting stage by applying linear and nonlinear regression techniques to past model forecast data. Finally, tropical cyclone forecasts were determined by using both the selected and weighted multi-model values at that forecast time. The preliminary result showed that the selected MME's improvement rate (%) was more than 5% compared with the non-selected MME for the 72 h track forecast.
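The weighting stage described above amounts to fitting regression coefficients on past forecasts. The sketch below uses ordinary least squares on synthetic "past" data (three hypothetical models with assumed biases and noise levels, not the KMA data set) to show why the weighted combination beats any single model on the training sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_models = 60, 3
truth = rng.normal(size=n_cases) * 100 + 500        # e.g. past track positions (km)
# Each synthetic model = truth + its own bias and noise (assumed values):
forecasts = np.stack([truth + b + rng.normal(scale=s, size=n_cases)
                      for b, s in [(20.0, 30.0), (-10.0, 20.0), (5.0, 50.0)]],
                     axis=1)
X = np.column_stack([np.ones(n_cases), forecasts])  # intercept + model terms
weights, *_ = np.linalg.lstsq(X, truth, rcond=None) # linear-regression weights
mme = X @ weights                                   # weighted multi-model forecast
err_mme = np.sqrt(np.mean((mme - truth) ** 2))
err_best = min(np.sqrt(np.mean((forecasts[:, j] - truth) ** 2))
               for j in range(n_models))
```

By construction the least-squares combination can never have a larger training RMSE than any individual model, since using a single model (weight 1, all else 0) is one of the candidates the fit considers; out-of-sample skill, as in the study, is the harder test.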
DEFF Research Database (Denmark)
Primdahl, Jorgen; Vesterager, Jens Peter; Finn, John A.
2010-01-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly...
An adaptation model for trabecular bone at different mechanical levels
Directory of Open Access Journals (Sweden)
Lv Linwei
2010-07-01
Full Text Available Background: Bone has the ability to adapt to mechanical usage or other biophysical stimuli in terms of its mass and architecture, indicating that a certain mechanism exists for monitoring mechanical usage and controlling the bone's adaptation behaviors. There are four zones describing different bone adaptation behaviors: the disuse, adaptation, overload, and pathologic overload zones. In different zones, the changes of bone mass, as calculated by the difference between the amount of bone formed and what is resorbed, should be different. Methods: An adaptation model for the trabecular bone at different mechanical levels was presented in this study based on a number of experimental observations and numerical algorithms in the literature. In the proposed model, the amount of bone formation and the probability of bone remodeling activation were proposed in accordance with the mechanical levels. Seven numerical simulation cases under different mechanical conditions were analyzed as examples by incorporating the adaptation model presented in this paper with the finite element method. Results: The proposed bone adaptation model describes the well-known bone adaptation behaviors in different zones. The bone mass and architecture of the bone tissue within the adaptation zone almost remained unchanged. Although the probability of osteoclastic activation is enhanced in the overload zone, the potential of osteoblasts to form bone compensates for the osteoclastic resorption, eventually strengthening the bones. In the disuse zone, disuse-mode remodeling removes bone tissue. Conclusions: The study seeks to provide better understanding of the relationships between bone morphology and the mechanical, as well as biological environments. Furthermore, this paper provides a computational model and methodology for the numerical simulation of changes of bone structural morphology that are caused by changes of mechanical and biological
Finite-volume scheme for anisotropic diffusion
Energy Technology Data Exchange (ETDEWEB)
Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]
2016-02-01
In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.
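On a Cartesian grid with smooth fields, the anisotropic operator div(K grad T) with a constant tensor K reduces to a compact second-order stencil, which is the simplest point of comparison for the finite-volume schemes discussed above. The sketch below is this simplified constant-tensor analogue (not the paper's scheme), verified on a quadratic field where the stencil is exact.

```python
import numpy as np

def aniso_laplacian(T, kxx, kyy, kxy, h):
    """Second-order stencil for d/dx(kxx*Tx + kxy*Ty) + d/dy(kxy*Tx + kyy*Ty)
    with constant tensor entries on a uniform grid; boundary left untouched."""
    L = np.zeros_like(T)
    Txx = (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / h**2
    Tyy = (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / h**2
    Txy = (T[2:, 2:] - T[2:, :-2] - T[:-2, 2:] + T[:-2, :-2]) / (4 * h**2)
    L[1:-1, 1:-1] = kxx * Txx + 2 * kxy * Txy + kyy * Tyy
    return L

# Verify on T = x^2 + x*y + y^2, where the exact operator value is constant:
n = 32
h = 1.0 / (n - 1)
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
T = X**2 + X * Y + Y**2
L = aniso_laplacian(T, kxx=3.0, kyy=1.0, kxy=0.5, h=h)
# Exact interior value: kxx*2 + 2*kxy*1 + kyy*2 = 6 + 1 + 2 = 9
```

The hard part the paper addresses — extreme anisotropy aligned with curved field lines — arises precisely when K varies in space and is not grid-aligned, where such naive stencils lose accuracy.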
Martin, Nancy
Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…
DEFF Research Database (Denmark)
Avolio, E.; Federico, S.; Miglietta, M.
2017-01-01
The sensitivity of boundary-layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated at an experimental site in the Calabria region (southern... ...the surface, where the model uncertainties are usually smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes, and a possible role of the simulated sensible heat fluxes in this mismatch is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first-order non-local schemes, ACM2 and YSU, are the schemes...
Ambara, M. D.; Gunawan, P. H.
2018-03-01
The impact of a dam-break wave on an erodible embankment with a steep slope has been studied recently using both experimental and numerical approaches. In this paper, the semi-implicit staggered scheme for approximating the shallow water-Exner model is elaborated to describe erodible sediment on a steep slope. This scheme is known as a robust scheme for approximating the shallow water-Exner model. The results are in good agreement with the experimental data. The numerical results for slopes Φ = 59.04 and Φ = 41.42, with Grass-formula coefficients Ag = 2 × 10‑5 and Ag = 10‑5 respectively, are found to be the closest to the experiment. This paper can be seen as an additional validation of the semi-implicit staggered scheme of the paper by Gunawan et al. (2015).
Adapting Dynamic Mathematical Models to a Pilot Anaerobic Digestion Reactor
Directory of Open Access Journals (Sweden)
F. Haugen, R. Bakke, and B. Lie
2013-04-01
Full Text Available A dynamic model has been adapted to a pilot anaerobic reactor fed dairy manure. Both steady-state data from online sensors and laboratory analysis and dynamic operational data from online sensors are used in the model adaptation. The model is based on material balances, and comprises four state variables, namely biodegradable volatile solids, volatile fatty acids, acid generating microbes (acidogens), and methane generating microbes (methanogens). The model can predict the methane gas flow produced in the reactor. The model may be used for optimal reactor design and operation, state-estimation and control. Also, a dynamic model for the reactor temperature based on an energy balance of the liquid in the reactor is adapted. This model may be used for optimization and control when energy and economy are taken into account.
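A four-state material-balance model of the kind described above can be sketched as a set of ODEs. The Monod-type rate laws, every parameter value, the initial state, the VFA-free feed assumption, and the explicit Euler integration below are all generic illustrative assumptions, not the fitted pilot-reactor model.

```python
import numpy as np

def digester_rhs(state, feed_vs, dilution, p):
    """Generic four-state digester material balance: biodegradable volatile
    solids s_vs, volatile fatty acids s_vfa, acidogens x_a, methanogens x_m.
    Feed is assumed to contain volatile solids only (no VFA, no biomass)."""
    s_vs, s_vfa, x_a, x_m = state
    mu_a = p["mu_a_max"] * s_vs / (p["k_vs"] + s_vs)      # acidogen growth rate
    mu_m = p["mu_m_max"] * s_vfa / (p["k_vfa"] + s_vfa)   # methanogen growth rate
    return np.array([
        dilution * (feed_vs - s_vs) - p["y_a"] * mu_a * x_a,
        -dilution * s_vfa + p["y_vfa"] * mu_a * x_a - p["y_m"] * mu_m * x_m,
        (mu_a - p["kd"] - dilution) * x_a,
        (mu_m - p["kd"] - dilution) * x_m,
    ])

p = dict(mu_a_max=0.3, mu_m_max=0.2, k_vs=5.0, k_vfa=3.0,
         y_a=2.0, y_vfa=1.5, y_m=4.0, kd=0.02)            # assumed parameters
state = np.array([10.0, 2.0, 1.0, 0.5])                   # assumed initial state
for _ in range(5000):                                     # explicit Euler, dt = 0.01 d
    state = state + 0.01 * digester_rhs(state, feed_vs=30.0, dilution=0.05, p=p)
# Methane flow is proportional to methanogen activity mu_m * x_m:
methane_rate = p["y_m"] * p["mu_m_max"] * state[1] / (p["k_vfa"] + state[1]) * state[3]
```

The structure (substrate washes in with the feed, each microbial group grows on its substrate and washes out with the dilution rate) is what makes the model usable for the design, state-estimation and control tasks the paper mentions.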
Adaptive Networks Theory, Models and Applications
Gross, Thilo
2009-01-01
With adaptive, complex networks, the evolution of the network topology and the dynamical processes on the network are equally important and often fundamentally entangled. Recent research has shown that such networks can exhibit a plethora of new phenomena which are ultimately required to describe many real-world networks. Some of those phenomena include robust self-organization towards dynamical criticality, formation of complex global topologies based on simple, local rules, and the spontaneous division of "labor" in which an initially homogeneous population of network nodes self-organizes into functionally distinct classes. These are just a few. This book is a state-of-the-art survey of those unique networks. In it, leading researchers set out to define the future scope and direction of some of the most advanced developments in the vast field of complex network science and its applications.
Modeling Students' Memory for Application in Adaptive Educational Systems
Pelánek, Radek
2015-01-01
Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…
Directory of Open Access Journals (Sweden)
U.N. Band
Full Text Available A transition element is developed for the local-global analysis of laminated composite beams. It bridges one part of the domain modelled with a higher order theory and another part modelled with a 2D mixed layerwise theory (LWT) used at the critical zone of the domain. The use of the developed transition element makes the analysis for interlaminar stresses possible with significant accuracy. The mixed 2D model incorporates the transverse normal and shear stresses as nodal degrees of freedom (DOF), which inherently ensures continuity of these stresses. Non-critical zones are modelled with a higher order equivalent single layer (ESL) theory, leading to a global mesh with multiple models applied simultaneously. Use of the higher order ESL in non-critical zones reduces the total number of elements required to map the domain. A substantial reduction in DOF as compared to a complete 2D mixed model is obvious. This computationally economical multiple-modelling scheme using the transition element is applied to static and free vibration analyses of laminated composite beams. Results obtained are in good agreement with benchmarks available in the literature.
Zhang, Yong; Meerschaert, Mark M.; Baeumer, Boris; LaBolle, Eric M.
2015-08-01
This study develops an explicit two-step Lagrangian scheme based on the renewal-reward process to capture transient anomalous diffusion with mixed retention and early arrivals in multidimensional media. The resulting 3-D anomalous transport simulator provides a flexible platform for modeling transport. The first step explicitly models retention due to mass exchange between one mobile zone and any number of parallel immobile zones. The mobile component of the renewal process can be calculated as either an exponential random variable or a preassigned time step, and the subsequent random immobile time follows a Hyper-exponential distribution for finite immobile zones or a tempered stable distribution for infinite immobile zones with an exponentially tempered power-law memory function. The second step describes well-documented early arrivals which can follow streamlines due to mechanical dispersion using the method of subordination to regional flow. Applicability and implementation of the Lagrangian solver are further checked against transport observed in various media. Results show that, although the time-nonlocal model parameters are predictable for transport with retention in alluvial settings, the standard time-nonlocal model cannot capture early arrivals. Retention and early arrivals observed in porous and fractured media can be efficiently modeled by our Lagrangian solver, allowing anomalous transport to be incorporated into 2-D/3-D models with irregular flow fields. Extensions of the particle-tracking approach are also discussed for transport with parameters conditioned on local aquifer properties, as required by transient flow and nonstationary media.
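The two-step renewal idea above — an exponential mobile period followed by a random immobile period — can be sketched as a 1-D particle tracker. The velocity, dispersion coefficient, exchange rates, and the two-zone hyper-exponential waiting time below are illustrative assumptions, and the subordination step for early arrivals is omitted, so this is a retention-only analogue of the full 3-D simulator.

```python
import numpy as np

rng = np.random.default_rng(42)
v, D = 1.0, 0.1                   # mean velocity, dispersion coefficient (assumed)
lam_mobile = 2.0                  # exit rate of the mobile zone
p_zone = np.array([0.7, 0.3])     # probabilities of the two immobile zones
lam_im = np.array([5.0, 0.5])     # exit rates of the immobile zones

def track(t_end):
    """Advance one particle through mobile/immobile cycles; return x(t_end)."""
    t = x = 0.0
    while True:
        dt = rng.exponential(1.0 / lam_mobile)            # mobile duration
        if t + dt >= t_end:                               # clock runs out mobile
            dt = t_end - t
            return x + v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt))
        x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt))
        t += dt
        zone = rng.choice(2, p=p_zone)                    # hyper-exponential wait
        t += rng.exponential(1.0 / lam_im[zone])          # immobile duration
        if t >= t_end:
            return x                                      # clock runs out trapped

positions = np.array([track(10.0) for _ in range(2000)])
# Retention retards the plume: the mean position is well below v * t = 10.
```

Replacing the finite-rate immobile zones by a tempered power-law waiting time, as in the paper, would turn this into the infinite-immobile-zone (tempered stable) variant.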
Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies
Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger
2018-02-01
Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
Full Text Available The Saint-Venant (shallow water) equations are used to simulate river flow, liquid flow in open channels, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), and hence the Saint-Venant equations. The KP scheme is semi-discrete: the PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended into a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic-type PDEs. The simulation results indicate that the 3rd order KP scheme shows somewhat better stability than the 2nd order scheme. Computational time for the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is less than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme shows a more accurate solution.
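The 2nd order semi-discrete central-upwind (KP-type) construction — limited linear reconstruction plus a flux built from one-sided local speeds — can be shown on a scalar stand-in. The sketch below applies it to Burgers' equation u_t + (u²/2)_x = 0 on a periodic domain with forward Euler time stepping; the full river model uses the Saint-Venant system and higher-order ODE solvers, so every choice here is an illustrative simplification.

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(u, dx):
    """Semi-discrete central-upwind right-hand side for Burgers, periodic."""
    f = lambda q: 0.5 * q * q                           # flux, f'(u) = u
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    um = u + 0.5 * s                                    # left state at i+1/2
    up = np.roll(u - 0.5 * s, -1)                       # right state at i+1/2
    ap = np.maximum(np.maximum(um, up), 0.0)            # one-sided local speeds
    am = np.minimum(np.minimum(um, up), 0.0)
    H = (ap * f(um) - am * f(up) + ap * am * (up - um)) / (ap - am + 1e-12)
    return -(H - np.roll(H, 1)) / dx

N = 200
dx, dt = 1.0 / N, 5e-4                                  # CFL ~ 0.25 for these data
x = (np.arange(N) + 0.5) * dx
u = 1.5 + np.sin(2 * np.pi * x)                         # smooth periodic data
for _ in range(200):                                    # forward Euler (sketch only)
    u = u + dt * rhs(u, dx)
```

Because the flux differences telescope over the periodic domain, the scheme is conservative: the mean of u is preserved to rounding error even as the profile steepens.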
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
Numerical schemes for explosion hazards
International Nuclear Information System (INIS)
Therme, Nicolas
2015-01-01
In nuclear facilities, internal or external explosions can cause confinement breaches and radioactive materials release in the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High order methods of MUSCL type are used in the discrete convective operators, based solely on material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so
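The MUSCL-type limited reconstruction mentioned above can be illustrated with a generic minmod limiter in 1D. This is a sketch under that assumption; the thesis's specific limiter and staggered formulation may differ.

```python
def minmod(a, b):
    """Minmod limiter: zero at extrema, else the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_face_states(u):
    """Limited linear reconstruction of left/right states at the faces
    between cells i and i+1 (interior faces only, 1D, unit cell width)."""
    left, right = [], []
    for i in range(1, len(u) - 2):
        slope_i  = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        slope_i1 = minmod(u[i + 1] - u[i], u[i + 2] - u[i + 1])
        left.append(u[i] + 0.5 * slope_i)        # state left of face i+1/2
        right.append(u[i + 1] - 0.5 * slope_i1)  # state right of face i+1/2
    return left, right

# On smooth (linear) data the reconstruction is second-order exact;
# across a jump the limiter clips the slopes so no overshoot appears.
smooth_l, smooth_r = muscl_face_states([0.0, 1.0, 2.0, 3.0, 4.0])
jump_l, jump_r = muscl_face_states([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
```

The slope-clipping behaviour is what gives the positivity of density and internal energy under CFL conditions claimed in the abstract: the reconstructed face states never leave the range of the neighbouring cell averages.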
Directory of Open Access Journals (Sweden)
Klare Ingo
2007-04-01
Full Text Available Abstract Background MLVA (multiple-locus variable-number tandem repeat analysis) is a reliable typing technique introduced recently to also differentiate isolates of Enterococcus faecium. We used the established VNTR (variable number of tandem repeats) scheme to test its suitability to differentiate 58 E. faecium isolates representing mainly outbreaks and clusters of infections and colonizations among patients from 31 German hospitals. All isolates were vancomycin-resistant (vanA type). Typing results for MLVA are compared with results of macrorestriction analysis in PFGE (pulsed-field gel electrophoresis) and MLST (multi-locus sequence typing). Results All but one of the 51 hospital isolates from 1996–2006 were assigned to the clonal complex (CC) of epidemic-virulent, hospital-adapted lineages (MLST CC-17; MLVA CC-1) and differed from isolates of sporadic infections and colonizations (n = 7; 1991–1995) and other non-hospital origins (n = 27). Typing of all 58 hospital VRE revealed MLVA as the least discriminatory method (Simpson's diversity index 0.847) when compared to MLST (0.911) and PFGE (0.976). The two most common MLVA types, MT-1 (n = 16) and MT-159 (n = 14), combined isolates of several MLST types including also major epidemic, hospital-adapted, clonal types (MT-1: ST-17, ST-18, ST-280, ST-282; MT-159: ST-78, ST-192, ST-203). These data clearly indicate that non-related E. faecium could possess an identical MLVA type, which is especially critical when MLVA is used to elucidate supposed outbreaks with E. faecium within a single or among different hospitals. Stability of a given MLVA profile MT-12 (ST-117) during an outbreak over a period of five years was also shown. Conclusion MLVA is a suitable method to assign isolates of E. faecium into distinct clonal complexes. To investigate outbreaks the current MLVA typing scheme for E. faecium does not discriminate enough and cannot be recommended as a standard superior to PFGE.
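The discriminatory-power figures quoted above (0.847 for MLVA, 0.911 for MLST, 0.976 for PFGE) are Simpson's indices of diversity. A sketch of the standard Hunter-Gaston computation follows; the example counts are hypothetical, not the study's data.

```python
def simpsons_diversity(type_counts):
    """Simpson's index of diversity (Hunter-Gaston form): the probability
    that two isolates drawn at random belong to different types."""
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Hypothetical example: 10 isolates resolved into types of sizes 5, 3 and 2.
d = simpsons_diversity([5, 3, 2])
```

A method that puts every isolate in its own type scores 1.0; one that lumps all isolates into a single type scores 0.0, which is why a lower index (as for MLVA here) means less discrimination.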
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce the maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is firstly adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, Laplacian score (LS) approach is employed to refine the fault features. In the end, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
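The MHPE features above build on permutation entropy. The plain (non-hierarchical, unmodified) version can be sketched as follows; the order and delay values are illustrative defaults, not the paper's settings.

```python
from math import log, factorial

def permutation_entropy(signal, order=3, delay=1):
    """Normalised permutation entropy: Shannon entropy of the distribution
    of ordinal patterns in the signal, scaled to [0, 1] by log(order!)."""
    counts = {}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i : i + order * delay : delay]
        # Ordinal pattern: the ranking of the samples within the window.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(order))

# A monotone signal has a single ordinal pattern (entropy 0); an alternating
# signal mixes two patterns and sits strictly between 0 and 1.
pe_ramp = permutation_entropy(list(range(50)))
pe_alt = permutation_entropy([0, 1] * 25)
```

Fault-related impulses in a vibration signal change the distribution of ordinal patterns, which is why entropy-based features of this kind separate health conditions.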
Modeling the mechanics of HMX detonation using a Taylor–Galerkin scheme
Directory of Open Access Journals (Sweden)
Adam V. Duran
2016-05-01
Full Text Available Design of energetic materials is an exciting area in mechanics and materials science. Energetic composite materials are used as propellants, explosives, and fuel cell components. Energy release in these materials are accompanied by extreme events: shock waves travel at typical speeds of several thousand meters per second and the peak pressures can reach hundreds of gigapascals. In this paper, we develop a reactive dynamics code for modeling detonation wave features in one such material. The key contribution in this paper is an integrated algorithm to incorporate equations of state, Arrhenius kinetics, and mixing rules for particle detonation in a Taylor–Galerkin finite element simulation. We show that the scheme captures the distinct features of detonation waves, and the detonation velocity compares well with experiments reported in literature.
Incorporation of UK Met Office's radiation scheme into CPTEC's global model
Chagas, Júlio C. S.; Barbosa, Henrique M. J.
2009-03-01
The current parameterization of radiation in the operational AGCM of CPTEC (Center for Weather Forecast and Climate Studies, Cachoeira Paulista, SP, Brazil) has its origins in the work of Harshvardhan et al. (1987) and uses the formulation of Ramaswamy and Freidenreich (1992) for the short-wave absorption by water vapor. The UK Met Office's radiation code (Edwards and Slingo, 1996) was incorporated into CPTEC's global model, initially for short-wave only, and some impacts of that were shown by Chagas and Barbosa (2006). The current paper presents some impacts of the complete incorporation (both short-wave and long-wave) of the UK Met Office's scheme. Selected results from off-line comparisons with line-by-line benchmark calculations are shown. Impacts on the AGCM's climate are assessed by comparing output of climate runs of the current and modified AGCM with products from the GEWEX/SRB (Surface Radiation Budget) project.
Investigation of thermalization in giant-spin models by different Lindblad schemes
Energy Technology Data Exchange (ETDEWEB)
Beckmann, Christian; Schnack, Jürgen, E-mail: jschnack@uni-bielefeld.de
2017-09-01
Highlights: • The non-equilibrium magnetization is investigated with quantum master equations that rest on Lindblad schemes. • It is studied how different couplings to the bath modify the magnetization. • Various field protocols are employed; relaxation times are deduced. • Result: the time evolution depends strongly on the details of the transition operator used in the Lindblad term. - Abstract: The theoretical understanding of time-dependence in magnetic quantum systems is of great importance, in particular for cases where a unitary time evolution is accompanied by relaxation processes. A key example is given by the dynamics of single-molecule magnets, where quantum tunneling of the magnetization competes with thermal relaxation over the anisotropy barrier. In this article we investigate how well a Lindblad approach describes the relaxation in giant-spin models and how the result depends on the employed operator that transmits the action of the thermal bath.
Hyperbolic reformulation of a 1D viscoelastic blood flow model and ADER finite volume schemes
International Nuclear Information System (INIS)
Montecinos, Gino I.; Müller, Lucas O.; Toro, Eleuterio F.
2014-01-01
The applicability of ADER finite volume methods to solve hyperbolic balance laws with stiff source terms in the context of well-balanced and non-conservative schemes is extended to solve a one-dimensional blood flow model for viscoelastic vessels, reformulated as a hyperbolic system, via a relaxation time. A criterion for selecting relaxation times is found and an empirical convergence rate assessment is carried out to support this result. The proposed methodology is validated by applying it to a network of viscoelastic vessels for which experimental and numerical results are available. The agreement between the results obtained in the present paper and those available in the literature is satisfactory. Key features of the present formulation and numerical methodologies, such as accuracy, efficiency and robustness, are fully discussed in the paper.
An implicit turbulence model for low-Mach Roe scheme using truncated Navier-Stokes equations
Li, Chung-Gang; Tsubokura, Makoto
2017-09-01
The original Roe scheme is well known to be unsuitable for simulations of turbulence because the dissipation that develops is unsatisfactory. Simulations of turbulent channel flow for Reτ = 180 show that, with the 'low-Mach-fix for Roe' (LMRoe) proposed by Rieper [J. Comput. Phys. 230 (2011) 5263-5287], the Roe dissipation term potentially equates the simulation to an implicit large eddy simulation (ILES) at low Mach number. Thus inspired, a new implicit turbulence model for low Mach numbers is proposed that controls the Roe dissipation term appropriately. Referred to as the automatic dissipation adjustment (ADA) model, the method of solution follows procedures developed previously for the truncated Navier-Stokes (TNS) equations and, without tuning of parameters, uses the energy ratio as a criterion to automatically adjust the upwind dissipation. Simulations of turbulent channel flow at two different Reynolds numbers and of the Taylor-Green vortex were performed to validate the ADA model. In simulations of turbulent channel flow for Reτ = 180 at a Mach number of 0.05 using the ADA model, the mean velocity and turbulence intensities are in excellent agreement with DNS results. With Reτ = 950 at a Mach number of 0.1, the result is also consistent with DNS results, indicating that the ADA model is also reliable at higher Reynolds numbers. In simulations of the Taylor-Green vortex at Re = 3000, the kinetic energy is consistent with the power law of decaying turbulence with an exponent of -1.2 for LMRoe both with and without the ADA model. However, with the ADA model, the dissipation rate is significantly improved near the dissipation peak region and the peak duration is also captured more accurately. With a firm basis in TNS theory, applicability at higher Reynolds numbers, and ease of implementation as no extra terms are needed, the ADA model promises to become a useful tool for turbulence modeling.
Modeling Power Systems as Complex Adaptive Systems
Energy Technology Data Exchange (ETDEWEB)
Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.
2004-12-30
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.
Model-based design of adaptive embedded systems
Hamberg, Roelof; Reckers, Frans; Verriet, Jacques
2013-01-01
Today’s embedded systems have to operate in a wide variety of dynamically changing environmental circumstances. Adaptivity, the ability of a system to autonomously adapt itself, is a means to optimise a system’s behaviour to accommodate changes in its environment. It involves making in-product trade-offs between system qualities at system level. The main challenge in the development of adaptive systems is keeping control of the intrinsic complexity of such systems while working with multi-disciplinary teams to create different parts of the system. Model-Based Development of Adaptive Embedded Systems focuses on the development of adaptive embedded systems both from an architectural and methodological point of view. It describes architectural solution patterns for adaptive systems and state-of-the-art model-based methods and techniques to support adaptive system development. In particular, the book describes the outcome of the Octopus project, a cooperation of a multi-disciplinary team of academic and indus...
Adaptive Modeling and Real-Time Simulation
1984-01-01
Artificial Intelligence, Vol. 13, pp. 27-39 (1980). Describes circumscription, which is just the assumption that everything that is known to have a particular... Keywords: Artificial Intelligence, Truth Maintenance, Planning, Resolution, Modeling, World Models. Abstract: ...represents a marriage of (1) the procedural-network planning technology developed in artificial intelligence with (2) the PERT/CPM technology developed in
Günther, T; Büttner, C; Käsbohrer, A; Filter, M
2015-01-01
Mathematical models on properties and behavior of harmful organisms in the food chain are an increasingly relevant approach of the agriculture and food industry. As a consequence, there are many efforts to develop biological models in science, economics and risk assessment nowadays. However, there is a lack of international harmonized standards on model annotation and model formats, which would be necessary to set up efficient tools supporting broad model application and information exchange. There are some established standards in the field of systems biology, but there is currently no corresponding provision in the area of plant protection. This work therefore aimed at the development of an annotation scheme using domain-specific metadata. The proposed scheme has been validated in a prototype implementation of a web-database model repository. This prototypic community resource currently contains models on the aflatoxin-secreting fungus Aspergillus flavus in maize, as these models have a high relevance to food safety and economic impact. Specifically, models describing biological processes of the fungus (growth, aflatoxin secretion), as well as dose-response and carry-over models, were included. Furthermore, phenological models for maize were integrated as well. The developed annotation scheme is based on the well-established data exchange format SBML, which is broadly applied in the field of systems biology. The identified example models were annotated according to the developed scheme and entered into a Web-table (Google Sheets), which was transferred to a web-based demonstrator available at https://sites.google.com/site/test782726372685/. By implementation of a software demonstrator it became clear that the proposed annotation scheme can be applied to models on plant pathogens and that broad adoption within the domain could promote communication and application of mathematical models.
Modelling tools for managing Induced RiverBank Filtration MAR schemes
De Filippis, Giovanna; Barbagli, Alessio; Marchina, Chiara; Borsi, Iacopo; Mazzanti, Giorgio; Nardi, Marco; Vienken, Thomas; Bonari, Enrico; Rossetto, Rudy
2017-04-01
Induced RiverBank Filtration (IRBF) is a widely used technique in Managed Aquifer Recharge (MAR) schemes, when aquifers are hydraulically connected with surface water bodies, with proven positive effects on the quality and quantity of groundwater. IRBF allows abstraction of a large volume of water while avoiding a large decrease in groundwater heads. Moreover, thanks to the filtration process through the soil, the concentration of chemical species in surface water can be reduced, thus becoming an excellent resource for the production of drinking water. Within the FP7 MARSOL project (demonstrating Managed Aquifer Recharge as a SOLution to water scarcity and drought; http://www.marsol.eu/), the Sant'Alessio IRBF (Lucca, Italy) was used to demonstrate the feasibility and the technical and economic benefits of managing IRBF schemes (Rossetto et al., 2015a). The Sant'Alessio IRBF along the Serchio river allows abstraction of an overall amount of about 0.5 m3/s, providing drinking water for 300,000 people of coastal Tuscany (mainly the towns of Lucca, Pisa and Livorno). The supplied water is made available by enhancing river bank infiltration into a high-yield (10^-2 m2/s transmissivity) sandy-gravelly aquifer by raising the river head and using ten vertical wells along the river embankment. A Decision Support System, consisting of connected measurements from an advanced monitoring network and modelling tools, was set up to manage the IRBF. The modelling system is based on spatially distributed and physically based coupled ground-/surface-water flow and solute transport models integrated in the FREEWAT platform (developed within the H2020 FREEWAT project - FREE and Open Source Software Tools for WATer Resource Management; Rossetto et al., 2015b), an open source and public domain GIS-integrated modelling environment for the simulation of the hydrological cycle. The platform aims at improving water resource management by simplifying the application of EU water-related Directives and at
Support of surgical process modeling by using adaptable software user interfaces
Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.
2010-03-01
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical Process Models are used in a variety of use cases, including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling work by using an ontological knowledge base. This is used to drive the graphical user interface for the observer, restricting the search space of terminology depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, and it was shown that the adaptive user interface significantly reduces the observer's workload in the criteria of effort, mental demand and temporal demand, helping the observer to concentrate on the essential task of modeling the surgical process.
Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.
Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-05-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing; or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity-penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.
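The baseline L+S idea that LASSI extends can be sketched with a simple alternating scheme: singular-value thresholding for the low-rank part and entrywise soft-thresholding for the sparse part. This is an illustration with arbitrary thresholds, not the paper's LASSI algorithm (which additionally learns an adaptive dictionary for the sparse component).

```python
import numpy as np

def l_plus_s(M, lam=0.5, mu=0.5, n_iter=50):
    """Alternating-minimisation sketch of the L+S model: M ~ L + S with a
    low-rank L (singular-value thresholding, threshold mu) and a sparse S
    (entrywise soft-thresholding, threshold lam)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: shrink the singular values of the residual M - S.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - mu, 0.0)) @ Vt
        # Sparse update: soft-threshold the residual M - L entrywise.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Toy data: a rank-one "background" plus two isolated spikes.
rng = np.random.default_rng(0)
background = rng.normal(size=(8, 1)) @ rng.normal(size=(1, 10))
spikes = np.zeros((8, 10))
spikes[2, 3], spikes[6, 7] = 5.0, -4.0
L, S = l_plus_s(background + spikes)
```

In dynamic MRI the columns of M would be vectorised frames, L captures the slowly varying background, and S the dynamic content; LASSI replaces the fixed sparsifying transform for S with patch-wise learned dictionaries.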
Zeroual, Abdelhafid
2017-08-19
Monitoring vehicle traffic flow plays a central role in enhancing traffic management, transportation safety and cost savings. In this paper, we propose an innovative approach for the detection of traffic congestion. Specifically, we combine the flexibility and simplicity of a piecewise switched linear (PWSL) macroscopic traffic model and the greater capacity of the exponentially-weighted moving average (EWMA) monitoring chart. Macroscopic models, which have few, easily calibrated parameters, are employed to describe free traffic flow at the macroscopic level. Then, we apply the EWMA monitoring chart to the uncorrelated residuals obtained from the constructed PWSL model to detect congested situations. In this strategy, wavelet-based multiscale filtering of the data is used before the application of the EWMA scheme to further improve the robustness of this method to measurement noise and to reduce false alarms due to modeling errors. The performance of the PWSL-EWMA approach is successfully tested on traffic data from a three-lane portion of the Interstate 210 (I-210) highway in the west of California and a four-lane portion of the State Route 60 (SR60) highway in the east of California, provided by the Caltrans Performance Measurement System (PeMS). Results show the ability of the PWSL-EWMA approach to monitor vehicle traffic, confirming the promising application of this statistical tool to the supervision of traffic flow congestion.
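An EWMA monitoring chart on model residuals, the detection step described above, can be sketched as follows. The smoothing weight, control-limit width and residual values are illustrative design choices, not the paper's calibrated parameters.

```python
def ewma_alarms(residuals, lam=0.2, L=3.0, sigma=1.0):
    """EWMA control chart on model residuals: returns one True/False alarm
    flag per sample, using the exact time-varying control limits."""
    z, alarms = 0.0, []
    for t, r in enumerate(residuals, start=1):
        z = lam * r + (1.0 - lam) * z           # smoothed residual
        # Variance of the EWMA statistic at time t (in-control, i.i.d. noise).
        var_z = sigma**2 * lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))
        alarms.append(abs(z) > L * var_z**0.5)
    return alarms

# In-control residuals stay inside the limits; a sustained mean shift
# (e.g. congestion onset) drives the EWMA statistic across them.
quiet = [0.1, -0.2, 0.15, -0.1, 0.05, -0.15] * 5
alarms_quiet = ewma_alarms(quiet)
alarms_shift = ewma_alarms(quiet + [2.5] * 10)
```

Because the EWMA statistic accumulates small, persistent deviations, it flags the kind of sustained residual shift a congestion event produces while ignoring isolated noise spikes.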
Statistical Models of Adaptive Immune populations
Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry
The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.
Adaptive PID and Model Reference Adaptive Control Switch Controller for Nonlinear Hydraulic Actuator
Directory of Open Access Journals (Sweden)
Xin Zuo
2017-01-01
Full Text Available Nonlinear systems are modeled as piecewise linear systems at multiple operating points, where the operating points are modeled as switches between constituent linearized systems. In this paper, an adaptive piecewise linear switch controller is proposed for improving the response time and tracking performance of the hydraulic actuator control system, which is essentially piecewise linear. The controller, composed of PID and Model Reference Adaptive Control (MRAC) components, adaptively chooses the proportion of these two components, giving the designed system a faster response time in the transient phase and better tracking performance at the same time. Their stability and tracking performance are then analyzed and evaluated on the hydraulic actuator control system; the hydraulic actuator is controlled by the electrohydraulic system, and a model of it, which has a piecewise linear characteristic, is built. The results of the PID and MRAC controllers are then compared, and the switch controller designed in this paper is applied to the hydraulic actuator; it is evident that the adaptive switch controller improves both the response time and the tracking performance.
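The PID half of the switch scheme is the textbook controller below. This is a sketch only: the gains, time step and first-order stand-in plant are illustrative, not the paper's electrohydraulic actuator model.

```python
class PID:
    """Discrete PID controller with rectangular integration and a
    backward-difference derivative term."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a hypothetical first-order plant dx/dt = -x + u to the setpoint 1.0.
dt, x = 0.01, 0.0
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
for _ in range(2000):
    u = pid.step(1.0 - x)
    x += (-x + u) * dt   # explicit Euler step of the plant
```

In the paper's scheme this fixed-gain loop handles one operating regime while MRAC adapts across regimes; the switch logic blends the two according to the current operating point.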
Switching Adaptability in Human-Inspired Sidesteps: A Minimal Model
Directory of Open Access Journals (Sweden)
Keisuke Fujii
2017-06-01
Full Text Available Humans can adapt to abruptly changing situations by coordinating redundant components, even in bipedality. Conventional adaptability has been reproduced by various computational approaches, such as optimal control, neural oscillator, and reinforcement learning; however, the adaptability in bipedal locomotion necessary for biological and social activities, such as unpredicted direction change in chase-and-escape, is unknown due to the dynamically unstable multi-link closed-loop system. Here we propose a switching adaptation model for performing bipedal locomotion by improving autonomous distributed control, where autonomous actuators interact without central control and switch the roles for propulsion, balancing, and leg swing. Our switching mobility model achieved direction change at any time using only three actuators, although it showed higher motor costs than comparable models without direction change. Our method of evaluating such adaptation at any time should be utilized as a prerequisite for understanding universal motor control. The proposed algorithm may simply explain and predict the adaptation mechanism in human bipedality to coordinate the actuator functions within and between limbs.
International Nuclear Information System (INIS)
Nishi, Tamaki; Nishimura, Yasumasa; Shibata, Toru; Tamura, Masaya; Nishigaito, Naohiro; Okumura, Masahiko
2013-01-01
Purpose: The aim of this study was to show the benefit of a two-step intensity modulated radiotherapy (IMRT) method by examining geometric and dosimetric changes. Material and Methods: Twenty patients with pharyngeal cancers treated with two-step IMRT combined with chemotherapy were included. Treatment-planning CT was done twice: before IMRT (CT-1) and at the third or fourth week of IMRT for boost IMRT (CT-2). Transferred plans, i.e. the initial plans recalculated on CT-2, were compared with the initial plans on CT-1. Dose parameters were calculated for a total dose of 70 Gy for each plan. Results: The volumes of primary tumors and parotid glands on CT-2 regressed significantly. Parotid glands shifted medially an average of 4.2 mm on CT-2. The mean doses of the parotid glands in the initial and transferred plans were 25.2 Gy and 30.5 Gy, respectively. D2 (dose to 2% of the volume) doses of the spinal cord were 37.1 Gy and 39.2 Gy per 70 Gy, respectively. Of 15 patients in whom xerostomia scores could be evaluated 1–2 years after IMRT, no patient complained of grade 2 or more xerostomia. Conclusions: This two-step IMRT method as an adaptive RT scheme could adapt to changes in body contour, target volumes and risk organs during IMRT
Yasunari, Teppei
2012-01-01
Recently the issue of glacier retreat has come up, and many factors may be relevant to it. Absorbing aerosols such as dust and black carbon (BC) are considered to be one of these factors. After they are deposited onto the snow surface, they reduce the snow albedo (the so-called snow darkening effect) and probably contribute to further melting of glaciers. The Goddard Earth Observing System version 5 (GEOS-5) was developed at NASA/GSFC. However, the original snowpack model used in the land surface model of GEOS-5 did not consider the snow darkening effect. Here we developed a new snow albedo scheme which can consider the snow darkening effect. In addition, another scheme for calculating the mass concentrations of absorbing aerosols in snowpack was also developed, in which the direct aerosol depositions from the chemical transport model in GEOS-5 were used. The scheme has been validated with observed data obtained at the backyard of the Institute of Low Temperature Science, Hokkaido University, by Dr. Teruo Aoki (Meteorological Research Institute) et al., including me. The observed data were obtained when I was a Ph.D. candidate. The original GEOS-5 during 2007-2009 over the Himalayas and Tibetan Plateau region showed greater reductions of snow than the new GEOS-5 because the original one used lower albedo settings. On snow cover fraction, the new GEOS-5 simulated a more realistic snow-covered area compared to the MODIS snow cover fraction. Reductions in snow albedo, snow cover fraction, and snow water equivalent were seen with statistical significance when the snow darkening effect was considered, compared to the results without it. In the real world, debris cover, internal refreezing processes, surface flow of glaciers, etc. affect glacier mass balance, and the simulated results do not immediately translate to whole-glacier retreat. However, our results indicate that some surface melting over non-debris-covered parts of the glacier would be
Iyer, Subramaniam
2017-01-01
Among the systems in place in different countries for the protection of the population against the long-term contingencies of old-age (or retirement), disability and death (or survivorship), defined-benefit social security pension schemes, i.e. social insurance pension schemes, by far predominate, despite the recent trend towards defined-contribution arrangements in social security reforms. Actuarial valuations of these schemes, unlike other branches of insurance, continue to be carried out a...
DEFF Research Database (Denmark)
Yang, Z.; Izadi-Zamanabadi, Roozbeh; Blanke, M.
2000-01-01
Based on the model-matching strategy, an adaptive control reconfiguration method for a class of nonlinear control systems is proposed by using the multiple-model scheme. Instead of requiring the nominal and faulty nonlinear systems to match each other directly in some proper sense, three sets of LTI models are employed to approximate the faulty, reconfigured and nominal nonlinear systems respectively with respect to the on-line information of the operating system, and a set of compensating modules are proposed and designed so as to make the local LTI model approximating to the reconfigured
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Adaptation dynamics of the quasispecies model
Indian Academy of Sciences (India)
Centre for Advanced Scientific Research, Jakkur P.O., Bangalore 560 064, India ... lation can increase with time in either a smooth continuous manner [2] or sudden ... 2. Quasispecies model and its steady state. We consider an infinitely large population reproducing asexually via the elementary processes of selection and ...
Directory of Open Access Journals (Sweden)
Mehdi Hussain
2016-05-01
Full Text Available The goal of image steganographic methods considers three main key issues: high embedding capacity, good visual symmetry/quality, and security. In this paper, a hybrid data hiding method combining the right-most digit replacement (RMDR with an adaptive least significant bit (ALSB is proposed to provide not only high embedding capacity but also maintain a good visual symmetry. The cover-image is divided into lower texture (symmetry patterns and higher texture (asymmetry patterns areas and these textures determine the selection of RMDR and ALSB methods, respectively, according to pixel symmetry. This paper has three major contributions. First, the proposed hybrid method enhanced the embedding capacity due to efficient ALSB utilization in the higher texture areas of cover images. Second, the proposed hybrid method maintains the high visual quality because RMDR has the closest selection process to generate the symmetry between stego and cover pixels. Finally, the proposed hybrid method is secure against statistical regular or singular (RS steganalysis and pixel difference histogram steganalysis because RMDR is capable of evading the risk of RS detection attacks due to pixel digits replacement instead of bits. Extensive experimental tests (over 1500+ cover images are conducted with recent least significant bit (LSB-based hybrid methods and it is demonstrated that the proposed hybrid method has a high embedding capacity (800,019 bits while maintaining good visual symmetry (39.00% peak signal-to-noise ratio (PSNR.
Yarrow, Maurice; VanderWijngaart, Rob; Kutler, Paul (Technical Monitor)
1997-01-01
The first release of the MPI version of the LU NAS Parallel Benchmark (NPB2.0) performed poorly compared to its companion NPB2.0 codes. The later LU release (NPB2.1 & 2.2) runs up to two and a half times faster, thanks to a revised point access scheme and related communications scheme. The new scheme sends substantially fewer messages. is cache "friendly", and has a better load balance. We detail the, observations and modifications that resulted in this efficiency improvement, and show that the poor behavior of the original code resulted from deriving a message passing scheme from an algorithm originally devised for a vector architecture.
Neuro- PI controller based model reference adaptive control for ...
African Journals Online (AJOL)
The control input to the plant is given by the sum of the output of conventional MRAC and the output of NN. The proposed Neural Network -based Model Reference Adaptive Controller (NN-MRAC) can significantly improve the system behavior and force the system to follow the reference model and minimize the error ...
The behavior of adaptive bone-remodeling simulation models
H.H. Weinans (Harrie); R. Huiskes (Rik); H.J. Grootenboer
1992-01-01
textabstractThe process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule
Directory of Open Access Journals (Sweden)
C. Bommaraju
2005-01-01
Full Text Available Numerical methods are extremely useful in solving real-life problems with complex materials and geometries. However, numerical methods in the time domain suffer from artificial numerical dispersion. Standard numerical techniques which are second-order in space and time, like the conventional Finite Difference 3-point (FD3 method, Finite-Difference Time-Domain (FDTD method, and Finite Integration Technique (FIT provide estimates of the error of discretized numerical operators rather than the error of the numerical solutions computed using these operators. Here optimally accurate time-domain FD operators which are second-order in time as well as in space are derived. Optimal accuracy means the greatest attainable accuracy for a particular type of scheme, e.g., second-order FD, for some particular grid spacing. The modified operators lead to an implicit scheme. Using the first order Born approximation, this implicit scheme is transformed into a two step explicit scheme, namely predictor-corrector scheme. The stability condition (maximum time step for a given spatial grid interval for the various modified schemes is roughly equal to that for the corresponding conventional scheme. The modified FD scheme (FDM attains reduction of numerical dispersion almost by a factor of 40 in 1-D case, compared to the FD3, FDTD, and FIT. The CPU time for the FDM scheme is twice of that required by the FD3 method. The simulated synthetic data for a 2-D P-SV (elastodynamics problem computed using the modified scheme are 30 times more accurate than synthetics computed using a conventional scheme, at a cost of only 3.5 times as much CPU time. The FDM is of particular interest in the modeling of large scale (spatial dimension is more or equal to one thousand wave lengths or observation time interval is very high compared to reference time step wave propagation and scattering problems, for instance, in ultrasonic antenna and synthetic scattering data modeling for Non
Maity, S.; Satyanarayana, A. N. V.; Mandal, M.; Nayak, S.
2017-11-01
In this study, an attempt has been made to investigate the sensitivity of land surface models (LSM) and cumulus convection schemes (CCS) using a regional climate model, RegCM Version-4.1 in simulating the Indian Summer Monsoon (ISM). Numerical experiments were conducted in seasonal scale (May-September) for three consecutive years: 2007, 2008, 2009 with two LSMs (Biosphere Atmosphere Transfer Scheme (BATS), Community Land Model (CLM 3.5) and five CCSs (MIT, KUO, GRELL, GRELL over land and MIT over ocean (GL_MO), GRELL over ocean and MIT over land (GO_ML)). Important synoptic features are validated using various reanalysis datasets and satellite derived products from TRMM and CRU data. Seasonally averaged surface temperature is reasonably well simulated by the model using both the LSMs along with CCSs namely, MIT, GO_ML and GL_MO schemes. Model simulations reveal slight warm bias using these schemes whereas significant cold bias is seen with KUO and GRELL schemes during all three years. It is noticed that the simulated Somali Jet (SJ) is weak in all simulations except MIT scheme in the simulations with (both BATS and CLM) in which the strength of SJ reasonably well captured. Although the model is able to simulate the Tropical Easterly Jet (TEJ) and Sub-Tropical Westerly Jet (STWJ) with all the CCSs in terms of their location and strength, the performance of MIT scheme seems to be better than the rest of the CCSs. Seasonal rainfall is not well simulated by the model. Significant underestimation of Indian Summer Monsoon Rainfall (ISMR) is observed over Central and North West India. Spatial distribution of seasonal ISMR is comparatively better simulated by the model with MIT followed by GO_ML scheme in combination with CLM although it overestimates rainfall over heavy precipitation zones. On overall statistical analysis, it is noticed that RegCM4 shows better skill in simulating ISM with MIT scheme using CLM.
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...
The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...
An economically designed, integrated quality and maintenance model using an adaptive Shewhart chart
International Nuclear Information System (INIS)
Panagiotidou, Sofia; Nenes, George
2009-01-01
This paper proposes a model for the economic design of a variable-parameter (Vp) Shewhart control chart used to monitor the mean in a process, where, apart from quality shifts, failures may also occur. Quality shifts result in poorer quality outcome, higher operational cost and higher failure rate. Thus, removal of such quality shifts, besides improving the quality of the outcome and reducing the quality cost, is also a preventive maintenance (PM) action since it reduces the probability of a failure and improves the equipment reliability. The proposed model allows the determination of the scheme parameters that minimize the total expected quality and maintenance cost of the procedure. The monitoring mechanism of the process employs an adaptive Vp-Shewhart control chart. To evaluate the effectiveness of the proposed model, its optimal expected cost is compared against the optimum cost of a fixed-parameter (Fp) chart
Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools
Ding, Steven X
2013-01-01
Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: · new material on fault isolation and identification, and fault detection in feedback control loops; · extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and · enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...
An iterative representer-based scheme for data inversion in reservoir modeling
International Nuclear Information System (INIS)
Iglesias, Marco A; Dawson, Clint
2009-01-01
In this paper, we develop a mathematical framework for data inversion in reservoir models. A general formulation is presented for the identification of uncertain parameters in an abstract reservoir model described by a set of nonlinear equations. Given a finite number of measurements of the state and prior knowledge of the uncertain parameters, an iterative representer-based scheme (IRBS) is proposed to find improved parameters. In this approach, the representer method is used to solve a linear data assimilation problem at each iteration of the algorithm. We apply the theory of iterative regularization to establish conditions for which the IRBS will converge to a stable approximation of a solution to the parameter identification problem. These theoretical results are applied to the identification of the second-order coefficient of a forward model described by a parabolic boundary value problem. Numerical results are presented to show the capabilities of the IRBS for the reconstruction of hydraulic conductivity from the steady-state of groundwater flow, as well as the absolute permeability in the single-phase Darcy flow through porous media
Directory of Open Access Journals (Sweden)
C. A. Randles
2013-03-01
Full Text Available In this study we examine the performance of 31 global model radiative transfer schemes in cloud-free conditions with prescribed gaseous absorbers and no aerosols (Rayleigh atmosphere, with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-the atmosphere aerosol radiative forcing ranges from roughly −10 to 20%, with over- and underestimates of radiative cooling at lower and higher solar zenith angle, respectively. Inter-model diversity (relative standard deviation increases from ~10 to 15% as solar zenith angle decreases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple-scattering is more variable than aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same or slightly better than results from previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple-scattering. Results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component in ecosystem and urban infrastructures. River planning should follow basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of planning scheme is a prerequisite for successful construction of urban rivers. Therefore, relevant studies on optimization of scheme for natural ecology planning of rivers is crucial. In the present study, four planning schemes for Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influenced the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using Super Decisions software. The results showed that important degree of scheme 3 was highest. A scientific, reasonable and accurate evaluation of schemes could be made by ANP method on natural ecology planning of urban rivers. This method could be used to provide references for sustainable development and construction of urban rivers. ANP method is also suitable for optimization of schemes for urban green space planning and design.
Adapting to life: Ecosystem and ocean modelling using dynamic adaptive remeshing
Hill, J.; Popova, E.; Piggott, M. D.; Ham, D.; Srokosz, M. A.
2011-12-01
Primary production in the world ocean is significantly controlled by meso- and sub-mesocale process. Thus existing general circulation models applied at the basin and global scale are limited by two opposing requirements: to have high enough spatial resolution to resolve fully the processes involved (down to order 1km) and the need to realistically simulate the basin scale. No model can currently satisfy both of these constraints. Adaptive unstructured mesh techniques offer a fundamental advantage over standard fixed structured mesh models by automatically generating very high resolution at locations only where and when it is required. Mesh adaptivity automatically resolves fine-scale physical or biological features as they develop, optimising computational cost by reducing resolution where it is not required. Here, we describe Fluidity-ICOM, a non-hydrostatic, finite-element, unstructured mesh ocean model, into which we have embedded a six-component ecosystem model, that has been validated at a number of ocean locations. We demonstrate the benefits of adaptive unstructured mesh techniques for coupled physical and biological modelling by examining a convective example where a chimney of cold water is allowed to restratify. The restratification leads to changes in the mixed layer depth, pumping nutrients from depth, affecting the dynamics and spatial distribution of the ecosystem components. We examine the effects of a number of factors, including wind stress and temperature fluxes, on the ecosystem during the restratification. Comparing results between the fixed and adaptive mesh simulations shows the importance of sub-mesoscale processes in determining the biological response, and stresses the need for high-resolution in coupled biology-physics ocean models.
Directory of Open Access Journals (Sweden)
Richard Yao Kuma Agyeman
2017-01-01
Full Text Available Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November were performed for two different years: a dry year (2001 and a wet year (2008. A double nested approach was used with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated precipitation over coastal (northern zones of Ghana for both years but estimated precipitation reasonably well over forest and transitional zones. On the whole, the combination of WRF Single-Moment 6-Class Microphysics Scheme, Grell-Devenyi Ensemble Cumulus Scheme, and Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and therefore is recommended for Ghana.
Modeling Adaptive and Nonadaptive Responses of Populations to Environmental Change.
Coulson, Tim; Kendall, Bruce E; Barthold, Julia; Plard, Floriane; Schindler, Susanne; Ozgul, Arpat; Gaillard, Jean-Michel
2017-09-01
Understanding how the natural world will be impacted by environmental change over the coming decades is one of the most pressing challenges facing humanity. Addressing this challenge is difficult because environmental change can generate both population-level plastic and evolutionary responses, with plastic responses being either adaptive or nonadaptive. We develop an approach that links quantitative genetic theory with data-driven structured models to allow prediction of population responses to environmental change via plasticity and adaptive evolution. After introducing general new theory, we construct a number of example models to demonstrate that evolutionary responses to environmental change over the short-term will be considerably slower than plastic responses and that the rate of adaptive evolution to a new environment depends on whether plastic responses are adaptive or nonadaptive. Parameterization of the models we develop requires information on genetic and phenotypic variation and demography that will not always be available, meaning that simpler models will often be required to predict responses to environmental change. We consequently develop a method to examine whether the full machinery of the evolutionarily explicit models we develop will be needed to predict responses to environmental change or whether simpler nonevolutionary models that are now widely constructed may be sufficient.
Directory of Open Access Journals (Sweden)
P. Chitra
2017-04-01
Full Text Available Recently, wireless network technologies were designed for most of the applications. Congestion raised in the wireless network degrades the performance and reduces the throughput. Congestion-free network is quit essen- tial in the transport layer to prevent performance degradation in a wireless network. Game theory is a branch of applied mathematics and applied sciences that used in wireless network, political science, biology, computer science, philosophy and economics. e great challenges of wireless network are their congestion by various factors. E ective congestion-free alternate path routing is pretty essential to increase network performance. Stackelberg game theory model is currently employed as an e ective tool to design and formulate conges- tion issues in wireless networks. is work uses a Stackelberg game to design alternate path model to avoid congestion. In this game, leaders and followers are selected to select an alternate routing path. e correlated equilibrium is used in Stackelberg game for making better decision between non-cooperation and cooperation. Congestion was continuously monitored to increase the throughput in the network. Simulation results show that the proposed scheme could extensively improve the network performance by reducing congestion with the help of Stackelberg game and thereby enhance throughput.
Khairoutdinov, M.
2015-12-01
The representation of microphysics, especially ice microphysics, remains one of the major uncertainties in cloud-resolving models (CRMs). Most of the cloud schemes use the so-called bulk microphysics approach, in which a few moments of such distributions are used as the prognostic variables. The System for Atmospheric Modeling (SAM) is the CRM that employs two such schemes. The single-moment scheme, which uses only mass for each of the water phases, and the two-moment scheme, which adds the particle concentration for each of the hydrometeor category. Of the two, the single-moment scheme is much more computationally efficient as it uses only two prognostic microphysics variables compared to ten variables used by the two-moment scheme. The efficiency comes from a rather considerable oversimplification of the microphysical processes. For instance, only a sum of the liquid and icy cloud water is predicted with the temperature used to diagnose the mixing ratios of different hydrometeors. The main motivation for using such simplified microphysics has been computational efficiency, especially in the applications of SAM as the super-parameterization in global climate models. Recently, we have extended the single-moment microphysics by adding only one additional prognostic variable, which has, nevertheless, allowed us to separate the cloud ice from liquid water. We made use of some of the recent observations of ice microphysics collected at various parts of the world to parameterize several aspects of ice microphysics that have not been explicitly represented before in our sing-moment scheme. For example, we use the observed broad dependence of ice concentration on temperature to diagnose the ice concentration in addition to prognostic mass. Also, there is no artificial separation between the pristine ice and snow, often used by bulk models. Instead we prescribed the ice size spectrum as the gamma distribution, with the distribution shape parameter controlled by the
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows the developers to run code at trillions of calculations per second using the familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools. Thus, the development environment is familiar one to a vast number of CPU developers. Although, getting a maximum performance out of MICs will require using some novel optimization techniques. New optimizations for an updated Thompson scheme are discusses in this paper. The optimizations improved the performance of the original Thompson code on Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson on a dual socket configuration of eight core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Kumar, R.; Samaniego, L. E.
2011-12-01
Spatially distributed hydrologic models at mesoscale level are based on the conceptualization and generalization of hydrological processes. Therefore, such models require parameter adjustment for its successful application at a given scale. Automatic computer-based algorithms are commonly used for the calibration purpose. While such algorithms can provide much faster and efficient results as compared to the traditional manual calibration method, they are also prone to overtraining of a parameter set for a given catchment. As a result, the transferability of model parameters from a calibration site to un-calibrated site is limited. In this study, we propose a regional multi-basin calibration scheme to prevent the overtraining of model parameters in a specific catchment. The idea is to split the available catchments into two disjoint groups in such a way that catchments belonging to the first group can be used for calibration (i.e. for minimization or maximization of objective functions), and catchments belonging to other group are used to cross-validation of the model performance for each generated parameter set. The calibration process should be stopped if the model shows a significant decrease in its performance at cross-validation catchments while increasing performance at calibration sites. Hydrologically diverse catchments were selected as members of each calibration and cross-validation groups to obtain a regional set of robust parameter. A dissimilarity measure based on runoff and antecedent precipitation copulas was used for the selection of the disjoint sets. The proposed methodology was used to calibrate transfer function parameters of a distributed mesoscale hydrologic model (mHM), whose parameter fields are linked to catchment characteristics through a set of transfer functions using a multiscale parameter regionalisation method. This study was carried out in 106 south German catchments ranging in size from 4 km2 to 12 700 km2. Initial test results
Directory of Open Access Journals (Sweden)
Moussab eBennehar
2015-12-01
Full Text Available This paper deals with a new control scheme for Parallel Kinematic Manipulators (PKMs based on the L1 adaptive control theory. The original L1 adaptive controller is extended by including an adaptive loop based on the dynamics of the PKM. The additional model-based term is in charge of the compensation of the modeled nonlinear dynamics in the aim of improving the tracking performance. Moreover, the proposed controller is enhanced to reduce the internal forces, which may appear in the case of Redundantly Actuated PKMs (RA-PKMs. The generated control inputs are first regulated through a projection mechanism that reduces the antagonistic internal forces, before being applied to the manipulator. To validate the proposed controller and to show its effectiveness, real-time experiments are conducted on a new four degrees-of-freedom (4-DOFs RA-PKM developed in our laboratory.
Stock market modeling and forecasting a system adaptation approach
Zheng, Xiaolian
2013-01-01
Stock Market Modeling translates experience in system adaptation gained in an engineering context to the modeling of financial markets with a view to improving the capture and understanding of market dynamics. The modeling process is considered as identifying a dynamic system in which a real stock market is treated as an unknown plant and the identification model proposed is tuned by feedback of the matching error. Like a physical system, a stock market exhibits fast and slow dynamics corresponding to internal (such as company value and profitability) and external forces (such as investor sentiment and commodity prices) respectively. The framework presented here, consisting of an internal model and an adaptive filter, is successful at considering both fast and slow market dynamics. A double selection method is efficacious in identifying input factors influential in market movements, revealing them to be both frequency- and market-dependent. The authors present work on both developed and developing markets ...
Evaluation of European air quality modelled by CAMx including the volatility basis set scheme
Directory of Open Access Journals (Sweden)
G. Ciarelli
2016-08-01
Full Text Available Four periods of EMEP (European Monitoring and Evaluation Programme intensive measurement campaigns (June 2006, January 2007, September–October 2008 and February–March 2009 were modelled using the regional air quality model CAMx with VBS (volatility basis set approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the period of February–March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA. Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2 and ozone (O3 were found to be overestimated for all the four periods, with O3 having the largest mean bias during June 2006 and January–February 2007 periods (8.9 pbb and 12.3 ppb mean biases respectively. In contrast, nitrogen dioxide (NO2 and carbon monoxide (CO were found to be underestimated for all the four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all the four periods with average biases ranging from −2.1 to 1.0 µg m−3. Comparisons with AMS (aerosol mass spectrometer measurements at different sites in Europe during February–March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurements data were performed. For February–March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period bringing
International Nuclear Information System (INIS)
Moiseenko, Vitali; Battista, Jerry; Van Dyk, Jake
2000-01-01
Purpose: To evaluate the impact of dose-volume histogram (DVH) reduction schemes and models of normal tissue complication probability (NTCP) on ranking of radiation treatment plans. Methods and Materials: Data for liver complications in humans and for spinal cord in rats were used to derive input parameters of four different NTCP models. DVH reduction was performed using two schemes: 'effective volume' and 'preferred Lyman'. DVHs for competing treatment plans were derived from a sample DVH by varying dose uniformity in a high dose region so that the obtained cumulative DVHs intersected. Treatment plans were ranked according to the calculated NTCP values. Results: Whenever the preferred Lyman scheme was used to reduce the DVH, competing plans were indistinguishable as long as the mean dose was constant. The effective volume DVH reduction scheme did allow us to distinguish between these competing treatment plans. However, plan ranking depended on the radiobiological model used and its input parameters. Conclusions: Dose escalation will be a significant part of radiation treatment planning using new technologies, such as 3-D conformal radiotherapy and tomotherapy. Such dose escalation will depend on how the dose distributions in organs at risk are interpreted in terms of expected complication probabilities. The present study indicates considerable variability in predicted NTCP values because of the methods used for DVH reduction and radiobiological models and their input parameters. Animal studies and collection of standardized clinical data are needed to ascertain the effects of non-uniform dose distributions and to test the validity of the models currently in use
Wahid, Abdul; Khan, Dost Muhammad; Hussain, Ijaz
2017-01-01
High dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue, different penalized regression procedures have been introduced in the literature, but these methods cannot cope with the problem of outliers and leverage points in heavy-tailed high dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed which is based on a Pearson-residual weighting scheme. The weight function determines the compatibility of each observation and downweights those that are inconsistent with the assumed model. It is observed that the RAL estimator can correctly select the covariates with non-zero coefficients and simultaneously estimate the parameters, not only in the presence of influential observations, but also in the presence of high multicollinearity. We also discuss the model selection oracle property and the asymptotic normality of the RAL. Simulation findings and real data examples also demonstrate the better performance of the proposed penalized regression approach.
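A minimal sketch of the weighting idea, under our reading of the abstract: fit an initial estimate, standardize the residuals, downweight observations whose Pearson-type residuals exceed a cutoff, and then solve a weighted lasso. The weight function, tuning constants and proximal-gradient solver below are illustrative choices, not the authors' exact RAL procedure:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, w, lam, n_iter=500):
    """Lasso on observations reweighted by w, solved with proximal gradient (ISTA)."""
    Xw = X * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    L = np.linalg.norm(Xw, 2) ** 2            # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = Xw.T @ (Xw @ beta - yw)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

def robust_adaptive_lasso(X, y, lam=0.5, c=2.0):
    """Pearson-residual weighting sketch: observations with large standardized
    residuals from an initial (non-robust) fit are downweighted before the lasso step."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta0
    pearson = r / (np.median(np.abs(r)) / 0.6745 + 1e-12)   # MAD-standardized residuals
    w = np.where(np.abs(pearson) <= c, 1.0, c / np.abs(pearson))  # Huber-type weights
    return weighted_lasso(X, y, w, lam)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.array([2.0, -1.5] + [0.0] * 8)
y = X @ beta_true + 0.1 * rng.normal(size=100)
y[:5] += 15.0                                  # inject gross outliers
beta = robust_adaptive_lasso(X, y)
print(np.round(beta, 2))
```

Despite the contaminated responses, the downweighting lets the lasso step recover the two non-zero coefficients while shrinking the rest toward zero.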
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
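The decomposition the abstract relies on can be written in a few lines. In this class of diagnostic models, the actual residence time is commonly expressed as the baseline residence time divided by the combined environmental scalar, and steady-state storage as uptake times residence time; this is the generic first-order form, not BEPS code, and the numbers below are purely illustrative:

```python
def residence_time(tau_baseline, xi_t, xi_w):
    """Actual residence time from the baseline residence time and the
    temperature and moisture scalars (common first-order decomposition)."""
    xi = xi_t * xi_w                 # combined environmental scalar, in (0, 1]
    return tau_baseline / xi         # xi < 1 lengthens the residence time

def carbon_storage(npp, tau_e):
    """Steady-state carbon storage as carbon uptake times residence time."""
    return npp * tau_e

# Illustrative numbers only: a forest biome with a 30-year baseline
# residence time under moderate temperature and moisture limitation
tau_e = residence_time(30.0, xi_t=0.8, xi_w=0.6)
print(round(tau_e, 1), "years")
print(round(carbon_storage(0.6, tau_e), 1), "kg C m-2")  # NPP = 0.6 kg C m-2 yr-1
```

Note that dividing by a scalar below one is consistent with the abstract's numbers: the realized residence times (32.7 to 158.2 years) exceed the baseline values (4.2 to 53.7 years).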
A model of adaptive decision-making from representation of information environment by quantum fields
Bagarello, F.; Haven, E.; Khrennikov, A.
2017-10-01
We present a mathematical model of decision-making (DM) by agents acting in a complex and uncertain environment (combining a huge variety of economic, financial, behavioural and geopolitical factors). To describe the interaction of agents with it, we apply the formalism of quantum field theory (QFT). The quantum fields are of a purely informational nature. The QFT model can be treated as a far relative of expected utility theory, where the role of utility is played by adaptivity to an environment (bath). However, this sort of utility-adaptivity cannot be represented simply as a numerical function. The operator representation in Hilbert space is used and adaptivity is described as in quantum dynamics. We are especially interested in the stabilization of solutions for sufficiently large times. The outputs of this stabilization process, probabilities for possible choices, are treated in the framework of classical DM. To connect classical and quantum DM, we appeal to Quantum Bayesianism. We demonstrate the quantum-like interference effect in DM, which is exhibited as a violation of the formula of total probability, and hence of the classical Bayesian inference scheme. This article is part of the themed issue `Second quantum revolution: foundational questions'.
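The violation of the formula of total probability can be illustrated numerically. In the standard quantum-like treatment, P(B) acquires an interference correction of the form 2·sqrt(p1·q1·p2·q2)·cos(theta) on top of the classical sum; the probabilities and phase below are arbitrary illustrative values, not taken from the paper:

```python
import math

def classical_total_probability(p_a1, p_b_given_a1, p_b_given_a2):
    """Classical Bayesian (total probability) prediction for P(B)."""
    p_a2 = 1.0 - p_a1
    return p_a1 * p_b_given_a1 + p_a2 * p_b_given_a2

def quantum_like_probability(p_a1, p_b_given_a1, p_b_given_a2, theta):
    """Quantum-like P(B): the classical sum plus an interference term whose
    phase theta encodes the context of the decision."""
    p_a2 = 1.0 - p_a1
    classical = p_a1 * p_b_given_a1 + p_a2 * p_b_given_a2
    interference = 2.0 * math.sqrt(p_a1 * p_b_given_a1 * p_a2 * p_b_given_a2)
    return classical + interference * math.cos(theta)

# With theta away from pi/2 the classical formula is visibly violated
p_cl = classical_total_probability(0.5, 0.4, 0.6)
p_q = quantum_like_probability(0.5, 0.4, 0.6, theta=2.0)
print(round(p_cl, 3), round(p_q, 3))
```

At theta = pi/2 the interference term vanishes and the classical value is recovered, which is why the violation is diagnostic of quantum-like (rather than classical Bayesian) inference.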
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Directory of Open Access Journals (Sweden)
Natalia A. Tomashenko
2016-11-01
Full Text Available Subject of Research. We study speaker adaptation of deep neural network (DNN) acoustic models in automatic speech recognition systems. The aim of speaker adaptation techniques is to improve the accuracy of the speech recognition system for a particular speaker. Method. A novel method for training and adaptation of deep neural network acoustic models has been developed. It is based on using an auxiliary GMM (Gaussian Mixture Model) and GMMD (GMM-derived) features. The principal advantage of the proposed GMMD features is the possibility of performing the adaptation of a DNN through the adaptation of the auxiliary GMM. In the proposed approach any method for the adaptation of the auxiliary GMM can be used, hence it provides a universal way of transferring adaptation algorithms developed for GMMs to DNN adaptation. Main Results. The effectiveness of the proposed approach was shown by means of one of the most common adaptation algorithms for GMM models, MAP (Maximum A Posteriori) adaptation. Different ways of integrating the proposed approach into a state-of-the-art DNN architecture have been proposed and explored. An analysis of the choice of the type of auxiliary GMM model is given. Experimental results on the TED-LIUM corpus demonstrate that, in an unsupervised adaptation mode, the proposed adaptation technique can provide approximately an 11–18% relative word error rate (WER) reduction on different adaptation sets, compared to the speaker-independent DNN system built on conventional features, and a 3–6% relative WER reduction compared to the SAT-DNN trained on fMLLR-adapted features.
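A simplified reading of the GMMD idea can be sketched as follows: each acoustic frame is mapped to the vector of per-component log-likelihoods of an auxiliary GMM, so adapting the GMM (for example by a MAP-style mean shift) changes the DNN's input features without retraining the DNN itself. The diagonal-covariance GMM, dimensions and shift below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def log_gauss_diag(x, mean, var):
    """Log density of a diagonal-covariance Gaussian, evaluated per frame."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def gmmd_features(frames, means, variances, weights):
    """GMM-derived feature vector per frame: the per-component weighted
    log-likelihoods of an auxiliary GMM."""
    comps = [np.log(w) + log_gauss_diag(frames, m, v)
             for w, m, v in zip(weights, means, variances)]
    return np.stack(comps, axis=1)          # shape (n_frames, n_components)

rng = np.random.default_rng(1)
frames = rng.normal(size=(4, 13))           # 4 frames of 13-dim acoustic features
means = rng.normal(size=(8, 13))            # an 8-component auxiliary GMM
variances = np.ones((8, 13))
weights = np.full(8, 1.0 / 8)

feats = gmmd_features(frames, means, variances, weights)
# Adapting the GMM (here a crude mean shift standing in for MAP adaptation)
# changes the DNN input features while the DNN weights stay fixed:
adapted = gmmd_features(frames, means + 0.1, variances, weights)
print(feats.shape, np.allclose(feats, adapted))
```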
Student Modelling in Adaptive E-Learning Systems
Directory of Open Access Journals (Sweden)
Clemens Bechter
2011-09-01
Full Text Available Most e-Learning systems provide web-based learning in which students access the same online courses via the Internet, without adaptation to each student's profile and behavior. In an e-Learning system, one size does not fit all. It is therefore a challenge to make e-Learning systems suitably “adaptive”. The aim of adaptive e-Learning is to provide students with the appropriate content at the right time, meaning that the system is able to determine the knowledge level, keep track of usage, and arrange content automatically for each student for the best learning result. This study presents a proposed system which includes major adaptive features based on a student model. The proposed system initializes the student model, determining the knowledge level of a student when the student registers for the course. After a student starts learning the lessons and doing the activities, the system tracks information about the student until he/she takes a test. The student's knowledge level, based on the test scores, is updated into the system for use in the adaptation process, which combines the student model with the domain model in order to deliver suitable course contents to the students. In this study, the proposed adaptive e-Learning system is implemented for an “Introduction to Java Programming Language” course, using LearnSquare software. After the system was tested, the results showed positive feedback towards the proposed system, especially its adaptive capability.
2009-09-01
FVCOM; ICOM (Imperial College Ocean Model), a finite-element (CG and DG) code written in C and suited to coastal regions with complex irregular geometry and steep bottom topography; Finel, a three-dimensional non-hydrostatic finite-element code developed by the same group.
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
L M WANG
2017-08-16
A novel model-free adaptive sliding mode strategy is proposed for generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To solve the difficulties arising from the limited knowledge of the master–slave system ...
Space Weather Forecasts Driven by the ADAPT Model
Henney, C. J.; Arge, C. N.; Shurkin, K.; Schooley, A. K.; Hock, R. A.; White, S.
2015-12-01
In this presentation, we highlight recent progress to forecast key space weather parameters with the ADAPT (Air Force Data Assimilative Photospheric flux Transport) model. Driven by a magnetic flux transport model, ADAPT evolves global solar magnetic maps forward 1 to 7 days in the future to provide realistic estimates of the solar near-side field distribution used to forecast the solar wind, F10.7 (i.e., the solar 10.7 cm radio flux), extreme ultraviolet (EUV) and far ultraviolet (FUV) irradiance. Input to the ADAPT model includes solar near-side estimates of the inferred photospheric magnetic field from space-based (i.e., HMI) and ground-based (e.g., GONG & VSM) instruments. We summarize the recent findings that: 1) the sum of the absolute value of strong magnetic fields, associated with sunspots, is shown to correlate well with the observed daily F10.7 variability (Henney et al. 2012); and 2) the sum of the absolute value of weak magnetic fields, associated with plage regions, is shown to correlate well with EUV and FUV irradiance variability (Henney et al. 2015). In addition, recent progress to utilize the ADAPT global maps as input to the Wang-Sheeley-Arge (WSA) coronal and solar wind model is presented. We also discuss the challenges of observing less than half of the solar surface at any given time and the need for future magnetograph instruments near L1 and L5.
Why Reinvent the Wheel? Let's Adapt Our Institutional Assessment Model.
Aguirre, Francisco; Hawkins, Linda
This paper reports on the implementation of an Integrated Assessment and Strategic Planning (IASP) process to comply with accountability requirements at the community college of New Mexico State University at Alamogordo. The IASP model adapted an existing compliance matrix and applied it to the business college program in 1995 to assess and…
Fast, Sequence Adaptive Parcellation of Brain MR Using Parametric Models
DEFF Research Database (Denmark)
Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen
2013-01-01
The method achieves state-of-the-art segmentation performance in both cortical and subcortical structures, while retaining all the benefits of generative parametric models, including high computational speed, automatic adaptiveness to changes in image contrast when different scanner platforms and pulse sequences are used, and the ability
Temimi, Marouane; Chaouch, Naira; Weston, Michael; Ghedira, Hosni
2017-04-01
This study covers five fog events reported in 2014 at Abu Dhabi International Airport in the United Arab Emirates (UAE). We assess the performance of the WRF-ARW model during fog conditions, intercompare seven different PBL schemes and assess their impact on the performance of the simulations. Seven PBL schemes, namely Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3, were tested. Radiosonde data from the Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles were used to assess the performance of the model. All PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and bias over all PBL schemes were 15.75 % and -9.07 %, respectively, whereas the RMSE and bias obtained with QNSE were 14.65 % and -6.3 %, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were larger at the surface than aloft. The sensitivity to lead time showed that the best simulation performance was obtained when the lead time varied between 12 and 18 hours. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
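The two verification metrics used throughout the comparison are the mean bias and the RMSE. A minimal sketch with hypothetical relative-humidity values (not the paper's station data):

```python
import math

def bias(sim, obs):
    """Mean bias: average of simulated minus observed values."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root mean square error between simulation and observation."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Hypothetical RH values (%) at a single station during a fog event
obs = [92.0, 88.0, 95.0, 90.0]
sim = [85.0, 80.0, 96.0, 82.0]
print(round(bias(sim, obs), 2), round(rmse(sim, obs), 2))
```

A negative RH bias, as in this toy example and in the study's averages, means the model atmosphere is too dry, which delays or suppresses simulated fog formation.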
Adapting to life: simulating an ecosystem within an unstructured adaptive mesh ocean model
Hill, J.; Piggott, M. D.; Popova, E. E.; Ham, D. A.; Srokosz, M. A.
2010-12-01
Ocean oligotrophic gyres are characterised by low rates of primary production. Nevertheless, their great area (covering roughly a third of the Earth's surface, and probably constituting the largest ecosystem on the planet) means that they play a crucial role in global biogeochemistry. Current models give values of primary production two orders of magnitude lower than those observed, thought to be due to the non-resolution of sub-mesoscale phenomena, which play a significant role in nutrient supply in such areas. However, which aspects of sub-mesoscale processes are responsible for the observed higher productivity is an open question. Existing models are limited by two opposing requirements: the need for spatial resolution high enough to fully resolve the processes involved (down to order 1 km) and the need to realistically simulate the full gyre. No model can currently satisfy both of these constraints. Here, we detail Fluidity-ICOM, a non-hydrostatic, finite-element, unstructured mesh ocean model. Adaptive mesh techniques allow us to focus resolution where and when we require it. We present the first steps towards performing a full North Atlantic simulation, by showing that adaptive mesh techniques can be used in conjunction with both turbulence parametrisations and ecosystem models in pseudo-1D water columns. We show that the model can successfully reproduce the annual variation of the mixed layer depth at key locations within the North Atlantic gyre, with adaptive meshing producing more accurate results than the fixed mesh simulations, with fewer degrees of freedom. Moreover, the model is capable of reproducing the key behaviour of the ecosystem in those locations.
Francisco, R. V.; Argete, J.; Giorgi, F.; Pal, J.; Bi, X.; Gutowski, W. J.
2006-09-01
The latest version of the Abdus Salam International Centre for Theoretical Physics (ICTP) regional model RegCM is used to investigate summer monsoon precipitation over the Philippine archipelago and surrounding ocean waters, a region where regional climate models have not been applied before. The sensitivity of simulated precipitation to driving lateral boundary conditions (NCEP and ERA40 reanalyses) and ocean surface flux scheme (BATS and Zeng) is assessed for 5 monsoon seasons. The ability of the RegCM to simulate the spatial patterns and magnitude of monsoon precipitation is demonstrated, both in response to the prominent large scale circulations over the region and to the local forcing by the physiographical features of the Philippine islands. This provides encouraging indications concerning the development of a regional climate modeling system for the Philippine region. On the other hand, the model shows a substantial sensitivity to the analysis fields used for lateral boundary conditions as well as the ocean surface flux schemes. The use of ERA40 lateral boundary fields consistently yields greater precipitation amounts compared to the use of NCEP fields. Similarly, the BATS scheme consistently produces more precipitation compared to the Zeng scheme. As a result, different combinations of lateral boundary fields and surface ocean flux schemes provide a good simulation of precipitation amounts and spatial structure over the region. The response of simulated precipitation to using different forcing analysis fields is of the same order of magnitude as the response to using different surface flux parameterizations in the model. As a result it is difficult to unambiguously establish which of the model configurations is best performing.
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Semiparametric Efficient Adaptive Estimation of the PTTGARCH model
Ciccarelli, Nicola
2016-01-01
Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a conditional heteroskedasticity and asymmetric volatility GARCH-type model (i.e., the PTTGARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation and via the Newton-Raphson technique applied on the root-n-consistent quasi-maximum likelihood estimator, we construct a more efficient estimator tha...
ADAPTATION OF WOFOST MODEL FROM CGMS TO ROMANIAN CONDITIONS
LAZĂR CĂTĂLIN; BARUTH BETTINA; MICALE FABIO; LAZĂR DANIELA ANCA
2009-01-01
This preliminary study is an inventory of the main resources and difficulties in the adaptation of the Crop Growth Monitoring System (CGMS), used by the Agri4cast unit of IPSC at the Joint Research Centre (JRC) - Ispra of the European Commission, to the conditions of Romania. In contrast with the original model, calibrated mainly with statistical average yields at the national level, for local calibration of the model the statistical yields at lower administrative units (macroregion or county) must be used. In additio...
A phoneme-based student model for adaptive spelling training
Baschera, Gian-Marco; Gross, Markus H.
2009-01-01
We present a novel phoneme-based student model for spelling training. Our model is data driven, adapts to the user and provides information for, e.g., optimal word selection. We describe spelling errors using a set of features accounting for phonemic, capitalization, typo, and other error categories. We compute the influence of individual features on the error expectation values based on previous input data using Poisson regression. This enables us to predict error expectation values and to c...
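The error-expectation step can be sketched with a plain Poisson regression: the expected error count is modelled as exp(X @ beta) and fitted by iteratively reweighted least squares. The feature matrix and counts below are hypothetical; the authors' actual feature set (phonemic, capitalization, typo categories) is much richer:

```python
import numpy as np

def poisson_regression(X, y, n_iter=50):
    """Poisson regression via iteratively reweighted least squares (IRLS):
    models E[y] = exp(X @ beta), the standard link for count-valued data."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)               # current expected error counts
        W = mu                              # Poisson working weights
        z = X @ beta + (y - mu) / mu        # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Hypothetical feature matrix: [intercept, has_double_consonant, is_capitalized]
X = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1],
              [1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([1, 3, 2, 5, 0, 4, 1, 6], dtype=float)   # observed error counts
beta = poisson_regression(X, y)
print(np.round(np.exp(X[:4] @ beta), 2))   # fitted error expectation per pattern
```

The fitted expectations can then drive word selection, e.g. by preferring words whose predicted error count for this student is highest.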
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
Kanevski, Mikhail
2015-04-01
The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
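At its core a GRNN is the Nadaraya-Watson estimator: a prediction is a Gaussian-kernel-weighted average of the training targets. A minimal sketch, with an anisotropic (per-feature) bandwidth standing in for the feature-selection mechanism described above; the data and bandwidths are illustrative assumptions:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN = Nadaraya-Watson kernel regression: each prediction is a
    kernel-weighted average of training targets. `sigma` may be a scalar
    (isotropic) or a per-feature vector (anisotropic kernel); a very large
    bandwidth on a feature effectively switches that feature off."""
    sigma = np.broadcast_to(np.asarray(sigma, float), (X_train.shape[1],))
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) / sigma) ** 2
    w = np.exp(-0.5 * d2.sum(axis=2))        # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)   # only feature 0 is relevant
# A huge bandwidth on the irrelevant feature removes its influence,
# which is how anisotropic kernels express feature relevance:
pred = grnn_predict(X, y, X, sigma=np.array([0.1, 1e6]))
print(round(float(np.mean((pred - y) ** 2)), 4))
```

In the adaptive setting the per-feature bandwidths are tuned (e.g. by leave-one-out cross-validation), and the resulting bandwidths rank feature relevance.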
Reddy, Sunita; Mary, Immaculate
2013-01-01
The Rajiv Aarogyasri Community Health Insurance (RACHI) scheme in Andhra Pradesh (AP) has been a very popular social insurance scheme with a public-private partnership model to deal with the problem of catastrophic medical expenditure for tertiary-level care for poor households. A brief analysis of the RACHI scheme, based on officially available data and media reports, has been undertaken from a public health perspective to understand the nature and financing of the partnership and the lessons it provides. The analysis of the annual budget spent on surgeries in private hospitals compared to tertiary public hospitals shows that the current scheme is not sustainable and poses a huge burden on the state exchequer. The private hospital associations in AP further act as pressure groups to increase the budget or threaten to withdraw services. Thus, profits are privatized and losses are socialized.
Zhu, Guangpu
2018-01-26
In this paper, a fully discrete scheme which considers temporal and spatial discretizations is presented for the coupled Cahn-Hilliard equation in conserved form with the dynamic contact line condition and the Navier-Stokes equation with the generalized Navier boundary condition. Variable densities and viscosities are incorporated in this model. A rigorous proof of energy stability is provided for the fully discrete scheme based on a semi-implicit temporal discretization and a finite difference method on the staggered grids for the spatial discretization. A splitting method based on the pressure stabilization is implemented to solve the Navier-Stokes equation, while the stabilization approach is also used for the Cahn-Hilliard equation. Numerical results in both 2-D and 3-D demonstrate the accuracy, efficiency and decaying property of discrete energy of the proposed scheme.
Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data.
Liu, Zitao; Hauskrecht, Milos
2016-02-01
Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient's condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive, reflecting patient-specific temporal behaviors well even when the available patient-specific data are sparse and cover a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both population-based and patient-specific time series prediction models in terms of prediction accuracy.
Construction of Gait Adaptation Model in Human Splitbelt Treadmill Walking
Directory of Open Access Journals (Sweden)
Yuji Otoda
2009-01-01
Full Text Available There are a huge number of studies that measure kinematics, dynamics, oxygen uptake and so on in human walking on the treadmill. Especially in walking on the splitbelt treadmill, where the speeds of the right and left belts differ, remarkable differences in kinematics are seen between normal and cerebellar disease subjects. In order to construct a gait adaptation model of such human splitbelt treadmill walking, we proposed a simple control model and made a newly developed 2D biped robot walk on the splitbelt treadmill. We combined the conventional limit-cycle based control, consisting of joint PD-control, cyclic motion trajectory planning and a stepping reflex, with a newly proposed adjustment of the P-gain at the hip joint of the stance leg. We showed that the data of the robot experiments (normal subject model and cerebellar disease subject model) had high similarities, in ratios and patterns, with the data of the experiments on normal subjects and cerebellar disease subjects carried out by Reisman et al. (2005) and Morton and Bastian (2006). We also showed that the P-gain at the hip joint of the stance leg was the control parameter of adaptation for symmetric gaits in splitbelt walking and that P-gain adjustment corresponded to muscle stiffness adjustment by the cerebellum. Consequently, we successfully proposed a gait adaptation model of human splitbelt treadmill walking and confirmed the validity of our hypotheses and the proposed model using the biped robot.
Directory of Open Access Journals (Sweden)
Isaac Osei
2016-11-01
Full Text Available Techno-economic models for optimised utilisation of jatropha oil under an out-grower farming scheme were developed based on different considerations for oil and by-product utilisation. Model 1: an out-grower scheme where oil is exported and press cake is utilised for compost. Model 2: an out-grower scheme with six scenarios considered for the utilisation of oil and by-products. Linear programming models were developed based on the outcomes of the models to optimise the use of the oil through profit maximisation. The findings revealed that Model 1 was financially viable from the processor's perspective but not for the farmer at a seed price of $0.07/kg. All scenarios considered under Model 2 were financially viable from the processor's perspective but not for the farmer at a seed price of $0.07/kg; however, at a seed price of $0.085/kg, financial viability was achieved for both parties. Optimising the utilisation of the oil resulted in an annual maximum profit of $123,300.
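The profit-maximisation step is an ordinary linear program. A toy two-variable instance (hypothetical prices and capacities, solved by brute-force vertex enumeration rather than a production LP solver) shows the shape of such a model:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c . x subject to A x <= b and x >= 0, for two decision
    variables, by enumerating the vertices of the feasible polygon
    (adequate at this tiny scale; a real model would use an LP solver)."""
    rows = A + [[-1.0, 0.0], [0.0, -1.0]]     # fold in non-negativity
    rhs = b + [0.0, 0.0]
    best, best_x = None, None
    for i, j in combinations(range(len(rows)), 2):
        (a1, b1), (a2, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                           # parallel constraints
        x = (rhs[i] * b2 - rhs[j] * b1) / det  # intersection of the pair
        y = (a1 * rhs[j] - a2 * rhs[i]) / det
        if all(r[0] * x + r[1] * y <= rr + 1e-9 for r, rr in zip(rows, rhs)):
            value = c[0] * x + c[1] * y
            if best is None or value > best:
                best, best_x = value, (x, y)
    return best, best_x

# Hypothetical numbers: margins per tonne of exported vs locally sold oil,
# limited by the total oil supply and a local-market absorption cap
profit, (export_t, local_t) = solve_lp_2d(
    c=[420.0, 510.0],                 # $/tonne margins (illustrative)
    A=[[1.0, 1.0],                    # export + local <= 1000 t of oil
       [0.0, 1.0]],                   # local market absorbs <= 300 t
    b=[1000.0, 300.0])
print(round(profit), export_t, local_t)
```

The optimum fills the higher-margin local market to its cap and exports the rest, which is the typical corner-solution behaviour such utilisation models exhibit.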
Numerical simulations of multicomponent ecological models with adaptive methods.
Owolabi, Kolade M; Patidar, Kailash C
2016-01-08
The study of the dynamic relationships within multi-species models has gained a huge amount of scientific interest over the years and will continue to maintain its dominance in both ecology and mathematical ecology in the years to come, due to its practical relevance and universal existence. Some of its emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states and spatial pattern formation. Many time-dependent partial differential equations combine low-order nonlinear with higher-order linear terms. In an attempt to obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore are restricted to second order in time due to some difficulties introduced by the combination of stiffness and nonlinearity. Hence, the dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas. As a result, we introduce a higher-order finite difference approximation for the spatial discretization, and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered in this paper are found to exhibit a form of localized spatiotemporal patterns. Such patterns are correctly captured in the local analysis of the model equations. Extended 2D results that are in agreement with typical Turing patterns, such as stripes and spots, as well as irregular snakelike structures, are presented. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choices of parameters in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
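The time-stepping family the abstract refers to can be illustrated with its simplest member, first-order ETD (exponential Euler), applied to a Fisher reaction-diffusion equation with a Fourier spatial discretization: the stiff linear diffusion is integrated exactly while the nonlinearity is frozen over each step. The equation, parameters and grid are illustrative, not the paper's multi-species models:

```python
import numpy as np

def etd1_fisher(u0, D=0.1, r=1.0, dt=0.05, steps=100):
    """First-order exponential time differencing (ETD1) for the Fisher
    equation u_t = D u_xx + r u (1 - u) on [0, 2*pi), periodic in space:
    u_hat^{n+1} = e^{c dt} u_hat^n + dt phi_1(c dt) N_hat^n, c = -D k^2."""
    n = u0.size
    k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # angular wavenumbers
    c = -D * k ** 2                                  # linear (diffusion) symbol
    E = np.exp(c * dt)                               # exact linear propagator
    # phi_1(c dt) = (e^{c dt} - 1)/(c dt), with the c -> 0 limit handled
    phi1 = np.where(np.abs(c) > 1e-14,
                    (E - 1.0) / np.where(c == 0, 1.0, c * dt), 1.0)
    u = u0.copy()
    for _ in range(steps):
        N_hat = np.fft.fft(r * u * (1.0 - u))        # frozen nonlinear term
        u_hat = E * np.fft.fft(u) + dt * phi1 * N_hat
        u = np.real(np.fft.ifft(u_hat))
    return u

u0 = 0.5 + 0.1 * np.cos(np.linspace(0, 2 * np.pi, 128, endpoint=False))
u = etd1_fisher(u0)
# Fisher dynamics drive the solution toward the stable state u = 1
print(round(float(u.min()), 3), round(float(u.max()), 3))
```

Because the linear part is treated exactly, the step size is not limited by the stiff diffusion term, which is precisely why ETD schemes suit the stiffness-plus-nonlinearity combination discussed above; the paper's higher-order members replace phi_1 with higher phi-functions.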
International Nuclear Information System (INIS)
Chinese, D.; Patrizio, P.; Nardin, G.
2014-01-01
Italy has witnessed an extraordinary growth in biogas generation from livestock effluents and agricultural activities in the last few years, as well as a severe isomorphic process leading to a market dominance of 999 kW power plants owned by “entrepreneurial farms”. Under the pressure of the economic crisis in the country, the Italian government has restructured renewable energy support schemes, introducing a new program in 2013. In this paper, the effects of the previous and current support schemes on the optimal plant size, feedstock mix and profitability were investigated by introducing a spatially explicit biogas supply chain optimization model which accounts for different incentive structures. By applying the model to a regional case study, the homogenization observed to date is recognized as a result of the former incentive structures. Considerable reductions in the local economic potential of agricultural biogas power plants without external heat use are estimated. New plants are likely to be manure-based and, due to the lower energy density of such feedstock, wider supply chains are expected although the optimal plant size will be smaller. The new support scheme will therefore most likely eliminate past distortions but also slow down investments in agricultural biogas plants. - Highlights: • We review the evolution of agricultural biogas support schemes in Italy over the last 20 years. • A biogas supply chain optimization model which accounts for feed-in tariffs is introduced. • The model is applied to a regional case study under the two most recent support schemes. • Incentives in force until 2013 caused homogenization towards maize-based 999 kWel plants. • Wider, manure-based supply chains feeding smaller plants are expected with future incentives.
Integrating a reservoir regulation scheme into a spatially distributed hydrological model
Energy Technology Data Exchange (ETDEWEB)
Zhao, Gang; Gao, Huilin; Naz, Bibi S.; Kao, Shih-Chieh; Voisin, Nathalie
2016-12-01
During the past several decades, numerous reservoirs have been built across the world for a variety of purposes such as flood control, irrigation, municipal/industrial water supply, and hydropower generation. Consequently, natural streamflow timing and magnitude have been altered significantly by reservoir operations. In addition, the hydrological cycle can be modified by land use/land cover and climate changes. To understand the fine-scale feedback between hydrological processes and water management decisions, a distributed hydrological model embedded with a reservoir component is desirable. In this study, a multi-purpose reservoir module with predefined complex operational rules was integrated into the Distributed Hydrology Soil Vegetation Model (DHSVM). Conditional operating rules, which are designed to reduce flood risk and enhance water supply reliability, were adopted in this module. The performance of the integrated model was tested over the upper Brazos River Basin in Texas, where two U.S. Army Corps of Engineers reservoirs, Lake Whitney and Aquilla Lake, are located. The integrated DHSVM model was calibrated and validated using observed reservoir inflow, outflow, and storage data. The error statistics were summarized for both reservoirs on a daily, weekly, and monthly basis. Using the weekly reservoir storage for Lake Whitney as an example, the coefficient of determination (R2) and the Nash-Sutcliffe efficiency (NSE) are 0.85 and 0.75, respectively. These results suggest that this reservoir module holds promise for use in sub-monthly hydrological simulations. Enabled with the new reservoir component, the DHSVM model provides a platform to support adaptive water resources management under the impacts of evolving anthropogenic activities and substantial environmental changes.
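A conditional operating rule of the kind described can be sketched as a small decision over storage zones. Everything below (zone thresholds, demand, release caps, the water-balance step) is a hypothetical illustration of the concept, not the Lake Whitney or Aquilla Lake rule set.

```python
def reservoir_release(storage, inflow, cap, conservation_pool, dead_pool,
                      demand, max_release):
    """Hypothetical conditional operating rule: flood-control drawdown
    above the conservation pool, demand-driven release inside it, and
    inflow pass-through near the dead pool.  Units are arbitrary volumes
    per time step."""
    if storage > conservation_pool:            # flood-control zone
        return min(max_release, storage - conservation_pool + inflow)
    elif storage > dead_pool:                  # conservation zone
        return min(demand, storage - dead_pool + inflow)
    else:                                      # below dead pool
        return min(inflow, demand)

def step(storage, inflow, **p):
    """One water-balance step: apply the rule, then cap storage."""
    r = reservoir_release(storage, inflow, **p)
    s = min(storage + inflow - r, p["cap"])    # spill anything over capacity
    return s, r

p = dict(cap=100.0, conservation_pool=80.0, dead_pool=20.0,
         demand=5.0, max_release=15.0)
flood = step(90.0, 10.0, **p)    # wet conditions, flood-control zone
normal = step(50.0, 3.0, **p)    # normal conditions, conservation zone
```

Coupled into a distributed model, `step` would be evaluated each routing time step at the reservoir's grid cell, with `inflow` supplied by the channel network.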
Zealotry effects on opinion dynamics in the adaptive voter model
Klamser, Pascal P.; Wiedermann, Marc; Donges, Jonathan F.; Donner, Reik V.
2017-11-01
The adaptive voter model has been widely studied as a conceptual model for opinion formation processes on time-evolving social networks. Past studies on the effect of zealots, i.e., nodes aiming to spread their fixed opinion throughout the system, only considered the voter model on a static network. Here we extend the study of zealotry to the case of an adaptive network topology co-evolving with the state of the nodes and investigate opinion spreading induced by zealots depending on their initial density and connectedness. Numerical simulations reveal that below the fragmentation threshold a low density of zealots is sufficient to spread their opinion to the whole network. Beyond the transition point, zealots must exhibit an increased degree as compared to ordinary nodes for an efficient spreading of their opinion. We verify the numerical findings using a mean-field approximation of the model yielding a low-dimensional set of coupled ordinary differential equations. Our results imply that the spreading of the zealots' opinion in the adaptive voter model is strongly dependent on the link rewiring probability and the average degree of normal nodes in comparison with that of the zealots. In order to avoid a complete dominance of the zealots' opinion, there are two possible strategies for the remaining nodes: adjusting the probability of rewiring and/or the number of connections with other nodes, respectively.
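The co-evolution of opinions and topology described above can be sketched in plain Python. This is a minimal illustration with arbitrary parameters, not the authors' simulation code: zealots are simply nodes that never update, and the rewiring step moves an active edge to a like-minded partner.

```python
import random

def adaptive_voter_with_zealots(n=200, k=4, phi=0.3, n_zealots=10,
                                steps=20000, seed=1):
    """Minimal sketch of the adaptive voter model with zealots.  Nodes
    0..n_zealots-1 are zealots holding opinion 1 and never change.
    Each step picks a random edge; if its endpoints disagree, a
    non-zealot endpoint either rewires the edge to a like-minded node
    (probability phi) or adopts the neighbour's opinion."""
    rng = random.Random(seed)
    opinion = [1]*n_zealots + [0]*(n - n_zealots)
    edges = set()
    while len(edges) < n*k//2:                  # random graph, mean degree ~ k
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            edges.add((min(a, b), max(a, b)))
    edges = list(edges)
    for _ in range(steps):
        i = rng.randrange(len(edges))
        a, b = edges[i]
        if opinion[a] == opinion[b]:
            continue                            # edge is not active
        movers = [v for v in (a, b) if v >= n_zealots]
        mover = rng.choice(movers)              # zealots never act
        other = b if mover == a else a
        if rng.random() < phi:                  # rewire to a like-minded node
            cands = [j for j in range(n)
                     if j != mover and opinion[j] == opinion[mover]]
            if cands:
                new = rng.choice(cands)
                edges[i] = (min(mover, new), max(mover, new))
        else:                                   # imitate the neighbour
            opinion[mover] = opinion[other]
    return opinion

op = adaptive_voter_with_zealots()
```

Sweeping `phi` and `n_zealots` in such a sketch reproduces the qualitative picture above: below the fragmentation threshold a few zealots can take over, while beyond it their connectedness matters.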
Direct Adaptive Control Of An Industrial Robot
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1992-01-01
A decentralized direct adaptive control scheme for a six-jointed industrial robot eliminates part of the overall computational burden imposed by a centralized controller, which degrades the performance of the robot by reducing the sampling rate. The control and controller-adaptation laws are based on the observed performance of the manipulator: there is no need to model the dynamics of the robot. The adaptive controllers cope with uncertainties and variations in the robot and its payload.
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by the computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with a day and a half by the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or to make kinematic wave routing computationally feasible for high-resolution modeling.
Energy Technology Data Exchange (ETDEWEB)
Beliaev, J.; Trunov, N.; Tschekin, I. [OKB Gidropress (Russian Federation); Luther, W. [GRS Garching (Germany); Spolitak, S. [RNC-KI (Russian Federation)
1995-12-31
Currently the ATHLET code is widely applied to the modelling of several WWER-type power plants with horizontal steam generators. A main drawback of all these applications is the insufficient verification of the models for the steam generator. This paper presents the nodalization schemes for the secondary side of the steam generator, the results of stationary calculations, and preliminary comparisons with experimental data. The consideration of circulation in the water inventory of the secondary side proves to be necessary. (orig.). 3 refs.
A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRs
International Nuclear Information System (INIS)
Clifford, I.; Ivanov, K.; Avramova, M.
2011-01-01
Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi-scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi-scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where the void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine-mesh single-phase flows as well as coarse-mesh flows in homogenized regions containing both fluid and solid materials. (author)
Dynamic modeling and adaptive vibration suppression of a high-speed macro-micro manipulator
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Fang, Sheng; Chen, Te-huan
2018-05-01
This paper presents the dynamic modeling and microscopic vibration suppression of a flexible macro-micro manipulator dedicated to high-speed operation. The manipulator system mainly consists of a macro motion stage and a flexible micromanipulator bonded with one macro-fiber-composite actuator. Based on Hamilton's principle and the Bouc-Wen hysteresis equation, the nonlinear dynamic model is obtained. Then, a hybrid control scheme is proposed to simultaneously suppress the elastic vibration during and after the motor motion. In particular, the hybrid control strategy is composed of a trajectory planning approach and an adaptive variable structure control. Moreover, two optimization indices regarding the comprehensive torques and synthesized vibrations are designed, and the optimal trajectories are acquired using a genetic algorithm. Furthermore, a nonlinear fuzzy regulator is used to adjust the switching gain in the variable structure control. Thus, a fuzzy variable structure control with a nonlinear adaptive control law is achieved. A series of experiments are performed to verify the effectiveness and feasibility of the established system model and hybrid control strategy. The excited vibration during the motor motion and the residual vibration after the motor motion are both decreased, while the settling time is shortened. Both the manipulation stability and operation efficiency of the manipulator are improved by the proposed hybrid strategy.
Diagnosis and Modeling of the Explosive Development of Winter Storms: Sensitivity to PBL Schemes
Liberato, Margarida L. R.; Pradhan, Prabodha K.
2014-05-01
The correct representation of extreme windstorms in regional models is of great importance for impact studies of climate change. The Iberian Peninsula has recently witnessed major damage from intense winter extratropical cyclones such as Klaus (January 2009), Xynthia (February 2010) and Gong (January 2013), which formed over the mid-Atlantic and experienced explosive intensification while travelling eastwards at lower latitudes than usual [Liberato et al. 2011; 2013]. In this paper the explosive development of these storms is simulated with the advanced mesoscale Weather Research and Forecasting model (WRF v3.4.1), initialized with NCEP Final Analysis (FNL) data as initial and lateral boundary conditions (boundary conditions updated at 3-hour intervals). The simulation experiments are conducted with two domains, a coarser (25 km) one and a nested (8.333 km) one, covering the entire North Atlantic and the Iberian Peninsula. The characteristics of these storms (e.g. wind speed, precipitation) are studied with the WRF model and compared with multiple observations. In this context, simulations with different Planetary Boundary Layer (PBL) schemes are performed. This approach aims at understanding which mechanisms favour the explosive intensification of these storms at lower than usual latitudes, thus improving the knowledge of the atmospheric dynamics (including small-scale processes) controlling the life cycle of midlatitude extreme storms and contributing to improved predictability and to our ability to forecast storms' impacts over the Iberian Peninsula. Acknowledgments: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through the COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). References: Liberato M.L.R., J.G. Pinto, I.F. Trigo, R.M. Trigo (2011) Klaus - an
Hydrodynamic modelling of the shock ignition scheme for inertial confinement fusion
International Nuclear Information System (INIS)
Vallet, Alexandra
2014-01-01
That significant pressure enhancement is explained by the contribution of hot electrons generated by non-linear laser/plasma interaction in the corona. The proposed analytical models allow the shock ignition scheme to be optimized, including the influence of the implosion parameters. Analytical, numerical and experimental results are mutually consistent. (author) [fr]
Avolio, E.; Federico, S.; Miglietta, M. M.; Lo Feudo, T.; Calidonna, C. R.; Sempreviva, A. M.
2017-08-01
The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated in an experimental site in Calabria region (southern Italy), in an area characterized by a complex orography near the sea. Results of 1 km × 1 km grid spacing simulations are compared with the data collected during a measurement campaign in summer 2009, considering hourly model outputs. Measurements from several instruments are taken into account for the performance evaluation: near surface variables (2 m temperature and relative humidity, downward shortwave radiation, 10 m wind speed and direction) from a surface station and a meteorological mast; vertical wind profiles from Lidar and Sodar; also, the aerosol backscattering from a ceilometer to estimate the PBL height. Results covering the whole measurement campaign show a cold and moist bias near the surface, mostly during daytime, for all schemes, as well as an overestimation of the downward shortwave radiation and wind speed. Wind speed and direction are also verified at vertical levels above the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes and a possible role of the simulated sensible heat fluxes for this mismatching is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first order non-local schemes, ACM2 and YSU, are the schemes with the best performance in representing parameters near the surface and in the boundary layer during the analyzed campaign.
A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment
International Nuclear Information System (INIS)
Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir
2015-01-01
This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL ® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s m −1 and 0–238 N s m −1 through the viscous and electromagnetic components, respectively. (paper)
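The parallel viscous-plus-electromagnetic arrangement can be illustrated with the textbook relation c_em = k_t²/(R_coil + R_load) for an idealized electromagnetic machine shunted by a load resistor. The constants below are chosen only to reproduce the reported 0–238 N s m−1 range; they are illustrative, not the paper's identified parameters.

```python
# Sketch of the hybrid damping concept above: a fixed viscous
# coefficient in parallel with an electromagnetic coefficient tuned
# via the external load resistance.  For an idealized machine with
# force constant k_t (N/A) and coil resistance R_coil (ohm):
#   c_em = k_t**2 / (R_coil + R_load)   [N s/m]

def em_damping(k_t, R_coil, R_load):
    """Electromagnetic damping coefficient in N s/m."""
    return k_t**2/(R_coil + R_load)

def total_damping_force(v, c_viscous=1300.0, k_t=21.8, R_coil=2.0,
                        R_load=0.0):
    """Total damper force (N) for piston velocity v (m/s): viscous bias
    plus adjustable electromagnetic contribution."""
    return (c_viscous + em_damping(k_t, R_coil, R_load))*v

c_em_max = em_damping(21.8, 2.0, 0.0)   # short-circuited coil, max damping
```

Opening the load circuit (large `R_load`) drives the electromagnetic term toward zero, which is the "0" end of the reported 0–238 N s m−1 range; a regenerative converter would sit where `R_load` is here.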
Adaptive control of a Stewart platform-based manipulator
Nguyen, Charles C.; Antrazi, Sami S.; Zhou, Zhen-Lei; Campbell, Charles E., Jr.
1993-01-01
A joint-space adaptive control scheme for controlling the noncompliant motion of a Stewart platform-based manipulator (SPBM) was implemented in the Hardware Real-Time Emulator at Goddard Space Flight Center. The six-degree-of-freedom SPBM uses two platforms and six linear actuators driven by DC motors. The adaptive control scheme is based on proportional-derivative controllers whose gains are adjusted by an adaptation law based on model reference adaptive control and Lyapunov's direct method. It is concluded that the adaptive control scheme provides superior tracking capability as compared to fixed-gain controllers.
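The gain-adaptation idea (model reference adaptive control with Lyapunov's direct method) can be illustrated on a one-degree-of-freedom toy plant. The plant, reference model and adaptation rate below are arbitrary illustrations, not the SPBM joint dynamics or the implemented adaptation law.

```python
# Lyapunov-based MRAC sketch for a scalar plant  x' = a*x + b*u  made
# to follow the stable reference model  xm' = -am*xm + am*r.
# Control u = kr*r - kx*x; the adaptation laws
#   kx' =  gamma*e*x,   kr' = -gamma*e*r,   e = x - xm,
# follow from making the derivative of the Lyapunov function
# V = e**2/2 + (dkx**2 + dkr**2)/(2*gamma) negative semidefinite.
dt, T = 0.001, 20.0
a, b = 1.0, 1.0          # open-loop unstable plant
am = 2.0                 # reference-model bandwidth
gamma, r = 2.0, 1.0      # adaptation rate, step command
x = xm = kx = kr = 0.0
for _ in range(int(T/dt)):
    e = x - xm                      # model-following error
    kx += gamma*e*x*dt              # adaptation laws (Euler integration)
    kr += -gamma*e*r*dt
    u = kr*r - kx*x                 # adaptive feedback/feedforward
    x += (a*x + b*u)*dt
    xm += (-am*xm + am*r)*dt
```

The error e is driven to zero even though the plant parameters a, b are never identified, which is the essence of *direct* adaptive control; the SPBM scheme applies the same idea per joint with PD structure.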
International Development Research Centre (IDRC) Digital Library (Canada)
A number of CCAA-supported projects have relevance to other important adaptation-related themes such as disaster preparedness and climate.
Evaluation-Function-based Model-free Adaptive Fuzzy Control
Directory of Open Access Journals (Sweden)
Agus Naba
2016-12-01
Full Text Available Designs of adaptive fuzzy controllers (AFC) are commonly based on the Lyapunov approach, which requires a known model of the controlled plant. They need to consider a Lyapunov function candidate as an evaluation function to be minimized. In this study these drawbacks were handled by designing a model-free adaptive fuzzy controller (MFAFC) using an approximate evaluation function defined in terms of the current state, the next state, and the control action. MFAFC considers the approximate evaluation function as an evaluative control performance measure similar to the state-action value function in reinforcement learning. The simulation results of applying MFAFC to the inverted pendulum benchmark verified the proposed scheme's efficacy.
Goal-oriented model adaptivity for viscous incompressible flows
van Opstal, T. M.
2015-04-04
© 2015, Springer-Verlag Berlin Heidelberg. In van Opstal et al. (Comput Mech 50:779–788, 2012) airbag inflation simulations were performed where the flow was approximated by Stokes flow. Inside the intricately folded initial geometry the Stokes assumption is argued to hold. This linearity assumption leads to a boundary-integral representation, the key to bypassing mesh generation and remeshing. It therefore enables very large displacements with near-contact. However, such a coarse assumption cannot hold throughout the domain; where it breaks down, one needs to revert to the original model. The present work formalizes this idea. A model-adaptive approach is proposed, in which the coarse model (a Stokes boundary-integral equation) is locally replaced by the original high-fidelity model (Navier–Stokes) based on a-posteriori estimates of the error in a quantity of interest. This adaptive modeling framework aims at taking away the burden and heuristics of manually partitioning the domain while providing new insight into the physics. We elucidate how challenges pertaining to model disparity can be addressed. Essentially, the solution in the interior of the coarse model domain is reconstructed as a post-processing step. We furthermore present two-dimensional numerical experiments to show that the error estimator is reliable.
Adaptive Gaussian Predictive Process Models for Large Spatial Datasets
Guhaniyogi, Rajarshi; Finley, Andrew O.; Banerjee, Sudipto; Gelfand, Alan E.
2011-01-01
Large point-referenced datasets occur frequently in the environmental and natural sciences. Use of Bayesian hierarchical spatial models for analyzing these datasets is undermined by onerous computational burdens associated with parameter estimation. Low-rank spatial process models attempt to resolve this problem by projecting spatial effects to a lower-dimensional subspace. This subspace is determined by a judicious choice of “knots” or locations that are fixed a priori. One such representation yields a class of predictive process models (e.g., Banerjee et al., 2008) for spatial and spatial-temporal data. Our contribution here expands upon predictive process models with fixed knots to models that accommodate stochastic modeling of the knots. We view the knots as emerging from a point pattern and investigate how such adaptive specifications can yield more flexible hierarchical frameworks that lead to automated knot selection and substantial computational benefits. PMID:22298952
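The knot-based projection at the heart of a predictive process model can be sketched as follows, assuming an exponential covariance and a regular 5 × 5 knot grid (both illustrative choices, not the paper's adaptive point-pattern prior): the parent process w(s) is replaced by its kriging interpolant from the knots, so all dense linear algebra is m × m rather than n × n.

```python
import numpy as np

# Fixed-knot predictive process sketch:
#   w_pp(s) = c(s, S*) @ C(S*, S*)^{-1} @ w(S*),
# where S* is the set of m knots.  n = 500 data locations, m = 25 knots.
rng = np.random.default_rng(0)
n, sig2, phi = 500, 1.0, 3.0
locs = rng.uniform(0.0, 1.0, (n, 2))                 # observation locations
g = np.linspace(0.1, 0.9, 5)
knots = np.array([(u, v) for u in g for v in g])     # regular 5 x 5 knot grid
m = len(knots)

def cov(A, B):
    """Exponential covariance sig2*exp(-phi*dist) between location sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return sig2*np.exp(-phi*d)

C_kk = cov(knots, knots) + 1e-10*np.eye(m)           # tiny jitter for Cholesky
C_sk = cov(locs, knots)
w_knots = np.linalg.cholesky(C_kk) @ rng.standard_normal(m)  # draw GP at knots
w_pp = C_sk @ np.linalg.solve(C_kk, w_knots)         # low-rank process at data
```

A useful property to verify is that the predictive process interpolates the parent process at the knots, since the projection is exact there; the adaptive extension described in the abstract would place a prior on `knots` instead of fixing the grid.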
Language Model Combination and Adaptation Using Weighted Finite State Transducers
Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.
2010-01-01
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can either be used to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
An adaptive distance measure for use with nonparametric models
International Nuclear Information System (INIS)
Garvey, D. R.; Hines, J. W.
2006-01-01
Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distances are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high-fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
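The Adaptive Euclidean distance described here is easy to sketch: inputs that fall inside the training range keep their usual role, while out-of-range inputs are dropped before the kernel weighting. The toy exemplar matrix, Gaussian kernel and bandwidth below are illustrative, not the plant data or the paper's tuned model.

```python
import numpy as np

def adaptive_euclidean(query, X):
    """Adaptive Euclidean distance: drop query inputs outside the
    exemplar min/max range, then take the Euclidean distance over the
    remaining (in-range) inputs."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    ok = (query >= lo) & (query <= hi)        # in-range inputs only
    if not ok.any():                          # degenerate case: keep all
        ok[:] = True
    return np.sqrt(((X[:, ok] - query[ok])**2).sum(axis=1))

def aakr_predict(query, X, h=0.5):
    """AAKR step: kernel-weighted average of the exemplars, with the
    weights computed from the adaptive distance."""
    d = adaptive_euclidean(query, X)
    w = np.exp(-(d/h)**2)                     # Gaussian kernel weights
    return (w[:, None]*X).sum(axis=0)/w.sum() # reconstructed signal

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])   # exemplar memory
good = aakr_predict(np.array([2.0, 20.0]), X)           # healthy query
faulty = aakr_predict(np.array([2.0, 500.0]), X)        # sensor 2 failed high
```

Because the failed input lies outside the training range it is excluded from the distance, so the reconstruction for the faulty query still recovers the correct operating point instead of being dragged toward the nearest extreme exemplar.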
Anisotropic mesh adaptation for marine ice-sheet modelling
Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier
2017-04-01
Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst others, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component to obtain reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables allows the number of mesh nodes to be reduced by more than one order of magnitude, for the same numerical accuracy, when compared to a uniform mesh
Modelling the mortality of members of group schemes in South Africa
African Journals Online (AJOL)
In this paper, the methodology underlying the graduation of the mortality of members of group schemes in South Africa underwritten by life insurance companies under group life-insurance arrangements is described and the results are presented. A multivariate parametric curve was fitted to the data for the working ages 25 ...
Tang, Yu Jia; Li, Ling Jun; Zhou, Yi Ming; Zhang, Da Wei; Yin, Wen Jun; Zhang, Meng; Xie, Bao Guo; Cheng, Nianliang
2017-04-01
Dust produced by wind erosion is a major source of atmospheric dust pollution, which has impacts on air quality, weather and climate. It is difficult to calculate the dust concentration in the atmosphere with certainty unless the dust-emission rate can be estimated accurately. Hence, owing to the unreliable estimation of the dust-emission flux from the ground surface, the dust forecast accuracy of air quality models is low. The main reason is that the parameter describing the dust-emission rate in regional air quality models is constant and cannot reflect the reality of surface dust-emission changes. A new scheme which uses vegetation information from satellite remote sensing data and meteorological conditions provided by a meteorological forecast model is developed to estimate the actual dust-emission rate from the ground surface. The results show that the new scheme can improve dust simulation and forecast performance significantly, reducing the root mean square error by 25%–68%. The DDR scheme can be coupled with any current air quality model (e.g. WRF-Chem, CMAQ, CAMx) to produce more accurate dust forecasts.
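The core idea, replacing a constant emission parameter with a flux driven by wind forcing and satellite-derived vegetation cover, can be sketched as follows. The threshold-friction-velocity form and all constants are illustrative assumptions of a common class of emission parameterizations, not the paper's calibrated DDR scheme.

```python
# Sketch of a vegetation-modulated dust-emission flux: zero below a
# threshold friction velocity u*_t, growing roughly with the cube of
# the friction velocity above it, and scaled by the bare-soil fraction
# derived from remote-sensing vegetation cover.

def dust_emission(u_star, u_star_t, veg_frac, c=1.0e-4):
    """Dust emission flux (arbitrary units).

    u_star    -- friction velocity from the meteorological model (m/s)
    u_star_t  -- threshold friction velocity of the soil (m/s)
    veg_frac  -- vegetation cover fraction from satellite data (0..1)
    c         -- illustrative (uncalibrated) emission constant
    """
    if u_star <= u_star_t:
        return 0.0                            # no saltation, no emission
    bare = max(0.0, 1.0 - veg_frac)           # only bare soil emits
    return c*bare*u_star**3*(1.0 - (u_star_t/u_star)**2)

calm = dust_emission(0.2, 0.3, 0.1)       # below threshold: no flux
windy = dust_emission(0.6, 0.3, 0.1)      # strong wind, sparse vegetation
vegetated = dust_emission(0.6, 0.3, 0.8)  # same wind, dense vegetation
```

In a coupled setup this function would be evaluated per grid cell each time step, with `veg_frac` updated from the satellite product, which is exactly what makes the emission rate time-varying rather than constant.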
Chuvatin, Alexandre S.; Rudakov, Leonid I.; Kokshenev, Vladimir A.; Aranchuk, Leonid E.; Huet, Dominique; Gasilov, Vladimir A.; Krukovskii, Alexandre Yu.; Kurmaev, Nikolai E.; Fursov, Fiodor I.
2002-12-01
This work introduces an inductive energy storage (IES) scheme which aims at pulsed-power conditioning at multi-MJ energies. The key element of the scheme is an additional plasma volume, where a magnetically accelerated wire array is used for inductive current switching. This plasma acceleration volume is connected in parallel to a microsecond capacitor bank and to a 100-ns current rise-time useful load. Simple estimates suggest that optimized scheme parameters could be reached even when operating at ultra-high currents. We describe first proof-of-principle experiments carried out on the GIT12 generator [1] at a wire-array current level of 2 MA. The obtained confirmation of the concept consists in the generation of a 200 kV voltage directly at an inductive load. This load voltage can already be sufficient to transfer the available magnetic energy into the kinetic energy of a liner at this current level. Two-dimensional modeling with the radiative MHD numerical tool Marple [2] confirms the development of inductive voltage in the system. However, the average voltage increase is accompanied by short-duration voltage drops due to interception of the current by the low-density upstream plasma. In our view, this instability of the current distribution represents the main physical limitation to the scheme's performance.
Directory of Open Access Journals (Sweden)
Yanxue Yu
2017-01-01
Full Text Available As a basic building block in power systems, the three-phase voltage-source inverter (VSI) connects distributed energy sources to the grid. For the three-phase VSI with an inductor-capacitor-inductor (LCL) filter, there are four main control schemes, depending on the current sampling position and the reference frame used. Different control schemes present different impedance characteristics over their corresponding frequency ranges. To analyze the resonance phenomena caused by variations in grid impedance, the sequence impedance models of LCL-type grid-connected three-phase inverters under the different control schemes are derived using the harmonic linearization method. The impedance-based stability analysis approach is then applied to compare relative stability, given the impedance differences at certain frequencies, and to choose the best control scheme and the better controller-parameter tuning method for the LCL-type three-phase VSI. Simulations and experiments both validate the resonance analysis results.
Kamesh, Reddi; Rani, Kalipatnapu Yamuna
2017-12-01
In this paper, a novel formulation for nonlinear model predictive control (MPC) is proposed, incorporating the extended Kalman filter (EKF) concept and using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules: online parameter estimation based on past measurements, and control estimation over the control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study, temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem, is considered as a test bed for applying the proposed ANN-EKFMPC strategy at the supervisory level in a cascade control configuration with a proportional-integral (PI) controller (ANN-EKFMPC-PI). The proposed approach incorporates all aspects of MPC, including a move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI configuration shows better controller performance in terms of temperature tracking, smoother input profiles, and constraint-handling ability than the ANN-MPC with PI approach, for two products in summer and in winter. The proposed scheme is found to be versatile even though it is based on a purely data-driven model with online parameter estimation.
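The online parameter-estimation module can be illustrated with a scalar EKF that treats a model parameter as a random-walk state and corrects it from each new measurement. The linear measurement model y = theta * u and all noise values below are hypothetical stand-ins for the paper's ANN model:

```python
import numpy as np

def ekf_parameter_update(theta, P, u, y_meas, Q=1e-4, R=1e-2):
    """One EKF step with the parameter theta modeled as a random walk.
    Illustrative scalar model y = theta * u (not the paper's ANN)."""
    # Predict: random-walk parameter, covariance grows by process noise Q
    P = P + Q
    # Update: H = d(y)/d(theta) = u for this linear measurement model
    H = u
    S = H * P * H + R          # innovation covariance
    K = P * H / S              # Kalman gain
    theta = theta + K * (y_meas - theta * u)
    P = (1.0 - K * H) * P
    return theta, P

# Estimate a hypothetical true parameter 2.5 from noisy measurements
theta, P = 0.0, 1.0
rng = np.random.default_rng(0)
for _ in range(200):
    u = rng.uniform(0.5, 1.5)
    y = 2.5 * u + 0.01 * rng.standard_normal()
    theta, P = ekf_parameter_update(theta, P, u, y)
```

The random-walk process noise Q keeps the filter responsive, which is what allows the supervisory model to adapt between batches.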
Model Adaptation for Prognostics in a Particle Filtering Framework
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. In practice, however, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
Model Adaptation for Prognostics in a Particle Filtering Framework
Directory of Open Access Journals (Sweden)
Bhaskar Saha
2011-01-01
Full Text Available One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the “curse of dimensionality”, i.e. the exponential growth of computational complexity with state dimension. In practice, however, this property holds only for “well-designed” particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion and Li-Polymer batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
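The core idea of augmenting the state vector with model parameters can be sketched as follows. The scalar system, noise levels and parameter jitter are illustrative assumptions, not the battery model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each particle carries an augmented state [x, a]: the physical state x of a
# hypothetical model x_{k+1} = a*x_k + u, plus the unknown parameter a that
# is adapted jointly with the state.
N = 2000
particles = np.column_stack([rng.normal(5.0, 1.0, N),       # state x
                             rng.uniform(0.5, 1.0, N)])     # parameter a
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, y, u=1.0, q=0.05, r=0.1, jitter=0.005):
    x, a = particles[:, 0], particles[:, 1]
    # Propagate: state through the model, parameter as a slow random walk
    x = a * x + u + rng.normal(0.0, q, x.size)
    a = a + rng.normal(0.0, jitter, a.size)
    # Weight by the likelihood of the measurement y = x + noise
    w = weights * np.exp(-0.5 * ((y - x) / r) ** 2)
    w /= w.sum()
    # Systematic resampling to avoid weight degeneracy
    pts = (rng.random() + np.arange(w.size)) / w.size
    idx = np.minimum(np.searchsorted(np.cumsum(w), pts), w.size - 1)
    return np.column_stack([x[idx], a[idx]]), np.full(w.size, 1.0 / w.size)

# Track a simulated system whose true parameter is a = 0.9
true_a, x_true = 0.9, 5.0
for _ in range(150):
    x_true = true_a * x_true + 1.0
    y = x_true + rng.normal(0.0, 0.1)
    particles, weights = pf_step(particles, weights, y)

a_est = particles[:, 1].mean()  # tuned parameter usable for prediction
```

Because resampling keeps only particles whose trajectories stay consistent with the data, the parameter cloud collapses onto the true value, which is the model-adaptation property the abstract refers to.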
Dynamics Modeling and L1 Adaptive Control of a Transport Aircraft for Heavyweight Airdrop
Directory of Open Access Journals (Sweden)
Ri Liu
2015-01-01
Full Text Available The longitudinal nonlinear aircraft model with cargo extraction is derived using theoretical mechanics and flight mechanics. The nonlinear model is then approximated by a semilinear time-varying system, with the cargo disturbances viewed as unknown nonlinearities of both matched and unmatched types. On this basis, a novel autopilot inner loop based on LQR and L1 adaptive theory is developed to reject the unknown nonlinear disturbances caused by the cargo and to accommodate uncertainties. Analysis shows that the controller can guarantee robustness in the presence of fast adaptation, without exciting control-signal oscillations or requiring gain scheduling. The overall control system is completed with an outer-loop altitude-hold control based on a PID controller. Simulations are conducted under the condition that a transport aircraft performs a maximum-load airdrop mission at a height of 82 ft, using single-row single-platform mode. The results show the good performance of the control scheme, which meets the airdrop mission performance indexes well, even in the presence of ±20% aerodynamic uncertainties.
Directory of Open Access Journals (Sweden)
Saikat Kumar Shome
2015-01-01
Full Text Available Piezoelectric-stack actuated platforms are very popular in nanopositioning, with myriad applications such as micro/nanofactories, atomic force microscopy, scanning probe microscopy, wafer design, biological cell manipulation, and so forth. Motivated by the necessity to improve trajectory tracking in such applications, this paper addresses the problem of rate-dependent hysteretic nonlinearity in piezoelectric actuators (PEAs). The classical second-order Dahl model for hysteresis encapsulation is introduced first, followed by identification of its parameters through particle swarm optimization. A novel inversion-based feedforward mechanism in combination with a feedback compensator is proposed to achieve high-precision tracking, wherein the paradoxical concept of noise as a performance enhancer is introduced in the realm of PEAs. Having observed that dither-induced stochastic resonance in the presence of periodic forcing reduces tracking error, the dither capability is further explored in conjunction with a novel output-harmonics-based adaptive control scheme. The proposed adaptive controller is then augmented with an internal-model-control-based approach to impart robustness against parametric variations and external disturbances. The proposed control law has been employed to track multifrequency signals with consistent compensation of the rate-dependent hysteresis of the PEA. The results indicate greatly improved positioning accuracy along with considerable robustness achieved with the proposed integrated approach, even for dual-axis tracking applications.
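For illustration, hysteresis encapsulation in the Dahl family can be sketched with the classical first-order Dahl model (the paper uses a second-order variant, which is not reproduced here); the stiffness sigma, saturation force Fc, and drive signal are arbitrary assumptions:

```python
import math

def dahl_step(F, v, dt, sigma=5.0, Fc=1.0, n=1):
    """One Euler step of the first-order Dahl hysteresis model:
       dF/dt = sigma * v * (1 - (F/Fc)*sign(v))**n
    The output force F is bounded by +/- Fc and lags the input,
    which is the hysteresis loop the compensator must invert."""
    s = 1.0 if v >= 0 else -1.0
    dF = sigma * v * (1.0 - (F / Fc) * s) ** n
    return F + dF * dt

# Drive with a sinusoidal displacement and record the hysteretic force
F, dt = 0.0, 1e-3
forces, disps = [], []
for k in range(5000):
    t = k * dt
    x = math.sin(2.0 * math.pi * t)               # displacement input
    v = 2.0 * math.pi * math.cos(2.0 * math.pi * t)  # its velocity
    F = dahl_step(F, v, dt)
    forces.append(F)
    disps.append(x)
```

Plotting `forces` against `disps` would trace the characteristic hysteresis loop; an inversion-based feedforward, as in the paper, feeds the inverse of this map ahead of the plant.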
Adaptive thermal modeling of Li-ion batteries
International Nuclear Information System (INIS)
Shadman Rad, M.; Danilov, D.L.; Baghalha, M.; Kazemeini, M.; Notten, P.H.L.
2013-01-01
Highlights: • A simple, accurate and adaptive thermal model is proposed for Li-ion batteries. • Equilibrium voltages, overpotentials and entropy changes are quantified from experimental results. • Entropy changes are highly dependent on the battery State-of-Charge. • Good agreement between simulated and measured heat development is obtained under all conditions. • Radiation contributes about 50% of heat dissipation at elevated temperatures. -- Abstract: An accurate thermal model to predict the heat generation in rechargeable batteries is an essential tool for advanced thermal management in high-power applications, such as electric vehicles. For such applications, the battery materials’ details and cell design are normally not provided. In this work a simple, though accurate, thermal model for batteries has been developed, considering the temperature- and current-dependent overpotential heat generation and the State-of-Charge dependent entropy contributions. High-power rechargeable Li-ion (7.5 Ah) batteries have been experimentally investigated and the results are used for model verification. It is shown that the State-of-Charge dependent entropy is a significant heat source and is therefore essential to correctly predict the thermal behavior of Li-ion batteries under a wide variety of operating conditions. An adaptive model is introduced to obtain these entropy values. A temperature-dependent equation for heat transfer to the environment is also taken into account. Good agreement between the simulations and measurements is obtained in all cases. The parameters for both the heat generation and heat transfer processes can be applied to the thermal design of advanced battery packs. The proposed methodology is generic and independent of the cell chemistry and battery design. The parameters for the adaptive model can be determined by performing simple cell potential/current and temperature measurements for a limited number of charge/discharge cycles.
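The heat balance described above can be sketched with the usual split into irreversible overpotential heat, reversible entropic heat, and convective-plus-radiative dissipation. Sign conventions and parameter values below are illustrative assumptions, not the paper's fitted values:

```python
STEFAN_BOLTZMANN = 5.670e-8  # W m^-2 K^-4

def heat_generation(I, V, E_eq, T, dEdT):
    """Cell heat generation rate (W). Here I > 0 denotes discharge.
    Conventions for the entropic term differ between references."""
    q_overpotential = I * (E_eq - V)   # irreversible overpotential heat
    q_entropy = -I * T * dEdT          # reversible heat; sign set by dE/dT(SoC)
    return q_overpotential + q_entropy

def heat_dissipation(T, T_amb, area, h_conv=10.0, emissivity=0.9):
    """Heat flow to the environment (W): convection plus radiation. The
    radiative T^4 term grows quickly at elevated temperatures, consistent
    with radiation carrying a large share of the dissipation there."""
    q_conv = h_conv * area * (T - T_amb)
    q_rad = emissivity * STEFAN_BOLTZMANN * area * (T**4 - T_amb**4)
    return q_conv + q_rad
```

Feeding measured I, V and the SoC-dependent dE/dT into `heat_generation`, and balancing against `heat_dissipation`, gives the kind of lumped thermal model the abstract describes.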
Energy Technology Data Exchange (ETDEWEB)
Goudon, Thierry, E-mail: thierry.goudon@inria.fr [Team COFFEE, INRIA Sophia Antipolis Mediterranee (France); Labo. J.A. Dieudonne CNRS and Univ. Nice-Sophia Antipolis (UMR 7351), Parc Valrose, 06108 Nice cedex 02 (France); Parisot, Martin, E-mail: martin.parisot@gmail.com [Project-Team SIMPAF, INRIA Lille Nord Europe, Park Plazza, 40 avenue Halley, F-59650 Villeneuve d' Ascq cedex (France)
2012-10-15
In the so-called Spitzer-Haerm regime, the equations of plasma physics reduce to a nonlinear parabolic equation for the electronic temperature. Coming back to the derivation of this limiting equation through hydrodynamic-regime arguments, one is led to construct a hierarchy of models where the heat fluxes are defined through a non-local relation, which can equally be reinterpreted by introducing coupled diffusion equations. We address the question of designing numerical methods to simulate these equations. The basic requirement for the scheme is to be asymptotically consistent with the Spitzer-Haerm regime. Furthermore, the constraints of physically realistic simulations make the use of unstructured meshes unavoidable. We develop a Finite Volume scheme, based on vertex-based discretization, which meets these objectives. We discuss, on numerical grounds, the efficiency of the method and the ability of the generalized models to capture relevant phenomena missed by the asymptotic problem.
Feiccabrino, James; Lundberg, Angela; Sandström, Nils
2013-04-01
Many hydrological models determine precipitation phase using surface weather station data. However, there is a declining number of augmented weather stations reporting manually observed precipitation phase, and a large number of automated observing systems (AOS) do not report precipitation phase at all. Automated precipitation phase determination suffers from low accuracy in the precipitation phase transition zone (PPTZ), i.e. the temperature range -1 °C to 5 °C where rain, snow and mixed precipitation are all possible. Therefore, it is valuable to revisit surface-based precipitation phase determination schemes (PPDS) while manual verification is still widely available. Hydrological and meteorological approaches to PPDS are vastly different. Most hydrological models apply surface meteorological data in one of two main PPDS approaches. The first is a single rain/snow threshold temperature (TRS); the second uses a formula describing how the mixed-precipitation fraction changes between the threshold temperatures TS (below this temperature all precipitation is considered snow) and TR (above this temperature all precipitation is considered rain). However, both approaches ignore the effect of lower-tropospheric conditions on surface precipitation phase. An alternative could be to apply a meteorological approach in a hydrological model. Many meteorological approaches rely on weather balloon data to determine the initial precipitation phase, and on latent heat transfer for the melting or freezing of precipitation falling through the lower troposphere. These approaches can improve hydrological PPDS but would require additional input data. It would therefore be beneficial to link expected lower-tropospheric conditions to AOS data already used by the model. In a single air mass, rising air can be assumed to cool at a steady rate due to the decrease in atmospheric pressure. When two air masses meet, warm air is forced to ascend the denser cold air. This causes a thin sharp warming (frontal
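The second PPDS approach mentioned above is commonly implemented as a linear interpolation of the snow fraction between TS and TR. A minimal sketch, with illustrative threshold values:

```python
def snow_fraction(t_air, t_snow=-1.0, t_rain=3.0):
    """Fraction of precipitation falling as snow under a two-threshold
    linear mixed-phase scheme. The thresholds t_snow (all snow below)
    and t_rain (all rain above) are illustrative values in deg C; real
    schemes calibrate them regionally."""
    if t_air <= t_snow:
        return 1.0                      # all snow
    if t_air >= t_rain:
        return 0.0                      # all rain
    # Linear mix inside the precipitation phase transition zone
    return (t_rain - t_air) / (t_rain - t_snow)
```

A single-threshold TRS scheme is the degenerate case t_snow == t_rain; the meteorological alternatives discussed in the abstract would replace the air-temperature argument with lower-tropospheric information.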
Fan, Hong; Li, Huan
2015-12-01
Location-related data are playing an increasingly irreplaceable role in business, government and scientific research. At the same time, the amount and the variety of data are rapidly increasing. It is a challenge to quickly find required information in this rapidly growing volume of data, and to efficiently provide different levels of geospatial data to users. This paper puts forward a data-oriented access model for geographic information science data. First, we analyze the features of GIS data, including traditional types such as vector and raster data and new types such as Volunteered Geographic Information (VGI). Taking these analyses into account, a classification scheme for geographic data is proposed, and TRAFIE is introduced to describe the establishment of a multi-level model for geographic data. Based on this model, a multi-level, scalable access system for geospatial information is put forward. Users can select different levels of data according to their concrete application needs. Pull-based and push-based data access mechanisms based on this model are presented. A Service-Oriented Architecture (SOA) was chosen for the data processing. The model has been demonstrated by simulating fire disaster data collection for the decision-making processes of government departments. The use case shows that the data model and the data provision system are flexible and have good adaptability.
Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.
2015-12-01
In this study, we evaluated the performance of size distribution functions (SDFs) with two and three moments in fitting the observed size distributions of rain droplets at three different heights. The goal is to improve the microphysics schemes in mesoscale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution, i.e. a gamma SDF with a fixed shape parameter (FSP); gamma SDFs with the shape parameter diagnosed following Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08); a gamma SDF with a solved shape parameter (SSP); and the lognormal SDF. Based on preliminary experiments, three ensemble methods for deciding the gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying an FSP was 10^-2 when fitting the 0th-order moment of the observed rain droplet distribution, rising to 10^-1 for the 1st-4th-order moments and 10^0 for the 5th-6th-order moments. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods improved the fitting accuracy for the 0th-6th-order moments, especially the method coupling SSP and DSPS08, which gave average relative errors of 6.46% for the 1st-4th-order moments and 11.90% for the 5th-6th-order moments. The relative error of fitting three moments using the lognormal SDF was much larger than that of the gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range caused overflow in the calculation. When the average diameter of the rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the choice of moment group. When the ensemble method coupling SSP and DSPS08 was used, a better fit
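The moment-fitting idea can be illustrated for a gamma SDF with a prescribed shape parameter: two observed moments suffice to recover the intercept and slope parameters in closed form, and mu = 0 recovers the M-P exponential SDF. The formulas below are the standard gamma-moment identities, not the paper's specific diagnosis or ensemble methods:

```python
import math

def gamma_moment(N0, lam, mu, k):
    """k-th moment of the gamma SDF N(D) = N0 * D**mu * exp(-lam*D):
       M_k = N0 * Gamma(mu + k + 1) / lam**(mu + k + 1)."""
    return N0 * math.gamma(mu + k + 1.0) / lam ** (mu + k + 1.0)

def gamma_sdf_from_moments(M0, M1, mu=0.0):
    """Recover (N0, lam) from the 0th and 1st moments with the shape
    parameter mu prescribed (a fixed-shape-parameter, FSP-style fit).
    From M1/M0 = (mu + 1)/lam:"""
    lam = (mu + 1.0) * M0 / M1
    N0 = M0 * lam ** (mu + 1.0) / math.gamma(mu + 1.0)
    return N0, lam
```

Three-moment (shape-solving) schemes add a third moment to diagnose mu itself instead of prescribing it, which is where the overflow and unavailable-shape-parameter issues discussed above arise.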
International Nuclear Information System (INIS)
Mukhamedov, Farrukh; Saburov, Mansoor
2010-06-01
In the present paper we study forward Quantum Markov Chains (QMC) defined on a Cayley tree. Using the tree structure of graphs, we give a construction of quantum Markov chains on a Cayley tree. By means of such constructions, we prove the existence of a phase transition for the XY-model on a Cayley tree of order three in the QMC scheme. By a phase transition we mean the existence of two distinct QMCs for the given family of interaction operators {K }. (author)
Peng, Qiujin
2017-09-18
In this paper, we present two second-order numerical schemes to solve the fourth-order parabolic equation derived from a diffuse interface model with the Peng-Robinson equation of state (EOS) for a pure substance. The mass conservation, energy decay property, unique solvability and L-infinity convergence of these two schemes are proved. Numerical results demonstrate the good approximation of the fourth-order equation and confirm the reliability of these two schemes.
Adaptive output feedback control of aircraft flexible modes
Ponnusamy, Sangeeth Saagar; Bordeneuve-Guibé, Joël
2012-01-01
The application of adaptive output feedback augmentative control to the flexible aircraft problem is presented. Experimental validation of the control scheme was carried out using a three-disk torsional pendulum. In the reference-model adaptive control scheme, a rigid-aircraft reference model and neural network adaptation are used to control structural flexible modes and compensate for the effects of unmodeled dynamics and parametric variations of a classical high-order large passenger aircraft. Th...
Energy Technology Data Exchange (ETDEWEB)
Halfmann, C.; Holzmann, H.; Isermann, R. [Technische Univ. Darmstadt (Germany). Inst. fuer Automatisierungstechnik; Hamann, C.D.; Simm, N. [Opel (A.) AG, Ruesselsheim (Germany). Gruppe Chassis und Fahrerassistenzsysteme
1999-12-01
The application of modern simulation tools offering additional support during the vehicle development process is accepted to a large extent by most car manufacturers. Just like new model-based control strategies, these simulation investigations require very accurate - and thus very complex - models of vehicle dynamics, which can be processed in real time. As an example of such a vehicle model, this article describes a real-time vehicle simulation model which was developed at the Institute of Automatic Control at Darmstadt University of Technology, in co-operation with the ITDC of the Adam OPEL AG. By applying modern adaptation techniques, this vehicle model is able to calculate onboard the important variables describing the actual driving state even if the environmental conditions change. (orig.) [German original, translated] The use of simulation tools to support vehicle development has become widely established among most automobile manufacturers. Like novel model-based control strategies, these simulation studies demand ever more exact - and thus more complex - vehicle dynamics models that can be evaluated in real time. As an example of such a complete vehicle model, this contribution describes a real-time-capable model of the vehicle motion about all three main axes, developed at the Institute of Automatic Control of TU Darmstadt in cooperation with the International Technical Development Centre (ITEZ) of Adam Opel AG. Through the use of adaptation methods, it is able to determine important vehicle dynamics state variables on board even under changing environmental conditions. (orig.)
Adaptive Admittance Control for an Ankle Exoskeleton Using an EMG-Driven Musculoskeletal Model
Directory of Open Access Journals (Sweden)
Shaowei Yao
2018-04-01
Full Text Available Various rehabilitation robots have been employed to recover the motor function of stroke patients. To improve the effect of rehabilitation, robots should promote patient participation and provide compliant assistance. This paper proposes an adaptive admittance control scheme (AACS) consisting of an admittance filter, an inner position controller, and an electromyography (EMG)-driven musculoskeletal model (EDMM). The admittance filter generates the subject's intended motion according to the joint torque estimated by the EDMM. The inner position controller tracks the intended motion, and its parameters are adjusted according to the estimated joint stiffness. Eight healthy subjects were instructed to wear the ankle exoskeleton robot, and they completed a series of sinusoidal tracking tasks involving ankle dorsiflexion and plantarflexion. The robot was controlled by the AACS and by a non-adaptive admittance control scheme (NAACS) at four fixed parameter levels. The tracking performance was evaluated using the jerk value, position error, interaction torque, and EMG levels of the tibialis anterior (TA) and gastrocnemius (GAS). For the NAACS, the jerk value and position error increased with the parameter levels, and the interaction torque and EMG levels of the TA tended to decrease. In contrast, the AACS maintained a moderate jerk value, position error, interaction torque, and TA EMG level. These results demonstrate that the AACS achieves a good tradeoff between accurate tracking and compliant assistance because it can respond in real time to stiffness changes in the ankle joint. The AACS can alleviate the conflict between accurate tracking and compliant assistance and has potential for application in robot-assisted rehabilitation.
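An admittance filter of the kind described maps estimated joint torque to intended motion through a virtual mass-damper-spring law. A minimal sketch with illustrative parameters (the AACS additionally adapts the inner-loop gains to the estimated joint stiffness, which is not shown here):

```python
def admittance_step(x, v, tau, dt, M=1.0, B=5.0, K=20.0):
    """One semi-implicit Euler step of the admittance filter
       M*a + B*v + K*x = tau,
    mapping the torque tau (here, as estimated by an EMG-driven model)
    into the intended position x and velocity v. M, B, K are illustrative
    virtual inertia, damping and stiffness values."""
    a = (tau - B * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# A constant torque drives the filter toward the static equilibrium tau/K
x, v = 0.0, 0.0
for _ in range(20000):  # 20 s at dt = 1 ms
    x, v = admittance_step(x, v, tau=2.0, dt=1e-3)
```

The resulting trajectory, not the raw torque, is what the inner position controller tracks; lowering K makes the robot more compliant to the subject's effort.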
Directory of Open Access Journals (Sweden)
M. Cassiani
2016-11-01
Full Text Available The offline FLEXible PARTicle (FLEXPART) stochastic dispersion model is currently a community model used by many scientists. Here, an alternative FLEXPART model version has been developed and tailored to the meteorological output data generated by the CMIP5 version of the Norwegian Earth System Model (NorESM1-M). The atmospheric component of NorESM1-M is based on the Community Atmosphere Model (CAM4); hence, this FLEXPART version could be widely applicable, and it provides a new advanced tool to directly analyse and diagnose the atmospheric transport properties of the state-of-the-art climate model NorESM in a reliable way. The adaptation of FLEXPART to NorESM required new routines to read meteorological fields, new post-processing routines to obtain the vertical velocity in the FLEXPART coordinate system, and other changes, which are described in detail in this paper. To validate the model, several tests were performed that offered the possibility to investigate some aspects of offline global dispersion modelling. First, a comprehensive comparison was made between the tracer transport from several point sources around the globe calculated online by the transport scheme embedded in CAM4 and by the FLEXPART model applied offline on the output data. The comparison allowed us to investigate several aspects of the transport schemes, including the approximation introduced by using an offline dispersion model with the need to transform the vertical coordinate system, the influence on the model results of the sub-grid-scale parameterisations of convection and boundary layer height, and the possible advantage entailed in using a numerically non-diffusive Lagrangian particle solver. Subsequently, the reference FLEXPART model and the FLEXPART–NorESM/CAM version were compared with respect to the well-mixed state of the atmosphere in a 1-year global simulation. The two model versions use different methods to obtain the vertical velocity but no
Xie, Zhipeng; Hu, Zeyong; Xie, Zhenghui; Jia, Binghao; Sun, Genhou; Du, Yizhen; Song, Haiqing
2018-02-01
This paper presents the impact of two snow cover schemes (NY07 and SL12) in the Community Land Model version 4.5 (CLM4.5) on the snow distribution and surface energy budget over the Tibetan Plateau. The simulated snow cover fraction (SCF), snow depth, and snow cover days were evaluated against in situ snow depth observations and a satellite-based snow cover product and snow depth dataset. The results show that the SL12 scheme, which considers snow accumulation and snowmelt processes separately, has a higher overall accuracy (81.8%) than NY07 (75.8%). However, while SL12 underestimates the SCF at a rate of 15.1%, NY07 overestimates it at a rate of 15.2%. Both schemes capture the distribution of the maximum snow depth well but show large positive biases in the average value through all periods (3.37, 3.15, and 1.48 cm for NY07; 3.91, 3.52, and 1.17 cm for SL12) and overestimate snow cover days compared with the satellite-based product and in situ observations. Higher altitudes show larger root-mean-square errors (RMSEs) in the simulations of snow depth and snow cover days during the snow-free period. Moreover, the surface energy flux estimates from the SL12 scheme are generally superior to those from NY07 when evaluated against ground-based observations, in particular for net radiation and sensible heat flux. This study has great implications for further improvement of subgrid-scale snow variations over the Tibetan Plateau.
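As one concrete example of such a scheme, the NY07 snow cover fraction is commonly written as a tanh function of snow depth and snow density; the default constants below follow widely used CLM settings but should be treated as illustrative:

```python
import math

def scf_ny07(snow_depth, rho_snow, z0=0.01, rho_new=100.0, m=1.6):
    """Snow cover fraction in the style of Niu & Yang (2007):
       SCF = tanh( h / (2.5 * z0 * (rho_snow/rho_new)**m) )
    snow_depth : snow depth h (m)
    rho_snow   : bulk snow density (kg m^-3)
    z0         : ground roughness length (m), illustrative default
    rho_new    : fresh-snow density (kg m^-3), illustrative default
    m          : melt-shape factor, illustrative default
    Denser (aged, melting) snow lowers SCF for the same depth, which is
    one source of the SCF biases compared in the abstract."""
    if snow_depth <= 0.0:
        return 0.0
    return math.tanh(snow_depth / (2.5 * z0 * (rho_snow / rho_new) ** m))
```

A scheme such as SL12 instead applies separate SCF relations during accumulation and melt, which is the structural difference the evaluation above attributes the accuracy gain to.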
Hamdi, R.; Degrauwe, D.; Duerinckx, A.; Cedilnik, J.; Costa, V.; Dalkilic, T.; Essaouini, K.; Jerczynki, M.; Kocaman, F.; Kullmann, L.; Mahfouf, J.-F.; Meier, F.; Sassi, M.; Schneider, S.; Váňa, F.; Termonia, P.
2014-01-01
The newly developed land surface scheme SURFEX (SURFace EXternalisée) is implemented in a limited-area numerical weather prediction model running operationally in a number of countries of the ALADIN and HIRLAM consortia. The primary question addressed is the ability of SURFEX to serve as a new land surface scheme, and thus its potential use in an operational configuration instead of the original ISBA (Interactions between Soil, Biosphere, and Atmosphere) scheme. The results show that the introduction of SURFEX either improves or has a neutral impact on the 2 m temperature, 2 m relative humidity and 10 m wind. However, SURFEX has a tendency to produce higher maximum temperatures at high-elevation stations during winter daytime, which degrades the 2 m temperature scores. In addition, surface radiative and energy fluxes improve compared to observations from the Cabauw tower. The results also show that promising improvements, with a demonstrated positive impact on forecast performance, are achieved by introducing the town energy balance (TEB) scheme. The use of SURFEX was found to have a neutral impact on the precipitation scores. However, the implementation of TEB within SURFEX for a high-resolution run tends to concentrate rainfall locally, and the total accumulated precipitation clearly decreases during the summer. One of the novel features of SURFEX is the availability of more advanced surface data assimilation using the extended Kalman filter. The results over Belgium show that the forecast scores are similar for the extended Kalman filter and the classical optimal interpolation scheme. Finally, concerning the vertical scores, the introduction of SURFEX either improves or has a neutral impact in the free atmosphere.
ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION
HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme
Directory of Open Access Journals (Sweden)
V. I. Shynkarenko
2016-04-01
Full Text Available Purpose. The second part of the paper completes the presentation of constructive and productive structures (CPS) modeling the adaptation of data structures in memory (RAM). The purpose of this second part is to develop a model of the process of adapting data in RAM under different hardware and software environments and different data processing scenarios. Methodology. The methodology of mathematical and algorithmic constructionism was applied. In this part of the paper, the constructors of scenarios and of adaptation processes were developed on the basis of a generalized CPS through its transformational conversions. Constructors are interpreted, specialized CPS. The terminal alphabets of the scenario constructor were defined as data processing algorithms, and those of the adaptation constructor as algorithmic components of the adaptation process. The methodology involves the development of substitution rules that determine the derivation process of the relevant structures. Findings. In the second part of the paper, the system is represented by CPS modeling the adaptation of data placement in RAM, namely the constructors of scenarios and of adaptation processes. The result of executing the scenario constructor is a set of data processing operations in the form of text in the C# programming language; the result of the adaptation-process constructor is an adaptation process; and the result of the adaptation process is the adapted binary code for processing the data structures. Originality. For the first time, a constructive model of data processing is proposed: a scenario that takes into account the order and number of accesses to the various elements of data structures, together with the adaptation of data structures to different hardware and software environments. The placement of data in RAM and the processing algorithms are adapted at the same time. The application of constructionism in modeling allows one to link data models and algorithms for
DEFF Research Database (Denmark)
Yang, Z.; Izadi-Zamanabadi, R.; Blanke, Mogens
2000-01-01
Based on the model-matching strategy, an adaptive control reconfiguration method for a class of nonlinear control systems is proposed using a multiple-model scheme. Instead of requiring the nominal and faulty nonlinear systems to match each other directly in some proper sense, three sets of LTI models are employed to approximate the faulty, reconfigured and nominal nonlinear systems, respectively, with respect to on-line information about the operating system, and a set of compensating modules is proposed and designed so as to make the local LTI model approximating the reconfigured nonlinear system match the corresponding LTI model approximating the nominal nonlinear system in some optimal sense. The compensating modules are designed by the Pseudo-Inverse Method based on the local LTI models of the nominal and faulty nonlinear systems. Moreover, these modules should update...
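The Pseudo-Inverse Method mentioned above admits a compact sketch: given LTI approximations of the nominal dynamics and the faulty dynamics, a reconfiguring feedback gain is chosen so that the closed-loop faulty system matches the nominal one in a least-squares sense. The matrices below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative LTI approximations (assumed for this sketch):
# nominal dynamics A_n, and faulty dynamics (A_f, B_f).
A_n = np.array([[0.0, 1.0], [-2.0, -3.0]])
A_f = np.array([[0.0, 1.0], [-1.0, -1.0]])   # the fault altered the second row
B_f = np.array([[0.0], [1.0]])

# Pseudo-Inverse Method: pick the gain K minimizing ||(A_f - B_f K) - A_n||
# in the least-squares sense, via the Moore-Penrose pseudo-inverse of B_f.
K = np.linalg.pinv(B_f) @ (A_f - A_n)
A_closed = A_f - B_f @ K
```

Here the fault only affects the actuated row, so the match is exact; in general `A_closed` is only the closest achievable approximation of `A_n`.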
International Development Research Centre (IDRC) Digital Library (Canada)
Nairobi, Kenya. 28 Adapting Fishing Policy to Climate Change with the Aid of Scientific and Endogenous Knowledge. Cap Verde, Gambia,. Guinea, Guinea Bissau,. Mauritania and Senegal. Environment and Development in the Third World. (ENDA-TM). Dakar, Senegal. 29 Integrating Indigenous Knowledge in Climate Risk ...
Motion Planning of Bimanual Robot Using Adaptive Model of Assembly
Hwang, Myun Joong; Lee, Doo Yong; Chung, Seong Youb
This paper presents a motion planning method for a bimanual robot executing assembly tasks. The method employs adaptive modeling, which can automatically generate an assembly model and modify it during actual assembly. Bimanual robotic assembly is modeled at the task level using contact states of the workpieces and their transitions. The lower-level velocity commands of the workpieces are derived automatically by solving an optimization problem formulated with the assembly constraints, the positions of the workpieces, and the kinematics of the manipulators. Motion requirements of the workpieces are then transformed into motion commands for the bimanual robot. The proposed approach is evaluated with experiments on peg-in-hole assembly with an L-shaped peg.
Adaptive Active Noise Suppression Using Multiple Model Switching Strategy
Directory of Open Access Journals (Sweden)
Quanzhen Huang
2017-01-01
Full Text Available Active noise suppression is difficult in applications where the system response varies with time. Existing control algorithms with online identification carry a heavy computational burden and can easily destabilize the control system. A new active noise control algorithm is proposed in this paper that employs a multiple-model switching strategy to handle a time-varying secondary path, which significantly reduces the computation. First, a noise-control-system modeling method is proposed for duct-like applications. Then a multiple-model adaptive control algorithm is proposed with a new multiple-model switching strategy based on the filtered-u least mean square (FULMS) algorithm. Finally, the proposed algorithm was implemented on a Texas Instruments digital signal processor (DSP TMS320F28335), and real-time experiments were run to compare it against the FULMS algorithm with online identification. The experimental verification tests show that the proposed algorithm is effective, with good noise suppression performance.
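As a minimal illustration of the filtered-x family of algorithms that FULMS extends, the sketch below runs a single-tap filtered-x LMS loop with scalar primary and secondary paths. All gains and the step size are assumed values for illustration, not parameters from the paper.

```python
import math

P, S = 0.9, 0.5        # assumed scalar primary- and secondary-path gains
w, mu = 0.0, 0.1       # adaptive weight and step size

for n in range(2000):
    x = math.sin(0.1 * n)        # reference signal
    d = P * x                    # disturbance reaching the error sensor
    e = d - S * (w * x)          # residual after anti-noise passes through S
    w += mu * e * (S * x)        # filtered-x LMS update (reference filtered by S)

# The weight converges toward P / S = 1.8, driving the residual e to zero.
```

The key design point, shared with FULMS, is that the reference signal is filtered through a model of the secondary path before entering the weight update; without that filtering the adaptation can diverge.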
Prediction of conductivity by adaptive neuro-fuzzy model.
Directory of Open Access Journals (Sweden)
S Akbarzadeh
Full Text Available Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One requirement of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference from basic experimental factors such as temperature, frequency, film thickness and the weight percentage of salt. To provide optimal sets of fuzzy-logic rule bases, the grid-partition fuzzy inference method was applied. The model was validated on four random data sets, and its validity was evaluated using eleven statistical measures. Statistical analysis of the results clearly shows that modeling with an adaptive neuro-fuzzy system is powerful enough for the prediction of conductivity.
Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution
Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.
2016-12-01
Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. Current geochemical modeling of fractures, however, cannot capture this multiscale nature of geochemical and mechanical impacts on fracture evolution, being limited to either a continuum or a pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture, omitting pore-scale effects and assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in the domain sizes they can tractably treat (Steefel et al., 2013). Thus, there is a critical need for an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability, Chombo-Crunch, leveraged by high resolution characterization and experiments. The modeling framework is based on the adaptive capability in Chombo
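The cubic law mentioned above can be stated concretely: for laminar flow between the parallel plates of an idealized fracture of aperture b, the volumetric flow per unit pressure gradient scales with b cubed. A minimal sketch, with all numerical values assumed for illustration:

```python
def cubic_law_flow(aperture, width, viscosity, pressure_gradient):
    """Volumetric flow rate Q = (w * b**3 / (12 * mu)) * dP/dx
    for laminar flow between parallel plates (the cubic law)."""
    return width * aperture**3 / (12.0 * viscosity) * pressure_gradient

# Doubling the aperture increases the flow by a factor of 2**3 = 8,
# which is why dissolution-driven widening changes permeability so fast.
q1 = cubic_law_flow(1e-4, 1.0, 1e-3, 100.0)   # 0.1 mm aperture
q2 = cubic_law_flow(2e-4, 1.0, 1e-3, 100.0)   # 0.2 mm aperture
```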
Directory of Open Access Journals (Sweden)
Saulo Frietas
2012-01-01
Full Text Available An advection scheme that maintains the initial monotonic characteristics of a transported tracer field, while at the same time producing low numerical diffusion, is implemented in the Coupled Chemistry-Aerosol-Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CCATT-BRAMS). Several comparisons of transport modeling using the new and original (non-monotonic) CCATT-BRAMS formulations are performed. Idealized 2-D non-divergent or divergent and stationary or time-dependent wind fields are used to transport sharply localized tracer distributions, as well as to verify whether an existing correlation between the mass mixing ratios of two interrelated tracers is preserved during the transport simulation. Further comparisons are performed using realistic 3-D wind fields. We then perform full simulations of real cases using data assimilation and complete atmospheric physics. In these simulations, we address the impacts of both advection schemes on the transport of biomass burning emissions and the formation of secondary species from non-linear chemical reactions of precursors. The results show that the new scheme produces much more realistic transport patterns, without generating spurious oscillations and under- and overshoots or spreading mass away from the local peaks. Increasing the numerical diffusion in the original scheme, in order to remove the spurious oscillations and maintain the monotonicity of the transported field, causes excessive smoothing in the tracer distribution, reducing the local gradients and maximum values and unrealistically spreading mass away from the local peaks. As a result, huge differences (hundreds of percent) for relatively inert tracers (like carbon monoxide) are found in the smoke plume cores. For the secondary chemical species formed by non-linear reactions (like ozone), we found differences of up to 50% in our simulations.
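The monotonicity property discussed above can be illustrated with the simplest monotone scheme, first-order upwind: transporting a step profile never creates values outside the initial bounds (no under- or overshoots), at the price of numerical diffusion that smears the edges. This is a generic sketch, not the CCATT-BRAMS scheme itself.

```python
# First-order upwind advection of a step profile on a periodic 1-D grid.
N, c = 100, 0.5                      # grid points, Courant number (must be <= 1)
q = [1.0 if 25 <= i < 50 else 0.0 for i in range(N)]

for step in range(200):
    # New value is a convex combination (1-c)*q[i] + c*q[i-1],
    # hence monotone: no new extrema can appear. q[-1] gives periodicity.
    q = [q[i] - c * (q[i] - q[i - 1]) for i in range(N)]

# The field stays within the initial bounds [0, 1] throughout.
```

Monotone low-diffusion schemes like the one in the paper aim to keep this boundedness while sharpening the smeared edges that first-order upwind produces.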
DEFF Research Database (Denmark)
Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin
2013-01-01
basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. Following this concept, we present adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback, together with adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns, including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models... that the employed embodied neural closed-loop system can be a powerful way of developing robust and adaptable machines...
Burgos, Daniel; Tattersall, Colin; Koper, Rob
2006-01-01
Burgos, D., Tattersall, C., & Koper, E. J. R. (2007). Representing adaptive and adaptable Units of Learning. How to model personalized eLearning in IMS Learning Design. In B. Fernández Manjon, J. M. Sanchez Perez, J. A. Gómez Pulido, M. A. Vega Rodriguez & J. Bravo (Eds.), Computers and Education:
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems, such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that, in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address this difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest, without requiring small grid spacing throughout the entire domain. It uses a multigrid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. To represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communication between computing nodes, and was found to be cost-effective: specifically, we obtained an order-of-magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation. We have applied the WAMR method for numerical
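The core of a wavelet-based refinement criterion like WAMR's can be sketched in a few lines: predict each fine-grid value from its coarse-grid neighbours, and flag for refinement wherever the prediction error (the wavelet detail coefficient) exceeds a threshold. The test profile and threshold below are assumptions for illustration, not values from the paper.

```python
import math

N = 257
x = [i / (N - 1) for i in range(N)]
f = [math.tanh((xi - 0.5) / 0.01) for xi in x]   # sharp front at x = 0.5

# Detail coefficient at each odd (fine-only) point: its deviation from the
# linear interpolation of the two even (coarse) neighbours.
flagged = []
for i in range(1, N - 1, 2):
    detail = f[i] - 0.5 * (f[i - 1] + f[i + 1])
    if abs(detail) > 1e-4:
        flagged.append(x[i])

# Refinement is requested only near the front, not across the whole domain,
# which is exactly how WAMR avoids fine spacing everywhere.
```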
An adaptive complex network model for brain functional networks.
Directory of Open Access Journals (Sweden)
Ignacio J Gomez Portillo
Full Text Available Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties, such as the degree distribution and the presence (or absence) of a hierarchical structure, are not robust and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures, we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution.
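The adaptive linking rule described above, connecting dynamical elements with similar internal states, can be sketched in its simplest form. The scalar states and the similarity threshold are illustrative assumptions; in the paper the states evolve dynamically rather than being drawn once.

```python
import random

random.seed(0)
n, threshold = 50, 0.1
states = [random.random() for _ in range(n)]   # internal states of the nodes

# Adaptive rule: create an edge only between nodes whose internal
# states are closer than the similarity threshold.
edges = [(i, j)
         for i in range(n) for j in range(i + 1, n)
         if abs(states[i] - states[j]) < threshold]
```

Iterating such a rule while the states evolve is what lets communities of similar nodes form and reorganize in the full model.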
Teodoro, P E; Laviola, B G; Martins, L D; Amaral, J F T; Rodrigues, W N
2016-08-19
The aim of this study was to screen physic nut (Jatropha curcas) genotypes that differ in their phosphorus (P) use, using mixed models. The experiment was conducted in a greenhouse located in the experimental area of the Centro de Ciências Agrárias of the Universidade Federal do Espírito Santo, in Alegre, ES, Brazil. It was arranged in a randomized block design, using a 10 x 3-factorial scheme including ten physic nut genotypes and two environments that differed in soil P availability (10 and 60 mg/dm³), each with four replications. After 100 days of cultivation, we evaluated plant height, stem diameter, root volume, root dry matter, aerial-part dry matter and total dry matter, as well as the efficiency of P absorption and use. The parameters were estimated for combined selection; stability and adaptability across both environments were obtained using the harmonic mean of the relative performance of the predicted genotypic values. High genotype-by-environment interactions were observed for most physic nut traits, indicating a considerable influence of P availability on the phenotypic value. The genotype Paraíso simultaneously presented high adaptability and stability for aerial-part dry matter, total dry matter, and P translocation efficiency. The genotype CNPAE-C2 showed a positive response to P fertilization by increasing both the total and aerial-part dry matter.
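The selection criterion used above, the harmonic mean of the relative performance of the predicted genotypic values, rewards genotypes that perform consistently well across environments, because a harmonic mean is pulled down strongly by any single poor environment. A minimal sketch with invented values:

```python
def hm_relative_performance(genotypic_values, environment_means):
    """Harmonic mean of a genotype's performance relative to each
    environment mean; low performance in any environment is penalized."""
    relative = [g / m for g, m in zip(genotypic_values, environment_means)]
    return len(relative) / sum(1.0 / r for r in relative)

# Two hypothetical genotypes over two P environments (means 10.0 and 20.0):
stable = hm_relative_performance([12.0, 24.0], [10.0, 20.0])    # 1.2 and 1.2
unstable = hm_relative_performance([20.0, 16.0], [10.0, 20.0])  # 2.0 and 0.8
```

The unstable genotype has the higher arithmetic mean of relative performance (1.4 vs. 1.2), yet the harmonic mean ranks the stable one first, which is the point of the criterion.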
The Role of Scale and Model Bias in ADAPT's Photospheric Estimation
Energy Technology Data Exchange (ETDEWEB)
Godinez Vazquez, Humberto C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Arge, Charles Nicholas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henney, Carl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-05-20
The Air Force Assimilative Photospheric flux Transport model (ADAPT) is a magnetic-flux propagation model based on the Worden-Harvey (WH) model. ADAPT is used to provide global maps of the Sun's photospheric magnetic field. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation tied to Kalman filtering, is used in calculating the ADAPT model ensemble.
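A stochastic EnKF analysis step can be sketched for a scalar state: the ensemble spread supplies the forecast variance, the Kalman gain blends forecast and observation, and each member is nudged toward a perturbed copy of the observation. All numbers here are illustrative, not the ADAPT configuration.

```python
import random

random.seed(1)
ens = [1.0 + 0.3 * random.gauss(0, 1) for _ in range(100)]  # forecast ensemble
y, obs_std = 2.0, 0.01          # observation and its error std (H = identity)

mean_f = sum(ens) / len(ens)
var_f = sum((e - mean_f) ** 2 for e in ens) / (len(ens) - 1)
gain = var_f / (var_f + obs_std ** 2)           # scalar Kalman gain

# Stochastic EnKF: each member assimilates a perturbed observation,
# which keeps the analysis ensemble spread statistically consistent.
analysis = [e + gain * (y + random.gauss(0, obs_std) - e) for e in ens]
mean_a = sum(analysis) / len(analysis)
```

Because the observation error is much smaller than the forecast spread here, the gain is close to one and the analysis mean lands near the observation.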
Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara
2015-09-01
Simulation of the rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation, and large-scale climate signals have proved effective for rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling that accounts for different sources of uncertainty. The scheme comprises three main steps: rainfall forecasting, rainfall-runoff simulation and future runoff prediction. In the first step, data-driven models are developed and used to forecast rainfall, with large-scale climate signals as predictors. Because different sources of uncertainty strongly affect the output of hydrologic models, in the second step the uncertainty associated with the input data, the model parameters and the model structure is incorporated into rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed so that conceptual (structural) model uncertainty can be considered in real-time runoff forecasting. To analyze model-structure uncertainty, the streamflows generated by the alternative rainfall-runoff models are combined through a weighting method based on K-means clustering. Parameter and input uncertainty are investigated using an adaptive Markov chain Monte Carlo method. Finally, the calibrated rainfall-runoff models are driven by the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is applied to the case study of the Bronx River watershed, New York City. The uncertainty analysis of rainfall-runoff modeling reveals that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters. It is also observed that, by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the
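The idea of weighting alternative rainfall-runoff models can be sketched with a simpler rule than the paper's clustering-based scheme: weight each model's streamflow inversely by its error against observations. This inverse-RMSE weighting is a stand-in for the K-means-based weights described above, and all numbers are invented.

```python
# Streamflow observations and forecasts from two hypothetical models.
observed = [10.0, 12.0, 9.0, 15.0]
models = {
    "model_a": [11.0, 12.5, 9.5, 14.0],   # fairly accurate
    "model_b": [15.0, 8.0, 14.0, 20.0],   # much less accurate
}

def rmse(sim, obs):
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

# Normalized inverse-error weights: better models contribute more.
inv = {name: 1.0 / rmse(sim, observed) for name, sim in models.items()}
total = sum(inv.values())
weights = {name: v / total for name, v in inv.items()}

# Weighted multi-model streamflow estimate at each time step.
combined = [sum(weights[name] * models[name][t] for name in models)
            for t in range(len(observed))]
```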
Model-free adaptive control of advanced power plants
Cheng, George Shu-Xing; Mulkey, Steven L.; Wang, Qiang
2015-08-18
A novel 3-Input-3-Output (3×3) Model-Free Adaptive (MFA) controller with a set of artificial neural networks as part of the controller is introduced. A 3×3 MFA control system using the inventive 3×3 MFA controller is described to control key process variables, including Power, Steam Throttle Pressure, and Steam Temperature, of boiler-turbine-generator (BTG) units in conventional and advanced power plants. Those advanced power plants may comprise Once-Through Supercritical (OTSC) Boilers, Circulating Fluidized-Bed (CFB) Boilers, and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.
Preference learning with evolutionary Multivariate Adaptive Regression Spline model
DEFF Research Database (Denmark)
Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll
2015-01-01
This paper introduces a novel approach for pairwise preference learning that combines an evolutionary method with Multivariate Adaptive Regression Splines (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data on five cognitive states expressed by users. The method is analysed...
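MARS builds its approximations from pairs of hinge (rectifier) basis functions of the form max(0, x - knot) and max(0, knot - x), which is what makes the fitted models relatively easy to interpret. A toy MARS-style model with one knot, using assumed coefficients rather than anything fitted in the paper:

```python
def hinge(x, knot):
    """MARS hinge basis function: max(0, x - knot)."""
    return max(0.0, x - knot)

# A toy two-term MARS-style model with a single knot at x = 3:
#   f(x) = 1.0 + 2.0 * max(0, x - 3) - 0.5 * max(0, 3 - x)
# i.e. a piecewise-linear function with slope -(-0.5) = 0.5 left of the
# knot and slope 2.0 right of it.
def mars_model(x):
    return 1.0 + 2.0 * hinge(x, 3.0) - 0.5 * hinge(3.0, x)
```

In the paper's setting, such a model would be evolved so that f ranks the preferred item of each pair higher, rather than fitting target values directly.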