A Survey of Health Care Models that Encompass Multiple Departments
Vanberkel, Peter T.; Boucherie, Richardus J.; Hans, Elias W.; Hurink, Johann L.; Litvak, Nelli
2009-01-01
In this survey we review quantitative health care models to illustrate the extent to which they encompass multiple hospital departments. The paper provides a general overview of the relationships that exist between major hospital departments and describes how these relationships are accounted for by
Towards an Encompassing Maturity Model for the Management of Hospital Information Systems.
de Carvalho, João Vidal; Rocha, Álvaro; Vasconcelos, José
2015-09-01
Maturity models are tools that support the management of organizations, including their information systems management, and hospital organizations are no exception. In the present paper we put forth a preliminary investigation aimed at the development of an encompassing maturity model for the management of hospital information systems. The development of this model is justified to the extent that current maturity models in this field are still at an early stage of development: in particular, they are poorly detailed, provide no tools to determine the maturity stage, and do not structure the characteristics of maturity stages according to different influencing factors.
Thuan, T. X.; Hart, M. H.; Ostriker, J. P.
1975-01-01
The two basic approaches of physical theory required to calculate the evolution of a galactic system are considered, taking into account stellar evolution theory and the dynamics of a gas-star system. Attention is given to intrinsic (stellar) physics, extrinsic (dynamical) physics, and computations concerning the fractionation of an initial mass of gas into stars. The characteristics of a 'standard' model and its variants are discussed along with the results obtained with the aid of these models.
A Simplified In vitro Experimental Model Encompasses the Essential Features of Sleep
Colombi, Ilaria; Tinarelli, Federico; Pasquale, Valentina; Tucci, Valter; Chiappalone, Michela
2016-01-01
In this paper, we show that neuronal assemblies plated on Micro-Electrode Arrays present synchronized, low-frequency firing patterns similar to in vivo slow-wave oscillations, a key feature of the sleep-like state. Although neuronal cultures lack the characteristic high-frequency waves of wakefulness, it is possible to modulate their spontaneous firing pattern through the administration of specific neurotransmitters such as acetylcholine. We thus stimulated the cortical cultures with an agonist of the acetylcholine receptor, Carbachol, which caused a desynchronization of the spontaneous firing of the cultures. We recorded and monitored the cultures for a period of over 31 h. We analyzed the electrophysiological signals by exploiting novel methodological approaches, taking into account the different temporal scales of the recorded signals and considering both spikes and local field potentials. Supporting the electrophysiological analysis results, expression profiling of targeted genes showed the activation of specific markers involved in sleep-wake rhythms. Our results demonstrate that Carbachol treatment induces desynchronization of neuronal activity, altering sleep-like properties in an in vitro model. PMID:27458335
Chrysikos, Dimosthenis T; Sergentanis, Theodoros N; Zagouri, Flora; Psaltopoulou, Theodora; Theodoropoulos, George; Flessas, Ioannis; Agrogiannis, George; Alexakis, Nikolaos; Lymperi, Maria; Katsarou, Ageliki I; Patsouris, Efstratios S; Zografos, Constantine G; Papalois, Apostolos E
2015-03-20
The potential of lazaroid U-74389G in attenuating injury after ischemia and reperfusion has been reported in various organs. The present study focuses specifically on the pancreas and aims to examine any effects of U-74389G in a swine model of pancreatic ischemia and reperfusion, encompassing ischemic preconditioning. Twelve pigs, weighing 28-35 kg, were randomized into two experimental groups. Group A (control group, n=6): Two periods of ischemic preconditioning (5 min each) separated by a 5-min rest interval; then ischemia time 30 min and reperfusion for 120 min. Group B (n=6): the same as above, with U-74389G intravenous injection in the inferior vena cava immediately prior to the initiation of reperfusion. Blood sampling and pancreatic biopsies were conducted at 0, 30, 60, 90 and 120 min after reperfusion. Repeated-measures ANOVA was undertaken to evaluate differences between the two study groups. No statistically significant differences were noted concerning the histopathological parameters in the control and therapy groups (P=0.563 for edema, P=0.241 for hemorrhage, P=0.256 for leukocyte infiltration, P=0.231 for acinar necrosis and P=0.438 for vacuolization). In accordance with the above, serum metabolic data (glucose, creatinine, urea, total and direct bilirubin, total calcium, amylase, lipase, SGOT/AST, SGPT/ALT, ALP, GGT, LDH, CRP, insulin) were not significantly different between the two groups; similarly, tumor necrosis factor-α values (P=0.705) and tissue malondialdehyde levels (P=0.628) did not differ between the two groups. This swine model of pancreatic ischemia and reperfusion, encompassing preconditioning, indicates that U-74389G lazaroid does not seem to exert protective effects from pancreatic damage.
A Model of Hierarchical Key Assignment Scheme
Institute of Scientific and Technical Information of China (English)
ZHANG Zhigang; ZHAO Jing; XU Maozhi
2006-01-01
A model of the hierarchical key assignment scheme is proposed in this paper, which can be used with any cryptographic algorithm. In addition, the optimal dynamic control property of a hierarchical key assignment scheme is defined, and our scheme model is shown to meet this property.
Modeling Students' Mathematics Using Steffe's Fraction Schemes
Norton, Anderson H.; McCloskey, Andrea V.
2008-01-01
Each year, more teachers learn about the successful intervention program known as Math Recovery (USMRC 2008; Wright 2003). The program uses Steffe's whole-number schemes to model, understand, and support children's development of whole-number reasoning. Readers are probably less familiar with Steffe's fraction schemes, which have proven similarly…
A SUBDIVISION SCHEME FOR VOLUMETRIC MODELS
Institute of Scientific and Technical Information of China (English)
Ghulam Mustafa; Liu Xuefeng
2005-01-01
In this paper, a subdivision scheme which generalizes a surface scheme from previous papers to volume meshes is designed. The scheme exhibits significant control over the shrinkage/size of volumetric models. It also has the ability to conveniently incorporate boundaries and creases into a smooth limit shape of models. The method presented here is much simpler and easier than that of MacCracken and Joy. This method makes no restrictions on the local topology of meshes. In particular, it can be applied without any change to meshes of non-manifold topology.
Group Buying Schemes : A Sustainable Business Model?
Köpp, Sebastian; Mukhachou, Aliaksei; Schwaninger, Markus
2013-01-01
The authors examine whether group buying schemes, such as those offered by the companies Groupon and Dein Deal, are a sustainable business model. By means of the Groupon case study and a System Dynamics model, they find that the business model must be changed if the company is to remain viable in the long term.
A Modified Model Predictive Control Scheme
Institute of Scientific and Technical Information of China (English)
Xiao-Bing Hu; Wen-Hua Chen
2005-01-01
In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
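The core MPC loop described above can be illustrated with a minimal sketch: a scalar unstable linear plant, a finite-horizon quadratic cost solved by least squares, and only the first input applied at each step. This is a toy illustration of receding-horizon control only; the paper's offline LMI-based iteration and feasible-sequence database are not reproduced, and all numbers are hypothetical.

```python
import numpy as np

# Hypothetical scalar plant x[k+1] = a*x[k] + b*u[k] (unstable open loop).
a, b = 1.2, 1.0
N = 10        # prediction horizon
r = 0.1       # control weight

def mpc_step(x0):
    """Solve the unconstrained finite-horizon problem and return u[0].

    Stack predictions x = F*x0 + G*u and minimise ||x||^2 + r*||u||^2;
    the normal equations give (G'G + r*I) u = -G'F x0.
    """
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    H = G.T @ G + r * np.eye(N)
    u = np.linalg.solve(H, -(G.T @ F) * x0)
    return u[0]       # receding horizon: apply only the first input

x = 5.0
for _ in range(30):
    x = a * x + b * mpc_step(x)
# the closed loop stabilises the unstable plant: |x| shrinks toward zero
```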
Presotto, L; Gianolli, L; Gilardi, M C; Bettinardi, V
2015-04-01
To perform kinetic modelling quantification, PET dynamic data must be acquired in short frames, in which different critical conditions are met. The accuracy of the reconstructed images influences quantification. The added value of Time-Of-Flight (TOF) and Point Spread Function (PSF) modelling in cardiac image reconstruction was assessed. A static phantom was used to simulate two extreme conditions: (i) the bolus passage and (ii) the steady uptake. Various count statistics and independent noise realisations were considered. A moving phantom filled with two different radionuclides was used to simulate: (i) a great range of contrasts and (ii) cardio/respiratory motion. Analytic and iterative reconstruction (IR) algorithms, also encompassing TOF and PSF modelling, were evaluated. Both analytic and IR algorithms provided good results in all the evaluated conditions. The amount of bias introduced by IR was found to be limited. TOF allowed faster convergence and lower noise levels. PSF achieved near full myocardial activity recovery in static conditions. Motion degraded performance, but the addition of both TOF and PSF maintained the best overall behaviour. IR accounting for TOF and PSF can be recommended for the quantification of dynamic cardiac PET studies as it improves the results compared to analytic and standard IR.
A Formal Model for the Security of Proxy Signature Schemes
Institute of Scientific and Technical Information of China (English)
GU Chun-xiang; ZHU Yue-fei; ZHANG Ya-juan
2005-01-01
This paper provides theoretical foundations for the secure proxy signature primitive. We present a formal model for the security of proxy signature schemes, which defines the capabilities of the adversary and the security goals that capture what it means for a proxy signature scheme to be secure. We then present an example of a proxy signature scheme that can be proven secure in the standard model.
An Improved Scalar Costa Scheme Based on Watson Perceptual Model
Institute of Scientific and Technical Information of China (English)
QI Kai-yue; CHEN Jian-bo; ZHOU Yi
2008-01-01
An improved scalar Costa scheme (SCS) is proposed, in which an improved Watson perceptual model adaptively determines the quantization step size and scaling factor. The improved scheme is equivalent to embedding the hidden data with reference to the actual image. To withstand amplitude scaling attacks, the Watson perceptual model was redefined, and with the new definition the improved scheme ensures that the quantization step size at the decoder is proportional to the amplitude scaling factor of the attack. The improved scheme outperforms SCS with a fixed quantization step size, combining information theory with a visual model.
Directory of Open Access Journals (Sweden)
Navaro Pierre
2011-11-01
A new scheme for discretizing the P1 model on unstructured polygonal meshes is proposed. This scheme is designed such that its limit in the diffusion regime is the MPFA-O scheme, which is proved to be a consistent variant of the Breil-Maire diffusion scheme. Numerical tests compare this scheme with a derived GLACE scheme for the P1 system.
A modified symplectic PRK scheme for seismic wave modeling
Liu, Shaolin; Yang, Dinghui; Ma, Jian
2017-02-01
A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
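The long-term computational capability claimed for the scheme above rests on the general behaviour of symplectic partitioned Runge-Kutta integrators. A minimal sketch of that behaviour, using the simplest PRK member (velocity-Verlet/leapfrog) on a harmonic oscillator rather than the paper's seismic wave equation, shows the bounded energy error that motivates such schemes; all parameters are illustrative.

```python
# Velocity-Verlet (a simple symplectic partitioned RK scheme) applied to
# a harmonic oscillator with Hamiltonian H = p^2/2 + q^2/2, initial
# energy 0.5.  Symplectic schemes keep the energy error bounded over
# long integrations instead of letting it drift.
dt, steps = 0.05, 20000
q, p = 1.0, 0.0
energies = []
for _ in range(steps):
    p -= 0.5 * dt * q          # half kick  (force = -dH/dq = -q)
    q += dt * p                # drift      (velocity = dH/dp = p)
    p -= 0.5 * dt * q          # half kick
    energies.append(0.5 * (p * p + q * q))
drift = max(abs(e - 0.5) for e in energies)
# drift stays small over 20000 steps: the error oscillates, it does not grow
```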
Generic-Model-Based Description Scheme for MPEG-7
Institute of Scientific and Technical Information of China (English)
Deng Juan; Tan Hut; Chen Xin-meng
2004-01-01
We propose a new description scheme for MPEG-7: a generic-model-based description scheme to describe the content of audio, video, text and other sorts of multimedia. It uses a generic model as the description frame, which provides a simple but useful object-based structure. The main components of the description scheme are the generic model, objects and object features. The proposed description scheme is illustrated and exemplified using the Extensible Markup Language. It aims at clarity and flexibility to support MPEG-7 applications such as query and edit. We demonstrate its feasibility and efficiency by presenting two applications, a Digital Broadcasting and Edit System (DEBS) and a Non-linear Edit System (NLES), that already use the generic structure or will greatly benefit from it.
Comparison of tropospheric chemistry schemes for use within global models
Directory of Open Access Journals (Sweden)
K. M. Emmerson
2008-11-01
Methane and ozone are two important climate gases with significant tropospheric chemistry. Within chemistry-climate and transport models this chemistry is simplified for computational expediency. We compare the state-of-the-art Master Chemical Mechanism (MCM) with six tropospheric chemistry schemes (CRI-reduced, GEOS-CHEM and a GEOS-CHEM adduct, MOZART, TOMCAT and CBM-IV) that could be used within composition transport models. We test the schemes within a box model framework under conditions derived from a composition transport model and from field observations of a regional-scale pollution event. We find that CRI-reduced provides much skill in simulating the full chemistry, yet with greatly reduced complexity. We find significant variations between the other chemical schemes, and reach the following conclusions. (1) The inclusion of a gas-phase N2O5 + H2O reaction in some schemes and not others is a large source of uncertainty in the inorganic chemistry. (2) There are significant variations in the calculated concentration of PAN between the schemes, which will affect the long-range transport of reactive nitrogen in global models. (3) The representation of isoprene chemistry differs hugely between the schemes, leading to significant uncertainties in the impact of isoprene on composition. (4) Night-time chemistry is badly represented, with significant disagreements in the ratio of NO3 to NOx. Resolving these four issues through further investigative laboratory studies will reduce the uncertainties within the chemical schemes of global tropospheric models.
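The "box model framework" used for the comparison above can be illustrated in its most reduced form: a single species with constant production and first-order loss, integrated in time. This is only a toy stand-in for the mechanisms compared in the paper (MCM, CRI-reduced, etc.); the rate values are arbitrary illustrative numbers.

```python
# Minimal chemistry box model: dC/dt = P - L*C, forward Euler in time.
# P and L are hypothetical; real schemes integrate hundreds to
# thousands of coupled species with stiff solvers.
P = 2.0      # production rate (e.g. ppb per hour)
L = 0.5      # first-order loss frequency (1/hour)
dt = 0.01    # time step, hours
c = 0.0
for _ in range(2000):        # 20 hours, long enough to reach steady state
    c += dt * (P - L * c)
# the concentration approaches the analytic steady state P/L = 4.0
```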
Moist convection scheme in Model E2
Kim, Daehyun; Yao, Mao-Sung
2013-01-01
This documentation describes the version of the Del Genio - Yao cumulus parameterization used in the NASA Goddard Institute for Space Studies Model E2 GCM. This version was used for the official GISS submissions to the CMIP5 archive.
Accelerated failure time model under general biased sampling scheme.
Kim, Jane Paik; Sit, Tony; Ying, Zhiliang
2016-07-01
Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased sampling schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties for the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets on various sampling schemes including length-biased sampling, the case-cohort design and its variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
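The bias-offsetting weighting idea behind the estimator above can be sketched in its simplest setting, length-biased sampling without censoring: observations are drawn with probability proportional to their duration, and weighting each by 1/t offsets the bias. This toy only illustrates the weighting principle, not the paper's rank-based estimating equations for regression parameters; the distribution and sample size are arbitrary.

```python
import numpy as np

# Length-biased sampling toy: in a prevalent cohort, long survival times
# are over-represented (sampling density proportional to t*f(t)).
# Length-biased draws from an Exponential(mean mu) population follow a
# Gamma(shape=2, scale=mu) distribution.
rng = np.random.default_rng(1)
mu = 2.0                          # true mean survival time
n = 100_000
t = rng.gamma(2.0, mu, n)         # biased sample
naive = t.mean()                  # converges to 2*mu, not mu: biased
weighted = n / np.sum(1.0 / t)    # bias-offsetting weights 1/t recover mu
```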
Bilodeau, Mélanie; MacRae, Tara; Gaboury, Louis; Laverdure, Jean-Philippe; Hardy, Marie-Pierre; Mayotte, Nadine; Paradis, Véronique; Harton, Sébastien; Perreault, Claude; Sauvageau, Guy
2009-10-19
The cystatin protein superfamily is characterized by the presence of conserved sequences that display cysteine protease inhibitory activity (e.g., towards cathepsins). Type 1 and 2 cystatins are encoded by 25 genes of which 23 are grouped in 2 clusters localized on mouse chromosomes 16 and 2. The expression and essential roles of most of these genes in mouse development and hematopoiesis remain poorly characterized. In this study, we describe a set of quantitative real-time PCR assays and a global expression profile of cystatin genes in normal mouse tissues. Benefiting from our collection of DelES embryonic stem cell clones harboring large chromosomal deletions (to be reported elsewhere), we selected a clone in which a 95-kb region of chromosome 16 is missing (Del(16qB3Delta/+)). In this particular clone, 2 cystatin genes, namely Csta and Stfa2l1 are absent along with 2 other genes (Fam162a, Ccdc58) and associated intergenic regions. From this line, we established a new homozygous mutant mouse model (Del(16qB3Delta/16qB3Delta)) to assess the in vivo biological functions of the 2 deleted cystatins. Stfa2l1 gene expression is high in wild-type fetal liver, bone marrow, and spleen, while Csta is ubiquitously expressed. Homozygous Del(16qB3Delta/16qB3Delta) animals are phenotypically normal, fertile, and not overtly susceptible to spontaneous or irradiation-induced tumor formation. The hematopoietic stem and progenitor cell activity in these mutant mice are also normal. Interestingly, quantitative real-time PCR expression profiling reveals a marked increase in the expression levels of Stfa2l1/Csta phylogenetically-related genes (Stfa1, Stfa2, and Stfa3) in Del(16qB3Delta/16qB3Delta) hematopoietic tissues, suggesting that these candidate genes might be contributing to compensatory mechanisms. Overall, this study presents an optimized approach to globally monitor cystatin gene expression as well as a new mouse model deficient in Stfa2l1/Csta genes, expanding the
The Numerical Scheme Development of a Simplified Frozen Soil Model
Institute of Scientific and Technical Information of China (English)
LI Qian; SUN Shufen; DAI Qiudan
2009-01-01
In almost all frozen soil models currently in use, three variables, temperature, ice content and moisture content, are used as prognostic variables, and the rate term accounting for the contribution of the phase change between water and ice appears explicitly in both the energy and mass balance equations. The models must be solved numerically with an iterative process, and the rate term of the phase change needs to be pre-estimated at the beginning of each iteration step. Since the rate term of the phase change in the energy equation is closely related to the release or absorption of a large amount of fusion heat, a small error in the rate term estimation introduces a greater error in the energy balance, which amplifies the error in the temperature calculation and, in turn, causes problems for the convergence of the numerical solution. In this work, to remove this difficulty, a variable transformation is first applied to a simplified frozen soil model in current use, which leads to the new frozen soil scheme used here. In the new scheme, enthalpy and total water equivalent are used as the predictive variables in the governing equations, replacing the temperature, volumetric soil moisture and ice content used in many current models. As a result, the rate terms of the phase change do not appear explicitly in either the mass or energy equations, and their pre-estimation is avoided. Secondly, to solve the new scheme more effectively, the development of its numerical scheme is described and an appropriate numerical algorithm is developed. To evaluate the new frozen soil scheme and its algorithm, a series of model evaluations is conducted by comparing numerical results from the new model scheme with three observational data sets. The comparisons show that the results from the model are in good agreement with these data sets in both the change trend of variables and their
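The benefit of the enthalpy transformation described above is that temperature and phase fractions become diagnostic, recovered from enthalpy without pre-estimating a phase-change rate term. A minimal sketch of that recovery step for a unit mass of water/ice (not the paper's volumetric soil formulation; property values are standard textbook constants and the zero point of enthalpy is a modelling convention chosen here):

```python
# Enthalpy-based freeze/thaw state recovery.  Convention: H = 0 is ice
# at 0 deg C; melting one kg consumes the latent heat of fusion L_FUS.
C_ICE, C_WATER, L_FUS = 2100.0, 4186.0, 334_000.0  # J/(kg K), J/(kg K), J/kg

def state_from_enthalpy(H):
    """Return (temperature in deg C, liquid mass fraction) for enthalpy H (J/kg)."""
    if H < 0.0:                          # fully frozen, sensible heat in ice
        return H / C_ICE, 0.0
    if H <= L_FUS:                       # mixed phase, pinned at 0 deg C
        return 0.0, H / L_FUS
    return (H - L_FUS) / C_WATER, 1.0    # fully thawed, sensible heat in water
```

Because the phase change is absorbed into this piecewise map, a time step only updates H from the energy balance and then reads the state off directly, with no iteration on a rate term.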
Iteration schemes for parallelizing models of superconductivity
Energy Technology Data Exchange (ETDEWEB)
Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
Inflationary gravitational waves in collapse scheme models
Energy Technology Data Exchange (ETDEWEB)
Mariani, Mauro, E-mail: mariani@carina.fcaglp.unlp.edu.ar [Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, 1900 La Plata (Argentina); Bengochea, Gabriel R., E-mail: gabriel@iafe.uba.ar [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC 67, Suc. 28, 1428 Buenos Aires (Argentina); León, Gabriel, E-mail: gleon@df.uba.ar [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria – Pab. I, 1428 Buenos Aires (Argentina)
2016-01-10
The inflationary paradigm is an important cornerstone of the concordance cosmological model. However, standard inflation cannot fully address the transition from an early homogeneous and isotropic stage, to another one lacking such symmetries corresponding to our present universe. In previous works, a self-induced collapse of the wave function has been suggested as the missing ingredient of inflation. Most of the analysis regarding the collapse hypothesis has been solely focused on the characteristics of the spectrum associated to scalar perturbations, and within a semiclassical gravity framework. In this Letter, working in terms of a joint metric-matter quantization for inflation, we calculate, for the first time, the tensor power spectrum and the tensor-to-scalar ratio corresponding to the amplitude of primordial gravitational waves resulting from considering a generic self-induced collapse.
Modelling of Substrate Noise and Mitigation Schemes for UWB Systems
DEFF Research Database (Denmark)
Shen, Ming; Mikkelsen, Jan H.; Larsen, Torben
2012-01-01
The last chapter of this first part of the book, chapter seven, is devoted to Modeling of Substrate Noise and Mitigation Schemes for Ultrawideband (UWB) systems, and is written by Ming Shen, Jan H. Mikkelsen, and Torben Larsen from Aalborg University, Denmark. In highly integrated mixed-mode desi...
Bridge aerodynamics and aeroelasticity: A comparison of modeling schemes
Wu, Teng; Kareem, Ahsan
2013-11-01
Accurate modeling of wind-induced loads on bridge decks is critical to ensure the functionality and survivability of long-span bridges. Over the last few decades, several schemes have emerged to model bridge behavior under winds from an aerodynamic/aeroelastic perspective. A majority of these schemes rely on the quasi-steady (QS) theory. This paper systematically compares and assesses the efficacy of five analytical models available in the literature with a new model presented herein. These models include: QS theory-based model, corrected QS theory-based model, linearized QS theory-based model, semi-empirical linear model, hybrid model, and the proposed modified hybrid model. The ability of these models to capture fluid memory and nonlinear effects either individually or collectively is examined. In addition, their ability to include the effects of turbulence in the approach flow on the bridge behavior is assessed. All models are compared in a consistent manner by utilizing the time domain approach. The underlying role of each model in capturing the physics of bridge behavior under winds is highlighted and the influence of incoming turbulence and its interaction with the bridge deck is examined. A discussion is included that focuses on a number of critical parameters pivotal to the effectiveness of corresponding models.
Runoff prediction using an integrated hybrid modelling scheme
Remesan, Renji; Shamim, Muhammad Ali; Han, Dawei; Mathew, Jimson
2009-06-01
Rainfall runoff is a very complicated process due to its nonlinear and multidimensional dynamics, and hence difficult to model. There are several options for a modeller to consider, for example: the type of input data to be used, the length of model calibration (training) data, and whether or not the input data should be treated as signals with different frequency bands so that they can be modelled separately. This paper describes a new hybrid modelling scheme to answer the above questions. The proposed methodology is based on a hybrid model integrating wavelet transformation, a modelling engine (an Artificial Neural Network) and the Gamma Test. First, the Gamma Test is used to decide the required input data dimensions and length. Second, the wavelet transformation decomposes the input signals into different frequency bands. Finally, a modelling engine (an ANN in this study) is used to model the decomposed signals separately. The proposed scheme was tested using the Brue catchment, Southwest England, as a case study and has produced very positive results. The hybrid model outperforms all other models tested. This study has a wider implication for the hydrological modelling field, since its general framework could be applied to other model combinations (e.g., the model engine could be Support Vector Machines, neuro-fuzzy systems, or even a conceptual model, and the signal decomposition could be carried out by Fourier transformation).
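The decompose-model-recombine pipeline described above can be sketched in skeleton form. Here a moving-average split stands in for the paper's wavelet transform, and the per-band "model" is omitted entirely; the point is only the structural property that the bands sum exactly back to the input signal, so per-band predictions can be recombined by addition. The synthetic signal is hypothetical.

```python
import numpy as np

def split(signal, width=8):
    """Split a signal into a low-frequency band and its residual.

    A crude stand-in for a one-level wavelet decomposition: the two
    bands reconstruct the signal exactly by summation.
    """
    kernel = np.ones(width) / width
    low = np.convolve(signal, kernel, mode="same")   # low-frequency band
    return low, signal - low                         # high-frequency residual

t = np.arange(256)
rain = np.sin(t / 20.0) + 0.3 * np.sin(t / 2.0)      # synthetic input series
low, high = split(rain)
recombined = low + high        # model each band separately, then sum
```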
An activation-recruitment scheme for use in muscle modeling.
Hawkins, D A; Hull, M L
1992-12-01
The derivation of a new activation-recruitment scheme and the results of a study designed to test its validity are presented. The activation scheme utilizes input data of processed surface EMG signals, muscle composition, muscle architecture, and experimentally determined activation coefficients. In the derivation, the relationship between muscle activation and muscle fiber recruitment was considered. In the experimental study, triceps muscle force was determined for isometric elbow extension tasks varying in intensity from 10 to 100% of a maximum voluntary contraction (MVC) using both a muscle model that incorporates the activation scheme, and inverse dynamics techniques. The forces calculated using the two methods were compared statistically. The modeled triceps force was not significantly different from the experimental results determined using inverse dynamics techniques for average activation levels greater than 25% of MVC, but was significantly different for activation levels less than 25% of MVC. These results lend support for use of the activation-recruitment scheme for moderate to large activation levels, and suggest that factors in addition to fiber recruitment play a role in force regulation at lower activation levels.
High order discretization schemes for stochastic volatility models
Jourdain, Benjamin
2009-01-01
In usual stochastic volatility models, the process driving the volatility of the asset price evolves according to an autonomous one-dimensional stochastic differential equation. We assume that the coefficients of this equation are smooth. Using Itô's formula, we get rid, in the asset price dynamics, of the stochastic integral with respect to the Brownian motion driving this SDE. Taking advantage of this structure, we propose (i) a scheme, based on the Milstein discretization of this SDE, with order one of weak trajectorial convergence for the asset price, and (ii) a scheme, based on the Ninomiya-Victoir discretization of this SDE, with order two of weak convergence for the asset price. We also propose a specific scheme with improved convergence properties when the volatility of the asset price is driven by an Ornstein-Uhlenbeck process. We confirm the theoretical rates of convergence by numerical experiments and show that our schemes are well adapted to the multilevel Monte Carlo method introduced by Giles [2008a,b].
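For reference, the scalar Milstein step on which the first scheme builds can be sketched as below. This illustrates only the textbook Milstein discretization of a one-dimensional SDE, not the paper's full asset-price schemes; the geometric Brownian motion parameters are assumed for illustration.

```python
import random, math

def milstein_path(x0, a, b, db, dt, n, rng):
    """Simulate dX = a(X) dt + b(X) dW with the Milstein scheme:
    X_{k+1} = X_k + a dt + b dW + 0.5 * b * b' * (dW^2 - dt),
    where db is the derivative b'."""
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + a(x) * dt + b(x) * dw + 0.5 * b(x) * db(x) * (dw * dw - dt)
    return x

rng = random.Random(0)
# Geometric Brownian motion: a(x) = mu x, b(x) = sigma x, b'(x) = sigma.
mu, sigma = 0.05, 0.2
x_end = milstein_path(1.0, lambda x: mu * x, lambda x: sigma * x,
                      lambda x: sigma, dt=1e-3, n=1000, rng=rng)
print(x_end)
```

With `b = 0` the step reduces to explicit Euler, which is a quick sanity check on the implementation.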
An Industrial Model Based Disturbance Feedback Control Scheme
DEFF Research Database (Denmark)
Kawai, Fukiko; Nakazawa, Chikashi; Vinther, Kasper
2014-01-01
This paper presents a model based disturbance feedback control scheme. Industrial process systems have traditionally been controlled using relay and PID controllers. However, these controllers are affected by disturbances and model errors, and these effects degrade control performance. The authors propose a new control method that can decrease the negative impact of disturbances and model errors. The control method is motivated by industrial practice at Fuji Electric. Simulation tests are performed with a conventional PID controller and with the disturbance feedback control. The simulation results…
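As background for the comparison, a conventional PID loop rejecting a constant load disturbance can be sketched as follows. This is a generic textbook illustration on an assumed first-order plant, not Fuji Electric's method or the paper's disturbance feedback scheme.

```python
def simulate_pid(kp, ki, kd, setpoint, disturbance, steps=200, dt=0.1):
    """First-order plant dy/dt = -y + u + d under PID control.
    Returns the final tracking error (setpoint - y)."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    for k in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        d = disturbance(k * dt)
        y += (-y + u + d) * dt      # explicit Euler plant update
    return setpoint - y

# Constant load disturbance: integral action should reject it.
err = simulate_pid(2.0, 1.0, 0.0, setpoint=1.0, disturbance=lambda t: 0.5)
print(abs(err))
```

The integral term drives the steady-state error under the constant disturbance toward zero; model errors would enter the same loop as a mismatch between the assumed and the true plant.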
CFD DPAL modeling for various schemes of flow configurations
Waichman, Karol; Barmashenko, Boris D.; Rosenwaks, Salman
2014-10-01
Comprehensive analysis of kinetic and fluid dynamic processes in flowing-gas diode pumped alkali lasers (DPALs), using two- and three-dimensional computational fluid dynamics (2D and 3D CFD) models, is reported for Cs DPALs. The models take into account the effects of temperature rise and losses of alkali atoms due to ionization. Various gas flow regimes and configurations with transverse and parallel flow-optics directions are studied. Optimization of the Cs DPAL parameters, using 3D CFD modeling, shows that by applying high flow velocity and narrowband pumping, a maximum lasing power as high as 40 kW can be obtained at a pump power of 80 kW for the transverse flow configuration in a pumped volume of ~0.7 cm3. At high pump power the calculated laser power is higher for the transverse scheme than for the parallel scheme because of more efficient heat convection from the beam volume in the transverse configuration. The CFD models are applied to experimental devices and the calculated results are in good agreement with the measurements.
Onsager reciprocity principle for kinetic models and kinetic schemes
Mahendra, Ajit Kumar
2013-01-01
The Boltzmann equation requires some alternative, simpler kinetic model, like BGK, to replace the collision term. Such a kinetic model, replacing the Boltzmann collision integral, should preserve the basic properties and characteristics of the Boltzmann equation and comply with the requirements of non-equilibrium thermodynamics. Most research on the development of kinetic theory based methods has focused on entropy conditions and stability, and has ignored the crucial aspect of non-equilibrium thermodynamics. This paper presents a new kinetic model formulated on the principles of non-equilibrium thermodynamics. The new kinetic model yields correct transport coefficients and satisfies Onsager's reciprocity relationship. The present work also describes a novel kinetic particle method and gas kinetic scheme based on this linkage of non-equilibrium thermodynamics and kinetic theory. The work also presents a derivation of a kinetic theory based wall boundary condition which complies with the principles of non-equili...
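The plain BGK idea that the paper builds on, relaxation of the distribution toward its local equilibrium at rate 1/tau, can be sketched in a few lines. This shows only standard BGK relaxation on an assumed discrete distribution, not the paper's Onsager-consistent model.

```python
def bgk_relax(f, f_eq, tau, dt, steps):
    """Relax a discrete distribution f toward its equilibrium f_eq with
    the BGK collision operator df/dt = (f_eq - f) / tau (explicit Euler)."""
    for _ in range(steps):
        f = [fi + dt / tau * (fe - fi) for fi, fe in zip(f, f_eq)]
    return f

f0 = [0.6, 0.3, 0.1]          # assumed initial distribution (sums to 1)
feq = [0.4, 0.4, 0.2]         # assumed local equilibrium (sums to 1)
f = bgk_relax(f0, feq, tau=1.0, dt=0.01, steps=500)
print(f)
```

Because both `f0` and `f_eq` carry the same total mass, each relaxation step conserves that mass, which is the kind of collision-invariant property any BGK-type replacement of the collision integral must preserve.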
Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes
Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.
2015-12-01
Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but is influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region: the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project 5 (CMIP5). The initial results of this comparison are discussed here.
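The MUSLE that both models implement is a single algebraic equation, sketched below in the form used by SWAT. The input values are hypothetical, chosen only to show the calculation.

```python
def musle_sediment_yield(q_surf, q_peak, area_ha, k, c, p, ls, cfrg):
    """Modified Universal Soil Loss Equation (MUSLE), as used in SWAT:
    sed = 11.8 * (Q_surf * q_peak * A)^0.56 * K * C * P * LS * CFRG,
    with Q_surf the surface runoff (mm), q_peak the peak runoff rate
    (m^3/s), A the area (ha); result in metric tons per event."""
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * k * c * p * ls * cfrg

# Hypothetical storm event on a 50 ha field.
sed = musle_sediment_yield(q_surf=12.0, q_peak=0.8, area_ha=50.0,
                           k=0.28, c=0.2, p=1.0, ls=1.2, cfrg=1.0)
print(round(sed, 1))
```

The comparison in the abstract amounts to feeding this same equation with `q_surf` and `q_peak` produced by two different hydrologic engines (lumped SWAT versus fully distributed LHM), which is why the runoff scheme alone can change the predicted yield.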
An integration scheme for stiff solid-gas reactor models
Directory of Open Access Journals (Sweden)
Bjarne A. Foss
2001-04-01
Many dynamic models encounter numerical integration problems because of a large span in the dynamic modes. In this paper we develop a numerical integration scheme for systems that include a gas phase and solid and liquid phases, such as a gas-solid reactor. The method is based on neglecting fast dynamic modes and exploiting the structure of the algebraic equations. The integration method is suitable for a large class of industrially relevant systems. The methodology has proven remarkably efficient: in practice it has performed excellently and has been a key factor in the success of the industrial simulator for electrochemical furnaces for ferro-alloy production.
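The core idea, replacing a fast mode by its algebraic equilibrium so only the slow dynamics are integrated, can be sketched on a toy stiff system. The two-equation model below is assumed purely for illustration and is unrelated to the furnace chemistry.

```python
import math

# Toy stiff system (assumed for illustration):
#   dy_slow/dt       = -0.1 * y_slow + y_fast
#   eps * dy_fast/dt = sin(y_slow) - y_fast,   eps << 1
# Quasi-steady-state reduction: set y_fast = sin(y_slow) algebraically,
# so the explicit step size is limited by the slow mode only.

def integrate_qssa(y_slow, dt, steps):
    for _ in range(steps):
        y_fast = math.sin(y_slow)              # fast mode at equilibrium
        y_slow += dt * (-0.1 * y_slow + y_fast)  # explicit slow update
    return y_slow

print(integrate_qssa(1.0, dt=0.01, steps=100))
```

Without the reduction, an explicit integrator would need a step size of order `eps`; with it, the step is set by the slow time scale, which is the efficiency gain the abstract describes.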
Modeling Phosphorous Losses from Seasonal Manure Application Schemes
Menzies, E.; Walter, M. T.
2015-12-01
Excess nutrient loading, especially nitrogen and phosphorus, to surface waters is a common and significant problem throughout the United States. While pollution remediation efforts are continuously improving, the most effective treatment remains to limit the source. Appropriate timing of fertilizer application to reduce nutrient losses is currently a hotly debated topic in the Northeastern United States; winter spreading of manure is under special scrutiny. We plan to evaluate the loss of phosphorous to surface waters from agricultural systems under varying seasonal fertilization schemes in an effort to determine the impacts of fertilizers applied throughout the year. The Cayuga Lake basin, located in the Finger Lakes region of New York State, is a watershed dominated by agriculture where a wide array of land management strategies can be found. The evaluation will be conducted on the Fall Creek Watershed, a large sub-basin of the Cayuga Lake Watershed. The Fall Creek Watershed covers approximately 33,000 ha in central New York State, with approximately 50% of this land being used for agriculture. We plan to use the Soil and Water Assessment Tool (SWAT) to model a number of seasonal fertilization regimes such as summer-only spreading and year-round spreading (including winter applications), as well as others. We will use the model to quantify the phosphorous load to surface waters from these different fertilization schemes and determine the impacts of manure applied at different times throughout the year. More detailed knowledge about how seasonal fertilization schemes impact phosphorous losses will provide more information to stakeholders concerning the impacts of agriculture on surface water quality. Our results will help farmers and extensionists make more informed decisions about appropriate timing of manure application for reduced phosphorous losses and surface water degradation, as well as aid lawmakers in improving policy surrounding manure application.
Tissue modeling schemes in low energy breast brachytherapy.
Afsharpour, Hossein; Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank
2011-11-21
Breast tissue is heterogeneous and is mainly composed of glandular (G) and adipose (A) tissues. The proportion of G versus A varies considerably among the population. The absorbed dose distributions in accelerated partial breast irradiation therapy with low energy photon brachytherapy sources are very sensitive to tissue heterogeneities. Current clinical algorithms use the recommendations of the AAPM TG43 report, which approximates human tissues by unit density water. The aim of this study is to investigate various breast tissue modeling schemes for low energy brachytherapy. A special case of breast permanent seed implant is considered here. Six modeling schemes are considered. Uniform and non-uniform water breast (UWB and NUWB) consider the density but neglect the effect of the composition of tissues. The uniform and non-uniform G/A breast (UGAB and NUGAB) as well as the age-dependent breast (ADB) models consider the effect of the composition. The segmented breast tissue (SBT) method uses a density threshold to distinguish between G and A tissues. The PTV D90 metric is used for the analysis and is based on the dose to water (D90(w,m)). D90(m,m) is also reported for comparison to D90(w,m). The two-month post-implant D90(w,m) averaged over 38 patients is smaller in NUWB than in UWB by about 4.6% on average (ranging from 5% to 13%). Large average differences of the G/A breast models with TG43 (17% and 26% in UGAB and NUGAB, respectively) show that the effect of the chemical composition dominates the effect of the density on dose distributions. D90(w,m) is on average 12% larger in SBT than in TG43. These differences can be as low as 4% or as high as 20% when individual patients are considered. The high sensitivity of dosimetry to the modeling scheme argues in favor of an agreement on a standard tissue modeling approach to be used in low energy breast brachytherapy. SBT appears to generate the most geometrically reliable breast tissue models in this…
Experimental design schemes for learning Boolean network models
Atias, Nir; Gershenzon, Michal; Labazin, Katia; Sharan, Roded
2014-01-01
Motivation: A holy grail of biological research is a working model of the cell. Current modeling frameworks, especially in the protein–protein interaction domain, are mostly topological in nature, calling for stronger and more expressive network models. One promising alternative is logic-based or Boolean network modeling, which has been successfully applied to model signaling regulatory circuits in humans. Learning such models requires observing the system under a sufficient number of different conditions. To date, the amount of measured data is the main bottleneck in learning informative Boolean models, underscoring the need for efficient experimental design strategies. Results: We developed novel design approaches that greedily select an experiment to be performed so as to maximize the difference or the entropy in the results it induces with respect to current best-fit models. Unique to our maximum difference approach is the ability to account for all (a possibly exponential number of) Boolean models displaying high fit to the available data. We applied both approaches to simulated and real data from the EGFR and IL1 signaling systems in humans. We demonstrate the utility of the developed strategies in substantially improving on a random selection approach. Our design schemes highlight the redundancy in these datasets, leading to up to 11-fold savings in the number of experiments to be performed. Availability and implementation: Source code will be made available upon acceptance of the manuscript. Contact: roded@post.tau.ac.il PMID:25161232
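The maximum-entropy selection idea can be sketched on a toy problem: given a set of candidate Boolean models still consistent with the data, pick the experiment whose predicted outcomes are most evenly split among them. The three candidate models and the two-gene inputs below are hypothetical; the paper's models and the fit criterion are far richer.

```python
import math
from itertools import product

# Candidate Boolean models still fitting the data: each maps a 2-gene
# input state to one output (hypothetical toy models for illustration).
models = [
    lambda a, b: a and b,
    lambda a, b: a or b,
    lambda a, b: a != b,
]

def outcome_entropy(experiment):
    """Entropy of the models' predicted outcomes for one experiment."""
    outcomes = [m(*experiment) for m in models]
    counts = {}
    for o in outcomes:
        counts[o] = counts.get(o, 0) + 1
    n = len(outcomes)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Greedy design: run the experiment whose outcome is most informative.
experiments = list(product([0, 1], repeat=2))
best = max(experiments, key=outcome_entropy)
print(best, round(outcome_entropy(best), 3))
```

An experiment on which all candidate models agree has zero entropy and teaches nothing; the greedy rule always prefers an input that discriminates among the surviving models.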
Study on noise prediction model and control schemes for substation.
Chen, Chuanmin; Gao, Yang; Liu, Songtao
2014-01-01
With the government's emphasis on the environmental issues of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation has a negative effect on the ambient environment. This paper focuses on using acoustic software for simulation and calculation to control substation noise. According to the characteristics of substation noise and the techniques of noise reduction, a substation's acoustic field model was established with the SoundPLAN software to predict the scope of substation noise. On this basis, four noise control schemes were advanced to provide helpful references for noise control during a new substation's design and construction process. The feasibility and application effect of these control schemes were verified by simulation modeling. The simulation results show that the substation always has the problem of excessive noise at the boundary under conventional measures. The excess noise can be efficiently reduced by taking the corresponding noise reduction methods.
A finite-volume scheme for a kidney nephron model
Directory of Open Access Journals (Sweden)
Seguin Nicolas
2012-04-01
We present a finite volume type scheme to solve a transport nephron model. The model consists of a system of transport equations with specific boundary conditions. The transport velocity is driven by another equation and can undergo sign changes during the transient regime. This is the main difficulty for the numerical resolution. The scheme we propose is based on an explicit resolution and is stable under a CFL condition which does not depend on the stiffness of the source terms.
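The basic ingredient, an explicit upwind finite-volume update that switches its stencil with the sign of the velocity, can be sketched as follows. This is the generic upwind scheme on an assumed periodic grid, not the paper's specific nephron scheme or its boundary conditions.

```python
# Upwind finite-volume update for du/dt + d(v u)/dx = 0 on a periodic
# grid, with an interface velocity that may change sign, under the CFL
# condition |v| dt / dx <= 1.
def upwind_step(u, v, dx, dt):
    n = len(u)
    flux = []
    for i in range(n):
        vi = v[i]  # velocity at interface i (between cells i-1 and i)
        # Upwinding: take the value from the cell the flow comes from.
        flux.append(vi * (u[i - 1] if vi >= 0 else u[i]))
    # Cell i is bounded by interfaces i (left) and i+1 (right).
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

u = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
v = [0.5] * 6        # constant here; per-interface sign changes allowed
dx, dt = 1.0, 1.0    # CFL = 0.5
u1 = upwind_step(u, v, dx, dt)
print(u1, sum(u1))
```

Because the update is written in flux form, the total mass is conserved exactly regardless of where the velocity changes sign, which is the property the sign-changing transient regime makes delicate for non-conservative discretizations.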
Modeling of power control schemes in induction cooking devices
Beato, Alessio; Conti, Massimo; Turchetti, Claudio; Orcioni, Simone
2005-06-01
In recent years, with remarkable advances in power semiconductor devices and electronic control systems, it has become possible to apply the induction heating technique for domestic use. In order to achieve the supply power required by these devices, high-frequency resonant inverters are used: the force commutated, half-bridge series resonant converter is well suited for induction cooking since it offers an appropriate balance between complexity and performance. Power control is a key issue in attaining efficient and reliable products. This paper describes and compares four power control schemes applied to the half-bridge series resonant inverter. Pulse frequency modulation is the most common control scheme: according to this strategy, the output power is regulated by varying the switching frequency of the inverter circuit. The other methods considered, originally developed for industrial induction heating applications, are pulse amplitude modulation, asymmetrical duty cycle, and pulse density modulation, which are based, respectively, on variation of the amplitude of the input supply voltage, of the duty cycle of the switching signals, and of the number of switching pulses. Each description is provided with a detailed mathematical analysis; an analytical model, built to simulate the circuit topology, is implemented in the Matlab environment in order to obtain the steady-state values and waveforms of currents and voltages. For the purposes of this study, switches and all reactive components are modelled as ideal and the "heating-coil/pan" system is represented by an equivalent circuit made up of a series-connected resistance and inductance.
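The pulse frequency modulation principle can be illustrated with the fundamental-frequency impedance of the series RLC load: moving the switching frequency away from resonance raises the reactance and lowers the delivered power. The component values below are assumptions typical of the topology, not taken from the paper.

```python
import math

def output_power(v_rms, r, l, c, f_switch):
    """Average power delivered to the load resistance of a series RLC
    circuit driven at the switching frequency (fundamental only)."""
    w = 2 * math.pi * f_switch
    z = math.sqrt(r ** 2 + (w * l - 1 / (w * c)) ** 2)  # impedance magnitude
    i_rms = v_rms / z
    return i_rms ** 2 * r

# Assumed equivalent "heating-coil/pan" values, for illustration only.
r, l, c = 5.0, 60e-6, 0.3e-6
f0 = 1 / (2 * math.pi * math.sqrt(l * c))      # resonant frequency
p_res = output_power(230.0, r, l, c, f0)       # maximum power
p_up = output_power(230.0, r, l, c, 1.3 * f0)  # PFM: raise f, lower P
print(round(f0), round(p_res), round(p_up))
```

At resonance the reactances cancel and the power is simply `v_rms**2 / r`; above resonance the inductive term dominates, so sweeping the switching frequency upward gives a monotone power reduction, which is exactly the PFM control handle.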
Educational NASA Computational and Scientific Studies (enCOMPASS)
Memarsadeghi, Nargess
2013-01-01
Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between the computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using the developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goal of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in the areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and
Dynamics Model Abstraction Scheme Using Radial Basis Functions
Directory of Open Access Journals (Sweden)
Silvia Tolu
2012-01-01
This paper presents a control model for object manipulation. Properties of objects and environmental conditions influence motor control and learning. System dynamics depend on an unobserved external context, for example, the work load of a robot manipulator. The dynamics of a robot arm change as it manipulates objects with different physical properties, for example, the mass, shape, or mass distribution. We address active sensing strategies to acquire object dynamical models with a radial basis function neural network (RBF). Experiments are done using a real robot's arm, and trajectory data are gathered during various trials manipulating different objects. Biped robots do not have high-force joint servos, and the control system can hardly compensate for all the inertia variation of the adjacent joints and disturbance torque in dynamic gait control. In order to achieve smoother control and lead to more reliable sensorimotor complexes, we evaluate and compare a sparse velocity-driven versus a dense position-driven control scheme.
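A minimal RBF function approximator of the kind used for such dynamics abstraction can be sketched as follows, assuming Gaussian basis functions and as many centers as samples so the weights come from a small linear solve. The target function and centers are hypothetical; the paper's network models arm dynamics, not a scalar curve.

```python
import math

def rbf(x, c, width=1.0):
    """Gaussian radial basis function centered at c."""
    return math.exp(-((x - c) / width) ** 2)

def solve3(a, b):
    """Gauss-Jordan elimination for a 3x3 system (no pivoting; fine for
    the diagonally dominant toy matrix used here)."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = m[i][i]
        m[i] = [x / p for x in m[i]]
        for j in range(3):
            if j != i:
                f = m[j][i]
                m[j] = [xj - f * xi for xj, xi in zip(m[j], m[i])]
    return [m[k][3] for k in range(3)]

# Fit f(x) = sum_k w_k * rbf(x, c_k) through three samples of y = x^2.
centers = [0.0, 1.0, 2.0]
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
phi = [[rbf(x, c) for c in centers] for x in xs]   # design matrix
w = solve3(phi, ys)
y_hat = sum(wk * rbf(1.0, ck) for wk, ck in zip(w, centers))
print(round(y_hat, 6))
```

With one center per sample the fit interpolates the training data exactly; in the dynamics-abstraction setting the inputs would be joint states and the outputs predicted torques, learned per manipulated object.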
An interface capturing scheme for modeling atomization in compressible flows
Garrick, Daniel P.; Hagen, Wyatt A.; Regele, Jonathan D.
2017-09-01
The study of atomization in supersonic flow is critical to ensuring reliable ignition of scramjet combustors under startup conditions. Numerical methods incorporating surface tension effects have largely focused on the incompressible regime as most atomization applications occur at low Mach numbers. Simulating surface tension effects in compressible flow requires robust numerical methods that can handle discontinuities caused by both shocks and material interfaces with high density ratios. In this work, a shock and interface capturing scheme is developed that uses the Harten-Lax-van Leer-Contact (HLLC) Riemann solver while a Tangent of Hyperbola for INterface Capturing (THINC) interface reconstruction scheme retains the fluid immiscibility condition in the volume fraction and phasic densities in the context of the five equation model. The approach includes the effects of compressibility, surface tension, and molecular viscosity. One and two-dimensional benchmark problems demonstrate the desirable interface sharpening and conservation properties of the approach. Simulations of secondary atomization of a cylindrical water column after its interaction with a shockwave show good qualitative agreement with experimentally observed behavior. Three-dimensional examples of primary atomization of a liquid jet in a Mach 2 crossflow demonstrate the robustness of the method.
Model evaluation of marine primary organic aerosol emission schemes
Directory of Open Access Journals (Sweden)
B. Gantt
2012-09-01
In this study, several marine primary organic aerosol (POA) emission schemes have been evaluated using the GEOS-Chem chemical transport model in order to provide guidance for their implementation in air quality and climate models. These emission schemes, based on varying dependencies of chlorophyll a concentration ([chl a]) and 10 m wind speed (U10), have large differences in their magnitude, spatial distribution, and seasonality. Model comparison with weekly and monthly mean values of the organic aerosol mass concentration at two coastal sites shows that the source function exclusively related to [chl a] does a better job replicating surface observations. Sensitivity simulations in which the negative U10 and positive [chl a] dependence of the organic mass fraction of sea spray aerosol are enhanced show improved prediction of the seasonality of the marine POA concentrations. A top-down estimate of submicron marine POA emissions based on the parameterization that compares best to the observed weekly and monthly mean values of marine organic aerosol surface concentrations has a global average emission rate of 6.3 Tg yr−1. Evaluation of existing marine POA source functions against a case study during which marine POA contributed the major fraction of submicron aerosol mass shows that none of the existing parameterizations are able to reproduce the hourly-averaged observations. Our calculations suggest that in order to capture episodic events and short-term variability in submicron marine POA concentration over the ocean, new source functions need to be developed that are grounded in the physical processes unique to the organic fraction of sea spray aerosol.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
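The two-step idea, a logistic occurrence model gating a separate wet-day amount model, can be sketched as below. The predictors and all coefficients are assumed for illustration (in practice both models would be fitted to gauge data), so the numeric outputs mean nothing hydrologically.

```python
import math

# Two-step estimation sketch: (1) logistic model for precipitation
# occurrence, (2) amount model applied only where a wet day is predicted.

def occurrence_prob(elevation_km, dist_to_gauge_km,
                    b0=-1.0, b1=1.2, b2=-0.1):
    """Logistic regression for P(wet); coefficients are hypothetical."""
    z = b0 + b1 * elevation_km + b2 * dist_to_gauge_km
    return 1.0 / (1.0 + math.exp(-z))

def amount_mm(elevation_km, a0=2.0, a1=3.5):
    """Wet-day amount model (hypothetical linear fit)."""
    return a0 + a1 * elevation_km

def estimate(elevation_km, dist_to_gauge_km, threshold=0.5):
    p_wet = occurrence_prob(elevation_km, dist_to_gauge_km)
    return amount_mm(elevation_km) if p_wet >= threshold else 0.0

print(estimate(1.5, 2.0))   # high elevation, near gauge: predicted wet
print(estimate(0.1, 8.0))   # low elevation, far: predicted dry
```

Separating occurrence from amount lets the interpolated field reproduce the intermittency of daily precipitation (many exact zeros), which a single regression on amounts smears out.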
Modeling Prioritized Hard Handoff Management Scheme for Wireless Mobile Networks
Directory of Open Access Journals (Sweden)
BISWAJIT BHOWMIK
2012-08-01
The channel associated with the current connection serviced by a base station is changed while a call is in progress. Usually, continuous service is achieved by supporting handoff from one cell to another. A handoff is often initiated either by crossing a cell boundary or by a deterioration in the quality of the signal in the current channel. The existing call is then transferred to a new base station. For traffic that is not stationary and moves away from the serving base station, the chances that a call will be handed off increase. In this paper we propose a scheme, MH_2S, for modeling and implementing a traffic model with handoff behavior for wireless mobile networks. The simulation model MH_2S with priority is developed to investigate the performance of the hard handoff strategy. A novelty of the proposed MH_2S model is that it can reduce the blocking rate of handoff calls. In addition, measurement of the blocking probabilities for both originating calls and handoff calls is another achievement of the model.
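A standard analytic counterpart to such a prioritized scheme is the guard channel model, sketched below: of C channels, g are reserved for handoffs, and the steady-state blocking probabilities follow from a birth-death chain. The rates are assumed for illustration; this is the classic model, not MH_2S itself.

```python
# Birth-death model of a prioritized hard handoff (guard channel)
# scheme: C channels, g reserved for handoffs. New calls are admitted
# while fewer than C - g channels are busy; handoff calls while fewer
# than C are busy.

def guard_channel_blocking(c, g, lam_new, lam_ho, mu):
    # Unnormalized steady-state probabilities p[n] of n busy channels:
    # p[n] = p[n-1] * (arrival rate at n-1) / (n * mu).
    p = [1.0]
    for n in range(1, c + 1):
        rate_in = lam_new + lam_ho if n - 1 < c - g else lam_ho
        p.append(p[-1] * rate_in / (n * mu))
    total = sum(p)
    p = [x / total for x in p]
    p_block_new = sum(p[c - g:])   # new call finds >= C - g busy
    p_block_ho = p[c]              # handoff finds all C busy
    return p_block_new, p_block_ho

pn, ph = guard_channel_blocking(c=10, g=2, lam_new=4.0, lam_ho=1.0, mu=1.0)
print(round(pn, 4), round(ph, 4))
```

Reserving guard channels makes the handoff dropping probability strictly smaller than the new-call blocking probability, which is the prioritization effect the abstract's simulation measures.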
Modeling Marine Stratocumulus with a Detailed Microphysical Scheme
Institute of Scientific and Technical Information of China (English)
ZHAO Chunsheng(赵春生); Yutaka ISHIZAKA
2004-01-01
A one-dimensional 3rd-order turbulence closure model with size-resolved microphysics and radiative transfer has been developed for investigating aerosol and cloud interactions in the stratocumulus-topped marine boundary layer. A new method is presented for coupling the dynamical model and the microphysical model. This scheme allows the liquid water related correlations to be directly calculated rather than parameterized. On 21 April 2001, a marine stratocumulus was observed by the Caesar aircraft over the west Pacific Rim south of Japan during the 2001 APEX/ACE-Asia field measurements. This cloud is simulated by the model presented here. The model results show that the general features of the stratocumulus-topped marine boundary layer predicted by the model are in agreement with the measurements. A new onboard cloud condensation nuclei (CCN) counter provides not only the total CCN number concentration (as traditional CCN counters do at a certain supersaturation) but also CCN size distribution information. Using these CCN data, model responses to different initial CCN concentrations are examined. The model results are consistent with both observations and expectations. The numerical results show that the cloud microphysical properties are changed fundamentally by different initial CCN concentrations but the cloud liquid water content does not differ significantly. Different initial CCN loadings have large impacts on the evolution of cloud microstructure and radiative transfer while they have a modest effect on thermodynamics. Increased CCN concentration leads to a significant decrease in cloud effective radius.
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
THE APPLICATION OF TIDAL SIGNAL EXCLUSION SCHEME FROM INITIALIZATION IN A GENERAL CIRCULATION MODEL
Institute of Scientific and Technical Information of China (English)
杨学胜; 王军; 陈谊
2004-01-01
In this paper, some corrections were made to the assumption that the forcing is quasi-static, which is the basis of the nonlinear diabatic initialization scheme adopted by the global model T106L19. Thus the tidal signal is expressed and excluded from the initialization scheme. It is shown that the new scheme captures the semi-diurnal pressure variation and is much closer to the uninitialized field. Compared with the standard initialization scheme, both the anomaly correlation coefficients and the RMS of the 500 hPa geopotential height simulated under the new scheme are improved significantly.
On usage of CABARET scheme for tracer transport in INM ocean model
Diansky, Nikolay; Kostrykin, Sergey; Gusev, Anatoly; Salnikov, Nikolay
2010-06-01
The contemporary state of ocean numerical modelling sets several requirements for the numerical advection schemes used in ocean general circulation models (OGCMs). The most important are conservation, monotonicity and numerical efficiency, including good parallelization properties. An investigation of several advection schemes shows that one of the best schemes satisfying these criteria is the CABARET scheme. A 3D modification of the CABARET scheme was used to develop a new transport module (for temperature and salinity) for the Institute of Numerical Mathematics ocean model (INMOM). Testing of this module on some common benchmarks shows high accuracy in comparison with the second-order advection scheme used in the INMOM. The new module was incorporated into the INMOM, and experiments with the modified model showed a better simulation of oceanic circulation than its previous version.
Hérivaux, Cécile; Orban, Philippe; Brouyère, Serge
2013-10-15
In Europe, 30% of groundwater bodies are considered to be at risk of not achieving the Water Framework Directive (WFD) 'good status' objective by 2015, and 45% are in doubt of doing so. Diffuse agricultural pollution is one of the main pressures affecting groundwater bodies. To tackle this problem, the WFD requires Member States to design and implement cost-effective programs of measures to achieve the 'good status' objective by 2027 at the latest. Hitherto, action plans have mainly consisted of promoting the adoption of Agri-Environmental Schemes (AES). This raises a number of questions concerning the effectiveness of such schemes for improving groundwater status, and the economic implications of their implementation. We propose a hydro-economic model that combines a hydrogeological model to simulate groundwater quality evolution with agronomic and economic components to assess the expected costs, effectiveness, and benefits of AES implementation. This hydro-economic model can be used to identify cost-effective AES combinations at groundwater-body scale and to show the benefits to be expected from the resulting improvement in groundwater quality. The model is applied here to a rural area encompassing the Hesbaye aquifer, a large chalk aquifer which supplies about 230,000 inhabitants in the city of Liege (Belgium) and is severely contaminated by agricultural nitrates. We show that the time frame within which improvements in the Hesbaye groundwater quality can be expected may be much longer than that required by the WFD. Current WFD programs based on AES may be inappropriate for achieving the 'good status' objective in the most productive agricultural areas, in particular because these schemes are insufficiently attractive. Achieving 'good status' by 2027 would demand a substantial change in the design of AES, involving costs that may not be offset by benefits in the case of chalk aquifers with long renewal times. Copyright © 2013 Elsevier Ltd. All rights reserved.
A positive and entropy-satisfying finite volume scheme for the Baer-Nunziato model
Coquel, Frédéric; Hérard, Jean-Marc; Saleh, Khaled
2017-02-01
We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer-Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer-Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer-Nunziato model, namely Schwendeman-Wahle-Kapila's Godunov-type scheme [39] and Tokareva-Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.
Numerical tests of efficiency of the retrospective time integration scheme in the self-memory model
Institute of Scientific and Technical Information of China (English)
GU Xiangqian; YOU Xingtian; ZHU He; CAO Hongxing
2004-01-01
A set of numerical tests was carried out to compare the retrospective time integration scheme in a self-memory model, whose dynamic kernel is the barotropic quasi-geostrophic model, with the ordinary centered difference scheme in the barotropic quasi-geostrophic model. The Rossby-Haurwitz wave function was taken as the initial field for both schemes. The results show that, in comparison with the ordinary centered difference scheme, the retrospective time integration scheme reduces the forecast error by two orders of magnitude, and the forecast error increases very little as the time step lengthens. Therefore, the retrospective time integration scheme has the advantages of improving forecast accuracy, extending the predictable duration and reducing the computational cost.
Nie, Suping; Zhu, Jiang; Luo, Yong
2010-05-01
The purpose of this study is to explore the performance of different model error schemes in soil moisture data assimilation. Based on the ensemble Kalman filter (EnKF) and the atmosphere-vegetation interaction model (AVIM), point-scale analysis results for three schemes, 1) covariance inflation (CI), 2) direct random disturbance (DRD), and 3) error source random disturbance (ESRD), are compared under conditions of different observational error estimations, different observation layers, and different observation intervals using a series of idealized experiments. The results show that all three schemes obtain good assimilation results when the assumed observational error is an accurate statistical representation of the actual error used to perturb the original truth, with the ESRD scheme having the smallest root mean square error (RMSE). Overestimation or underestimation of the observational errors markedly affects the assimilation results of the CI and DRD schemes: the performance of these two schemes deteriorates clearly, while the ESRD scheme retains its capability well. When the number of observation layers or the observation interval increases, the performance of both the CI and DRD schemes declines evidently; but for the ESRD scheme, which can assimilate multi-layer observations in a coordinated way, the additional observations further improve the assimilation results. Moreover, as the ESRD scheme incorporates a degree of model error estimation in its assimilation process, it also performs well in assimilating sparse-in-time observations.
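The covariance inflation (CI) scheme compared above is a standard device in ensemble data assimilation: ensemble anomalies are scaled by a factor slightly greater than one before each analysis to compensate for underestimated spread. Below is a minimal sketch, not the AVIM setup of the paper; the toy two-layer "soil column" state, member count, and inflation factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def inflate(ensemble, factor):
    """Covariance inflation (CI): scale anomalies about the ensemble mean,
    which multiplies the sample covariance by factor**2."""
    mean = ensemble.mean(axis=0)
    return mean + factor * (ensemble - mean)

def enkf_analysis(ensemble, obs, obs_var, H):
    """One stochastic (perturbed-observation) EnKF analysis step with a
    linear observation operator H; rows of `ensemble` are members."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # observation-space anomalies
    Pyy = Y.T @ Y / (n - 1) + obs_var * np.eye(H.shape[0])
    Pxy = X.T @ Y / (n - 1)
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    obs_pert = obs + np.sqrt(obs_var) * rng.standard_normal((n, H.shape[0]))
    return ensemble + (obs_pert - ensemble @ H.T) @ K.T

# Toy two-layer "soil column": only the top layer is observed.
H = np.array([[1.0, 0.0]])
prior = rng.normal([0.30, 0.25], 0.02, size=(50, 2))  # 50 ensemble members
prior = inflate(prior, 1.1)                           # CI with factor 1.1
posterior = enkf_analysis(prior, np.array([0.35]), obs_var=1e-4, H=H)
```

The analysis pulls the observed top layer toward the observation while the unobserved layer is corrected through the ensemble cross-covariance.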
Modeling and Simulation of Handover Scheme in Integrated EPON-WiMAX Networks
DEFF Research Database (Denmark)
Yan, Ying; Dittmann, Lars
2011-01-01
In this paper, we tackle the seamless handover problem in integrated optical wireless networks. Our model applies to the converged network of EPON and WiMAX, and a mobility-aware signaling protocol is proposed. The proposed handover scheme, the Integrated Mobility Management Scheme (IMMS), is assisted by enhancing the traditional MPCP signaling protocol, which cooperatively collects mobility information from the front-end wireless network and makes centralized bandwidth allocation decisions in the backhaul optical network. The integrated network architecture and the joint handover scheme are simulated using the OPNET modeler. Results validate the protocol, i.e., the integrated handover scheme achieves better network performance.
Projective Ring Line Encompassing Two-Qubits
Saniga, M; Pracna, P; Planat, Michel; Pracna, Petr; Saniga, Metod
2006-01-01
The projective line over the (non-commutative) ring of two-by-two matrices with coefficients in GF(2) is found to fully accommodate the algebra of 15 operators -- generalized Pauli matrices -- characterizing two-qubit systems. The relevant sub-configuration consists of 15 points, each of which is either simultaneously distant or simultaneously neighbour to (any) two given distant points of the line. The operators can be identified with the points in such a one-to-one manner that their commutation relations are exactly reproduced by the underlying geometry of the points, with the ring-geometrical notions of neighbour/distant answering, respectively, to the operational ones of commuting/non-commuting. This finding opens up rather unexpected vistas for an algebraic geometrical modelling of finite-dimensional quantum systems and gives their numerous applications a wholly new perspective.
Modeling and Simulation of Downlink Subcarrier Allocation Schemes in LTE
DEFF Research Database (Denmark)
Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars
2012-01-01
The efficient utilization of the air interface in the LTE standard is achieved through a combination of subcarrier allocation schemes, adaptive modulation and coding, and transmission power allotment. The scheduler in the base station has a major role in achieving the required QoS and the overall...
A second-order characteristic line scheme for solving a juvenile-adult model of amphibians.
Deng, Keng; Wang, Yi
2015-01-01
In this paper, we develop a second-order characteristic line scheme for a nonlinear hierarchical juvenile-adult population model of amphibians. The idea of the scheme is not to follow the characteristics from the initial data, but for each time step to find the origins of the grid nodes at the previous time level. Numerical examples are presented to demonstrate the accuracy of the scheme and its capability to handle solutions with singularity.
Gillibrand, P. A.; Herzfeld, M.
2016-05-01
We present a flux-form semi-Lagrangian (FFSL) advection scheme designed for offline scalar transport simulation with coastal ocean models using curvilinear horizontal coordinates. The scheme conserves mass, overcoming problems of mass conservation typically experienced with offline transport models, and permits long time steps (relative to the Courant number) to be used by the offline model. These attributes make the method attractive for offline simulation of tracers in biogeochemical or sediment transport models using archived flow fields from hydrodynamic models. We describe the FFSL scheme, and test it on two idealised domains and one real domain, the Great Barrier Reef in Australia. For comparison, we also include simulations using a traditional semi-Lagrangian advection scheme for the offline simulations. We compare tracer distributions predicted by the offline FFSL transport scheme with those predicted by the original hydrodynamic model, assess the conservation of mass in all cases and contrast the computational efficiency of the schemes. We find that the FFSL scheme produced very good agreement with the distributions of tracer predicted by the hydrodynamic model, and conserved mass with an error of a fraction of one percent. In terms of computational speed, the FFSL scheme was comparable with the semi-Lagrangian method and an order of magnitude faster than the full hydrodynamic model, even when the latter ran in parallel on multiple cores. The FFSL scheme presented here therefore offers a viable mass-conserving and computationally-efficient alternative to traditional semi-Lagrangian schemes for offline scalar transport simulation in coastal models.
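The key property of a flux-form scheme, discrete mass conservation, can be seen in a few lines. The sketch below is a plain first-order upwind finite-volume scheme on a periodic 1D grid, not the FFSL scheme of the paper (grid size, velocity, and Courant number are illustrative assumptions): because each cell is updated by a difference of face fluxes, whatever leaves one cell enters its neighbour, so total mass is conserved to machine precision.

```python
import numpy as np

def flux_form_upwind(c, u, dt, dx):
    """One step of a finite-volume (flux-form) upwind advection scheme on
    a periodic 1D grid with constant velocity u > 0. Each cell is updated
    by the difference of its face fluxes, so total mass is conserved."""
    F = u * c                                  # flux through each cell's right face
    return c - (dt / dx) * (F - np.roll(F, 1))

n, u = 200, 0.5
dx = 1.0 / n
dt = 0.8 * dx / u                              # Courant number 0.8
x = (np.arange(n) + 0.5) * dx
c = np.exp(-200.0 * (x - 0.3) ** 2)            # initial tracer blob
mass0 = c.sum() * dx
for _ in range(100):
    c = flux_form_upwind(c, u, dt, dx)
mass = c.sum() * dx                            # equals mass0 up to rounding
```

The upwind flux also keeps the tracer monotone and non-negative for Courant numbers below one, which is why flux-form schemes are attractive for offline biogeochemical tracers.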
Analyses of models for promotion schemes and ownership arrangements
DEFF Research Database (Denmark)
Hansen, Lise-Lotte Pade; Schröder, Sascha Thorsten; Münster, Marie
2011-01-01
Micro-Combined Heat and Power systems may contribute to changing the energy system at the residential level. As part of a distributed generation system, stationary fuel cells constitute a promising element in a potentially sustainable and environmentally friendly energy system. Fuel cell based microCHP will be able to contribute to an innovative system where the customer produces his own heat and partly his own electricity. Furthermore, stationary fuel cells as part of a distributed generation system are also regarded as having the potential to improve national security of supply as well as increase national competitiveness. The stationary fuel cell technology is still in a rather early stage of development and faces a long list of challenges and barriers, some of which are linked directly to the technology through the need for cost decreases and reliability improvements; others are linked to the political stage, where the necessary support schemes have to be in place in combination with guarantees that the political objectives for the future energy system do not change dramatically. Examining how countries should opt to support stationary fuel cells, we find that in Denmark it would be promising to apply the net metering based support scheme for households with an electricity consumption exceeding the electricity production from the fuel cell. In France and Portugal the most promising support scheme...
Yang, Xiaofeng; Han, Daozhi
2017-02-01
In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank-Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
Verification and comparison of four numerical schemes for a 1D viscoelastic blood flow model.
Wang, Xiaofei; Fullana, Jose-Maria; Lagrée, Pierre-Yves
2015-01-01
A reliable and fast numerical scheme is crucial for the 1D simulation of blood flow in compliant vessels. In this paper, a 1D blood flow model is incorporated with a Kelvin-Voigt viscoelastic arterial wall. This leads to a nonlinear hyperbolic-parabolic system, which is then solved with four numerical schemes, namely: MacCormack, Taylor-Galerkin, monotonic upwind scheme for conservation law and local discontinuous Galerkin. The numerical schemes are tested on a single vessel, a simple bifurcation and a network with 55 arteries. The numerical solutions are checked favorably against analytical, semi-analytical solutions or clinical observations. Among the numerical schemes, comparisons are made in four important aspects: accuracy, ability to capture shock-like phenomena, computational speed and implementation complexity. The suitable conditions for the application of each scheme are discussed.
Institute of Scientific and Technical Information of China (English)
Zeeshan Ahmad; Meng Jun
2015-01-01
DEA is a nonparametric method used in operations research and economics for the evaluation of production frontiers. Its distinct strength is in handling assessment problems with multiple inputs and, in particular, multiple outputs. This paper used the DεC2R model of DEA to assess the comparative efficiency of multiple schemes of agricultural industrial structure, and in the end we chose the most favorable, or "optimal", scheme. In addition, using some functional insights from the DEA model, non-optimal or less optimal schemes were also improved to some extent. Assessment and selection of optimal schemes of agricultural industrial structure using the DEA model gave greater and better insight into agricultural industrial structure, and this was the first such research in Pakistan.
A Markov Chain Model for the Analysis of Round-Robin Scheduling Scheme
Shukla, D; Singhai, Rahul; Agarwal, R K
2010-01-01
In the literature on the round-robin scheduling scheme, each job is processed one after another, each receiving a fixed quantum. In first-come first-served (FCFS) scheduling, each process is executed once the previously arrived process has completed. Both scheduling schemes are used in this paper as special cases. A Markov chain model is used to compare several scheduling schemes of this class. An index measure is defined to compare the model-based efficiency of the different scheduling schemes. One scheduling scheme, a mixture of FCFS and round-robin, is found to be efficient in terms of the model-based study. System simulation is used to validate the conclusions.
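The two special cases combined in the model above can be sketched in a short simulation; round-robin with a fixed quantum degenerates to FCFS once the quantum is at least as long as the longest burst. The job lengths and quantum below are illustrative assumptions, not the paper's Markov-chain parameters.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling for jobs all arriving at t = 0 and
    return each job's completion time. FCFS is the special case where the
    quantum is at least as long as the longest burst."""
    remaining = list(burst_times)
    done = [0] * len(burst_times)
    queue = deque(range(len(burst_times)))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])       # serve one quantum (or less)
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                    # unfinished job rejoins the queue
        else:
            done[i] = t
    return done

print(round_robin([5, 3, 1], quantum=2))       # round-robin: [9, 8, 5]
print(round_robin([5, 3, 1], quantum=100))     # degenerates to FCFS: [5, 8, 9]
```

Note how round-robin lets the short job finish early (at t = 5 instead of t = 9) at the cost of delaying the longest job.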
Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques
Directory of Open Access Journals (Sweden)
Xiong, Linping; Zhang, Lulu; Tang, Weidong; Ma, Yuqin
2012-01-01
China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban non-working residents. An urban population model was created in this paper for China's medical insurance scheme using microsimulation techniques. The model made clear to policy makers the population distributions of different groups of people, the potential urban residents entering the medical insurance scheme. The income trends of individuals and families were also obtained. These factors are essential when making the challenging policy decisions needed to balance the long-term financial sustainability of the medical insurance scheme.
Scheme of adaptive polarization filtering based on Kalman model
Institute of Scientific and Technical Information of China (English)
Song Lizhong; Qi Haiming; Qiao Xiaolin; Meng Xiande
2006-01-01
A new adaptive polarization filtering algorithm is presented for suppressing angle-deception interference against active guidance radar. The polarization characteristic of the interference is dynamically tracked using a Kalman estimator in time-varying environments. The polarization filter parameters are designed according to the polarization characteristic of the interference, and the polarization filtering is performed in the target cell. The system scheme of the adaptive polarization filter is studied, and the tracking performance of the polarization filter and the improvement in angle measurement precision are simulated. The results demonstrate that this technique can effectively suppress angle-deception interference in guidance radar and is feasible in engineering.
An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model
Directory of Open Access Journals (Sweden)
Guomin Zhou
2017-01-01
Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). Most such schemes rest on number-theoretic problems; while these problems are still secure at the time of this research, the situation could change with advances in quantum computing. There is therefore a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis for constructing other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.
Efficient hierarchical identity based encryption scheme in the standard model over lattices
Institute of Scientific and Technical Information of China (English)
Feng-he WANG; Chun-xiao WANG; Zhen-hua LIU
2016-01-01
Using lattice basis delegation in a fixed dimension, we propose an efficient lattice-based hierarchical identity based encryption (HIBE) scheme in the standard model whose public key size is only (dm² + mn) log q bits and whose message-ciphertext expansion factor is only log q, where d is the maximum hierarchical depth and (n, m, q) are public parameters. In our construction, a novel public key assignment rule is used to assign one random public matrix to every two identity bits, which implies that d random public matrices are enough to build the proposed HIBE scheme in the standard model, compared with the 2d such public matrices needed in the scheme proposed at Crypto 2010, whose public key size is (2dm² + mn + m) log q. To reduce the message-ciphertext expansion factor of the proposed scheme to log q, the encryption algorithm is built on Gentry's encryption scheme, by which m² bits of plaintext are encrypted into m² log q bits of ciphertext in a single encryption operation. Hence, the presented scheme has advantages with respect to both the public key size and the message-ciphertext expansion factor. Based on the hardness of the learning with errors problem, we demonstrate that the scheme is secure under selective identity and chosen plaintext attacks.
A benchmark study of numerical schemes for one-dimensional arterial blood flow modelling.
Boileau, Etienne; Nithiarasu, Perumal; Blanco, Pablo J; Müller, Lucas O; Fossan, Fredrik Eikeland; Hellevik, Leif Rune; Donders, Wouter P; Huberts, Wouter; Willemet, Marie; Alastruey, Jordi
2015-10-01
Haemodynamical simulations using one-dimensional (1D) computational models exhibit many of the features of the systemic circulation under normal and diseased conditions. Recent interest in verifying 1D numerical schemes has led to the development of alternative experimental setups and the use of three-dimensional numerical models to acquire data not easily measured in vivo. In most studies to date, only one particular 1D scheme is tested. In this paper, we present a systematic comparison of six commonly used numerical schemes for 1D blood flow modelling: discontinuous Galerkin, locally conservative Galerkin, Galerkin least-squares finite element method, finite volume method, finite difference MacCormack method and a simplified trapezium rule method. Comparisons are made in a series of six benchmark test cases with an increasing degree of complexity. The accuracy of the numerical schemes is assessed by comparison with theoretical results, three-dimensional numerical data in compatible domains with distensible walls or experimental data in a network of silicone tubes. Results show a good agreement among all numerical schemes and their ability to capture the main features of pressure, flow and area waveforms in large arteries. All the information used in this study, including the input data for all benchmark cases, experimental data where available and numerical solutions for each scheme, is made publicly available online, providing a comprehensive reference data set to support the development of 1D models and numerical schemes.
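Of the six schemes benchmarked, the finite difference MacCormack method is the simplest to sketch. The predictor-corrector step below solves linear advection u_t + a u_x = 0 on a periodic grid, a stand-in test problem rather than the nonlinear 1D blood flow system of the paper; grid size and Courant number are illustrative assumptions.

```python
import numpy as np

def maccormack_step(u, a, dt, dx):
    """One MacCormack predictor-corrector step for u_t + a*u_x = 0 on a
    periodic grid; second-order in space and time (for this linear
    problem it coincides with Lax-Wendroff)."""
    c = a * dt / dx
    up = u - c * (np.roll(u, -1) - u)                  # predictor: forward difference
    return 0.5 * (u + up - c * (up - np.roll(up, 1)))  # corrector: backward difference

n, a = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a                                      # Courant number 0.5
x = np.arange(n) * dx
u0 = np.sin(2 * np.pi * x)
u = u0.copy()
for _ in range(int(round(1.0 / dt))):                  # advect one full period
    u = maccormack_step(u, a, dt, dx)
```

After one full period on the periodic domain the exact solution returns to the initial profile, so the remaining difference measures the scheme's dispersive and dissipative error.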
Soft rotator model and {sup 246}Cm low-lying level scheme
Energy Technology Data Exchange (ETDEWEB)
Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)
1997-03-01
A non-axial soft-rotator nuclear model is suggested as a self-consistent approach for the interpretation of level schemes, γ-transition probabilities and neutron interactions with even-even nuclei. (author)
Solute based Lagrangian scheme in modeling the drying process of soft matter solutions.
Meng, Fanlong; Luo, Ling; Doi, Masao; Ouyang, Zhongcan
2016-02-01
We develop a new dynamical model to study the drying process of a droplet of soft matter solutions. The model includes the processes of solute diffusion, gel-layer formation and cavity creation. A new scheme is proposed to handle the diffusion dynamics taking place in such processes. In this scheme, the dynamics is described by the motion of material points taken on solute. It is convenient to apply this scheme to solve problems that involve moving boundaries and phase changes. As an example, we show results of a numerical calculation for a drying spherical droplet, and discuss how initial concentration and evaporation rate affect the structural evolution of the droplet.
Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.
Banks, R F; Baldasano, J M
2016-12-01
Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well for daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperatures closer to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against urban, suburban, and rural background network stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m^-3) with respect to surface ozone at urban stations, while the YSU scheme performed best for simulated nitrogen dioxide (-6.48 μg m^-3). The poorest results were for simulated particulate matter, with similar results found for all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Verification and comparison of four numerical schemes for a 1D viscoelastic blood flow model
Wang, Xiaofei; Lagrée, Pierre-Yves
2013-01-01
In this paper, we present four numerical schemes for a 1D viscoelastic blood flow model. In the case with a small nonlinearity (small amplitude of wave), asymptotic analysis predicts several behaviours of the wave: propagation in a uniform tube, attenuation of the amplitude due to the skin friction, diffusion due to the viscosity of the wall, and reflection and transmission at a branching point. These predictions are compared very favorably with all of the numerical solutions. The schemes are also tested in case with a larger nonlinearity. Finally, we apply all of the schemes on a relatively realistic arterial system with 55 arteries. The schemes are compared in four aspects: the spatial and temporal convergence speed, the ability to capture shock phenomena, the computation speed and the complexity of the implementation. The suitable conditions for the application of the various schemes are discussed.
Institute of Scientific and Technical Information of China (English)
Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan
2015-01-01
This paper used Data Envelopment Analysis (DEA) to assess comparative efficiency and, on that basis, to choose the optimal scheme of agricultural production structure adjustment from among multiple candidates. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme and examined in depth the underlying reasons why some schemes were not DEA-efficient, which clarified how these candidate plans could be improved. Finally, another method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when the adjustment of the agricultural production industrial structure is carried out.
A Backward Unlinkable Secret Handshake Scheme with Revocation Support in the Standard Model
Directory of Open Access Journals (Sweden)
Yamin Wen
2015-10-01
Secret handshake schemes have been proposed to achieve private mutual authentication, allowing the members of a certain organization to anonymously authenticate each other without exposing their affiliations. In this paper, a backward unlinkable secret handshake scheme with revocation support (BU-RSH) is constructed. For a full-fledged secret handshake scheme, it is indispensable to furnish it with practical functionality, such as unlinkability, revocation and traceability. The BU-RSH scheme achieves revocation as well as unlinkability and traceability. Moreover, the anonymity of revoked members is improved, so that the past transcripts of revoked members remain private, i.e., backward unlinkability. In particular, the BU-RSH scheme is provably secure in the standard model under the assumed intractability of the ℓ-hidden strong Diffie-Hellman problem and the subgroup decision problem.
A theoretical extraction scheme of transport information based on exclusion models
Institute of Scientific and Technical Information of China (English)
Chen Hua; Du Lei; Qu Cheng-Li; Li Wei-Hua; He Liang; Chen Wen-Hao; Sun Peng
2010-01-01
In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single-barrier structure based on exclusion models, which include a counter-flows model and a tunnel model. The first four cumulants of these two exclusion models are computed for a single-barrier structure, and their characteristics are obtained. A scheme based on the first three cumulants is devised to determine whether a transport process follows the counter-flows model, the tunnel model, or neither of them. Time series generated by Monte Carlo techniques are used to validate the extraction procedure, and the results are reasonable.
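A small sketch of the quantities the scheme above is built on: the first four cumulants of a sample, where κ1 is the mean, κ2 the variance, κ3 the third central moment, and κ4 = μ4 − 3μ2². The data here are synthetic, not counting statistics from the paper's exclusion models.

```python
def cumulants4(xs):
    """Return the first four cumulants (kappa1..kappa4) of a sample."""
    n = len(xs)
    m = sum(xs) / n
    mu = lambda k: sum((x - m) ** k for x in xs) / n  # central moments
    mu2, mu3, mu4 = mu(2), mu(3), mu(4)
    return m, mu2, mu3, mu4 - 3 * mu2 ** 2

k1, k2, k3, k4 = cumulants4([1, 2, 3, 4, 5])
# For this symmetric sample: mean 3, variance 2, zero skewness-related
# kappa3, and negative kappa4 (flatter than Gaussian).
```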
A Fuzzy Identity-Based Signature Scheme from Lattices in the Standard Model
Directory of Open Access Journals (Sweden)
Chunli Yang
2014-01-01
A fuzzy identity-based signature (FIBS) scheme allows a user with identity ID to issue a signature that can be verified with identity ID' if and only if ID and ID' lie within a certain distance. To obtain an FIBS scheme that can resist known quantum attacks, we use the double-trapdoor technique from ABB10a for secret key extraction and the vanishing-trapdoor technique from Boyen10 for message signing. In addition, to provide the fuzziness functionality, Shamir's secret sharing scheme is also used in our construction. In this paper, we propose an FIBS scheme from lattices and prove that this new scheme achieves strong unforgeability under selective chosen-identity and adaptive chosen-message attacks (SU-sID-CMA) in the standard model. To the best of our knowledge, our scheme is not only the first FIBS scheme from lattices without random oracles but also the first FIBS scheme that achieves strong unforgeability.
A numerical scheme for modelling reacting flow with detailed chemistry and transport.
Energy Technology Data Exchange (ETDEWEB)
Knio, Omar M. (The Johns Hopkins University, Baltimore, MD); Najm, Habib N.; Paul, Phillip H. (Eksigent Technologies LLC, Livermore, CA)
2003-09-01
An efficient projection scheme is developed for the simulation of reacting flow with detailed kinetics and transport. The scheme is based on a zero-Mach-number formulation of the compressible conservation equations for an ideal gas mixture. It is a modified version of the stiff operator-split scheme developed by Knio, Najm & Wyckoff (1999, J. Comput. Phys. 154, 428). Similar to its predecessor, the new scheme relies on Strang splitting of the discrete evolution equations, where diffusion is integrated in two half steps that are symmetrically distributed around a single stiff step for the reaction source terms. The diffusive half-step is integrated using an explicit single-step, multistage, Runge-Kutta-Chebyshev (RKC) method, which replaces the explicit, multi-step, fractional sub-step approach used in the previous formulation. This modification maintains the overall second-order convergence properties of the scheme and enhances the efficiency of the computations by taking advantage of the extended real-stability region of the RKC scheme. Two additional efficiency-enhancements are also explored, based on an extrapolation procedure for the transport coefficients and on the use of approximate Jacobian data evaluated on a coarse mesh. By including these enhancement schemes, performance tests using 2D computations with a detailed C1-C2 methane-air mechanism and a detailed mixture-averaged transport model indicate that speedup factors of about 15 are achieved over the previous split-stiff scheme.
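A minimal sketch of the Strang splitting layout described above: diffusion is advanced in two half steps placed symmetrically around one full reaction step. For clarity each sub-step here is a scalar linear ODE solved exactly; in the actual scheme the half steps use an explicit RKC integrator and the middle step a stiff solver. The rates a and b are illustrative.

```python
import math

def strang_step(u, dt, a, b):
    u = u * math.exp(-a * dt / 2)  # first diffusion half step
    u = u * math.exp(-b * dt)      # full (stiff) reaction step
    u = u * math.exp(-a * dt / 2)  # second diffusion half step
    return u

a, b, dt, u = 1.0, 5.0, 0.01, 1.0
for _ in range(100):               # integrate to t = 1
    u = strang_step(u, dt, a, b)
exact = math.exp(-(a + b) * 1.0)
# With exact sub-integrators on commuting linear operators, the split
# solution matches the unsplit one to round-off; for general nonlinear
# operators the symmetric arrangement yields second-order accuracy.
```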
Neural Network Based Multi-level Fuzzy Evaluation Model for Mechanical Kinematic Scheme
Institute of Scientific and Technical Information of China (English)
BO Ruifeng; LI Ruiqin
2006-01-01
To evaluate mechanical kinematic schemes quantitatively and more effectively, a multi-level, multi-objective evaluation model is presented using neural networks and fuzzy theory. First, the structure of the evaluation model is constructed according to the evaluation indicator system. Evaluation samples are then generated and used to train the model, so that it captures the relation between attribute values and evaluation results, as well as the weights of the evaluation indicators. Once the evaluation indicators of each candidate are fuzzily quantified and fed into the trained network, the corresponding evaluation result is output and the best alternative can be selected. Under this model, expert knowledge can be effectively acquired and expressed, and quantitative evaluation can be carried out for kinematic schemes with a multi-level evaluation indicator system. Several key problems of the model are discussed, and an illustration demonstrates that the model is feasible and offers a new approach to kinematic scheme evaluation.
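The mapping such an evaluation model learns can be illustrated without the network itself: each indicator is fuzzily quantified to [0, 1] by a membership function, then aggregated with indicator weights. The triangular membership function, indicator ranges and weights below are illustrative assumptions, not the paper's.

```python
def triangular(x, lo, peak, hi):
    """Triangular membership: 0 outside (lo, hi), 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def evaluate(indicator_values, weights):
    """Weighted aggregation of fuzzily quantified indicators."""
    return sum(w * v for w, v in zip(weights, indicator_values))

# Three indicators for one candidate kinematic scheme (hypothetical):
vals = [triangular(7.0, 0, 10, 20),   # e.g. transmission performance
        triangular(4.0, 0, 5, 10),    # e.g. structural simplicity
        triangular(12.0, 0, 10, 20)]  # e.g. cost indicator
score = evaluate(vals, [0.5, 0.3, 0.2])
# Candidates are ranked by score; in the paper the aggregation weights
# are learned by the trained network rather than fixed by hand.
```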
A hybrid convection scheme for use in non-hydrostatic numerical weather prediction models
Directory of Open Access Journals (Sweden)
Volker Kuell
2008-12-01
The correct representation of convection in numerical weather prediction (NWP) models is essential for quantitative precipitation forecasts. Due to its small horizontal scale, convection usually has to be parameterized, e.g. by mass-flux convection schemes. Classical schemes originally developed for coarse-grid NWP models assume zero net convective mass flux, because the whole circulation of a convective cell is confined to the local grid column and all convective mass fluxes cancel out. However, in contemporary NWP models with grid sizes of a few kilometres this assumption becomes questionable, because convection is partially resolved on the grid. To overcome this conceptual problem we propose a hybrid mass-flux convection scheme (HYMACS) in which only the convective updrafts and downdrafts are parameterized. The generation of the larger-scale environmental subsidence, which may cover several grid columns, is transferred to the grid-scale equations. This means that the convection scheme now has to generate a net convective mass flux exerting a direct dynamical forcing on the grid-scale model via pressure gradient forces. The hybrid convection scheme, implemented in the COSMO model of Deutscher Wetterdienst (DWD), is tested in an idealized simulation of a sea-breeze circulation initiating convection in a realistic manner. The results are compared with analogous simulations using the classical Tiedtke and Kain-Fritsch convection schemes.
Spatial model of lifting scheme in wavelet transforms and image compression
Wu, Yu; Li, Gang; Wang, Guoyin
2002-03-01
Wavelet transforms via the lifting scheme are known as second-generation wavelet transforms. However, in some lifting schemes the coefficients are derived mathematically from first-generation wavelets, so the better-performing filters available for lifting are limited. The spatial structures of the lifting scheme are also simple: the classical lifting scheme, predict-update, has two stages, and most researchers simply adopt this structure. In addition, in most designs the lifting filters are hard to obtain and are fixed. In our former work, we presented a new three-stage lifting scheme, predict-update-adapt, whose filter designs are no longer fixed. In this paper, we continue to study the spatial model of the lifting scheme. A group of general multi-stage lifting schemes is derived and designed; all lifting filters are designed in the spatial domain with suitable mathematical methods. The designed coefficients are flexible and can be adjusted to different data. We give the mathematical design details, and finally apply all the designed lifting models to image compression, achieving satisfactory results.
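A minimal sketch of the classical two-stage predict-update lifting step, shown here with Haar-like filters (the paper generalizes to multi-stage schemes with adaptable filters). Forward and inverse steps mirror each other, so reconstruction is exact.

```python
def lift_forward(x):
    """One predict-update lifting step on an even-length signal."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict stage
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update stage
    return approx, detail

def lift_inverse(approx, detail):
    """Undo the update, then the predict, and re-interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

a, d = lift_forward([2, 4, 6, 8])
assert lift_inverse(a, d) == [2, 4, 6, 8]  # perfect reconstruction
```

In compression, the detail coefficients are small for smooth data and can be quantized aggressively; the in-place structure is what makes lifting attractive.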
Transfer Scheme Evaluation Model for a Transportation Hub based on Vectorial Angle Cosine
Directory of Open Access Journals (Sweden)
Li-Ya Yao
2014-07-01
As the most important nodes in a public transport network, transport hubs determine the efficiency of the entire network. In order to put forward effective transfer schemes, a comprehensive evaluation index system for the transfer efficiency of urban transport hubs was built, the evaluation indexes were quantified, and a multi-objective decision evaluation model for hub transfer schemes was established based on the vectorial angle cosine. Qualitative and quantitative analyses of the factors affecting transfer efficiency are conducted, covering passenger satisfaction, transfer coordination, transfer efficiency, smoothness, economy, etc. Thus, a new approach to transfer scheme selection is proposed.
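A minimal sketch of the vectorial angle cosine idea: each candidate scheme's quantified index vector is compared with an ideal vector, and schemes are ranked by the cosine of the angle between them (closer to 1 means closer to ideal). The index values below are illustrative, not from the paper.

```python
import math

def cosine(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical quantified indexes: satisfaction, coordination,
# efficiency, economy -- all scaled to [0, 1], ideal value 1.
ideal = [1.0, 1.0, 1.0, 1.0]
schemes = {
    "scheme_1": [0.9, 0.8, 0.7, 0.9],
    "scheme_2": [0.6, 0.9, 0.5, 0.4],
}
ranked = sorted(schemes, key=lambda s: cosine(schemes[s], ideal), reverse=True)
```

The scheme whose index vector subtends the smallest angle with the ideal vector is recommended.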
Energy Technology Data Exchange (ETDEWEB)
Silva, Filipe da, E-mail: tanatos@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Pinto, Martin Campos, E-mail: campos@ann.jussieu.fr [CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Després, Bruno, E-mail: despres@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Heuraux, Stéphane, E-mail: stephane.heuraux@univ-lorraine.fr [Institut Jean Lamour, UMR 7198, CNRS – University Lorraine, Vandoeuvre (France)
2015-08-15
This work analyzes the stability of the Yee scheme for non-stationary Maxwell's equations coupled with a linear current model with density fluctuations. We show that the usual procedure may yield an unstable scheme for physical situations corresponding to strongly magnetized plasmas in X-mode (TE) polarization. We propose a first-order clustered discretization of the vectorial product that restores a stable coupling. We validate the schemes on test cases representative of direct numerical simulations of X-mode in a magnetic fusion plasma including turbulence.
Post-processing scheme for modelling the lithospheric magnetic field
Directory of Open Access Journals (Sweden)
V. Lesur
2013-03-01
We investigated how noise in satellite magnetic data affects magnetic lithospheric field models derived from these data, in the special case where the noise is correlated along satellite orbit tracks. To this end, we describe the satellite data noise as a perturbation magnetic field scaled independently for each orbit, where the scaling factor is a random variable, normally distributed with zero mean. Under this assumption, we derive a model for the errors in lithospheric field models generated by the correlated satellite data noise. Unless the perturbation field is known, estimating the noise in the lithospheric field model is a non-linear inverse problem. We therefore propose an iterative post-processing technique to estimate both the lithospheric field model and its associated noise model. The technique has been successfully applied to derive a lithospheric field model from CHAMP satellite data up to spherical harmonic degree 120. The model agrees with other existing models. The technique can, in principle, be extended to all sorts of potential field data with "along-track" correlated errors.
Convergence of discrete duality finite volume schemes for the cardiac bidomain model
Andreianov, Boris; Karlsen, Kenneth H; Pierre, Charles
2010-01-01
We prove convergence of discrete duality finite volume (DDFV) schemes on distorted meshes for a class of simplified macroscopic bidomain models of the electrical activity in the heart. Both time-implicit and linearised time-implicit schemes are treated. A short description is given of the 3D DDFV meshes and of some of the associated discrete calculus tools. Several numerical tests are presented.
Algebraic K-theory of generalized schemes
DEFF Research Database (Denmark)
Anevski, Stella Victoria Desiree
Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry...
A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model
Directory of Open Access Journals (Sweden)
Lili Zhang
2014-01-01
A proxy blind signature scheme is a special form of blind signature that allows a designated person, called the proxy signer, to sign on behalf of an original signer without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number-theoretic problems, discrete logarithms, and bilinear pairings. Unfortunately, these underlying problems will be solvable in the post-quantum era. Lattice-based cryptography is enjoying great interest these days, owing to implementation simplicity and provable security reductions; moreover, lattice problems are believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven strongly unforgeable under the standard hardness assumptions of the short integer solution (SIS) and inhomogeneous small integer solution (ISIS) problems. Furthermore, the secret key size and signature length of our scheme are invariant and much shorter than those of previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.
Comparison of tropospheric gas-phase chemistry schemes for use within global models
Directory of Open Access Journals (Sweden)
K. M. Emmerson
2009-03-01
Methane and ozone are two important climate gases with significant tropospheric chemistry. Within chemistry-climate and transport models this chemistry is simplified for computational expediency. We compare the state-of-the-art Master Chemical Mechanism (MCM) with six tropospheric chemistry schemes (CRI-reduced, GEOS-CHEM and a GEOS-CHEM adduct, MOZART-2, TOMCAT and CBM-IV) that could be used within composition transport models. We test the schemes within a box-model framework under conditions derived from a composition transport model and from field observations of a regional-scale pollution event. We find that CRI-reduced shows much skill in simulating the full chemistry, yet with greatly reduced complexity. We find significant variations between the other chemical schemes, and reach the following conclusions. (1) The inclusion of a gas-phase N2O5 + H2O reaction in one scheme and not the others is a large source of uncertainty in the inorganic chemistry. (2) There are significant variations in the calculated concentration of PAN between the schemes, which will affect the long-range transport of reactive nitrogen in global models. (3) The representation of isoprene chemistry differs hugely between the schemes, leading to significant uncertainties in the impact of isoprene on composition. (4) Differences are found in NO3 concentrations in the nighttime chemistry. Resolving these four issues through further laboratory studies will reduce the uncertainties within the chemical schemes of global tropospheric models.
A model and regularization scheme for ultrasonic beamforming clutter reduction.
Byram, Brett; Dei, Kazuyuki; Tierney, Jaime; Dumont, Douglas
2015-11-01
Acoustic clutter produced by off-axis and multipath scattering is known to cause image degradation, and in some cases these sources may be the prime determinants of in vivo image quality. We have previously shown some success addressing these sources of degradation by modeling the aperture-domain signal from different sources of clutter and then decomposing aperture-domain data using the modeled sources. Our previous model had shortcomings, including model mismatch and failure to recover B-mode speckle statistics. These shortcomings are addressed here by developing a better model and by using a general regularization approach appropriate for the model and data. We present results with L1 (lasso), L2 (ridge), and combined L1/L2 (elastic-net) regularization methods. We call our new method aperture domain model image reconstruction (ADMIRE). Our results demonstrate that ADMIRE with L1 regularization, or weighted toward L1 in the case of elastic-net regularization, improves image quality. L1 by itself works well, but additional improvements are seen with elastic-net regularization over the pure L1 constraint. On in vivo example cases, L1 regularization showed mean contrast improvements of 4.6 and 6.8 dB on fundamental and harmonic images, respectively. Elastic-net regularization (α = 0.9) showed mean contrast improvements of 17.8 dB on fundamental images and 11.8 dB on harmonic images. We also demonstrate that in uncluttered Field II simulations the decluttering algorithm produces the same contrast, contrast-to-noise ratio, and speckle SNR as normal B-mode imaging, demonstrating that ADMIRE preserves typical image features.
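A minimal sketch of the regularization contrast discussed above, under a strong simplifying assumption: for an orthonormal model matrix, the lasso solution is a soft-thresholding of the unregularized coefficients, and elastic net adds a further ridge-like shrinkage, so weak clutter-like components are zeroed while strong signal components survive. The coefficients and penalty values are illustrative, not ADMIRE's actual model.

```python
def soft_threshold(c, lam):
    """L1 proximal operator: shrink c toward zero by lam, clipping at zero."""
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

def elastic_net_orthonormal(coeffs, lam, alpha):
    """Closed-form elastic net for orthonormal model columns:
    soft-threshold by lam*alpha, then shrink by 1 + lam*(1 - alpha)."""
    return [soft_threshold(c, lam * alpha) / (1.0 + lam * (1.0 - alpha))
            for c in coeffs]

raw = [5.0, 0.3, -0.2, 2.0]          # strong signal + weak clutter terms
fit = elastic_net_orthonormal(raw, lam=0.5, alpha=0.9)
# The weak components (0.3 and -0.2) fall below the threshold and vanish,
# which is the sparsifying behaviour exploited for clutter rejection.
```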
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest because, in practical engine simulations, extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method was previously applied to homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and that the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on the determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, and treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare them. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous one. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species
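A minimal sketch of the DRGEP idea behind the DAC scheme: species are nodes in a directed relation graph with edge weights measuring direct coupling, and a species is retained only if the maximum path product of weights (its R-value) from some search-initiating target species exceeds an error tolerance. The toy mechanism graph below is illustrative, not the paper's n-heptane mechanism.

```python
def drgep_reduce(graph, targets, tol):
    """graph: species -> {species: direct interaction coefficient in [0, 1]}.
    Returns the set of species whose R-value from any target >= tol."""
    keep = set()
    for t in targets:
        r = {t: 1.0}
        frontier = [t]
        while frontier:  # breadth-first propagation of path-product R-values
            nxt = []
            for s in frontier:
                for nbr, w in graph.get(s, {}).items():
                    rv = r[s] * w
                    if rv > r.get(nbr, 0.0):
                        r[nbr] = rv
                        nxt.append(nbr)
            frontier = nxt
        keep |= {s for s, rv in r.items() if rv >= tol}
    return keep

toy = {"fuel": {"radical": 0.9, "trace": 0.05},
       "radical": {"product": 0.8},
       "trace": {"product": 0.5}}
kept = drgep_reduce(toy, targets=["fuel"], tol=0.1)
# "trace" (R = 0.05) drops below the tolerance and is removed from the
# active mechanism for this cell and time step.
```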
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks
DEFF Research Database (Denmark)
Hagen, Espen; Dahmen, David; Stavrinou, Maria L
2016-01-01
and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...... model for a ∼1 mm(2) patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its...
Action versus result-oriented schemes in a grassland agroecosystem: a dynamic modelling approach.
Sabatier, Rodolphe; Doyen, Luc; Tichit, Muriel
2012-01-01
Effects of agri-environment schemes (AES) on biodiversity remain controversial. While most AES are action-oriented, result-oriented and habitat-oriented schemes have recently been proposed as a solution to improve AES efficiency. The objective of this study was to compare action-oriented, habitat-oriented and result-oriented schemes in terms of ecological and productive performance as well as management flexibility. We developed a dynamic modelling approach based on the viable control framework to carry out a long-term assessment of the three schemes in a grassland agroecosystem. The model explicitly links grazed grassland dynamics to bird population dynamics, and is applied to lapwing conservation in wet grasslands in France. We ran the model to assess the three AES scenarios. The model revealed the grazing strategies respecting the ecological and productive constraints specific to each scheme, and these strategies were assessed by both their ecological and productive performance. The viable control approach made it possible to obtain the whole set of viable grazing strategies and therefore to quantify the management flexibility of the grassland agroecosystem. Our results showed that the habitat- and result-oriented scenarios led to much higher ecological performance than the action-oriented one, while differences in ecological and productive performance between the habitat- and result-oriented scenarios were limited. Flexibility of the grassland agroecosystem in the result-oriented scenario was much higher than in the habitat-oriented scenario. Our model confirms the higher flexibility as well as the better ecological and productive performance of result-oriented schemes. A larger use of result-oriented schemes in conservation may also allow farmers to adapt their management to local conditions and to climatic variations.
Evaluation of Parameterization Schemes in the WRF Model for Estimation of Mixing Height
Directory of Open Access Journals (Sweden)
R. Shrivastava
2014-01-01
This paper deals with the evaluation of parameterization schemes in the WRF model for the estimation of mixing height. Numerical experiments were performed using various combinations of parameterization schemes, and the results were compared with the mixing heights estimated from radiosonde observations taken by the India Meteorological Department (IMD) at the Mangalore site for selected days of the warm and cold seasons in the years 2004-2007. The results indicate a large variation in the mixing heights estimated by the model across the combinations of parameterization schemes. The physics option consisting of the Mellor-Yamada-Janjic (Eta) PBL scheme, the Monin-Obukhov (Janjic Eta) surface layer scheme, and the Noah land surface model performs reasonably well in reproducing the observed mixing height at this site for both seasons compared to the other combinations tested. The study also shows that the choice of land surface model can have a significant impact on the simulation of mixing height by a prognostic model.
A seawater desalination scheme for global hydrological models
Hanasaki, Naota; Yoshikawa, Sayaka; Kakinuma, Kaoru; Kanae, Shinjiro
2016-10-01
Seawater desalination is a practical technology for providing fresh water to coastal arid regions. Indeed, the use of desalination is rapidly increasing due to growing water demand in these areas and decreases in production costs due to technological advances. In this study, we developed a model to estimate the areas where seawater desalination is likely to be used as a major water source and the likely volume of production. The model was designed to be incorporated into global hydrological models (GHMs) that explicitly include human water usage. The model requires spatially detailed information on climate, income levels, and industrial and municipal water use, which represent standard input/output data in GHMs. The model was applied to a specific historical year (2005) and showed fairly good reproduction of the present geographical distribution and national production of desalinated water in the world. The model was applied globally to two periods in the future (2011-2040 and 2041-2070) under three distinct socioeconomic conditions, i.e., SSP (shared socioeconomic pathway) 1, SSP2, and SSP3. The results indicate that the usage of seawater desalination will have expanded considerably in geographical extent, and that production will have increased by 1.4-2.1-fold in 2011-2040 compared to the present (from 2.8 × 10⁹ m³ yr⁻¹ in 2005 to 4.0-6.0 × 10⁹ m³ yr⁻¹), and 6.7-17.3-fold in 2041-2070 (from 18.7 to 48.6 × 10⁹ m³ yr⁻¹). The estimated global costs for production for each period are USD 1.1-10.6 × 10⁹ (0.002-0.019 % of the total global GDP), USD 1.6-22.8 × 10⁹ (0.001-0.020 %), and USD 7.5-183.9 × 10⁹ (0.002-0.100 %), respectively. The large spreads in these projections are primarily attributable to variations within the socioeconomic scenarios.
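A small sketch of the arithmetic behind such cost projections: given a produced volume of desalinated water and a unit production cost, the global cost and its share of world GDP follow directly. The unit cost and GDP figure below are illustrative assumptions, not values from the paper.

```python
def cost_share(volume_m3, unit_cost_usd_per_m3, world_gdp_usd):
    """Return (total cost in USD, cost as a percentage of world GDP)."""
    cost = volume_m3 * unit_cost_usd_per_m3
    return cost, 100.0 * cost / world_gdp_usd

cost, share = cost_share(volume_m3=4.0e9,          # 4.0 x 10^9 m^3/yr
                         unit_cost_usd_per_m3=1.0,  # assumed unit cost
                         world_gdp_usd=8.0e13)      # assumed ~USD 80 trillion
# Even a few billion cubic metres per year remains a tiny fraction of
# global GDP, consistent with the sub-0.1 % shares reported above.
```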
Feedback control scheme of traffic jams based on the coupled map car-following model
Zhou, Tong; Sun, Di-Hua; Zhao, Min; Li, Hua-Min
2013-09-01
Based on the pioneering work of Konishi et al. [Phys. Rev. E 60 (1999) 4000], a new feedback control scheme is presented to suppress traffic jams based on the coupled map car-following model under the open boundary condition. The effect of the safe headway on the traffic system is considered. According to control theory, the condition under which traffic jams can be suppressed is analyzed. The results are compared with the previous results concerning congestion control. The simulations show that the suppression performance of our scheme on traffic jams is better than those of the previous schemes, although all the schemes can suppress traffic jams. The simulation results are consistent with theoretical analyses.
A New Framework to Compare Mass-Flux Schemes Within the AROME Numerical Weather Prediction Model
Riette, Sébastien; Lac, Christine
2016-08-01
In the Application of Research to Operations at Mesoscale (AROME) numerical weather forecast model used in operations at Météo-France, five mass-flux schemes are available to parametrize shallow convection at kilometre resolution. All but one are based on the eddy-diffusivity-mass-flux approach, and differ in entrainment/detrainment, the updraft vertical velocity equation and the closure assumption. The fifth is based on a more classical mass-flux approach. Screen-level scores obtained with these schemes show few discrepancies and are not sufficient to highlight behaviour differences. Here, we describe and use a new experimental framework, able to compare and discriminate among different schemes. For a year, daily forecast experiments were conducted over small domains centred on the five French metropolitan radio-sounding locations. Cloud base, planetary boundary-layer height and normalized vertical profiles of specific humidity, potential temperature, wind speed and cloud condensate were compared with observations, and with each other. The framework allowed the behaviour of the different schemes in and above the boundary layer to be characterized. In particular, the impact of the entrainment/detrainment formulation, closure assumption and cloud scheme were clearly visible. Differences mainly concerned the transport intensity thus allowing schemes to be separated into two groups, with stronger or weaker updrafts. In the AROME model (with all interactions and the possible existence of compensating errors), evaluation diagnostics gave the advantage to the first group.
Dislocation climb models from atomistic scheme to dislocation dynamics
Niu, Xiaohua; Luo, Tao; Lu, Jianfeng; Xiang, Yang
2017-02-01
We develop a mesoscopic dislocation dynamics model for vacancy-assisted dislocation climb by upscalings from a stochastic model on the atomistic scale. Our models incorporate microscopic mechanisms of (i) bulk diffusion of vacancies, (ii) vacancy exchange dynamics between bulk and dislocation core, (iii) vacancy pipe diffusion along the dislocation core, and (iv) vacancy attachment-detachment kinetics at jogs leading to the motion of jogs. Our mesoscopic model consists of the vacancy bulk diffusion equation and a dislocation climb velocity formula. The effects of these microscopic mechanisms are incorporated by a Robin boundary condition near the dislocations for the bulk diffusion equation and a new contribution in the dislocation climb velocity due to vacancy pipe diffusion driven by the stress variation along the dislocation. Our climb formulation is able to quantitatively describe the translation of prismatic loops at low temperatures when the bulk diffusion is negligible. Using this new formulation, we derive analytical formulas for the climb velocity of a straight edge dislocation and a prismatic circular loop. Our dislocation climb formulation can be implemented in dislocation dynamics simulations to incorporate all the above four microscopic mechanisms of dislocation climb.
Traffic Prediction Scheme based on Chaotic Models in Wireless Networks
Directory of Open Access Journals (Sweden)
Xiangrong Feng
2013-09-01
Full Text Available Based on the local support vector algorithm for chaotic time series analysis, the Hannan-Quinn information criterion and SAX symbolization are introduced, and a novel prediction algorithm, LSDHQ, is proposed and successfully applied to the prediction of wireless network traffic. For the problem of correctly predicting short-term flows from smaller data sets, the weaknesses of existing algorithms during model construction are analyzed through study and comparison with the LDK prediction algorithm. It is verified that the Hannan-Quinn information criterion can be used to calculate the number of neighbouring points, replacing the previous empirical method and yielding a more accurate prediction model. Finally, actual traffic data are used to confirm the accuracy of the proposed LSDHQ algorithm. Our experiments show that it also adapts better than the LDK algorithm.
Central upwind scheme for a compressible two-phase flow model.
Directory of Open Access Journals (Sweden)
Munshoor Ahmed
Full Text Available In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme.
The two-dimensional Godunov scheme and what it means for macroscopic pedestrian flow models
Van Wageningen-Kessels, F.L.M.; Daamen, W.; Hoogendoorn, S.P.
2015-01-01
An efficient simulation method for two-dimensional continuum pedestrian flow models is introduced. It is a two-dimensional and multi-class extension of the Godunov scheme for one-dimensional road traffic flow models introduced in the mid-1990s. The method can be applied to continuum pedestrian flow models.
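The one-dimensional scheme being extended here can be sketched in a few lines for the classical LWR model with a Greenshields fundamental diagram. This is a generic illustration under assumed parameter names and values, not code from the paper:

```python
import numpy as np

def godunov_lwr(rho0, dx, dt, steps, v_free=1.0, rho_max=1.0):
    """1-D Godunov scheme for the LWR model with a Greenshields
    flux q(rho) = v_free * rho * (1 - rho / rho_max)."""
    def q(rho):
        return v_free * rho * (1.0 - rho / rho_max)

    rho_crit = 0.5 * rho_max  # density at which the flux peaks

    def num_flux(rl, rr):
        # Godunov flux = min(upstream demand, downstream supply);
        # this is the exact interface Riemann solution for a concave flux.
        demand = q(min(rl, rho_crit))
        supply = q(max(rr, rho_crit))
        return min(demand, supply)

    rho = np.array(rho0, dtype=float)
    for _ in range(steps):
        f = np.array([num_flux(rho[i], rho[i + 1]) for i in range(len(rho) - 1)])
        rho[1:-1] -= dt / dx * (f[1:] - f[:-1])  # conservative update, fixed ends
    return rho
```

The demand/supply form of the flux is what lets the scheme capture shocks and rarefactions while keeping densities within physical bounds, provided the CFL condition v_free·dt/dx ≤ 1 holds.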
THE SCHEME FOR THE DATABASE BUILDING AND UPDATING OF 1:10 000 DIGITAL ELEVATION MODELS
Institute of Scientific and Technical Information of China (English)
None listed
2000-01-01
The National Bureau of Surveying and Mapping of China has planned to speed up the development of spatial data infrastructure (SDI) in the coming few years. This SDI consists of four types of digital products, i.e., digital orthophotos, digital elevation models, digital line graphs and digital raster graphs. For the DEM, a scheme for the database building and updating of 1:10 000 digital elevation models has been proposed and some experimental tests have also been accomplished. This paper describes the theoretical (and/or technical) background and reports some of the experimental results to support the scheme. Various aspects of the scheme, such as accuracy, data sources, data sampling, spatial resolution, terrain modeling and data organization, are discussed.
Ranganathan, Panneerselvam; Gu, Sai
2016-08-01
The present work concerns CFD modelling of biomass fast pyrolysis in a fluidised bed reactor. Initially, a study was conducted to understand the hydrodynamics of the fluidised bed reactor by investigating the effects of particle density and size and of gas velocity. With this basic understanding of the hydrodynamics, the study was extended to investigate different kinetic schemes for the biomass fast pyrolysis process. The Eulerian-Eulerian approach was used to model the complex multiphase flows in the reactor. The product yields from the simulation were compared with the experimental data, and a good agreement was obtained between the literature results and the CFD simulation. It was also found that the CFD prediction with the advanced kinetic scheme is better than that with the other schemes. With the confidence gained from the CFD models, a parametric study was carried out to examine the effects of biomass particle type, particle size and temperature on the product yields.
Resident space object tracking using an interacting multiple model mixing scheme
Lam, Quang M.
2014-06-01
A multiple model estimation scheme is proposed to enhance the robustness of a resident space object (RSO) tracker subject to maneuverability uncertainties (unplanned or unknown jet firing activities) and other system variations. The concept is based on the Interacting Multiple Model (IMM) estimation scheme. Within the IMM framework, two Extended Kalman Filter (EKF) models, (i) a six-state (position and velocity of a constant orbiting RSO) EKF and (ii) a nine-state (position, velocity, and acceleration of a maneuvering RSO) EKF, are designed and implemented to achieve RSO maneuver detection and enhanced tracking accuracy. The IMM estimation scheme is capable of providing enhanced state vector estimation accuracy and consistent prediction of the RSO maneuvering status, thus offering an attractive design feature for future Space Situational Awareness (SSA) missions. The design concept is illustrated using a MATLAB-based simulation testing environment.
A Simplified Scheme of the Generalized Layered Radiative Transfer Model
Institute of Scientific and Technical Information of China (English)
None listed
2007-01-01
In this paper, firstly, a simplified version (SGRTM) of the generalized layered radiative transfer model (GRTM) within the canopy, developed by us, is presented. It reduces the input information required on sky diffuse radiation, as well as on canopy morphology, and in turn saves computing resources. Results from the SGRTM agree perfectly with those of the GRTM. Secondly, by applying the linear superposition principle of optics and by using the basic solutions of the GRTM for radiative transfer within the canopy under the condition of assumed zero soil reflectance, two sets of explicit analytical solutions of radiative transfer within the canopy with any soil reflectance magnitude are derived: one for incident diffuse radiation, and the other for direct beam radiation. The explicit analytical solutions need two sets of basic solutions of canopy reflectance and transmittance under zero soil reflectance, run by the model for both diffuse and direct beam radiation. One set of basic solutions is the canopy reflectance αf (written as α1 for direct beam radiation) and transmittance βf (written as β1 for direct beam radiation) with zero soil reflectance for the downward radiation from above the canopy (i.e., sky), and the other set is the canopy reflectance αb and transmittance βb for the upward radiation from below the canopy (i.e., ground). Under the condition of the same plant architecture in the vertical layers, and the same leaf adaxial and abaxial optical properties in the canopies for uniform diffuse radiation, the explicit solutions need only one set of basic solutions, because under this condition the two basic solutions are equal, i.e., αf = αb and βf = βb. Using the explicit analytical solutions, the fractions of any kind of incident solar radiation reflected from (defined as surface albedo, or canopy reflectance), transmitted through (defined as canopy transmittance), and absorbed by (defined as canopy absorptance) the canopy and other properties
A dynamic neutral fluid model for the PIC scheme
Wu, Alan; Lieberman, Michael; Verboncoeur, John
2010-11-01
Fluid diffusion is an important aspect of plasma simulation. A new dynamic model is implemented using the continuity and boundary equations in OOPD1, an object-oriented one-dimensional particle-in-cell code developed at UC Berkeley. The model is described and compared with analytical methods given in [1]. A boundary absorption parameter can be adjusted from ideal absorption to ideal reflection. Simulations exhibit good agreement with analytic time-dependent solutions for the two ideal cases, as well as steady-state solutions for mixed cases. For the next step, fluid sources and sinks due to particle-particle or particle-fluid collisions within the simulation volume and to surface reactions resulting in emission or absorption of fluid species will be implemented. The resulting dynamic interaction between particle and fluid species will be an improvement over the static fluid in the existing code. As the final step in the development, diffusion for multiple fluid species will be implemented. [1] M.A. Lieberman and A.J. Lichtenberg, Principles of Plasma Discharges and Materials Processing, 2nd Ed., Wiley, 2005.
On the modelling of compressible inviscid flow problems using AUSM schemes
Directory of Open Access Journals (Sweden)
Hajžman M.
2007-11-01
Full Text Available During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first-order accurate, AUSM (Advection Upstream Splitting Method) schemes proved to be well suited for the modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version, denoted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating with values from adjacent cells are used to determine numerical convective fluxes, and pressure splitting is used for the evaluation of numerical pressure fluxes. Both versions of the AUSM scheme are applied to solving some test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first-order accuracy (Lax-Friedrichs) and of second-order accuracy (MacCormack).
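For concreteness, the AUSM+ split-Mach and split-pressure polynomials mentioned above can be sketched as follows, with the commonly used constants β = 1/8 and α = 3/16. This is a generic illustration of the splitting functions, not code from the paper:

```python
def mach_split(M, beta=1.0 / 8.0):
    """AUSM+ split Mach numbers, satisfying M_plus + M_minus = M."""
    if abs(M) >= 1.0:
        return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))
    # degree-4 polynomial splitting in the subsonic range
    Mp = 0.25 * (M + 1.0) ** 2 + beta * (M * M - 1.0) ** 2
    Mm = -0.25 * (M - 1.0) ** 2 - beta * (M * M - 1.0) ** 2
    return Mp, Mm

def pressure_split(M, alpha=3.0 / 16.0):
    """AUSM+ split pressure weights, satisfying P_plus + P_minus = 1
    in the subsonic range."""
    if abs(M) >= 1.0:
        Pp = 1.0 if M > 0.0 else 0.0
        return Pp, 1.0 - Pp
    # degree-5 polynomial splitting in the subsonic range
    Pp = 0.25 * (M + 1.0) ** 2 * (2.0 - M) + alpha * M * (M * M - 1.0) ** 2
    Pm = 0.25 * (M - 1.0) ** 2 * (2.0 + M) - alpha * M * (M * M - 1.0) ** 2
    return Pp, Pm
```

The interface Mach number M = M⁺(M_L) + M⁻(M_R) then selects the upwind cell for the convective flux, while the split pressures weight the left and right cell pressures.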
A gradient stable scheme for a phase field model for the moving contact line problem
Gao, Min
2012-02-01
In this paper, an efficient numerical scheme is designed for a phase field model for the moving contact line problem, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,4]. The nonlinear version of the scheme is semi-implicit in time and is based on a convex splitting of the Cahn-Hilliard free energy (including the boundary energy) together with a projection method for the Navier-Stokes equations. We show, under certain conditions, that the scheme has the total-energy-decaying property and is unconditionally stable. The linearized scheme is easy to implement and introduces only a mild CFL time constraint. Numerical tests are carried out to verify the accuracy and stability of the scheme. The behavior of the solution near the contact line is examined. It is verified that, when the interface intersects with the boundary, the consistent splitting scheme [21,22] for the Navier-Stokes equations has better accuracy for the pressure. © 2011 Elsevier Inc.
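The convex-splitting idea (treat the stiff, stabilizing part of the free energy implicitly and the rest explicitly) can be conveyed with a much simpler sketch: a linearly stabilized semi-implicit Fourier step for the 1-D periodic Cahn-Hilliard equation alone, without the Navier-Stokes coupling or boundary energy. The stabilization constant S and all parameter values are illustrative assumptions:

```python
import numpy as np

def cahn_hilliard_step(u, dt, eps=0.1, S=2.0):
    """One linearly stabilized semi-implicit step for the 1-D periodic
    Cahn-Hilliard equation u_t = (u^3 - u - eps^2 u_xx)_xx on [0, 1).
    The fourth-order term and the stabilizer S*u are implicit; the
    cubic nonlinearity is explicit."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # wavenumbers
    k2 = k * k
    nl = np.fft.fft(u ** 3 - u - S * u)             # explicit part
    u_hat = (np.fft.fft(u) - dt * k2 * nl) / (1.0 + dt * (eps ** 2 * k2 ** 2 + S * k2))
    return np.real(np.fft.ifft(u_hat))
```

Because the k = 0 mode is untouched, the step conserves the spatial mean of u exactly, mirroring the mass conservation of the continuous equation.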
The Application of Flux-Form Semi-Lagrangian Transport Scheme in a Spectral Atmosphere Model
Institute of Scientific and Technical Information of China (English)
WANG Xiaocong; LIU Yimin; WU Guoxiong; Shian-Jiann LIN; BAO Qing
2013-01-01
A flux-form semi-Lagrangian transport scheme (FFSL) was implemented in a spectral atmospheric GCM developed and used at IAP/LASG. Idealized numerical experiments show that the scheme is good at shape preserving, with less dissipation and dispersion in comparison with other conventional schemes. Importantly, FFSL can automatically maintain the positivity of the transported tracers, which was an underlying problem in the previous spectral composite method (SCM). To comprehensively investigate the impact of FFSL on GCM results, we conducted sensitivity experiments. Three main improvements resulted: first, rainfall simulation in both distribution and intensity was notably improved, which led to an improvement in precipitation frequency. Second, the dry bias in the lower troposphere was significantly reduced compared with SCM simulations. Third, according to the Taylor diagram, the FFSL scheme yields simulations that are superior to those using the SCM: a higher correlation between model output and observation data was achieved with the FFSL scheme, especially for humidity in the lower troposphere. However, the moist bias in the middle and upper troposphere was more pronounced with the FFSL scheme. This bias led to an over-simulation of precipitable water in comparison with reanalysis data. Possible explanations, as well as solutions, are discussed herein.
Evaluation of nourishment schemes based on long-term morphological modeling
DEFF Research Database (Denmark)
Grunnet, Nicholas; Kristensen, Sten Esbjørn; Drønen, Nils
2012-01-01
A recently developed long-term morphological modeling concept is applied to evaluate the impact of nourishment schemes. The concept combines detailed two-dimensional morphological models and simple one-line models for the coastline evolution and is particularly well suited for long-term simulations ... site. This study strongly indicates that the hybrid model may be used as an engineering tool to predict shoreline response following the implementation of a nourishment project.
Litta, A. J.; Chakrapani, B.; Mohankumar, K.
2007-07-01
Heavy rainfall events become significant in human affairs when they are combined with hydrological elements. The problem of forecasting heavy precipitation is especially difficult since it involves making a quantitative precipitation forecast, a problem well recognized as challenging. Chennai (13.04°N, 80.17°E) faced incessant and heavy rain of about 27 cm in the 24 hours up to 8:30 a.m. on 27 October 2005, which completely threw life out of gear. This torrential rain was caused by a deep depression which lay 150 km east of Chennai city in the Bay of Bengal; it intensified, moved in a west-northwest direction, and crossed the north Tamil Nadu and south Andhra Pradesh coast on the morning of the 28th. In the present study, we investigate the predictability of the MM5 mesoscale model using different cumulus parameterization schemes for this heavy rainfall event over Chennai. MM5 Version 3.7 (PSU/NCAR) is run with two-way triply nested grids using Lambert Conformal Coordinates (LCC) with a nest ratio of 3:1 and 23 vertical layers. Grid sizes of 45, 15 and 5 km are used for domains 1, 2 and 3 respectively. The cumulus parameterization schemes used in this study are the Anthes-Kuo scheme (AK), the Betts-Miller scheme (BM), the Grell scheme (GR) and the Kain-Fritsch scheme (KF). The present study shows that the prediction of heavy rainfall is sensitive to the cumulus parameterization scheme. In the time series of rainfall, the Grell scheme is in good agreement with observation. The ideal combination of nesting domains, horizontal resolution and cloud parameterization is able to simulate the heavy rainfall event both qualitatively and quantitatively.
Improved simulation of precipitation in the tropics using a modified BMJ scheme in WRF model
Directory of Open Access Journals (Sweden)
R. Fonseca
2015-05-01
Full Text Available The successful modelling of observed precipitation, a very important variable for a wide range of climate applications, continues to be one of the major challenges that climate scientists face today. When the Weather Research and Forecasting (WRF) model is used to dynamically downscale the Climate Forecast System Reanalysis (CFSR) over the Indo-Pacific region, with analysis (grid-point) nudging, it is found that the cumulus scheme used, Betts–Miller–Janjić (BMJ), produces excessive rainfall, suggesting that it has to be modified for this region. Experimentation has shown that the cumulus precipitation is not very sensitive to changes in the cloud efficiency but varies greatly in response to modifications of the temperature and humidity reference profiles. A new version of the scheme, denominated the "modified BMJ" scheme, in which the humidity reference profile is moister, was developed, and in tropical belt simulations it was found to give a better estimate of the observed precipitation, as given by the Tropical Rainfall Measuring Mission (TRMM) 3B42 dataset, than the default BMJ scheme for the whole tropics and both monsoon seasons. In fact, in some regions the model even outperforms CFSR. The advantage of modifying the BMJ scheme to produce better rainfall estimates lies in the final dynamical consistency of the rainfall with other dynamical and thermodynamical variables of the atmosphere.
SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES
Directory of Open Access Journals (Sweden)
S.ZIBAEI
2016-12-01
Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviours of the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
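The flavour of a non-standard finite difference discretization can be shown for the classical (integer-order) predator-prey system; the paper's fractional-order version is not reproduced here, and the Mickens-type denominator function and parameter values below are illustrative assumptions:

```python
import math

def nsfd_predator_prey(x0, y0, h, steps, a=1.0, b=1.0, c=1.0, d=1.0):
    """Positivity-preserving NSFD (Mickens-type) scheme for the
    predator-prey system x' = x(a - b*y), y' = y(d*x - c).
    Negative terms are treated implicitly, so x and y stay positive
    for any step size h."""
    phi = (math.exp(a * h) - 1.0) / a  # nonstandard denominator function
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        x = x * (1.0 + phi * a) / (1.0 + phi * b * y)
        y = y * (1.0 + phi * d * x) / (1.0 + phi * c)
        traj.append((x, y))
    return traj
```

Unlike forward Euler, which can produce negative populations for large h, every update here is a ratio of positive quantities, and the coexistence equilibrium (c/d, a/b) is preserved exactly.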
White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
Calibration of highly parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred-parameter-values scheme or preferred relations between parameters, such as the preferred-equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
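In its simplest linear-least-squares form, the preferred-values regularization described above adds a penalty pulling parameters toward their preferred values. A minimal sketch (generic normal equations, not the actual SEAWAT/PEST machinery used in the study) is:

```python
import numpy as np

def tikhonov_preferred_value(A, b, x_pref, lam):
    """Solve min ||A x - b||^2 + lam * ||x - x_pref||^2 via the
    normal equations: (A^T A + lam I) x = A^T b + lam x_pref."""
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b + lam * x_pref
    return np.linalg.solve(lhs, rhs)
```

The weight lam plays the trade-off role characterized by the Pareto analysis: lam → 0 recovers the pure (possibly unstable) least-squares fit, while large lam drives the solution toward x_pref at the cost of a larger measurement misfit.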
Encompassing Sexual Medicine within Psychiatry: Pros and Cons
Segraves, Robert Taylor
2010-01-01
Objective: This article examines the positive and negative aspects of psychiatry encompassing sexual medicine within its purview. Methods: MEDLINE searches for the period between 1980 to the present were performed with the terms "psychiatry," "sexual medicine," and "sexual dysfunction." In addition, sexual medicine texts were reviewed for chapters…
Huang, Bo; Chen, Dehui; Li, Xingliang; Li, Chao
2014-05-01
The Global/Regional Assimilation and PrEdiction System (GRAPES) is the new-generation numerical weather prediction (NWP) system developed by the China Meteorological Administration. It is a fully compressible non-hydrostatic global/regional unified model that uses a traditional semi-Lagrangian advection scheme with cubic Lagrangian interpolation (referred to as the SL_CL scheme). The SL_CL scheme has been used in many operational NWP models, but it still has some deficiencies, such as damping effects due to the interpolation and relatively low accuracy. Based on Reich's semi-Lagrangian advection scheme (referred to as the R2007 scheme), the Re_R2007 scheme, which uses low- and high-order B-spline functions for interpolation at the departure point, is developed in this paper. One- and two-dimensional idealized tests in the rectangular coordinate system with uniform grid cells were conducted to compare the Re_R2007 scheme and the SL_CL scheme. The numerical results showed that: (1) the damping effects were remarkably reduced with the Re_R2007 scheme; and (2) the normalized errors of the Re_R2007 scheme were about 7.5 and 3 times smaller than those of the SL_CL scheme in one- and two-dimensional tests, respectively, indicating the higher accuracy of the Re_R2007 scheme. Furthermore, two solid-body rotation tests were conducted in the latitude-longitude spherical coordinate system with nonuniform grid cells, which also verified the Re_R2007 scheme's advantages. Finally, in comparison with other global advection schemes, the Re_R2007 scheme was competitive in terms of accuracy and flow independence. An encouraging possibility for the application of the Re_R2007 scheme to the GRAPES model is provided.
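The baseline SL_CL approach (semi-Lagrangian advection with cubic Lagrangian interpolation at the departure point) can be illustrated in one dimension on a periodic grid with a constant wind. This is a generic sketch, not GRAPES code:

```python
import numpy as np

def semi_lagrangian_step(f, u, dx, dt):
    """One step of 1-D semi-Lagrangian advection on a periodic grid,
    with cubic Lagrange interpolation at the departure points."""
    n = len(f)
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)            # departure points
    j = np.floor(xd / dx).astype(int)       # cell containing each xd
    a = xd / dx - j                         # fractional offset in [0, 1)
    # four-point stencil j-1, j, j+1, j+2 (periodic wrap)
    fm1, f0, f1, f2 = (f[(j + k) % n] for k in (-1, 0, 1, 2))
    # cubic Lagrange basis weights for nodes -1, 0, 1, 2 at offset a
    wm1 = -a * (a - 1.0) * (a - 2.0) / 6.0
    w0 = (a + 1.0) * (a - 1.0) * (a - 2.0) / 2.0
    w1 = -(a + 1.0) * a * (a - 2.0) / 2.0
    w2 = (a + 1.0) * a * (a - 1.0) / 6.0
    return wm1 * fm1 + w0 * f0 + w1 * f1 + w2 * f2
```

The damping mentioned above originates in this interpolation step: each application of the cubic weights slightly smooths the field, which is what the B-spline-based Re_R2007 interpolation aims to reduce.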
Performance of the Goddard multiscale modeling framework with Goddard ice microphysical schemes
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L. F.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-03-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
Change in Farm Production Structure Within Different CAP Schemes – an LP Modelling Approach
Directory of Open Access Journals (Sweden)
Jaka ŽGAJNAR
2008-01-01
Full Text Available After accession to the European Union in 2004, direct payments became a very important income source for farmers in Slovenia. But the agricultural policy in place at accession changed significantly in 2007 as a result of CAP reform implementation. The objective of this study was to evaluate the decision-making impacts of the direct payments scheme implemented with the reform: a regional or, more likely, a hybrid scheme. The change in farm production structure was simulated with a model, applying gross margin maximisation, based on a static linear programming approach. The model has been developed in a spreadsheet framework on the MS Excel platform. A hypothetical farm has been chosen to analyse different scenarios and specializations. The focus of the analysis was on the cattle sector, since it is expected that decoupling is going to have a significant influence on its optimal production structure. The reason is the high level of direct payments, which in the pre-reform scheme could rise up to 70% of total gross margin. Model results confirm that the reform should have unfavourable impacts on cattle farms with intensive production practice. The results show that the hybrid scheme has minor negative impacts in all cattle specializations, while the regional scheme would be a better option for a sheep-specialized farm. The analysis has also shown the growing importance of CAP pillar II payments, among them particularly agri-environmental measures. In all three schemes budgetary payments enable farmers to improve financial results, and in both reform schemes they alleviate the economic impacts of the CAP reform.
Directory of Open Access Journals (Sweden)
Ireyuwa E. Igbinosa
2015-10-01
Full Text Available Due to the ever-growing need for spectrum, the cognitive radio (CR) has been proposed to improve radio spectrum utilization. In this scenario, secondary users (SU) are permitted to share spectrum with the licensed primary users (PU) under the strict condition that they do not cause harmful interference to the primary network. In this work, we propose an interference model for a cognitive radio network that utilizes power control or contention control interference management schemes. We derive the probability density function (PDF) of the interference under the power control scheme, where the transmission power of the CR transmitter is governed by a power control law, and under the contention control scheme, which uses a fixed transmission power for all CR transmitters governed by a contention control protocol. This protocol decides which CR transmitter can transmit at any point in time. We show that both power control and contention control schemes are good candidates for interference modeling in cognitive radio systems. The impact of the unknown location of the primary receiver on the resulting interference generated by the CR transmitters was investigated, and the results show that hidden primary receivers lead to higher CR-to-primary interference, in terms of a higher mean and variance. Finally, the presented results show that the power control and contention control schemes are good candidates for reducing the interference generated by the cognitive radio network.
Energy Technology Data Exchange (ETDEWEB)
Yan, Huiping; Qian, Yun; Lin, Guang; Leung, Lai-Yung R.; Yang, Ben; Fu, Q.
2014-03-25
Convective parameterizations used in weather and climate models all display sensitivity to model resolution and variable skill in different climatic regimes. Although parameters in convective schemes can be calibrated using observations to reduce model errors, it is not clear whether optimal parameters calibrated from regional data can robustly improve model skill across different model resolutions and climatic regimes. In this study, this issue is investigated using a regional modeling framework based on the Weather Research and Forecasting (WRF) model. To quantify the response and sensitivity of model performance to model parameters, we identified five key input parameters of the Kain-Fritsch (KF) convection scheme in WRF, specified their ranges, and calibrated them across different spatial resolutions, climatic regimes, and radiation schemes using observed precipitation data. Results show that the optimal values of the five input parameters in the KF scheme are close across experiments, and model sensitivity and error exhibit a similar dependence on the input parameters in all experiments conducted in this study, despite differences in the precipitation climatology. We found that the model's overall performance in simulating precipitation is most sensitive to the coefficients of downdraft (Pd) and entrainment (Pe) mass flux and the starting height of downdraft (Ph). However, rainfall biases, which are probably more related to structural errors, still exist over some regions even with the optimal parameters, suggesting that further studies are needed to identify the sources of uncertainty and reduce the model biases or structural errors associated with missed or misrepresented physical processes and/or potential problems with the modeling framework.
Model and algorithm of optimizing alternate traffic restriction scheme in urban traffic network
Institute of Scientific and Technical Information of China (English)
徐光明; 史峰; 刘冰; 黄合来
2014-01-01
An optimization model and its solution algorithm for alternate traffic restriction (ATR) schemes were introduced in terms of both the restriction districts and the proportion of restricted automobiles. A bi-level programming model was proposed to model the ATR scheme optimization problem by aiming at consumer surplus maximization and overload flow minimization at the upper-level model. At the lower-level model, elastic demand, mode choice and multi-class user equilibrium assignment were synthetically optimized. A genetic algorithm involving prolonging codes was constructed, demonstrating high computing efficiency in that it dynamically includes newly-appearing overload links in the codes so as to reduce the subsequent searching range. Moreover, practical processing approaches were suggested, which may improve the operability of the model-based solutions.
Ensuring confidence in predictions: A scheme to assess the scientific validity of in silico models.
Hewitt, Mark; Ellison, Claire M; Cronin, Mark T D; Pastor, Manuel; Steger-Hartmann, Thomas; Munoz-Muriendas, Jordi; Pognan, Francois; Madden, Judith C
2015-06-23
The use of in silico tools within the drug development process to predict a wide range of properties including absorption, distribution, metabolism, elimination and toxicity has become increasingly important due to changes in legislation and both ethical and economic drivers to reduce animal testing. Whilst in silico tools have been used for decades there remains reluctance to accept predictions based on these methods particularly in regulatory settings. This apprehension arises in part due to lack of confidence in the reliability, robustness and applicability of the models. To address this issue we propose a scheme for the verification of in silico models that enables end users and modellers to assess the scientific validity of models in accordance with the principles of good computer modelling practice. We report here the implementation of the scheme within the Innovative Medicines Initiative project "eTOX" (electronic toxicity) and its application to the in silico models developed within the frame of this project.
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational costs. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to avoid confusing numerical errors with transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
Chern, J.; Tao, W.; Lang, S. E.; Matsui, T.
2012-12-01
The accurate representation of clouds and cloud processes in atmospheric general circulation models (GCMs) with relatively coarse resolution (~100 km) has been a long-standing challenge. With the rapid advancement in computational technology, a new breed of GCMs capable of explicitly resolving clouds has been developed. Though still computationally very expensive, global cloud-resolving models (GCRMs) with horizontal resolutions of 3.5 to 14 km are already being run in an exploratory manner. Another, less computationally demanding approach is the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the GEOS global model. In recent years a few new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns. It is important to evaluate these microphysical schemes for global applications such as the MMFs and GCRMs. Two-year (2007-2008) MMF sensitivity experiments have been carried out with different cloud microphysical schemes. The simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water contents in different geographic locations and climate regimes are evaluated against TRMM, CloudSat and CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to evaluate the performance of different cloud microphysical schemes. We will assess the strengths and deficiencies of these microphysics schemes and provide guidance on how to improve
DEFF Research Database (Denmark)
Hyun, Jaeyub; Kook, Junghwan; Wang, Semyung
2015-01-01
and basis vectors for use according to the target system. The proposed model reduction scheme is applied to the numerical simulation of the simple mass-damping-spring system and the acoustic metamaterial systems (i.e., acoustic lens and acoustic cloaking device) for the first time. Through these numerical...
An Enhanced Informed Watermarking Scheme Using the Posterior Hidden Markov Model
Directory of Open Access Journals (Sweden)
Chuntao Wang
2014-01-01
Designing a practical watermarking scheme with high robustness, feasible imperceptibility, and large capacity remains one of the most important research topics in robust watermarking. This paper presents a posterior hidden Markov model (HMM-) based informed image watermarking scheme, which well enhances the practicability of the prior-HMM-based informed watermarking with favorable robustness, imperceptibility, and capacity. To make the encoder and decoder use the (nearly) identical posterior HMM, each cover image at the encoder and each received image at the decoder are attacked with JPEG compression at an equivalently small quality factor (QF). The attacked images are then employed to estimate HMM parameter sets for both the encoder and decoder, respectively. Numerical simulations show that a small QF of 5 is an optimum setting for practical use. Based on this posterior HMM, we develop an enhanced posterior-HMM-based informed watermarking scheme. Extensive experimental simulations show that the proposed scheme is comparable to its prior counterpart in which the HMM is estimated with the original image, but it avoids the transmission of the prior HMM from the encoder to the decoder. This thus well enhances the practical application of HMM-based informed watermarking systems. Also, it is demonstrated that the proposed scheme has the robustness comparable to the state-of-the-art with significantly reduced computation time.
A hybrid scheme for absorbing edge reflections in numerical modeling of wave propagation
Liu, Yang
2010-03-01
We propose an efficient scheme to absorb reflections from the model boundaries in numerical solutions of wave equations. This scheme divides the computational domain into boundary, transition, and inner areas. The wavefields within the inner and boundary areas are computed by the wave equation and the one-way wave equation, respectively. The wavefields within the transition area are determined by a weighted combination of the wavefields computed by the wave equation and the one-way wave equation to obtain a smooth variation from the inner area to the boundary via the transition zone. The results from our finite-difference numerical modeling tests of the 2D acoustic wave equation show that the absorption enforced by this scheme gradually increases with increasing width of the transition area. We obtain equally good performance using pseudospectral and finite-element modeling with the same scheme. Our numerical experiments demonstrate that use of 10 grid points for absorbing edge reflections attains nearly perfect absorption. © 2010 Society of Exploration Geophysicists.
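The weighted-combination idea can be illustrated in one dimension under a hypothetical setup: a second-order leapfrog interior, a first-order upwind update standing in for the one-way (outgoing) wave equation, and a linear weight ramp across the transition zone. The grid sizes, pulse shape and zone width below are invented.

```python
import math

def simulate(nx=200, nt=400, c=1.0, dx=1.0, dt=0.5, n_trans=10):
    """1D acoustic wave with a hybrid absorbing right edge.

    Interior: two-way wave equation (leapfrog).
    Transition zone (n_trans points): blend of the two-way update and a
    one-way outgoing update, with weight ramping from 0 to 1."""
    r = c * dt / dx
    # Gaussian pulse moving to the right (second time level shifted by c*dt)
    u_prev = [math.exp(-0.01 * (i - 50) ** 2) for i in range(nx)]
    u = [math.exp(-0.01 * (i - 50 - r) ** 2) for i in range(nx)]
    for _ in range(nt):
        u_next = u[:]
        for i in range(1, nx - 1):
            two_way = (2 * u[i] - u_prev[i]
                       + r * r * (u[i + 1] - 2 * u[i] + u[i - 1]))
            one_way = u[i] - r * (u[i] - u[i - 1])   # upwind, outgoing right
            d = i - (nx - 1 - n_trans)               # depth into transition
            w = min(max(d / n_trans, 0.0), 1.0)
            u_next[i] = (1 - w) * two_way + w * one_way
        u_next[0] = 0.0                              # fixed left boundary
        u_next[-1] = u_next[-2]                      # simple outflow at edge
        u_prev, u = u, u_next
    return u

u_final = simulate()
residual = max(abs(v) for v in u_final)              # energy left after exit
```

By the final step the unit-amplitude pulse has long since reached the right edge, so `residual` measures the reflected remnant; widening `n_trans` should reduce it further, mirroring the paper's observation.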
Multi-model ensemble schemes for predicting northeast monsoon rainfall over peninsular India
Indian Academy of Sciences (India)
Nachiketa Acharya; S C Kar; Makarand A Kulkarni; U C Mohanty; L N Sahoo
2011-10-01
The northeast (NE) monsoon season (October, November and December) is the major period of rainfall activity over south peninsular India. This study focuses on the prediction of northeast monsoon rainfall using lead-1 products (forecasts for the season issued at the beginning of September) of seven general circulation models (GCMs). An examination of the performance of these GCMs during hindcast runs (1982–2008) indicates that the models are not able to simulate the observed interannual variability of rainfall. An inaccurate response of the models to sea surface temperatures may be one of the probable reasons for their poor performance in predicting seasonal mean rainfall anomalies over the study domain. An attempt has been made to improve the accuracy of predicted rainfall using three different multi-model ensemble (MME) schemes, viz., the simple arithmetic mean of models (EM), principal component regression (PCR) and singular value decomposition based multiple linear regression (SVD). It is found that among these three schemes, the SVD-based MME has more skill than the other MME schemes as well as the member models.
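The contrast between a simple arithmetic mean and a regression-based combination can be sketched on synthetic hindcasts. The data below are invented (27 "seasons", 7 "models" with random skills), and a plain least-squares fit on the raw member forecasts stands in for the PCR/SVD-based schemes; a real skill assessment would cross-validate rather than fit on the full record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented hindcasts: 27 seasons, 7 GCMs, each a noisy scaled copy of the
# observed seasonal rainfall anomaly (standardised units).
n_years, n_models = 27, 7
obs = rng.normal(0.0, 1.0, n_years)
skill = rng.uniform(0.2, 0.9, n_models)
hindcasts = np.array([s * obs + rng.normal(0.0, 1.0 - s, n_years)
                      for s in skill]).T           # shape (years, models)

def ensemble_mean(h):
    """EM scheme: simple arithmetic mean of the member models."""
    return h.mean(axis=1)

def regression_mme(h, y):
    """Regression-based MME (stand-in for the PCR/SVD schemes): ordinary
    least squares with an intercept, fitted on the full record for brevity."""
    X = np.column_stack([np.ones(len(y)), h])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ w

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

skill_em = corr(ensemble_mean(hindcasts), obs)
skill_reg = corr(regression_mme(hindcasts, obs), obs)
```

In-sample, the regression fit can never correlate worse with the observations than the plain mean, since the mean is one of the linear combinations the regression searches over; out of sample the ordering must be verified, which is what the hindcast evaluation in the paper does.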
Korpusik, Adam
2017-02-01
We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
Stasser
1999-10-01
The basic elements of social decision scheme (SDS) theory are individual preferences, group preference compositions (distinguishable distributions), patterns of group influence (decision schemes, social combination rules), and collective responses (group decisions, judgments, solutions, and the like). The theory provides a framework for addressing two fundamental questions in the study of group performance: How are individual resources combined to yield a group response (the individual-into-group problem)? What are the implications of empirical observations under one set of circumstances for other conditions where data do not exist (the sparse data problem)? Several prescriptions for how to conduct fruitful group research are contained in the SDS tradition: make precise theoretical statements, provide strong and competitive tests of theories, and interpret empirical findings in the context of robust process models. Copyright 1999 Academic Press.
A Model-Free Scheme for Meme Ranking in Social Media.
He, Saike; Zheng, Xiaolong; Zeng, Daniel
2016-01-01
The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, tags, etc.). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite such importance, little existing work solves this problem well: previous methods are either hampered by impractical assumptions or unable to characterize dynamic information. In this paper, we therefore elaborate a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, the ranking scheme can also be used to explain meme popularity through the lens of social influence.
Directory of Open Access Journals (Sweden)
Chang-bae Moon
2011-01-01
Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-existing with humans. The reliability of localization is highly dependent on the developer's experience because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
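The failure-detection idea, i.e. monitoring the observation likelihood and flagging a sustained drop such as the one caused by abrupt wheel slippage, can be sketched in a minimal 1D simulation. All quantities below (the Gaussian beam model, wall position, slip magnitude, window length and threshold) are hypothetical choices, not the paper's design.

```python
import math
import random

def obs_likelihood(expected, measured, sigma=0.1):
    """Gaussian beam-style measurement model (a common design choice)."""
    return math.exp(-0.5 * ((expected - measured) / sigma) ** 2)

def run(seed=0, n_steps=40, slip_at=25):
    rng = random.Random(seed)
    wall = 10.0                  # robot measures range to a wall at x = 10
    true_x, est_x = 0.0, 0.0
    history, failure_step = [], None
    for t in range(n_steps):
        true_x += 0.2
        est_x += 0.2             # odometry-based prediction
        if t == slip_at:         # abrupt wheel slippage: the robot stalls
            true_x -= 1.5        # while odometry keeps integrating
        z = wall - true_x + rng.gauss(0.0, 0.02)      # noisy range reading
        lik = obs_likelihood(wall - est_x, z)
        history.append(lik)
        # failure detector: sustained low likelihood over a short window
        if (failure_step is None and len(history) >= 3
                and max(history[-3:]) < 0.1):
            failure_step = t
    return failure_step

detected = run()
```

Before the slip the likelihood stays near 1; after it, the 1.5 m pose error drives the likelihood to essentially zero, and the three-step window flags failure two steps after the slip, at which point a semi-global relocalization would be triggered.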
A density-dependent matrix model and its applications in optimizing harvest schemes
Institute of Scientific and Technical Information of China (English)
Guofan Shao; WANG Fei; DAI Limin; BAI Jianwei; LI Yingshan
2006-01-01
Based on temporal data collected from 36 re-measured plots, transition probabilities of trees from a diameter class to a higher class were analyzed for the broadleaved-Korean pine forest in the Changbai Mountains. It was found that the transition probabilities were related not only to diameter size but also to the total basal area of trees within the diameter class. This paper demonstrates the development of a density-dependent matrix model, DM2, and a series of simulations with it for forest stands with different conditions under different harvest schemes. After validation with independent field data, this model proved to be a suitable tool for computer-based optimization analysis of harvest schemes. The optimum harvest scheme(s) can be determined by referring to stand growth, total timber harvested, and size diversity changes over time. Three user-friendly interfaces were built with the forest management decision support system FORESTAR(R) for easy operation of DM2 by forest managers. This paper also summarizes the advantages and disadvantages of DM2.
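A density-dependent diameter-class projection can be sketched as follows. The class widths, per-tree basal areas, logistic damping of upgrowth with stand basal area, mortality, recruitment and harvest numbers are all invented stand-ins for DM2's fitted relationships.

```python
import math

# Hypothetical 4 diameter classes; per-tree basal area (m^2) per class.
BASAL = [0.01, 0.05, 0.12, 0.25]

def upgrowth_probs(n, base=(0.3, 0.25, 0.2)):
    """Density-dependent transition: the probability of moving up one
    diameter class shrinks as stand basal area grows (a hypothetical
    logistic damping standing in for the fitted relationship)."""
    B = sum(c * b for c, b in zip(n, BASAL))
    damp = 1.0 / (1.0 + math.exp(2.0 * (B - 3.0)))
    return [p * damp for p in base]

def project(n, harvest=(0, 0, 0, 0), mortality=0.02, recruit=10.0):
    """One projection period: harvest, then survive/grow/recruit."""
    p = upgrowth_probs(n)
    n = [max(ni - hi, 0.0) for ni, hi in zip(n, harvest)]
    new = [0.0] * 4
    new[0] = n[0] * (1 - mortality - p[0]) + recruit
    for i in (1, 2):
        new[i] = n[i] * (1 - mortality - p[i]) + n[i - 1] * p[i - 1]
    new[3] = n[3] * (1 - mortality) + n[2] * p[2]
    return new

# Compare two harvest schemes over 20 periods (stems per hectare)
stand = [120.0, 60.0, 30.0, 10.0]
no_cut, light_cut = stand[:], stand[:]
for _ in range(20):
    no_cut = project(no_cut)
    light_cut = project(light_cut, harvest=(0, 0, 2, 1))
```

Running several candidate `harvest` vectors through such a loop and scoring the trajectories (growth, timber removed, size diversity) is the kind of on-computer optimization analysis the abstract describes.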
Validation of sub-grid-scale mixing schemes using CFCs in a global ocean model
Robitaille, Daniel Y.; Weaver, Andrew J.
Three sub-grid-scale mixing parameterizations (lateral/vertical; isopycnal; Gent and McWilliams, 1990) are used in a global ocean model in an attempt to determine which yields the best ocean climate. Observed CFC-11 distributions, in both the North and South Atlantic, are used in evaluating the model results. While the isopycnal mixing scheme does improve the deep ocean potential temperature and salinity distributions, when compared to results from the traditional lateral/vertical mixing scheme, the CFC-11 distribution is worse in the upper ocean due to too much mixing. The Gent and McWilliams (1990) parameterization significantly improves the CFC-11 distributions when compared to both of the other schemes. The main improvement comes from a reduction of CFC uptake in the southern ocean where the ‘bolus’ transport cancels the mean advection of tracers and hence causes the Deacon Cell to disappear. These results suggest that the asymmetric response found in CO2-increase experiments, whereby the climate over the southern ocean does not warm as much as in the northern hemisphere, may be due to the particular mixing schemes used.
A study of the spreading scheme for viral marketing based on a complex network model
Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong
2010-02-01
Buzzword-based viral marketing, also known as digital word-of-mouth marketing, is a marketing mode attached to carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing actually uses a pre-existing social network; however, the scale of the pre-existing network is believed to be so large and so random that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling into the research of general viral marketing and develops a specific spreading scheme for viral marketing, and an approach to designing the scheme, based on a real complex network on the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
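A spreading scheme on a contact network can be sketched with an independent-cascade simulation, assuming a hypothetical hub-and-spoke graph as a crude stand-in for an instant-messaging network; comparing hub seeding with leaf seeding illustrates why network structure matters for scheme design.

```python
import random

def spread(adj, seeds, p=0.3, rng=None):
    """Independent-cascade diffusion: each newly activated user forwards
    the message to each neighbour once, succeeding with probability p."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

# Hypothetical contact graph: node 0 is a hub linked to everyone, plus a
# sparse ring among the remaining nodes.
n = 50
adj = {i: [] for i in range(n)}
for i in range(1, n):
    adj[0].append(i)
    adj[i].append(0)
for i in range(1, n - 1):
    adj[i].append(i + 1)
    adj[i + 1].append(i)

hub_reach = sum(spread(adj, [0], rng=random.Random(s)) for s in range(50)) / 50
leaf_reach = sum(spread(adj, [25], rng=random.Random(s)) for s in range(50)) / 50
```

Averaged over repeated cascades, seeding the hub reaches far more users than seeding a peripheral node, which is the kind of structural insight a complex-network-based spreading scheme exploits.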
Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik
2010-06-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.
Simulation of hailstorm event using Mesoscale Model MM5 with modified cloud microphysics scheme
Directory of Open Access Journals (Sweden)
P. Chatterjee
2008-11-01
Mesoscale model MM5 (Version 3.5), with some modifications in the cloud microphysics scheme of Schultz (1995), has been used to simulate two hailstorm events over the Gangetic Plain of West Bengal, India. The first event occurred on 12 March 2003, with hail covering four districts of the state of West Bengal, India; the second hailstorm event struck Srinikatan (22.65° N, 87.7° E) on 10 April 2006 at 11:32 UT and lasted for 2–3 min. Both events can be simulated if the same modifications are introduced in the cloud microphysics scheme of Schultz. The original scheme of Schultz, however, cannot simulate any hail.
The results of the simulation were compared with the relevant products of the Doppler Weather Radar (DWR) located at Kolkata (22.57° N, 88.35° E). Model products such as reflectivity, graupel and horizontal wind are compared with the corresponding products of the DWR. The pattern of hail development bears good similarity between model output and DWR observation when the necessary modifications are introduced in the model. The model output of 24 h accumulated rain from 03:00 UT to 03:00 UT the next day has also been compared with the corresponding product of the TRMM satellite.
Development and basic evaluation of a prognostic aerosol scheme in the CNRM Climate Model
Michou, M.; Nabat, P.; Saint-Martin, D.
2014-09-01
We have implemented a prognostic aerosol scheme in the CNRM-GAME/CERFACS climate model, based upon the GEMS/MACC aerosol module of the ECMWF operational forecast model. This scheme describes the physical evolution of the five main types of aerosols, namely black carbon, organic matter, sulfate, desert dust and sea-salt. In this work, we describe the specificities of our implementation, for instance, taking into consideration a different dust scheme or boosting biomass burning emissions by a factor of 2, as well as the evaluation performed on simulation outputs. The simulations consist of 2004 conditions and transient runs over the 1993-2012 period, and are either free-running or nudged towards the ERA-Interim Reanalysis. Evaluation data sets include several satellite instrument AOD products (i.e., MODIS Aqua classic and Deep-Blue products, MISR and CALIOP products), as well as ground-based AERONET data and the derived AERONET climatology, MAC-v1. The internal variability of the model has little impact on the seasonal climatology of the AODs of the various aerosols, and the characteristics of a nudged simulation reflect those of a free-running simulation. In contrast, the impact of the new dust scheme is large, with modelled dust AODs from simulations with the new dust scheme close to observations. Overall patterns and seasonal cycles of the total AOD are well depicted with, however, a systematic low bias over oceans. The comparison to the fractional MAC-v1 AOD climatology shows disagreements mostly over continents, while that to AERONET sites outlines the capability of the model to reproduce monthly climatologies under very diverse dominant aerosol types. Here again, underestimation of the total AOD appears in several cases, linked sometimes to insufficient efficiency of the aerosol transport away from the aerosol sources. Analysis of monthly time series at 166 AERONET sites shows, in general, correlation coefficients higher than 0.5 and lower model variance than
Directory of Open Access Journals (Sweden)
B. C. Backeberg
2009-02-01
A 4th order advection scheme is applied in a nested eddy-resolving Hybrid Coordinate Ocean Model (HYCOM) of the greater Agulhas Current system for the purpose of testing advanced numerics as a means of improving the model simulation for eventual operational implementation. Model validation techniques comparing sea surface height variations, sea level skewness and variogram analyses to satellite altimetry measurements quantify that the 4th order advection scheme generally improves the realism of the model simulation. The most striking improvement over the standard 2nd order momentum advection scheme is that the Southern Agulhas Current is simulated as a well-defined meandering current, rather than a train of successive eddies. A better vertical structure and stronger poleward transports in the Agulhas Current core contribute toward a better southwestward penetration of the current, and of its temperature field, implying a stronger Indo-Atlantic inter-ocean exchange. It is found that the transport, and hence this exchange, is sensitive to the occurrence of mesoscale features originating upstream in the Mozambique Channel and Southern East Madagascar Current, and that the improved HYCOM simulation is well suited for further studies of these interactions.
Modeling and Analysis of DIPPM: A New Modulation Scheme for Visible Light Communications
Directory of Open Access Journals (Sweden)
Sana Ullah Jan
2015-01-01
Visible Light Communication (VLC) uses an Intensity-Modulation and Direct-Detection (IM/DD) scheme to transmit data. However, the light source used in VLC systems is continuously switched on and off quickly, resulting in flickering. In addition, recent illumination systems include dimming support to allow users to dim the light sources to the desired level. Therefore, the modulation scheme for data transmission in a VLC system must include flicker mitigation and dimming control capabilities. In this paper, the authors propose a Double Inverse Pulse Position Modulation (DIPPM) scheme that minimizes flickering and supports a high level of dimming for the illumination sources in VLC systems. To form DIPPM, some changes are made in the symbol structure of the IPPM scheme, and a detailed explanation and mathematical model of DIPPM are given in this paper. Furthermore, both analytical and simulation results for the error performance of 2-DIPPM are compared with the performance of VPPM. Also, the communication performance of DIPPM is analyzed in terms of the normalized required power.
Institute of Scientific and Technical Information of China (English)
DAI Fushan; YU Rucong; ZHANG Xuehong; YU Yongqiang
2005-01-01
In this study, a statistical cloud scheme is first introduced and coupled with a first-order turbulence scheme whose second-order turbulence moments are parameterized by the timescale of turbulence dissipation and the vertical turbulent diffusion coefficient. The ability of the scheme to simulate cloud fraction at different relative humidities, vertical temperature profiles, and turbulence dissipation timescales is then examined by numerical simulation. It is found that the simulated cloud fraction is sensitive to the parameter used in the statistical cloud scheme and to the timescale of turbulent dissipation. Based on these analyses, the introduced statistical cloud scheme is modified. By combining the modified statistical cloud scheme with a boundary layer cumulus scheme, a new statistically-based low-level cloud scheme is proposed and tentatively applied in the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3). The simulation of low-level cloud fraction is markedly improved, and the centers with maximum low-level cloud fractions over the cold oceans off the western coasts are well simulated with the statistically-based low-level cloud scheme applied in CCM3. This suggests that the new statistically-based low-level cloud scheme has great potential for improving low-level cloud parameterization in general circulation models.
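The core of a statistical cloud scheme, deriving cloud fraction from an assumed subgrid humidity distribution, can be sketched as follows. Here the subgrid fluctuations are taken to be Gaussian with a prescribed width `sigma`; in a turbulence-coupled scheme that width would come from the parameterized second-order moments, and both the Gaussian assumption and the value of `sigma` are illustrative only.

```python
import math

def cloud_fraction(rh, sigma=0.1):
    """Cloud fraction from a statistical scheme: the subgrid saturation
    excess s fluctuates as N(rh - 1, sigma^2) (saturation units), and the
    cloudy fraction of the grid box is P(s > 0)."""
    return 0.5 * (1.0 + math.erf((rh - 1.0) / (sigma * math.sqrt(2.0))))

# Fraction grows smoothly from ~0 to ~1 as grid-mean RH crosses saturation,
# so partial cloudiness appears below 100% relative humidity.
fracs = [cloud_fraction(rh) for rh in (0.7, 0.9, 1.0, 1.1)]
```

The sensitivity the abstract reports follows directly from this construction: a larger `sigma` (stronger subgrid variability, e.g. from slower turbulence dissipation) produces more cloud at subsaturated grid-mean humidity.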
A Case Study of the Accounting Models for the Participants in an Emissions Trading Scheme
Directory of Open Access Journals (Sweden)
Marius Deac
2013-10-01
As emissions trading schemes become more popular across the world, accounting has to keep up with these new economic developments. The absence of guidance regarding accounting for greenhouse gas (GHG) emissions, created by the withdrawal of IFRIC 3 "Emission Rights", is the main reason for the diversity of accounting practices. This diversity of accounting methods makes the financial statements of companies taking part in emissions trading schemes such as the EU ETS difficult to compare. The present paper uses a case study that assumes the existence of three entities that have chosen three different accounting methods: the IFRIC 3 cost model, the IFRIC 3 revaluation model and the "off balance sheet" approach. This illustrates how the choice of an accounting method for GHG emissions influences companies' interim and annual reports through the changes in their balance sheets and financial results.
Simplified prediction model for lighting energy consumption in office building scheme design
Institute of Scientific and Technical Information of China (English)
余琼; 周潇儒; 林波荣; 朱颖心
2009-01-01
At the scheme design stage, the potential of daylighting is significant due to the savings in electric lighting use. There are few simple tools for architects to optimize daylighting design. It is therefore useful to develop a design guideline for evaluating lighting energy saving potential and daylighting design strategies. This paper analyzes the impacts of different artificial lighting control methods and design parameters on daylighting. A direct correlation between lighting energy consumption and parameters such as orientation, window to wall ratio (WWR) and perimeter depth is established. A simplified prediction model is proposed to estimate lighting energy consumption from the given perimeter depth, WWR, and window transparency. Validation of the model is carried out by comparison with detailed lighting simulation software for an office building. After a variation analysis of these parameters, design advice for daylighting at the scheme design phase is summarized.
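A simplified prediction model of this kind can be sketched as a linear regression fitted to a handful of detailed-simulation results. The training table below is entirely invented (it only mimics the qualitative trends: larger WWR and transmittance reduce lighting energy, deeper perimeter zones increase it), so the fitted coefficients are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical "detailed simulation" results: annual lighting energy
# (kWh/m^2) for combinations of window-to-wall ratio, perimeter depth (m)
# and window transmittance.
runs = np.array([
    # WWR, depth, tau, lighting energy
    [0.2, 4.0, 0.6, 21.0],
    [0.2, 8.0, 0.6, 26.0],
    [0.4, 4.0, 0.6, 16.0],
    [0.4, 8.0, 0.6, 22.0],
    [0.6, 4.0, 0.8, 11.0],
    [0.6, 8.0, 0.8, 17.0],
    [0.3, 6.0, 0.7, 20.0],
    [0.5, 6.0, 0.7, 16.5],
])
X = np.column_stack([np.ones(len(runs)), runs[:, 0], runs[:, 1], runs[:, 2]])
y = runs[:, 3]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the simplified model

def predict(wwr, depth, tau):
    """Scheme-stage estimate of annual lighting energy (kWh/m^2)."""
    return float(coef @ [1.0, wwr, depth, tau])

estimate = predict(0.45, 5.0, 0.7)
```

At the scheme design stage an architect could sweep `predict` over candidate WWR and depth values instead of running a full lighting simulation for each variant.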
Model-based fault diagnosis techniques design schemes, algorithms, and tools
Ding, Steven
2008-01-01
The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.
Chalupecký, Vladimír
2011-01-01
We propose a semi-discrete finite difference multiscale scheme for a concrete corrosion model consisting of a system of two-scale reaction-diffusion equations coupled with an ODE. We prove energy and regularity estimates and use them to obtain the necessary compactness of the approximations. Finally, we numerically illustrate the behavior of the two-scale finite difference approximation of the weak solution.
Modelling medical care usage under medical insurance scheme for urban non-working residents.
Xiong, Linping; Tian, Wenhua; Tang, Weidong
2013-06-01
This research investigates and evaluates medical care usage by urban non-working residents in China using microsimulation techniques. It focuses on modelling medical services usage and simulating medical expenses for hospitalization treatments, as well as clinic services for serious illness, in an urban area for the period 2008-2010. A static microsimulation model was created to project the impact of the medical insurance scheme. For three different scenarios, the model predicted the hospitalization service costs and payments, as well as the balance of the social pool fund and the medical burden on families.
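A static microsimulation of this kind can be sketched as follows: draw medical costs for a synthetic population, apply a reimbursement rule, and track the pool fund balance and the residual family burden. The hospitalization probability, cost distribution, premium, deductible, coinsurance rate and ceiling below are all hypothetical, not the scheme's actual parameters.

```python
import random

def reimburse(cost, deductible=300.0, rate=0.6, ceiling=20000.0):
    """Hypothetical reimbursement rule: the pool fund pays `rate` of the
    costs above a deductible, up to an annual ceiling."""
    return min(max(cost - deductible, 0.0) * rate, ceiling)

def simulate(n=10000, premium=100.0, seed=0):
    """One simulated year for a synthetic population of n residents."""
    rng = random.Random(seed)
    fund = n * premium                       # pool fund from premiums
    family_burden = []
    for _ in range(n):
        cost = 0.0
        if rng.random() < 0.08:              # hospitalised this year
            cost = rng.lognormvariate(8.0, 0.8)
        paid = reimburse(cost)
        fund -= paid
        family_burden.append(cost - paid)    # out-of-pocket share
    return fund, sum(family_burden) / n

balance, avg_burden = simulate()
```

Re-running `simulate` with alternative reimbursement parameters gives the scenario comparisons the abstract describes: each scenario yields a fund balance and an average family burden to weigh against each other.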
A new modeling and control scheme for thyristor-controlled series capacitor
Institute of Scientific and Technical Information of China (English)
Zhizhong MAO
2009-01-01
In order to design an optimal controller for the thyristor-controlled series capacitor (TCSC), a novel TCSC control model is developed. In the model, the delay angle of the thyristor valves is the input, and the inductor current is chosen as the output. Theoretical analysis and simulation studies show that the TCSC is a non-linear system whose parameters vary with the operating point. In consideration of these special characteristics of the TCSC, an improved model algorithmic control (IMAC) scheme is proposed to control the TCSC effectively. Good performance is observed in simulation results when IMAC is applied to a series-compensated radial system.
Comparison of renormalization group schemes for sine-Gordon type models
Nandori, I; Sailer, K; Trombettoni, A
2009-01-01
We consider the scheme-dependence of the renormalization group (RG) flow obtained in the local potential approximation for two-dimensional periodic, sine-Gordon type field-theoretic models with possible inclusion of explicit mass terms. For sine-Gordon type models exhibiting a Kosterlitz-Thouless-Berezinskii type phase transition, the Wegner-Houghton, Polchinski, functional Callan-Symanzik and effective average action RG methods give qualitatively the same result, and the critical frequency (temperature) can be obtained scheme-independently from the RG equations linearized around the Gaussian fixed point. For the massive sine-Gordon model, which undergoes an Ising type phase transition, the Wegner-Houghton, functional Callan-Symanzik and effective average action RG methods provide the same scheme-independent phase structure and value for the critical ratio, in agreement with the results of lattice methods. It is also shown that RG equations linearized around the Gaussian fixed point produce sch...
Nonstandard finite difference scheme for SIRS epidemic model with disease-related death
Fitriah, Z.; Suryanto, A.
2016-04-01
It is well known that the SIRS epidemic model with disease-related death can be described by a system of nonlinear ordinary differential equations (NL ODEs). This model has two equilibrium points whose existence and stability properties are determined by the basic reproduction number [1]. Besides the qualitative properties, it is often also necessary to solve the system of NL ODEs numerically. The Euler method and the 4th-order Runge-Kutta (RK4) method are often used for this purpose, but both may produce qualitative behavior inconsistent with the NL ODEs, such as convergence to the wrong equilibrium point. In this paper we apply a non-standard finite difference (NSFD) scheme (see [2,3]) to approximate the solution of the SIRS epidemic model with disease-related death. It is shown that the discrete system obtained by the NSFD scheme is dynamically consistent with the continuous model. Our numerical simulations show that the solutions of the NSFD scheme are always positive, bounded and convergent to the correct equilibrium point for any integration step size (h), whereas those of the Euler or RK4 methods have these properties only for relatively small h.
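A minimal sketch of a positivity-preserving NSFD discretization of this type (the particular SIRS variant, the parameter values, and the implicit placement of the loss terms below are illustrative assumptions, not necessarily those of [2,3]):

```python
# Hedged NSFD sketch for an SIRS model with disease-related death d:
#   S' = Lam - beta*S*I - mu*S + omega*R
#   I' = beta*S*I - (mu + d + rho)*I
#   R' = rho*I - (mu + omega)*R
# Loss terms are placed implicitly (Mickens-style nonlocal approximation),
# so every update is a ratio of positive quantities and positivity holds
# for ANY step size h.
def nsfd_sirs(S, I, R, h, steps,
              Lam=0.8, beta=0.5, mu=0.1, d=0.1, rho=0.2, omega=0.05):
    for _ in range(steps):
        S = (S + h * (Lam + omega * R)) / (1.0 + h * (beta * I + mu))
        I = I * (1.0 + h * beta * S) / (1.0 + h * (mu + d + rho))
        R = (R + h * rho * I) / (1.0 + h * (mu + omega))
    return S, I, R

# even a very large step keeps the iterates positive and bounded
print(nsfd_sirs(5.0, 1.0, 0.0, h=50.0, steps=200))
```

At a fixed point of this discrete map the continuous equilibrium equations are recovered exactly (e.g. beta*S* = mu + d + rho on the endemic branch), which is the dynamic consistency property the abstract claims; explicit Euler or RK4 lose positivity once h grows.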
Directory of Open Access Journals (Sweden)
M. Schraner
2008-10-01
We describe version 2.0 of the chemistry-climate model (CCM) SOCOL. The new version includes fundamental changes to the transport scheme, such as transporting all chemical species of the model individually and applying a family-based mass-conservation correction scheme for species of the nitrogen, chlorine and bromine groups; a revised transport scheme for ozone; more detailed halogen reaction and deposition schemes; and a new cirrus parameterisation in the tropical tropopause region. By means of these changes the model overcomes or considerably reduces deficiencies recently identified in SOCOL version 1.1 within the CCM Validation activity of SPARC (CCMVal). In particular, regional mass loss or accumulation artificially caused by the semi-Lagrangian transport scheme can be significantly reduced, leading to much more realistic distributions of the modelled chemical species, most notably of the halogens and ozone.
A multi-modal prostate segmentation scheme by combining spectral clustering and active shape models
Toth, Robert; Tiwari, Pallavi; Rosen, Mark; Kalyanpur, Arjun; Pungavkar, Sona; Madabhushi, Anant
2008-03-01
Segmentation of the prostate boundary on clinical images is useful in a large number of applications including calculating prostate volume during biopsy, tumor estimation, and treatment planning. Manual segmentation of the prostate boundary is, however, time consuming and subject to inter- and intra-reader variability. Magnetic Resonance (MR) imaging (MRI) and MR Spectroscopy (MRS) have recently emerged as promising modalities for detection of prostate cancer in vivo. In this paper we present a novel scheme for accurate and automated prostate segmentation on in vivo 1.5 Tesla multi-modal MRI studies. The segmentation algorithm comprises two steps: (1) A hierarchical unsupervised spectral clustering scheme using MRS data to isolate the region of interest (ROI) corresponding to the prostate, and (2) an Active Shape Model (ASM) segmentation scheme where the ASM is initialized within the ROI obtained in the previous step. The hierarchical MRS clustering scheme in step 1 identifies spectra corresponding to locations within the prostate in an iterative fashion by discriminating between potential prostate and non-prostate spectra in a lower dimensional embedding space. The spatial locations of the prostate spectra so identified are used as the initial ROI for the ASM. The ASM is trained by identifying user-selected landmarks on the prostate boundary on T2 MRI images. Boundary points on the prostate are identified using mutual information (MI) as opposed to the traditional Mahalanobis distance, and the trained ASM is deformed to fit the boundary points so identified. Cross validation on 150 prostate MRI slices yields an average segmentation sensitivity, specificity, overlap, and positive predictive value of 89, 86, 83, and 93% respectively. We demonstrate that the accurate initialization of the ASM via the spectral clustering scheme is necessary for automated boundary extraction. Our method is fully automated, robust to system parameters, and computationally efficient.
Hou, Chieh; Ateshian, Gerard A
2016-01-01
Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element (FE) analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation.
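The quadrature layout described above, restricted to the tension region, can be sketched as follows. The strain tensor, the uniform (unweighted) fiber distribution, and the grid sizes are illustrative assumptions; SciPy's adaptive `quad` (QUADPACK, Gauss-Kronrod based) stands in for the latitude rule, and the tension test is applied pointwise via `max(0, ...)` rather than by precomputing the tensed-region boundary:

```python
import numpy as np
from scipy.integrate import quad  # adaptive Gauss-Kronrod (QUADPACK)

# Integrate the tension-only fiber response f(n) = max(0, n.E.n) over the
# unit sphere: Gauss-Kronrod across latitudes (theta), trapezoidal rule
# across longitudes (phi). E is an illustrative uniaxial strain tensor.
E = np.diag([0.10, -0.05, -0.05])

def fiber_strain(theta, phi):
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return n @ E @ n

def ring_average(theta, nphi=256):
    # trapezoidal rule on a periodic uniform grid reduces to the mean
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    vals = np.maximum(0.0, np.array([fiber_strain(theta, p) for p in phi]))
    return vals.mean() * 2.0 * np.pi * np.sin(theta)

def tension_integral():
    val, _ = quad(ring_average, 0.0, np.pi, limit=200)
    return val
```

The adaptive latitude rule concentrates points near the kink where fibers switch between tension and compression, which is where a fixed triangulation of the sphere wastes or lacks resolution.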
Thermal Error Modeling of a Machine Tool Using Data Mining Scheme
Wang, Kun-Chieh; Tseng, Pai-Chang
In this paper the knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and the linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature rises at selected characteristic points and the thermal deformations at the spindle nose under real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neuro-fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out, and the results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. The KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
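A rough sketch of the KM and LR stages of such a pipeline on synthetic data (the rough-set reduction step is omitted, and all sensor signals, noise levels, and the deterministic farthest-pair initialization are invented for illustration, not taken from the paper):

```python
import numpy as np

# Cluster temperature sensors with K-means, keep one representative sensor
# per cluster, then fit a linear thermal-error model by least squares.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# eight synthetic sensors in two correlated groups
group_a = np.sin(2.0 * t)[:, None] + 0.01 * rng.standard_normal((200, 4))
group_b = t[:, None] ** 2 + 0.01 * rng.standard_normal((200, 4))
temps = np.hstack([group_a, group_b])          # (samples, sensors)
deform = 3.0 * np.sin(2.0 * t) + 1.5 * t ** 2  # synthetic spindle drift

def kmeans_sensors(X, k=2, iters=20):
    # farthest-pair initialization keeps this toy example deterministic
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    i0, j0 = np.unravel_index(np.argmax(d2), d2.shape)
    centers = X[[i0, j0]].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans_sensors(temps.T)      # cluster by time-series shape
reps = []
for j in range(2):
    idx = np.where(labels == j)[0]
    d = ((temps.T[idx] - centers[j]) ** 2).sum(axis=1)
    reps.append(int(idx[np.argmin(d)]))        # sensor closest to centroid
A = np.column_stack([temps[:, reps], np.ones(len(t))])
coef, *_ = np.linalg.lstsq(A, deform, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - deform) ** 2)))
```

Clustering collapses redundant, highly correlated sensors to a few representatives, so the final regression stays small and, as the abstract emphasizes, easy to inspect.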
Directory of Open Access Journals (Sweden)
Daniel Holdaway
2015-09-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have non-linear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
A coupled model tree genetic algorithm scheme for flow and water quality predictions in watersheds
Preis, Ami; Ostfeld, Avi
2008-02-01
The rapid advance in information processing systems, along with increasing data availability, has directed research towards the development of intelligent systems that evolve models of natural phenomena automatically. This is the discipline of data-driven modeling: the study of algorithms that improve automatically through experience. Applications of data-driven modeling range from data mining schemes that discover general rules in large data sets to information filtering systems that automatically learn users' interests. This study presents a data-driven modeling algorithm for flow and water quality load predictions in watersheds. The methodology comprises a coupled model tree-genetic algorithm scheme: the model tree predicts flow and water quality constituents, while the genetic algorithm is employed to calibrate the model tree parameters. The methodology is demonstrated through base runs and sensitivity analysis for daily flow and water quality load predictions on a watershed in northern Israel. The method produced close fits in most cases, but was limited in estimating the peak flows and water quality loads.
A new multi-tracer transport scheme for the dynamical core of NCAR's Community Atmosphere Model
Erath, C.
2012-04-01
The integration of a conservative semi-Lagrangian multi-tracer transport scheme (CSLAM) into NCAR's High-Order Method Modeling Environment (HOMME) is considered here. HOMME is a highly scalable atmospheric modeling framework whose current horizontal discretization relies on spectral element (SE) and/or discontinuous Galerkin (DG) methods on the cubed sphere. It is one dynamical core of NCAR's Community Atmosphere Model (CAM). The main advantage of CSLAM is that the upstream-cell (trajectory) information and the computed integral weights can be reused for each additional tracer. This makes CSLAM particularly attractive for global atmospheric modeling with a growing number of tracers, e.g. more than 100 tracers for the chemistry version of CAM. Designing an algorithm for CSLAM in HOMME suited to multiple processors and the cubed-sphere grid is a challenging task. HOMME partitions each of the six cube faces into elements; inside these elements we create an Eulerian finite-volume grid of equiangular gnomonic type, which represents the arrival grid of the scheme. CSLAM, however, relies on backward trajectories, which entail a departure grid, and the departure and arrival grids do not necessarily lie on the same element, let alone the same cube face. The reconstruction for higher-order modeling also needs a patch of tracer values extending beyond the element; here we consider a third-order reconstruction method. We therefore introduce a halo for the tracer values in the cell centers of a cube element. The size of this halo depends on the Courant number (CFL condition) and the reconstruction type; note that for a third-order scheme, communication can be limited to one exchange per time step for suitable CFL numbers. This data structure allows us to treat an element together with its halo as one task, where extra care is needed for elements sharing a cube edge because of projection and orientation issues. We stress that the reconstruction coefficients for elements
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MATLAB functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations allow fast comparison of first-order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher-order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt or Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
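The core computation, solving dx/dt = Kx analytically through an eigendecomposition of the rate matrix, can be sketched as follows (the dictionary-based scheme entry and the two-state example are illustrative conventions of this sketch, not VisKin's actual interface):

```python
import numpy as np

# Build the rate matrix K for a first-order scheme given as a dict
# {(i, j): k} meaning transition i -> j with rate constant k, then solve
# dx/dt = K x analytically via the eigendecomposition of K.
def rate_matrix(scheme, nstates):
    K = np.zeros((nstates, nstates))
    for (i, j), k in scheme.items():
        K[j, i] += k   # gain of state j from state i
        K[i, i] -= k   # loss of state i
    return K

def propagate(K, x0, t):
    w, V = np.linalg.eig(K)
    c = np.linalg.solve(V, x0)                 # expansion coefficients of x0
    return np.real((V * np.exp(w * t)) @ c)    # x(t) = V e^{wt} V^{-1} x0

# two-state scheme A <-> B with forward rate 2 and reverse rate 1
K = rate_matrix({(0, 1): 2.0, (1, 0): 1.0}, nstates=2)
x = propagate(K, np.array([1.0, 0.0]), t=100.0)  # approaches [1/3, 2/3]
```

The eigenvalues of K are exactly the observable relaxation rates, which is why the analytic route gives the fast rate/amplitude comparisons described above; columns of K sum to zero, so total concentration is conserved.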
Distortion Modeling and Error Robust Coding Scheme for H.26L Video
Institute of Scientific and Technical Information of China (English)
CHEN Chuan; YU Songyu; CHENG Lianji
2004-01-01
Transmission of hybrid-coded video, including motion compensation and spatial prediction, over error-prone channels results in the well-known problem of error propagation, because of the drift in reference frames between encoder and decoder. The prediction loop propagates errors and causes substantial degradation in video quality. In H.26L video especially, both intra and inter prediction strategies are used to improve compression efficiency; however, they make error propagation more serious. This work proposes distortion models for H.26L video to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. Based on these statistical distortion models, our error-robust coding scheme integrates only the distinct distortion between intra and inter macroblocks into a rate-distortion based framework to select a suitable coding mode for each macroblock, so the cost in computational complexity is modest. Simulations under typical 3GPP/3GPP2 channel and Internet channel conditions have shown that our proposed scheme achieves much better performance than those currently used in H.26L. The error propagation estimation and its effect at fractional pixel-level prediction have also been tested. All the results demonstrate that our proposed scheme achieves a good balance between compression efficiency and error robustness for H.26L video, at the cost of modest additional complexity.
Institute of Scientific and Technical Information of China (English)
ZHU Shouxian; ZHANG Wenjing
2008-01-01
Much has been written about the error in computing the baroclinic pressure gradient (BPG) with sigma coordinates in ocean or atmospheric numerical models. The usual way to reduce the error is to subtract the area-averaged density stratification of the whole computation region. But if there is a great difference between the area-averaged and the locally averaged density stratification, the error remains significant. An example is given to show that the error from this method may sometimes be larger than that with no correction at all. The concept of a local area is put forward, and four improved BPG difference schemes that subtract the locally averaged density stratification are designed to reduce the error. Two of them are for diagnostic calculation (the density field is fixed), and the other two are for prognostic calculation (the density field is not fixed). The results show that the errors from all these schemes decrease significantly.
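The effect being corrected can be demonstrated with a toy sigma-coordinate column model. Density here depends on z only, so the true BPG is exactly zero and whatever the discrete scheme produces is pure error; the topography, stratification, and grid sizes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sigma-coordinate BPG error demo: rho = rho(z) only => true BPG = 0.
g = 9.81
nx, ns, dx = 20, 10, 1000.0
H = 100.0 + 80.0 * np.sin(np.linspace(0.0, np.pi, nx))  # variable depth [m]
sig = (np.arange(ns) + 0.5) / ns                        # sigma mid-levels
z = -np.outer(H, sig)                                   # z(i, k) < 0
rho = 1025.0 + 2e-4 * z ** 2                            # stratification

def bpg_error(subtract_mean):
    r = rho.copy()
    if subtract_mean:
        # build the averaged stratification as a profile in z and subtract
        # its value interpolated to the local depth of every grid point
        order = np.argsort(z, axis=None)
        zp, rp = z.ravel()[order], rho.ravel()[order]
        r = r - np.interp(z, zp, rp)
    dzc = (H / ns)[:, None]
    p = g * dzc * (np.cumsum(r, axis=1) - 0.5 * r)      # hydrostatic pressure
    ravg = 0.5 * (r[1:] + r[:-1])
    # standard two-term sigma-coordinate pressure gradient at fixed z
    bpg = (p[1:] - p[:-1]) / dx + g * ravg * (z[1:] - z[:-1]) / dx
    return float(np.abs(bpg).max())

err_raw, err_corr = bpg_error(False), bpg_error(True)
```

With the reference profile removed, the two large compensating terms act only on the (here nearly zero) perturbation density, so the error collapses. A strong horizontal density contrast would make the area-averaged profile differ from the local one, which is exactly the failure mode the paper's local-area schemes address.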
Directory of Open Access Journals (Sweden)
Mohammad Iranmanesh
2014-12-01
Many standard brands sell products under the volume discount scheme (VDS), and more and more consumers are fond of purchasing products under this scheme. Although volume discounts are commonly practiced, there is a dearth of research, both conceptual and empirical, on the purchase characteristics factors and the consumer internal evaluation involved in purchasing products under VDS. To attempt to fill this void, this article develops a conceptual model of VDS with the intention of delineating the influence of purchase characteristics factors on consumer intention to purchase products under VDS, and explains their effects through consumer internal evaluation. Finally, the authors discuss the managerial implications of their research and offer guidelines for future empirical research.
Convergent and Correct Message Passing Schemes for Optimization Problems over Graphical Models
Ruozzi, Nicholas
2010-01-01
The max-product algorithm, which attempts to compute the most probable assignment (MAP) of a given probability distribution, has recently found applications in quadratic minimization and combinatorial optimization. Unfortunately, the max-product algorithm is not guaranteed to converge and, even if it does, is not guaranteed to produce the MAP assignment. In this work, we provide a simple derivation of a new family of message passing algorithms. We first show how to arrive at this general message passing scheme by "splitting" the factors of our graphical model and then we demonstrate that this construction can be extended beyond integral splitting. We prove that, for any objective function which attains its maximum value over its domain, this new family of message passing algorithms always contains a message passing scheme that guarantees correctness upon convergence to a unique estimate. We then adopt a serial message passing schedule and prove that, under mild assumptions, such a schedule guarantees the conv...
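On tree-structured models, plain max-product is already exact; the convergence and correctness issues above arise on loopy graphs. A chain therefore makes a convenient sanity check for any message passing variant (all potentials below are arbitrary illustrative numbers):

```python
import itertools

# Max-sum (max-product in log space) MAP inference on a binary chain MRF
# with unary potentials u[i][x] and pairwise potentials w[i][x][y] linking
# variables i and i+1. Exact on chains/trees.
u = [[0.0, 1.2], [0.5, 0.1], [0.3, 0.45], [1.0, 0.05]]
w = [[[1.0, 0.1], [0.05, 1.0]],
     [[0.2, 0.9], [0.85, 0.2]],
     [[1.0, 0.0], [0.1, 0.95]]]

def map_chain(u, w):
    n = len(u)
    m = [[0.0, 0.0] for _ in range(n)]   # forward max-messages
    back = [[0, 0] for _ in range(n)]    # argmax backpointers
    for i in range(1, n):
        for x in range(2):
            scores = [m[i - 1][y] + u[i - 1][y] + w[i - 1][y][x]
                      for y in range(2)]
            m[i][x] = max(scores)
            back[i][x] = scores.index(m[i][x])
    xs = [0] * n
    xs[-1] = max(range(2), key=lambda x: m[-1][x] + u[-1][x])
    for i in range(n - 1, 0, -1):
        xs[i - 1] = back[i][xs[i]]
    return xs

def brute_force(u, w):
    def score(xs):
        s = sum(u[i][xs[i]] for i in range(len(u)))
        return s + sum(w[i][xs[i]][xs[i + 1]] for i in range(len(w)))
    return max(itertools.product(range(2), repeat=len(u)), key=score)
```

The factor-splitting construction in the paper modifies how potentials are shared between messages; the chain version above is the unsplit baseline such a variant must still reproduce on trees.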
Generalization of the Event-Based Carnevale-Hines Integration Scheme for Integrate-and-Fire Models
van Elburg, Ronald A. J.; van Ooyen, Arjen
2009-01-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on th
Cirrus clouds in a global climate model with a statistical cirrus cloud scheme
Directory of Open Access Journals (Sweden)
M. Wang
2010-06-01
A statistical cirrus cloud scheme that accounts for mesoscale temperature perturbations is implemented in a coupled aerosol and atmospheric circulation model to better represent both subgrid-scale supersaturation and cloud formation. This new scheme treats the effects of aerosol on cloud formation and ice freezing in an improved manner, and both homogeneous freezing and heterogeneous freezing are included. The scheme is able to better simulate the observed probability distribution of relative humidity compared to the scheme that was implemented in an older version of the model. Heterogeneous ice nuclei (IN) are shown to decrease the frequency of occurrence of supersaturation, and improve the comparison with observations at 192 hPa. Homogeneous freezing alone can not reproduce observed ice crystal number concentrations at low temperatures (<205 K), but the addition of heterogeneous IN improves the comparison somewhat. Increases in heterogeneous IN affect both high level cirrus clouds and low level liquid clouds. Increases in cirrus clouds lead to a more cloudy and moist lower troposphere with less precipitation, effects which we associate with the decreased convective activity. The change in the net cloud forcing is not very sensitive to the change in ice crystal concentrations, but the change in the net radiative flux at the top of the atmosphere is still large because of changes in water vapor. Changes in the magnitude of the assumed mesoscale temperature perturbations by 25% alter the ice crystal number concentrations and the net radiative fluxes by an amount that is comparable to that from a factor of 10 change in the heterogeneous IN number concentrations. Further improvements on the representation of mesoscale temperature perturbations, heterogeneous IN and the competition between homogeneous freezing and heterogeneous freezing are needed.
An all-encompassing study of an authentic court setting
DEFF Research Database (Denmark)
Christensen, Tina Paulsen
Most professional interpreters and interpreting researchers probably see quality or "professionalism" as the main goal of interpreting in general, but still there is no agreement within the interpreting community on how to define interpreting quality. Facing the fact that interpreting can...... necessarily be judged from a particular (subjective) perspective on the communicative event. In this paper I shall address the issue of interpreting quality in an all-encompassing perspective on an authentic Danish courtroom setting. The aim of the empirical case-based survey is unlike that of most existing...... studies which generally have taken either one particular perspective - that of interpreters, clients or users - or been experimental in nature - to investigate to what extent different users (judge, defence counsel, prosecutor and non-majority-language speaking user) in a specific courtroom setting...
An all-encompassing study of an authentic court setting
DEFF Research Database (Denmark)
Christensen, Tina Paulsen
Most professional interpreters and interpreting researchers probably see quality or "professionalism" as the main goal of interpreting in general, but still there is no agreement within the interpreting community on how to define interpreting quality. Facing the fact that interpreting can...... not only be seen as a text-processing task, this study will focus on interpreting as a process of communicative interaction where quality means successful communication in a particular communicative situation. The consequence of focusing on interpreting as a service is that the degree of success must...... necessarily be judged from a particular (subjective) perspective on the communicative event. In this paper I shall address the issue of interpreting quality in an all-encompassing perspective on an authentic Danish courtroom setting. The aim of the empirical case-based survey is unlike that of most existing...
Robust DTC-SVM Method for Matrix Converter Drives with Model Reference Adaptive Control Scheme
DEFF Research Database (Denmark)
Lee, Kyo Beum; Huh, Sunghoi; Sim, Kyung-Hun
2007-01-01
This paper presents a new robust DTC-SVM control system for high-performance induction motor drives fed by a matrix converter with a variable-structure model reference adaptive control scheme (VS-MRAC). It is possible to combine the advantages of matrix converters with the advantages of the DTC strategy, using space vector modulations and a deadbeat algorithm in the stator flux reference frame. The lumped disturbances such as parameter variation and load disturbance of the system are estimated by a neuro-sliding mode approach based on model reference adaptive control (MRAC). An adaptive observer...
Turing patterns in a reaction-diffusion model with the Degn-Harrison reaction scheme
Li, Shanbing; Wu, Jianhua; Dong, Yaying
2015-09-01
In this paper, we consider a reaction-diffusion model with the Degn-Harrison reaction scheme. Some fundamental analytic properties of nonconstant positive solutions are first investigated. We next study the stability of the constant steady-state solution for both the ODE and PDE models. Our results also indicate that if either the size of the reactor or the effective diffusion rate is large enough, then the system does not admit nonconstant positive solutions. Finally, we establish the global structure of steady-state bifurcations from simple eigenvalues by bifurcation theory, and the local structure of steady-state bifurcations from double eigenvalues by the techniques of space decomposition and the implicit function theorem.
Wang, Jun; Cieplak, Piotr; Cai, Qin; Hsieh, Meng-Juei; Wang, Junmei; Duan, Yong; Luo, Ray
2012-07-19
As an integrated step toward a coherent polarizable force field for biomolecular modeling, we analyzed four polarizable water models to evaluate their consistencies with the Thole polarization screening schemes utilized in our latest Amber polarizable force field. Specifically, we studied the performance of both the Thole linear and exponential schemes in these water models to assess their abilities to reproduce experimental water properties. The analysis shows that the tested water models reproduce most of the room-temperature properties of liquid water reasonably well but fall short of reproducing the dynamic properties and temperature-dependent properties. This study demonstrates the necessity to further fine-tune water polarizable potentials for more robust polarizable force fields for biomolecular simulations.
Comparison of Aircraft Models and Integration Schemes for Interval Management in the TRACON
Neogi, Natasha; Hagen, George E.; Herencia-Zapana, Heber
2012-01-01
Reusable models of common elements for communication, computation, decision and control in air traffic management are necessary in order to enable simulation, analysis and assurance of emergent properties, such as safety and stability, for a given operational concept. Uncertainties due to faults, such as dropped messages, along with non-linearities and sensor noise are an integral part of these models, and impact emergent system behavior. Flight control algorithms designed using a linearized version of the flight mechanics will exhibit error due to model uncertainty, and may not be stable outside a neighborhood of the given point of linearization. Moreover, the communication mechanism by which the sensed state of an aircraft is fed back to a flight control system (such as an ADS-B message) impacts the overall system behavior, both through sensor noise and through dropped messages (vacant samples). Additionally, simulation of the flight control system can exhibit further numerical instability, due to the selection of the integration scheme and the approximations made in the flight dynamics. We examine the theoretical and numerical stability of a speed controller under the Euler and Runge-Kutta schemes of integration, for the Maintain phase of a Mid-Term (2035-2045) Interval Management (IM) Operational Concept for descent and landing operations. We model uncertainties in communication due to missed ADS-B messages by vacant samples in the integration schemes, and compare the emergent behavior of the system, in terms of stability, via the boundedness of the final system state. Any bound on the errors incurred by these uncertainties will play an essential part in a composable assurance argument required for real-time, flight-deck guidance and control systems. Thus, we believe that the creation of reusable models, which possess property guarantees, such as safety and stability, is an innovative and essential requirement for assessing the emergent properties of novel airspace
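A toy version of the comparison can be sketched as follows: a first-order speed response with a proportional command law, where dropped ADS-B messages ("vacant samples") hold the previous measurement. The dynamics, gains, and drop model are illustrative assumptions, not the paper's aircraft model:

```python
import random

# Toy interval-management speed loop: aircraft speed response v' = (u - v)/tau
# driven by a proportional command computed from the last received ADS-B
# sample; with probability drop_prob the sample is vacant and held.
def simulate(method, h, steps, drop_prob, seed=0):
    random.seed(seed)
    tau, kp, v_cmd = 8.0, 0.5, 230.0   # response time [s], gain, target [kt]
    v = v_meas = 200.0
    f = lambda v, u: (u - v) / tau
    for _ in range(steps):
        if random.random() >= drop_prob:   # ADS-B message received
            v_meas = v                     # else: vacant sample, hold value
        u = v_cmd + kp * (v_cmd - v_meas)
        if method == "euler":
            v += h * f(v, u)
        else:                              # classic RK4, u frozen over step
            k1 = f(v, u)
            k2 = f(v + 0.5 * h * k1, u)
            k3 = f(v + 0.5 * h * k2, u)
            k4 = f(v + h * k3, u)
            v += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return v
```

For this linear toy loop, forward Euler is stable only while h(1 + kp)/tau < 2, whereas RK4's larger stability region (roughly up to 2.78 on the negative real axis) keeps the bounded-state comparison meaningful at step sizes where Euler already diverges.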
A regional hybrid GSI/ETKF data assimilation scheme for the WRF/ARW model
Mizzi, A. P.
2011-12-01
Recently, there has been increased interest in hybrid variational data assimilation due to its ability to improve numerical weather forecast accuracy by incorporating ensemble error information into the data assimilation process (Buehner, 2010a, b; Wang, 2010). In this paper, we introduce a GSI/ETKF regional hybrid (Mizzi, 2011). The GSI/ETKF regional hybrid uses a modified version of NOAA/EMC's GSI global hybrid (Wang, 2010) for the ensemble mean analysis and an ETKF (Bishop et al., 2001) to update the ensemble perturbations. We tested the GSI/ETKF regional hybrid by applying it to cycling experiments with WRF/ARW on a coarse-resolution domain covering the continental United States (CONUS) that: (i) compared different ETKF schemes, and (ii) reduced and held the number of ETKF observations constant. The results from those experiments showed that: (i) the ETKF scheme requiring the least amount of inflation provided the lowest 12-hr forecast RMSEs, (ii) holding the number of ETKF observations constant removed the oscillation in the posterior ETKF ensemble spread noted by Bowler et al. (2008), and (iii) reducing the number of ETKF observations lowered the 12-hr forecast RMSEs. Presently, we are extending this work to a comparison of the GSI/ETKF regional hybrid with a GSI/LETKF regional hybrid based on the LETKF of Ott et al. (2004) and a GSI/EnKF regional hybrid based on the DART EnKF (Anderson et al., 2009). Generally, the GSI/LETKF and GSI/EnKF schemes require less ensemble spread inflation than the GSI/ETKF scheme. Consequently, we expect them to provide lower 12-hr forecast RMSEs than the GSI/ETKF results. Our preliminary results are consistent with that supposition.
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy of the scheme is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
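The hyperbolic/relaxation splitting described above can be illustrated on a scalar model equation. This sketch deliberately replaces the full hydrodynamical system and the KFVS flux construction with a plain first-order upwind flux; only the structure of the split (explicit transport step, implicit relaxation step, stable for stiff relaxation times) is shown, and all parameter values are invented.

```python
import numpy as np

def split_step(u, a, dx, dt, tau, u_eq):
    """One splitting step for u_t + a u_x = -(u - u_eq)/tau, periodic domain."""
    # Hyperbolic step: first-order upwind for a > 0.
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # Relaxation step, solved implicitly: (u_new - u)/dt = -(u_new - u_eq)/tau.
    return (u + dt / tau * u_eq) / (1.0 + dt / tau)

n, a, dx, dt, tau = 100, 1.0, 0.1, 0.05, 1e-3   # note dt >> tau (stiff source)
u_eq = 2.0
u = np.zeros(n)
for _ in range(50):
    u = split_step(u, a, dx, dt, tau, u_eq)
# The implicit relaxation drives the solution to equilibrium without requiring
# dt to resolve tau, which is the point of the semi-implicit treatment.
assert np.allclose(u, u_eq, atol=1e-6)
```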
A mixed finite element scheme for viscoelastic flows with XPP model
Institute of Scientific and Technical Information of China (English)
Xianhong Han; Xikui Li
2008-01-01
A mixed finite element formulation for viscoelastic flows is derived in this paper, in which the FIC (finite incremental calculus) pressure stabilization process and the DEVSS (discrete elastic viscous stress splitting) method using the Crank-Nicolson-based split are introduced within a general framework of the iterative version of the fractional step algorithm. The SU (streamline-upwind) method is particularly chosen to tackle the convective terms in the constitutive equations of viscoelastic flows. Thanks to the proposed scheme, finite elements with equal low-order interpolation approximations for the stress-velocity-pressure variables can be successfully used even for viscoelastic flows with high Weissenberg numbers. The XPP (extended Pom-Pom) constitutive model for describing viscoelastic behaviors is particularly integrated into the proposed scheme. The numerical results for the 4:1 sudden contraction flow problem demonstrate prominent stability, accuracy and convergence rate of the proposed scheme in both pressure and stress distributions over the flow domain within a wide range of the Weissenberg number, particularly the capability in reproducing the results, which can be used to explain the "die swell" phenomenon observed in the polymer injection molding process.
Liu, Zhe; Lin, Lei; Xie, Lian; Gao, Huiwang
2016-10-01
To improve the efficiency of the terrain-following σ-coordinate non-hydrostatic ocean model, a partially implicit finite difference (PIFD) scheme is proposed. By using explicit terms instead of implicit terms to discretize the parts of the vertical dynamic pressure gradient derived from the σ-coordinate transformation, the coefficient matrix of the discrete Poisson equation that the dynamic pressure satisfies can be simplified from 15 diagonals to 7 diagonals. The PIFD scheme is shown to run stably when it is applied to simulate five benchmark cases, namely, a standing wave in a basin, a surface solitary wave, a lock-exchange problem, a periodic wave over a bar and a tidally induced internal wave. Compared with the conventional fully implicit finite difference (FIFD) scheme, the PIFD scheme produces simulation results of equivalent accuracy at only 40-60% of the computational cost. The PIFD scheme demonstrates strong applicability and can be easily implemented in σ-coordinate ocean models.
Energy Technology Data Exchange (ETDEWEB)
Zubov, V.A.; Rozanov, E.V. [Main Geophysical Observatory, St.Petersburg (Russian Federation); Schlesinger, M.E.; Andronova, N.G. [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Atmospheric Sciences
1997-12-31
The problems of ozone depletion, climate change and atmospheric pollution strongly depend on the processes of production, destruction and transport of chemical species. A hybrid transport scheme was developed, consisting of a semi-Lagrangian scheme for horizontal advection and the Prather scheme for vertical transport, which has been used in the Atmospheric Chemical Transport model to calculate the distributions of different chemical species. The performance of the new hybrid scheme has been evaluated in comparison with other transport schemes on the basis of specially designed tests. The seasonal cycle of the distribution of N2O simulated by the model, as well as the dispersion of NOx exhausted from subsonic aircraft, are in good agreement with published data.
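The semi-Lagrangian idea behind the horizontal part of the hybrid scheme can be sketched in one dimension: trace each grid point back along the wind and interpolate the field at the departure point. The grid, wind speed, and tracer profile below are toy choices; the actual scheme and the Prather vertical transport are not reproduced.

```python
import numpy as np

def semi_lagrangian_step(q, u, dx, dt):
    """Advect tracer q by constant wind u on a periodic 1-D grid."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)           # departure points
    i = np.floor(x_dep / dx).astype(int)
    w = x_dep / dx - i                        # linear-interpolation weight
    return (1.0 - w) * q[i] + w * q[(i + 1) % n]

n, dx, u = 64, 1.0, 1.0
q = np.exp(-0.5 * ((np.arange(n) - 20.0) / 4.0) ** 2)  # Gaussian tracer blob
dt = 2.5                                      # CFL > 1 is fine: unconditionally stable
q1 = semi_lagrangian_step(q, u, dx, dt)
# The blob has moved downstream by u*dt = 2.5 grid lengths.
assert abs(np.argmax(q1) - 22) <= 1
```

Unconditional stability with respect to the wind CFL number is the usual motivation for semi-Lagrangian advection in transport models.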
Energy Technology Data Exchange (ETDEWEB)
Mengelkamp, H.T.; Warrach, K.; Raschke, E. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik
1997-12-31
A soil-vegetation-atmosphere-transfer scheme is presented here which solves the coupled system of the Surface Energy and Water Balance (SEWAB) equations considering partly vegetated surfaces. It is based on the one-layer concept for vegetation. In the soil, the diffusion equations for heat and moisture are solved on a multi-layer grid. SEWAB has been developed to serve as a land-surface scheme for atmospheric circulation models. Forced with atmospheric data from either simulations or measurements, it calculates surface and subsurface runoff that can serve as input to hydrologic models. The model has been validated with field data from the FIFE experiment and has participated in the PILPS project for the intercomparison of land-surface parameterization schemes. From these experiments we feel that SEWAB reasonably well partitions the radiation and precipitation into sensible and latent heat fluxes as well as into runoff and soil moisture storage.
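The multi-layer soil part of such a scheme reduces, for heat, to the 1-D diffusion equation T_t = kappa T_zz on a layered grid. A minimal explicit sketch follows; the layer count, diffusivity, and boundary temperatures are illustrative and are not SEWAB's actual configuration.

```python
import numpy as np

def diffuse_step(T, kappa, dz, dt):
    """One explicit step of 1-D heat diffusion; T[0] and T[-1] are held fixed."""
    Tn = T.copy()
    Tn[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

n, dz, kappa = 11, 0.1, 1e-3
dt = 0.4 * dz**2 / kappa                      # within the explicit stability limit
T = np.full(n, 280.0)
T[0] = 290.0                                  # warm surface over a 280 K deep soil
for _ in range(5000):
    T = diffuse_step(T, kappa, dz, dt)
# Pure diffusion with fixed end temperatures relaxes to a linear profile.
assert np.allclose(T, np.linspace(290.0, 280.0, n), atol=1e-3)
```

Operational land-surface schemes normally use an implicit discretisation instead, precisely to escape the dt <= dz^2/(2 kappa) restriction shown here.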
Gust response modeling and alleviation scheme design for an elastic aircraft
Institute of Scientific and Technical Information of China (English)
[No author listed]
2010-01-01
Time-domain approaches are presented for analysis of the dynamic response of aeroservoelastic systems to atmospheric gust excitations. The continuous and discrete gust inputs are defined in the time domain. The time-domain approach to continuous gust response uses a state-space formulation that requires the frequency-dependent aerodynamic coefficients to be approximated with rational functions of a Laplace variable. A hybrid method which combines the Fourier transform and time-domain approaches is used to calculate the discrete gust response. The purpose of this approach is to obtain a time-domain state-space model without using a rational function approximation of the gust columns. Three control schemes are designed for gust alleviation on an elastic aircraft, using three control surfaces: aileron, elevator and spoiler. The signals from the pitch-rate gyroscope or angle-of-attack sensor are sent to the elevator, while the signals from accelerometers at the wing tip and at the center of gravity of the aircraft are sent to the aileron and spoiler, respectively. All the control laws are based on classical control theory. The results show that the acceleration at the center of gravity of the aircraft and the bending moment at the wing-root section are mainly excited by rigid modes of the aircraft, while the accelerations at the wing tip are mainly excited by elastic modes of the aircraft. All three control schemes can be used to alleviate the wing-root moments and the accelerations. The gust response can be alleviated using control scheme 3, in which the spoiler is used as a control surface, but the effects are not as good as those of control schemes 1 and 2.
Impact of an improved shortwave radiation scheme in the MAECHAM5 General Circulation Model
Directory of Open Access Journals (Sweden)
J. J. Morcrette
2007-05-01
In order to improve the representation of ozone absorption in the stratosphere of the MAECHAM5 general circulation model, the spectral resolution of the shortwave radiation parameterization used in the model has been increased from 4 to 6 bands. Two 20-year simulations with the general circulation model have been performed, one with the standard and the other with the newly introduced parameterization, to evaluate the temperature and dynamical changes arising from the two different representations of shortwave radiative transfer. In the simulation with the increased spectral resolution in the radiation parameterization, a significant warming of almost the entire model domain is reported. At the summer stratopause the temperature increase is about 6 K and alleviates the cold bias present in the model when the standard radiation scheme is used. These general circulation model results are consistent both with previous validation of the radiation scheme and with the offline clear-sky comparison performed in the current work with a discrete-ordinate 4-stream scattering line-by-line radiative transfer model. The offline validation shows a substantial reduction of the daily averaged shortwave heating rate bias (1–2 K/day cooling) that occurs for the standard radiation parameterization in the upper stratosphere, present under a range of atmospheric conditions. Therefore, the 6-band shortwave radiation parameterization is considered to be better suited for the representation of ozone absorption in the stratosphere than the 4-band parameterization. Concerning the dynamical response in the general circulation model, it is found that the reported warming at the summer stratopause induces stronger zonal mean zonal winds in the middle atmosphere. These stronger zonal mean zonal winds thereafter appear to produce a dynamical feedback that results in a dynamical warming (cooling) of the polar winter (summer) mesosphere, caused by an
INFERNO: a fire and emissions scheme for the UK Met Office's Unified Model
Mangeon, Stéphane; Voulgarakis, Apostolos; Gilham, Richard; Harper, Anna; Sitch, Stephen; Folberth, Gerd
2016-08-01
Warm and dry climatological conditions favour the occurrence of forest fires. These fires then become a significant emission source to the atmosphere. Despite this global importance, fires are a local phenomenon and are difficult to represent in large-scale Earth system models (ESMs). To address this, the INteractive Fire and Emission algoRithm for Natural envirOnments (INFERNO) was developed. INFERNO follows a reduced complexity approach and is intended for decadal- to centennial-scale climate simulations and assessment models for policy making. Fuel flammability is simulated using temperature, relative humidity (RH) and fuel load as well as precipitation and soil moisture. Combining flammability with ignitions and vegetation, the burnt area is diagnosed. Emissions of carbon and key species are estimated using the carbon scheme in the Joint UK Land Environment Simulator (JULES) land surface model. JULES also possesses fire index diagnostics, which we document and compare with our fire scheme. We found INFERNO captured global burnt area variability better than individual indices, and these performed best for their native regions. Two meteorology data sets and three ignition modes are used to validate the model. INFERNO is shown to effectively diagnose global fire occurrence (R = 0.66) and emissions (R = 0.59) through an approach appropriate to the complexity of an ESM, although regional biases remain.
A design of mathematical modelling for the mudharabah scheme in shariah insurance
Cahyandari, R.; Mayaningsih, D.; Sukono
2017-01-01
The Indonesian Shariah Insurance Association (AASI) believes that 2014 was the year of Indonesian shariah insurance, since its growth was above that of conventional insurance. In December 2013, 43% growth was recorded for shariah insurance, while conventional insurance hit only 20%. This means that shariah insurance has tremendous potential to keep growing in the future. In addition, this growth can be predicted from the number of conventional insurance companies that open a shariah division, along with the development of Islamic banking, which automatically demands a role for shariah insurance in protecting assets and banking transactions. The development of shariah insurance should be accompanied by the development of premium fund management mechanisms, in order to create innovation in shariah insurance products which benefit society. The development of premium fund management models shows positive progress through the emergence of the Mudharabah, Wakala, Hybrid (Mudharabah-Wakala), and Wakala-Waqf concepts. However, the term 'model' in this paper refers to an operational model in the form of a scheme of the management mechanism. Therefore, this paper describes a mathematical model for the premium fund management scheme, especially for the Mudharabah concept. Mathematical modeling is required for an analysis process that can be used to predict risks that could be faced by a company in the future, so that the company can take precautionary policies to minimize those risks.
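The core arithmetic of a mudharabah arrangement is a profit split at an agreed ratio (nisbah) between participants and the operator. The sketch below is a generic illustration of that split only; the 70/30 ratio, the return rate, and the premium pool are invented numbers, not figures from the paper's model.

```python
def mudharabah_split(premium_pool, return_rate, participant_share):
    """Split investment profit between participants and the operator.

    participant_share is the participants' fraction of profit (the nisbah);
    losses on capital, fees, and tabarru' funds are ignored in this sketch.
    """
    profit = premium_pool * return_rate
    to_participants = profit * participant_share
    to_operator = profit - to_participants
    return to_participants, to_operator

p, o = mudharabah_split(premium_pool=1_000_000, return_rate=0.08,
                        participant_share=0.70)
assert (p, o) == (56_000.0, 24_000.0)         # 80,000 profit split 70/30
```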
Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods
Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard
2013-04-01
Thermal and chemical evolution of Earth's deep mantle can be studied by modeling vigorous convection in a chemically heterogeneous fluid. Numerical modeling of such a system poses several computational challenges. Dominance of heat advection over diffusive heat transport, and a negligible amount of chemical diffusion, result in sharp gradients of the thermal and chemical fields. The exponential dependence of the viscosity of mantle materials on temperature also leads to high gradients of the velocity field. The accuracy of many numerical advection schemes degrades quickly with increasing gradient of the solution, while the computational effort, in terms of scheme complexity and required resolution, grows. Additional numerical challenges arise due to the large range of length scales characteristic of a thermochemical convection system with highly variable viscosity. To exemplify, the thickness of the stem of a rising thermal plume may be a few percent of the mantle thickness. An even thinner filament of anomalous material entrained by that plume may constitute less than a tenth of a percent of the mantle thickness. We have developed a two-dimensional FEM code to model thermochemical convection in a hollow cylinder domain, with a depth- and temperature-dependent viscosity representative of the mantle (Steinberger and Calderwood, 2006). We use the marker-in-cell method for advection of the chemical and thermal fields. The main advantage of performing advection using markers is the absence of numerical diffusion during the advection step, as opposed to the more diffusive field methods. However, in the common implementation of marker methods, the solution of the momentum and energy equations takes place on a computational grid, and nodes do not generally coincide with the positions of the markers. Transferring velocity, temperature, and chemistry information between nodes and markers introduces errors inherent to inter- and extrapolation. In the numerical scheme
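The marker advection step itself is simple to sketch: markers carry their fields and are moved through a velocity field with, say, a second-order midpoint step, so no numerical diffusion acts on the advected quantities. The solid-body rotation field below is a standard test case, not the paper's mantle-flow solution, and the node-marker interpolation that the abstract discusses is not shown.

```python
import numpy as np

def advect_markers(x, y, dt, n_steps):
    """Move markers with midpoint (RK2) steps through a prescribed velocity."""
    def vel(x, y):                             # solid-body rotation about origin
        return -y, x
    for _ in range(n_steps):
        u, v = vel(x, y)
        xm, ym = x + 0.5 * dt * u, y + 0.5 * dt * v   # midpoint stage
        um, vm = vel(xm, ym)
        x, y = x + dt * um, y + dt * vm
    return x, y

x0, y0 = np.array([1.0]), np.array([0.0])
n = 1000
x1, y1 = advect_markers(x0, y0, 2 * np.pi / n, n)     # one full revolution
# The marker returns to its start to high accuracy: the field it carries is
# transported without any grid-induced smearing.
assert np.allclose([x1[0], y1[0]], [1.0, 0.0], atol=1e-3)
```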
Modeling the pairwise key distribution scheme in the presence of unreliable links
Yagan, Osman
2011-01-01
We investigate the secure connectivity of wireless sensor networks under the pairwise key distribution scheme of Chan et al. Unlike recent work which was carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as on/off channels. We present conditions on how to scale the model parameters so that the network i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of sensor nodes becomes large. The results are given in the form of zero-one laws, and exhibit significant differences with corresponding results in the full visibility case. Through simulations these zero-one laws are shown to be valid also under a more realistic communication model, i.e., the disk model.
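The "no isolated secure node" property can be probed by Monte Carlo simulation. The sketch below abstracts the pairwise key scheme as an Erdos-Renyi-style graph thinned by an on/off channel probability alpha; the node count, the threshold scaling, and alpha = 1 are illustrative choices, not the paper's exact model.

```python
import numpy as np

def no_isolated_node(n, p_key, alpha, rng):
    """True if every node has at least one secure link that is also 'on'."""
    adj = rng.random((n, n)) < p_key * alpha   # key shared AND channel on
    adj = np.triu(adj, 1)
    adj = adj | adj.T                          # undirected adjacency matrix
    return bool(adj.any(axis=1).all())

rng = np.random.default_rng(42)
n = 200
p_high = 2.0 * np.log(n) / n                   # above the log(n)/n threshold
trials = [no_isolated_node(n, p_high, 1.0, rng) for _ in range(20)]
# Above the zero-one law threshold, isolated nodes are rare events.
assert sum(trials) >= 15
```

Rerunning with p_key well below log(n)/n flips the outcome, which is the qualitative content of a zero-one law.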
Arctic clouds in the ECMWF forecast model: an evaluation of cloud parameterization schemes
Sotiropoulou, Georgia; Sedlar, Joseph; Forbes, Richard; Tjernström, Michael
2016-04-01
The Arctic is experiencing significant changes and is an important part of the global climate, which needs to be understood and accurately represented in both climate and weather prediction models. Mixed-phase clouds are an integral part of the Arctic system, for precipitation and for their interactions with radiation and the local thermodynamics. Mixed-phase processes are often poorly represented in global models and many use an empirically based diagnostic partition between the liquid and ice phase that is dependent solely on temperature. However, increasingly more complex microphysical parameterizations are being implemented allowing a more physical representation of mixed-phase clouds. This study uses in situ observations from the ASCOS campaign in the central Arctic to evaluate the impact of a change from a diagnostic to a prognostic parameterization of mixed-phase clouds and increased vertical resolution in the ECMWF Integrated Forecast System (IFS). The newer cloud scheme improves the representation of the vertical structure of mixed-phase clouds, with supercooled liquid water at cloud top and ice precipitating below, improved further with higher vertical resolution. Increased supercooled liquid water and decreased ice content are both in closer agreement with observations. However, these changes do not result in any substantial improvement in surface radiation and there remains a warm and moist bias in the lowest part of the atmosphere. Both schemes also fail to capture the transitions from overcast to cloud-free conditions. Moreover, whereas the observed cloud layer is frequently decoupled from the surface, in the model the cloud remains coupled to the surface most of the time. The changes to the cloud scheme are an important step forward in improving the representation of Arctic clouds, but improvements in other aspects such as boundary layer turbulence, cloud radiative properties, sensitivity to low aerosol concentrations and representation of the sea
Summer Arctic Clouds in the ECMWF Forecast Model: an Evaluation of Cloud Parameterization Schemes
Sotiropoulou, G.; Sedlar, J.; Forbes, R.; Tjernstrom, M. K. H.
2015-12-01
The Arctic is experiencing significant changes and is an important part of the global climate, which needs to be understood and accurately represented in both climate and weather prediction models. Mixed-phase clouds are an integral part of the Arctic system, for precipitation and for their interactions with radiation and the local thermodynamics. Mixed-phase processes are often poorly represented in global models and many use an empirically based diagnostic partition between the liquid and ice phase that is dependent solely on temperature. However, increasingly more complex microphysical parameterizations are being implemented allowing a more physical representation of mixed-phase clouds. This study uses in situ observations from the ASCOS campaign in the central Arctic to evaluate the impact of a change from a diagnostic to a prognostic parameterization of mixed-phase clouds and increased vertical resolution in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS). The newer cloud scheme improves the representation of the vertical structure of mixed-phase clouds, with supercooled liquid water at cloud top and ice precipitating below, improved further with higher vertical resolution. Increased supercooled liquid water and decreased ice content are both in closer agreement with observations. However, these changes do not result in any substantial improvement in surface radiation and there remains a warm and moist bias in the lowest part of the atmosphere. Both schemes also fail to capture the transitions from overcast to cloud-free conditions. Moreover, whereas the observed cloud layer is frequently decoupled from the surface, in the model the cloud remains coupled to the surface most of the time. The changes implemented to the cloud scheme are an important step forward in improving the representation of Arctic clouds, but improvements in other aspects such as boundary layer turbulence, cloud radiative properties
Zhao, Fang; Veldkamp, Ted; Schauberger, Bernhard; Willner, Sven; Yamazaki, Dai
2017-04-01
Global hydrological models (GHMs) have been applied to assess global flood hazards. However, their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharges were compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, probably induced by the buffering capacity of floodplain reservoirs. For most river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over more than 60% of the basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not present in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
Zhao, Fang; Veldkamp, Ted I. E.; Frieler, Katja; Schewe, Jacob; Ostberg, Sebastian; Willner, Sven; Schauberger, Bernhard; Gosling, Simon N.; Müller Schmied, Hannes; Portmann, Felix T.; Leng, Gobias; Huang, Maoyi; Liu, Xingcai; Tang, Qiuhong; Hanasaki, Naota; Biemans, Hester; Gerten, Dieter; Satoh, Yusuke; Pokhrel, Yadu; Stacke, Tobias; Ciais, Philippe; Chang, Jinfeng; Ducharne, Agnes; Guimberteau, Matthieu; Wada, Yoshihide; Kim, Hyungjun; Yamazaki, Dai
2017-07-01
Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
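The benchmark comparison described above reduces, per station, to measuring the bias in the timing and amplitude of peak daily discharge. The sketch below computes that metric on synthetic hydrographs; the series are invented stand-ins for GHM output and GRDC station records, not data from the study.

```python
import numpy as np

def peak_discharge_bias(sim, obs):
    """Relative bias of the maximum daily discharge."""
    return (sim.max() - obs.max()) / obs.max()

days = np.arange(365)
obs = 100.0 + 400.0 * np.exp(-0.5 * ((days - 150) / 10.0) ** 2)
# Native routing: too sharp and too high a flood peak.
native = 100.0 + 550.0 * np.exp(-0.5 * ((days - 150) / 8.0) ** 2)
# Floodplain-aware routing: damped and delayed peak, closer to observations.
routed = 100.0 + 430.0 * np.exp(-0.5 * ((days - 168) / 14.0) ** 2)

assert abs(peak_discharge_bias(routed, obs)) < abs(peak_discharge_bias(native, obs))
assert np.argmax(routed) - np.argmax(obs) == 18   # peak delayed ~2.5 weeks
```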
Snow modeling using SURFEX with the CROCUS snow scheme for Norway
Vikhamar-Schuler, D.; Müller, K.
2012-04-01
In 2010 a research project was initiated with the aim of investigating methods to establish a regional snow avalanche forecasting system for Norway. Part of this project concerns snow models that simulate snow stratigraphy and physical parameters in the snowpack. For this purpose we have used the CROCUS snow scheme within the land surface model SURFEX for the locations of 18 weather stations in Norway. We have carried out a sensitivity study of the available meteorological data. Few weather stations measure all the parameters used by the model on an hourly basis. It is therefore interesting to investigate whether certain parameters can be replaced by short-term prognoses from operational weather prediction models (Unified Model-4 km, HARMONIE-4 km and post-processed prognoses of temperature and precipitation). This study indicates that short-term prognoses of radiation, air humidity, wind and air pressure may replace observations without losing quality in the snow simulations. For all stations the modeled snow depth is validated against the observed snow depth for the last 2-3 winter seasons. Our results show that the modeled snow depth is most sensitive to precipitation and air temperature. Overall, very good estimates of the snow depth are obtained using the CROCUS snow scheme, except for very wind-exposed stations. Temperatures within the snowpack were compared with observations of snow temperature at the Filefjell station, showing promising results. A cold bias was observed, but daily variations were reasonably modeled. During the winter of 2011/2012 a series of snow stratigraphy observations at the Filefjell station was carried out for validation of other intra-snowpack physical properties (density, liquid water content, temperature, grain type).
Development and evaluation of a building energy model integrated in the TEB scheme
Directory of Open Access Journals (Sweden)
B. Bueno
2011-11-01
The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. In contrast with other building parameterizations used in urban climate models, BEM includes specific models for real air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.
Development and evaluation of a building energy model integrated in the TEB scheme
Bueno, B.; Pigeon, G.; Norford, L. K.; Zibouche, K.; Marchadier, C.
2012-03-01
The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. BEM allows for previously unavailable sophistication in the modelling of air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.
Braghiere, Renato; Quaife, Tristan; Black, Emily
2016-04-01
Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive for large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of Sun zenith angle, the so-called "structure factor", which does not require an explicit description of the complex phenomena arising from heterogeneous vegetation architecture, and which guarantees radiative-balance simulations consistent with 3D representations. To evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, we compared the modified two-stream approximation with the "structure factor" parameterisation against state-of-the-art 3D radiative transfer schemes, following a set of virtual scenarios described in the RAMI4PILPS experiment. These experiments evaluate the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from an incomplete or erroneous knowledge of the structural, spectral and illumination related canopy characteristics typical
A convergent scheme for a non-local coupled system modelling dislocations densities dynamics
Hajj, A. El; Forcadel, N.
2008-06-01
In this paper, we study a non-local coupled system that arises in the theory of dislocations densities dynamics. Within the framework of viscosity solutions, we prove a long time existence and uniqueness result for the solution of this model. We also propose a convergent numerical scheme and we prove a Crandall-Lions type error estimate between the continuous solution and the numerical one. As far as we know, this is the first error estimate of Crandall-Lions type for Hamilton-Jacobi systems. We also provide some numerical simulations.
A Certificateless Ring Signature Scheme with High Efficiency in the Random Oracle Model
Directory of Open Access Journals (Sweden)
Yingying Zhang
2017-01-01
Ring signature is a kind of digital signature which can protect the identity of the signer. Certificateless public key cryptography not only overcomes the key escrow problem but also retains the advantages of identity-based cryptography. A certificateless ring signature integrates ring signatures with certificateless public key cryptography. In this paper, we propose an efficient certificateless ring signature scheme; it requires only three bilinear pairing operations in the verification algorithm. The scheme is proved to be unforgeable in the random oracle model.
Energy Technology Data Exchange (ETDEWEB)
Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-02-01
An orthogonal 2-dimensional numerical code has been developed. The present code contains 9 widely used turbulence models: a standard k-{epsilon} model and 8 low-Reynolds-number variants. It also includes 6 numerical schemes: 5 low-order schemes and 1 high-order scheme (QUICK). To verify the present code, pipe flow, channel flow and expansion-pipe flow were solved with various combinations of turbulence models and numerical schemes, and the calculated outputs were compared to experimental data. Furthermore, the discretization error that originates from the use of the standard k-{epsilon} turbulence model with a wall function is greatly diminished by introducing a new grid system in place of the conventional one in the present code. 23 refs., 58 figs., 6 tabs. (Author)
A SUBDIVISION SCHEME FOR VOLUMETRIC MODELS
Institute of Scientific and Technical Information of China (English)
Ghulam Mustafa; 刘雪峰
2005-01-01
In this paper, a subdivision scheme which generalizes a surface scheme in previous papers to volume meshes is designed. The scheme exhibits significant control over shrinkage/size of volumetric models. It also has the ability to conveniently incorporate boundaries and creases into a smooth limit shape of models. The method presented here is much simpler and easier than that of MacCracken and Joy. This method makes no restrictions on the local topology of meshes. In particular, it can be applied without any change to meshes of non-manifold topology.
Institute of Scientific and Technical Information of China (English)
HUA Zu-lin; XING Ling-hang; GU Li
2008-01-01
The modified QUICK scheme on an unstructured grid was used to improve the advection flux approximation, and a depth-averaged turbulence model with this scheme, based on the FVM with the SIMPLE series algorithm, was established and applied to spur-dike flow computation. In this model, the over-relaxed approach was adopted to estimate the diffusion flux, in view of its advantages in reducing errors and sustaining the numerical stability usually compromised on non-orthogonal meshes. Two spur-dike cases with different deflection angles (90° and 135°) were analyzed to validate the model. Computed results show that the predicted velocities and recirculation lengths are in good agreement with the observed data. Moreover, the computations on structured and unstructured grids were compared for approximately equivalent numbers of grid cells. It can be concluded that the precision with unstructured grids is higher than that with structured grids, although slightly more CPU time is required with unstructured grids. The method is thus of value for the numerical simulation of practical hydraulic engineering problems.
Gall, Heather E.; Jafvert, Chad T.; Jenkinson, Byron
2010-11-01
Automated sample collection for water quality research and evaluation generally is performed by simple time-paced or flow-weighted sampling protocols. However, samples collected on strict time-paced or flow-weighted schemes may not adequately capture all elements of storm event hydrographs (i.e., rise, peak, and recession). This can result in inadequate information for calculating chemical mass flux over storm events. In this research, an algorithm was developed to guide automated sampling of hydrographs based on storm-specific information. A key element of the new "hydrograph-specific sampling scheme" is the use of a hydrograph recession model for predicting the hydrograph recession curve, during which flow-paced intervals are calculated for scheduling the remaining samples. The algorithm was tested at a tile drained Midwest agricultural site where real-time flow data were processed by a programmable datalogger that in turn activated an automated sampler at the appropriate sampling times to collect a total of twenty samples during each storm event independent of the number of sequential hydrographs generated. The utility of the algorithm was successfully tested with hydrograph data collected at both a tile drain and agricultural ditch, suggesting the potential for general applicability of the method. This sampling methodology is flexible in that the logic can be adapted for use with any hydrograph recession model; however, in this case a power law equation proved to be the most practical model.
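The hydrograph-specific scheduling idea above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the specific recession law (dQ/dt = -aQ^2, i.e. the power-law exponent b = 2, chosen so the recession has a closed form) and all function and parameter names are assumptions.

```python
import numpy as np

def recession_flow(q_peak, a, t):
    # Hypothetical power-law recession dQ/dt = -a*Q**2 (exponent b = 2),
    # which has the closed form Q(t) = Qp / (1 + a*Qp*t).
    return q_peak / (1.0 + a * q_peak * t)

def flow_paced_sample_times(q_peak, a, horizon, n_samples):
    """Schedule the remaining samples so each interval carries an equal
    share of the predicted recession-limb flow volume."""
    t = np.linspace(0.0, horizon, 10001)
    q = recession_flow(q_peak, a, t)
    # cumulative volume under the predicted hydrograph (trapezoidal rule)
    vol = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))))
    targets = np.linspace(0.0, vol[-1], n_samples + 1)[1:]
    return np.interp(targets, vol, t)
```

Because the predicted flow decreases during the recession, equal-volume spacing automatically stretches the sampling intervals as the event tails off, which is the behaviour the flow-paced logic is after.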
Institute of Scientific and Technical Information of China (English)
QIU Zhongfeng; Andrea M. DOGLIOLI; HE Yijun; Francois CARLOTTI
2011-01-01
This paper presents two comparisons or tests for a Lagrangian model of zooplankton dispersion: numerical schemes and time steps. Firstly, we compared three numerical schemes using idealized circulations. Results show that the precisions of the advanced Adams-Bashforth-Moulton (ABM) method and the Runge-Kutta (RK) method were of the same order, and both were much higher than that of the Euler method. Furthermore, the advanced ABM method is more efficient than the RK method in computational memory requirements and time consumption. We therefore chose the advanced ABM method as the Lagrangian particle-tracking algorithm. Secondly, we performed a sensitivity test for time steps, using outputs of the hydrodynamic model Symphonie. Results show that the time step choice depends on the fluid response time, which is related to the spatial resolution of the velocity fields. The method introduced by Oliveira et al. in 2002 is suitable for choosing time steps of Lagrangian particle-tracking models, at least when only advection is considered.
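The scheme comparison can be illustrated on an idealized circulation (solid-body rotation, where the exact trajectory is a circle). The two-step Adams-Bashforth predictor with Adams-Moulton (trapezoidal) corrector below is a low-order stand-in for the paper's "advanced ABM" scheme, whose exact order is not stated in the abstract; all names are illustrative.

```python
import numpy as np

def vel(p):
    # Idealized solid-body rotation: particles trace circles about the origin
    x, y = p
    return np.array([-y, x])

def abm2_step(p, f_prev, dt):
    """AB2 predictor + AM2 (trapezoidal) corrector for dp/dt = vel(p)."""
    f = vel(p)
    pred = p + dt * (1.5 * f - 0.5 * f_prev)   # Adams-Bashforth 2 predictor
    corr = p + dt * 0.5 * (f + vel(pred))      # Adams-Moulton 2 corrector
    return corr, f

def track(p0, dt, n, method="abm"):
    """Advect a particle for n steps with the chosen scheme."""
    p = np.array(p0, dtype=float)
    f_prev = vel(p)
    for _ in range(n):
        if method == "euler":
            p = p + dt * vel(p)
        else:
            p, f_prev = abm2_step(p, f_prev, dt)
    return p
```

On this test case the predictor-corrector pair reuses the stored previous derivative, needing only one fresh velocity interpolation per step plus the corrector evaluation, which hints at the memory/time advantage the authors report over RK schemes.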
A self-organized internal models architecture for coding sensory-motor schemes
Directory of Open Access Journals (Sweden)
Esaú Escobar Juárez
2016-04-01
Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the constraints of the embodiment hypothesis. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first-order in space and time and extended using a second-order MUSCL extension in space and a Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second-order in time and space. The other main result stands in the consistency with respect to the asymptotics reached at small and large time scales in low Froude regimes, which governs large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties tend to provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme efficiency: an experiment of fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump considering a non-trivial topography, and a last experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
Accuracy tests of radiation schemes used in hot Jupiter global circulation models
Amundsen, David S.; Baraffe, Isabelle; Tremblin, Pascal; Manners, James; Hayek, Wolfgang; Mayne, Nathan J.; Acreman, David M.
2014-04-01
The treatment of radiation transport in global circulation models (GCMs) is crucial for correctly describing Earth and exoplanet atmospheric dynamics processes. The two-stream approximation and correlated-k method are currently state-of-the-art approximations applied in both Earth and hot Jupiter GCM radiation schemes to facilitate the rapid calculation of fluxes and heating rates. Their accuracy has been tested extensively for Earth-like conditions, but verification of the methods' applicability to hot Jupiter-like conditions is lacking in the literature. We are adapting the UK Met Office GCM, the Unified Model (UM), for the study of hot Jupiters, and present in this work the adaptation of the Edwards-Slingo radiation scheme based on the two-stream approximation and the correlated-k method. We discuss the calculation of absorption coefficients from high-temperature line lists and highlight the large uncertainty in the pressure-broadened line widths. We compare fluxes and heating rates obtained with our adapted scheme to more accurate discrete ordinate (DO) line-by-line (LbL) calculations ignoring scattering effects. We find that, in most cases, errors stay below 10% for both heating rates and fluxes using ~10 k-coefficients in each band and a diffusivity factor D = 1.66. The two-stream approximation and the correlated-k method both contribute non-negligibly to the total error. We also find that using band-averaged absorption coefficients, which have previously been used in radiative-hydrodynamical simulations of a hot Jupiter, may yield errors of ~100%, and should thus be used with caution.
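The diffusivity-factor shortcut mentioned above (D = 1.66) can be checked against direct hemispheric integration. This is a minimal sketch of that single approximation, not of the full Edwards-Slingo scheme; the quadrature resolution and function names are arbitrary choices.

```python
import numpy as np

def diffuse_transmission_exact(tau, n=20000):
    # Hemispheric (angular) integration for isotropic diffuse radiation:
    # T(tau) = 2 * integral_0^1 mu * exp(-tau / mu) dmu
    mu = np.linspace(1e-6, 1.0, n)
    f = mu * np.exp(-tau / mu)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(mu))  # trapezoid rule

def diffuse_transmission_approx(tau, D=1.66):
    # Two-stream shortcut: replace the integral by one slant path at 1/D
    return np.exp(-D * tau)
```

For moderate optical depths the single-path approximation tracks the exact angular integral to within a few percent, which is why the diffusivity factor is such a cheap and popular substitute for the full integration.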
The destructive negative binomial cure rate model with a latent activation scheme.
Cancho, Vicente G; Bandyopadhyay, Dipankar; Louzada, Francisco; Yiqi, Bao
2013-07-01
A new flexible cure rate survival model is developed in which the initial number of competing causes of the event of interest (say, lesions or altered cells) follows a compound negative binomial (NB) distribution. This model provides a realistic interpretation of the biological mechanism of the event of interest, as it models a destructive process acting on the initial competing risk factors and records only the damaged portion of the original number of risk factors. Besides, it also accounts for the underlying mechanisms that lead to cure through various latent activation schemes. Our method of estimation exploits maximum likelihood (ML) tools. The methodology is illustrated on a real data set on malignant melanoma, and the finite-sample behavior of the parameter estimates is explored through simulation studies.
Effect of Vapor Pressure Scheme on Multiday Evolution of SOA in an Explicit Model
Lee-Taylor, J.; Madronich, S.; Aumont, B.; Camredon, M.; Emmons, L. K.; Tyndall, G. S.; Valorso, R.
2011-12-01
Recent modeling of the evolution of Secondary Organic Aerosol (SOA) has led to the critically important prediction that SOA mass continues to increase for several days after emission of primary pollutants. This growth of organic aerosol in dispersing plumes originating from urban point sources has direct implications for regional aerosol radiative forcing. We investigate the robustness of predicted SOA mass growth downwind of Mexico City in the model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere), by assessing its sensitivity to the choice of vapor pressure prediction scheme. We also explore the implications for multi-day SOA mass growth of glassification / solidification of SOA constituents during aging. Finally we use output from the MOZART-4 chemical transport model to evaluate our results in the regional and global context.
Bayesian Estimation and Prediction for Flexible Weibull Model under Type-II Censoring Scheme
Directory of Open Access Journals (Sweden)
Sanjay Kumar Singh
2013-01-01
We have developed a Bayesian estimation procedure for the flexible Weibull distribution under a Type-II censoring scheme, assuming Jeffreys' scale-invariant (noninformative) and Gamma (informative) priors for the model parameters. Interval estimation for the model parameters has been performed through normal approximation, bootstrap, and highest posterior density (HPD) procedures. Further, we have also derived the predictive posteriors and the corresponding predictive survival functions for future observations based on Type-II censored data from the flexible Weibull distribution. Since the predictive posteriors are not available in closed form, we propose to use Markov chain Monte Carlo (MCMC) methods to approximate the posteriors of interest. The performance of the Bayes estimators has also been compared with that of the classical estimators of the model parameters through a Monte Carlo simulation study. A real data set representing the time between failures of secondary reactor pumps is analysed for illustration purposes.
Comparing Numerical Integration Schemes for Time-Continuous Car-Following Models
Treiber, Martin
2014-01-01
When simulating trajectories by integrating time-continuous car-following models, standard integration schemes such as the fourth-order Runge-Kutta method (RK4) are rarely used, while the simple Euler method is popular among researchers. We compare four explicit methods: the Euler method, the ballistic update, Heun's method (trapezoidal rule), and the standard fourth-order RK4. As a performance metric, we plot the global discretization error as a function of the numerical complexity. We tested the methods on several time-continuous car-following models in several multi-vehicle simulation scenarios with and without discontinuities such as stops or discontinuous behavior of an external leader. We find that the theoretical advantage of RK4 (consistency order 4) only plays a role if both the acceleration function of the model and the external data of the simulation scenario are sufficiently often differentiable. Otherwise, we obtain lower (and often fractional) consistency orders. Although, to our knowledge, Heun's met...
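The ordering the authors find in the smooth case can be reproduced on a toy smooth car-following law. The optimal-velocity-type model below is an invented illustration, not one of the models tested in the paper, and the reference solution is simply a fine-step RK4 run.

```python
import numpy as np

def rhs(y):
    # Hypothetical smooth car-following law (optimal-velocity type):
    # state y = (gap s to a leader driving at 25 m/s, own speed v)
    s, v = y
    v_opt = 30.0 * np.tanh(s / 40.0)   # equilibrium speed as function of gap
    return np.array([25.0 - v, (v_opt - v) / 2.0])

def integrate(y0, dt, T, method):
    """Integrate the ODE with Euler, Heun (trapezoidal) or classical RK4."""
    y = np.array(y0, dtype=float)
    for _ in range(int(round(T / dt))):
        k1 = rhs(y)
        if method == "euler":
            y = y + dt * k1
        elif method == "heun":
            k2 = rhs(y + dt * k1)
            y = y + dt * 0.5 * (k1 + k2)
        else:                           # classical fourth-order RK4
            k2 = rhs(y + 0.5 * dt * k1)
            k3 = rhs(y + 0.5 * dt * k2)
            k4 = rhs(y + dt * k3)
            y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y
```

With a sufficiently smooth acceleration function the global errors separate cleanly by consistency order; the paper's point is that this separation collapses once stops or leader discontinuities enter the scenario.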
Smart Grid Network Transmission Line RLC Modelling Using Random Power Line Synthesis Scheme
Directory of Open Access Journals (Sweden)
Ezennaya S.O
2013-07-01
This work proposes Random Power Line Synthesis (RPLS) as a quicker computational approach to solving the RLC parameters of a modern smart grid transmission network. Since modern grid systems provide a holistic perspective on modern grid development, it is obvious that an ageing transmission network cannot serve the expanded load demand. The need to revolutionize the traditional transmission model while exploiting basic electrical theories and principles in the Smart Grid (SG) architecture necessitated this paper. This work seeks to address RLC parameter modelling for an SG template to provide dynamic power in the Nigerian context. Other schemes of transmission RLC modelling were studied and their limitations outlined. Consequently, we propose a fuzzy smart grid framework for RLC computation and develop a proposed SG overhead transmission line from its conductor characteristics and tower geometry, considering the RLC parameters of the conductor while applying RPLS to generate the parameter metrics.
A multi-objective model for cordon-based congestion pricing schemes with nonlinear distance tolls
Institute of Scientific and Technical Information of China (English)
孙鑫; 刘志远; THOMPSON Russell G; 别一鸣; 翁金贤; 陈淑燕
2016-01-01
Congestion pricing is an important component of urban intelligent transport systems. The efficiency, equity and environmental impacts associated with road pricing schemes are key issues that should be considered before such schemes are implemented. This paper focuses on cordon-based pricing with distance tolls, where the tolls are determined by a nonlinear function of a vehicle's travel distance within a cordon, termed the toll charge function. The optimal tolls can give rise to: 1) higher total social benefits, 2) better levels of equity, and 3) reduced environmental impacts (e.g., less emission). Firstly, a deterministic user equilibrium (DUE) model with elastic demand is presented to evaluate any given toll charge function. The distance tolls are non-additive; thus a modified path-based gradient projection algorithm is developed to solve the DUE model. Then, to quantitatively measure the equity level of each toll charge function, the Gini coefficient is adopted to measure the equity level of the flows in the entire transport network based on equilibrium flows. The total emission level is used to reflect the impacts of distance tolls on the environment. With these two indexes/measurements for the efficiency, equity and environmental issues, together with the DUE model, a multi-objective bi-level programming model is then developed to determine optimal distance tolls. The multi-objective model is converted to a single-level model using goal programming. A genetic algorithm (GA) is adopted to determine solutions. Finally, a numerical example is presented to verify the methodology.
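The Gini-based equity measurement can be made concrete with the standard closed form. The sketch below assumes the quantity being ranked (e.g. per-user equilibrium cost or benefit) is already available as an array, which simplifies the paper's network-level definition on equilibrium flows.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array.

    0 means perfect equity (everyone gets the same); values near 1 mean
    one user captures almost everything."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Standard closed form using 1-indexed ranks i of the sorted values:
    # G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n
    return (2.0 * np.sum(np.arange(1, n + 1) * x)) / (n * x.sum()) - (n + 1) / n
```

In a bi-level scheme like the one described, this scalar becomes one objective among several (alongside social benefit and total emissions) that the upper-level search over toll charge functions tries to improve.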
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
The features of the Gregory cumulus parameterization scheme, which is used by the British Weather Office, are investigated, and the scheme is then developed and improved according to the characteristics of area precipitation over China. Firstly, the influence of large-scale convergence in the lower troposphere upon cumulus convection is directly taken into account in a "bulk" cloud model, and organized entrainment and detrainment are considered in the model. Secondly, the initial mass flux is revised. Thirdly, the effects of supercooled water upon saturation vapour pressure are considered. Finally, the downdraft air is adjusted. For several years, numerical forecasts of seasonal precipitation in China have been carried out using the modified Gregory scheme. The results show that the model with the improved Gregory scheme simulates the precipitation over China well, and the prediction results are good.
Xia, T.; Kustas, W. P.; Anderson, M. C.; Alfieri, J. G.; Gao, F.; McKee, L.; Prueger, J. H.; Geli, H. M. E.; Neale, C. M. U.; Sanchez, L.; Mar Alsina, M.; Wang, Z.
2015-11-01
Thermal and multispectral remote sensing data from low-altitude aircraft can provide high spatial resolution necessary for sub-field (≤ 10 m) and plant canopy (≤ 1m) scale evapotranspiration (ET) monitoring. In this study, high resolution aircraft sub-meter scale thermal infrared and multispectral shortwave data are used to map ET over vineyards in central California with the Two Source Energy Balance (TSEB) model and with a simple model called DATTUTDUT (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature) which uses contextual information within the image to scale between radiometric land surface temperature (TR) values representing hydrologic limits of potential ET and a non-evaporative surface. Imagery from five days throughout the growing season is used for mapping ET at the sub-field scale. The performance of the two models is evaluated using tower-based energy flux measurements of sensible (H) and latent heat (LE) or ET. The comparison indicates that TSEB was able to derive reasonable ET estimates under varying conditions, likely due to the physically based treatment of the energy and the surface temperature partitioning between the soil/cover crop inter-row and vine canopy elements. On the other hand, DATTUTDUT performance was somewhat degraded presumably because the simple scaling scheme does not consider differences in the two sources (vine and inter-row) of heat and temperature contributions or the effect of surface roughness on the efficiency of heat exchange. Maps of the evaporative fraction (EF = LE/(H + LE)) from the two models had similar spatial patterns but different magnitudes in some areas within the fields on certain days. Large EF discrepancies between the models were found on two of the five days (DOY 162 and 219) when there were significant differences with the tower-based ET measurements, particularly using the DATTUTDUT model. These differences in EF between the models translate to significant variations in
Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena
2015-04-01
out of two models each having its own assimilation scheme. In integrated schemes data assimilation is carried out with respect to the split model as a whole. First approach is more efficient from computational point of view, for in some important cases it can be implemented without iterations [2]. Its shortcoming is that control functions in different part of the model are adjusted independently thus having less evident physical sense. With the aid of numerical experiments we compare the two approaches. Work has been partially supported by COST Action ES1004 STSM Grants #16817 and #21654, RFBR 14-01-31482 mol a and 14-01-00125, Programmes # 4 Presidium RAS and # 3 MSD RAS, integration projects SB RAS #8 and #35. References: [1] V. V. Penenko Variational methods of data assimilation and inverse problems for studying the atmosphere, ocean, and environment Num. Anal. and Appl., 2009 V 2 No 4, 341-351. [2] A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme. Computational technologies, 19(4):69-83, 2014.
Directory of Open Access Journals (Sweden)
Nopparat Pochai
2011-01-01
Stream water quality models used in water quality assessment problems often involve numerical methods to solve the governing equations. The governing equation of the uniform-flow model is the one-dimensional advection-dispersion-reaction equation (ADRE). In this paper, the focus is on a better finite difference scheme for solving the ADRE, and the effect of nonuniform water flows in a stream is considered. Two mathematical models are used to simulate pollution due to sewage effluent. The first is a hydrodynamic model that provides the velocity field and elevation of the water flow. The second is an advection-dispersion-reaction model that gives the pollutant concentration fields, taking the velocity data from the hydrodynamic model as input. For the numerical techniques, we used the Crank-Nicolson method for the system of the hydrodynamic model and explicit schemes for the dispersion model. The revised explicit schemes are modified from two computational techniques for uniform-flow stream problems: the forward time central space (FTCS) and Saulyev schemes for the dispersion model. A comparison of both schemes regarding the stability aspect is provided so as to illustrate their applicability to the real-world problem.
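The Saulyev idea referred to above can be sketched for the uniform-flow ADRE. Boundary handling and the treatment of the advection term vary between formulations, so the version below (a left-to-right type-I sweep with a simple explicit upwind-style advection term and Dirichlet ends) is one illustrative choice rather than the paper's exact scheme, and all names are assumptions.

```python
import numpy as np

def saulyev_step(c, u, D, k, dx, dt):
    """One left-to-right Saulyev (type I) sweep for
    dC/dt + u*dC/dx = D*d2C/dx2 - k*C  on a uniform grid, Dirichlet ends.

    The trick: the diffusion stencil mixes the already-updated left
    neighbour new[i-1] with old values, which keeps the update explicit
    (one pass, no linear solve) yet stable for large D*dt/dx**2."""
    r = D * dt / dx**2          # diffusion number
    cr = u * dt / dx            # Courant number for advection
    new = c.copy()
    for i in range(1, len(c) - 1):
        new[i] = (c[i]
                  + r * (new[i - 1] - c[i] + c[i + 1])      # Saulyev diffusion
                  - cr * (c[i] - new[i - 1])                # upwind-style advection
                  - k * dt * c[i]) / (1.0 + r)              # first-order decay
    return new
```

Compared with FTCS, which blows up once D*dt/dx**2 exceeds 1/2, this sweep tolerates much larger time steps at the cost of a slight directional bias (often compensated by alternating sweep directions).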
Efficient systematic scheme to construct second-principles lattice dynamical models
Escorihuela-Sayalero, Carlos; Wojdeł, Jacek C.; Íñiguez, Jorge
2017-03-01
We start from the polynomial interatomic potentials introduced by Wojdeł et al. [J. Phys.: Condens. Matter 25, 305401 (2013), 10.1088/0953-8984/25/30/305401] and take advantage of one of their key features—namely, the linear dependence of the energy on the potential's adjustable parameters—to devise a scheme for the construction of first-principles-based (second-principles) models for large-scale lattice-dynamical simulations. Our method presents the following convenient features. The parameters of the model are computed in a very fast and efficient way, as it is possible to recast the fit to a training set of first-principles data into a simple matrix diagonalization problem. Our method selects automatically the interactions that are most relevant to reproduce the training-set data, by choosing from a pool that includes virtually all possible coupling terms, and produces a family of models of increasing complexity and accuracy. We work with practical and convenient cross-validation criteria linked to the physical properties that will be relevant in future simulations based on the new model, and which greatly facilitate the task of identifying a potential that is simultaneously simple (thus computationally light), very accurate, and predictive. We also discuss practical ways to guarantee that our energy models are bounded from below, with a minimal impact on their accuracy. Finally, we demonstrate our scheme with an application to ferroelastic perovskite SrTiO3, which features many nontrivial lattice-dynamical features (e.g., a phase transition driven by soft phonons, competing structural instabilities, highly anharmonic dynamics) and provides a very demanding test.
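The key feature exploited above, the linear dependence of the energy on the adjustable parameters, means that fitting a training set of first-principles energies reduces to linear algebra. The toy sketch below uses a hypothetical feature matrix and an ordinary least-squares solve; the paper's actual formulation recasts the fit as a matrix diagonalization and selects terms from a pool of symmetry-allowed couplings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is one atomic configuration, each
# column the value of one candidate coupling term for that configuration.
# Because E(p) = F @ p is linear in the parameters p, fitting is a
# linear-algebra problem rather than a nonlinear optimisation.
n_conf, n_terms = 200, 6
F = rng.normal(size=(n_conf, n_terms))                  # feature matrix
p_true = np.array([1.5, -0.7, 0.3, 0.0, 0.0, 0.1])      # "true" couplings
E_dft = F @ p_true + 1e-3 * rng.normal(size=n_conf)     # noisy training energies

p_fit, *_ = np.linalg.lstsq(F, E_dft, rcond=None)
```

Note how the fitted vector recovers the two vanishing couplings as (near) zeros, which is the kind of information an automatic term-selection scheme can exploit.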
Sensitivity study of the wet deposition schemes in the modelling of the Fukushima accident.
Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne; Kajino, Mizuo; Sekiyama, Thomas; Adachi, Kouji; Didier, Damien; Igarashi, Yasuhito
2016-04-01
The Fukushima-Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. In particular, the atmospheric deposition onto the ground may be studied through the map of measured Cs-137 established after the accident. The limits of detection were low enough to make measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions under multiple weather conditions make it a real challenge. Besides, these measurements are accumulated deposition of Cs-137 over the whole period and do not reveal which deposition mechanisms were involved: in-cloud, below-cloud or dry deposition. A comprehensive sensitivity analysis is performed in order to understand the wet deposition mechanisms. It has been shown in a previous study (Quérel et al., 2016) that the choice of the wet deposition scheme has a strong impact on the assessment of the deposition patterns. Nevertheless, a "best" scheme could not be identified, as the ranking differs according to the statistical indicators considered (correlation, figure of merit in space, and factor 2). One possible explanation for the difficulty in discriminating between schemes is uncertainty in the modelling, resulting for instance from the meteorological data: if the movement of the plume is not properly modelled, the deposition processes are applied with an inaccurate activity in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, making it possible to reconsider the previous study. An updated study including these new meteorological data is presented. In addition, a focus on several releases causing deposition in localized areas during known periods was carried out. This helps to better understand the mechanisms of deposition involved following the
Some considerations on numerical schemes for treating hyperbolicity issues in two-layer models
Sarno, L.; Carravetta, A.; Martino, R.; Papa, M. N.; Tai, Y.-C.
2017-02-01
Multi-layer depth-averaged models are widely employed in various hydraulic engineering applications. Yet, such models are not strictly hyperbolic. Their equation systems typically lose hyperbolicity when the relative velocities between layers become too large, which is associated with Kelvin-Helmholtz instabilities involving turbulent momentum exchanges between the layers. Focusing on the two-layer case, we present a numerical improvement that locally avoids the loss of hyperbolicity. The proposed modification introduces an additional momentum exchange between layers, whose value is iteratively calculated to be strictly sufficient to keep the system hyperbolic. The approach can be easily implemented in any finite volume scheme and there is no limitation concerning the density ratio between layers. Numerical examples, employing both HLL-type and Roe-type approximate Riemann solvers, are reported to validate the method and its key features.
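The loss of hyperbolicity described above can be illustrated with the well-known approximate criterion for the two-layer shallow-water system, valid when the density ratio is close to one. This is only a diagnostic check, not the paper's iterative scheme, which adds just enough inter-layer momentum exchange to restore hyperbolicity:

```python
def is_hyperbolic(u1, u2, h1, h2, r, g=9.81):
    """Approximate hyperbolicity criterion for the two-layer
    shallow-water equations (density ratio r = rho1/rho2 close to 1):
    the eigenvalues stay real while the shear satisfies
        (u1 - u2)**2 < g * (1 - r) * (h1 + h2).
    Illustrative check only; the exact condition requires the roots of
    the quartic characteristic polynomial."""
    return (u1 - u2) ** 2 < g * (1.0 - r) * (h1 + h2)

# Small shear between layers: system is hyperbolic
ok = is_hyperbolic(u1=0.1, u2=0.0, h1=1.0, h2=1.0, r=0.98)
# Large shear: hyperbolicity is lost (Kelvin-Helmholtz regime)
lost = is_hyperbolic(u1=2.0, u2=0.0, h1=1.0, h2=1.0, r=0.98)
```

In a finite volume code, a check of this kind flags the cells where an extra momentum exchange term must be activated.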
Hu, Kainan; Zhang, Hongwu
2016-01-01
A lattice Boltzmann scheme with adjustable Prandtl number and specific heat ratio is proposed, based on the polyatomic ellipsoidal statistical model (ES-BGK). The Prandtl number can be modified through a parameter of the Gaussian distribution, and the specific heat ratio through additional degrees of freedom. To construct the proposed scheme, the Gaussian distribution is expanded in Hermite polynomials, and the general term formula for the Hermite coefficients of the Gaussian distribution is derived. Benchmarks are carried out to verify the proposed scheme. The numerical results are in good agreement with the analytical solutions.
DEFF Research Database (Denmark)
Primdahl, Jorgen; Vesterager, Jens Peter; Finn, John A.
2010-01-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation...... and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental...... schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly...
A test of an optimal stomatal conductance scheme within the CABLE Land Surface Model
Directory of Open Access Journals (Sweden)
M. G. De Kauwe
2014-10-01
Full Text Available Stomatal conductance (gs) affects the fluxes of carbon, energy and water between the vegetated land surface and the atmosphere. We test an implementation of an optimal stomatal conductance model within the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model (LSM). In common with many LSMs, CABLE does not differentiate between gs model parameters in relation to plant functional type (PFT), but only in relation to photosynthetic pathway. We therefore constrained the key model parameter "g1", which represents a plant's water use strategy, by PFT based on a global synthesis of stomatal behaviour. As proof of concept, we also demonstrate that the g1 parameter can be estimated using two long-term average (1960-1990) bioclimatic variables: (i) temperature and (ii) an indirect estimate of annual plant water availability. The new stomatal model, in conjunction with the PFT parameterisations, resulted in a large reduction in annual transpiration fluxes (~30%) compared to the standard CABLE simulations across evergreen needleleaf, tundra and C4 grass regions. Differences in other regions of the globe were typically small. Model performance when compared to upscaled data products was not degraded, though the new stomatal conductance scheme did not noticeably change existing model-data biases. We conclude that optimisation theory can yield a simple and tractable approach to predicting stomatal conductance in LSMs.
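The optimal stomatal model referred to, with g1 as the single water-use parameter, takes the form proposed by Medlyn et al. (2011). A minimal sketch with illustrative values (not CABLE's internal units or parameterisation):

```python
import math

def stomatal_conductance(A, Ca, D, g1, g0=0.0):
    """Optimal stomatal conductance following the Medlyn et al. (2011) form:
        gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca
    A : net assimilation, Ca : CO2 concentration at the leaf surface,
    D : vapour pressure deficit (kPa), g1 (kPa**0.5) : the water-use-strategy
    parameter constrained per PFT in the study.  Values here are
    illustrative only."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# A water-conservative PFT (low g1) yields lower gs than a profligate one
gs_conservative = stomatal_conductance(A=10.0, Ca=400.0, D=1.5, g1=2.0)
gs_profligate   = stomatal_conductance(A=10.0, Ca=400.0, D=1.5, g1=5.0)
```

Constraining g1 by PFT, as the abstract describes, amounts to swapping the value passed to this one parameter for each vegetation class.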
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
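The EnKF half of such a hybrid scheme rests on the standard ensemble analysis update. Below is a toy perturbed-observation sketch on a two-state problem; the dimensions, values and observation operator are illustrative and have nothing to do with the paper's aquifer setup or its OI blending:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, R):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X : n x N state ensemble, y : m observations,
    H : m x n observation operator, R : m x m observation error covariance.
    Illustrative sketch of the update used inside hybrid EnKF-OI schemes."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)               # ensemble anomalies
    Pf = A @ A.T / (N - 1)                              # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                          # analysis ensemble

# Toy 2-state problem, observing only the first component
X = rng.normal(loc=[[5.0], [2.0]], scale=1.0, size=(2, 40))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, y=np.array([4.0]), H=H, R=R)
```

The analysis mean moves toward the observation and the ensemble spread shrinks, which is exactly the behaviour that degrades with small ensembles and motivates the OI hybridisation.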
The HIRLAM fast radiation scheme for mesoscale numerical weather prediction models
Directory of Open Access Journals (Sweden)
L. Rontu
2017-07-01
Full Text Available This paper provides an overview of the HLRADIA shortwave (SW) and longwave (LW) broadband radiation schemes used in the HIRLAM numerical weather prediction (NWP) model and available in the HARMONIE-AROME mesoscale NWP model. The advantage of broadband, over spectral, schemes is that they can be called more frequently within the model without compromising computational efficiency. In mesoscale models, fast interactions between clouds and radiation and between the surface and radiation can be of greater importance than accounting for the spectral details of clear-sky radiation; thus calling the routines more frequently can be of greater benefit than the deterioration due to the loss of spectral detail. Fast but physically based radiation parametrizations are expected to be valuable for high-resolution ensemble forecasting because, as well as being fast to execute, they may provide realistic physical perturbations. Results from single-column diagnostic experiments based on CIRC benchmark cases and an evaluation of 10 years of radiation output from the FMI operational archive of HIRLAM forecasts indicate that HLRADIA performs sufficiently well with respect to the clear-sky downwelling SW and LW fluxes at the surface. In general, HLRADIA tends to overestimate surface fluxes, with the exception of LW fluxes under cold and dry conditions. The most obvious overestimation of the surface SW flux was seen in the cloudy cases of the 10-year comparison; this bias may be related to the use of a too-large cloud inhomogeneity correction. According to the CIRC comparisons, the outgoing LW and SW fluxes at the top of the atmosphere are mostly overestimated by HLRADIA, and the net LW flux is underestimated above clouds. The absorption of SW radiation by the atmosphere seems to be underestimated and LW absorption overestimated. Despite these issues, the overall results are satisfying, and work on the improvement of HLRADIA for use in the HARMONIE-AROME NWP system is ongoing.
Directory of Open Access Journals (Sweden)
Biswajit Bhowmik
2013-01-01
Full Text Available Wireless and mobile network infrastructure services generally examine how different infrastructure providers are positioning themselves to compete in wireless services. These services analyze the opportunities and challenges faced by infrastructure providers selling to wireless and mobile network operators worldwide: reviews of strategies, market positioning, and future direction in the market for wireless and mobile network infrastructure. The service covers all generations of wireless and mobile network infrastructure (2G, 2.5G, 3G, 4G and so on) and is global in scope. The majority of users are mobile while availing themselves of these services, and mobility is the most important feature of any wireless and mobile network. Usually, continuous service is achieved by supporting handoff from one cell to another. Poorly designed handoff schemes tend to generate very heavy signaling traffic and thereby lead to a dramatic decrease in quality of service. In this paper we present a comparative study of selected traffic models with a priority handoff scheme. The study shows how the handoff procedure has a significant impact on wireless mobile network performance on the basis of two primary criteria: the blocking probability of originating calls and the forced termination probability of ongoing calls. In addition, the study examines the arrival rate of handoff calls and the approximate number of mobile users serviced by the underlying base station.
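One classical priority-handoff traffic model of the kind compared in such studies is the cutoff-priority (guard-channel) scheme: of C channels in a cell, g are reserved for handoff calls, and the cell behaves as a birth-death chain. The sketch below computes the two criteria named in the abstract; all parameter values are illustrative:

```python
def guard_channel_probs(C, g, lam_o, lam_h, mu):
    """Steady-state blocking probabilities for a cell with C channels,
    g of which are reserved for handoff calls (cutoff-priority scheme).
    lam_o, lam_h : arrival rates of originating and handoff calls,
    mu : per-call service rate.  Returns (P_block_originating,
    P_block_handoff).  Illustrative traffic model, not a specific
    scheme from the paper."""
    # unnormalised birth-death state probabilities q[n], n channels busy
    q = [1.0]
    for n in range(1, C + 1):
        lam = lam_o + lam_h if n - 1 < C - g else lam_h  # new calls cut off
        q.append(q[-1] * lam / (n * mu))
    Z = sum(q)
    p = [x / Z for x in q]
    Pb_o = sum(p[C - g:])   # originating calls blocked in the guard region
    Pb_h = p[C]             # handoffs blocked only when all channels busy
    return Pb_o, Pb_h

Pb_o, Pb_h = guard_channel_probs(C=10, g=2, lam_o=4.0, lam_h=1.0, mu=1.0)
assert Pb_h < Pb_o   # reserving guard channels favours ongoing calls
```

Raising g lowers the forced termination probability of ongoing calls at the cost of a higher blocking probability for new calls, which is precisely the trade-off such comparison studies quantify.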
Scheme of Constructing CGE Model of China’s Direct Grain Subsidy Policy
Institute of Scientific and Technical Information of China (English)
2011-01-01
This paper introduces the model of China's direct grain subsidy policy, adopts computable general equilibrium (CGE) theory, and advances a scheme for constructing a model of China's direct grain subsidy policy. On the basis of some assumptions, such as complete market competition, no movement of capital and labor between countries, an unchanged exchange rate, incomplete substitution, and behavioral agents comprising representative households, producers, local government and central government, the model established in this paper includes a production module and a demand module. Moreover, the model takes into account the equilibrium structure, the definition of profit, and the macro closure condition. The coefficients this CGE model needs include the Armington substitution elasticities between imports and domestic production for intermediate composite products, investment composite products and consumption composite products; the substitution elasticity among production factors; the transformation elasticity of total sales between domestic absorption and export; the elasticity of residents' expenditure; the elasticity of price; the elasticity of export demand; and the relative share of each sector in the capital stock. This paper also points out the advantages and limitations of the CGE model in analyzing direct grain subsidy policy.
Model assessment of atmospheric pollution control schemes for critical emission regions
Zhai, Shixian; An, Xingqin; Liu, Zhao; Sun, Zhaobin; Hou, Qing
2016-01-01
In recent years, the atmospheric environment in parts of China has become significantly degraded, and the need for emission controls has become urgent. Because more international events are being planned, it is important to implement air quality assurance targeted at significant events held over specific periods of time. This study sets Yanqihu (YQH), Beijing, the location of the 2014 Beijing APEC (Asia-Pacific Economic Cooperation) summit, as the target region. Using the atmospheric inversion model FLEXPART, we determined the sensitive source zones that had the greatest impact on the air quality of the YQH region in November 2012. We then used the air-quality model Models-3/CMAQ and a high-resolution emissions inventory of the Beijing-Tianjin-Hebei region to establish emission reduction tests for the entire source area and for specific sensitive source zones, initiating emission reduction schemes at different ratios and different times. The results showed that initiating a moderate reduction of emissions several days prior to a potential event is more beneficial to the air quality of Beijing than initiating a high-strength reduction campaign on the day of the event. The sensitive source zone of Beijing (BJ-Sens) accounts for 54.2% of the total source area of Beijing (BJ), but its reduction effect reaches 89%-100% of that of the total area, with a reduction efficiency 1.6-1.9 times greater than that of the entire area. The sensitive source zone of Huabei (HuaB-Sens) only represents 17.6% of the total area of Huabei (HuaB), but its emission reduction effect reaches 59%-97% of that of the entire area, with a reduction efficiency 4.2-5.5 times greater than that of the total area. The earlier that emission reduction measures are implemented, the greater the effect they have on preventing the transmission of pollutants. In addition, expanding the controlled areas to sensitive provinces and cities around Beijing (HuaB-Sens) can significantly accelerate the reduction
TTLEM - an implicit-explicit (IMEX) scheme for modelling landscape evolution in MATLAB
Campforts, Benjamin; Schwanghart, Wolfgang
2016-04-01
Landscape evolution models (LEMs) are essential for unravelling interdependent earth surface processes. They have proven very useful for bridging temporal and spatial scales and have been successfully used to integrate existing empirical datasets. There is a growing consensus that landscapes evolve at least as much in the horizontal as in the vertical direction, calling for an efficient implementation of dynamic drainage networks. Here we present a spatially explicit LEM based on the object-oriented function library TopoToolbox 2 (Schwanghart and Scherler, 2014). Similar to other LEMs, rivers are considered to be the main drivers of simulated landscape evolution, as they transmit pulses of tectonic perturbation and set the base level of the surrounding hillslopes. Highly performant graph algorithms facilitate efficient updates of the flow directions, to account for planform changes in the river network, and the calculation of flow-related terrain attributes. We implement the model using an implicit-explicit (IMEX) scheme, i.e. different integrators are used for different terms of the diffusion-incision equation. While linear diffusion is solved using an implicit scheme, we calculate incision explicitly. Contrary to previously published LEMs, however, river incision is solved using a total-volume method which is total variation diminishing, in order to prevent numerical diffusion when solving the stream power law (Campforts and Govers, 2015). We show that the use of this updated numerical scheme alters both landscape topography and catchment-wide erosion rates on a geological time scale. Finally, the availability of a graphical user interface facilitates user interaction, making the tool very useful for both research and didactic purposes. References: Campforts, B., Govers, G., 2015. Keeping the edge: A numerical method that avoids knickpoint smearing when solving the stream power law. J. Geophys. Res. Earth Surf. 120, 1189-1205. doi:10.1002/2014JF003376
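The IMEX split described above can be sketched in one dimension: linear hillslope diffusion is advanced implicitly (a tridiagonal solve, unconditionally stable), while stream-power incision is advanced explicitly. This is a minimal illustration under assumed parameter values and a hypothetical drainage-area proxy, not TTLEM's TVD total-volume solver:

```python
import numpy as np

def imex_step(z, dx, dt, D, K, A, m, n, U):
    """One IMEX step for a 1-D elevation profile z(x):
    explicit stream-power incision K*A**m*S**n plus uplift U, followed by
    implicit linear diffusion.  Flow drains toward the fixed base level
    at x = 0; no-flux condition at the divide."""
    N = len(z)
    # explicit incision with upstream slope (clipped at zero)
    S = np.maximum(np.diff(z, prepend=z[0]) / dx, 0.0)
    z = z + dt * (U - K * A**m * S**n)
    z[0] = 0.0                                   # fixed base level
    # implicit diffusion: solve (I - dt*D*L) z_new = z
    r = dt * D / dx**2
    M = np.diag((1 + 2 * r) * np.ones(N)) \
      - np.diag(r * np.ones(N - 1), 1) - np.diag(r * np.ones(N - 1), -1)
    M[0, :] = 0.0; M[0, 0] = 1.0                 # Dirichlet at base level
    M[-1, -1] = 1 + r                            # no-flux at the divide
    return np.linalg.solve(M, z)

x = np.linspace(0.0, 1000.0, 101)
z = np.zeros_like(x)
A = 1e5 * (1001.0 - x) / 1001.0                  # hypothetical drainage-area proxy
for _ in range(200):
    z = imex_step(z, dx=10.0, dt=50.0, D=0.01, K=2e-5, A=A, m=0.5, n=1.0, U=1e-3)
```

Because the stiff diffusion term is implicit, the time step is limited only by the explicit incision term, which is the practical benefit of the IMEX split.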
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
Full Text Available The Saint-Venant (shallow water) equations are used to simulate river flow, liquid flow in open channels, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic partial differential equations (PDEs), and hence the Saint-Venant equations. The KP scheme is semi-discrete: the PDEs are discretized in the spatial domain, resulting in a set of ordinary differential equations (ODEs). In this study, the common 2nd-order KP scheme is extended to a 3rd-order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd-order and 3rd-order schemes have been used in simulations in order to check the suitability of the KP schemes for solving hyperbolic PDEs. The simulation results indicate that the 3rd-order KP scheme shows better stability than the 2nd-order scheme. The computational time of the 3rd-order KP scheme with variable step-length ODE solvers in MATLAB is lower than that of the 2nd-order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for the computation of abrupt step changes, the 2nd-order KP scheme gives a more accurate solution.
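The semi-discrete idea can be illustrated on a scalar conservation law: a limited piecewise-linear reconstruction gives interface states, local propagation speeds give a central-upwind flux, and the resulting ODE system is advanced with a Runge-Kutta integrator. This is a 2nd-order scalar sketch for Burgers' equation, not the KP scheme applied to the Saint-Venant system:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kp_rhs(u, dx):
    """Semi-discrete 2nd-order central-upwind (Kurganov-type) right-hand
    side for u_t + (u**2/2)_x = 0 with periodic boundary conditions."""
    f = lambda v: 0.5 * v * v
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
    uL = u + 0.5 * s                                    # left state at i+1/2
    uR = np.roll(u - 0.5 * s, -1)                       # right state at i+1/2
    a = np.maximum(np.abs(uL), np.abs(uR))              # local speeds (f'(u) = u)
    F = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)     # central-upwind flux
    return -(F - np.roll(F, 1)) / dx

# Advance the ODE system with 2nd-order Runge-Kutta (Heun's method)
N = 200; dx = 2 * np.pi / N
x = np.arange(N) * dx
u = np.sin(x)
dt = 0.4 * dx
for _ in range(200):
    k1 = kp_rhs(u, dx)
    k2 = kp_rhs(u + dt * k1, dx)
    u = u + 0.5 * dt * (k1 + k2)
```

The sine wave steepens into a shock, which the limited central-upwind flux captures without spurious oscillations; in MATLAB, the same semi-discrete right-hand side would be handed to a variable-step ODE solver such as ode45.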
A new aerosol wet removal scheme for the Lagrangian particle model FLEXPART v10
Grythe, Henrik; Kristiansen, Nina I.; Groot Zwaaftink, Christine D.; Eckhardt, Sabine; Ström, Johan; Tunved, Peter; Krejci, Radovan; Stohl, Andreas
2017-04-01
A new, more physically based wet removal scheme for aerosols has been implemented in the Lagrangian particle dispersion model FLEXPART. It uses three-dimensional cloud water fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) to determine cloud extent and distinguishes between in-cloud and below-cloud scavenging. The new in-cloud nucleation scavenging depends on cloud water phase (liquid, ice or mixed-phase), based on the aerosol's prescribed efficiency to serve as ice crystal nuclei and liquid water nuclei, respectively. The impaction scavenging scheme now parameterizes below-cloud removal as a function of aerosol particle size and precipitation type (snow or rain) and intensity. Sensitivity tests with the new scavenging scheme and comparisons with observational data were conducted for three distinct types of primary aerosols, which pose different challenges for modeling wet scavenging due to their differences in solubility, volatility and size distribution: (1) 137Cs released during the Fukushima nuclear accident, attached mainly to highly soluble sulphate aerosol particles, (2) black carbon (BC) aerosol particles, and (3) mineral dust. Calculated e-folding lifetimes of accumulation-mode aerosols for these three aerosol types were 11.7, 16.0, and 31.6 days, respectively, when well mixed in the atmosphere. These lifetimes are longer than those obtained with the previous removal scheme and, for mineral dust in particular, primarily result from very slow in-cloud removal, which globally is the primary removal mechanism for these accumulation-mode particles. Calculated e-folding lifetimes in FLEXPART also have a strong size dependence, with the longest lifetimes found for the accumulation-mode aerosols. For example, for dust particles emitted at the surface the lifetimes were 13.8 days for particles with 1 µm diameter and a few hours for 10 µm particles. A strong size dependence in below-cloud scavenging, combined with increased dry removal, is the
Four-stream Radiative Transfer Parameterization Scheme in a Land Surface Process Model
Institute of Scientific and Technical Information of China (English)
ZHOU Wenyan; GUO Pinwen; LUO Yong; Kuo-Nan LIOU; Yu GU; Yongkang XUE
2009-01-01
Accurate estimates of albedo are required in climate modeling, and accurate yet simple schemes for radiative transfer within the canopy are required to obtain them, but severe limitations exist. This paper develops a four-stream solar radiative transfer model and couples it with a land surface process model. The radiative model uses a four-stream approximation method, as in the atmosphere, to obtain analytic solutions of the basic equation of canopy radiative transfer. As an analytical model, the four-stream radiative transfer model can easily and efficiently be applied to improve the parameterization of land surface radiation in climate models. Our four-stream solar radiative transfer model is based on a two-stream shortwave radiative transfer model and can simulate shortwave solar radiative transfer within the canopy according to the corresponding theory for the atmosphere. Each parameter of the basic radiative transfer equation of the canopy reflects particular geometric and optical characteristics of the leaves or canopy. The upward and downward radiative fluxes are related to the diffuse phase function, the G-function, leaf reflectivity and transmission, leaf area index, and the solar angle of the incident beam. The four-stream simulation is compared with that of the two-stream model. The four-stream model proves successful through its consistent modeling of canopy albedo at any solar incident angle. In order to compare and find differences between the results predicted by the four- and two-stream models, a number of numerical experiments are performed examining the effects of different leaf area indices, leaf angle distributions, optical properties of leaves, and ground surface conditions on the canopy albedo. Parallel experiments show that the canopy albedos predicted by the two models differ significantly when the leaf angle distribution is spherical or vertical. The results also show that the difference is particularly large for different incident solar beams. One additional
Energy Technology Data Exchange (ETDEWEB)
Ming, Y; Ramaswamy, V; Donner, L J; Phillips, V T; Klein, S A; Ginoux, P A; Horowitz, L H
2005-05-02
This paper describes a self-consistent prognostic cloud scheme that is able to predict cloud liquid water, amount and droplet number (N_d) from the same updraft velocity field, and is suitable for modeling aerosol-cloud interactions in general circulation models (GCMs). In the scheme, the evolution of droplets fully interacts with the model meteorology. An explicit treatment of cloud condensation nuclei (CCN) activation allows the scheme to take into account the contributions to N_d of multiple types of aerosol (i.e., sulfate, organic and sea-salt aerosols) and kinetic limitations of the activation process. An implementation of the prognostic scheme in the Geophysical Fluid Dynamics Laboratory (GFDL) AM2 GCM yields a vertical distribution of N_d characterized by maxima in the lower troposphere, differing from that obtained by diagnosing N_d empirically from sulfate mass concentrations. As a result, the agreement of model-predicted present-day cloud parameters with satellite measurements is improved compared to using diagnosed N_d. The simulations with pre-industrial and present-day aerosols show that the combined first and second indirect effects of anthropogenic sulfate and organic aerosols give rise to a global annual mean flux change of -1.8 W m^-2, consisting of -2.0 W m^-2 in shortwave and 0.2 W m^-2 in longwave, as the model response alters the cloud field and subsequently longwave radiation. Liquid water path (LWP) and total cloud amount increase by 19% and 0.6%, respectively. Largely owing to high sulfate concentrations from fossil fuel burning, the Northern Hemisphere mid-latitude land and oceans experience strong cooling, as does the tropical land, which is dominated by biomass burning organic aerosol. The Northern/Southern Hemisphere and land/ocean ratios are 3.1 and 1.4, respectively. The calculated annual zonal mean flux changes are determined to be statistically significant, exceeding the model's natural
Modified coupled map car-following model and its delayed feedback control scheme
Institute of Scientific and Technical Information of China (English)
Ge Hong-Xia
2011-01-01
A modified coupled map car-following model is proposed, in which the two successive vehicle headways in front of the considered vehicle are incorporated into the optimal velocity function. The steady state under certain conditions is obtained, and an error system around the steady state is studied further. Moreover, the condition for a state with no traffic jam is derived. A new control scheme is presented to suppress traffic jams in the modified coupled map car-following model under an open boundary. A control signal including the velocity differences between the following and the considered vehicle, and between the preceding and the considered vehicle, is used. The condition under which traffic jams can be well suppressed is analysed. The results are compared with those presented by Konishi et al. (the KKH model). The simulations show that the temporal behaviour obtained in our model is better than that in the KKH model, and the simulation results are in good agreement with the theoretical analysis.
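The general shape of a coupled map car-following model with a velocity-difference feedback term can be sketched as below. This is an illustrative simplification under assumed parameter values: a standard tanh optimal-velocity function and a single feedback gain k, not the paper's exact modified update rule or its two-headway optimal velocity function:

```python
import math

def ov(h, vmax=2.0, hc=4.0):
    """Optimal-velocity function (a common tanh form)."""
    return 0.5 * vmax * (math.tanh(h - hc) + math.tanh(hc))

def simulate(n_cars=10, steps=2000, dt=0.1, alpha=3.0, k=0.3):
    """Coupled-map car-following behind a leader moving at constant speed
    (open boundary).  The term k*(v_lead - v) stands in for the paper's
    control signal; alpha is the driver sensitivity.  Car 5 starts with a
    velocity perturbation that the feedback should damp out."""
    x = [4.0 * (n_cars - i) for i in range(n_cars)]   # car 0 leads
    v = [ov(4.0)] * n_cars
    v[5] = 0.5                                        # initial disturbance
    for _ in range(steps):
        x_new, v_new = x[:], v[:]
        for i in range(1, n_cars):
            h = x[i - 1] - x[i]                       # headway
            dv = alpha * (ov(h) - v[i]) + k * (v[i - 1] - v[i])
            v_new[i] = v[i] + dt * dv
            x_new[i] = x[i] + dt * v[i]
        x_new[0] = x[0] + dt * v[0]                   # leader, constant speed
        x, v = x_new, v_new
    return x, v

x, v = simulate()
```

With a stable sensitivity (alpha above the linear stability threshold) plus the feedback term, the disturbance decays and the platoon returns to the uniform headway, which is the jam-suppression behaviour the abstract describes.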
Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann
2013-04-01
to sue a dynamic averaging scheme to generate the final output. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, and D. P. Solomatine. A novel approach to parameter uncertainty analysis of hydrological models using neural networks. Hydrol. Earth Syst. Sci., 13, 1235-1248, 2009.
NOVEL MULTIMEDIA TRAFFIC MODELING BASED CAC SCHEME FOR CDMA COMMUNICATION SYSTEMS
Institute of Scientific and Technical Information of China (English)
Zhang Dandan; Fang Xuming; Zhu Longjie
2007-01-01
As the radio spectrum is a very scarce resource, Call Admission Control (CAC) is one of the most important parts of radio resource management. Code Division Multiple Access (CDMA) based next-generation wireless communication systems will support the transmission of multimedia traffic, such as voice, video and data; thus CAC schemes that can support multimedia traffic and guarantee the Quality of Service (QoS) of different traffic classes have gained broad attention. In this paper, a novel multimedia traffic modeling method and a corresponding dynamic QoS-based CAC are proposed. The analysis and simulation results show that the proposed CAC scheme can guarantee the QoS for different traffic demands and improve the system performance significantly.
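The core idea of admission control under a shared-capacity constraint can be sketched minimally as below; the normalized per-call load factors and the unit cell capacity are hypothetical placeholders, and a real CDMA CAC would also account for interference and per-class QoS targets:

```python
def admit(current_loads, new_load, capacity=1.0):
    """Load-based CAC sketch: admit a new call only if the total
    normalized load stays within the (hypothetical) cell capacity."""
    return sum(current_loads) + new_load <= capacity

ongoing = [0.2, 0.3]            # normalized activity loads of ongoing calls
ok_voice = admit(ongoing, 0.4)  # a small voice call fits
ok_video = admit(ongoing, 0.6)  # a large video call would overload the cell
```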
Modeling the mechanics of HMX detonation using a Taylor–Galerkin scheme
Directory of Open Access Journals (Sweden)
Adam V. Duran
2016-05-01
Design of energetic materials is an exciting area in mechanics and materials science. Energetic composite materials are used as propellants, explosives, and fuel cell components. Energy release in these materials is accompanied by extreme events: shock waves travel at typical speeds of several thousand meters per second and the peak pressures can reach hundreds of gigapascals. In this paper, we develop a reactive dynamics code for modeling detonation wave features in one such material. The key contribution of this paper is an integrated algorithm incorporating equations of state, Arrhenius kinetics, and mixing rules for particle detonation into a Taylor–Galerkin finite element simulation. We show that the scheme captures the distinct features of detonation waves, and the detonation velocity compares well with experiments reported in the literature.
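For the advective part of such a solver, a one-step second-order Taylor–Galerkin update for linear advection on a uniform periodic grid (with a lumped mass matrix it reduces to a Lax–Wendroff-type stencil) looks like the sketch below; the full reactive scheme with equations of state and Arrhenius kinetics is far richer:

```python
def taylor_galerkin_step(u, c):
    """One second-order Taylor-Galerkin step for u_t + a*u_x = 0 on a
    periodic grid (lumped mass matrix gives a Lax-Wendroff-type stencil);
    c = a*dt/dx is the Courant number."""
    n = len(u)
    return [u[i]
            - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * c * c * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
            for i in range(n)]

u0 = [1.0 if 3 <= i <= 6 else 0.0 for i in range(20)]
u = u0
for _ in range(20):               # one full traversal at Courant number 1
    u = taylor_galerkin_step(u, 1.0)
```

At Courant number 1 the update is an exact shift, so the pulse returns to its starting position after one period.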
Günther, T; Büttner, C; Käsbohrer, A; Filter, M
2015-01-01
Mathematical models of the properties and behavior of harmful organisms in the food chain are an increasingly relevant approach for the agriculture and food industry. As a consequence, there are many efforts to develop biological models in science, economics and risk assessment nowadays. However, there is a lack of internationally harmonized standards on model annotation and model formats, which would be necessary to set up efficient tools supporting broad model application and information exchange. There are some established standards in the field of systems biology, but there is currently no corresponding provision in the area of plant protection. This work therefore aimed at the development of an annotation scheme using domain-specific metadata. The proposed scheme has been validated in a prototype implementation of a web-database model repository. This prototypic community resource currently contains models on the aflatoxin-secreting fungus Aspergillus flavus in maize, as these models have high relevance to food safety and economic impact. Specifically, models describing biological processes of the fungus (growth, aflatoxin secretion), as well as dose-response and carry-over models, were included. Furthermore, phenological models for maize were integrated as well. The developed annotation scheme is based on the well-established data exchange format SBML, which is broadly applied in the field of systems biology. The identified example models were annotated according to the developed scheme and entered into a Web-table (Google Sheets), which was transferred to a web-based demonstrator available at https://sites.google.com/site/test782726372685/. By implementing a software demonstrator it became clear that the proposed annotation scheme can be applied to models on plant pathogens and that broad adoption within the domain could promote communication and application of mathematical models.
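A minimal sketch of metadata-based model annotation and validation follows; the field names below are hypothetical illustrations, not the SBML-based scheme the paper defines:

```python
# Hypothetical required fields; the paper's actual scheme is SBML-based metadata.
REQUIRED = {"model_id", "model_class", "organism", "matrix", "reference"}

def validate(record):
    """Return the set of required annotation fields missing from a record."""
    return REQUIRED - set(record)

model = {
    "model_id": "afla_growth_01",
    "model_class": "growth",            # e.g. growth / dose-response / carry-over
    "organism": "Aspergillus flavus",
    "matrix": "maize",
    "reference": "doi:10.0000/example",  # placeholder identifier
}
missing = validate(model)
```

A repository front end could reject any submission whose `validate` result is non-empty.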
Institute of Scientific and Technical Information of China (English)
LIANG Miaoling; XIE Zhenghui
2008-01-01
Canopy interception of incident precipitation, as a critical component of a forest's water budget, can affect the amount of water available to the soil, and ultimately vegetation distribution and function. In this paper, a statistical-dynamic approach based on the leaf area index and statistical canopy interception is used to parameterize the canopy interception process. The statistical-dynamic canopy interception scheme is implemented into the Community Land Model with a dynamic global vegetation model (CLM-DGVM) to improve its dynamic vegetation simulation. The simulation for continental China by the land surface model with the new canopy interception scheme shows that the new scheme reasonably represents the precipitation intercepted by the canopy. Moreover, the new scheme enhances the water availability in the root zone for vegetation growth, especially in densely vegetated and semi-arid areas, and improves the model's performance in potential vegetation simulation.
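A toy LAI-dependent interception rule illustrates the statistical-dynamic idea; the exponential interception fraction and the storage cap per unit LAI are assumed forms, not the CLM-DGVM parameterization:

```python
import math

def intercepted(precip_mm, lai, cap_per_lai=0.1):
    """Interception grows with leaf area index (assumed exponential
    fraction) and is capped by a storage of cap_per_lai * LAI millimetres."""
    fraction = 1.0 - math.exp(-0.5 * lai)
    return min(fraction * precip_mm, cap_per_lai * lai)

def throughfall(precip_mm, lai):
    # water reaching the soil after canopy interception
    return precip_mm - intercepted(precip_mm, lai)
```

For a dense canopy the storage cap binds, so most of a heavy rainfall still reaches the soil.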
Modelling tools for managing Induced RiverBank Filtration MAR schemes
De Filippis, Giovanna; Barbagli, Alessio; Marchina, Chiara; Borsi, Iacopo; Mazzanti, Giorgio; Nardi, Marco; Vienken, Thomas; Bonari, Enrico; Rossetto, Rudy
2017-04-01
Induced RiverBank Filtration (IRBF) is a widely used technique in Managed Aquifer Recharge (MAR) schemes, when aquifers are hydraulically connected with surface water bodies, with proven positive effects on the quality and quantity of groundwater. IRBF allows the abstraction of a large volume of water while avoiding a large decrease in groundwater heads. Moreover, thanks to the filtration process through the soil, the concentration of chemical species in surface water can be reduced, thus becoming an excellent resource for the production of drinking water. Within the FP7 MARSOL project (demonstrating Managed Aquifer Recharge as a SOLution to water scarcity and drought; http://www.marsol.eu/), the Sant'Alessio IRBF (Lucca, Italy) was used to demonstrate the feasibility and the technical and economic benefits of managing IRBF schemes (Rossetto et al., 2015a). The Sant'Alessio IRBF along the Serchio river allows the abstraction of an overall amount of about 0.5 m3/s, providing drinking water for 300,000 people in coastal Tuscany (mainly the towns of Lucca, Pisa and Livorno). The supplied water is made available by enhancing river bank infiltration into a high-yield (10-2 m2/s transmissivity) sandy-gravelly aquifer by raising the river head and using ten vertical wells along the river embankment. A Decision Support System, consisting of connected measurements from an advanced monitoring network and modelling tools, was set up to manage the IRBF. The modelling system is based on spatially distributed, physically based, coupled ground-/surface-water flow and solute transport models integrated in the FREEWAT platform (developed within the H2020 FREEWAT project - FREE and Open Source Software Tools for WATer Resource Management; Rossetto et al., 2015b), an open source and public domain GIS-integrated modelling environment for the simulation of the hydrological cycle. The platform aims at improving water resource management by simplifying the application of EU water-related Directives and at
Hanasaki, N.; Masaki, Y.; Mateo, C.; Kanae, S.; Oki, T.
2013-12-01
More than 45,000 large dams have been constructed all over the world, and their total storage capacity exceeds 7000 km3, which corresponds to approximately 20% of the total annual global river discharge. Explicit incorporation of reservoir operation is indispensable for global-/macro-scale hydrological models, since the storage capacity of reservoirs and their potential to alter flow regimes are too large to neglect. Our presentation consists of three parts. In the first part, the H08 model, a global hydrological model with a reservoir operation scheme, is introduced. The H08 model includes an algorithm to estimate the operating rules of individual reservoirs. This enables us to apply the model to regions and periods with no recorded reservoir operation. The key concepts and challenges of the model are discussed. In the second part, the role of reservoirs in water scarcity is globally assessed. In many parts of the world, local people suffer from temporal variability in precipitation and river flow. The contribution of reservoirs to water scarcity alleviation is quantitatively and globally assessed. The assessment is extended to future periods utilizing the latest climate and socio-economic scenarios. In the last part, a regional model application is introduced. The Chao Phraya River in Thailand suffered a severe flood in 2011. A vast area was inundated for months and the economic damages reached 40 billion USD. The Chao Phraya River has two major reservoirs, and their role is crucial for both water use and flood control. The H08 model is substantially enhanced and applied to the basin to reproduce the 2011 floods. The tradeoff between water use and flood control was investigated by changing reservoir operation options.
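The flow-smoothing role of a reservoir operation scheme can be illustrated with a toy storage-targeting release rule, inspired by but much simpler than the H08 algorithm; all parameter values are hypothetical:

```python
def release(storage, capacity, mean_inflow, alpha=0.85):
    """Toy rule: release scales with how full the reservoir is relative
    to a target fill alpha * capacity (hypothetical parameters)."""
    return mean_inflow * storage / (alpha * capacity)

def simulate(inflows, capacity, storage0):
    mean_inflow = sum(inflows) / len(inflows)
    s, releases = storage0, []
    for q in inflows:
        r = min(release(s, capacity, mean_inflow), s + q)  # no more than available
        s = min(s + q - r, capacity)                       # excess spills (ignored here)
        releases.append(r)
    return releases, s

inflows = [10.0, 10.0, 10.0, 0.0, 0.0, 0.0]   # a wet season, then a dry season
releases, s_end = simulate(inflows, capacity=30.0, storage0=25.0)
```

The released series varies far less than the inflow series, which is the alteration of the flow regime the abstract refers to.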
Zeroual, Abdelhafid
2017-08-19
Monitoring vehicle traffic flow plays a central role in enhancing traffic management, transportation safety and cost savings. In this paper, we propose an innovative approach for the detection of traffic congestion. Specifically, we combine the flexibility and simplicity of a piecewise switched linear (PWSL) macroscopic traffic model and the greater capacity of the exponentially-weighted moving average (EWMA) monitoring chart. Macroscopic models, which have few, easily calibrated parameters, are employed to describe free traffic flow at the macroscopic level. Then, we apply the EWMA monitoring chart to the uncorrelated residuals obtained from the constructed PWSL model to detect congested situations. In this strategy, wavelet-based multiscale filtering of the data is used before the application of the EWMA scheme to further improve the robustness of the method to measurement noise and to reduce false alarms due to modeling errors. The performance of the PWSL-EWMA approach is successfully tested on traffic data from the three-lane portion of the Interstate 210 (I-210) highway in the west of California and the four-lane portion of the State Route 60 (SR60) highway in the east of California, provided by the Caltrans Performance Measurement System (PeMS). Results show the ability of the PWSL-EWMA approach to monitor vehicle traffic, confirming the promising application of this statistical tool to the supervision of traffic flow congestion.
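The EWMA monitoring chart at the heart of the method can be sketched as follows, assuming residuals that are roughly zero-mean with unit variance under free flow; the smoothing parameter and control-limit width are typical textbook choices, not the paper's calibration:

```python
def ewma_monitor(residuals, lam=0.2, L=3.0):
    """Flag points where the EWMA of the residuals leaves the +/- L*sigma
    control limits; residuals assumed ~zero-mean, unit-variance in free flow."""
    z, flags = 0.0, []
    for t, r in enumerate(residuals, start=1):
        z = lam * r + (1.0 - lam) * z
        # exact time-varying standard deviation of the EWMA statistic
        sigma_z = (lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))) ** 0.5
        flags.append(abs(z) > L * sigma_z)
    return flags

# quiet free-flow residuals, then a sustained shift mimicking congestion onset
flags = ewma_monitor([0.0] * 10 + [4.0] * 10)
```

A sustained shift is flagged within a few samples, while small noise-level residuals raise no alarm.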
Improved computational schemes for the numerical modeling of hydrothermal resources in Wyoming
Energy Technology Data Exchange (ETDEWEB)
Heasler, H.P.; George, J.H.; Allen, M.B.
1990-05-01
A new method, the Conjugate Gradient Squared (CGS) solution technique, is shown to be extremely effective when applied to the finite-difference solution of conductive and convective heat transfer in geologic systems. The CGS method is compared to Successive Over/Under Relaxation schemes, a version of the Gaussian elimination method, and the Generalized Minimum Residual (GMRES) approach. The CGS procedure converges at least ten times faster than its nearest competitor. The model is applied to the Thermopolis hydrothermal system, located in northwestern Wyoming. Modeled results are compared with measured temperature-depth profiles and results from other studies. The temperature decrease from 72°C to 54°C along the crest of the Thermopolis anticline is shown to result from cooling of the geothermal fluid as it moves to the southeast. Modeled results show correct general trends; however, a time-varying three-dimensional model will be needed to fully explain the effects of mixing within the aquifers along the crest of the anticline and the thermal effects of surface topography. 29 refs., 18 figs., 2 tabs.
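Sonneveld's CGS iteration, the solver the paper advocates, can be written compactly; this dense pure-Python version is only a sketch (a production code would use sparse storage and preconditioning):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cgs(A, b, tol=1e-10, maxiter=200):
    """Conjugate Gradient Squared (Sonneveld) for a general square system."""
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = list(r)                       # fixed shadow residual
    u, p = list(r), list(r)
    rho = dot(r_hat, r)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rho / dot(r_hat, Ap)
        q = [ui - alpha * api for ui, api in zip(u, Ap)]
        uq = [ui + qi for ui, qi in zip(u, q)]
        x = [xi + alpha * s for xi, s in zip(x, uq)]
        r = [ri - alpha * s for ri, s in zip(r, matvec(A, uq))]
        if dot(r, r) ** 0.5 < tol:        # converged
            break
        rho, rho_old = dot(r_hat, r), rho
        beta = rho / rho_old
        u = [ri + beta * qi for ri, qi in zip(r, q)]
        p = [ui + beta * (qi + beta * pi) for ui, qi, pi in zip(u, q, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cgs(A, b)
```

For this 2x2 system the iteration converges in two steps, reflecting the Krylov-subspace nature of the method.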
Yasunari, Teppei
2012-01-01
The issue of glacier retreat has recently come up, and many factors may be relevant to it. Absorbing aerosols such as dust and black carbon (BC) are considered to be one of these factors. After they deposit onto the snow surface, they reduce the snow albedo (the snow darkening effect) and probably contribute to further melting of glaciers. The Goddard Earth Observing System version 5 (GEOS-5) has been developed at NASA/GSFC. However, the original snowpack model used in the land surface model of GEOS-5 did not consider the snow darkening effect. Here we developed a new snow albedo scheme which considers the snow darkening effect. In addition, another scheme for calculating the mass concentrations of the absorbing aerosols in the snowpack was also developed, in which the direct aerosol depositions from the chemical transport model in GEOS-5 were used. The scheme has been validated with observed data obtained at the backyard of the Institute of Low Temperature Science, Hokkaido University, by Dr. Teruo Aoki (Meteorological Research Institute) et al., including myself. The observed data were obtained when I was a Ph.D. candidate. The original GEOS-5, run during 2007-2009 over the Himalayas and Tibetan Plateau region, showed larger reductions of snow than the new GEOS-5 because the original one used lower albedo settings. For snow cover fraction, the new GEOS-5 simulated a more realistic snow-covered area compared to the MODIS snow cover fraction. Statistically significant reductions in snow albedo, snow cover fraction, and snow water equivalent were seen when the snow darkening effect was considered, compared to the results without it. In the real world, debris cover, internal refreezing processes, surface flow of glaciers, etc. affect glacier mass balance, and the simulated results do not immediately explain whole-glacier retreat. However, our results indicate that some surface melting over the non-debris-covered parts of the glacier would be
Strongly Unforgeable Ring Signature Scheme from Lattices in the Standard Model
Directory of Open Access Journals (Sweden)
Geontae Noh
2014-01-01
from lattices are not even existentially unforgeable with respect to insider corruption. We then improve previous schemes by applying, for the first time, the concept of strong unforgeability with respect to insider corruption to a ring signature scheme in lattices. This offers more security than any previous ring signature scheme: adversaries cannot produce new signatures for any ring-message pair, including previously signed ring-message pairs.
Tariku, Tebikachew Betru; Gan, Thian Yew
2017-08-01
Regional climate models (RCMs) have been used to simulate rainfall at relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the RCM Weather Research and Forecasting (WRF) in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of different physical parameterization schemes, which include cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) schemes. Using the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of the NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations using WRF were compared with satellite data combined with ground observations and the NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, and short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated more accurate T2 and LWRAD (with correlation coefficients >0.8 and low root-mean-square errors) than SWRAD and rainfall for the NRB. Further, the simulation of rainfall is more sensitive to the PBL, Cu and MP schemes than to the other schemes of WRF. For example, WRF simulated less biased rainfall with Kain-Fritsch combined with MYJ than with YSU as the PBL scheme. The simulation of T2 is more sensitive to the LSM and Ra schemes than to the Cu, PBL and MP schemes selected, SWRAD is more sensitive to the MP and Ra schemes than to the Cu, LSM and PBL schemes, and LWRAD is more sensitive to the LSM, Ra and PBL schemes than to the Cu and MP schemes. In summary, the following combination of schemes simulated the most representative regional climate of the NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave radiation and Dudhia shortwave radiation schemes, and the Noah LSM. The above configuration of WRF coupled to the Noah LSM has also been shown to simulate representative regional
Li, Yuan; Ryu, Dongryeol; Western, Andrew W.; Wang, Q. J.
2015-05-01
Real-time discharge observations can be assimilated into flood models to improve forecast accuracy; however, the presence of time lags in the routing process and a lack of methods to quantitatively represent different sources of uncertainties challenge the implementation of data assimilation techniques for operational flood forecasting. To address these issues, an integrated error parameter estimation and lag-aware data assimilation (IEELA) scheme was recently developed for a lumped model. The scheme combines an ensemble-based maximum a posteriori (MAP) error estimation approach with a lag-aware ensemble Kalman smoother (EnKS). In this study, the IEELA scheme is extended to a semidistributed model to provide for more general application in flood forecasting by including spatial and temporal correlations in model uncertainties between subcatchments. The result reveals that using a semidistributed model leads to more accurate forecasts than a lumped model in an open-loop scenario. The IEELA scheme improves the forecast accuracy significantly in both lumped and semidistributed models, and the superiority of the semidistributed model remains in the data assimilation scenario. However, the improvements resulting from IEELA are confined to the outlet of the catchment where the discharge observations are assimilated. Forecasts at "ungauged" internal locations are not improved, and in some instances, even become less accurate.
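The core ensemble Kalman update underlying such assimilation schemes can be sketched for a scalar state with perturbed observations; the lag-aware smoothing and error-parameter estimation of IEELA are beyond this toy:

```python
import random

def enkf_update(ensemble, obs, obs_err_std, rng):
    """Perturbed-observation ensemble Kalman update for a scalar state."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((m - mean) ** 2 for m in ensemble) / (n - 1)
    gain = var / (var + obs_err_std ** 2)   # Kalman gain from ensemble spread
    return [m + gain * (obs + rng.gauss(0.0, obs_err_std) - m) for m in ensemble]

rng = random.Random(1)
prior = [rng.gauss(0.0, 2.0) for _ in range(300)]   # forecast ensemble
posterior = enkf_update(prior, 5.0, 0.5, rng)       # assimilate one observation
```

The update pulls the ensemble mean toward the observation and shrinks the ensemble spread, which is the mechanism a lag-aware smoother applies across past states as well.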
A Semi-implicit Numerical Scheme for a Two-dimensional, Three-field Thermo-Hydraulic Modeling
Energy Technology Data Exchange (ETDEWEB)
Hwang, Moonkyu; Jeong, Jaejoon
2007-07-15
The behavior of two-phase flow is modeled, depending on the purpose, by either a homogeneous model, a drift flux model, or a separated flow model. Among these models, in the separated flow model the behavior of each flow phase is modeled by its own governing equation, together with interphase models which describe the thermal and mechanical interactions between the phases involved. In this study, a semi-implicit numerical scheme for a two-dimensional, transient, two-fluid, three-field model is derived. The work is an extension of a previous study of a staggered, semi-implicit numerical scheme in one-dimensional geometry (KAERI/TR-3239/2006). The two-dimensional extension is performed by specifying a relevant governing equation set and applying the related finite differencing method. The procedure for employing the semi-implicit scheme is also described in detail. Verifications are performed for a two-dimensional vertical plate for single-phase and two-phase flows. The calculations verify the mass and energy conservation. The symmetric flow behavior for the verification problem also confirms the momentum conservation of the numerical scheme.
Incompressible Turbulent Flow Simulation Using the κ-ɛ Model and Upwind Schemes
Directory of Open Access Journals (Sweden)
V. G. Ferreira
2007-01-01
In the computation of turbulent flows via turbulence modeling, the treatment of the convective terms is a key issue. In the present work, we present a numerical technique for simulating two-dimensional incompressible turbulent flows. In particular, the performance of the high-Reynolds-number κ-ɛ model and a new high-order upwind scheme (the adaptive QUICKEST of Kaibara et al. (2005)) is assessed for 2D confined and free-surface incompressible turbulent flows. The model equations are solved with the fractional-step projection method in primitive variables. Solutions are obtained by using an adaptation of the front-tracking GENSMAC methodology (Tomé and McKee (1994)) for calculating fluid flows at high Reynolds numbers. The calculations are performed by using the 2D version of the Freeflow simulation system (Castello et al. (2000)). A specific way of implementing wall functions is also tested and assessed. The numerical procedure is tested by solving three fluid flow problems, namely, turbulent flow over a backward-facing step, a turbulent boundary layer over a flat plate under zero pressure gradient, and a turbulent free jet impinging onto a flat surface. The numerical method is then applied to solve the flow of a horizontal jet penetrating a quiescent fluid from an entry port beneath the free surface.
Wave Motion Compensation Scheme and Its Model Tests for the Salvage of An Ancient Sunken Boat
Institute of Scientific and Technical Information of China (English)
YE Jia-wei; CHEN Yuan-ming; WANG Dong-jiao; LIU Yue-qin; SONG Xin; HUANG Yuan-tian
2006-01-01
The application of vertical hoisting jacks and wave motion compensation techniques to the salvage of an ancient sunken boat is introduced. The boat is wooden and loaded with cultural relics; it has been immersed at the bottom of the South China Sea for more than 800 years. In order to protect the structure of the boat and the cultural relics inside to the largest extent, an open caisson is used to hold the sunken boat and the surrounding silts before they are raised from the seabed together as a whole. In the paper, first, a seakeeping model test of the salvage barge and open caisson system is carried out to determine some important wave response parameters. Then a further experimental study of the application of the vertical hoisting jack and wave motion compensation scheme to the salvage of the sunken boat is carried out. In the model tests, integrated mechanical-electronic-hydraulic control, wave motion forecasting and wave motion compensation techniques are used to minimize the heave motion of the open caisson. The results of the model tests show that the heave motion of the open caisson can be reduced effectively by the present method.
Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools
Ding, Steven X
2013-01-01
Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: · new material on fault isolation and identification, and fault detection in feedback control loops; · extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and · enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...
Fuzzy Multiple Criteria Decision Making Model with Fuzzy Time Weight Scheme
Directory of Open Access Journals (Sweden)
Chin-Yao Low
2013-11-01
In this study, we propose a general fuzzy multiple criteria decision making model. A new concept, a fuzzy time-weight scheme, is incorporated into the model to establish a fuzzy multiple criteria decision making with time weight (FMCDMTW) model. A real case of a fuzzy multiple criteria decision making (FMCDM) problem is considered in this study: the performance evaluation of auction websites based on the criteria proposed in the related literature. The problem under investigation is thus an FMCDM problem with both historic and recent data. Since the evaluation criteria proposed in the literature cannot be defined precisely and numerically, fuzzy linguistic terms can be used to aggregate them numerically. This not only conforms to human cognition but also benefits interpretation. Furthermore, notice that the literature considered contains a certain amount of historic data. Equally weighted historic data are usually used in FMCDM problems, and this approach introduces bias when the data are collected over a long time period. As a result, a fuzzy time-weighted technique is adopted to resolve this issue.
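One way to realize a time-weight scheme over triangular fuzzy scores is sketched below; the exponential decay form and all numbers are illustrative assumptions, not the paper's formulation:

```python
def time_weights(n_periods, decay=0.8):
    """Exponentially decaying time weights: recent periods count more
    (an assumed form; the paper derives fuzzy time weights differently)."""
    raw = [decay ** (n_periods - 1 - t) for t in range(n_periods)]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_tfn(scores, weights):
    """Weighted average of triangular fuzzy numbers (l, m, u)."""
    return tuple(sum(w * s[k] for w, s in zip(weights, scores)) for k in range(3))

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# hypothetical yearly fuzzy scores of one website criterion, oldest first
scores = [(1.0, 2.0, 3.0), (3.0, 4.0, 5.0), (5.0, 6.0, 7.0)]
w = time_weights(len(scores))
overall = defuzzify(weighted_tfn(scores, w))
```

Because recent (here, higher) scores get larger weights, the time-weighted result exceeds the plain equal-weight average, which is exactly the bias correction the abstract motivates.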
Sensitivity analysis of PBL schemes by comparing WRF model and experimental data
Directory of Open Access Journals (Sweden)
A. Balzarini
2014-09-01
This work discusses the sources of model biases in reconstructing the Planetary Boundary Layer (PBL) height among five commonly used PBL parameterizations. The Weather Research and Forecasting (WRF) Model was applied over the critical area of Northern Italy with 5 km of horizontal resolution, and compared against a wide set of experimental data for February 2008. Three non-local closure PBL schemes (Asymmetrical Convective Model version 2, ACM2; Medium Range Forecast, MRF; Yonsei University, YSU) and two local closure parameterizations (Mellor Yamada Janjic, MYJ; University of Washington Moist Turbulence, UW) were selected for the analysis. Vertical profiles of aerosol number concentrations and Lidar backscatter profiles were collected in the metropolitan area of Milan in order to derive the PBL hourly evolution. Moreover, radio-soundings at Milano Linate airport as well as the surface temperature, mixing ratio and wind speed of several meteorological stations were considered too. Results show that all five parameterizations produce similar performances in terms of temperature, mixing ratio and wind speed in the city of Milan, implying some systematic errors in all simulations. However, UW and ACM2 use the same local closure during nighttime conditions, allowing smaller mean biases (MB) of temperature (ACM2 MB = 0.606 K, UW MB = 0.209 K) and wind speed (ACM2 MB = 0.699 m s−1, UW MB = 0.918 m s−1). All schemes have the same variations of the diurnal PBL height, since over-predictions of temperature and wind speed are found to cause a general overestimation of mixing during its development in winter. In particular, temperature estimates seem to impact the early evolution of the PBL height, while entrainment flux parameterizations have a major influence on the afternoon development. MRF, MYJ and ACM2 use the same approach in reconstructing the entrainment process, producing the largest overestimations of PBL height (MB ranges from 85.51–179.10 m). On
Sensitivity analysis of PBL schemes by comparing WRF model and experimental data
Balzarini, A.; Angelini, F.; Ferrero, L.; Moscatelli, M.; Perrone, M. G.; Pirovano, G.; Riva, G. M.; Sangiorgi, G.; Toppetti, A. M.; Gobbi, G. P.; Bolzacchini, E.
2014-09-01
This work discusses the sources of model biases in reconstructing the Planetary Boundary Layer (PBL) height among five commonly used PBL parameterizations. The Weather Research and Forecasting (WRF) Model was applied over the critical area of Northern Italy with 5 km of horizontal resolution, and compared against a wide set of experimental data for February 2008. Three non-local closure PBL schemes (Asymmetrical Convective Model version 2, ACM2; Medium Range Forecast, MRF; Yonsei University, YSU) and two local closure parameterizations (Mellor Yamada Janjic, MYJ; University of Washington Moist Turbulence, UW) were selected for the analysis. Vertical profiles of aerosol number concentrations and Lidar backscatter profiles were collected in the metropolitan area of Milan in order to derive the PBL hourly evolution. Moreover, radio-soundings at Milano Linate airport as well as the surface temperature, mixing ratio and wind speed of several meteorological stations were considered too. Results show that all five parameterizations produce similar performances in terms of temperature, mixing ratio and wind speed in the city of Milan, implying some systematic errors in all simulations. However, UW and ACM2 use the same local closure during nighttime conditions, allowing smaller mean biases (MB) of temperature (ACM2 MB = 0.606 K, UW MB = 0.209 K), and wind speed (ACM2 MB = 0.699 m s-1, UW MB = 0.918 m s-1). All schemes have the same variations of the diurnal PBL height, since over-predictions of temperature and wind speed are found to cause a general overestimation of mixing during its development in winter. In particular, temperature estimates seem to impact the early evolution of the PBL height, while entrainment flux parameterizations have a major influence on the afternoon development. MRF, MYJ and ACM2 use the same approach in reconstructing the entrainment process, producing the largest overestimations of PBL height (MB ranges from 85.51-179.10 m). On the contrary, the
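The evaluation statistics used throughout these comparisons, mean bias (MB) and root-mean-square error, are simple to compute:

```python
def mean_bias(model, obs):
    """MB: average of model-minus-observation differences."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error of model against observations."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5
```

MB keeps the sign of a systematic error (over- vs under-prediction), while RMSE penalizes scatter regardless of sign, which is why both are reported.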
ERA-Interim forced H-TESSEL and WRF schemes for modeling ground
Rocha, M. J.; Dutra, E.; Vieira, G.; Miranda, P.; Fragoso, M.; Ramos, M.
2009-04-01
Permafrost is central to the carbon cycle and to the climate system and is recognized by the WCRP/WMO as a key element of the Earth System on which research efforts should focus. Compared with the Arctic, very little is known about the distribution, thickness, and properties of permafrost in the Antarctic. The main reason for this is the scarce network of permafrost temperature monitoring boreholes, as well as the small number of active layer monitoring sites. According to the IPCC, in the last decades regions underlain by permafrost have been reduced in extent, and a warming of the ground has been observed in many areas. This study focuses on Livingston and Deception Islands (South Shetlands), located in the Antarctic Peninsula region, one of the Earth's regions where warming has been most significant in the last 50 years. Our work is integrated in a project focused on studying the influence of climate change on permafrost temperatures, which includes systematic and long-term terrain monitoring and also modeling using mesoscale meteorological models. A significant contribution will be the evaluation of the possibilities of transferring the mesoscale modeling approaches to other areas of the Antarctic Peninsula where no data exist on permafrost temperatures. Climate variability of the Antarctic Peninsula region was studied using the new reanalysis product from ECMWF, ERA-Interim, and observational data from meteorological monitoring sites and boreholes run by our group. Monthly and annual cycles of near-surface climate variables are compared. The modeling approach includes H-TESSEL (Hydrology Tiled ECMWF Scheme for Surface Exchanges over Land) and WRF (Weather Research and Forecasting), both forced with ERA-Interim for modeling ground temperatures in the study region. Simulations of both the land surface and mesoscale models are compared with the observational data of soil temperatures. Preliminary results are presented and show that our approach can provide a good tool
Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick
2015-04-01
Impacts of meteorological modeling in the planetary boundary layer (PBL) and of an urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than with the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root-mean-square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without them. At urban stations, the PM10 and PM2.5 concentrations are over-estimated, and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper-air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers; relevant studies on optimizing schemes for the natural ecology planning of rivers are therefore crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural ecology planning of urban rivers can be made by the ANP method. This method can be used to provide references for the sustainable development and construction of urban rivers, and is also suitable for optimizing schemes for urban green space planning and design.
Chang, E. C.; Yoshimura, K.
2016-12-01
The non-iteration dimensional-split semi-Lagrangian (NDSL) advection scheme is applied to the Experimental Climate Prediction Center (ECPC) regional spectral model (RSM) in order to alleviate the Gibbs phenomenon. The Gibbs problem is solved by replacing the spectral prognostic vapor and radioactive tracer calculations with the NDSL method, which considers advection of tracers on a grid system without spectral-space transformations. The NDSL scheme in the RSM successfully solved the Gibbs problem of the radioactive tracers for the Fukushima nuclear power plant accident case. In this study, the analyses focus on the improvement of the simulated precipitation from the RSM when applying the NDSL scheme to hydrometeors. It is shown that the NDSL improves the location and intensity of the precipitation for the case of the Changma front over Korea. Furthermore, a mass-conserving NDSL scheme is also tested alongside the monotonic NDSL scheme for the Changma case. The mass-conserving scheme shows advantages in the simulated humidity fields and rainfall intensity.
2013-01-01
Snow grain size is a key parameter for modeling microwave snow emission properties and the surface energy balance because of its influence on the snow albedo, thermal conductivity and diffusivity. A model of the specific surface area (SSA) of snow was implemented in the one-layer snow model in the Canadian LAnd Surface Scheme (CLASS) version 3.4. This offline multilayer model (CLASS-SSA) simulates the decrease of SSA based on snow age, snow temperature and t...
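The prognostic SSA decrease described above can be sketched as a first-order relaxation toward a minimum SSA with a temperature-dependent rate. This is a hedged illustration of the general idea only; the functional form and all numbers below are assumptions, not the CLASS-SSA parameterization.

```python
import math

def ssa_after(age_days, ssa0=80.0, ssa_min=8.0, t_snow_c=-5.0, tau0_days=14.0):
    """Specific surface area (m^2/kg) of snow after age_days days.

    Assumed form: first-order relaxation of SSA toward ssa_min, with an
    e-folding time that halves for every +10 degC of snow temperature
    (illustrative numbers, not the published CLASS-SSA formulation).
    """
    # warmer snow coarsens faster, so its e-folding time is shorter
    tau = tau0_days * 2.0 ** (-(t_snow_c + 5.0) / 10.0)
    return ssa_min + (ssa0 - ssa_min) * math.exp(-age_days / tau)
```

Fresh snow near 80 m²/kg relaxes toward aged-snow values, and warmer snow coarsens faster, which is why SSA matters for albedo and microwave emission modeling.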
Energy Technology Data Exchange (ETDEWEB)
Abu Saleem, Rabie A., E-mail: raabusaleem@just.edu.jo [Nuclear Engineering Department, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110 (Jordan); Kozlowski, Tomasz, E-mail: txk@illinois.edu [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois at Urbana-Champaign, 216 Talbot Laboratory, 104 S. Wright St., Urbana, IL 61801 (United States); Shrestha, Rijan, E-mail: rijan.shrestha@intel.com [Portland Technology Development, Intel Corporation, 2501 NW 229th Ave Hillsboro OR 97124 (United States)
2016-05-15
Highlights: • The two-fluid model and the challenges associated with its numerical modeling are investigated. • A high-order solver based on flux limiter schemes and the theta method was developed. • The solver was compared to existing thermal hydraulics codes used in the nuclear industry. • The solver was shown to handle fast transients with discontinuities and phase change. - Abstract: Finite volume techniques with a staggered mesh are used to develop a new numerical solver for the one-dimensional two-phase two-fluid model using a high-resolution, Total Variation Diminishing (TVD) scheme. The solver is implemented to analyze numerical benchmark problems for verification and to test its ability to handle discontinuities and fast transients with phase change. Convergence rates are investigated by comparing numerical results to analytical solutions available in the literature for the case of the faucet flow problem. The solver based on the new TVD scheme is shown to exhibit a higher order of accuracy compared to other numerical schemes. Mass errors are also examined when phase change occurs for the shock tube problem, and compared to those of the 1st-order upwind scheme implemented in the nuclear thermal-hydraulics code TRACE. The solver is shown to exhibit numerical stability when applied to problems with discontinuous solutions, and the results of the new solver are free of spurious oscillations.
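The flux-limiter idea behind such TVD solvers can be illustrated on a much simpler problem. The sketch below applies a minmod-limited MUSCL step to 1D scalar advection; it is not the paper's two-phase two-fluid solver, and the grid size, CFL number and limiter choice are illustrative assumptions.

```python
import numpy as np

def minmod(a, b):
    # minmod limiter: zero slope at extrema, the smaller slope elsewhere,
    # which is what keeps the scheme Total Variation Diminishing
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u, c, steps):
    """Minmod-limited MUSCL steps for u_t + a*u_x = 0 (a > 0), periodic grid.
    c = a*dt/dx is the CFL number, 0 < c <= 1."""
    for _ in range(steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
        ul = u + 0.5 * (1.0 - c) * du                       # left state at face i+1/2
        flux = c * ul                                       # upwind flux at face i+1/2
        u = u - (flux - np.roll(flux, 1))                   # conservative update
    return u
```

Run on a step profile, the scheme transports the discontinuity without the spurious oscillations an unlimited high-order scheme would produce, while conserving the total mass exactly.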
Institute of Scientific and Technical Information of China (English)
Zheng Shi-Biao
2004-01-01
A scheme is proposed for measuring the Wigner characteristic function of a cavity field. In the scheme an atom is sent through a slightly detuned cavity and driven by a strong resonant classical field. Then the population of the atom in the ground state directly yields the Wigner characteristic function of the cavity field.
The impact of fires in the UK Met Office's Unified Model and the INFERNO interactive fire scheme
Mangeon, S.
2015-12-01
Forest fires are a key interaction between the land and the atmosphere, yet this interaction is often omitted from Earth System Models. We will present the efforts carried out within the UK Met Office for both its Unified Model (UM8.4) and its land surface model (JULES). These efforts have focused on diagnosing fire occurrence and its impact on composition. We will present the schemes used for diagnostic fire weather indices, and INFERNO (INteractive Fires and Emissions algoRithm for Natural envirOnments). INFERNO follows a reduced-complexity approach and is intended for decadal- to centennial-scale climate simulations and assessment models for policy making. The scheme uses temperature, relative humidity, precipitation and soil moisture to simulate fuel flammability; once combined with ignitions, INFERNO diagnoses burnt area. Using JULES' carbon scheme, burnt area leads to fire emissions, which are inputs to the model's chemistry and aerosol scheme (UKCA). We will show the coupled model's performance in capturing burnt area and fire emissions and investigate the role of fires in the interannual variability of atmospheric composition (in particular CO and aerosols).
A remark on the GNSS single difference model with common clock scheme for attitude determination
Chen, Wantong
2016-09-01
GNSS-based attitude determination is an important field of study, in which two schemes can be used to construct the actual system: the common clock scheme and the non-common clock scheme. Compared with the non-common clock scheme, the common clock scheme can strongly improve both reliability and accuracy. However, to gain these advantages, specific care must be taken in the implementation. These considerations are discussed, based on the carrier phase measurement generating technique in GNSS receivers. A qualitative assessment of potential phase bias contributors is also carried out. Possible technical difficulties are pointed out for the development of single-board multi-antenna GNSS attitude systems with a common clock.
Directory of Open Access Journals (Sweden)
P. Chitra
2017-04-01
Full Text Available Recently, wireless network technologies have been designed for most applications. Congestion arising in a wireless network degrades performance and reduces throughput. A congestion-free network is quite essential in the transport layer to prevent performance degradation in a wireless network. Game theory is a branch of applied mathematics and applied sciences that is used in wireless networks, political science, biology, computer science, philosophy and economics. A great challenge of wireless networks is their congestion caused by various factors. Effective congestion-free alternate path routing is essential to increase network performance. The Stackelberg game theory model is currently employed as an effective tool to design and formulate congestion issues in wireless networks. This work uses a Stackelberg game to design an alternate path model to avoid congestion. In this game, leaders and followers are selected to choose an alternate routing path. The correlated equilibrium is used in the Stackelberg game for making better decisions between non-cooperation and cooperation. Congestion is continuously monitored to increase the throughput in the network. Simulation results show that the proposed scheme can extensively improve network performance by reducing congestion with the help of the Stackelberg game and thereby enhance throughput.
Khairoutdinov, M.
2015-12-01
The representation of microphysics, especially ice microphysics, remains one of the major uncertainties in cloud-resolving models (CRMs). Most cloud schemes use the so-called bulk microphysics approach, in which a few moments of the particle size distributions are used as the prognostic variables. The System for Atmospheric Modeling (SAM) is a CRM that employs two such schemes: a single-moment scheme, which uses only the mass of each of the water phases, and a two-moment scheme, which adds the particle concentration for each hydrometeor category. Of the two, the single-moment scheme is much more computationally efficient, as it uses only two prognostic microphysics variables compared to the ten variables used by the two-moment scheme. The efficiency comes from a rather considerable oversimplification of the microphysical processes. For instance, only the sum of the liquid and icy cloud water is predicted, with the temperature used to diagnose the mixing ratios of the different hydrometeors. The main motivation for using such simplified microphysics has been computational efficiency, especially in the applications of SAM as the super-parameterization in global climate models. Recently, we have extended the single-moment microphysics by adding only one additional prognostic variable, which has, nevertheless, allowed us to separate the cloud ice from the liquid water. We made use of recent observations of ice microphysics collected in various parts of the world to parameterize several aspects of ice microphysics that have not been explicitly represented before in our single-moment scheme. For example, we use the observed broad dependence of ice concentration on temperature to diagnose the ice concentration in addition to the prognostic mass. Also, there is no artificial separation between pristine ice and snow, as often used by bulk models. Instead we prescribe the ice size spectrum as the gamma distribution, with the distribution shape parameter controlled by the
An all-encompassing global convergence result for processive multisite phosphorylation systems.
Eithun, Mitchell; Shiu, Anne
2017-09-01
Phosphorylation, the enzyme-mediated addition of a phosphate group to a molecule, is a ubiquitous chemical mechanism in biology. Multisite phosphorylation, the addition of phosphate groups to multiple sites of a single molecule, may be distributive or processive. Distributive systems, which require an enzyme and substrate to bind several times in order to add multiple phosphate groups, can be bistable. Processive systems, in contrast, require only one binding to add all phosphate groups, and were recently shown to be globally stable. However, this global convergence result was proven only for a specific mechanism of processive phosphorylation/dephosphorylation (namely, all catalytic reactions are reversible). Accordingly, we generalize this result to allow for processive phosphorylation networks in which each reaction may be irreversible, and also to account for possible product inhibition. We accomplish this by first defining an all-encompassing processive network that encapsulates all of these schemes, and then appealing to recent results of Marcondes de Freitas et al. that assert global convergence by way of monotone systems theory and network/graph reductions (corresponding to removing intermediate complexes). Our results form a case study into the question of when global convergence is preserved when reactions and/or intermediate complexes are added to or removed from a network. Copyright © 2017 Elsevier Inc. All rights reserved.
Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.
2016-09-01
The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence-free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high-quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual-mesh leapfrog scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems of current practical interest involving structured composites and metamaterials.
Bogoni, Antonella; Potì, Luca; Ponzini, Filippo; Ghelfi, Paolo
2006-01-01
The electrical modeling of complex electrooptical devices is a useful task for the correct design of their schemes and for the estimation of their performance. In this paper, we consider an electrooptical phase-locked loop (PLL) used to synchronize an RF system clock to the repetition rate of an optical pulsed source, realized by an active fiber mode-locking (ML) technique in the regenerative configuration. The synchronization scheme is suggested by a description of the pulsed source, for the first time, as an optical voltage-controlled oscillator (VCO). In particular, we present a simple new all-electrical model for the proposed optical VCO, and we verify its accuracy by implementing the whole PLL scheme at 2.5 and 10 GHz.
Li, Y.; Han, B.; Métivier, L.; Brossier, R.
2016-09-01
We investigate an optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling. An anti-lumped mass strategy is incorporated to minimize the numerical dispersion. The optimal finite-difference coefficients and the mass weighting coefficients are obtained by minimizing the misfit between the normalized phase velocities and unity. An iterative damped least-squares method, the Levenberg-Marquardt algorithm, is utilized for the optimization. Dispersion analysis shows that the optimal fourth-order scheme presents less grid dispersion and anisotropy than the conventional fourth-order scheme with respect to different Poisson's ratios. Moreover, only 3.7 grid-points per minimum shear wavelength are required to keep the error of the group velocities below 1%. The memory cost is then greatly reduced due to a coarser sampling. A parallel iterative method named CARP-CG is used to solve the large ill-conditioned linear system for the frequency-domain modeling. Validations are conducted with respect to both the analytic viscoacoustic and viscoelastic solutions. Compared with the conventional fourth-order scheme, the optimal scheme generates wavefields with smaller errors under the same discretization setups. Profiles of the wavefields are presented to confirm the better agreement between the optimal results and the analytic solutions.
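The optimization idea, minimizing the misfit between numerical and exact wavenumbers over a band, can be sketched for a 1D staggered 4-point stencil. Because the numerical wavenumber is linear in the coefficients, a plain linear least-squares fit suffices in this reduced setting (the paper's Levenberg-Marquardt treatment handles the full 3D nonlinear phase-velocity misfit with mass weighting; the band limits below are assumptions).

```python
import numpy as np

# Staggered 4-point first-derivative stencil:
#   D u(x) ~ (1/h) * [c1*(u_{+h/2} - u_{-h/2}) + c2*(u_{+3h/2} - u_{-3h/2})]
# Its numerical wavenumber, k_num = (2/h)*[c1*sin(kh/2) + c2*sin(3kh/2)],
# is linear in (c1, c2), so a band-limited least-squares fit is enough here.
kh = np.linspace(0.01, 2.0, 200)                 # fitting band (assumed limits)
A = 2.0 * np.column_stack([np.sin(kh / 2.0), np.sin(3.0 * kh / 2.0)])
c_opt, *_ = np.linalg.lstsq(A, kh, rcond=None)   # minimize |k_num - k| over band
c_taylor = np.array([9.0 / 8.0, -1.0 / 24.0])    # classical Taylor coefficients
err = lambda c: A @ c - kh                       # wavenumber misfit on the band
```

The optimized pair trades a tiny low-wavenumber error for a much smaller error near the band edge, which is exactly why optimized schemes tolerate coarser sampling (fewer grid points per wavelength) than Taylor-derived ones.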
Institute of Scientific and Technical Information of China (English)
DING XIU-HUAN; FU ZHI-GUO; ZHANG SHU-GONG
2009-01-01
This paper proposes an XTR version of the Kurosawa-Desmedt scheme. Our scheme is secure against adaptive chosen-ciphertext attack under the XTR version of the Decisional Diffie-Hellman assumption in the standard model. Comparing the efficiency of the Kurosawa-Desmedt scheme and the proposed XTR-Kurosawa-Desmedt scheme, we find that the proposed scheme is more efficient than the Kurosawa-Desmedt scheme in both communication and computation without compromising security.
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
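The decomposition described above, a baseline residence time modulated by an environmental scalar ξ = ξt · ξw, can be sketched with a one-pool model. This is a hedged illustration of the bookkeeping only; the parameter values are assumptions, not BEPS parameters.

```python
def residence_time(c_pool, k_base, xi_t, xi_w):
    """Ecosystem carbon residence time tau_E = C / outflux for a one-pool sketch.

    The baseline residence time tau'_E = 1/k_base is a biome property; the
    environmental scalar xi = xi_t * xi_w scales decomposition, so
    tau_E = tau'_E / xi.  Numbers are illustrative, not BEPS parameters.
    """
    xi = xi_t * xi_w
    outflux = k_base * xi * c_pool          # first-order decomposition flux
    return c_pool / outflux                 # equals 1.0 / (k_base * xi)
```

With τ'E = 40 yr and ξ = 0.5, τE = 80 yr. The abstract's finding that ξw varies far more than ξt means that, in this formulation, the moisture scalar dominates the spatiotemporal variation of τE.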
European air quality modelled by CAMx including the volatility basis set scheme
Directory of Open Access Journals (Sweden)
G. Ciarelli
2015-12-01
Full Text Available Four periods of EMEP (European Monitoring and Evaluation Programme intensive measurement campaigns (June 2006, January 2007, September–October 2008 and February–March 2009 were modelled using the regional air quality model CAMx with VBS (Volatility Basis Set approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the period of February–March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosols (OA. Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database Airbase. Sulfur dioxide (SO2 and ozone (O3 were found to be overestimated for all the four periods with O3 having the largest mean bias during June 2006 and January–February 2007 periods (8.93 and 12.30 ppb mean biases, respectively. In contrast, nitrogen dioxide (NO2 and carbon monoxide (CO were found to be underestimated for all the four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 very well for all the four periods with average biases ranging from −2.13 to 1.04 μg m-3. Comparisons with AMS (Aerosol Mass Spectrometer measurements at different sites in Europe during February–March 2009, showed that in general the model over-predicts the inorganic aerosol fraction and under-predicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of volatility basis set scheme (VBS on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurements data were performed. For February–March 2009 the chamber-case reduced the total OA concentrations by about 43 % on average. On the other hand, a test based on ambient measurement data increased OA concentrations by about 47 % for the same
Incremental 4D-VAR assimilation scheme based on Lorenz model
Institute of Scientific and Technical Information of China (English)
WANG Xidong; XU Dongfeng; XU Xiaohua
2008-01-01
The four-dimensional variational (4D-VAR) data assimilation method is a perfect data assimilation solution in theory, but the computational issue is quite difficult in operational implementation. The incremental 4D-VAR assimilation scheme is set up in order to reduce the computational cost. It is shown through contrast experiments that the accuracy of the observations, the length of the assimilation window and the choice of the first guess have an important influence on the assimilation outcome. Compared with the standard 4D-VAR assimilation scheme, the incremental 4D-VAR assimilation scheme shows its advantage in computation speed through an assimilation experiment.
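The role of the assimilation window and first guess can be sketched on the Lorenz-63 model. This is a toy illustration under stated assumptions: the observation values, window length and descent loop are illustrative, and a finite-difference gradient with backtracking stands in for the tangent-linear/adjoint machinery of a real incremental 4D-VAR scheme.

```python
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # one RK4 step of the Lorenz-63 system (a stand-in toy model)
    f = lambda v: np.array([s * (v[1] - v[0]), v[0] * (r - v[2]) - v[1], v[0] * v[1] - b * v[2]])
    k1 = f(x); k2 = f(x + 0.5 * dt * k1); k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def forecast(x0, n):
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        traj.append(lorenz_step(traj[-1]))
    return np.array(traj)

def cost(x0, obs, obs_idx):
    # observation-misfit part of the 4D-VAR cost over the window (B-term omitted)
    traj = forecast(x0, obs_idx[-1])
    return 0.5 * sum(float(np.sum((traj[i] - y) ** 2)) for i, y in zip(obs_idx, obs))

rng = np.random.default_rng(0)
truth0 = np.array([1.0, 1.0, 1.0])
obs_idx = [10, 20, 30, 40]                      # assimilation window: 40 model steps
obs = [forecast(truth0, obs_idx[-1])[i] + 0.01 * rng.standard_normal(3) for i in obs_idx]

guess = truth0 + 0.5                            # the first guess matters (see abstract)
x0 = guess.copy()
for _ in range(100):                            # crude descent; an adjoint model
    J = cost(x0, obs, obs_idx)                  # would replace the FD gradient here
    g = np.zeros(3)
    for k in range(3):
        e = np.zeros(3); e[k] = 1e-6
        g[k] = (cost(x0 + e, obs, obs_idx) - cost(x0 - e, obs, obs_idx)) / 2e-6
    step = 0.01
    while step > 1e-9 and cost(x0 - step * g, obs, obs_idx) >= J:
        step *= 0.5                             # backtracking keeps J non-increasing
    if cost(x0 - step * g, obs, obs_idx) < J:
        x0 = x0 - step * g
```

The descent drives the initial condition toward one whose trajectory fits the window observations; the cost of repeated forward integrations is exactly the computational burden the incremental formulation aims to reduce.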
Milan, M.; Schüttemeyer, D.; Venema, V.; Simmer, C.
2009-04-01
We implemented a PI (Physical Initialization) method in the non-hydrostatic limited-area model COSMO (version 4.2) of the DWD (German Meteorological Service). The goal is the improvement of quantitative rain nowcasting with a high-resolution NWP model. The input radar data is a DWD product: the national radar composite for 16 radars with a spatial resolution of one kilometer and a time resolution of 5 minutes. The conversion from reflectivity to rain rate is already made by DWD. These data are interpolated onto the LM grid (2.8 × 2.8 km resolution) in order to calculate the analysed precipitation rate, which depends on the observed precipitation and the model precipitation. The PIB (Physical Initialization Bonn) takes as input the radar-based precipitation product and a cloud top height field retrieved from satellite observations; in our case we are using the SAFNWC products generated from Meteosat Second Generation data by DWD. During the assimilation window PIB adjusts the vertical wind, humidity, cloud water and cloud ice in order to force the model state towards the measurements. The most distinctive feature of the algorithm is the adjustment of the vertical wind profile in the framework of a simple precipitation scheme. The PIB assumes that the rain rate is proportional to the vertical humidity flux at cloud base, and the vertical wind is adapted according to the conversion efficiency of saturated water vapor into rain water at the cloud base. This parameter is dynamically adjusted by the comparison between the model precipitation and the radar precipitation. The model is tested in convective cases over Germany; an identical twin experiment is used in order to demonstrate the consistency of PIB with the physics of the NWP model. In the tests which we have already performed, this method has improved the forecast of the precipitation patterns, as well as the dynamics of the events. These improvements are found both during the assimilation window and for the first hours
Tsai, F.; Lai, J. S.; Chiang, S. H.
2015-12-01
Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, can represent the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, the landslide extents extracted using image-based visual interpretation and automatic detection may contain many depositions that can reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land-use, NDVI, soil, and distance to faults, rivers and roads. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results
Temimi, Marouane; Chaouch, Naira; Weston, Michael; Ghedira, Hosni
2017-04-01
This study covers five fog events reported in 2014 at Abu Dhabi International Airport in the United Arab Emirates (UAE). We assess the performance of the WRF-ARW model during fog conditions and we intercompare seven different PBL schemes and assess their impact on the performance of the simulations. Seven PBL schemes, namely, Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3, were tested. Radiosonde data from the Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles were used to assess the performance of the model. All PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and BIAS for all PBLs were 15.75 % and -9.07 %, respectively, whereas the RMSE and BIAS obtained when QNSE was used were 14.65 % and -6.3 %, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface level than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained when the lead time varied between 12 and 18 hours. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
Etemadsaeed, Leila; Moczo, Peter; Kristek, Jozef; Ansari, Anooshiravan; Kristekova, Miriam
2016-10-01
We investigate the problem of finite-difference approximations of the velocity-stress formulation of the equation of motion and constitutive law on the staggered grid (SG) and collocated grid (CG). For approximating the first spatial and temporal derivatives, we use three approaches: Taylor expansion (TE), dispersion-relation preserving (DRP), and combined TE-DRP. The TE and DRP approaches represent two fundamental extremes. We derive useful formulae for DRP and TE-DRP approximations. We compare accuracy of the numerical wavenumbers and numerical frequencies of the basic TE, DRP and TE-DRP approximations. Based on the developed approximations, we construct and numerically investigate 14 basic TE, DRP and TE-DRP finite-difference schemes on SG and CG. We find that (1) the TE second-order in time, TE fourth-order in space, 2-point in time, 4-point in space SG scheme (that is the standard (2,4) VS SG scheme, say TE-2-4-2-4-SG) is the best scheme (of the 14 investigated) for large fractions of the maximum possible time step, or, in other words, in a homogeneous medium; (2) the TE second-order in time, combined TE-DRP second-order in space, 2-point in time, 4-point in space SG scheme (say TE-DRP-2-2-2-4-SG) is the best scheme for small fractions of the maximum possible time step, or, in other words, in models with large velocity contrasts if uniform spatial grid spacing and time step are used. The practical conclusion is that in computer codes based on standard TE-2-4-2-4-SG, it is enough to redefine the values of the approximation coefficients by those of TE-DRP-2-2-2-4-SG for increasing accuracy of modelling in models with large velocity contrast between rock and sediments.
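The TE spatial part of the standard (2,4) velocity-stress scheme is the classical fourth-order staggered first-derivative stencil, which can be sketched and checked directly (the grid spacing and test function below are illustrative choices, not the paper's setup).

```python
import numpy as np

def d1_staggered_o4(f, h):
    """TE fourth-order staggered first derivative (the spatial part of the
    standard (2,4) velocity-stress scheme).

    f is sampled at half-integer nodes x_{m+1/2} = (m + 0.5)*h; the
    derivative is returned at the integer nodes x_i = i*h, i = 2 .. len(f)-2.
    """
    c1, c2 = 9.0 / 8.0, -1.0 / 24.0   # Taylor-expansion coefficients
    return (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / h
```

On sin(x) the maximum error shrinks by roughly a factor of 16 when h is halved, confirming the fourth-order accuracy in space; DRP and TE-DRP variants replace c1 and c2 to trade this asymptotic rate for better behavior at coarse sampling.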
Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.
Directory of Open Access Journals (Sweden)
Pingzhao Hu
2009-04-01
Full Text Available One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.
An Indirect Data Assimilation Scheme for Deep Soil Temperature in the Pleim-Xiu Land Surface Model
The Pleim-Xiu land surface model (PX LSM) has been improved by the addition of a second indirect data assimilation scheme. The first, which was described previously, is a technique where soil moisture is nudged according to the biases in 2-m air temperature and relative humidity be...
Kim, Ju-Hye; Shin, Dong-Bin; Kummerow, Christian
2014-05-01
Physically-based rainfall retrievals from passive microwave sensors often make use of cloud-resolving models (CRMs) to build a-priori databases of potential rain structures. Each CRM, however, has its own assumptions on the cloud microphysics. Hence, approximated microphysics may cause uncertainties in the a-priori information, resulting in inaccurate rainfall estimates. This study first builds a-priori databases by combining Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) observations and simulations from the Weather Research and Forecasting (WRF) model with six different cloud microphysics schemes. The microphysics schemes include the Purdue Lin (LIN), WRF Single-Moment 6 (WSM6), Goddard Cumulus Ensemble (GCE), Thompson (THOM), WRF Double-Moment 6 (WDM6), and Morrison (MORR) schemes. As expected, the characteristics of the a-priori databases are inherited from the individual cloud microphysics schemes. There are several distinct differences among the databases. In particular, excessive graupel and snow exist with the LIN and THOM schemes, while more rainwater is incorporated into the a-priori information with WDM6 than with any of the other schemes. Major results show that convective rainfall regions are not well captured by the retrievals based on the LIN and THOM schemes, with correlations of 0.56 and 0.73. Rainfall distributions and quantities retrieved from the estimations based on the WSM6 and WDM6 schemes, however, show relatively better agreement with the PR observations, with correlations of 0.79 and 0.81, respectively. Based on the comparisons of the various microphysics schemes in the retrievals, it appears that differences in the a-priori databases considerably affect the properties of the rainfall estimations. This study also examines the discrepancy between rain rates estimated from the passive radiometer and the active radar for two rainfall systems with different cloud microphysics near the Yellow Sea. The first case has a high cloud top (HCT) with large ice
Energy Technology Data Exchange (ETDEWEB)
Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)
2010-12-15
This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without penalizing computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to the design of an objective scheme easily applicable
2009-09-01
[Fragment of a report table of contents and model inventory: Section 2.3 "Delfin and Finel", Section 2.4 "ELCIRC and ...". The surviving text describes Delfin as a finite-volume/finite-difference (FV/FD) model designed for coastal oceans, shelves, estuaries, inlets, floodplains, rivers and beaches, developed by D. Ham under the supervision of J..., and ELCIRC as an Eulerian-Lagrangian CIRCulation model (FV/FD) used by NOAA's National Ocean Services for storm surge/inundation applications.]
Secure proxy signature scheme with fast revocation in the standard model
Institute of Scientific and Technical Information of China (English)
LIU Zhen-hua; HU Yu-pu; ZHANG Xiang-song; MA Hua
2009-01-01
The proposed scheme is provably secure based on the computational Diffie-Hellman (CDH) intractability assumption, without relying on random oracles, and satisfies all the security requirements for a secure proxy signature.
Mass-modulation schemes for a class of wave energy converters: Experiments, models, and efficacy
Diamond, CA; Judge, CQ; Orazov, B; Savaş, Ö; O'Reilly, OM
2015-01-01
In a recent series of works, mass-modulation schemes have been proposed for a class of ocean wave energy converters (WECs). The goal of these schemes is to improve the energy-harvesting capability of such devices by taking advantage of the ambient water. However, this improvement comes at the cost of increased system complexity and possible impulse loadings at the instants when the mass changes. In the present work, experimental results for a pair of...
Xia, Weiwei; Shen, Lianfeng
We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.
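The multi-dimensional Markov-chain analysis described here builds on classical loss-system results. As a one-dimensional sketch of the kind of blocking-probability computation involved (the channel count and offered load below are illustrative, not values from the paper), the Erlang-B formula can be evaluated with its numerically stable recursion:

```python
def erlang_b(channels: int, load: float) -> float:
    """Blocking probability of an M/M/c/c loss system via the Erlang-B recursion."""
    b = 1.0  # B(0, load) = 1
    for c in range(1, channels + 1):
        b = load * b / (c + load * b)
    return b

# e.g. a single cell with 10 voice channels offered 6 Erlangs of traffic
print(round(erlang_b(10, 6.0), 4))
```

The paper's model generalizes this idea to multiple dimensions (voice/data, cellular/WLAN, queues for handoff calls), where the stationary distribution is no longer available in closed form.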
Yu, Lang; Huang, Mei
2015-01-01
We study the chiral phase transition in the presence of a chiral chemical potential $\mu_5$ using the two-flavor Nambu--Jona-Lasinio model. In particular, we analyze why one can obtain two opposite behaviors of the chiral critical temperature as a function of $\mu_5$ in the framework of different regularization schemes. We compare the modifications of the chiral condensate and the critical temperature due to $\mu_5$ in different regularization schemes, analytically and numerically. Finally, we find that, for the conventional hard-cutoff regularization scheme, the increase of the critical temperature with the chiral chemical potential is an artifact, caused by the scheme not including the complete contribution from thermal fluctuations. When the thermal contribution is fully taken into account, the chiral critical temperature decreases with $\mu_5$.
Kartje, John F.
1995-10-01
I have computed the 1200-8000 Å thermal continuum polarization induced by gas and dust arranged in configurations compatible with current active galactic nuclei (AGN) unification schemes. Both uniform-density tori and stratified-density disk-driven winds were considered. A Monte Carlo radiative transfer code was developed which includes the polarization mechanisms of electron and dust scattering as well as dichroic extinction by aligned grains. A Galactic-type grain population was assumed. Based on these calculations, I propose a new interpretation of many of the observed polarization traits of Seyfert galaxies and QSOs: namely, that the polarization in these sources is induced by the same optically thick material which is assumed to obscure the central engine in unification schemes. In particular, I suggest that stratified-density winds could provide a natural explanation (and one consistent with unification models) of the polarization trends observed in Seyfert galaxies. Such winds can display polarizations (P ≲ 20%) oriented perpendicular to the axis along viewing angles inclined to the axis by θ0 ≳ 45°; in well-collimated winds, this polarization shifts to smaller magnitudes (P ≲ 2%) and parallel orientations for more face-on viewing, consistent with the patterns observed in Seyfert 2 and Seyfert 1 sources, respectively. In less-collimated winds, scattering alone tends to produce parallel orientations for all viewing angles; perpendicular polarization at large θ0 can result if there is a high degree of magnetic grain alignment. The simplest torus models (i.e., uniform-density, opaque gas and dust) do not reproduce this flip in polarization position angle. Furthermore, they generally display high polarization magnitudes (P ≳ 10%) along most viewing angles θ0 > θ∞ (where θ∞ is the torus half-opening angle) and negligible polarization along θ0 < θ∞. Unlike previous models for AGN polarization which invoke scattering by optically thin electron
Directory of Open Access Journals (Sweden)
N. Bellouin
2012-08-01
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC) and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and ground-based aerosol observations. GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosols, whereas black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and cloud susceptibilities leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m^{−2} on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in first indirect aerosol
Directory of Open Access Journals (Sweden)
Isaac Osei
2016-11-01
Techno-economic models for optimised utilisation of jatropha oil under an out-grower farming scheme were developed based on different considerations for oil and by-product utilisation. Model 1: an out-grower scheme where the oil is exported and the press cake utilised for compost. Model 2: an out-grower scheme with six scenarios considered for the utilisation of the oil and by-products. Linear programming models were developed based on the outcomes of these models to optimise the use of the oil through profit maximisation. The findings revealed that Model 1 was financially viable from the processor's perspective but not for the farmer at a seed price of $0.07/kg. All scenarios considered under Model 2 were financially viable from the processor's perspective but not for the farmer at a seed price of $0.07/kg; however, at a seed price of $0.085/kg, financial viability was achieved for both parties. Optimising the utilisation of the oil resulted in an annual maximum profit of $123,300.
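The profit-maximisation step can be pictured as a small linear program. The sketch below is purely illustrative: the product split, prices and capacities are hypothetical stand-ins, not figures from the study; it only shows how a two-variable allocation LP is solved by checking the vertices of the feasible region:

```python
# Illustrative LP sketch of profit-maximising oil allocation. All prices,
# capacities and product names are hypothetical, not from the paper.
# Decision variables: litres/year of oil sold as straight fuel (x1)
# versus as biodiesel feedstock (x2).
profit = {"fuel": 0.30, "biodiesel": 0.25}  # assumed $ profit per litre
oil_supply = 500_000                        # assumed litres/year available
fuel_demand_cap = 350_000                   # assumed market cap on fuel sales

# For a 2-variable LP the optimum lies at a vertex of the feasible polygon,
# so it suffices to evaluate the candidate vertices directly.
vertices = [
    (0, 0),
    (fuel_demand_cap, 0),
    (0, oil_supply),
    (fuel_demand_cap, oil_supply - fuel_demand_cap),
]
best = (0.0, (0, 0))
for x1, x2 in vertices:
    if x1 + x2 <= oil_supply and 0 <= x1 <= fuel_demand_cap and x2 >= 0:
        p = profit["fuel"] * x1 + profit["biodiesel"] * x2
        if p > best[0]:
            best = (p, (x1, x2))

print(best)  # maximum annual profit and the allocation achieving it
```

Real formulations of this kind (more products, seed-supply and processing constraints) are solved with a general LP solver rather than by vertex enumeration.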
Franz, Martina; Simpson, David; Arneth, Almut; Zaehle, Sönke
2017-01-01
Ozone (O3) is a toxic air pollutant that can damage plant leaves and substantially affect the plant's gross primary production (GPP) and health. Realistic estimates of the effects of tropospheric anthropogenic O3 on GPP are thus potentially important to assess the strength of the terrestrial biosphere as a carbon sink. To better understand the impact of ozone damage on the terrestrial carbon cycle, we developed a module to estimate O3 uptake and damage of plants for a state-of-the-art global terrestrial biosphere model called OCN. Our approach accounts for ozone damage by calculating (a) O3 transport from 45 m height to leaf level, (b) O3 flux into the leaf, and (c) ozone damage of photosynthesis as a function of the accumulated O3 uptake over the lifetime of a leaf. A comparison of modelled canopy conductance, GPP, and latent heat to FLUXNET data across European forest and grassland sites shows generally good performance of OCN including ozone damage. This comparison provides a good baseline on top of which ozone damage can be evaluated. In comparison to literature values, we demonstrate that the new model version produces realistic O3 surface resistances, O3 deposition velocities, and stomatal to total O3 flux ratios. A sensitivity study reveals that key metrics of the air-to-leaf O3 transport and O3 deposition, in particular the stomatal O3 uptake, are reasonably robust against uncertainty in the underlying parameterisation of the deposition scheme. Nevertheless, correctly estimating canopy conductance plays a pivotal role in the estimate of cumulative O3 uptake. We further find that accounting for stomatal and non-stomatal uptake processes substantially affects simulated plant O3 uptake and accumulation, because aerodynamic resistance and non-stomatal O3 destruction reduce the predicted leaf-level O3 concentrations. Ozone impacts on GPP and transpiration in a Europe-wide simulation indicate that tropospheric O3 impacts the regional carbon and water cycling less
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2016-04-01
This contribution presents a framework which enables the use of an Evolutionary Algorithm (EA) for the calibration and regionalization of the hydrological model COSEROreg. COSEROreg uses an updated version of the HBV-type model COSERO (Kling et al. 2014) for the modelling of hydrological processes and is embedded in a parameter regionalization scheme based on Samaniego et al. (2010). The latter uses subscale information to estimate model parameters via a priori chosen transfer functions (often derived from pedotransfer functions). However, the transferability of the regionalization scheme to different model concepts and the integration of new forms of subscale information is not straightforward. (i) The usefulness of (new) single subscale information layers is unknown beforehand. (ii) Additionally, the establishment of functional relationships between these (possibly meaningless) subscale information layers and the distributed model parameters remains a central challenge in the implementation of a regionalization procedure. The proposed method theoretically provides a framework to overcome this challenge. The implementation of the EA encompasses the following procedure: First, a formal grammar is specified (Ryan et al., 1998). The construction of the grammar defines the set of possible transfer functions and also allows hydrological domain knowledge to be incorporated into the search itself. The EA iterates over the given space by combining parameterized basic functions (e.g. linear or exponential functions) and subscale information layers into transfer functions, which are then used in COSEROreg. However, a pre-selection model is applied beforehand to sort out unfeasible proposals by the EA and to reduce the necessary model runs. A second optimization routine is used to optimize the parameters of the transfer functions proposed by the EA. This concept, namely using two nested optimization loops, is inspired by the idea of Lamarckian Evolution and the Baldwin Effect
Energy Technology Data Exchange (ETDEWEB)
Beliaev, J.; Trunov, N.; Tschekin, I. [OKB Gidropress (Russian Federation); Luther, W. [GRS Garching (Germany); Spolitak, S. [RNC-KI (Russian Federation)
1995-12-31
Currently the ATHLET code is widely applied for modelling of several Power Plants of WWER type with horizontal steam generators. A main drawback of all these applications is the insufficient verification of the models for the steam generator. This paper presents the nodalization schemes for the secondary side of the steam generator, the results of stationary calculations, and preliminary comparisons to experimental data. The consideration of circulation in the water inventory of the secondary side is proved to be necessary. (orig.). 3 refs.
Energy Technology Data Exchange (ETDEWEB)
Camacho, E.F.; Rubio, F.R. [Universidad de Sevilla, Escuela Superior de Ingenieros, Departamento de Ingenieria de Sistemas y Automatica, Camino de Los Descubrimientos s/n, E-41092, Sevilla (Spain); Berenguel, M. [Universidad de Almeria, Departamento de Lenguajes y Computacion, Area de Ingenieria de Sistemas y Automatica, Carretera Sacramento s/n, E-04120 La Canada, Almeria (Spain); Valenzuela, L. [Plataforma Solar de Almeria - CIEMAT, Carretera Senes s/n, P.O. Box 22, E-04200 Tabernas, Almeria (Spain)
2007-10-15
This article presents a survey of the different automatic control techniques that have been applied to control the outlet temperature of solar plants with distributed collectors during the last 25 years. Different aspects of the control problem involved in this kind of plant are treated, from modeling and simulation approaches to the basic control schemes developed and successfully applied in real solar plants. A classification of the modeling and control approaches is used to explain the main features of each strategy. (author)
Ackleh, Azmy S; Ma, Baoling; Thibodeaux, Jeremy J
2013-09-01
We develop a second-order high-resolution finite difference scheme to approximate the solution of a mathematical model describing the within-host dynamics of malaria infection. The model consists of two nonlinear partial differential equations coupled with three nonlinear ordinary differential equations. Convergence of the numerical method to the unique weak solution with bounded total variation is proved. Numerical simulations demonstrating the achievement of the designed accuracy are presented.
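As a much-simplified stand-in for the model's structured-population transport equations (the paper's scheme is second-order and high-resolution, while this sketch is only first-order upwind with illustrative parameters), a single linear transport-decay PDE u_t + u_x = -mu*u can be discretized as:

```python
import numpy as np

# First-order upwind sketch for u_t + u_x = -mu*u, a toy analogue of the
# within-host structured-population equations. Grid sizes, mu and the
# initial pulse are illustrative; the paper's scheme is second order.
nx, nt = 100, 200
dx, dt = 1.0 / nx, 0.4 / nt      # CFL number dt/dx = 0.2 < 1
mu = 0.5                         # linear decay (death/clearance) rate
x = np.linspace(0.0, 1.0, nx + 1)
u = np.exp(-100.0 * (x - 0.2) ** 2)   # initial density pulse centred at 0.2

for _ in range(nt):
    un = u.copy()
    # upwind difference in x, explicit Euler in time, decay sink term
    u[1:] = un[1:] - dt / dx * (un[1:] - un[:-1]) - dt * mu * un[1:]
    u[0] = 0.0  # zero-influx boundary (assumed)

# after t = 0.4 the pulse has advected to about x = 0.6 and decayed
print(x[np.argmax(u)], u.max())
```

The first-order scheme is diffusive, which is exactly why the paper constructs a second-order high-resolution scheme with provable convergence to the bounded-variation weak solution.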
A reaction-based river/stream water quality model: Model development and numerical schemes
Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C.; Jardine, Philip M.
2008-01-01
This paper presents the conceptual and mathematical development of a numerical model of sediment and reactive chemical transport in rivers and streams. The distribution of mobile suspended sediments and immobile bed sediments is controlled by hydrologic transport as well as erosion and deposition processes. The fate and transport of water quality constituents involving a variety of chemical and physical processes is mathematically described by a system of reaction equations for immobile constituents and advective-dispersive-reactive transport equations for mobile constituents. To circumvent stiffness associated with equilibrium reactions, matrix decomposition is performed via Gauss-Jordan column reduction. After matrix decomposition, the system of water quality constituent reactive transport equations is transformed into a set of thermodynamic equations representing equilibrium reactions and a set of transport equations involving no equilibrium reactions. The decoupling of equilibrium and kinetic reactions enables robust numerical integration of the partial differential equations (PDEs) for non-equilibrium-variables. Solving non-equilibrium-variable transport equations instead of individual water quality constituent transport equations also reduces the number of PDEs. A variety of numerical methods are investigated for solving the mixed differential and algebraic equations. Two verification examples are compared with analytical solutions to demonstrate the correctness of the code and to illustrate the importance of employing application-dependent numerical methods to solve specific problems.
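The Gauss-Jordan decoupling step can be illustrated on a toy network: combining species rows so that the fast (equilibrium) reaction column is eliminated yields a lumped variable whose transport equation contains no stiff equilibrium term. The stoichiometry below is hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Toy illustration of decoupling an equilibrium reaction by Gauss-Jordan
# column reduction (species and stoichiometry are hypothetical).
# Rows: species A, B, C; columns: reaction R1 (fast/equilibrium, e.g. A <-> B)
# and reaction R2 (kinetic, e.g. A -> C).
S = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])

# Eliminate the fast-reaction column below its pivot by adding row A to row B,
# i.e. introduce the lumped variable B' = A + B.
T = np.eye(3)
T[1, 0] = 1.0
S_red = T @ S
print(S_red)  # row for B' has a zero entry in the equilibrium column
```

The lumped variable B' = A + B is unaffected by the fast reaction, so its transport PDE can be integrated without the stiff equilibrium source term; the equilibrium relation is then applied algebraically to split B' back into A and B.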
Yang, Xiaofeng
2016-12-01
In this paper, we develop a series of efficient numerical schemes to solve the phase field model for homopolymer blends. The governing system is derived from the energetic variational approach of a total free energy that consists of a nonlinear logarithmic Flory-Huggins potential and a gradient entropy with a concentration-dependent de Gennes-type coefficient. The main challenge in solving this kind of model numerically is the time-marching problem, i.e., how to develop suitable temporal discretizations for the nonlinear terms that preserve the energy stability at the discrete level. We address this issue by developing first- and second-order temporal approximation schemes based on the "Invariant Energy Quadratization" method, in which all nonlinear terms are treated semi-explicitly. Consequently, the resulting numerical schemes lead to a symmetric positive definite linear system to be solved at each time step. The unconditional energy stabilities are further proved. Various numerical simulations in 2D and 3D are presented to demonstrate the stability and accuracy of the proposed schemes.
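The Invariant Energy Quadratization idea can be sketched on a scalar toy gradient flow (this is not the paper's homopolymer system; the double-well energy and step size are illustrative). With E(u) = (u^2-1)^2/4, the auxiliary variable q = u^2-1 makes the energy quadratic, E = q^2/4, and a first-order semi-explicit update reduces to a linear (here scalar) solve:

```python
# Scalar IEQ sketch for the gradient flow u_t = -u*(u^2 - 1),
# i.e. E(u) = (u^2 - 1)^2 / 4. Quadratize with q = u^2 - 1 so E = q^2 / 4.
# Discrete system: (u^{n+1}-u^n)/dt = -u^n q^{n+1},
#                   q^{n+1} = q^n + 2 u^n (u^{n+1}-u^n),
# which eliminates to the linear scalar update below. dt and u0 illustrative.
dt, u = 0.1, 0.5
q = u * u - 1.0
energies = []
for _ in range(100):
    a = u
    u = u - dt * a * q / (1.0 + 2.0 * dt * a * a)  # linear solve for u^{n+1}
    q = q + 2.0 * a * (u - a)                      # consistent update of q
    energies.append(0.25 * q * q)                  # quadratized energy q^2/4

# the quadratized energy decays monotonically and u approaches a minimiser (+1)
print(u, energies[-1])
```

In the full PDE setting the same construction yields a symmetric positive definite linear system per time step, exactly as the abstract states.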
Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel
2016-10-01
A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used both to correct the systematic error introduced in the system by the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and to improve the representation of small-scale errors in the background error covariance matrix. An ensemble system, generated through perturbation of the assimilated observations, is run offline for further use in the hybrid scheme. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments include, in different combinations, the systematic error correction and a hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of the temperature and salinity misfits by 55 and 42 % respectively, relative to statistics arising from standard climatological covariances without systematic error correction.
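The core of such a hybrid scheme is a weighted blend of a static covariance with an ensemble-derived one. A minimal sketch (state dimension, blending weight and ensemble are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Sketch of a hybrid background-error covariance: blend a static
# (climatological) matrix with an ensemble-derived "errors of the day"
# matrix. n, members, alpha and the ensemble itself are illustrative.
rng = np.random.default_rng(0)
n, members, alpha = 4, 20, 0.5

B_static = np.eye(n)                       # static/climatological covariance
X = rng.normal(size=(n, members))          # ensemble of perturbed states
X -= X.mean(axis=1, keepdims=True)         # perturbations about ensemble mean
B_ens = X @ X.T / (members - 1)            # sample (flow-dependent) covariance

B_hybrid = alpha * B_static + (1.0 - alpha) * B_ens
print(np.allclose(B_hybrid, B_hybrid.T))   # the blend stays symmetric
```

Because the static part is positive definite and the ensemble part is positive semi-definite, the blend remains a valid covariance; operational systems additionally localize the ensemble term to suppress spurious long-range correlations.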
Teruzzi, Anna; Dobricic, Srdjan; Solidoro, Cosimo; Cossarini, Gianpiero
2014-01-01
Increasing attention is dedicated to the implementation of suitable marine forecast systems for estimating the state of the ocean. Within the framework of the European MyOcean infrastructure, the pre-existing short-term Mediterranean Sea biogeochemistry operational forecast system has been upgraded by assimilating remotely sensed ocean color data in the coupled transport-biogeochemical model OPATM-BFM using a 3-D variational data assimilation (3D-VAR) procedure. In the present work, the 3D-VAR scheme is used to correct the four phytoplankton functional groups included in the OPATM-BFM in the period July 2007 to September 2008. The 3D-VAR scheme decomposes the error covariance matrix using a sequence of different operators that account separately for vertical covariance, horizontal covariance, and covariance among biogeochemical variables. The assimilation solution is found in a reduced dimensional space, and the innovation for the biogeochemical variables is obtained by the sequential application of the covariance operators. Results show a general improvement in the forecast skill, providing a correction of the basin-scale bias of surface chlorophyll concentration and of the local-scale spatial and temporal dynamics of typical bloom events. Further, analysis of the assimilation skill provides insights into the functioning of the model. The computational costs of the assimilation scheme adopted are low compared to other assimilation techniques, and its modular structure facilitates further developments. The 3D-VAR scheme thus proves especially suitable for implementation within a biogeochemistry operational forecast system.
Diagnosis and Modeling of the Explosive Development of Winter Storms: Sensitivity to PBL Schemes
Liberato, Margarida L. R.; Pradhan, Prabodha K.
2014-05-01
The correct representation of extreme windstorms in regional models is of great importance for impact studies of climate change. The Iberian Peninsula has recently witnessed major damage from intense winter extratropical cyclones like Klaus (January 2009), Xynthia (February 2010) and Gong (January 2013), which formed over the mid-Atlantic and experienced explosive intensification while travelling eastwards at lower latitudes than usual [Liberato et al. 2011; 2013]. In this paper the explosive development of these storms is simulated by the advanced mesoscale Weather Research and Forecasting model (WRF v3.4.1), initialized with NCEP Final Analysis (FNL) data as initial and lateral boundary conditions (boundary conditions updated at 3-hour intervals). The simulation experiments are conducted with two domains, a coarse (25 km) one and a nested (8.333 km) one, covering the entire North Atlantic and Iberian Peninsula region. The characteristics of these storms (e.g. wind speed, precipitation) are obtained from the WRF model and compared with multiple observations. In this context, simulations with different Planetary Boundary Layer (PBL) schemes are performed. This approach aims at understanding which mechanisms favor the explosive intensification of these storms at lower latitudes than usual, thus improving the knowledge of atmospheric dynamics (including small-scale processes) controlling the life cycle of midlatitude extreme storms and contributing to improved predictability and to our ability to forecast storms' impacts over the Iberian Peninsula. Acknowledgments: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through the COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). References: Liberato M.L.R., J.G. Pinto, I.F. Trigo, R.M. Trigo (2011) Klaus - an
The Satellite Based Hydrological Model (SHM): Routing Scheme and its Evaluation
kumari, Nikul; Paul, Pranesh Kumar; Singh, Rajendra; Panigrahy, Niranjan; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.
2016-04-01
The collection of spatially extensive data using traditional methods of data acquisition is a challenging task for a large territory like India. To overcome such problems, the Satellite based Hydrological Model (SHM), a large-scale conceptual hydrological model for the Indian territory, is being developed under the PRACRITI-2 program of the Space Applications Centre (SAC), Ahmedabad. The model aims at preparing sustainable water management scenarios using remote sensing data from Indian satellites to handle the fresh water crisis in India. There are five modules in the SHM: Surface Water (SW), Forest (F), Snow (S), Groundwater (GW) and Routing (ROU). The SW, F and S modules convert rainfall into surface runoff and generate input (infiltration and percolation) for the GW module, and the GW module generates baseflow from that input. In this study, a cell-to-cell routing (ROU) module has been developed for SHM. It is based on the principle of the Time Variant Spatially Distributed Direct Hydrograph (SDDH) to route the runoff and baseflow generated by the various modules up to the outlet. The whole of India is divided into 5 km x 5 km grid cells, and the properties at the center of a cell are assumed to represent the properties of the cell. In the routing scheme, a single downstream cell is defined for each cell in the direction of steepest descent, to create the flow network. The grid cells are classified into overland cells and channel cells based on a threshold value. The overland flow travel time of each overland cell is estimated by combining a steady-state kinematic wave approximation with Manning's equation, and the channel flow travel time of each channel cell is estimated using Manning's equation and the steady-state continuity equation. The travel time for each cell is computed by dividing the travel distance through that cell by the cell velocity. The cumulative travel time from each grid cell to the watershed outlet is the sum of
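The travel-time computation described here can be sketched for a single hypothetical flow path (the roughness, hydraulic radius and slope values below are illustrative, not SHM parameters): per-cell velocity from Manning's equation, per-cell time as distance over velocity, and cumulative time as the sum along the path:

```python
import math

# Sketch of the cell-to-cell travel-time idea: Manning's equation gives the
# cell velocity, each cell contributes distance/velocity, and the cumulative
# travel time to the outlet is the sum along the steepest-descent flow path.
# All channel properties below are hypothetical illustrations.
def manning_velocity(n_rough: float, hyd_radius: float, slope: float) -> float:
    """Cross-section-averaged velocity (m/s) from Manning's equation (SI units)."""
    return (1.0 / n_rough) * hyd_radius ** (2.0 / 3.0) * math.sqrt(slope)

cell_size = 5000.0  # m, matching SHM's 5 km x 5 km grid
# hypothetical 3-cell flow path: (Manning n, hydraulic radius m, slope)
path = [(0.035, 1.2, 0.002), (0.035, 1.5, 0.0015), (0.030, 2.0, 0.001)]

total_time = sum(cell_size / manning_velocity(*cell) for cell in path)
print(total_time / 3600.0)  # cumulative travel time to the outlet, in hours
```

In the actual scheme the travel distance also depends on whether flow leaves a cell through a face or a diagonal, and overland cells use a kinematic-wave variant of the velocity estimate.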
Avolio, E.; Federico, S.; Miglietta, M. M.; Lo Feudo, T.; Calidonna, C. R.; Sempreviva, A. M.
2017-08-01
The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated in an experimental site in Calabria region (southern Italy), in an area characterized by a complex orography near the sea. Results of 1 km × 1 km grid spacing simulations are compared with the data collected during a measurement campaign in summer 2009, considering hourly model outputs. Measurements from several instruments are taken into account for the performance evaluation: near surface variables (2 m temperature and relative humidity, downward shortwave radiation, 10 m wind speed and direction) from a surface station and a meteorological mast; vertical wind profiles from Lidar and Sodar; also, the aerosol backscattering from a ceilometer to estimate the PBL height. Results covering the whole measurement campaign show a cold and moist bias near the surface, mostly during daytime, for all schemes, as well as an overestimation of the downward shortwave radiation and wind speed. Wind speed and direction are also verified at vertical levels above the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes and a possible role of the simulated sensible heat fluxes for this mismatching is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first order non-local schemes, ACM2 and YSU, are the schemes with the best performance in representing parameters near the surface and in the boundary layer during the analyzed campaign.
Modeling and analysis of the effectiveness of two redistributing schemes in P2P streaming network
Gaidamaka, Yu.; Medvedeva, E.; Adamu, A.
2017-07-01
This paper explores the so-called "View-Upload Decoupling" (VUD) scheme, which strictly decouples what a peer uploads from what it personally views. It is based on splitting the downloaded user data streams into two types: the stream of the chosen TV channel, and one or more streams of other TV channels downloaded exclusively for delivery to other users. We developed a method for calculating one of the key QoS characteristics of a P2P streaming network: the probability of universal streaming for each channel, and the system-wide probability of universal streaming across all channels. The resulting probabilities of universal streaming for the VUD scheme are compared to those of the traditional ISO redistribution scheme.
Capillary-wave models and the effective-average-action scheme of functional renormalization group.
Jakubczyk, P
2011-08-01
We reexamine the functional renormalization-group theory of wetting transitions. As a starting point of the analysis we apply an exact equation describing renormalization group flow of the generating functional for irreducible vertex functions. We show how the standard nonlinear renormalization group theory of wetting transitions can be recovered by a very simple truncation of the exact flow equation. The derivation makes all the involved approximations transparent and demonstrates the applicability of the approach in any spatial dimension d≥2. Exploiting the nonuniqueness of the renormalization-group cutoff scheme, we find, however, that the capillary parameter ω is a scheme-dependent quantity below d=3. For d=3 the parameter ω is perfectly robust against scheme variation.
Michou, M.; Nabat, P.; Saint-Martin, D.
2015-03-01
We have implemented a prognostic aerosol scheme (v1) in CNRM-CM6, the climate model of CNRM-GAME and CERFACS, based upon the GEMS/MACC aerosol module of the ECMWF operational forecast model. This scheme describes the physical evolution of the five main types of aerosols, namely black carbon, organic matter, sulfate, desert dust and sea salt. In this work, we describe the characteristics of our implementation, for instance, taking into consideration a different dust scheme or boosting biomass burning emissions by a factor of 2, as well as the evaluation performed on simulation output. The simulations consist of time slice simulations for 2004 conditions and transient runs over the 1993-2012 period, and are either free-running or nudged towards the ERA-Interim Reanalysis. Evaluation data sets include several satellite instrument AOD (aerosol optical depth) products (i.e., MODIS Aqua classic and Deep-Blue products, MISR and CALIOP products), as well as ground-based AERONET data and the derived AERONET climatology, MAC-v1. The uncertainty of aerosol-type seasonal AOD due to model internal variability is low over large parts of the globe, and the characteristics of a nudged simulation reflect those of a free-running simulation. In contrast, the impact of the new dust scheme is large, with modelled dust AODs from simulations with the new dust scheme close to observations. Overall patterns and seasonal cycles of the total AOD are well depicted with, however, a systematic low bias over oceans. The comparison to the fractional MAC-v1 AOD climatology shows disagreements mostly over continents, while that to AERONET sites outlines the capability of the model to reproduce monthly climatologies under very diverse dominant aerosol types. Here again, underestimation of the total AOD appears in several cases, sometimes linked to insufficient efficiency of the aerosol transport away from the aerosol sources. Analysis of monthly time series at 166 AERONET sites shows, in general
Efficient Threshold Signature Scheme
Directory of Open Access Journals (Sweden)
Sattar J Aboud
2012-01-01
In this paper, we introduce a new RSA-type threshold signature scheme. The proposed scheme is unforgeable and robust in the random oracle model. Signature generation and verification are entirely non-interactive. In addition, the length of each participant's signature share is bounded by a constant multiple of the length of the RSA signature modulus. The signing process of the proposed scheme is also more efficient in terms of time complexity and interaction.
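The core idea behind RSA-type threshold signing can be sketched in a few lines: the private exponent is split among participants, each computes a partial signature, and the partials multiply into an ordinary RSA signature. The following toy sketch uses an additive two-party split with insecure textbook parameters; it illustrates the general mechanism only, not the specific scheme of the paper.

```python
# Toy illustration of RSA-type threshold signing: the private exponent d is
# split additively among participants, each computes a partial signature, and
# the partials multiply into the full RSA signature. Didactic sketch only,
# with insecure parameters; NOT the paper's actual scheme.
import random

# Small textbook RSA key (both primes are well known small primes).
p, q = 10007, 10009
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)  # modular inverse (requires Python 3.8+)

# Dealer splits d into two additive shares: d = d1 + d2 over the integers.
d1 = random.randrange(1, d)
d2 = d - d1

m = 12345678  # hashed/encoded message representative, m < n

# Each participant signs with its own share; the partials multiply mod n.
s1 = pow(m, d1, n)
s2 = pow(m, d2, n)
s = (s1 * s2) % n          # = m^(d1+d2) = m^d mod n

assert s == pow(m, d, n)   # matches the single-signer RSA signature
assert pow(s, e, n) == m   # verifies with the public key
print("threshold signature verified")
```

Note that combining partial signatures needs no interaction between the signers, which is the non-interactivity property the abstract highlights.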
Stateless Transitive Signature Schemes
Institute of Scientific and Technical Information of China (English)
MA Chun-guang; CAI Man-chun; YANG Yi-xian
2004-01-01
A new practical method is introduced to transform a stateful transitive signature scheme into a stateless one without loss of security. Following this approach, two concrete stateless transitive signature schemes, based on factoring and on RSA, are presented. Under the hardness assumptions of factoring and the one-more-RSA-inversion problem, both schemes are secure against adaptive chosen-message attacks in the random oracle model.
The emergence of nonuniform spatiotemporal fractionation schemes within the standard BED model
Energy Technology Data Exchange (ETDEWEB)
Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Papp, Dávid [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)
2015-05-15
Purpose: Nonuniform spatiotemporal radiotherapy fractionation schemes, i.e., delivering distinct dose distributions in different fractions, can potentially improve the therapeutic ratio. This is possible if the dose distributions are designed such that similar doses are delivered to normal tissues (exploiting the fractionation effect) while hypofractionating subregions of the tumor. In this paper, the authors develop methodology for treatment planning with nonuniform fractions and demonstrate this concept in the context of intensity-modulated proton therapy (IMPT). Methods: Treatment planning is performed by simultaneously optimizing (possibly distinct) IMPT dose distributions for multiple fractions. This is achieved using objective and constraint functions evaluated for the cumulative biologically equivalent dose (BED) delivered at the end of treatment. BED-based treatment planning formulations lead to nonconvex optimization problems, such that local gradient-based algorithms require adequate starting positions to find good local optima. To that end, the authors develop a combinatorial algorithm to initialize the pencil beam intensities. Results: The concept of nonuniform spatiotemporal fractionation schemes is demonstrated for a spinal metastasis patient treated in two fractions using stereotactic body radiation therapy. The patient is treated with posterior oblique beams, with the kidneys located in the entrance region of the beam. It is shown that a nonuniform fractionation scheme that hypofractionates the central part of the tumor allows for a skin and kidney BED reduction of approximately 10%–20%. Conclusions: Nonuniform spatiotemporal fractionation schemes represent a novel approach to exploit fractionation effects that deserves further exploration for selected disease sites.
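The BED referred to in the abstract follows the standard linear-quadratic form BED = n·d·(1 + d/(α/β)). A minimal sketch of why nonuniform fractionation helps, using illustrative α/β ratios and doses (assumptions of this sketch, not the paper's numbers): hypofractionating raises tumor BED for the same physical dose, while normal tissue with a low α/β benefits from having its dose spread over fractions.

```python
# Standard BED formula from the linear-quadratic model:
#   BED = n * d * (1 + d / alpha_beta)
# n = number of fractions, d = dose per fraction (Gy), alpha_beta = the
# tissue-specific alpha/beta ratio (Gy). All numbers are illustrative
# assumptions, not values from the paper.

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically equivalent dose for n equal fractions of size d."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Same 24 Gy physical dose to a tumor subregion, delivered two ways:
uniform = bed(2, 12.0, alpha_beta=10.0)   # 2 x 12 Gy
hypo    = bed(1, 24.0, alpha_beta=10.0)   # 1 x 24 Gy (hypofractionated)
assert hypo > uniform                     # higher tumor BED for same dose

# A normal tissue (low alpha/beta) receiving its dose split over both
# fractions keeps a lower BED than getting it in a single fraction:
normal_split  = bed(2, 6.0, alpha_beta=3.0)   # 2 x 6 Gy
normal_single = bed(1, 12.0, alpha_beta=3.0)  # 12 Gy in one fraction
assert normal_split < normal_single
print(round(uniform, 1), round(hypo, 1))  # 52.8 81.6
```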
CSIR Research Space (South Africa)
Jovanovic, Nebo
2017-01-01
The Atlantis Water Supply Scheme (AWSS, Western Cape, South Africa) has been in operation for about 40 years as a means to supply and augment drinking water to the town of Atlantis via managed aquifer recharge (MAR). In this study, the numerical...
Directory of Open Access Journals (Sweden)
Yanxue Yu
2017-01-01
As a basic building block in power systems, the three-phase voltage-source inverter (VSI) connects distributed energy sources to the grid. For the three-phase VSI with an inductor-capacitor-inductor (LCL) filter, there are mainly four control schemes, according to the current sampling position and the reference frame used. Different control schemes present different impedance characteristics over their corresponding frequency ranges. To analyze the resonance phenomena caused by variations in grid impedance, the sequence impedance models of LCL-type grid-connected three-phase inverters under the different control schemes are derived using the harmonic linearization method. The impedance-based stability analysis approach is then applied to compare the relative stability of the schemes, given their impedance differences at certain frequencies, and to choose the best control scheme and the better method for tuning controller parameters for the LCL-type three-phase VSI. Simulations and experiments both validate the resonance analysis results.
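The resonance the abstract analyzes is set by the LCL filter itself: f_res = (1/2π)·√((L1+L2)/(L1·L2·Cf)). A small sketch with illustrative component values (assumptions, not from the paper) shows why a varying grid impedance moves the resonance point:

```python
# Resonance frequency of an LCL filter, the quantity at the heart of an
# impedance-based resonance analysis:
#   f_res = (1 / (2*pi)) * sqrt((L1 + L2) / (L1 * L2 * Cf))
# Component values are illustrative assumptions.
import math

def lcl_resonance_hz(L1: float, L2: float, Cf: float) -> float:
    """Resonance frequency (Hz) of an L1-Cf-L2 filter."""
    return math.sqrt((L1 + L2) / (L1 * L2 * Cf)) / (2.0 * math.pi)

f_res = lcl_resonance_hz(L1=2e-3, L2=1e-3, Cf=10e-6)
print(f"{f_res:.0f} Hz")

# A weak grid adds series inductance on the grid side (effectively a larger
# L2), lowering the resonance frequency. This is why grid-impedance
# variation can push the resonance into a badly damped region for a
# controller tuned at nominal conditions.
f_weak = lcl_resonance_hz(L1=2e-3, L2=1e-3 + 2e-3, Cf=10e-6)
assert f_weak < f_res
```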
Energy Technology Data Exchange (ETDEWEB)
Goudon, Thierry, E-mail: thierry.goudon@inria.fr [Team COFFEE, INRIA Sophia Antipolis Mediterranee (France); Labo. J.A. Dieudonne CNRS and Univ. Nice-Sophia Antipolis (UMR 7351), Parc Valrose, 06108 Nice cedex 02 (France); Parisot, Martin, E-mail: martin.parisot@gmail.com [Project-Team SIMPAF, INRIA Lille Nord Europe, Park Plazza, 40 avenue Halley, F-59650 Villeneuve d' Ascq cedex (France)
2012-10-15
In the so-called Spitzer-Haerm regime, equations of plasma physics reduce to a nonlinear parabolic equation for the electronic temperature. Coming back to the derivation of this limiting equation through hydrodynamic regime arguments, one is led to construct a hierarchy of models where the heat fluxes are defined through a non-local relation which can be reinterpreted as well by introducing coupled diffusion equations. We address the question of designing numerical methods to simulate these equations. The basic requirement for the scheme is to be asymptotically consistent with the Spitzer-Haerm regime. Furthermore, the constraints of physically realistic simulations make the use of unstructured meshes unavoidable. We develop a Finite Volume scheme, based on Vertex-Based discretization, which reaches these objectives. We discuss on numerical grounds the efficiency of the method, and the ability of the generalized models in capturing relevant phenomena missed by the asymptotic problem.
Directory of Open Access Journals (Sweden)
Parovik Roman I.
2016-09-01
The paper deals with a model of a variable-order nonlinear hereditary oscillator based on a numerical finite-difference scheme. Numerical experiments have been carried out to evaluate the stability and convergence of the difference scheme. It is argued that the approximation, stability and convergence are of first order, and that the scheme is stable and converges to the exact solution.
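The convergence experiments described here follow a standard pattern: run the scheme at step h and h/2 against a known solution and estimate the order from the error ratio. A minimal sketch of that procedure, using explicit Euler on the classical oscillator x'' + x = 0 as a stand-in (the paper's hereditary variable-order model is not reproduced here):

```python
# Empirical convergence-order check: run a first-order finite-difference
# scheme at step h and h/2 and estimate the order from the error ratio.
# Stand-in problem: explicit Euler on x'' + x = 0, exact solution cos(t).
import math

def euler_error(n_steps: int, t_end: float = 1.0) -> float:
    h = t_end / n_steps
    x, v = 1.0, 0.0                  # x(0) = 1, x'(0) = 0
    for _ in range(n_steps):
        x, v = x + h * v, v - h * x  # explicit Euler step
    return abs(x - math.cos(t_end))

e1 = euler_error(1000)
e2 = euler_error(2000)
order = math.log(e1 / e2, 2)         # observed convergence order
print(f"observed order = {order:.2f}")
assert 0.9 < order < 1.1             # explicit Euler is first order
```

Halving the step halves the error, so the logarithm of the error ratio gives an observed order close to one, matching a first-order claim.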
Peng, Qiujin
2017-09-18
In this paper, we present two second-order numerical schemes to solve the fourth-order parabolic equation derived from a diffuse interface model with the Peng-Robinson equation of state (EOS) for a pure substance. The mass conservation, energy decay property, unique solvability and L-infinity convergence of these two schemes are proved. Numerical results demonstrate the good approximation of the fourth-order equation and confirm the reliability of these two schemes.
A non-perturbative real-space renormalization group scheme for the spin-1/2 XXX Heisenberg model
Degenhard, Andreas
1999-01-01
In this article we apply a recently invented analytical real-space renormalization group formulation which is based on numerical concepts of the density matrix renormalization group. Within a rigorous mathematical framework we construct non-perturbative renormalization group transformations for the spin-1/2 XXX Heisenberg model in the finite temperature regime. The developed renormalization group scheme allows for calculating the renormalization group flow behaviour in the temperature depende...
Hu, Kainan; Geng, Shaojuan
2016-01-01
A new lattice Boltzmann scheme with a flexible specific heat ratio is proposed. The new degree of freedom is introduced via the internal energy associated with the internal structure. The evolution equation of the distribution function is reduced to two evolution equations: one is connected to the density and velocity, the other to the energy. A two-dimensional and a three-dimensional lattice Boltzmann model are derived via the Hermite expansion. The two lattice Boltzmann models are applied to simulating a one-dimensional shock tube. Good agreement between the numerical results and the analytical solutions is obtained.
Institute of Scientific and Technical Information of China (English)
Yingmin Jia
2009-01-01
This paper mainly studies the model matching problem of multiple-output-delay systems in which the reference model is assigned a diagonal transfer function matrix. A new model matching controller structure is first developed, and it is then shown that the controller is feasible if and only if the sets of Diophantine equations have common solutions. The obtained controller allows a parametric representation, which shows that an adaptive scheme can be used to tolerate parameter variations in the plants. The resulting adaptive law can guarantee the global stability of the closed-loop systems and the convergence of the output error.
Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S
2013-09-01
A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two different levels: the subspace to move toward, and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based task-specific motion decoding model. Task-specific models manage to outperform "general" models, providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously in order to split the task space. The proposed learning scheme can easily be used in a series of EMG-based interfaces that must operate in real time, providing data-driven capabilities for the multiclass problems that occur in complex everyday-life environments.
Ekin Aydin, Boran; Rutten, Martine; Oude Essink, Gualbert H. P.; Delsman, Joost; Abraham, Edo
2017-04-01
Saline groundwater exfiltration to surface water increases surface water salinization and degrades the water quality in low-lying deltaic areas. As the use of surface water becomes less appropriate for agricultural, industrial and drinking water production due to salinization, freshwater diverted from a river is often utilised for flushing canals and ditches in these areas. Current water management strategies for flushing control in low-lying deltaic areas have to be revised to mitigate the expected negative effects of climate change, sea level rise and decreasing freshwater availability. Model predictive control (MPC) is a powerful control method that is increasingly used for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. In this study, an MPC scheme is developed and tested for combined salinity and water level control of a ditch/water course. Saline groundwater exfiltration fluxes and salinities are modelled by applying the Rapid Saline Groundwater Exfiltration Model (RSGEM) and used as known disturbances for the MPC scheme. The developed control scheme is applied to a test canal using real data from a Dutch polder (polders are low-lying, artificially drained areas surrounded by dikes, with a controlled surface water level below mean sea level) which is affected by high saline groundwater exfiltration. This test demonstrates the performance of the controller for a real scenario. Simulation results show that MPC can increase the operational efficiency of flushing operations.
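The essence of the MPC loop described above can be sketched compactly: predict a model over a short horizon for candidate flushing inflows, treat the exfiltration flux as a known disturbance, discard sequences violating level constraints, and apply the first move of the cheapest sequence. The toy mass-balance model and all numbers below are illustrative assumptions, not RSGEM or the paper's polder data.

```python
# Minimal MPC sketch: enumerate candidate flushing-inflow sequences over a
# short horizon, simulate a toy mass-balance model of the ditch (saline
# groundwater exfiltration enters as a *known* disturbance), and pick the
# admissible sequence minimizing a quadratic cost. Illustrative model only.
from itertools import product

AREA = 1000.0          # ditch surface area, m^2 (assumption)
H_REF = 1.0            # target water level, m
H_MIN, H_MAX = 0.8, 1.2
DT = 3600.0            # control step, s
OUTFLOW = 0.05         # fixed outflow, m^3/s (assumption)
EXFILTRATION = [0.02, 0.04, 0.03]  # known disturbance per step, m^3/s

def simulate(h0, inflows):
    """Water levels after applying an inflow sequence (simple mass balance)."""
    h, levels = h0, []
    for u, d in zip(inflows, EXFILTRATION):
        h = h + DT * (u + d - OUTFLOW) / AREA
        levels.append(h)
    return levels

def mpc_step(h0, candidates=(0.0, 0.02, 0.04, 0.06)):
    """Return the first inflow of the best admissible sequence (receding horizon)."""
    best_u, best_cost = None, float("inf")
    for seq in product(candidates, repeat=len(EXFILTRATION)):
        levels = simulate(h0, seq)
        if not all(H_MIN <= h <= H_MAX for h in levels):
            continue  # hard level constraint violated: discard sequence
        # quadratic tracking cost plus a small penalty on freshwater use
        cost = sum((h - H_REF) ** 2 for h in levels) + 1e-3 * sum(seq)
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

u0 = mpc_step(h0=0.95)
assert u0 is not None  # an admissible flushing inflow exists
```

A real MPC formulation would solve this optimization with a structured solver rather than enumeration; the freshwater-use penalty reflects the multi-objective trade-off between tracking and flushing cost that the abstract mentions.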
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two-space dimension. The model describes the charge transport in semiconductor devices. Mathematically, the models can be written as a convection-diffusion type system with a right hand side describing the relaxation effects and interaction with a self consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with the splitting scheme based on Nessyahu-Tadmor (NT) central scheme for convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low field mobility, device length, lattice temperature and voltages for one-space dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two dimensional simulation is also performed by CE/SE method for a MESFET device, producing results in good agreement with those obtained by NT-central scheme.
Directory of Open Access Journals (Sweden)
Saulo Frietas
2012-01-01
An advection scheme, which maintains the initial monotonic characteristics of a tracer field being transported and at the same time produces low numerical diffusion, is implemented in the Coupled Chemistry-Aerosol-Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CCATT-BRAMS). Several comparisons of transport modeling using the new and original (non-monotonic) CCATT-BRAMS formulations are performed. Idealized 2-D non-divergent or divergent and stationary or time-dependent wind fields are used to transport sharply localized tracer distributions, as well as to verify whether an existing correlation of the mass mixing ratios of two interrelated tracers is kept during the transport simulation. Further comparisons are performed using realistic 3-D wind fields. We then perform full simulations of real cases using data assimilation and complete atmospheric physics. In these simulations, we address the impacts of both advection schemes on the transport of biomass burning emissions and the formation of secondary species from non-linear chemical reactions of precursors. The results show that the new scheme produces much more realistic transport patterns, without generating spurious oscillations and under- and overshoots or spreading mass away from the local peaks. Increasing the numerical diffusion in the original scheme in order to remove the spurious oscillations and maintain the monotonicity of the transported field causes excessive smoothing in the tracer distribution, reducing the local gradients and maximum values and unrealistically spreading mass away from the local peaks. As a result, huge differences (hundreds of %) for relatively inert tracers (like carbon monoxide) are found in the smoke plume cores. In terms of the secondary chemical species formed by non-linear reactions (like ozone), we found differences of up to 50% in our simulations.
The four-populations model: a new classification scheme for pre-planetesimal collisions
Geretshauser, Ralf J; Speith, Roland; Kley, Wilhelm
2011-01-01
Within the collision growth scenario for planetesimal formation, the growth step from centimetre sized pre-planetesimals to kilometre sized planetesimals is still unclear. The formation of larger objects from the highly porous pre-planetesimals may be halted by a combination of fragmentation in disruptive collisions and mutual rebound with compaction. However, the right amount of fragmentation is necessary to explain the observed dust features in late T Tauri discs. Therefore, detailed data on the outcome of pre-planetesimal collisions is required and has to be presented in a suitable and precise format. We propose and apply a new classification scheme for pre-planetesimal collisions based on the quantitative aspects of four fragment populations: the largest and second largest fragment, a power-law population, and a sub-resolution population. For the simulations of pre-planetesimal collisions, we adopt the SPH numerical scheme with extensions for the simulation of porous solid bodies. By means of laboratory b...
Burgalat, J.; Rannou, P.; Cours, T.; Rivière, E. D.
2014-03-01
Microphysical models describe the way aerosols and clouds behave in the atmosphere. Two approaches are generally used to model these processes. While the first approach discretizes the processes and aerosol size distributions on a radius grid (bin scheme), the second uses bulk parameters of the size distribution law (its mathematical moments) to represent the evolution of the particle population (moment scheme). However, with the latter approach, one needs a priori knowledge of the size distributions. Moment schemes for cloud microphysics modeling have been used and enhanced for decades in climate studies of the Earth. Most of the tools are based on the log-normal law, which is suitable for Earth, Mars or Venus. On Titan, due to the fractal structure of the aerosols, the size distributions do not follow a log-normal law. Using a moment scheme in that case therefore requires defining the description of the size distribution and reviewing the equations that are widely published in the literature. Our objective is to enable the use of a fully described microphysical model using a moment scheme within a Titan Global Climate Model. As a first step in this direction, we present here a moment scheme dedicated to cloud microphysics adapted for Titan's atmospheric conditions. We perform comparisons between the two kinds of schemes (bin and moment) using an annual and a diurnal cycle, to check the validity of our moment description. The various forcings produce a time-variable cloud layer in relation with the temperature cycle. We compare the column opacities and the temperature for the two schemes, for each cycle. We also compare more detailed quantities, such as the opacity distribution of the cloud events at different periods of these cycles. Results show that differences between the two approaches have a small impact on the temperature (less than 1 K) and range between 1% and 10% for haze and cloud opacities. Both models behave in a similar way when forced by an annual and
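The moment-scheme machinery can be illustrated with the textbook log-normal case that the abstract says must be replaced for Titan's fractal aerosols: carry only a few moments M_k and use an assumed distribution law to close the system. For a log-normal law the closure is exact, since the parameters recover from the first three moments. The symbols below are standard aerosol-physics notation, not taken from the paper.

```python
# Core idea of a moment scheme: instead of a radius-bin grid, carry a few
# moments M_k of the size distribution and assume a law to close the system.
# For a log-normal law with total number N, median radius rg and width
# parameter s = ln(sigma_g), the moments are
#   M_k = N * rg**k * exp(k**2 * s**2 / 2),
# so (N, rg, s) are recovered exactly from M0, M1, M2.
import math

def lognormal_moment(k, N, rg, s):
    return N * rg**k * math.exp(0.5 * k * k * s * s)

def invert_moments(M0, M1, M2):
    """Recover (N, rg, s) of a log-normal law from its first three moments."""
    s_sq = math.log(M0 * M2 / M1**2)          # = s^2
    rg = (M1 / M0) * math.exp(-0.5 * s_sq)
    return M0, rg, math.sqrt(s_sq)

# Round-trip check with illustrative haze-like parameters (assumptions).
N, rg, s = 1.0e6, 5.0e-8, math.log(1.5)
M0, M1, M2 = (lognormal_moment(k, N, rg, s) for k in (0, 1, 2))
N_rec, rg_rec, s_rec = invert_moments(M0, M1, M2)

assert math.isclose(N_rec, N)
assert math.isclose(rg_rec, rg)
assert math.isclose(s_rec, s)
```

For Titan the paper's point is precisely that this log-normal closure fails, so the moment equations have to be rederived for a distribution describing fractal aggregates.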
A Coordination Scheme for Distributed Model Predictive Control: Integration of Flexible DERs
DEFF Research Database (Denmark)
Costanzo, Giuseppe Tommaso; Gehrke, Oliver; Bondy, Daniel Esteban Morales
2013-01-01
between the grid and a cluster of units in the same feeder in pricedriven demand response applications. Preliminary simulations prove that the proposed coordination scheme for DMPC succeeds in coordinating flexible DER unit, achieving significant peak shaving when required. The rationale of this approach...... consists in coordinating independent units equipped with local MPC controller via simple information passing and hiding in the local controllers the units’ dynamics....
Simsek, Gorkem; Roudbari, Mahnaz Shokrpour; van Brummelen, E Harald
2016-01-01
We derive a new form of a thermodynamically consistent quasi-incompressible diffuse-interface Navier-Stokes Cahn-Hilliard model for a two-phase flow of incompressible fluids with different densities. The derivation is based on mixture theory, invoking the second law of thermodynamics and the Coleman-Noll procedure. In addition, we develop a linear and unconditionally energy-stable time-integration scheme for the derived model. Such a scheme is nontrivial, because it has to suitably deal with all nonlinear terms in the model. Our proposed scheme is the first linear method satisfying a discrete energy law for quasi-incompressible two-phase flows. The scheme also preserves mass. Numerical experiments verify the suitability of the scheme for high density ratios and for large time step sizes by considering the coalescence and break-up dynamics of droplets, including pinching due to gravity.
Directory of Open Access Journals (Sweden)
G. Santhosh Kumar
2014-01-01
In this paper, we propose a framework for the performance evaluation of frequency allocation schemes in 3G LTE OFDMA systems. We first develop an analytical model for collisions in an OFDMA system with an arbitrary number of users in the different cells. We then calculate the capacity of the system using a Markov model, taking into account the inter-cell interference and its impact on the adaptive modulation. We finally apply this model to compare three frequency allocation schemes, namely reuse 1, reuse 3, and a mix of reuse 1 and 3. Our results show that a mix of reuse 1 and 3 outperforms a reuse 1 scheme in terms of cell-edge performance, and also outperforms a reuse 3 scheme by achieving a higher cell throughput.
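The kind of collision model such an analysis starts from can be sketched simply: with M subchannels per cell and k interfering users in a neighbouring reuse-1 cell, each assigned a subchannel uniformly and independently, the probability that a tagged user's subchannel is reused is 1 − (1 − 1/M)^k. The uniform independent-allocation assumption is ours, for illustration; the paper's model handles arbitrary user counts per cell.

```python
# Inter-cell collision probability under uniform random subchannel
# allocation: a tagged user's subchannel is hit by at least one of k
# interfering users, each picking among M subchannels independently:
#   P(collision) = 1 - (1 - 1/M)**k
# Verified below by Monte Carlo simulation.
import random

def collision_prob(M: int, k: int) -> float:
    return 1.0 - (1.0 - 1.0 / M) ** k

def collision_mc(M: int, k: int, trials: int = 200_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    tagged = 0  # tagged user always occupies subchannel 0
    hits = sum(
        any(rng.randrange(M) == tagged for _ in range(k))
        for _ in range(trials)
    )
    return hits / trials

M, k = 12, 4
p_exact = collision_prob(M, k)
p_mc = collision_mc(M, k)
assert abs(p_exact - p_mc) < 0.01
print(f"P(collision) = {p_exact:.3f}")
```

Under reuse 3 the neighbouring cells use disjoint sub-bands, so this collision probability drops to zero at the price of one third of the spectrum per cell, which is the trade-off the compared schemes navigate.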
Lin, Shian-Jiann; Chao, Winston C.; Sud, Y. C.; Walker, G. K.
1994-01-01
A generalized form of the second-order van Leer transport scheme is derived. Several constraints to the implied subgrid linear distribution are discussed. A very simple positive-definite scheme can be derived directly from the generalized form. A monotonic version of the scheme is applied to the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) for the moisture transport calculations, replacing the original fourth-order center-differencing scheme. Comparisons with the original scheme are made in idealized tests as well as in a summer climate simulation using the full GLA GCM. A distinct advantage of the monotonic transport scheme is its ability to transport sharp gradients without producing spurious oscillations and unphysical negative mixing ratio. Within the context of low-resolution climate simulations, the aforementioned characteristics are demonstrated to be very beneficial in regions where cumulus convection is active. The model-produced precipitation pattern using the new transport scheme is more coherently organized both in time and in space, and correlates better with observations. The side effect of the filling algorithm used in conjunction with the original scheme is also discussed, in the context of idealized tests. The major weakness of the proposed transport scheme with a local monotonic constraint is its substantial implicit diffusion at low resolution. Alternative constraints are discussed to counter this problem.
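A minimal one-dimensional version of such a monotonic transport scheme shows the key property claimed above: sharp gradients are advected without spurious oscillations or negative mixing ratios. The sketch below uses piecewise-linear reconstruction with a minmod-limited slope on a uniform periodic grid at constant velocity; this is a simplification of the van Leer machinery, not the GLA GCM implementation.

```python
# Minimal 1-D monotonic van Leer-type transport: piecewise-linear
# reconstruction with a limited slope, constant positive velocity,
# periodic uniform grid, Courant number 0 < c < 1. Illustrative sketch.

def minmod(a, b):
    """Slope limiter: zero at extrema, else the smaller one-sided slope."""
    if a * b <= 0.0:
        return 0.0
    return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

def step(q, c):
    """One conservative advection step in flux form."""
    n = len(q)
    # limited slope in each cell
    s = [minmod(q[i] - q[i - 1], q[(i + 1) % n] - q[i]) for i in range(n)]
    # upwind flux through the right face of cell i
    f = [q[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [q[i] - c * (f[i] - f[i - 1]) for i in range(n)]

# Advect a sharp square pulse of "mixing ratio" around a periodic domain.
q = [1.0 if 10 <= i < 20 else 0.0 for i in range(50)]
lo, hi = min(q), max(q)
for _ in range(200):
    q = step(q, c=0.5)
    # monotonic: no overshoots and no unphysical negative mixing ratio
    assert min(q) >= lo - 1e-12 and max(q) <= hi + 1e-12

assert abs(sum(q) - 10.0) < 1e-9  # conservative flux form preserves mass
```

A centered fourth-order scheme on the same pulse would develop under- and overshoots (including negative mixing ratios), which is exactly the side effect the filling algorithm mentioned above had to patch.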
DEFF Research Database (Denmark)
Sørensen, Jacob Viborg Tornfeldt; Madsen, Henrik; Madsen, H.
2006-01-01
sensitivity study of three well known Kalman filter approaches for the assimilation of water levels in a three dimensional hydrodynamic modelling system. The filters considered are the ensemble Kalman filter (EnKF), the reduced rank square root Kalman filter (RRSQRT) and the steady Kalman filter....... A sensitivity analysis of key parameters in the schemes is undertaken for a setup in an idealised bay. The sensitivity of the resulting root mean square error (RMSE) is shown to be low to moderate. Hence the schemes are robust within an acceptable range and their application even with misspecified parameters...... is to be encouraged in this perspective. However, the predicted uncertainty of the assimilation results are sensitive to the parameters and hence must be applied with care. The sensitivity study further demonstrates the effectiveness of the steady Kalman filter in the given system as well as the great impact...
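All three filter variants discussed above share the same scalar core: blend a model forecast with an observation, weighted by their error variances. Those variances are exactly the kind of "key parameters" whose misspecification the sensitivity study probes. A one-dimensional sketch with illustrative numbers (not the 3-D hydrodynamic system):

```python
# Scalar Kalman filter analysis step: blend a forecast water level with an
# observation, weighting by the (parameter-dependent) error variances.
# Numbers are illustrative assumptions.

def kalman_update(x_forecast, P_forecast, y_obs, R_obs):
    """One scalar analysis step; returns (analysis state, analysis variance)."""
    K = P_forecast / (P_forecast + R_obs)        # Kalman gain in [0, 1]
    x_analysis = x_forecast + K * (y_obs - x_forecast)
    P_analysis = (1.0 - K) * P_forecast
    return x_analysis, P_analysis

x_f, P_f = 1.20, 0.04     # forecast water level (m) and its error variance
y, R = 1.00, 0.01         # observed level and observation error variance

x_a, P_a = kalman_update(x_f, P_f, y, R)
# The analysis lies between forecast and observation, closer to the more
# trusted (lower-variance) source, and is more certain than either input.
assert y < x_a < x_f
assert P_a < min(P_f, R)
print(round(x_a, 3), round(P_a, 4))  # 1.04 0.008
```

Misspecifying P_f or R shifts the gain K and hence the weighting, which changes the predicted uncertainty P_a much more directly than the analysis state itself, consistent with the sensitivity findings above.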
Provable Secure Identity Based Generalized Signcryption Scheme
Yu, Gang; Shen, Yong; Han, Wenbao
2010-01-01
According to actual needs, a generalized signcryption scheme can flexibly work as an encryption scheme, a signature scheme, or a signcryption scheme. In this paper, we first give a security model for identity-based generalized signcryption that is more complete than the existing model. Second, we propose an identity-based generalized signcryption scheme. Third, we prove the security of the new scheme in this complete model. Compared with existing identity-based generalized signcryption, the new scheme has lower implementation complexity, while its computational complexity is comparable to that of existing normal signcryption schemes.
Geroyannis, Vassilis S
2014-01-01
We develop a "hybrid approximative scheme" in the framework of the post-Newtonian approximation for computing general-relativistic polytropic models simulating neutron stars in critical rigid rotation. We treat the differential equations governing such a model as a "complex initial value problem", and we solve it by using the so-called "complex-plane strategy". We incorporate into the computations the complete solution for the relativistic effects, this issue representing a significant improvement with regard to the classical post-Newtonian approximation, as verified by extended comparisons of the numerical results.
Gaonkar, A. K.; Kulkarni, S. S.
2015-01-01
In the present paper, a method to reduce the computational cost associated with solving a nonlinear transient heat conduction problem is presented. The proposed method combines the ideas of two-level discretization and multilevel time integration schemes with the proper orthogonal decomposition model order reduction technique. The accuracy and the computational efficiency of the proposed method are discussed. Several numerical examples are presented for validation of the approach. Compared to the full finite element model, the proposed method significantly reduces the computational time while maintaining an acceptable level of accuracy.
Iguchi, Takamichi; Nakajima, Teruyuki; Khain, Alexander P.; Saito, Kazuo; Takemura, Toshihiko; Suzuki, Kentaroh
2008-01-01
A bin-based microphysics scheme for cloud is implemented into a three-dimensional nonhydrostatic model and off-line coupled with a global aerosol transport model to reproduce realistic and inhomogeneous condensation nuclei (CN) fields. This coupling makes it possible to calculate cloud microphysical properties over a larger area under more realistic environmental conditions. Using the model, nested grid simulations are performed for two precipitation events associated with transitional synopt...
Verification of a Higher-Order Finite Difference Scheme for the One-Dimensional Two-Fluid Model
Directory of Open Access Journals (Sweden)
William D. Fullmer
2013-06-01
The one-dimensional two-fluid model is widely acknowledged as the most detailed and accurate macroscopic formulation of the thermo-fluid dynamics in nuclear reactor safety analysis. Currently, the prevailing one-dimensional thermal hydraulics codes are only first-order accurate. The benefit of first-order schemes is numerical viscosity, which serves as a regularization mechanism for many otherwise ill-posed two-fluid models. However, excessive diffusion in regions of large gradients leads to poor resolution of phenomena related to void wave propagation. In this work, a higher-order shock capturing method is applied to the basic equations for incompressible and isothermal flow of the one-dimensional two-fluid model. The higher-order accuracy is gained by a strong stability preserving multi-step scheme for the time discretization and a minmod flux limiter scheme for the convection terms. Additionally, the use of a staggered grid allows for several second-order centered terms, where available. The continuity equations are first tested by manipulating the two-fluid model into a pair of linear wave equations and tested for smooth and discontinuous initial data. The two-fluid model is benchmarked with the water faucet problem. With the higher-order method, the ill-posed nature of the governing equations presents severe challenges due to a growing void fraction jump in the solution. Therefore, the initial and boundary conditions of the problem are modified in order to eliminate a large counter-current flow pattern that develops. With the modified water faucet problem, the numerical models behave well and allow a convergence study. Using the L1 norm of the liquid fraction, it is verified that the first and higher-order numerical schemes converge to the quasi-analytical solution at rates of order 1/2 and 2/3, respectively. It is also shown that the growing void jump is a contact discontinuity, i.e. it is a linearly degenerate wave. The sub
Heller, Johann; Van Rienen, Ursula (doi:10.1016/j.phpro.2015.11.060)
2015-01-01
The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.
PTFE treatment by remote atmospheric Ar/O2 plasmas: a simple reaction scheme model proposal
Carbone, E A D; Keuning, W; van der Mullen, J J A M
2013-01-01
Polytetrafluoroethylene (PTFE) samples were treated by a remote atmospheric-pressure microwave plasma torch and analyzed by water contact angle (WCA) measurements and X-ray photoelectron spectroscopy (XPS). In the case of a pure argon plasma a decrease of the WCA is observed, whereas an increase in hydrophobicity is observed when some oxygen is added to the discharge. The WCA results are correlated with XPS of reference samples, and the changes in WCA are attributed to changes in the roughness of the samples. A simple kinetics scheme for the chemistry on the PTFE surface is proposed to explain the results.
DEFF Research Database (Denmark)
van Leeuwen, Theo
2013-01-01
This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation....
Development of Non-staggered, semi-implicit ICE numerical scheme for a two-fluid, three-field model
Energy Technology Data Exchange (ETDEWEB)
Jeong, Jae Jun; Yoon, H. Y.; Bae, S. W.
2007-11-15
A pilot code for a one-dimensional, transient, two-fluid, three-field model has been developed. In this code, the semi-implicit ICE numerical scheme has been adapted to a 'non-staggered' grid. Using several conceptual problems, the numerical scheme has been verified. The results of the verifications are summarized below: - It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, two-phase mixture flow, and single-phase vapor flow) and transitions between the flow conditions. A mist flow was not simulated, but it appears that the basic pilot code can handle mist flow conditions. - Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. - It was confirmed that the inlet pressure and velocity boundary conditions work properly. - It was confirmed that, for single- and two-phase flows, the velocity and temperature of a non-existing phase are calculated as intended. The non-staggered, semi-implicit ICE numerical scheme, which has been developed in this study, will be a starting point for a new code development that adopts an unstructured finite volume method.
Provable Secure Identity Based Generalized Signcryption Scheme
Yu, Gang; Ma, Xiaoxiao; Shen, Yong; Han, Wenbao
2010-01-01
Depending on actual needs, a generalized signcryption scheme can work flexibly as an encryption scheme, a signature scheme or a signcryption scheme. In this paper, firstly, we give a security model for identity-based generalized signcryption which is more complete than existing models. Secondly, we propose an identity-based generalized signcryption scheme. Thirdly, we give the security proof of the new scheme in this complete model. Comparing with existing identity-based generalized signcryption...
Chu, Chunlei
2009-01-01
We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
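For readers unfamiliar with Lax-Wendroff-type time stepping, the classic second-order scheme for the 1-D advection equation u_t + c u_x = 0 illustrates the underlying Taylor-series idea that the paper's schemes extend, with modified temporal coefficients, to the 3-D elastic wave equation. The sketch below is our own, not the authors' code:

```python
def lax_wendroff_step(u, nu):
    """One Lax-Wendroff step for u_t + c u_x = 0 on a periodic grid.
    nu = c*dt/dx is the Courant number; the scheme is stable for |nu| <= 1.
    It comes from the Taylor expansion u^{n+1} = u + dt*u_t + dt^2/2*u_tt,
    with time derivatives replaced by spatial ones via the PDE."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        up = u[(i + 1) % n]   # periodic right neighbour
        um = u[(i - 1) % n]   # periodic left neighbour
        out[i] = (u[i]
                  - 0.5 * nu * (up - um)
                  + 0.5 * nu * nu * (up - 2.0 * u[i] + um))
    return out
```

With nu = 1 the scheme propagates the profile exactly one cell per step, a standard sanity check.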
Bonne, François; Alamir, Mazen; Bonnay, Patrick
2014-01-01
In this paper, a physical method to obtain control-oriented dynamical models of large-scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large-scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of the controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
Li, Wenkai; Guo, Weidong; Xue, Yongkang; Fu, Congbin; Qiu, Bo
2016-10-01
Land surface processes play an important role in the East Asian Summer Monsoon (EASM) system. Parameterization schemes of land surface processes may cause uncertainties in regional climate model (RCM) studies of the EASM. In this paper, we investigate the sensitivity of an RCM to land surface parameterization (LSP) schemes for long-term simulation of the EASM. The Weather Research and Forecasting (WRF) Model coupled with four different LSP schemes (Noah-MP, CLM4, Pleim-Xiu and SSiB), hereafter referred to as Sim-Noah, Sim-CLM, Sim-PX and Sim-SSiB respectively, has been applied for 22-summer EASM simulations. The 22-summer averaged spatial distributions and strengths of downscaled large-scale circulation, 2-m temperature and precipitation are comprehensively compared with ERA-Interim reanalysis and dense station observations in China. Results show that the downscaling ability of the RCM for the EASM is sensitive to the LSP schemes. Furthermore, this study confirms that the RCM does add information to the EASM compared to the reanalysis that imposes the lateral boundary conditions (LBC), because it provides 2-m temperature and precipitation that have higher resolution and are more realistic than the LBC. For 2-m temperature and monsoon precipitation, the Sim-PX and Sim-SSiB simulations are more consistent with observations than those of Sim-Noah and Sim-CLM. To further explore the physical and dynamic mechanisms behind the RCM sensitivity to LSP schemes, differences in the surface energy budget between the simulations of Ens-Noah-CLM (ensemble mean averaging Sim-Noah and Sim-CLM) and Ens-PX-SSiB (ensemble mean averaging Sim-PX and Sim-SSiB) are investigated and their subsequent impacts on the atmospheric circulation are analyzed. It is found that the intensity of the simulated sensible heat flux over the Asian continent in Ens-Noah-CLM is stronger than that in Ens-PX-SSiB, which induces a higher tropospheric temperature over land in Ens-Noah-CLM than in Ens-PX-SSiB.
Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming
2017-05-01
Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, there still exist key challenges that have not yet been resolved for the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of disappearing water depth and (2) inaccurate estimation of velocities and discharges on slopes as a result of the strong nonlinearity of the friction terms. This paper aims to tackle these key research challenges and present a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrains. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depth, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetical rainfall event in the 42 km2 Haltwhistle Burn, UK.
Directory of Open Access Journals (Sweden)
Novriana Sumarti
2015-03-01
The mathematical model for a profit-loss sharing scheme is formulated in order to see how this scheme can replace the traditional practice of lending money against high interest by usurers. It is sourced from the musyarakah method in Islamic Syariah law and implemented for small-scale investments of traditional-market traders. They are the common target of usurers, so they may end up poorer than they were before. The main goal of the model is to find the appropriate portion of profit share, so the investment is profitable not only for the investor but also for the trader. There are three main problems in the process of formulating the mathematical model and finding optimized results. The first problem is providing the appropriate amount of data to be implemented in the model. The second problem is determining the objective function for the optimization of the portion of profit share. The last problem is determining the appropriate values of the parameters for certain types of traders. We found a significant result in determining the appropriate values of the parameters that explain the potential capability of the traders in handling larger amounts of capital to be invested in order to achieve our main goal.
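As a toy illustration of the optimization problem described, the portion of profit share must make the investment worthwhile for the investor without leaving the trader worse off than under usurer interest. All numbers, names and feasibility conditions below are our own simplifications, not the paper's calibrated model:

```python
def profit_shares(profit, f):
    """Split one period's profit under a musyarakah-style scheme:
    fraction f goes to the investor, the rest stays with the trader."""
    return f * profit, (1.0 - f) * profit

def feasible_share(profit, capital, r_invest, r_usurer):
    """Smallest share f (searched on a 0.01 grid) giving the investor at
    least return r_invest on capital, while leaving the trader at least as
    well off as paying usurer interest r_usurer on the same capital.
    Returns None if no such f exists. Rates and grid are illustrative."""
    for k in range(101):
        f = k / 100.0
        investor, trader = profit_shares(profit, f)
        if investor >= r_invest * capital and trader >= profit - r_usurer * capital:
            return f
    return None
```

Feasibility here reduces to r_invest*capital <= f*profit <= r_usurer*capital; the paper's objective function would pick f inside this band using data-fitted trader parameters.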
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, while keeping the property of the MA method that numerical errors remain within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
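The objective minimized by the Remez step can be made concrete: for a staggered-grid first derivative, the mismatch between the stencil's wavenumber response and the exact one is the dispersion error. A sketch (our own code; the fourth-order Taylor coefficients 9/8 and -1/24 are standard textbook values, while the paper's optimal coefficients would replace them):

```python
import math

def staggered_error(coeffs, kh):
    """Dispersion error of a staggered-grid first-derivative stencil at
    normalized wavenumber kh: 2*sum_m c_m*sin((2m-1)*kh/2) should equal kh.
    Minimax-optimized coefficients bound the maximum of |error| over a
    wavenumber band; Taylor coefficients make it vanish to high order
    as kh -> 0."""
    approx = 2.0 * sum(c * math.sin((2 * m + 1) * kh / 2.0)
                       for m, c in enumerate(coeffs))
    return approx - kh

# 4th-order Taylor-expansion coefficients for the staggered first derivative
TE4 = [9.0 / 8.0, -1.0 / 24.0]
```

A minimax scheme would choose `coeffs` to minimize max |staggered_error| over the target band, which is exactly what a Remez exchange algorithm computes.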
Bates, J. R.; Moorthi, S.; Higgins, R. W.
1993-01-01
An adiabatic global multilevel primitive equation model using a two time-level, semi-Lagrangian semi-implicit finite-difference integration scheme is presented. A Lorenz grid is used for vertical discretization and a C grid for the horizontal discretization. The momentum equation is discretized in vector form, thus avoiding problems near the poles. The 3D model equations are reduced by a linear transformation to a set of 2D elliptic equations, whose solution is found by means of an efficient direct solver. The model (with minimal physics) is integrated for 10 days starting from an initialized state derived from real data. A resolution of 16 levels in the vertical is used, with various horizontal resolutions. The model is found to be stable and efficient, and to give realistic output fields. Integrations with time steps of 10 min, 30 min, and 1 h are compared, and the differences are found to be acceptable.
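The essence of the two-time-level semi-Lagrangian step can be sketched in one dimension (our own minimal code with linear interpolation; the model itself is 3-D, semi-implicit and uses higher-order interpolation):

```python
import math

def semi_lagrangian_step(u, courant):
    """One semi-Lagrangian step for u_t + c u_x = 0 on a periodic grid.
    courant = c*dt/dx may exceed 1: the departure point x_i - c*dt is
    traced back and the field is interpolated linearly there, which is
    why the scheme stays stable at long time steps (such as the 10 min
    to 1 h steps compared in the abstract)."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        x_dep = i - courant              # departure point, in grid units
        j = int(math.floor(x_dep))
        w = x_dep - j                    # interpolation weight in [0, 1)
        out[i] = (1.0 - w) * u[j % n] + w * u[(j + 1) % n]
    return out
```

Unlike an Eulerian explicit scheme, nothing here blows up when the Courant number exceeds 1; accuracy, not stability, limits the time step.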
Fan, Xiaolin
2017-01-19
This paper presents a componentwise convex splitting scheme for numerical simulation of multicomponent two-phase fluid mixtures in a closed system at constant temperature, which is modeled by a diffuse interface model equipped with the Van der Waals and the Peng-Robinson equations of state (EoS). The Van der Waals EoS has a rigorous foundation in physics, while the Peng-Robinson EoS is more accurate for hydrocarbon mixtures. First, the phase field theory of thermodynamics and variational calculus are applied to a functional minimization problem of the total Helmholtz free energy. Mass conservation constraints are enforced through Lagrange multipliers. A system of chemical equilibrium equations is obtained which is a set of second-order elliptic equations with extremely strong nonlinear source terms. The steady state equations are transformed into a transient system as a numerical strategy on which the scheme is based. The proposed numerical algorithm avoids the indefiniteness of the Hessian matrix arising from the second-order derivative of the homogeneous contribution of the total Helmholtz free energy; it is also very efficient. This scheme is unconditionally componentwise energy stable and naturally results in unconditional stability for the Van der Waals model. For the Peng-Robinson EoS, it is unconditionally stable through introducing a physics-preserving correction term, which is analogous to the attractive term in the Van der Waals EoS. An efficient numerical algorithm is provided to compute the coefficient in the correction term. Finally, some numerical examples are presented to verify the theoretical results and the efficiency of the established algorithms. The numerical results match well with laboratory data.
SO(12) limit and complete classification of symmetry schemes in proton–neutron interacting boson model
Indian Academy of Sciences (India)
V K B Kota
2003-01-01
It is shown that the proton–neutron interacting boson model (pnIBM) admits new symmetry limits with the SO(12) algebra which break F-spin but preserve the quantum number M_F. The generators of SO(12) are derived and the quantum number of SO(12) for a given boson number is determined by identifying the corresponding quasi-spin algebra. The SO(12) algebra generates two symmetry schemes and for both of them, complete classification of the basis states and typical spectra are given. With the SO(12) algebra identified, complete classification of pnIBM symmetry limits with good M_F is established.
Neelin, J David; Peters, Ole; Lin, Johnny W-B; Hales, Katrina; Holloway, Christopher E
2008-07-28
Convective quasi-equilibrium (QE) has for several decades stood as a key postulate for parametrization of the impacts of moist convection at small scales upon the large-scale flow. Departures from QE have motivated stochastic convective parametrization, which in its early stages may be viewed as a sensitivity study. Introducing plausible stochastic terms to modify the existing convective parametrizations can have substantial impact, but, as for so many aspects of convective parametrization, the results are sensitive to details of the assumed processes. We present observational results aimed at helping to constrain convection schemes, with implications for each of conventional, stochastic or 'superparametrization' schemes. The original vision of QE due to Arakawa fares well as a leading approximation, but with a number of updates. Some, like the imperfect connection between the boundary layer and the free troposphere, and the importance of free-tropospheric moisture to buoyancy, are quantitatively important but lie within the framework of ensemble-average convection slaved to the large scale. Observations of critical phenomena associated with a continuous phase transition for precipitation as a function of water vapour and temperature suggest a more substantial revision. While the system's attraction to the critical point is predicted by QE, several fundamental properties of the transition, including high precipitation variance in the critical region, need to be added to the theory. Long-range correlations imply that this variance does not reduce quickly under spatial averaging; scaling associated with this spatial averaging has potential implications for superparametrization. Long tails of the distribution of water vapour create relatively frequent excursions above criticality with associated strong precipitation events.
Koulakov, I. Yu.
2009-04-01
Seismic tomography is like a photograph taken by a camera with deformed and blurred lenses. In the resulting tomograms, the colors (amplitudes of anomalies) and shapes of objects are often strongly biased and usually do not represent reality. We propose an approach that allows investigating the properties of the "camera" and retrieving the most probable shapes and amplitudes of anomalies in the real Earth. The main idea of this approach is to construct a synthetic model which, after forward modeling and tomographic inversion, reproduces the same amplitudes and shapes of patterns as the inversion of the observed data. In this modeling, the conditions of the tomographic inversion (damping, grid spacing, source location parameters, etc.) should be absolutely identical to those used in the observed data processing. Any a priori information, if available, should be taken into account in this modeling to decrease the uncertainty related to the fundamental non-uniqueness of the inversion problem. In the talk, several examples of applying this approach at various scales for different data schemes are presented: (1) a regional scheme which uses the global data of the ISC catalogue (with examples of regional upper mantle models in Europe and central Asia); (2) a local earthquake tomography scheme (illustrated with models in the Toba caldera area and in Central Java); (3) seismic profiling based on active source refraction travel time data (with examples of several deep seismic sounding profiles in the Central Pacific and in subduction zones in Chile).
Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.
2017-05-01
We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
Ricciuto, D. M.; Yang, X.; Thornton, P. E.
2015-12-01
Soils contain the largest pool of carbon in terrestrial ecosystems. Soil carbon dynamics and the associated nutrient dynamics play significant roles in regulating the global carbon cycle and atmospheric CO2 concentrations. Our capability to predict future climate change depends to a large extent on a well-constrained representation of soil carbon dynamics in ESMs. Here we evaluate two decomposition schemes - converging trophic cascade (CTC) and Century - in CLM4.5/ACME V0 using data from the Long-term Intersite Decomposition Experiment Team (LIDET), radiocarbon (14C) observations, and the Harmonized World Soil Database (HWSD). For the evaluation against LIDET, we exercise the full CLM4.5/ACME V0 land model, including seasonal variability in nitrogen limitation and environmental scalars (temperature, moisture, O2), in order to represent the LIDET experiment in a realistic way. We show that the proper design of model experiments is crucial to model evaluation using data from field experiments such as LIDET. We also use 14C profile data at 10 sites to evaluate the performance of the CTC and Century decomposition schemes. We find that the 14C profiles at these sites are most sensitive to the depth-dependent decomposition parameters, consistent with previous studies.
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.; Goldberg, Mitchell D.
2011-11-01
Several bulk water microphysics schemes are available within the Weather Research and Forecasting (WRF) model, with different numbers of simulated hydrometeor classes and different methods for estimating their size distributions, fall speeds and densities. The Stony Brook University (SBU-YLIN) microphysics scheme is a 5-class scheme with riming intensity predicted to account for mixed-phase processes. In this paper, we develop an efficient graphics processing unit (GPU) based SBU-YLIN scheme. The GPU-based SBU-YLIN scheme is compared to a CPU-based single-threaded counterpart. The implementation achieves a 213x speedup with I/O compared to a Fortran implementation running on a CPU. Without I/O the speedup is 896x.
Zhao, Jia; Yang, Xiaofeng; Shen, Jie; Wang, Qi
2016-01-01
We develop a linear, first-order, decoupled, energy-stable scheme for a binary hydrodynamic phase field model of mixtures of nematic liquid crystals and viscous fluids that satisfies an energy dissipation law. We show that the semi-discrete scheme in time satisfies an analogous, semi-discrete energy-dissipation law for any time-step and is therefore unconditionally stable. We then discretize the spatial operators in the scheme by a finite-difference method and implement the fully discrete scheme in a simplified version using CUDA on GPUs in 3 dimensions in space and time. Two numerical examples for rupture of nematic liquid crystal filaments immersed in a viscous fluid matrix are given, illustrating the effectiveness of this new scheme in resolving complex interfacial phenomena in free surface flows of nematic liquid crystals.
Directory of Open Access Journals (Sweden)
C. Knote
2012-10-01
Clouds are reaction chambers for atmospheric trace gases and aerosols, and the associated precipitation is a major sink for atmospheric constituents. The regional chemistry-climate model COSMO-ART has been lacking a description of wet scavenging of gases and aqueous-phase chemistry. In this work we present a coupling of COSMO-ART with a wet scavenging and aqueous-phase chemistry scheme. The coupling is made consistent with the cloud microphysics scheme of the underlying meteorological model COSMO. While the choice of the aqueous-chemistry mechanism is flexible, the effects of a simple sulfur oxidation scheme are shown in the application of the coupled system in this work. We give details explaining the coupling and extensions made, then present results from idealized flow-over-hill experiments in a 2-D model setup and finally results from a full 3-D simulation. Comparison against measurement data shows that the scheme efficiently reduces SO2 trace gas concentrations by 0.3 ppbv (-30%) on average, while leaving O3 and NOx unchanged. PM10 aerosol mass, which has been overestimated previously, is now in much better agreement with measured values due to a stronger scavenging of coarse particles. While total PM2.5 changes only little, chemical composition is improved notably. Overestimations of nitrate aerosols are reduced by typically 0.5-1 μg m-3 (up to -2 μg m-3 in the Po Valley) while sulfate mass is increased by 1-1.5 μg m-3 on average (up to 2.5 μg m-3 in Eastern Europe). The effect of cloud processing of aerosols on their size distribution, i.e. a shift towards larger diameters, is observed. Compared against wet deposition measurements the system underestimates the total wet deposited mass for the simulated case study. We find that while evaporation of cloud droplets dominates in higher altitudes, evaporation of precipitation can contribute up to 50% of total evaporated mass near the surface.
Knote, C.; Brunner, D.
2012-10-01
Clouds are reaction chambers for atmospheric trace gases and aerosols, and the associated precipitation is a major sink for atmospheric constituents. The regional chemistry-climate model COSMO-ART has been lacking a description of wet scavenging of gases and aqueous-phase chemistry. In this work we present a coupling of COSMO-ART with a wet scavenging and aqueous-phase chemistry scheme. The coupling is made consistent with the cloud microphysics scheme of the underlying meteorological model COSMO. While the choice of the aqueous-chemistry mechanism is flexible, the effects of a simple sulfur oxidation scheme are shown in the application of the coupled system in this work. We give details explaining the coupling and extensions made, then present results from idealized flow-over-hill experiments in a 2-D model setup and finally results from a full 3-D simulation. Comparison against measurement data shows that the scheme efficiently reduces SO2 trace gas concentrations by 0.3 ppbv (-30%) on average, while leaving O3 and NOx unchanged. PM10 aerosol mass, which has been overestimated previously, is now in much better agreement with measured values due to a stronger scavenging of coarse particles. While total PM2.5 changes only little, chemical composition is improved notably. Overestimations of nitrate aerosols are reduced by typically 0.5-1 μg m-3 (up to -2 μg m-3 in the Po Valley) while sulfate mass is increased by 1-1.5 μg m-3 on average (up to 2.5 μg m-3 in Eastern Europe). The effect of cloud processing of aerosols on their size distribution, i.e. a shift towards larger diameters, is observed. Compared against wet deposition measurements the system underestimates the total wet deposited mass for the simulated case study. We find that while evaporation of cloud droplets dominates in higher altitudes, evaporation of precipitation can contribute up to 50% of total evaporated mass near the surface.
Yan, Hongyong; Yang, Lei; Li, Xiang-Yang
2016-12-01
High-order staggered-grid finite-difference (SFD) schemes have been widely used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients for spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which only guarantees great accuracy at small wavenumbers for wave equation modeling. Some conventional optimization methods can achieve high accuracy at large wavenumbers, but they hardly guarantee small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying a combination of the TE and a sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and the optimal method, respectively. When the appropriate number of and interval between the sampling points are chosen, these optimal schemes have extremely high accuracy at small wavenumbers, and can also guarantee small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate that the optimal ESFD and ISFD schemes can efficiently suppress numerical dispersion and significantly improve the modeling accuracy compared to the TE-based ESFD and ISFD schemes.
Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.
2016-06-01
We introduce a schematic formalism for the time evolution of a random population whose members enter some set of classes and then evolve among these classes according to a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method is more accurate in describing the behavior of the population when compared to the observed values in a direct simulation of the Markov chain.
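The class-evolution mechanism described can be sketched as follows (Python; the two-class layout, the entry class and the transition matrix below are illustrative assumptions, not the bank's calibrated model):

```python
import random

def evolve(counts, P, arrivals, rng):
    """One period of the population model: each member currently in class i
    moves to class j with probability P[i][j], then `arrivals` new members
    (one value of the incoming time series) enter class 0."""
    k = len(counts)
    new = [0] * k
    for i in range(k):
        for _ in range(counts[i]):
            j = rng.choices(range(k), weights=P[i])[0]
            new[j] += 1
    new[0] += arrivals
    return new
```

Feeding `arrivals` from a fitted ARIMA model, or from a sigmoid trend plus SARMA residuals, reproduces the two simulation variants compared in the abstract.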
Directory of Open Access Journals (Sweden)
Yandy G. Mayor
2015-01-01
This paper evaluates the sensitivity to cumulus and microphysics schemes, as represented in numerical simulations of the Weather Research and Forecasting model, in characterizing a deep convection event over the island of Cuba on 1 May 2012. To this end, 30 experiments combining five cumulus and six microphysics schemes, in addition to two experiments in which the cumulus parameterization was turned off, are tested in order to choose the combination that represents the event precipitation most accurately. ERA-Interim is used as lateral boundary condition data for the downscaling procedure. Results show that the convective schemes are more important than the microphysics schemes for determining the precipitation areas within a high-resolution domain simulation. Also, while one cumulus scheme captures the overall spatial convective structure of the event more accurately than the others, it fails to capture the precipitation intensity. This apparent discrepancy leads to a sensitivity related to the verification method used to rank the scheme combinations. This sensitivity is also observed in a comparison between parameterized and explicit cumulus formation when the Kain-Fritsch scheme was used. A loss of added value is also found when the Grell-Freitas cumulus scheme was activated at 1 km grid spacing.
DEFF Research Database (Denmark)
Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela
2013-01-01
Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...
Xia, Ting; Kustas, William P.; Anderson, Martha C.; Alfieri, Joseph G.; Gao, Feng; McKee, Lynn; Prueger, John H.; Geli, Hatim M. E.; Neale, Christopher M. U.; Sanchez, Luis; Mar Alsina, Maria; Wang, Zhongjing
2016-04-01
Thermal and multispectral remote sensing data from low-altitude aircraft can provide the high spatial resolution necessary for sub-field (≤ 10 m) and plant canopy (≤ 1 m) scale evapotranspiration (ET) monitoring. In this study, high-resolution (sub-meter-scale) thermal infrared and multispectral shortwave data from aircraft are used to map ET over vineyards in central California with the two-source energy balance (TSEB) model and with a simple model having immediate operational capabilities called DATTUTDUT (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature). The latter uses contextual information within the image to scale between radiometric land surface temperature (TR) values representing the hydrologic limits of potential ET and a non-evaporative surface. Imagery from 5 days throughout the growing season is used for mapping ET at the sub-field scale. The performance of the two models is evaluated using tower-based measurements of sensible (H) and latent heat (LE) flux or ET. The comparison indicates that TSEB was able to derive reasonable ET estimates under varying conditions, likely due to the physically based treatment of the energy and surface temperature partitioning between the soil/cover crop inter-row and vine canopy elements. On the other hand, DATTUTDUT performance was somewhat degraded, presumably because the simple scaling scheme does not consider differences in the two sources (vine and inter-row) of heat and temperature contributions or the effect of surface roughness on the efficiency of heat exchange. Maps of the evaporative fraction (EF = LE/(H + LE)) from the two models had similar spatial patterns but different magnitudes in some areas within the fields on certain days. Large EF discrepancies between the models were found on 2 of the 5 days (DOY 162 and 219), when there were significant differences with the tower-based ET measurements, particularly for the DATTUTDUT model.
High-order scheme for the source-sink term in a one-dimensional water temperature model
Jing, Zheng; Kang, Ling
2017-01-01
The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
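The two-step splitting structure described in this abstract can be sketched as follows. This is a minimal illustration that uses a simple explicit source update and zero-flux boundaries; the paper's actual high-order source-sink discretization via the undetermined coefficient method is not reproduced here, and the exponentially decaying source profile, grid sizes and coefficients are assumptions.

```python
import numpy as np

# Two-step operator splitting for  dT/dt = D d2T/dz2 + S(z):
# step 1 updates the source-sink term, step 2 the diffusion term
# with Crank-Nicolson, as in the structure the abstract describes.

def step(T, D, S, dz, dt):
    n = len(T)
    # Step 1: source-sink term (explicit here; the paper uses a high-order scheme)
    T = T + dt * S
    # Step 2: diffusion via Crank-Nicolson, solved as a linear system
    r = D * dt / (2 * dz**2)
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    # Zero-flux (insulated) boundaries for this sketch
    A[0, 0] = A[-1, -1] = 1 + r
    B[0, 0] = B[-1, -1] = 1 - r
    return np.linalg.solve(A, B @ T)

z = np.linspace(0, 10, 51)          # depth (m)
T = np.full_like(z, 15.0)           # initial water temperature (degC)
S = 0.5 * np.exp(-0.5 * z)          # solar heating decaying with depth (K/day)
for _ in range(30):                 # 30 daily steps
    T = step(T, D=0.05, S=S, dz=z[1] - z[0], dt=1.0)
```

After 30 days the near-surface water has warmed relative to the bottom, reflecting the depth-decaying radiative source the abstract emphasizes.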
Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng
2016-10-14
In high-density sensor networks, scheduling some sensor nodes to be in sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.
Ren, Yihui; Eubank, Stephen; Nath, Madhurima
2016-10-01
Network reliability is the probability that a dynamical system composed of discrete elements interacting on a network will be found in a configuration that satisfies a particular property. We introduce a reliability property, Ising feasibility, for which the network reliability is the Ising model's partition function. As shown by Moore and Shannon, the network reliability can be separated into two factors: structural, solely determined by the network topology, and dynamical, determined by the underlying dynamics. In this case, the structural factor is known as the joint density of states. Using methods developed to approximate the structural factor for other reliability properties, we simulate the joint density of states, yielding an approximation for the partition function. Based on a detailed examination of why naïve Monte Carlo sampling gives a poor approximation, we introduce a parallel scheme for estimating the joint density of states using a Markov-chain Monte Carlo method with a spin-exchange random walk. This parallel scheme makes simulating the Ising model in the presence of an external field practical on small computer clusters for networks with arbitrary topology with ~10^6 energy levels and more than 10^308 microstates.
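The separation into a structural factor (the joint density of states) and a dynamical factor can be illustrated on a toy graph. The sketch below computes g(E, M) by brute-force enumeration on an assumed 4-node cycle and recovers the partition function in an external field from it; the paper instead estimates g with a parallel spin-exchange Markov-chain Monte Carlo method, which this sketch does not attempt.

```python
import itertools
import math
from collections import Counter

# The joint density of states g(E, M) depends only on the graph
# (the "structural" factor); the Boltzmann weight exp(-beta*(E - h*M))
# is the "dynamical" factor.  Here g is enumerated exactly on a tiny graph.

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # assumed toy graph: 4-node cycle
n = 4

g = Counter()                               # g[(E, M)] = number of microstates
for spins in itertools.product([-1, 1], repeat=n):
    E = -sum(spins[i] * spins[j] for i, j in edges)   # coupling J = 1
    M = sum(spins)                                     # magnetization
    g[(E, M)] += 1

def partition_function(beta, h):
    # Z(beta, h) = sum over (E, M) of g(E, M) * exp(-beta * (E - h*M))
    return sum(cnt * math.exp(-beta * (E - h * M)) for (E, M), cnt in g.items())

Z = partition_function(beta=0.5, h=0.2)
```

Note that once g is known (or estimated), Z can be evaluated for any temperature and field without revisiting the microstates, which is the practical payoff of the factorization.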
Directory of Open Access Journals (Sweden)
Tianhui Meng
2016-09-01
Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing between security and the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs that takes the security-performance tradeoff into account. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model is put forward, and the tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and the optimal rekeying rate of the performance and security tradeoff can be obtained.
Duvel, Jean Philippe; Camargo, Suzana; Sobel, Adam
2016-04-01
Modifications of the large-scale environment related to intraseasonal (MJO) and interannual (ENSO) time-scale variability or to global climate warming may have important impacts on tropical cyclonic activity. This sensitivity of tropical cyclones (TCs) to environmental changes can now be studied using General Circulation Models (GCMs). Before doing sensitivity studies with a GCM, it is of interest to assess the representation of TC activity for different configurations (resolution, parameterization) of the GCM under present climate conditions and to trace possible causes of bias in TC number, location or strength. A possible approach to this assessment is to separate initiation and intensification processes. By using either GCM output or meteorological analyses combined with TC observation databases, it is possible to study the conditions of formation of tropical depression vortices (TDVs) at an early stage and their possible intensification into a TC (say, the Tropical Storm stage). We use the LMDZ GCM to study the sensitivity of TDV characteristics to different entrainment and closure formulations of the convective scheme. The study focuses on the tropical North Atlantic using the "zoom" capability of the LMDZ GCM. The horizontal resolution of the model is set to 0.75° over a large region of the North Atlantic and West Africa. The GCM is free to run in this region and is relaxed toward ERA-Interim reanalysis outside that region, with intermediate relaxation times in between. We use the Tiedtke convective scheme with entrainment and closure based on moisture convergence, or with an entrainment based on the relative humidity of the environment and, additionally, a closure based on CAPE. Each configuration is run for 10 years between 2000 and 2009 with prescribed SST. In summary, the convective entrainment based on the relative humidity of the environment deepens the TDVs in LMDZ, resulting in more TDVs and TCs. The convective closure mitigates this tendency and
Conceptual design and modeling of a six-dimensional bunch merging scheme for a muon collider
Bao, Yu; Hanson, Gail; Palmer, Robert B.; Stratakis, Diktys
2016-03-01
A high luminosity muon collider requires single, intense, muon bunches with small emittances: just one of each sign. An efficient front end and a cooling channel have been designed and simulated within the collaboration of the Muon Accelerator Program. The muons are first bunched and phase rotated into 21 bunches, and then cooled in six dimensions. When they are cool enough, they are merged into single bunches: one of each sign. The bunch merging scheme has been outlined with preliminary simulations in previous studies. In this paper we present a comprehensive design with its end-to-end simulation. The 21 bunches are first merged in longitudinal phase space into seven bunches. These are directed into seven "trombone" paths with different lengths, to bring them to the same time, and then merged transversely in a collecting "funnel" into the required single larger bunches. Detailed numerical simulations show that the 6D emittance of the resulting bunch reaches the parameters needed for high acceptance into the downstream cooling channel.
Tateo, Andrea; Marcello Miglietta, Mario; Fedele, Francesca; Menegotto, Micaela; Monaco, Alfonso; Bellotti, Roberto
2017-04-01
The Weather Research and Forecasting mesoscale model (WRF) was used to simulate hourly 10 m wind speed and direction over the city of Taranto, Apulia region (south-eastern Italy). This area is characterized by a large industrial complex, including the largest European steel plant, and is subject to a Regional Air Quality Recovery Plan. This plan constrains industries in the area to reduce mean daily emissions from diffuse and point sources by 10 % during specific meteorological conditions named wind days. According to the Recovery Plan, the Regional Environmental Agency ARPA-PUGLIA is responsible for forecasting these specific meteorological conditions 72 h in advance and, possibly, issuing an early warning. In particular, an accurate wind simulation is required. Unfortunately, numerical weather prediction models suffer from errors, especially for near-surface fields. These errors depend primarily on uncertainties in the initial and boundary conditions provided by global models and secondarily on the model formulation, in particular the physical parameterizations used to represent processes such as turbulence, radiation exchange, cumulus convection and microphysics. In our work, we tried to compensate for the latter limitation by using different Planetary Boundary Layer (PBL) parameterization schemes. Five combinations of PBL and Surface Layer (SL) schemes were considered. Simulations are implemented in a real-time configuration, since our intention is to analyze the same configuration implemented by ARPA-PUGLIA for operational runs; the validation is focused on a time range extending from 49 to 72 h with hourly time resolution. Performance was assessed by comparing the WRF model output with ground data measured at a weather monitoring station in Taranto, near the steel plant. After the analysis of the simulations performed with different PBL schemes, both simple (e.g. average) and more complex post-processing methods (e.g. weighted average
An Importance Sampling Scheme for Models in a Strong External Field
Molkaraie, Mehdi
2015-01-01
We propose Monte Carlo methods to estimate the partition function of the two-dimensional Ising model in the presence of an external magnetic field. The estimation is done in the dual of the Forney factor graph representing the model. The proposed methods can efficiently compute an estimate of the partition function in a wide range of model parameters. As an example, we consider models that are in a strong external field.
A general scheme for training and optimization of the Grenander deformable template model
DEFF Research Database (Denmark)
Fisker, Rune; Schultz, Nette; Duta, N.
2000-01-01
parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using...... a filter interpretation of the likelihood model....
Line Differential Protection Scheme Modelling for Underground 420 kV Cable Systems
DEFF Research Database (Denmark)
Sztykiel, Michal; Bak, Claus Leth; Dollerup, Sebastian
2011-01-01
Based on the analysis of a specific relay model and an HVAC (High Voltage Alternating Current) cable system, a detailed approach to EMTDC/PSCAD modelling of protective relays is presented. Such approach allows to create complex and accurate relay models derived from the original algorithms. Relay...
Line Differential Protection Scheme Modelling for Underground 420 kV Cable Systems
DEFF Research Database (Denmark)
Sztykiel, Michal; Bak, Claus Leth; Wiechowski, Wojciech
2010-01-01
Based on the analysis of a specific relay model and an HVAC (High Voltage Alternating Current) cable system, a new approach to EMTDC/PSCAD modelling of protective relays is presented. Such approach allows to create complex and accurate relay models derived from the original algorithms. Relay mode...
Directory of Open Access Journals (Sweden)
A. Petroff
2010-08-01
A size-resolved particle dry deposition scheme is developed, designed for inclusion in large-scale air quality and climate models, where the size distribution and fate of the atmospheric aerosol are of concern. The "resistance" structure is similar to that proposed by Zhang et al. (2001, 2003), while a new "surface" deposition velocity (or surface resistance) is derived by simplification of a one-dimensional aerosol transport model (Petroff et al., 2008b, 2009). Collection efficiencies are given for the 26 Land Use Categories that describe the earth surface. Validation of this model against existing measurements is performed for desert, grass, coniferous forest and liquid water surfaces. A comparison of this model with measurements on snow and ice is also given. Even though qualitative agreement is reached, further size-segregated measurements are needed in order to confirm the model accuracy on these surfaces. The present analytical model provides more accurate predictions of aerosol deposition on these surfaces than previous models.
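The series-resistance structure common to these deposition schemes combines gravitational settling with aerodynamic and surface resistances, roughly vd = vg + 1/(ra + rs). The sketch below uses Stokes settling for vg and placeholder resistance values; it is a generic illustration of the resistance analogy, not the surface-resistance expression actually derived by Petroff et al.

```python
# Resistance-analogy sketch of size-resolved dry deposition velocity:
# vd = vg + 1/(ra + rs).  Particle density and resistances are assumed
# illustrative values, not parameters from the scheme itself.

def settling_velocity(dp, rho_p=1500.0):
    """Stokes gravitational settling velocity (m/s) for diameter dp (m)."""
    mu, g = 1.8e-5, 9.81          # air dynamic viscosity (Pa s), gravity (m/s2)
    return rho_p * dp**2 * g / (18 * mu)

def deposition_velocity(dp, ra, rs):
    """vd = vg + 1/(ra + rs), with resistances ra, rs in s/m."""
    return settling_velocity(dp) + 1.0 / (ra + rs)

vd_fine = deposition_velocity(dp=1e-7, ra=50.0, rs=500.0)    # 0.1 um particle
vd_coarse = deposition_velocity(dp=1e-5, ra=50.0, rs=500.0)  # 10 um particle
```

The coarse particle deposits faster because settling dominates at large sizes, while the fine particle's deposition is controlled almost entirely by the resistances, which is why the surface-resistance formulation matters most for accumulation-mode aerosol.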
Amundsen, David S.; Mayne, Nathan J.; Baraffe, Isabelle; Manners, James; Tremblin, Pascal; Drummond, Benjamin; Smith, Chris; Acreman, David M.; Homeier, Derek
2016-10-01
To study the complexity of hot Jupiter atmospheres revealed by observations of increasing quality, we have adapted the UK Met Office Global Circulation Model (GCM), the Unified Model (UM), to these exoplanets. The UM solves the full 3D Navier-Stokes equations with a height-varying gravity, avoiding the simplifications used in most GCMs currently applied to exoplanets. In this work we present the coupling of the UM dynamical core to an accurate radiation scheme based on the two-stream approximation and correlated-k method with state-of-the-art opacities from ExoMol. Our first application of this model is devoted to the extensively studied hot Jupiter HD 209458b. We have derived synthetic emission spectra and phase curves, and compare them to both previous models also based on state-of-the-art radiative transfer, and to observations. We find a reasonable agreement between observations and both our dayside emission and hot-spot offset; however, our nightside emission is too large. Overall our results are qualitatively similar to those found by Showman et al. (2009, ApJ, 699, 564) with the SPARC/MITgcm; however, we note several quantitative differences: Our simulations show significant variation in the position of the hottest part of the atmosphere with pressure, as expected from simple timescale arguments, and in contrast to the "vertical coherency" found by Showman et al. (2009). We also see significant quantitative differences in calculated synthetic observations. Our comparisons strengthen the need for detailed intercomparisons of dynamical cores, radiation schemes and post-processing tools to understand these differences. This effort is necessary in order to make robust conclusions about these atmospheres based on GCM results.
Mihailovic, D. T.; Alapaty, K.; Lalic, B.; Arsenic, I.; Rajkovic, B.; Malinovic, S.
2004-10-01
A method for estimating profiles of turbulent transfer coefficients inside a vegetation canopy and their use in calculating the air temperature inside tall grass canopies in land surface schemes for environmental modeling is presented. The proposed method, based on K theory, is assessed using data measured in a maize canopy. The air temperature inside the canopy is determined diagnostically by a method based on detailed consideration of 1) calculations of turbulent fluxes, 2) the shape of the wind and turbulent transfer coefficient profiles, and 3) calculation of the aerodynamic resistances inside tall grass canopies. An expression for calculating the turbulent transfer coefficient inside sparse tall grass canopies is also suggested, including modification of the corresponding equation for the wind profile inside the canopy. The proposed calculations of K-theory parameters are tested using the Land Air Parameterization Scheme (LAPS). Model outputs of air temperature inside the canopy for 8–17 July 2002 are compared with micrometeorological measurements inside a sunflower field at the Rimski Sancevi experimental site (Serbia). To demonstrate how changes in the specification of canopy density affect the simulation of air temperature inside tall grass canopies and, thus, alter the growth of PBL height, numerical experiments are performed with LAPS coupled with a one-dimensional PBL model over a sunflower field. To examine how the turbulent transfer coefficient inside tall grass canopies over a large domain represents the influence of the underlying surface on the air layer above, sensitivity tests are performed using a coupled system consisting of the NCEP Nonhydrostatic Mesoscale Model and LAPS.
Comparison of statistical models to estimate daily milk yield in single milking testing schemes
Directory of Open Access Journals (Sweden)
Marija Klopčič
2010-01-01
Different statistical models were compared to estimate daily milk yield from morning or evening milking test results. The experiment was conducted on 14 family farms with 325 recorded cows. The amount of explained variance was higher for models including the effects of partial milk yield, the interval between successive milkings, the interaction between partial milk yield and the milking interval, and the farm (R2 = 0.976 for AM, R2 = 0.956 for PM) than for models including the partial milk yield effect only (R2 = 0.957 for AM, R2 = 0.937 for PM). Estimates of daily milk yield from linear models were more accurate than those obtained by doubling single milking weights. The results show that the more complex model gives the best fit to the data. Differences between models in the determination and correlation coefficients were minor. Further investigations on larger sets of data are needed to draw more general conclusions.
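The comparison between doubling a single milking weight and a linear model that also uses the milking interval and the interaction term can be sketched on synthetic data. All coefficients, noise levels and ranges below are assumptions for illustration, not values from the study:

```python
import numpy as np

# Synthetic single-milking data: the AM partial yield is roughly the daily
# yield prorated by the overnight interval, plus measurement noise.
rng = np.random.default_rng(0)
n = 200
interval = rng.uniform(10, 14, n)                    # hours since previous milking
daily = rng.uniform(15, 35, n)                       # true 24-h yield (kg)
am = daily * interval / 24 + rng.normal(0, 0.5, n)   # observed AM partial yield

# Naive estimator: double the single milking weight.
doubled = 2 * am

# Linear model: daily ~ b0 + b1*am + b2*interval + b3*(am*interval),
# i.e. partial yield, milking interval, and their interaction.
X = np.column_stack([np.ones(n), am, interval, am * interval])
coef, *_ = np.linalg.lstsq(X, daily, rcond=None)
fitted = X @ coef

rmse_doubled = np.sqrt(np.mean((doubled - daily) ** 2))
rmse_model = np.sqrt(np.mean((fitted - daily) ** 2))
```

Because the doubling rule ignores how far the interval deviates from 12 h, its error grows with that deviation, while the interaction term lets the regression absorb most of it, mirroring the ordering the abstract reports.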
Directory of Open Access Journals (Sweden)
Shahriar Afandizadeh
2016-02-01
Congestion pricing has been recognized as an effective countermeasure in the practical field of urban traffic congestion mitigation. Despite the positive effects of congestion pricing, its implementation has faced problems. This paper investigates the issue of environmental equity in cordon pricing and a park-and-ride scheme. Although pollution decreases inside the cordon with the implementation of cordon pricing, air pollutant emissions may increase in some links and in the whole network. Therefore, an increase in air emissions in the network means more emissions outside the cordon. In fact, due to the implementation of this policy, air pollutant emissions may transfer from inside to outside the cordon, creating a type of environmental inequity. To reduce this inequity, a bi-level optimization model with an equity constraint is developed. The proposed solution algorithm, based on the second version of the strength Pareto evolutionary algorithm (SPEA2), is applied to the city network of Tehran. The results revealed that it seems reasonable to consider environmental equity as an objective function in cordon pricing. In addition, we can create a sustainable situation for the transportation system by improving environmental inequity with a relatively low reduction in social welfare. Moreover, there are environmental inequity impacts in real networks, which should be considered in cordon pricing schemes.
Directory of Open Access Journals (Sweden)
Lee Jie-Min
2006-03-01
Background: This study evaluates the impact of an increase in cigarette tax in Taiwan in terms of its effects on the overall economy and the health benefits that it brings. Methods: A multisector computable general equilibrium (CGE) model was used to simulate the impact of reduced cigarette consumption resulting from the new tax scheme on the entire economy and on health benefits. Results: The results predict that because of the new tax scheme, there should be a marked reduction in cigarette consumption and a notable increase in health benefits, including saving between 28,125 and 56,250 lives. This could save NT$1.222~2.445 billion (where US$1 = NT$34.6) annually in life-threatening, cigarette-related health insurance expenses, which exceeds the projected decrease of NT$1.275 billion in Gross Domestic Product (GDP) caused by reduced consumption and therefore tax revenue. Conclusion: Overall, the increased cigarette excise tax will be beneficial in terms of both the health of the general public and the economy as a whole.
Kiessling, Jonas
2014-05-06
Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition, it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.
Gao, Min
2014-09-01
In this paper, we develop an efficient numerical method for the two-phase moving contact line problem with variable density, viscosity, and slip length. The physical model is based on a phase field approach, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,5]. To overcome the difficulties due to large density and viscosity ratios, the Navier-Stokes equations are solved by a splitting method based on a pressure Poisson equation [11], while the Cahn-Hilliard equation is solved by a convex splitting method. We show that the method is stable under certain conditions. The linearized schemes are easy to implement and introduce only a mild CFL time constraint. Numerical tests are carried out to verify the accuracy, stability and efficiency of the schemes. The method allows us to simulate interface problems with extremely small interface thickness. Three-dimensional simulations are included to validate the efficiency of the method. © 2014 Elsevier Inc.
Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.
2014-01-01
Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of standardiz
A new flexible plug and play scheme for modeling, simulating, and predicting gastric emptying
Krishnan, S.; Avesaat, M. van; Troost, F.J.; Hendriks, H.F.J.; Graaf, A.A. de
2014-01-01
In-silico models that attempt to capture and describe the physiological behavior of biological organisms, including humans, are intrinsically complex and time consuming to build and simulate in a computing environment. The level of detail of description incorporated in the model depends on the
A New Scheme for Experimental-Based Modeling of a Traveling Wave Ultrasonic Motor
DEFF Research Database (Denmark)
Mojallali, Hamed; Amini, R.; Izadi-Zamanabadi, Roozbeh
2005-01-01
In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit...
A Trust Scheme Based DRM Model for P2P System
Institute of Scientific and Technical Information of China (English)
XIAO Shangqin; LU Zhengding; LING Hefei; ZOU Fuhao
2006-01-01
With the maturation of P2P technology, there are more and more challenges to the protection of digital rights. DRM for the traditional Client/Server model cannot satisfy the requirements of digital rights protection in P2P networks. Combining practical P2P network technology with new-generation DRM technology, this article proposes a DRM model for P2P systems based on trust degree. The proposed model shares the secret key among trusted peers and provides the necessary security assurance for digital content delivery based on P2P technology. Mathematical analyses and simulations show that, compared to current DRM models, the proposed model is more robust in terms of transmission tolerance and security.
Chang, Chih-Hao; Liou, Meng-Sing
2007-07-01
In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. However, conservative form is lost in these balance equations when considering each individual phase; in fact, the interactions that exist simultaneously in both phases manifest themselves as nonconservative terms. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.
A test of an optimal stomatal conductance scheme within the CABLE land surface model
De Kauwe, M. G.; Kala, J.; Lin, Y.-S.; Pitman, A. J.; Medlyn, B. E.; Duursma, R. A.; Abramowitz, G.; Wang, Y.-P.; Miralles, D. G.
2015-02-01
Stomatal conductance (gs) affects the fluxes of carbon, energy and water between the vegetated land surface and the atmosphere. We test an implementation of an optimal stomatal conductance model within the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model (LSM). In common with many LSMs, CABLE does not differentiate between gs model parameters in relation to plant functional type (PFT), but instead only in relation to photosynthetic pathway. We constrained the key model parameter "g1", which represents plant water use strategy, by PFT, based on a global synthesis of stomatal behaviour. As proof of concept, we also demonstrate that the g1 parameter can be estimated using two long-term average (1960-1990) bioclimatic variables: (i) temperature and (ii) an indirect estimate of annual plant water availability. The new stomatal model, in conjunction with PFT parameterisations, resulted in a large reduction in annual fluxes of transpiration (~ 30% compared to the standard CABLE simulations) across evergreen needleleaf, tundra and C4 grass regions. Differences in other regions of the globe were typically small. Model performance against upscaled data products was not degraded, but did not noticeably reduce existing model-data biases. We identified assumptions relating to the coupling of the vegetation to the atmosphere and the parameterisation of the minimum stomatal conductance as areas requiring further investigation in both CABLE and potentially other LSMs. We conclude that optimisation theory can yield a simple and tractable approach to predicting stomatal conductance in LSMs.
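The optimal stomatal model tested here is that of Medlyn et al. (2011), in which gs = g0 + 1.6 (1 + g1/√D) A/Ca and the g1 parameter encodes the plant water-use strategy that the study constrains by PFT. A minimal sketch with illustrative g1 values (not the paper's calibrated numbers):

```python
import math

# Medlyn et al. (2011) optimal stomatal conductance:
#   gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca
# A:  net assimilation (umol m-2 s-1)
# D:  vapour pressure deficit (kPa)
# Ca: atmospheric CO2 concentration (umol mol-1)
# g1: water-use strategy parameter (kPa^0.5), constrained per PFT in the study

def stomatal_conductance(A, D, Ca, g1, g0=0.0):
    """Stomatal conductance to water vapour (mol m-2 s-1)."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# A water-conservative PFT (low g1) vs. a more profligate one (high g1);
# the g1 values are assumptions for illustration only.
gs_needleleaf = stomatal_conductance(A=10.0, D=1.5, Ca=400.0, g1=2.3)
gs_broadleaf = stomatal_conductance(A=10.0, D=1.5, Ca=400.0, g1=4.5)
```

Holding the environment fixed, a lower g1 yields a lower conductance and hence lower transpiration, which is the mechanism behind the large transpiration reductions the study reports for needleleaf, tundra and C4 grass regions.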
DEFF Research Database (Denmark)
Mashayekhi, Sima; Hugger, Jens
2015-01-01
Several nonlinear Black-Scholes models have been proposed to take transaction cost, large investor performance and illiquid markets into account. One of the most comprehensive models introduced by Barles and Soner in [4] considers transaction cost in the hedging strategy and risk from an illiquid...... market. In this paper, we compare several finite difference methods for the solution of this model with respect to precision and order of convergence within a computationally feasible domain allowing at most 200 space steps and 10000 time steps. We conclude that standard explicit Euler comes out...
Directory of Open Access Journals (Sweden)
Bozhenyuk Alexander
2015-12-01
A decision-making model with the basic fuzzy rule modus ponens is suggested in this paper to control a hand prosthesis. The hand movements are described by angles of finger and wrist flexion. The electromyogram (EMG) of hand muscles was used as the source of input data. Software was developed to implement the decision-making model with the fuzzy rule modus ponens. In particular, the software receives EMG data, executes calculations and visualises the output data. The key advantage of the model is the smoothness of output data changes; in this way, the closest approach to natural hand movements is achieved.
En route to a multi-model scheme for clinker comminution with chemical grinding aids
Mishra, R K; Carmona, H A; Wittel, F K; Sawley, M L; Weibel, M; Gallucci, E; Herrmann, H J; Heinz, H; Flatt, R J
2015-01-01
We present a multi-model simulation approach, targeted at understanding the behavior of comminution and the effect of grinding aids (GAs) in industrial cement mills. On the atomistic scale we use Molecular Dynamics (MD) simulations with validated force field models to quantify elastic and structural properties, cleavage energies as well as the organic interactions with mineral surfaces. Simulations based on the Discrete Element Method (DEM) are used to integrate the information gained from MD simulations into the clinker particle behavior at larger scales. Computed impact energy distributions from DEM mill simulations can serve as a link between large-scale industrial and laboratory sized mills. They also provide the required input for particle impact fragmentation models. Such a multi-scale, multi-model methodology paves the way for a structured approach to the design of chemical additives aimed at improving mill performance.
Indian Academy of Sciences (India)
Devanil Choudhury; Someshwar Das
2017-06-01
The Advanced Research WRF (ARW) model is used to simulate the Very Severe Cyclonic Storms (VSCS) Hudhud (7–13 October, 2014), Phailin (8–14 October, 2013) and Lehar (24–29 November, 2013) to investigate the sensitivity of forecast track and intensity to microphysical schemes for high-resolution (9 and 3 km) 120-hr model integrations. At cloud-resolving grid scales (<5 km), cloud microphysics plays an important role. The performance of the Goddard, Thompson, LIN and NSSL schemes is evaluated and compared with observations and a CONTROL forecast. This study aims to investigate the sensitivity to microphysics of the track and intensity with explicitly resolved convection. The Goddard one-moment bulk liquid-ice microphysical scheme provided the highest skill for the track, whereas for intensity both the Thompson and Goddard schemes performed better. The Thompson scheme shows the highest skill in intensity at 48, 96 and 120 hr, whereas at 24 and 72 hr the Goddard scheme provides the highest skill. It is known that a higher-resolution domain produces better intensity and structure of cyclones, and it is desirable to resolve the convection with sufficiently high resolution and explicit cloud physics. This study suggests that the Goddard cumulus ensemble microphysical scheme is suitable for high-resolution ARW simulation of tropical cyclone track and intensity over the Bay of Bengal (BoB). Although the present study is based on only three cyclones, it could be useful for planning real-time predictions using the ARW modelling system.
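Track skill of this kind is scored from the great-circle distance between forecast and observed cyclone centres; a sketch of that standard ingredient (the specific skill metric used in the paper may differ):

```python
import math

def track_error_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km between a forecast and an
    observed cyclone centre, the basic ingredient of track-error statistics."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
    return 2 * R * math.asin(math.sqrt(a))

def mean_track_error(forecast, observed):
    """Mean error over matched forecast/observed (lat, lon) positions."""
    errs = [track_error_km(f[0], f[1], o[0], o[1])
            for f, o in zip(forecast, observed)]
    return sum(errs) / len(errs)
```

One degree of latitude corresponds to roughly 111 km, which gives a quick sanity check.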
A unified gas-kinetic scheme for continuum and rarefied flows IV: Full Boltzmann and model equations
Liu, Chang; Xu, Kun; Sun, Quanhua; Cai, Qingdong
2016-06-01
Fluid dynamic equations are valid in their respective modeling scales, such as the particle mean free path scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. With a variation of the modeling scales, theoretically there should be a continuous spectrum of fluid dynamic equations. Even though the Boltzmann equation is claimed to be valid at all scales, many Boltzmann solvers, including the direct simulation Monte Carlo method, require the cell resolution to be of the order of the particle mean free path. Therefore, they are still single-scale methods. In order to study multiscale flow evolution efficiently, the dynamics in the computation has to change with the scales. A direct modeling of flow physics with a changeable scale may become an appropriate approach. The unified gas-kinetic scheme (UGKS) is a direct modeling method on the mesh size scale, and its underlying flow physics depends on the resolution of the cell size relative to the particle mean free path. The cell size of UGKS is not limited by the particle mean free path. With the variation of the ratio between the numerical cell size and the local particle mean free path, the UGKS recovers the flow dynamics from particle transport and collision in the kinetic scale to wave propagation in the hydrodynamic scale. The previous UGKS is mostly constructed from the evolution solution of kinetic model equations. Even though the UGKS is very accurate and effective in the low transition and continuum flow regimes, with the time step being much larger than the particle mean free time, there is still room to develop a more accurate flow solver in the regime where the time step is comparable with the local particle mean free time. At such a scale, there is a dynamic difference between the full Boltzmann collision term and the model equations. This work is about the further development of the UGKS with the implementation of the full Boltzmann collision term in the region
Directory of Open Access Journals (Sweden)
A. Petroff
2010-12-01
A size-resolved particle dry deposition scheme is developed for inclusion in large-scale air quality and climate models where the size distribution and fate of atmospheric aerosols are of concern. The "resistance" structure is similar to that proposed by Zhang et al. (2001), while a new "surface" deposition velocity (or surface resistance) is derived by simplification of a one-dimensional aerosol transport model (Petroff et al., 2008b, 2009). Compared to Zhang et al.'s model, the present model accounts for the leaf size, shape and area index as well as the height of the vegetation canopy. Consequently, it is more sensitive to changes of land cover, particularly in the accumulation mode (0.1–1 micron). A drift velocity is included to account for the phoretic effects related to temperature and humidity gradients close to liquid and solid water surfaces. An extended comparison of this model with experimental evidence is performed over typical land covers such as bare ground, grass, coniferous forest, and liquid and solid water surfaces, and highlights its adequate predictions. The predictions of the present model differ from Zhang et al.'s model in the fine mode, where the latter tends to significantly over-estimate the particle deposition measured by various investigators or predicted by the present model. The present development should be useful to modellers of the atmospheric aerosol who need an adequate parameterization of aerosol dry removal to the earth surface, described here by 26 land covers. An open source code is available in Fortran 90.
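The resistance structure can be sketched as a drift term plus two resistances in series; this is only the schematic skeleton of such schemes, not the full size- and canopy-dependent Petroff parameterization:

```python
def deposition_velocity(r_a, r_s, v_drift=0.0):
    """Resistance-analogy dry deposition velocity (m/s): aerodynamic
    resistance r_a and surface resistance r_s (s/m) act in series, plus a
    drift velocity representing phoretic effects near wet or icy surfaces.
    A schematic sketch of the structure described above."""
    return v_drift + 1.0 / (r_a + r_s)
```

The drift term simply adds to the resistance pathway, so any phoretic contribution raises the deposition velocity.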
Directory of Open Access Journals (Sweden)
O. Yu. Mukhomorova
2015-01-01
Anti-angiogenic therapy is an alternative and successfully employed method for the treatment of cancerous tumours. However, this therapy is not widely used in medicine because of expensive drugs, which naturally leads to the elaboration of treatment regimens that use a minimum amount of drugs. The aim of the paper is to investigate a model of the development of the illness and to elaborate appropriate treatment regimens in the case of early diagnosis of the disease. The given model reflects the therapy at an intermediate stage of treatment; further treatment, aimed at destroying cancer cells, may be continued by other means that are not reflected in the model. Analysis of the main properties of the model was carried out by considering two types of auxiliary systems. In the first case, the system is considered without control, as a model of tumour development in the absence of medical treatment. The study of the equilibrium point and determination of its type allowed us to describe the disease dynamics and to determine the tumour size resulting in death. In the second case, a model with a constant control was investigated. The study of its equilibrium point showed that continuous control is not sufficient to support a satisfactory patient condition, and that it is necessary to elaborate more complex treatment regimens. For this purpose, we used the method of terminal problems, consisting in the search for a program control that forces the system to a given final state; the selection of the initial and final states is based on medical grounds. As a result, we found two treatment regimens: a one-stage regimen and a multi-stage one. The properties of each regimen are analyzed and compared, with the total amount of drugs used as the criterion for comparison. The theoretical conclusions obtained in this work are supported by computer modeling in the MATLAB environment.
Al-Rousan, R. Z.
2015-09-01
The main objective of this study was to assess the effect of the number and schemes of carbon-fiber-reinforced polymer (CFRP) sheets on the bending moment capacity, the ultimate displacement, the ultimate tensile strain of CFRP, the yielding moment, the concrete compression strain, and the energy absorption of RC beams, and to provide useful relationships that can be effectively utilized to determine the required number of CFRP sheets for a necessary increase in the flexural strength of the beams without a major loss in their ductility. To accomplish this, various RC beams, identical in their geometric and reinforcement details but having different numbers and configurations of CFRP sheets, were modeled and analyzed using the ANSYS software and a nonlinear finite-element analysis.
Jin, Shi; Shu, Ruiwen
2017-04-01
In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker-Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker-Planck operator is accelerated by a spectrally accurate splitting method. Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.
Yang, Ming; Elkibbi, Maya; Rial, José A.
2005-03-01
Shear wave splitting polarization (p) and delay time (Δt) observations are used to invert for fracture orientation and intensity of fracturing, simultaneously. By addressing the different levels of uncertainty involved in measurements of these two parameters, as well as their dissimilar relationships to fracture configuration, we have developed an inversion algorithm which reduces the primary double-response inversion to two connected single-response ones. We show that its inherent non-linearity complicates this problem, which therefore requires a more sophisticated attack than conventional inversion schemes. It will be shown that the construction of residue function contours in the model plane and the generation of surrogate data by simulation process are essential to this approach. We illustrate the capabilities of this technique by inverting shear wave splitting data from The Geysers geothermal reservoir in California. In principle the method should be useful for characterizing fractured reservoirs, whether geothermal or hydrocarbon.
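The "residue function contours in the model plane" idea can be illustrated with a toy grid search over the two parameters, assuming a deliberately simplified forward model (fast polarization parallel to fracture strike, delay time linear in fracture intensity; both assumptions are illustrative, not the paper's forward model):

```python
def invert_splitting(pols_deg, delays_s, delay_per_intensity=0.1):
    """Toy two-parameter grid search over (strike, intensity) minimizing a
    combined residue of polarization misfit (180-degree periodic) and delay
    misfit under a hypothetical linear delay model."""
    best = None
    for strike in range(0, 180):
        # circular misfit for fast polarizations (strike is 180-deg periodic)
        rp = sum(min(abs(p - strike), 180 - abs(p - strike))**2 for p in pols_deg)
        for i in range(101):
            intensity = i / 100.0
            rt = sum((d - intensity * delay_per_intensity)**2 for d in delays_s)
            res = rp / 100.0 + rt * 1e4   # weights balance the two scales
            if best is None or res < best[0]:
                best = (res, strike, intensity)
    return best[1], best[2]
```

Because the two residues are separable in this toy setup, the joint minimum recovers both parameters from synthetic data.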
Okumura, Kenji; Kuriyama, Toshihiko
1993-09-01
A new computational method that can be useful in the initial design phase of automobile development has been developed. The standard k-ε turbulence model and the third-order upwind QUICK (Quadratic Upwind Interpolation for Convective Kinematics) scheme were used, applied through a commercial CFD (Computational Fluid Dynamics) code, BFC (Boundary Fitted Coordinate)/SCRYU. With this method, the reattachment point of a backward-facing step flow is improved even on a coarse computational grid. When applied to a 3-D (three-dimensional) two-box automobile, the method reduced the error in the drag coefficient C_D from 27 percent to 2 percent. The CPU (Central Processing Unit) time was within 20 hours on a CRAY Y-MP (1 CPU), and the total analysis time was shortened from three weeks to five days.
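The QUICK face interpolation itself is compact; a sketch for a uniform grid, using the conventional U/C/D upwind stencil naming (not taken from the paper):

```python
def quick_face_value(phi_U, phi_C, phi_D):
    """QUICK third-order upwind-biased interpolation of a cell-face value
    on a uniform grid: phi_f = 6/8 phi_C + 3/8 phi_D - 1/8 phi_U, where C is
    the near-upstream node, D the downstream node, U the far-upstream node."""
    return 0.75 * phi_C + 0.375 * phi_D - 0.125 * phi_U
```

Being a three-point quadratic fit, the formula is exact for any quadratic profile, which is what makes the scheme third-order.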
Energy Technology Data Exchange (ETDEWEB)
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than with parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
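Diagnostic models of this family typically use a light-use-efficiency form in which FPAR enters multiplicatively; a schematic sketch (models like CFLUX generally add environmental down-regulation scalars, omitted here, and the parameter values below are illustrative):

```python
def gpp_lue(epsilon, fpar, par):
    """Light-use-efficiency form of gross primary production:
    GPP = epsilon * FPAR * PAR, with epsilon in gC/MJ and PAR in MJ/m2/day.
    Illustrative structure only; operational models add temperature and
    moisture scalars that down-regulate epsilon."""
    return epsilon * fpar * par

def nep(gpp, ecosystem_respiration):
    """Net ecosystem production as the balance of the two fluxes above."""
    return gpp - ecosystem_respiration
```

The multiplicative role of FPAR is why differences between satellite FPAR products propagate directly into the flux estimates.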
Energy Technology Data Exchange (ETDEWEB)
Protsenko, K.; Badrzadeh, B. [Vestas Technology R and D, Aarhus (Denmark); Mayer, P.F. [Vestas Technology R and D, Singapore (Singapore); Luo, Z. [Vestas Americas, Houston, TX (United States)
2011-07-01
This paper presents the application of a real-time digital simulation program for wind turbine modeling of a doubly-fed induction generator (type 3) and a fully-converted permanent magnet synchronous generator (type 4). Vestas 3 MW type 3 and type 4 turbines are taken as representative of the two turbine types, respectively. The paper begins with an overview of the hardware used for the simulation studies, outlines the assumptions applied to derive the models, and highlights the limitations imposed by the use of a real-time digital simulation program. The implementation of both types of turbine models is then discussed, and both models are validated against electromagnetic time-domain simulation results obtained from PSCAD. The validation test performed is the evaluation of the low voltage ride-through capability of the turbine. Results obtained from the real-time digital simulation provide a good match with the PSCAD simulation results, which have in turn been validated against field measurements. This gives confidence in the future application of such real-time models, for example in wind power plant protection relay coordination.
Directory of Open Access Journals (Sweden)
Pascalle C. Smith
2012-11-01
This paper presents a simple approach for estimating the spatial and temporal variability of the seasonal net irrigation water requirement (IWR) at the catchment scale, based on gridded land use, soil and daily weather data at 500 × 500 m resolution. In this approach, IWR is expressed as a bounded, linear function of the atmospheric water budget, whereby the latter is defined as the difference between seasonal precipitation and reference evapotranspiration. To account for the effects of soil and crop properties on the soil water balance, the coefficients of the linear relation are expressed as a function of the soil water holding capacity and the so-called crop coefficient. The 12 parameters defining the relation were estimated, with good coefficients of determination, from a systematic analysis of simulations performed at a daily time step with a FAO-type point-scale model for five climatically contrasted sites around the River Rhone and for combinations of six crop and ten soil types. The simple scheme was found to reproduce well the results obtained with the daily model at six additional verification sites. We applied the simple scheme to the assessment of irrigation requirements in the whole Swiss Rhone catchment. The results suggest seasonal requirements of 32 × 10^6 m^3 per year on average over 1981–2009, half of which at altitudes above 1500 m. They also disclose a positive trend in the intensity of extreme events over the study period, with an estimated total IWR of 55 × 10^6 m^3 in 2009, and indicate a 45% increase in the water demand of grasslands during the 2003 European heat wave in the driest area of the studied catchment. In view of its simplicity, the approach can be extended to other applications, including assessments of the impacts of climate and land-use change.
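The bounded linear relation at the heart of the scheme can be sketched directly; the coefficients a, b and the cap are free placeholders here, whereas the paper derives them from the crop coefficient and soil water holding capacity:

```python
def net_iwr(precip_mm, et0_mm, a, b, cap_mm):
    """Seasonal net irrigation water requirement as a bounded linear
    function of the atmospheric water budget (ET0 - P):
    IWR = clip(a*(ET0 - P) + b, 0, cap). The coefficients a and b are
    placeholders for the fitted, soil- and crop-dependent parameters."""
    return max(0.0, min(cap_mm, a * (et0_mm - precip_mm) + b))
```

Wet seasons are clipped to zero requirement and very dry seasons saturate at the cap, which is what makes the relation "bounded".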
Nassar, M.; Ginn, T.
2012-12-01
The purpose of this study is to investigate the effect of computational error on the solution of the inverse problem associated with a density-dependent flow problem. This effect is addressed by evaluating the uniqueness of the inverse via monitoring of the objective function surface behavior in a two-dimensional parameter space of hydraulic conductivity and longitudinal dispersivity. In addition, the Pareto surface is generated to evaluate the trade-offs between two calibration objectives based on head and concentration measurement errors. This is conducted by changing aspects of the forward model solution scheme, Eulerian and Lagrangian methods with their associated variables. The data used for this study are based on the lab study of Nassar et al. (2008). The seepage tank is essentially 2D (in an x-z vertical plane) with relatively homogeneous coarse sand media, with an assigned flux upstream and a constant-head or assigned-flux boundary condition downstream. The forward model solution is conducted with SEAWAT, utilized jointly with the inverse code UCODE-2005. This study demonstrates that the choice of numerical scheme, with the associated aspects of the forward problem, is a vital step in the solution of the inverse problem in an indirect manner. The method of characteristics gives good results when increasing the initial number of particles and/or reducing the time step. The advantage of using more particles over decreasing the time step is in smoothing the objective function surface, which enables the gradient-based search technique to work efficiently. Also, the selected points on the Pareto surface collapse to two points in objective function space; most likely they do not collapse to a single point with one best parameter set because the problem is advection-dominated.
2016-01-01
Background Visceral leishmaniasis (VL) is a disease caused by two known vector-borne parasite species (Leishmania donovani, L. infantum), transmitted to man by phlebotomine sand flies (genera: Phlebotomus and Lutzomyia), resulting in ≈50,000 human fatalities annually, ≈67% occurring on the Indian subcontinent. Indoor residual spraying is the current method of sand fly control in India, but alternative means of vector control, such as the treatment of livestock with systemic insecticide-based drugs, are being evaluated. We describe an individual-based, stochastic, life-stage-structured model that represents a sand fly vector population within a village in India and simulates the effects of vector control via fipronil-based drugs orally administered to cattle, which target both blood-feeding adults and larvae that feed on host feces. Principal findings Simulation results indicated efficacy of fipronil-based control schemes in reducing sand fly abundance depended on timing of drug applications relative to seasonality of the sand fly life cycle. Taking into account cost-effectiveness and logistical feasibility, two of the most efficacious treatment schemes reduced population peaks occurring from April through August by ≈90% (applications 3 times per year at 2-month intervals initiated in March) and >95% (applications 6 times per year at 2-month intervals initiated in January) relative to no control, with the cumulative number of sand fly days occurring April-August reduced by ≈83% and ≈97%, respectively, and more specifically during the summer months of peak human exposure (June-August) by ≈85% and ≈97%, respectively. Conclusions Our model should prove useful in a priori evaluation of the efficacy of fipronil-based drugs in controlling leishmaniasis on the Indian subcontinent and beyond. PMID:27537774
Energy Technology Data Exchange (ETDEWEB)
Wong, May Wai San; Ovchinnikov, Mikhail; Wang, Minghuai
2015-09-14
Potential ways of parameterizing vertical turbulent fluxes of hydrometeors are examined using a high-resolution cloud-resolving model. The cloud-resolving model uses the Morrison microphysics scheme, which contains prognostic variables for rain, graupel, ice, and snow. A benchmark simulation of a deep convection case with a horizontal grid spacing of 250 m is carried out to evaluate three different ways of parameterizing the turbulent vertical fluxes of hydrometeors: an eddy-diffusion approximation, a quadrant-based decomposition, and a scaling method that accounts for within-quadrant (subplume) correlations. Results show that the down-gradient nature of the eddy-diffusion approximation tends to transport mass away from concentrated regions, whereas the benchmark simulation indicates that the vertical transport tends to move mass from below the level of maximum to aloft. Unlike the eddy-diffusion approach, the quadrant-based decomposition is able to capture the signs of the flux gradient but underestimates the magnitudes. The scaling approach performs best by accounting for within-quadrant correlations, and improves the results for all hydrometeors except snow. A sensitivity study is performed to examine how vertical transport may affect the microphysics of the hydrometeors: the vertical transport of each hydrometeor type is artificially suppressed in each test. Results from the sensitivity tests show that cloud-droplet-related processes are most sensitive to suppressed rain or graupel transport. In particular, suppressing rain or graupel transport has a strong impact on the production of snow and ice aloft. Lastly, a viable subgrid-scale hydrometeor transport scheme in an assumed probability density function parameterization is discussed.
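The quadrant-based decomposition rests on an exact identity: the four quadrant contributions in the (w', q') anomaly plane sum to the total resolved flux. A sketch:

```python
import numpy as np

def quadrant_fluxes(w, q):
    """Decompose the turbulent vertical flux <w'q'> into the four quadrants
    of the (w', q') plane (updraft/downdraft x positive/negative anomaly).
    Points with a zero anomaly contribute nothing, so the quadrant
    contributions sum exactly to the total resolved flux."""
    wp = w - w.mean()
    qp = q - q.mean()
    total = float(np.mean(wp * qp))
    n = len(w)
    quads = {}
    for name, mask in {
        "I  (w'>0, q'>0)": (wp > 0) & (qp > 0),
        "II (w'<0, q'>0)": (wp < 0) & (qp > 0),
        "III(w'<0, q'<0)": (wp < 0) & (qp < 0),
        "IV (w'>0, q'<0)": (wp > 0) & (qp < 0),
    }.items():
        quads[name] = float(np.sum(wp[mask] * qp[mask]) / n)
    return total, quads
```

A parameterization can then model each quadrant separately; the scaling method described above further corrects for correlations within each quadrant.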
Three hybridization models based on local search scheme for job shop scheduling problem
Balbi Fraga, Tatiana
2015-05-01
This work presents three different hybridization models based on the general schema of local search heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches have been presented in the literature in other contexts, in this work these models are applied to analyze the solution of the job shop scheduling problem with the heuristics Taboo Search and Particle Swarm Optimization. Besides, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to obtain better results than those found by Taboo Search alone.
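The Hybrid Successive Application schema, chaining one local search into another, can be sketched on a toy single-machine problem (a stand-in for the job shop, with simple swap and insertion neighborhoods in place of Taboo Search and PSO):

```python
def twct(seq, p, w):
    """Total weighted completion time of a job sequence (toy objective
    standing in for the job shop makespan)."""
    t = total = 0
    for j in seq:
        t += p[j]
        total += w[j] * t
    return total

def local_search(seq, p, w, neighbors):
    """First-improvement hill climbing until a local optimum is reached."""
    best, best_cost = list(seq), twct(seq, p, w)
    improved = True
    while improved:
        improved = False
        for cand in neighbors(best):
            c = twct(cand, p, w)
            if c < best_cost:
                best, best_cost, improved = cand, c, True
                break
    return best, best_cost

def swap_neighbors(seq):
    for i in range(len(seq) - 1):
        cand = list(seq)
        cand[i], cand[i + 1] = cand[i + 1], cand[i]
        yield cand

def insert_neighbors(seq):
    for i in range(len(seq)):
        for j in range(len(seq)):
            if i == j:
                continue
            cand = list(seq)
            cand.insert(j, cand.pop(i))
            yield cand

def hybrid_successive(seq, p, w):
    """Hybrid Successive Application: run the first local search to a local
    optimum, then restart the second one from that solution."""
    s, _ = local_search(seq, p, w, swap_neighbors)
    return local_search(s, p, w, insert_neighbors)
```

The second heuristic can only keep or improve the first one's result, which is the basic argument for the successive scheme.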
Verification of the multi-layer SNOWPACK model with different water transport schemes
Wever, N.; Schmid, L.; Heilig, A.; Eisen, O.; Fierz, C.; Lehning, M.
2015-12-01
The widely used detailed SNOWPACK model has undergone constant development over the years. A notable recent extension is the introduction of a Richards equation (RE) solver as an alternative for the bucket-type approach for describing water transport in the snow and soil layers. In addition, continuous updates of snow settling and new snow density parameterizations have changed model behavior. This study presents a detailed evaluation of model performance against a comprehensive multiyear data set from Weissfluhjoch near Davos, Switzerland. The data set is collected by automatic meteorological and snowpack measurements and manual snow profiles. During the main winter season, snow height (RMSE: manually observed snow profiles do not support this conclusion. This discrepancy suggests that the implementation of RE partly mimics preferential flow effects.
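The bucket-type approach that the Richards-equation solver offers an alternative to can be sketched in a few lines; this is an idealized version with instantaneous percolation of any water above a layer's capacity, not the SNOWPACK implementation itself:

```python
def bucket_percolation(water_mm, capacity_mm):
    """One step of a bucket-type water transport scheme: each layer holds
    water up to its capacity; any excess percolates instantly to the layer
    below, and excess from the bottom layer leaves as outflow (mm)."""
    layers = list(water_mm)
    excess = 0.0
    for i in range(len(layers)):
        layers[i] += excess
        excess = max(0.0, layers[i] - capacity_mm[i])
        layers[i] = min(layers[i], capacity_mm[i])
    return layers, excess  # excess = outflow at the base
```

Mass is conserved by construction, and the scheme has no notion of capillary retention or preferential flow, which is where a Richards-equation treatment differs.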
Role of numerical scheme choice on the results of mathematical modeling of combustion and detonation
Yakovenko, I. S.; Kiverin, A. D.; Pinevich, S. G.; Ivanov, M. F.
2016-11-01
The present study discusses the capabilities of applying the dissipation-free CABARET numerical method to the modeling of unsteady reactive gasdynamic flows. In the framework of the present research, the method was adapted for reactive flows governed by a real-gas equation of state and applied to several typical problems of unsteady gas dynamics and combustion modeling, such as ignition and detonation initiation by localized energy sources. Solutions were thoroughly analyzed and compared with those derived using the modified Euler-Lagrange method of "coarse" particles. The obtained results allowed us to distinguish the range of phenomena where artificial effects of the numerical approach may counterfeit their physical nature, and to develop guidelines for selecting a numerical approach appropriate for the numerical modeling of unsteady reactive gasdynamic flows.
Directory of Open Access Journals (Sweden)
P. Jiménez-Guerrero
2011-05-01
A number of attempts have been made to incorporate sea-salt aerosol (SSA) source functions in chemistry transport models, with varying results according to the complexity of the scheme considered. This contribution compares the inclusion of two different SSA algorithms in two chemistry transport models: CMAQ and CHIMERE. The main goal is to examine the differences in average SSA mass and composition and to study the seasonality of the prediction of SSA when applied to the Mediterranean area at high resolution for a reference year. Dry and wet deposition schemes are also analyzed to better understand the differences observed between the two models in the target area. The emission algorithm applied in CHIMERE uses a semi-empirical formulation which obtains the surface emission rate of SSA as a function of the particle size and the surface wind speed raised to the power 3.41. The emission parameterization included within CMAQ is somewhat more sophisticated, since fluxes of SSA are corrected with relative humidity. In order to evaluate their strengths and weaknesses, the algorithms as implemented in the chemistry transport models were evaluated against AOD measurements from Aeronet and available surface measurements in Southern Europe and the Mediterranean area, showing biases around −0.002 and −1.2 μg m^{−3}, respectively. The results indicate that both models accurately represent the patterns and dynamics of SSA and its non-uniform behavior in the Mediterranean basin, showing a strong seasonality. The levels of SSA vary strongly across the Western and the Eastern Mediterranean, with CHIMERE reproducing higher annual levels in the Aegean Sea (12 μg m^{−3}) and CMAQ in the Gulf of Lion (9 μg m^{−3}). The large difference found for the ratio PM_{2.5}/total SSA in CMAQ and CHIMERE is also investigated. The dry and wet removal rates are very similar for both models despite the different schemes
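The CHIMERE-style wind dependence is a pure power law in the 10 m wind speed; a sketch with an arbitrary prefactor standing in for the size-dependent part of the formulation:

```python
def ssa_flux(u10, c=1.0, exponent=3.41):
    """Sea-salt surface emission rate as a power law of the 10 m wind speed,
    F ~ u10**3.41, the wind dependence cited above for the CHIMERE algorithm.
    c is a placeholder for the size-dependent prefactor."""
    return c * u10 ** exponent
```

The steep exponent means that doubling the wind speed raises the emission rate by a factor of about 10.6, which is why SSA levels are so strongly tied to storm seasonality.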
Trust recovery model of Ad Hoc network based on identity authentication scheme
Liu, Jie; Huan, Shuiyuan
2017-05-01
Mobile Ad Hoc network trust models are widely used to solve mobile Ad Hoc network security issues. Aiming at the reduced network availability caused by malicious nodes and selfish nodes in trust-model-based mobile Ad Hoc network routing, an authentication mechanism for mobile Ad Hoc networks based on identity authentication is proposed. It uses identity authentication to identify malicious nodes and recovers the trust of selfish nodes, in order to reduce network congestion and improve network quality. The simulation results show that implementing this mechanism can effectively improve network availability and security.
A new adaptive control scheme based on the interacting multiple model (IMM) estimation
Energy Technology Data Exchange (ETDEWEB)
Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid [McMaster University, Hamilton (Canada)
2016-06-15
In this paper, an Interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Due to variations of the forward velocity of the unmanned vehicle, its aerodynamic derivatives are constantly changing. In order to stabilize the unmanned vehicle and achieve the control objectives for in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes, each representing a particular dynamic with a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) operating in parallel. To regulate and stabilize the vehicle, a Linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the current mode based on the stored models and in-flight input-output measurements. The LQR design also provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted summation of all individual controllers, where the weights are the mode probabilities of the operating modes.
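The final blending step, weighting per-mode LQR laws by the IMM mode probabilities, can be sketched as follows (the gain matrices here are illustrative placeholders, not gains from the paper):

```python
import numpy as np

def imm_control(x, gains, mode_probs):
    """Ultimate control law as the probability-weighted sum of per-mode LQR
    laws u_i = -K_i x: u = sum_i mu_i * (-K_i x), where mu_i are the IMM
    mode probabilities."""
    u = np.zeros(gains[0].shape[0])
    for K, mu in zip(gains, mode_probs):
        u += mu * (-K @ x)
    return u
```

When one mode probability dominates, the blended law reduces to that mode's LQR controller, giving a smooth transition between flight conditions.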
DEFF Research Database (Denmark)
Mashayekhi, Sima; Hugger, Jens
2015-01-01
Several nonlinear Black-Scholes models have been proposed to take transaction costs, large investor performance and illiquid markets into account. One of the most comprehensive models, introduced by Barles and Soner in [4], considers transaction costs in the hedging strategy and risk from an illiquid market. In this paper, we compare several finite difference methods for the solution of this model with respect to precision and order of convergence within a computationally feasible domain allowing at most 200 space steps and 10000 time steps. We conclude that standard explicit Euler comes out...
Noh, S.J.; Rakovec, O.; Weerts, A.H.; Tachikawa, Y.
2014-01-01
We investigate the effects of noise specification on the quality of hydrological forecasts via an advanced data assimilation (DA) procedure using a distributed hydrological model driven by numerical weather predictions. The sequential DA procedure is based on (1) a multivariate rainfall ensemble
Implementation of an atmospheric sulfur scheme in the HIRLAM regional weather forecast model
Energy Technology Data Exchange (ETDEWEB)
Ekman, Annica [Stockholm Univ. (Sweden). Dept. of Meteorology
2000-02-01
Sulfur chemistry has been implemented into the regional weather forecast model HIRLAM in order to simulate sulfur fields during specific weather situations. The model calculates concentrations of sulfur dioxide in air (SO₂(a)), sulfate in air (SO₄(a)), sulfate in cloud water (SO₄(aq)) and hydrogen peroxide (H₂O₂). Modeled concentrations of SO₂(a), SO₄(a) and SO₄(aq) in rain water are compared with observations for two weather situations: one winter case with an extensive stratiform cloud cover and one summer case with mostly convective clouds. A comparison of the weather forecast parameters precipitation, relative humidity, geopotential and temperature with observations is also performed. The results show that the model generally overpredicts the SO₂(a) concentration and underpredicts the SO₄(a) concentration. The agreement between modeled and observed SO₄(aq) in rain water is poor. Calculated turnover times are approximately 1 day for SO₂(a) and 2–2.5 days for SO₄(a). For SO₂(a) this is in accordance with earlier simulated global turnover times, but for SO₄(a) it is substantially lower. Several sensitivity simulations show that the fractional mean bias and root mean square error decrease, mainly for SO₄(a) and SO₄(aq), if an additional oxidant for converting SO₂(a) to SO₄(a) is included in the model. All weather forecast parameters except precipitation agree better with observations than the sulfur variables do. Wet scavenging is responsible for about half of the deposited sulfur, and in addition a major part of the sulfate production occurs through in-cloud oxidation. Hence, the distribution of clouds and precipitation must be better simulated by the weather forecast model in order to improve the agreement between observed and simulated sulfur concentrations.
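The turnover times quoted above follow from the usual burden-over-loss definition; a trivial sketch with illustrative numbers:

```python
def turnover_time_days(burden, loss_rate_per_day):
    """Turnover time = atmospheric burden / total loss rate (same mass
    units); with the modeled rates, SO2 comes out near 1 day and SO4 near
    2-2.5 days. Input values below are illustrative, not from the paper."""
    return burden / loss_rate_per_day
```

A species removed efficiently by wet scavenging and deposition has a loss rate close to its burden per day, hence a turnover time near one day.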
Implementation of an atmospheric sulfur scheme in the HIRLAM regional weather forecast model
Energy Technology Data Exchange (ETDEWEB)
Ekman, Annica [Stockholm Univ. (Sweden). Dept. of Meteorology
2000-02-01
Sulfur chemistry has been implemented into the regional weather forecast model HIRLAM in order to simulate sulfur fields during specific weather situations. The model calculates concentrations of sulfur dioxide in air (SO₂(a)), sulfate in air (SO₄(a)), sulfate in cloud water (SO₄(aq)) and hydrogen peroxide (H₂O₂). Modeled concentrations of SO₂(a), SO₄(a) and SO₄(aq) in rain water are compared with observations for two weather situations: one winter case with an extensive stratiform cloud cover and one summer case with mostly convective clouds. The weather forecast parameters precipitation, relative humidity, geopotential and temperature are also compared with observations. The results show that the model generally overpredicts the SO₂(a) concentration and underpredicts the SO₄(a) concentration. The agreement between modeled and observed SO₄(aq) in rain water is poor. Calculated turnover times are approximately 1 day for SO₂(a) and 2-2.5 days for SO₄(a). For SO₂(a) this is in accordance with earlier simulated global turnover times, but for SO₄(a) it is substantially lower. Several sensitivity simulations show that the fractional mean bias and root mean square error decrease, mainly for SO₄(a) and SO₄(aq), if an additional oxidant for converting SO₂(a) to SO₄(a) is included in the model. All weather forecast parameters except precipitation agree better with observations than the sulfur variables do. Wet scavenging is responsible for about half of the deposited sulfur, and a major part of the sulfate production occurs through in-cloud oxidation. Hence, the distribution of clouds and precipitation must be simulated better by the weather forecast model in order to improve the agreement between observed and simulated sulfur concentrations.
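The evaluation statistics named above (fractional mean bias and root mean square error) are standard model-evaluation scores; a minimal sketch of how they are computed over paired model-observation values (the concentrations below are invented for illustration):

```python
import math

def fractional_mean_bias(model, obs):
    """Fractional mean bias: 2*(mean(M) - mean(O)) / (mean(M) + mean(O))."""
    mm = sum(model) / len(model)
    mo = sum(obs) / len(obs)
    return 2.0 * (mm - mo) / (mm + mo)

def rmse(model, obs):
    """Root mean square error between paired model and observed values."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(model))

# Hypothetical paired SO2(a) concentrations at a few stations
model = [2.0, 3.5, 1.0, 4.0]
obs = [1.5, 2.5, 1.2, 3.0]
print(round(fractional_mean_bias(model, obs), 3))  # positive -> overprediction
print(round(rmse(model, obs), 3))
```

A positive fractional mean bias corresponds to the overprediction of SO₂(a) reported in the abstract.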
Modeling Small Exoplanets Interiors: a Numerical Scheme to Explore Possible Compositions
Brugger, B.; Mousis, O.; Deleuil, M.
2016-12-01
Despite the huge number of discovered exoplanets, our knowledge of their compositions remains extremely limited. Modeling the interiors of such bodies is necessary to go beyond the first approximation given by their mean density. Here we present a numerical model aiming at computing the internal structure of a given exoplanet from its measured mass and radius, and providing a range of compositions compatible with these data. Our model assumes the presence of a metal core surrounded by a silicate mantle and a water layer. Depending on their respective proportions, we can model various compositions, typically from terrestrial planets to ocean or Mercury-like planets. We apply this model to the case of CoRoT-7b, whose mass and radius values have recently been updated to 4.73 ± 0.95 Earth masses and 1.585 ± 0.064 Earth radii, respectively. We show that these values are fully compatible with a solid composition, and find that CoRoT-7b may present a core mass fraction of at most 80% or, conversely, a maximum water mass fraction of 51%. While the latter composition is compatible with that of several icy moons in the solar system, an 80% core by mass is less conceivable, and a lower limit can be placed from solar system formation conditions. These results confirm the super-Earth status of CoRoT-7b, and show that an Earth-like composition may be obtained more easily than previously concluded.
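The starting point mentioned above, the mean density implied by the quoted mass and radius, can be checked directly; this is only the first-order constraint that the layered interior model then refines (the Earth constants are standard values):

```python
from math import pi

M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

def mean_density(mass_me, radius_re):
    """Mean density (kg/m^3) for a mass in Earth masses and radius in Earth radii."""
    return (mass_me * M_EARTH) / (4.0 / 3.0 * pi * (radius_re * R_EARTH) ** 3)

rho = mean_density(4.73, 1.585)
print(round(rho))  # about 6.5e3 kg/m^3, denser than Earth's ~5.5e3 kg/m^3
```

Many different core/mantle/water proportions can match a single mean density, which is why the abstract reports a range of compatible compositions rather than one answer.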
Dynamic inversion method based on the time-staggered stereo-modeling scheme and its acceleration
Jing, Hao; Yang, Dinghui; Wu, Hao
2016-12-01
A set of second-order differential equations describing the space-time behaviour of derivatives of displacement with respect to model parameters (i.e. waveform sensitivities) is obtained by taking the derivative of the original wave equations. The dynamic inversion method obtains sensitivities of the seismic displacement field with respect to earth properties directly, by solving differential equations for them instead of constructing sensitivities from the displacement field itself. In this study, we take a new perspective on the dynamic inversion method and use acceleration approaches to reduce the computational time and memory usage, improving its ability to perform high-resolution imaging. The dynamic inversion method, which can simultaneously use different waves and multicomponent observation data, is appropriate for directly inverting elastic parameters, medium density, or wave velocities. Full wavefield information is utilized as much as possible, at the expense of a larger amount of computation. To mitigate the computational burden, two ways are proposed to accelerate the method from a computer-implementation point of view. One is source encoding, which uses a linear combination of all shots; the other is to reduce the amount of computation in forward modeling. We applied a new finite-difference (FD) method to the dynamic inversion to improve the computational accuracy and speed up the performance. Numerical experiments indicated that the new FD method can effectively suppress the numerical dispersion caused by the discretization of the wave equations, resulting in enhanced computational efficiency with less memory cost for seismic modeling and inversion based on the full wave equations. We present inversion results for both checkerboard and Marmousi models to demonstrate the validity of this method. The method remains convergent even with large deviations in the initial model. Besides, parallel calculations can be easily implemented.
Integrating a reservoir regulation scheme into a spatially distributed hydrological model
Energy Technology Data Exchange (ETDEWEB)
Zhao, Gang; Gao, Huilin; Naz, Bibi S.; Kao, Shih-Chieh; Voisin, Nathalie
2016-12-01
During the past several decades, numerous reservoirs have been built across the world for a variety of purposes such as flood control, irrigation, municipal/industrial water supplies, and hydropower generation. Consequently, natural streamflow timing and magnitude have been altered significantly by reservoir operations. In addition, the hydrological cycle can be modified by land-use/land-cover and climate changes. To understand the fine-scale feedback between hydrological processes and water management decisions, a distributed hydrological model embedded with a reservoir component is desired. In this study, a multi-purpose reservoir module with predefined complex operational rules was integrated into the Distributed Hydrology Soil Vegetation Model (DHSVM). Conditional operating rules, which are designed to reduce flood risk and enhance water supply reliability, were adopted in this module. The performance of the integrated model was tested over the upper Brazos River Basin in Texas, where two U.S. Army Corps of Engineers reservoirs, Lake Whitney and Aquilla Lake, are located. The integrated DHSVM model was calibrated and validated using observed reservoir inflow, outflow, and storage data. The error statistics were summarized for both reservoirs on a daily, weekly, and monthly basis. Using the weekly reservoir storage for Lake Whitney as an example, the coefficient of determination (R2) and the Nash-Sutcliffe Efficiency (NSE) are 0.85 and 0.75, respectively. These results suggest that this reservoir module has promise for use in sub-monthly hydrological simulations. Enabled with the new reservoir component, the DHSVM model provides a platform to support adaptive water resources management under the impacts of evolving anthropogenic activities and substantial environmental changes.
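The conditional operating rules referenced above are specific to DHSVM and the two USACE reservoirs; purely as a hedged illustration of the general kind of zone-based rule such modules encode, one might write (all pool levels, demands, and limits below are hypothetical):

```python
def reservoir_release(storage, inflow, conservation_pool, dead_pool,
                      demand, max_release):
    """Toy zone-based operating rule: the release depends on which storage
    zone (flood-control, conservation, or dead pool) the reservoir is in."""
    if storage > conservation_pool:
        # Flood-control zone: draw the surplus down, limited by outlet capacity
        surplus = storage - conservation_pool
        return min(max_release, demand + surplus)
    if storage > dead_pool:
        # Conservation zone: meet demand as far as live storage plus inflow allows
        available = storage - dead_pool + inflow
        return min(demand, available)
    # Dead pool: no releases
    return 0.0

print(reservoir_release(90, 5, 80, 10, 8, 50))  # flood-control zone: 18
print(reservoir_release(50, 5, 80, 10, 8, 50))  # conservation zone: 8
print(reservoir_release(8, 5, 80, 10, 8, 50))   # dead pool: 0.0
```

Inside a distributed model such a rule would be evaluated each time step from the simulated inflow, closing the loop between hydrology and operations.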
A flexible additive inflation scheme for treating model error in ensemble Kalman Filters
Sommer, Matthias; Janjic, Tijana
2017-04-01
Data assimilation algorithms require an accurate estimate of the uncertainty of the prior (background) field. However, the background error covariance derived from the ensemble of numerical model simulations does not adequately represent this uncertainty. This is partially due to the sampling error that arises from using a small number of ensemble members to represent the background error covariance, and partially a consequence of the fact that the model does not represent its own error. Several mechanisms have been introduced so far that aim at alleviating the detrimental effects of misrepresented ensemble covariances, allowing for the successful implementation of ensemble data assimilation techniques for atmospheric dynamics. One established approach in ensemble data assimilation is additive inflation, which perturbs each ensemble member with a sample from a given distribution. This results in a fixed rank of the model error covariance matrix. Here, a more flexible approach is suggested in which the model error samples are treated as additional synthetic ensemble members that are used in the update step of data assimilation but are not forecast. In this way, the rank of the model error covariance matrix can be chosen independently of the ensemble. The effect of this altered additive inflation method on the performance of the filter is analyzed here in an idealised experiment. It is shown that the additional synthetic ensemble members can make it feasible to achieve convergence in an otherwise divergent setting of data assimilation. The use of this method also allows for a less stringent localization radius.
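The rank argument above can be sketched numerically: model-error samples entering only the covariance used in the update raise its rank beyond what the forecast ensemble alone affords (dimensions and noise scales below are arbitrary, not from the paper's experiment):

```python
import numpy as np

def ensemble_covariance(members):
    """Sample covariance from an anomaly matrix of shape (n_members, n_state)."""
    a = members - members.mean(axis=0)
    return a.T @ a / (members.shape[0] - 1)

def augmented_covariance(members, error_samples):
    """Covariance from the forecast members plus model-error samples treated
    as synthetic ensemble members (used in the update step, not forecast)."""
    a = members - members.mean(axis=0)
    e = error_samples - error_samples.mean(axis=0)
    anomalies = np.vstack([a, e])
    return anomalies.T @ anomalies / (anomalies.shape[0] - 1)

rng = np.random.default_rng(1)
ens = rng.normal(size=(4, 8))               # 4 forecast members, 8 state variables
err = rng.normal(scale=0.3, size=(10, 8))   # 10 model-error samples
print(np.linalg.matrix_rank(ensemble_covariance(ens)))        # 3
print(np.linalg.matrix_rank(augmented_covariance(ens, err)))  # 8
```

Four members give a covariance of rank at most three; with the synthetic members the rank is chosen by the number of error samples, independently of the ensemble size.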
Energy Technology Data Exchange (ETDEWEB)
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization (MFO) algorithm designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computational-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
A Bayesian spatial assimilation scheme for snow coverage observations in a gridded snow model
Directory of Open Access Journals (Sweden)
S. Kolberg
2006-01-01
A method for assimilating remotely sensed snow covered area (SCA) into the snow subroutine of a grid-distributed precipitation-runoff model (PRM) is presented. The PRM is assumed to simulate the snow state in each grid cell by a snow depletion curve (SDC), which relates that cell's SCA to its snow cover mass balance. The assimilation is based on Bayes' theorem, which requires a joint prior distribution of the SDC variables in all the grid cells. In this paper we propose a spatial model for this prior distribution, and include similarities and dependencies among the grid cells. Used to represent the PRM-simulated snow cover state, our joint prior model regards two elevation gradients and a degree-day factor as global variables, rather than describing their effect separately for each cell. This transformation results in smooth normalised surfaces for the two related mass balance variables, supporting a strong inter-cell dependency in their joint prior model. The global features and spatial interdependency in the prior model cause each SCA observation to provide information for many grid cells. The spatial approach similarly facilitates the utilisation of observed discharge. Assimilation of SCA data using the proposed spatial model is evaluated in a 2400 km² mountainous region in central Norway (61°N, 9°E), based on two Landsat 7 ETM+ images generalized to 1 km² resolution. An image acquired on 11 May, a week before the peak flood, removes 78% of the variance in the remaining snow storage. Even an image from 4 May, less than a week after the melt onset, reduces this variance by 53%. These results are largely improved compared to a cell-by-cell independent assimilation routine previously reported. Including observed discharge in the updating information improves the 4 May results, but has weak effect on 11 May. Estimated elevation gradients are shown to be sensitive to informational deficits occurring at high altitude, where snowmelt has not started
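The snow depletion curve above ties a cell's SCA to its mass balance through an assumed within-cell distribution of snow water equivalent; a simplified sketch using a lognormal within-cell SWE distribution (the functional form and parameter values are illustrative, not the paper's exact formulation):

```python
import math

def sca_after_melt(melt_depth, mean_swe, cv):
    """Toy snow depletion curve: fractional snow covered area equals the
    probability that the initial within-cell SWE (assumed lognormal with the
    given mean and coefficient of variation) exceeds the accumulated melt."""
    if melt_depth <= 0:
        return 1.0
    sigma2 = math.log(1.0 + cv ** 2)
    mu = math.log(mean_swe) - 0.5 * sigma2
    z = (math.log(melt_depth) - mu) / math.sqrt(sigma2)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(sca_after_melt(0.0, 0.5, 0.4))            # 1.0: full cover before melt
print(round(sca_after_melt(0.5, 0.5, 0.4), 2))  # partial cover near mean SWE
```

The curve decreases monotonically with accumulated melt, which is what lets an observed SCA constrain the cell's remaining snow storage in the Bayesian update.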
Mounirou Toure, Ally
The feasibility of a radiance assimilation using a multi-layered snow physical model to estimate snow physical parameters is studied. The work is divided into five parts. The first two chapters are dedicated to the literature review. In the third chapter, experimental work was conducted in alpine snow to estimate the snow correlation length (for microwave emission modelling) using near-infrared digital photography. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to the specific surface area (SSA), and converted the SSA into the correlation length. From the measurements of snow radiances at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (RTM) (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature using cold laboratory work. The technique shows great potential for determining the snow correlation length under field conditions. In the fourth chapter, the performance of the ensemble Kalman filter (EnKF) for snow water equivalent (SWE) estimation is assessed by assimilating synthetic microwave observations at Ground-Based Microwave Radiometer (GBMR-7) frequencies (18.7, 23.8, 36.5, and 89 GHz, vertical and horizontal polarization) into a snow physics model, CROCUS. CROCUS has a realistic stratigraphic and ice layer modelling scheme. This work builds on previous methods that used snow physics models with a limited number of layers. Data assimilation methods require accurate predictions of the brightness temperature (Tb) emitted by the snowpack. It has been shown that the accuracy of RTMs is sensitive to the stratigraphic representation of the snowpack. However, as the stratigraphic fidelity increases, the number of layers increases, as does the number of state variables estimated in the assimilation
A model of pyrolysis in a staged scheme of low-grade solid fuel gasification
Levin, A. A.; Shamansky, V. A.; Kozlov, A. N.
2016-10-01
The development and evolution of the theory of solid fuel combustion took place in the mid-20th century. The most studied research subject was sized fossil coal. At the same time, research on fuels with a high yield of volatile matter (biomass) was limited due to insufficient data on the kinetics of the physical and chemical processes that occur during their heating. Obviously, the creation of a prospective low-tonnage technology for staged gasification of woody biomass is possible only when a detailed mechanism of these processes is well understood and their kinetic parameters are known. This paper presents the initial results of the development of a model of wood pyrolysis in a screw reactor as the first stage of the multistage gasification process. One current goal of this research is to develop a mathematical model of the heat and mass transfer processes to perform optimization calculations.
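The missing kinetic parameters mentioned above are usually introduced through a devolatilization rate law; a common starting point is a single first-order Arrhenius step (the parameter values below are hypothetical, not taken from the paper):

```python
import math

def pyrolysis_mass(t, T, A, Ea, m0=1.0):
    """Remaining volatile mass after time t (s) at temperature T (K) for a
    single first-order Arrhenius devolatilization step: dm/dt = -k(T) m."""
    R = 8.314  # gas constant, J/(mol K)
    k = A * math.exp(-Ea / (R * T))  # rate constant, 1/s
    return m0 * math.exp(-k * t)

# Hypothetical kinetics: A = 1e7 1/s, Ea = 120 kJ/mol, isothermal at 600 K
print(round(pyrolysis_mass(60.0, 600.0, 1e7, 1.2e5), 3))
```

Full pyrolysis models replace this single step with competing reactions and couple it to the heat and mass transfer in the reactor, which is exactly where the paper's screw-reactor model is heading.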
Chaotic Dynamics in Smart Grid and Suppression Scheme via Generalized Fuzzy Hyperbolic Model
Directory of Open Access Journals (Sweden)
Qiuye Sun
2014-01-01
This paper presents a method to control chaotic behavior of a typical Smart Grid based on the generalized fuzzy hyperbolic model (GFHM). As more and more distributed generation (DG) is incorporated into the Smart Grid, chaotic behavior occurs increasingly often. To verify this behavior, a dynamic model that describes a power system with DG is presented first. A simulation then shows that the power system can become chaotic under certain initial conditions. Based on the universal approximation property of the GFHM, we confirm that the chaotic behavior can be suppressed by a new controller, which is designed by solving a linear matrix inequality (LMI). This approach is well suited to suppressing chaos in the Smart Grid. Finally, a numerical example is given to demonstrate the effectiveness of the proposed chaos suppression strategy.
A Simple Mixing Scheme for Models that Resolve Breaking Internal Waves
2010-03-23
these have been treated with large-scale general circulation models (e.g. POM, Merrifield and Holloway (2002), or ROMS), or with specialized non… We integrate the dissipation (figure 4) for the whole water column between −25 km < x < 25 km, to get

D = ∫_A [ A_z (∂u/∂z)² + A_h (∂u/∂x)² ] dA    (4)
Kumar, Ramesh; Pal, Parimal
2013-08-01
Modeling and simulation were carried out for an advanced membrane-integrated hybrid treatment process that ensures reuse of water with conversion and recovery of ammoniacal nitrogen as value-added struvite fertilizer from coke wastewater. While toxic cyanide was largely removed in a pre-chemical treatment unit using Fenton's reagent under optimized conditions, more than 95% of NH₄⁺-N could be recovered as a valuable by-product called struvite through the addition of appropriate doses of magnesium and phosphate salts. Water could be made reusable through a polishing treatment by nanofiltration membranes in a largely fouling-free membrane module following a biodegradation step. Mathematical modeling of the integrated process was done with the Haldane-Andrew approach for the associated microbial degradation of phenol by Pseudomonas putida. Residual NH₄⁺ was degraded by nitrification and denitrification following modified Monod kinetics. The model could successfully predict the plant performance, as reflected in a reasonably low relative error (0.03-0.18) and a high Willmott d-index (>0.98).
A stable and robust calibration scheme of the log-periodic power law model
Filimonov, V.; Sornette, D.
2013-09-01
We present a simple transformation of the formulation of the log-periodic power law formula of the Johansen-Ledoit-Sornette (JLS) model of financial bubbles that reduces it to a function of only three nonlinear parameters. The transformation significantly decreases the complexity of the fitting procedure and improves its stability tremendously, because the modified cost function now has good smoothness properties with, in general, a single minimum when the model is appropriate to the empirical data. We complement the approach with an additional subordination procedure that slaves two of the nonlinear parameters to the most crucial one, the critical time tc, defined in the JLS model as the end of the bubble and the most probable time for a crash to occur. This further decreases the complexity of the search and provides an intuitive representation of the results of the calibration. With our proposed methodology, metaheuristic searches are no longer necessary and one can resort solely to rigorous controlled local search algorithms, leading to a dramatic increase in efficiency. Empirical tests on the Shanghai Composite index (SSE) from January 2007 to March 2008 illustrate our findings.
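The key to the reduction described above is that, once the nonlinear parameters (tc, m, ω) are fixed, the remaining LPPL parameters enter linearly and can be recovered by ordinary least squares; a sketch on synthetic data with known coefficients (a real bubble fit would have a negative power-law coefficient, which this toy example ignores):

```python
import numpy as np

def lppl_linear_fit(t, logp, tc, m, w):
    """For fixed nonlinear parameters (tc, m, w), solve the linear parameters
    A, B, C1, C2 of the reformulated LPPL by least squares:
    ln p = A + B*(tc-t)^m + C1*(tc-t)^m*cos(w*ln(tc-t)) + C2*(tc-t)^m*sin(w*ln(tc-t))."""
    dt = tc - t
    f = dt ** m
    g = f * np.cos(w * np.log(dt))
    h = f * np.sin(w * np.log(dt))
    X = np.column_stack([np.ones_like(t), f, g, h])
    coef, *_ = np.linalg.lstsq(X, logp, rcond=None)
    return coef, X @ coef

# Synthetic log-price with known parameters; recover the linear ones exactly
t = np.linspace(0.0, 9.0, 200)
tc, m, w = 10.0, 0.5, 6.0
true = 1.0 + 0.5 * (tc - t) ** m + 0.1 * (tc - t) ** m * np.cos(w * np.log(tc - t))
coef, fit = lppl_linear_fit(t, true, tc, m, w)
print(np.round(coef, 3))  # close to [1.0, 0.5, 0.1, 0.0]
```

Replacing the phase φ with the (C1, C2) pair is what turns four nonlinear parameters into three, leaving only (tc, m, ω) for the outer search.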
Gijben, Morné; Dyson, Liesl L.; Loots, Mattheus T.
2017-09-01
Cloud-to-ground lightning data from the Southern Africa Lightning Detection Network and numerical weather prediction model parameters from the Unified Model are used to develop a lightning threat index (LTI) for South Africa. The aim is to predict lightning for austral summer days (September to February) by means of a statistical approach. The austral summer months are divided into spring and summer seasons and analysed separately. Stepwise logistic regression techniques are used to select the most appropriate model parameters to predict lightning. These parameters are then utilized in a rare-event logistic regression analysis to produce equations for the LTI that predicts the probability of the occurrence of lightning. Results show that LTI forecasts have a high sensitivity and specificity for spring and summer. The LTI is less reliable during spring, since it over-forecasts the occurrence of lightning. However, during summer, the LTI forecast is reliable, only slightly over-forecasting lightning activity. The LTI produces sharp forecasts during spring and summer. These results show that the LTI will be useful early in the morning in areas where lightning can be expected during the day.
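The rare-event logistic step described above can be illustrated with the standard King-Zeng prior correction of the intercept, which compensates for oversampling the rare event; the coefficients and predictor values below are invented for illustration, not the fitted LTI equations:

```python
import math

def lightning_probability(beta0, betas, predictors,
                          sample_rate=None, pop_rate=None):
    """Logistic probability from fitted coefficients. If the sample and
    population event rates are supplied, apply the King-Zeng rare-event
    intercept correction before the logistic transform."""
    if sample_rate is not None and pop_rate is not None:
        beta0 -= math.log(((1.0 - pop_rate) / pop_rate) *
                          (sample_rate / (1.0 - sample_rate)))
    z = beta0 + sum(b * x for b, x in zip(betas, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Invented coefficients for two NWP predictors (e.g. an instability index
# and a moisture variable); values are purely illustrative.
p_plain = lightning_probability(-2.0, [0.8, 1.2], [1.5, 0.5])
p_rare = lightning_probability(-2.0, [0.8, 1.2], [1.5, 0.5],
                               sample_rate=0.5, pop_rate=0.05)
print(round(p_plain, 3), round(p_rare, 3))  # the correction lowers the probability
```

Without the correction, a model trained on a balanced sample would systematically over-forecast a climatologically rare event such as lightning.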
High-resolution weather forecasting is affected by many aspects, e.g. model initial conditions, subgrid-scale cumulus convection, and cloud microphysics schemes. Recent 12 km grid studies using the Weather Research and Forecasting (WRF) model have identified the importance of inco…
Gómez, I.; Ronda, R.J.; Caselles, V.; Estrela, M.J.
2016-01-01
This paper proposes the implementation of different non-local planetary boundary layer (PBL) schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU) scheme.
Energy Technology Data Exchange (ETDEWEB)
Touma, Rony [Department of Computer Science & Mathematics, Lebanese American University, Beirut (Lebanon); Zeidan, Dia [School of Basic Sciences and Humanities, German Jordanian University, Amman (Jordan)
2016-06-08
In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid, using intermediate dual cells while updating the numerical solution to avoid the resolution of the Riemann problems arising at the cell interfaces. We then apply the numerical scheme to a classical drift-flux problem. The obtained results are in good agreement with corresponding ones in the recent literature, confirming the potential of the proposed scheme.
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and over periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over the 14-year period 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high-resolution (3 arc-seconds) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations, and 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
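The deterministic core of the hierarchical model above, the stream power erosion law, is a simple power law in drainage area and channel slope; a minimal sketch (the coefficient and exponents below are typical choices, not the paper's posterior estimates):

```python
def stream_power_erosion(k, area, slope, m=0.5, n=1.0):
    """Stream power erosion law E = K * A^m * S^n, with K folding in
    erodibility and the precipitation dependence; m, n are exponents."""
    return k * area ** m * slope ** n

# Hypothetical sub-basin: with n = 1, doubling the channel slope doubles erosion
e1 = stream_power_erosion(1e-5, 1e6, 0.05)
e2 = stream_power_erosion(1e-5, 1e6, 0.10)
print(e2 / e1)  # 2.0
```

In the Bayesian hierarchy this relationship links the observed slopes and TRMM precipitation to the latent erosion rate at each elevation class.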
Policy Decisions for a Price Dependent Demand Rate Inventory Model with Progressive Payments Scheme
Directory of Open Access Journals (Sweden)
Rajat Kumar
2012-01-01
Problem statement: In this research, we developed an inventory model to formulate optimal ordering policies for a supplier who offers progressive permissible delay periods to the retailer to settle his/her account. We assumed the annual demand rate to be a decreasing function of price, with a constant rate of deterioration and time-varying holding cost. Shortages in inventory are allowed and are completely backlogged. Approach: The main objective of this study is to frame an inventory model for real-life situations. In this study, we introduced a new idea of trade credits, namely, the supplier charges the retailer progressive interest rates if the retailer prolongs its unpaid balance. By offering progressive interest rates to retailers, a supplier can secure a competitive market advantage over competitors and possibly improve market share and profit. This study has two main purposes: first, to establish the mathematical model of an inventory system under the above conditions; second, to demonstrate that the optimal solution not only exists but is also feasible. We developed theoretical results to obtain the optimal replenishment interval by examining an explicit condition. An algorithm is given to find the flow of the optimal ordering policy. Results: The results are illustrated with the help of a numerical example using Mathematica software; the optimal solution of the problem is Z(p, T1) = 76.8586 at (p, T1) = (0.952656, 0.128844). Conclusion: We proposed an algorithm to find the optimal ordering policy. A numerical study has been performed to observe the sensitivity of the effect of demand parameter changes.
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual-iteration basis; however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the latter requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, this method involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods (Picard, Newton, and Newton-Krylov) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
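The matrix-free approximation described above is the heart of the Newton-Krylov approach: the Jacobian-vector product is replaced by a finite difference of the nonlinear residual, so the 19-point stencil matrix never has to be formed. A minimal sketch on a toy residual with a known Jacobian:

```python
import numpy as np

def jac_vec_fd(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) @ v ~ (F(u + eps*v) - F(u)) / eps.
    This is the approximation that lets Newton-Krylov avoid assembling the
    Jacobian (e.g. the 19-point stencil matrix) explicitly."""
    return (F(u + eps * v) - F(u)) / eps

# Toy nonlinear residual with an analytically known Jacobian for checking
def F(u):
    return np.array([u[0] ** 2 + u[1], np.sin(u[1])])

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
# Exact J(u) @ v = [[2*u0, 1], [0, cos(u1)]] @ v = [3, 1]
print(np.round(jac_vec_fd(F, u, v), 5))  # approximately [3. 1.]
```

Each Krylov iteration then costs one extra residual evaluation, which is the "additional cost" the abstract weighs against avoiding the stencil assembly.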
Stability of a neural predictive controller scheme on a neural model
DEFF Research Database (Denmark)
Luther, Jim Benjamin; Sørensen, Paul Haase
2009-01-01
In previous works presenting various forms of neural-network-based predictive controllers, the main emphasis has been on the implementation aspects, i.e. the development of a robust optimization algorithm for the controller, which will be able to perform in real time. However, the stability issue has not been addressed specifically for these controllers. On the other hand, a number of results concerning the stability of receding horizon controllers on a nonlinear system exist. In this paper we present a proof of stability for a predictive controller controlling a neural network model...
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated using the adaptive control method. Unlike some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method.
DEFF Research Database (Denmark)
Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob
2014-01-01
A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics present in large-scale refrigeration plants challenge the predictive control design. It is, however, shown that taking into account the knowledge of different time scales in the dynamical subsystems makes possible a linear formulation of a centralized predictive controller. A realistic scenario of regulatory power services in the smart grid is considered and formulated...
Towards Symbolic Encryption Schemes
DEFF Research Database (Denmark)
Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik
2012-01-01
Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security models. In its common use, encryption on a whole message is specified as a single monolithic block. From a cryptographic perspective, however, this may require a resource-intensive cryptographic algorithm, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC- and CFB-based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. To avoid such problems, we advocate the use of refined notions of symbolic encryption that have a natural correspondence to standard cryptographic encryption schemes.
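To see the kind of weakness at play, here is a deliberately toy demonstration of CBC malleability. The "block cipher" is a single XOR with the key (purely didactic, not AES, and a real cipher would garble the whole modified block rather than one byte), but the chaining property it exhibits, that a ciphertext change in block i predictably flips plaintext bits in block i+1, is exactly the CBC property exploited by such attacks, and nothing in the mode detects the tampering.

```python
BLOCK = 4

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, pt):
    ct, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        block = xor(xor(pt[i:i + BLOCK], prev), key)  # E_k(P xor prev)
        ct.append(block)
        prev = block
    return b"".join(ct)

def cbc_decrypt(key, iv, ct):
    pt, prev = [], iv
    for i in range(0, len(ct), BLOCK):
        block = ct[i:i + BLOCK]
        pt.append(xor(xor(block, key), prev))         # D_k(C) xor prev
        prev = block
    return b"".join(pt)

key, iv = b"\x13\x37\x42\x99", b"\x00\x01\x02\x03"
ct = bytearray(cbc_encrypt(key, iv, b"PAY 1000"))
ct[0] ^= ord("1") ^ ord("9")   # flip the bits that turn '1' into '9'
tampered = cbc_decrypt(key, iv, bytes(ct))
# tampered == b"XAY 9000": block 2 now reads "9000", block 1 has one
# garbled byte, and decryption raises no error whatsoever.
```

An authenticated mode (the kind of scheme the abstract says symbolic models implicitly demand) would reject the modified ciphertext outright.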
Surrounded by Safety. Safety As An Encompassing Policy Concept in The Netherlands
Resodihardjo, S.L.; Prins, R.
2014-01-01
Safety used to be a pretty straightforward concept: governments need to safeguard the country from invasion and its citizens from crime. Over time, additional issues came to be defined as safety problems. Even so, the safety problems and their accompanying policies remained clearly demarcated; safety revolved, for example, around product safety. Recently, an encompassing safety concept has become popular in the Netherlands. No longer referring to a clearly demarcated policy issue, the con...
A Fovea Localization Scheme Using Vessel Origin-Based Parabolic Model
Directory of Open Access Journals (Sweden)
Chun-Yuan Yu
2014-09-01
Full Text Available At the center of the macula, the fovea plays an important role in computer-aided diagnosis. To locate the fovea, this paper proposes a vessel-origin (VO)-based parabolic model, which takes the VO as the vertex of the parabola-like vasculature. Image processing steps are applied to accurately locate the fovea on retinal images. First, the morphological gradient and the circular Hough transform are used to find the optic disc. The vessel structure is then segmented with a line detector. Based on the characteristics of the VO, four features of the VO are extracted and fed into a Bayesian classification procedure. Once the VO is identified, the VO-based parabolic model locates the fovea. To find the best-fitting parabola and the symmetry axis of the retinal vessels, a Shift-and-Rotation Hough (SR-Hough) transform, which combines the Hough transform with a shift and rotation of coordinates, is presented. Two public databases of retinal images, DRIVE and STARE, are used to evaluate the proposed method. The experimental results show that the average Euclidean distances between the located fovea and the fovea marked by experts in the two databases are 9.8 pixels and 30.7 pixels, respectively. These results are better than those of other methods and thus provide better macular detection for further disease discovery.
Energy Technology Data Exchange (ETDEWEB)
Lode, Axel U.J.
2013-06-03
This thesis explores the quantum many-body tunneling dynamics of open ultracold bosonic systems with the recently developed multiconfigurational time-dependent Hartree for bosons (MCTDHB) method. The capabilities of MCTDHB to provide solutions to the full time-dependent many-body problem are assessed in a benchmark using the analytically solvable harmonic interaction Hamiltonian and a generalization of it with time-dependent one- and two-body potentials. In a comparison with numerically exact MCTDHB results, it is shown that, e.g., lattice methods fail qualitatively to describe the tunneling dynamics. A model assembling the many-body physics of the process from basic, simultaneously occurring single-particle processes is derived and verified against a numerically exact MCTDHB description. The generality of the model is demonstrated even for strong interactions and large particle numbers. The ejection of the bosons from the source occurs with characteristic velocities. These velocities are defined by the chemical potentials of systems with different particle numbers, which are converted to kinetic energy. The tunneling process is accompanied by fragmentation: the ejected bosons lose their coherence with the source and among each other. It is shown that the various aspects of the tunneling dynamics can be controlled well with the interaction and the potential threshold.
Jin, J.; Miller, N.
2007-12-01
The Community Land Model version 3 (CLM3) developed by the National Center for Atmospheric Research (NCAR) was coupled into the Weather Research and Forecasting (WRF) Model version 2.2. The performance of WRF-CLM3 in predicting regional climate was quantitatively compared with that of WRF coupled to the soil thermal diffusion (STD), Rapid Update Cycle (RUC), and NOAH land surface schemes. These schemes represent a range of complexity within land surface schemes. CLM3 is the most sophisticated model, with detailed snow and vegetation processes. The STD scheme is the simplest: it only calculates soil temperature and neglects vegetation and snow physics. The RUC and NOAH schemes are intermediate in detail; the major difference between them is that RUC has a multi-layer snow scheme, whereas NOAH has a single snow layer lumped with the topmost soil layer. WRF was driven by the National Centers for Environmental Prediction Reanalysis II data with each of these land surface schemes for one-year simulations over the period 1 October 1995 to 30 September 1996, resulting in four one-year simulations for intercomparison. Each simulation has 30 km-10 km two-way nested domains. The 30 km domain includes the western U.S. and eastern Pacific, and the inner domain includes California and parts of Nevada, Oregon, and the eastern Pacific. Our analysis shows that WRF-CLM3 outperforms WRF-RUC, WRF-NOAH, and WRF-STD in simulating temperature and snow when compared with observations. The WRF-STD scheme, which does not include snow and vegetation processes, produced the poorest results, with a dramatic overestimation of surface air temperature. However, regardless of the land surface scheme chosen, WRF reproduces the winter precipitation, a major water resource for California, reasonably well, suggesting that the linkage between land surface processes and precipitation is not explicit. In general, land surface schemes play a significant role in the simulation of regional
Bernardi, G; Ord, S M; Greenhill, L J; Pindor, B; Wayth, R B; Wyithe, J S B
2010-01-01
We present a method for subtracting point sources from interferometric radio images via forward modeling of the instrument response, involving an algebraic nonlinear minimization. The method is applied to simulated maps of the Murchison Wide-field Array but is generally useful in cases where only image data are available. After source subtraction, the residual maps show no statistically significant difference from the expected thermal noise distribution at any angular scale, indicating highly effective subtraction. Simulations indicate that the errors in recovering the source parameters decrease with increasing signal-to-noise ratio, consistent with the theoretical measurement errors. In applying the technique to simulated snapshot observations with the Murchison Wide-field Array, we found that all 101 sources present in the simulation were recovered with an average position error of 10 arcsec and an average flux density error of 0.15%. This led to a dynamic range increase of approximately 3 orders of m...
Fault detection in processes represented by PLS models using an EWMA control scheme
Harrou, Fouzi
2016-10-20
Fault detection is important for effective and safe process operation. Partial least squares (PLS) has been used successfully in fault detection for multivariate processes with highly correlated variables. However, the conventional PLS-based detection metrics, such as Hotelling's T and the Q statistics, are not well suited to detecting small faults because they use information about the process only from the most recent observation. The exponentially weighted moving average (EWMA), however, has been shown to be more sensitive to small shifts in the mean of process variables. In this paper, a PLS-based EWMA fault detection method is proposed for monitoring processes represented by PLS models. The performance of the proposed method is compared with that of the traditional PLS-based fault detection method through a simulated example involving various fault scenarios that could be encountered in real processes. The simulation results clearly show the effectiveness of the proposed method over the conventional PLS method.
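The EWMA recursion that gives such a chart its sensitivity to small shifts can be sketched directly. The smoothing constant and the data below are illustrative; the paper applies the recursion to PLS model outputs rather than to raw observations.

```python
# EWMA statistic z_t = lam * x_t + (1 - lam) * z_{t-1}: it accumulates
# history, so a small persistent mean shift drifts z_t across its control
# limit even when every individual observation looks normal.
def ewma(xs, lam=0.2, z0=0.0):
    zs, z = [], z0
    for x in xs:
        z = lam * x + (1.0 - lam) * z
        zs.append(z)
    return zs

# In-control data with mean 0, followed by a small sustained shift of +0.5.
data = [0.0] * 20 + [0.5] * 40
zs = ewma(data)
# During the in-control phase z stays at 0; after the shift, z converges
# geometrically toward the shifted mean of 0.5.
```

The control limits, which narrow with smaller lam, are what turn this smoothed statistic into a detection test; they are omitted here for brevity.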
DEFF Research Database (Denmark)
Bak, Claus Leth; Sztykiel, Michal; Dollerup, Sebastian
2011-01-01
Based on the analysis of a specific relay model and an HVAC (High Voltage Alternating Current) cable system, a new approach to EMTDC/PSCAD modelling of protective relays is presented. This approach allows the creation of complex and accurate relay models derived from the original algorithms. Relay model...
Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.
2012-04-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment
Palace, Jacqueline; Bregenzer, Thomas; Tremlett, Helen; Oger, Joel; Zhu, Fheng; Boggild, Mike; Duddy, Martin; Dobson, Charles
2014-01-01
Objectives: In 2002, the UK's National Institute for Health and Care Excellence concluded that the multiple sclerosis (MS) disease-modifying therapies interferon-β and glatiramer acetate were not cost-effective over the short term, but recognised that reducing disability over the longer term might dramatically improve the cost-effectiveness. The UK Risk-sharing Scheme (RSS) was established to ensure cost-effective provision by prospectively collecting disability-related data from UK-treated patients with MS and comparing findings to a natural history (untreated) cohort. However, deficiencies were found in the originally selected untreated cohort and the resulting analytical approach. This study aims to identify a more suitable natural history cohort and to develop a robust analytical approach using the new cohort. Design: The Scientific Advisory Group recommended the British Columbia Multiple Sclerosis (BCMS) database, Canada, as providing a more suitable natural history comparator cohort. Transition probabilities were derived, and different Markov models (discrete and continuous), with and without baseline covariates, were applied. Setting: MS clinics in Canada and the UK. Participants: From the BCMS database, 898 ‘untreated’ patients with MS considered eligible for drug treatment based on the UK's Association of British Neurologists criteria. Outcome measure: The predicted Expanded Disability Status Scale (EDSS) score was collected and assessed for goodness of fit when compared with the actual outcome. Results: The BCMS untreated cohort contributed 7335 EDSS scores over a median of 6.4 years (6357 EDSS ‘transitions’ recorded at consecutive visits) during the period 1980–1995. A continuous Markov model with ‘onset age’ as a binary covariate was deemed the most suitable model for future RSS analysis. Conclusions: A new untreated MS cohort from British Columbia has been selected and will be modelled using a continuous Markov model with onset age as a baseline
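The core of fitting a discrete-time Markov model to disability scores can be sketched as follows. The three-state sequences are invented stand-ins for EDSS trajectories; the actual RSS analysis uses a continuous-time model with a baseline covariate, which this sketch does not attempt.

```python
# Estimate a Markov transition matrix by counting observed transitions
# between consecutive visits and normalizing each row.
def transition_matrix(sequences, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

# Toy "EDSS-like" sequences over 3 coarse disability states 0 < 1 < 2.
seqs = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 2]]
P = transition_matrix(seqs, 3)
# Each row of P sums to 1 for every state observed to transition, and the
# matrix can then be iterated to predict the disability distribution.
```

Iterating P forward from a baseline state distribution gives the predicted outcomes that are compared against the treated cohort in a risk-sharing analysis of this kind.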
Simulating Replica Exchange: Markov State Models, Proposal Schemes, and the Infinite Swapping Limit.
Zhang, Bin W; Dai, Wei; Gallicchio, Emilio; He, Peng; Xia, Junchao; Tan, Zhiqiang; Levy, Ronald M
2016-08-25
Replica exchange molecular dynamics is a multicanonical simulation technique commonly used to enhance the sampling of solvated biomolecules on rugged free energy landscapes. While replica exchange is relatively easy to implement, there are many unanswered questions about how to use this technique most efficiently, especially because it is frequently the case in practice that replica exchange simulations are not fully converged. A replica exchange cycle consists of a series of molecular dynamics steps of a set of replicas moving under different Hamiltonians or at different thermodynamic states followed by one or more replica exchange attempts to swap replicas among the different states. How the replica exchange cycle is constructed affects how rapidly the system equilibrates. We have constructed a Markov state model of replica exchange (MSMRE) using long molecular dynamics simulations of a host-guest binding system as an example, in order to study how different implementations of the replica exchange cycle can affect the sampling efficiency. We analyze how the number of replica exchange attempts per cycle, the number of MD steps per cycle, and the interaction between the two parameters affects the largest implied time scale of the MSMRE simulation. The infinite swapping limit is an important concept in replica exchange. We show how to estimate the infinite swapping limit from the diagonal elements of the exchange transition matrix constructed from MSMRE "simulations of simulations" as well as from relatively short runs of the actual replica exchange simulations.
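The replica exchange attempt itself reduces to a Metropolis test on the energies and inverse temperatures of the two replicas. A minimal sketch of this standard parallel-tempering criterion follows; the energy values are arbitrary illustrations, not the host-guest system of the paper.

```python
import math
import random

def swap_accept(E1, E2, T1, T2):
    """Metropolis acceptance probability for exchanging the configurations
    held at temperatures T1 and T2 (Boltzmann constant absorbed into T)."""
    delta = (1.0 / T1 - 1.0 / T2) * (E2 - E1)
    return min(1.0, math.exp(-delta))

def attempt_swap(E1, E2, T1, T2, rng=random.random):
    """One stochastic swap attempt, as performed at the end of each cycle."""
    return rng() < swap_accept(E1, E2, T1, T2)

# Moving a lower-energy configuration from the hotter replica (T2) down to
# the colder one (T1) is always accepted:
p = swap_accept(E1=-120.0, E2=-150.0, T1=1.0, T2=2.0)   # p == 1.0
```

How many of these attempts are made per cycle, relative to the MD steps, is precisely the design variable whose effect on the implied time scales the abstract's Markov state model quantifies.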
Roul, Pradip; Warbhe, Ujwal
2017-08-01
The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999), is useful for obtaining approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome this shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing semi-numerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution, and this newly improved technique offers a powerful tool for solving nonlinear singular boundary value problems (SBVPs).
Energy Technology Data Exchange (ETDEWEB)
Pin, F.G.
1993-11-01
Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecision and the unpredictability of the environment, i.e., lack of full knowledge of the environment's characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish the elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper first reviews the basic concepts of this approach and discusses its pragmatic feasibility when embodied in a behaviorist framework. The second principle deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.
Nguyen, T. T.; Laurent, F.; Fox, R. O.; Massot, M.
2016-11-01
The accurate description and robust simulation, at relatively low cost, of global quantities (e.g. number density or volume fraction) as well as the size distribution of a population of fine particles in a carrier fluid is still a major challenge for many applications. For this purpose, two types of methods are investigated for solving the population balance equation with aggregation, continuous particle size change (growth and size reduction), and nucleation: the extended quadrature method of moments (EQMOM) based on the work of Yuan et al. [52] and a hybrid method (TSM) between the sectional and moment methods, considering two moments per section based on the work of Laurent et al. [30]. For both methods, the closure employs a continuous reconstruction of the number density function of the particles from its moments, thus allowing evaluation of all the unclosed terms in the moment equations, including the negative flux due to the disappearance of particles. Here, new robust and efficient algorithms are developed for this reconstruction step and two kinds of reconstruction are tested for each method. Moreover, robust and accurate numerical methods are developed, ensuring the realizability of the moments. The robustness is ensured with efficient and tractable algorithms despite the numerous couplings and various algebraic constraints thanks to a tailored overall strategy. EQMOM and TSM are compared to a sectional method for various simple but relevant test cases, showing their ability to describe accurately the fine-particle population with a much lower number of variables. These results demonstrate the efficiency of the modeling and numerical choices, and their potential for the simulation of real-world applications.
A standard numbering scheme for thiamine diphosphate-dependent decarboxylases
Directory of Open Access Journals (Sweden)
Vogel Constantin
2012-11-01
Full Text Available Abstract Background: Standard numbering schemes for families of homologous proteins allow for the unambiguous identification of functionally and structurally relevant residues, to communicate results on mutations, and to systematically analyse sequence-function relationships in protein families. Standard numbering schemes have been successfully implemented for several protein families, including lactamases and antibodies, whereas a numbering scheme for the structural family of thiamine-diphosphate (ThDP)-dependent decarboxylases, a large subfamily of the class of ThDP-dependent enzymes encompassing pyruvate-, benzoylformate-, 2-oxo acid-, indolpyruvate- and phenylpyruvate decarboxylases, benzaldehyde lyase, acetohydroxyacid synthases and 2-succinyl-5-enolpyruvyl-6-hydroxy-3-cyclohexadiene-1-carboxylate synthase (MenD), is still missing. Despite a high structural similarity between the members of the ThDP-dependent decarboxylases, their sequences are diverse and make a pairwise sequence comparison of protein family members difficult. Results: We developed and validated a standard numbering scheme for the family of ThDP-dependent decarboxylases. A profile hidden Markov model (HMM) was created using a set of representative sequences from the family of ThDP-dependent decarboxylases. The pyruvate decarboxylase from S. cerevisiae (ScPDC; PDB: 2VK8) was chosen as a reference because it is a well-characterized enzyme. The crystal structure with the PDB identifier 2VK8 encompasses the structure of the ScPDC mutant E477Q, the cofactors ThDP and Mg2+ as well as the substrate analogue (2S)-2-hydroxypropanoic acid. The absolute numbering of this reference sequence was transferred to all members of the ThDP-dependent decarboxylase protein family. Subsequently, the numbering scheme was integrated into the already established Thiamine-diphosphate dependent Enzyme Engineering Database (TEED) and was used to systematically analyze functionally and structurally relevant
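Once a family member is aligned to the reference, transferring the reference numbering is mechanical. The sketch below shows the idea on invented toy sequences; in the actual scheme the alignment comes from the profile HMM and the reference is ScPDC.

```python
# Transfer reference numbering through a pairwise alignment: every member
# residue inherits the number of the reference column it sits in; member
# residues falling in reference gaps (insertions) get no standard number.
def standard_numbering(ref_aln, mem_aln):
    numbering, ref_pos = [], 0
    for r, m in zip(ref_aln, mem_aln):
        if r != "-":
            ref_pos += 1
        if m != "-":
            numbering.append(ref_pos if r != "-" else None)
    return numbering

ref = "ACD-EFG"   # toy reference alignment row
mem = "AC-WEF-"   # toy member alignment row
nums = standard_numbering(ref, mem)
# Member residues A, C, W, E, F receive numbers [1, 2, None, 4, 5]:
# W is an insertion relative to the reference, D and G are deletions.
```

With such a mapping in hand, a mutation reported at, say, standard position 477 can be located in any family member regardless of its own sequence length.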
Salamatin, A.
2016-11-01
A numerical algorithm is developed for modelling the non-linear mass transfer process in supercritical fluid extraction (SFE). The ground raw material is considered polydisperse, characterized by a discrete number of effective particle fractions. Two continuous interacting counterparts separated by a permeable membrane are distinguished in the plant material build-up. The apoplast plays the role of transport channels during extraction, and the symplast contains the extractable oil. The complete SFE model is non-linear as a result of the non-linearity of the oil dissolution kinetics. The computational scheme is based on the finite-volume approximation method and the Thomas elimination procedure. The resulting system of algebraic equations is solved iteratively. Special attention is paid to polydisperse substrates, where the particle-scale characteristics of all fractions interact with each other through the pore-phase concentration on the vessel scale. The stability of the developed algorithm is demonstrated in numerical tests. A special iterative procedure guarantees a monotonic decrease of the oil content in individual particles of the substrate. It is also shown that, in the limit of the so-called shrinking-core approach, the number of mesh nodes on the particle scale should be increased.
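The Thomas elimination procedure named above is the standard O(n) forward-sweep/back-substitution solver for the tridiagonal systems that finite-volume discretizations produce. A generic sketch follows; the coefficients at the bottom form an arbitrary test system, not the SFE model.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Discrete -u'' = 1 with unit spacing: 2 on the diagonal, -1 off-diagonal.
n = 5
x = thomas([-1.0] * n, [2.0] * n, [-1.0] * n, [1.0] * n)
# Known solution of this test system: x_i = i*(n+1-i)/2 = [2.5, 4, 4.5, 4, 2.5]
```

Because the sweep visits each unknown once, the cost scales linearly with the number of finite-volume cells, which is what makes iterating the scheme over many particle fractions affordable.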
Alternative health insurance schemes
DEFF Research Database (Denmark)
Keiding, Hans; Hansen, Bodil O.
2002-01-01
In this paper, we present a simple model of health insurance with asymmetric information, where we compare two alternative ways of organizing the insurance market: either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where, however, the level … competitive insurance; this situation turns out to be at least as good as either of the alternatives...
Iguchi, Takamichi; Nakajima, Teruyuki; Khain, Alexander P.; Saito, Kazuo; Takemura, Toshihiko; Suzuki, Kentaroh
2008-07-01
A bin-based microphysics scheme for cloud is implemented into a three-dimensional nonhydrostatic model and off-line coupled with a global aerosol transport model to reproduce realistic and inhomogeneous condensation nuclei (CN) fields. This coupling makes it possible to calculate cloud microphysical properties over a larger area under more realistic environmental conditions. Using the model, nested grid simulations are performed for two precipitation events associated with transitional synoptic-scale forcing during the spring over an area of the East China Sea. The nested grid simulations reproduce the general features of the horizontal distributions of variables such as effective droplet radius derived from satellite data retrieval. Comparison of the relationships among simulated cloud variables with those among satellite-derived variables reveals that the implementation of an inhomogeneous CN field results in a more accurate simulation of the distribution of cloud microphysical properties. Sensitivity tests with respect to CN concentration show that the simulated area and amount of precipitation are slightly affected by the CN concentration. Comparative simulations using bin-based and bulk microphysical schemes indicate that the difference in cloud microphysics has little effect on precipitation except over the areas of elevated pollution (i.e., elevated CN). Comparison with previous reports indicates that the precipitation response to aerosols is dependent on the environmental conditions and the type of the cloud system.
Morrison, James L.; Oladunjoye, Ganiyu Titi
2002-01-01
A survey of 287 business faculty found that few were infusing electronic commerce topics into existing curricula despite its growing use in business. Responses were similar regardless of faculty gender, region, and program size or level. (SK)
The coupling of land surface models and hydrological models potentially improves the land surface representation, benefiting streamflow prediction capabilities as well as providing improved estimates of water and energy fluxes into the atmosphere. In this study, the Simple Biosphere Model 2...
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-09-01
Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time interval between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response strength is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of a few signal processing techniques. The proposed scheme initially represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft by using the interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of resampled vibration signal which is followed by thresholding of IMFs and signal reconstruction to de-noise the signal and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of rolling element bearing which is developed using bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be experimentally recreated are then generated through simulations and analyzed through the developed diagnosis scheme.
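The angular resampling step that converts a time-sampled signal to uniform shaft-angle sampling can be sketched with plain linear interpolation. The quadratic angle profile and linear toy signal below are illustrative; the paper interpolates measured instantaneous angular positions of the real rotor.

```python
def interp(xs, ys, x):
    """Piecewise-linear interpolation (xs strictly increasing)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + w * (ys[i + 1] - ys[i])
    raise ValueError("x outside measured angle range")

def angular_resample(theta, signal, n_uniform):
    """Resample 'signal' (recorded at shaft angles 'theta') onto a uniform
    angle grid, so speed variation no longer smears impact spacing."""
    grid = [theta[0] + (theta[-1] - theta[0]) * k / (n_uniform - 1)
            for k in range(n_uniform)]
    return grid, [interp(theta, signal, g) for g in grid]

# Accelerating shaft: angle grows quadratically in time, so time samples
# correspond to increasingly wide angle steps.
theta = [0.25 * t * t for t in range(9)]    # non-uniform angle samples
signal = [th * 2.0 for th in theta]         # toy signal, linear in angle
grid, resampled = angular_resample(theta, signal, 5)
# On a signal linear in angle, the interpolation is exact:
# resampled[k] == 2 * grid[k] for every grid point.
```

After this step the fault-induced impacts are equally spaced in the resampled index, which is what makes the subsequent EMD de-noising and envelope order tracking effective under variable speed.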
Haussaire, J.-M.; Bocquet, M.
2016-01-01
Bocquet and Sakov (2013) introduced a low-order model based on the coupling of the chaotic Lorenz-95 (L95) model, which simulates winds along a mid-latitude circle, with the transport of a tracer species advected by this zonal wind field. This model, named L95-T, can serve as a playground for testing data assimilation schemes with an online model. Here, the tracer part of the model is extended to a reduced photochemistry module. This coupled chemistry meteorology model (CCMM), the L95-GRS (generic reaction set) model, mimics continental and transcontinental transport and the photochemistry of ozone, volatile organic compounds and nitrogen oxides. Its numerical implementation is described. The model is shown to reproduce the major physical and chemical processes being considered. L95-T and L95-GRS are specifically designed and useful for testing advanced data assimilation schemes, such as the iterative ensemble Kalman smoother (IEnKS), which combines the best of ensemble and variational methods. These models provide useful insights prior to the implementation of data assimilation methods into larger models. We illustrate their use with data assimilation schemes on preliminary yet instructive numerical experiments. In particular, online and offline data assimilation strategies can be conveniently tested and discussed with this low-order CCMM. The impact of observed chemical species concentrations on the wind field estimate can be quantitatively assessed. The impacts of the wind chaotic dynamics and of the chemical species non-chaotic but highly nonlinear dynamics on the data assimilation strategies are illustrated.
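The wind component of these models, the Lorenz-95 dynamics, is compact enough to state in full: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F on a periodic ring of N sites. The sketch below integrates it with a fourth-order Runge-Kutta step using the customary N = 40 and F = 8; the tracer and GRS chemistry couplings of L95-T and L95-GRS are not included.

```python
N, F = 40, 8.0

def l95_rhs(x):
    """Lorenz-95 tendency; Python's negative indices give the periodic wrap."""
    return [(x[(i + 1) % N] - x[i - 2]) * x[i - 1] - x[i] + F
            for i in range(N)]

def rk4_step(x, dt):
    k1 = l95_rhs(x)
    k2 = l95_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = l95_rhs([xi + dt * 0.5 * ki for xi, ki in zip(x, k2)])
    k4 = l95_rhs([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Perturb the unstable fixed point x_i = F and integrate for 5 time units.
x = [F] * N
x[0] += 0.01
for _ in range(100):
    x = rk4_step(x, 0.05)
# The chaotic dynamics amplify the tiny perturbation across the ring,
# which is exactly the behavior that stresses the assimilation schemes.
```

A tracer advected by this wind field, and then a reduced ozone chemistry driving the tracers, are what the L95-T and L95-GRS extensions bolt onto this core.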
Ginzburg, Irina
2016-02-01
In this Comment on the recent work (Zhu and Ma, 2013) [11] by Zhu and Ma (ZM) we first show that all three local gray Lattice Boltzmann (GLB) schemes in the form (Zhu and Ma, 2013) [11]: GS (Chen and Zhu, 2008; Gao and Sharma, 1994) [1,4], WBS (Walsh et al., 2009) [12] and ZM, fail to get a constant Darcy velocity in a series of porous blocks. This inconsistency stems from their incorrect definition of the macroscopic velocity in the presence of heterogeneous momentum exchange, while the original WBS model (Walsh et al., 2009) [12] handles this properly. We improve the GS and ZM schemes for this and other related deficiencies. Second, we show that the 'discontinuous velocity' they recover on stratified interfaces with their WBS scheme is inherent, in different degrees, to all LBE Brinkman schemes, including the ZM scheme. None of them guarantees the stress and the velocity continuity by their implicit interface conditions, even in the frame of the two-relaxation-times (TRT) collision operator where these two properties are assured in stratified Stokes flow, Ginzburg (2007) [5]. Third, the GLB schemes are presented in work (Zhu and Ma, 2013) [11] as alternatives to the direct, Brinkman-force based (BF) schemes (Freed, 1998; Nie and Martys, 2007) [3,8]. Yet, we show that the BF-TRT scheme (Ginzburg, 2008) [6] recovers the solutions of any of the improved GLB schemes for a specific, viscosity-dependent choice of its one or two local relaxation rates. This provides the principal difference between the GLB and BF: while the BF may respect the linearity of the Stokes-Brinkman equation rigorously, the GLB-TRT cannot, unless it reduces to the BF via the inverse transform of the relaxation rates. Furthermore, we show that, in a limited parameter space, 'gray' schemes may run one another. From the practical point of view, permeability values obtained with the GLB are viscosity-dependent, unlike with the BF. Finally, the GLB shares with the BF a so-called anisotropy (Ginzburg
Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States
Sousan, Sinan Dhia Jameel
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 at IMPROVE monitoring sites and from 0.99 to 0.89 at STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
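The OI analysis step underlying this kind of data assimilation can be written compactly. The snippet below is the textbook update with dense matrices, a sketch rather than the thesis implementation (which uses large, structured error covariance matrices):

```python
import numpy as np

def optimal_interpolation(x_b, y, H, B, R):
    """One optimal-interpolation analysis step:
        x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^{-1}
    x_b: background state (n,), y: observations (m,),
    H: observation operator (m, n), B: background error covariance (n, n),
    R: observation error covariance (m, m)."""
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # gain matrix
    return x_b + K @ (y - H @ x_b)          # analysis state
```

With equal background and observation error variances, the analysis falls halfway between model and observation, which is the expected behavior of the gain.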
Li, Ping
2016-01-13
To meet electromagnetic interference regulations, the radiated emission from devices under test, such as electronic devices, must be carefully managed and accurately characterized. Instead of resorting to direct far-field measurement, in this paper a novel approach is proposed to model the radiated emission from electronic devices placed in shielding enclosures by using the near electric field only. Based on Schelkunoff's equivalence principle and the Rayleigh–Carson reciprocity theorem, only the tangential components of the electric field over the ventilation slots and apertures of the shielding enclosure are sufficient to obtain the radiated emissions outside the shielding box if the inside of the shielding enclosure were filled with perfect electric conductor (PEC). In order to efficiently model wideband emission, a time-domain sampling scheme is employed. Due to the lack of an analytical Green's function for arbitrary PEC boxes, the radiated emission must be obtained via full-wave numerical methods by considering the total radiated emission as the superposition of the direct radiation from the equivalent magnetic currents in free space and the scattered field generated by the PEC shielding box. In this study, the state-of-the-art discontinuous Galerkin time-domain (DGTD) method is utilized, which has the flexibility to model irregular geometries, keeps high-order accuracy, and, more importantly, involves only local operations. For open-region problems, a hybridized DGTD and time-domain boundary integration method is applied to rigorously truncate the computational domain. To validate the proposed approach, several representative examples are presented and compared with both analytical and numerical results.
Institute of Scientific and Technical Information of China (English)
HEJRANFAR Kazem; FATTAH-HESARY Kasra
2011-01-01
A numerical treatment for the prediction of cavitating flows is presented and assessed. The algorithm uses the preconditioned multiphase Euler equations with appropriate mass transfer terms. A central difference finite volume scheme with suitable dissipation terms to account for density jumps across the cavity interface is shown to yield an effective method for solving the multiphase Euler equations. The Euler equations are utilized herein for the cavitation modeling, because certain characteristics of cavitating flows can be obtained using the solution of this system of equations with relatively low computational effort. In addition, the Euler equations are appropriate for the assessment of the numerical method used, because of the sensitivity of the solution to numerical instabilities. For this reason, a sensitivity study is conducted to evaluate the effects of various parameters, such as numerical dissipation coefficients and grid size, on the accuracy and performance of the solution. The computations are performed for steady cavitating flows around the NACA 0012 and NACA 66 (MOD) hydrofoils and also an axisymmetric hemispherical fore-body under different conditions, and the results are compared with the available numerical and experimental data. The solution procedure presented is shown to be accurate and efficient for predicting steady sheet- and super-cavitation for 2D/axisymmetric geometries.
Institute of Scientific and Technical Information of China (English)
Zhang Yufeng; Zhang Dachang
2012-01-01
The multi-pylon multi-span suspension bridge is a new type of super-flexible structural system, and the rigidity design of the middle pylon is one of its main technical difficulties. Due to the requirements of longitudinal rigidity, the structural form and the corresponding foundation type of the middle pylon differ from those of an ordinary steel pylon, and the complicated dynamic characteristics make the calculation quite difficult. In this article, the experimental design of a dynamic model test of the middle pylon is explored, covering the selection of similarity ratio and model materials, section simulation, restraint-condition simulation, fixing of mass blocks, fabrication scheme and testing method, taking into account different construction and working conditions such as the restraint conditions and working environment of a three-pylon suspension bridge. The purpose is to reveal the middle pylon's dynamic characteristics, compare and analyze them against theoretical assumptions, and provide a basis for wind- and seismic-resistant design as well as a reference for the design and research of three-pylon two-span suspension bridges in the future.
Performance modeling of a two-tier primary-secondary network with IEEE 802.11 broadcast scheme
Khabazian, Mehdi
2011-03-01
In this paper, we study the performance of a two-tier primary-secondary network based on the IEEE 802.11 broadcast scheme. We assume that a number of primary and secondary users coexist in the radio environment and share a single band. To protect the primary users' priority, the secondary users are allowed to contend for the channel only if they sense it idle for a certain sensing time. Considering an exponential packet inter-arrival time for the primary network, we model each primary user as an independent M/G/1 queue. Subsequently, we determine the primary users' average medium access delay in the presence of secondary users as well as the hybrid network's throughput. Numerical results and discussions show the effects of parameters pertaining to the secondary users, such as sensing time, packet payload size and population size, on the performance of the primary network. Furthermore, we provide simulation results which confirm the accuracy of the proposed analysis. © 2011 IEEE.
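Modeling each primary user as an M/G/1 queue means its mean packet delay follows the Pollaczek-Khinchine formula. The sketch below shows that baseline calculation only; the paper's actual analysis additionally accounts for secondary-user contention and sensing, which is not modeled here:

```python
def mg1_mean_sojourn(lam, mean_s, second_moment_s):
    """Mean sojourn time (waiting + service) of an M/G/1 queue via
    the Pollaczek-Khinchine formula:
        W = lam * E[S^2] / (2 * (1 - rho)) + E[S],  rho = lam * E[S].
    lam: Poisson arrival rate; mean_s: E[S]; second_moment_s: E[S^2]."""
    rho = lam * mean_s
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization must be < 1")
    return lam * second_moment_s / (2.0 * (1.0 - rho)) + mean_s
```

As a sanity check, an M/M/1 queue with arrival rate 0.5 and service rate 1 (so E[S] = 1 and E[S^2] = 2) gives a mean sojourn time of 1/(mu - lam) = 2, which the formula reproduces.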
Institute of Scientific and Technical Information of China (English)
LIN Zhenhua; ZHAO Dongliang; SONG Jinbao
2011-01-01
Different advection schemes and two-equation turbulence closure models based on the eddy viscosity concept are used to compute the drag coefficient around a circular cylinder at high Reynolds number (10^6). The numerical results from these simulations are compared with each other and with experimental data in order to evaluate the performance of different combinations of advection scheme and two-equation turbulence model. The separate contributions from form drag and friction drag are also analyzed. The computational results show that the widely used standard k-ε turbulence closure is not suitable for this kind of study, while the other two-equation turbulence closure models produce acceptable results. The influence of the different advection schemes on the final results is small compared to that produced by the choice of turbulence closure method. The present study serves as a reference for the choice of advection schemes and turbulence closure models for more complex numerical simulations of the flow around a circular cylinder at high Reynolds number.
Improved Ternary Subdivision Interpolation Scheme
Institute of Scientific and Technical Information of China (English)
WANG Huawei; QIN Kaihuai
2005-01-01
An improved ternary subdivision interpolation scheme was developed for computer graphics applications that can manipulate open control polygons unlike the previous ternary scheme, with the resulting curve proved to be still C2-continuous. Parameterizations of the limit curve near the two endpoints are given with expressions for the boundary derivatives. The split joint problem is handled with the interpolating ternary subdivision scheme. The improved scheme can be used for modeling interpolation curves in computer aided geometric design systems, and provides a method for joining two limit curves of interpolating ternary subdivisions.
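The interpolating ternary idea can be illustrated with a generic cubic-reproducing scheme on a closed polygon: kept points stay put, and the two inserted points per edge are the values of the local cubic through four consecutive control points at t = 1/3 and 2/3. This is a sketch of the general mechanism, not the paper's improved scheme, which additionally provides endpoint rules for open polygons:

```python
import numpy as np

def ternary_subdivide(points, steps=1):
    """One interpolating ternary subdivision pass per step on a
    closed (periodic) control polygon of shape (n, d)."""
    w1 = np.array([-5.0, 60.0, 30.0, -4.0]) / 81.0  # cubic Lagrange weights at t = 1/3
    w2 = w1[::-1]                                    # symmetry gives the weights at t = 2/3
    p = np.asarray(points, dtype=float)
    for _ in range(steps):
        n = len(p)
        out = []
        for i in range(n):
            # Stencil of four consecutive points around the edge (i, i+1).
            stencil = p[[(i - 1) % n, i, (i + 1) % n, (i + 2) % n]]
            out.append(p[i])            # old point is interpolated (kept)
            out.append(w1 @ stencil)    # new point at parameter 1/3
            out.append(w2 @ stencil)    # new point at parameter 2/3
        p = np.array(out)
    return p
```

Each pass triples the number of points while leaving the original control points on the curve, which is the defining property of an interpolating (rather than approximating) subdivision scheme.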
2008-10-01
Remote sensing of ocean color provides synoptic surface ocean bio-optical properties but is limited to real-time or climatological applications. Many...this, we couple satellite imagery with numerical circulation models to provide short-term (24-48 hr) forecasts of bio-optical properties. These are...physical processes control the bio-optical distribution patterns. We compare optical forecast results from three Navy models and two advection
Chaouch, Naira; Temimi, Marouane; Weston, Michael; Ghedira, Hosni
2017-05-01
In this study, we intercompare seven different PBL schemes in WRF over the United Arab Emirates (UAE) and assess their impact on the performance of the simulations. The study covered five fog events reported in 2014 at Abu Dhabi International Airport. The analysis of synoptic conditions indicated that during all examined events, the UAE was under high geopotential pressure and light wind not exceeding 7 m/s at 850 hPa (about 1.5 km). Seven PBL schemes, namely Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3, were tested. In situ observations used in the model assessment included radiosonde data from Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles. Overall, all the tested PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and BIAS for all PBL schemes were 15.75% and -9.07%, respectively, whereas the RMSE and BIAS obtained with QNSE were 14.65% and -6.3%, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface level than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained for lead times between 12 and 18 h. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
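The RMSE and BIAS scores used above are simple aggregate statistics of model-minus-observation differences; a negative BIAS, as reported for RH, indicates systematic underestimation. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def rmse_and_bias(simulated, observed):
    """Return (RMSE, BIAS) of simulated values against observations.
    BIAS = mean(simulated - observed): negative means underestimation."""
    d = np.asarray(simulated, dtype=float) - np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean(d**2))), float(np.mean(d))
```

Note that RMSE penalizes large individual errors quadratically, while BIAS can be small even when errors are large, as long as over- and underestimation cancel.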
Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan
2016-10-01
A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function including the rotational velocity of particle is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity of particle and that of the rotational velocity of particle. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, namely one is in relation to the translational velocity and the other is connected with the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity. The two evolution equations evolve separately. The lattice Boltzmann models used in the scheme proposed by this work are constructed via the Hermite expansion, so it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.
Directory of Open Access Journals (Sweden)
D. Boutelier
2011-05-01
We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subduction, arc-continent or continental collision. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is subjected to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which allows maintaining an unobstructed view of the model surface and the use of a high-resolution optical strain monitoring technique (Particle Imaging Velocimetry). Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.
Shepherd, Tristan J.; Walsh, Kevin J.
2017-08-01
This study investigates the effect of the choice of convective parameterization (CP) scheme on the simulated tracks of three intense tropical cyclones (TCs), using the Weather Research and Forecasting (WRF) model. We focus on diagnosing the competing influences of large-scale steering flow, beta drift and convectively induced changes in track, as represented by four different CP schemes (Kain-Fritsch (KF), Betts-Miller-Janjic (BMJ), Grell-3D (G-3), and the Tiedtke (TD) scheme). The sensitivity of the results to initial conditions, model domain size and shallow convection is also tested. We employ a diagnostic technique by Chan et al. (J Atmos Sci 59:1317-1336, 2002) that separates the influence of the large-scale steering flow, beta drift and the modifications of the steering flow by the storm-scale convection. The combined effect of the steering flow and the beta drift causes TCs typically to move in the direction of the wavenumber-1 (WN-1) cyclonic potential vorticity tendency (PVT). In instances of asymmetrical TCs, the simulated TC motion does not necessarily match the motion expected from the WN-1 PVT due to changes in the convective pattern. In the present study, we test this concept in the WRF simulations and investigate whether, in cases where the diagnosed motion from the WN-1 PVT and the TC motion do not match, this can be related to the emerging evolution of changes in convective structure. Several systematic results are found across the three cyclone cases. The sensitivity of TC track to initial conditions (the initialisation time and model domain size) is less than the sensitivity of TC track to changing the CP scheme. The simulated track is not overly sensitive to shallow convection in the KF, BMJ, and TD schemes, compared to the track difference between CP schemes. The G-3 scheme, however, is highly sensitive to whether shallow convection is used. Furthermore, while agreement between the simulated TC track direction and the WN-1 diagnostic is usually good, there are
Blatter, Heinz
2014-01-01
The enthalpy method for the thermodynamics of polythermal glaciers and ice sheets is tested and verified by a one-dimensional problem (parallel-sided slab). The enthalpy method alone does not explicitly include the Stefan-type energy- and mass-flux matching conditions at the cold-temperate transition surface (CTS) that separates the upper cold from the lower temperate layer. However, these transition conditions are important for correctly determining the position of the CTS. For the numerical solution of the polythermal slab problem, we consider a two-layer front-tracking scheme as well as three different one-layer schemes that feature a single grid for both layers. Computed steady-state temperature and moisture profiles are verified with exact solutions, and transient solutions computed by the one-layer schemes are compared with those of the two-layer scheme, considered to be a reliable reference. While the conventional one-layer enthalpy scheme (that does not include the transition conditions at the CTS) ca...
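The enthalpy formulation's key step, recovering temperature and water content from a single enthalpy variable, can be sketched as below. Enthalpy is measured relative to the melting point, and the heat capacity and latent heat are typical textbook values used here as assumptions, not the constants of the paper:

```python
def enthalpy_to_state(E, c=2009.0, L=3.34e5, T_m=273.15):
    """Invert the enthalpy of polythermal ice into (temperature, moisture).
    E: enthalpy relative to the melting point (J/kg).
    c: heat capacity of ice (J/(kg K)), L: latent heat of fusion (J/kg),
    T_m: melting temperature (K). For E < 0 the ice is cold and dry;
    for E >= 0 it is temperate at T_m and the excess enthalpy is stored
    as water (moisture) content omega = E / L."""
    if E < 0.0:
        return T_m + E / c, 0.0   # cold ice: all enthalpy is sensible heat
    return T_m, E / L             # temperate ice: excess enthalpy melts ice
```

This piecewise inversion is exactly why the CTS matters: the single enthalpy field is continuous across it, but temperature and moisture switch roles there, so the transition conditions determine where the switch occurs.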
Identity-based Signcryption Scheme for Multiple PKG in Standard Model
Institute of Scientific and Technical Information of China (English)
冀会芳; 韩文报; 刘连东
2011-01-01
Several identity-based signcryption schemes for multiple Private Key Generators (PKG) have been proved secure in the random oracle model. This paper proposes a new identity-based signcryption scheme for multiple PKGs with a security proof in the standard model. Confidentiality, in the sense of ciphertext indistinguishability against adaptive chosen-ciphertext attack, is obtained under the DBDH assumption, and existential unforgeability against adaptive chosen-message attack is derived under the CDH assumption. Compared with the existing schemes, the new scheme's signcryption phase is more efficient.
Strongly secure certificateless signcryption scheme in standard model
Institute of Scientific and Technical Information of China (English)