A Survey of Health Care Models that Encompass Multiple Departments
Vanberkel, Peter T.; Boucherie, Richardus J.; Hans, Elias W.; Hurink, Johann L.; Litvak, Nelli
2009-01-01
In this survey we review quantitative health care models to illustrate the extent to which they encompass multiple hospital departments. The paper provides general overviews of the relationships that exist between major hospital departments and describes how these relationships are accounted for by
Two adaptive radiative transfer schemes for numerical weather prediction models
Directory of Open Access Journals (Sweden)
V. Venema
2007-11-01
Full Text Available Radiative transfer calculations in atmospheric models are computationally expensive, even if based on simplifications such as the δ-two-stream approximation. In most weather prediction models these parameterisation schemes are therefore called infrequently, accepting additional model error due to the persistence assumption between calls. This paper presents two so-called adaptive parameterisation schemes for radiative transfer in a limited area model: A perturbation scheme that exploits temporal correlations and a local-search scheme that mainly takes advantage of spatial correlations. Utilising these correlations and with similar computational resources, the schemes are able to predict the surface net radiative fluxes more accurately than a scheme based on the persistence assumption. An important property of these adaptive schemes is that their accuracy does not decrease much in case of strong reductions in the number of calls to the δ-two-stream scheme. It is hypothesised that the core idea can also be employed in parameterisation schemes for other processes and in other dynamical models.
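The perturbation idea above, reusing the last full radiative calculation and correcting it with a cheap, temporally correlated predictor, can be illustrated with a minimal sketch. The flux formula, sensitivity value, and call interval below are invented for illustration and are not taken from the paper:

```python
import math

def expensive_flux(cloud):
    # stand-in for a full delta-two-stream call (hypothetical toy formula)
    return 340.0 * (1.0 - 0.6 * cloud - 0.1 * cloud * cloud)

def run(scheme, n_steps=100, call_every=10):
    """Mean absolute flux error when the expensive call is made infrequently."""
    total_err = 0.0
    cached_flux = cached_cloud = 0.0
    for t in range(n_steps):
        cloud = 0.5 + 0.4 * math.sin(2.0 * math.pi * t / 50.0)  # slowly varying
        truth = expensive_flux(cloud)
        if t % call_every == 0:
            cached_flux, cached_cloud = truth, cloud  # full (expensive) call
            est = truth
        elif scheme == "persistence":
            est = cached_flux  # persistence assumption: hold the last value
        else:  # "perturbation": linear correction from the cheap predictor
            est = cached_flux - 340.0 * 0.6 * (cloud - cached_cloud)
        total_err += abs(est - truth)
    return total_err / n_steps
```

With the same number of expensive calls, the perturbation variant tracks the slowly varying state and yields a smaller mean error than pure persistence, which is the core claim of the adaptive schemes.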
SYNTHESIS OF VISCOELASTIC MATERIAL MODELS (SCHEMES)
Directory of Open Access Journals (Sweden)
V. Bogomolov
2014-10-01
Full Text Available The principles of constructing structural viscoelastic schemes for materials with linear viscoelastic properties, in accordance with given experimental creep-test data, are analyzed. It is shown that there can be only four types of materials with linear viscoelastic properties.
An operator model-based filtering scheme
International Nuclear Information System (INIS)
Sawhney, R.S.; Dodds, H.L.; Schryer, J.C.
1990-01-01
This paper presents a diagnostic model developed at Oak Ridge National Laboratory (ORNL) for off-normal nuclear power plant events. The diagnostic model is intended to serve as an embedded module of a cognitive model of the human operator, one application of which could be to assist control room operators in correctly responding to off-normal events by providing a rapid and accurate assessment of alarm patterns and parameter trends. The sequential filter model is comprised of two distinct subsystems: an alarm analysis followed by an analysis of interpreted plant signals. During the alarm analysis phase, the alarm pattern is evaluated to generate hypotheses of possible initiating events in order of likelihood of occurrence. Each hypothesis is further evaluated through analysis of the current trends of state variables in order to validate/reject (in the form of increased/decreased certainty factor) the given hypothesis. 7 refs., 4 figs.
Iteration schemes for parallelizing models of superconductivity
Energy Technology Data Exchange (ETDEWEB)
Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T{sub c} superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
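The multiplicative form of SPPT can be sketched in a few lines. The AR(1) parameters and tendency value below are assumed for illustration and are not the operational ECMWF settings:

```python
import random

def ar1_pattern(n, phi=0.95, sigma=0.1, seed=1):
    # AR(1) noise with stationary standard deviation sigma (assumed values)
    random.seed(seed)
    innov_std = sigma * (1.0 - phi * phi) ** 0.5
    e, out = 0.0, []
    for _ in range(n):
        e = phi * e + random.gauss(0.0, innov_std)
        out.append(e)
    return out

def sppt(tendency, e, mu=1.0):
    # multiplicatively perturbed parametrised tendency: T' = (1 + mu * e) * T
    return (1.0 + mu * e) * tendency

noise = ar1_pattern(10000)
perturbed = [sppt(2.0, e) for e in noise]
mean_tendency = sum(perturbed) / len(perturbed)
lag1 = sum(a * b for a, b in zip(noise, noise[1:])) / sum(e * e for e in noise)
```

Because the noise has zero mean, the perturbation leaves the mean tendency essentially unchanged while its temporal autocorrelation controls the persistence of the perturbation pattern, the property the coarse-graining study above seeks to justify.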
Multi-model ensemble schemes for predicting northeast monsoon ...
Indian Academy of Sciences (India)
An attempt has been made to improve the accuracy of predicted rainfall using three different multi-model ensemble (MME) schemes, viz., a simple arithmetic mean of models (EM), principal component regression (PCR), and singular value decomposition based multiple linear regression (SVD). It is found that among ...
Inflationary gravitational waves in collapse scheme models
Energy Technology Data Exchange (ETDEWEB)
Mariani, Mauro, E-mail: mariani@carina.fcaglp.unlp.edu.ar [Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, 1900 La Plata (Argentina); Bengochea, Gabriel R., E-mail: gabriel@iafe.uba.ar [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC 67, Suc. 28, 1428 Buenos Aires (Argentina); León, Gabriel, E-mail: gleon@df.uba.ar [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria – Pab. I, 1428 Buenos Aires (Argentina)
2016-01-10
The inflationary paradigm is an important cornerstone of the concordance cosmological model. However, standard inflation cannot fully address the transition from an early homogeneous and isotropic stage, to another one lacking such symmetries corresponding to our present universe. In previous works, a self-induced collapse of the wave function has been suggested as the missing ingredient of inflation. Most of the analysis regarding the collapse hypothesis has been solely focused on the characteristics of the spectrum associated to scalar perturbations, and within a semiclassical gravity framework. In this Letter, working in terms of a joint metric-matter quantization for inflation, we calculate, for the first time, the tensor power spectrum and the tensor-to-scalar ratio corresponding to the amplitude of primordial gravitational waves resulting from considering a generic self-induced collapse.
Introducing a moisture scheme to a nonhydrostatic sigma coordinate model
CSIR Research Space (South Africa)
Bopape, Mary-Jane M
2011-09-01
Full Text Available and precipitation in mid-latitude cyclones. VII: A model for the "seeder-feeder" process in warm-frontal rainbands. Journal of the Atmospheric Sciences, 40, 1185-1206. Stensrud DJ, 2007: Parameterization schemes. Keys to understanding numerical weather...
A new parallelization algorithm of ocean model with explicit scheme
Fu, X. D.
2017-08-01
This paper focuses on the parallelization of ocean models with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple: the value at a given grid point depends only on values at the previous time step, so no sparse linear systems need to be solved when integrating the governing equations. Exploiting this property, this paper designs a parallel algorithm, named halo-cell update, which requires only minor modification of the original ocean model and leaves its space and time steps unchanged; the model is parallelized by adding a transmission module between sub-domains. The approach is demonstrated by parallelizing GRGO (Global Reduced Gravity Ocean model) with halo update. The results demonstrate that high speedups can be achieved at different problem sizes.
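A minimal sketch of the halo-cell idea, using a 1-D explicit diffusion stencil as a stand-in for the ocean model's governing equation. The two-subdomain split and the toy stencil are illustrative assumptions; in the real model the halo exchange is the inter-process transmission module:

```python
def step_global(u, alpha=0.25):
    # one explicit diffusion step on the full periodic domain
    n = len(u)
    return [u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

def step_with_halos(u, alpha=0.25):
    # split the periodic domain into two subdomains, each padded with one
    # halo cell; the halo update stands in for inter-subdomain transmission
    n = len(u)
    half = n // 2
    left, right = u[:half], u[half:]
    left_p = [right[-1]] + left + [right[0]]    # exchange halo cells
    right_p = [left[-1]] + right + [left[0]]

    def update(p):
        return [p[i] + alpha * (p[i - 1] - 2 * p[i] + p[i + 1])
                for i in range(1, len(p) - 1)]

    return update(left_p) + update(right_p)
```

Because the explicit stencil only reads previous-step values, the decomposed update reproduces the global update exactly, which is why the scheme parallelizes with only a transmission module added.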
An integrated urban drainage system model for assessing renovation scheme.
Dong, X; Zeng, S; Chen, J; Zhao, D
2012-01-01
Due to sustained economic growth in China over the last three decades, urbanization has been on a rapidly expanding track. In recent years, regional industrial relocation has also accelerated across the country from the east coast to the west inland. These changes have led to a large-scale redesign of urban infrastructures, including the drainage system. To help move the reconstructed infrastructures toward better sustainability, a tool is required for assessing the efficiency and environmental performance of different renovation schemes. This paper developed an integrated dynamic modeling tool consisting of three models describing the sewer, the wastewater treatment plant (WWTP), and the receiving water body, respectively. Three auxiliary modules were also incorporated to conceptualize the model, calibrate the simulations, and analyze the results. The developed integrated modeling tool was applied to a case study in Shenzhen City, one of China's most dynamic cities, which faces considerable challenges from environmental degradation. The renovation scheme proposed to improve the environmental performance of Shenzhen City's urban drainage system was modeled and evaluated. The simulation results provided suggestions for further improvement of the renovation scheme.
Adaptive Packet Combining Scheme in Three State Channel Model
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
The two popular packet-combining error correction schemes are Packet Combining (PC) and Aggressive Packet Combining (APC). Each has its own merits and demerits: PC offers better throughput than APC but suffers from a higher packet error rate. Because the state of a wireless channel is random and time varying, individual application of the Selective Repeat ARQ (SR ARQ), PC, or APC scheme cannot deliver the desired throughput. Better throughput can be achieved if the transmission scheme is matched to the channel condition. Based on this approach, an adaptive packet combining scheme is proposed that switches among the PC, APC, and SR ARQ schemes according to the channel state. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes individually.
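The adaptive selection logic can be sketched as follows. The bit-error-rate thresholds and the three-state transition probabilities are hypothetical placeholders, not values from the paper:

```python
import random

# three-state channel model: each state has an assumed bit error rate (BER)
STATE_BER = {"good": 1e-5, "moderate": 1e-3, "bad": 5e-2}
TRANSITIONS = {
    "good":     [("good", 0.80), ("moderate", 0.15), ("bad", 0.05)],
    "moderate": [("good", 0.30), ("moderate", 0.50), ("bad", 0.20)],
    "bad":      [("good", 0.10), ("moderate", 0.30), ("bad", 0.60)],
}

def choose_scheme(ber):
    # hypothetical thresholds: clean channel -> plain SR ARQ,
    # moderate errors -> PC, heavy errors -> APC
    if ber < 1e-4:
        return "SR ARQ"
    if ber < 1e-2:
        return "PC"
    return "APC"

def simulate(n=1000, seed=0):
    # walk the three-state Markov channel, picking a scheme at each step
    random.seed(seed)
    state, picks = "good", []
    for _ in range(n):
        picks.append(choose_scheme(STATE_BER[state]))
        r, acc = random.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return picks
```

Over a long run the channel visits all three states, so all three transmission schemes get used, each in the regime where it performs best.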
Model building by Coset Space Dimensional Reduction scheme
Jittoh, Toshifumi; Koike, Masafumi; Nomura, Takaaki; Sato, Joe; Shimomura, Takashi
2009-04-01
We investigate gauge-Higgs unification models within the scheme of coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime whose extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidate coset spaces, gauge groups in fourteen dimensions, and fermion representations. Of the seventeen, ten models led to SO(10)(×U(1)) GUT-like models after dimensional reduction, three to SU(5)×U(1) GUT-like models, and four to SU(3)×SU(2)×U(1)×U(1) Standard-Model-like models. The combinations of coset space, fourteen-dimensional gauge group, and fermion representation for these models are listed.
An Industrial Model Based Disturbance Feedback Control Scheme
DEFF Research Database (Denmark)
Kawai, Fukiko; Nakazawa, Chikashi; Vinther, Kasper
2014-01-01
This paper presents a model based disturbance feedback control scheme. Industrial process systems have traditionally been controlled using relay and PID controllers. However, these controllers are affected by disturbances and model errors, and these effects degrade control performance. The authors propose a new control method that can decrease the negative impact of disturbances and model errors. The control method is motivated by industrial practice at Fuji Electric. Simulation tests are examined with a conventional PID controller and the disturbance feedback control. The simulation results demonstrate the effectiveness of the proposed method compared with the conventional PID controller.
Generalized Roe's numerical scheme for a two-fluid model
International Nuclear Information System (INIS)
Toumi, I.; Raymond, P.
1993-01-01
This paper is devoted to a mathematical and numerical study of a six-equation two-fluid model. We will prove that the model is strictly hyperbolic due to the inclusion of the virtual mass force term in the phasic momentum equations. The two-fluid model is naturally written in a nonconservative form. To solve the nonlinear Riemann problem for this nonconservative hyperbolic system, a generalized Roe's approximate Riemann solver is used, based on a linearization of the nonconservative terms. A Godunov-type numerical scheme is built using this approximate Riemann solver. 10 refs., 5 figs.
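The flavor of a Roe-type approximate Riemann solver can be shown on the scalar Burgers equation rather than the six-equation two-fluid model; this is a deliberate simplification (the actual solver linearizes the nonconservative terms of the full system):

```python
def roe_flux_burgers(uL, uR):
    # Roe's approximate Riemann solver for the scalar Burgers flux f(u) = u^2/2;
    # the Roe-averaged wave speed is a = (uL + uR) / 2
    fL, fR = 0.5 * uL * uL, 0.5 * uR * uR
    a = 0.5 * (uL + uR)
    return 0.5 * (fL + fR) - 0.5 * abs(a) * (uR - uL)

def godunov_step(u, dt_dx=0.5):
    # one conservative Godunov-type update on a periodic grid
    n = len(u)
    flux = [roe_flux_burgers(u[i], u[(i + 1) % n]) for i in range(n)]
    return [u[i] - dt_dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

The interface flux is exact for a single shock (e.g. uL=1, uR=0 gives the upwind flux f(1)=0.5), and the update conserves the integral of u, the two properties a Roe-type scheme is built around.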
An integration scheme for stiff solid-gas reactor models
Directory of Open Access Journals (Sweden)
Bjarne A. Foss
2001-04-01
Full Text Available Many dynamic models encounter numerical integration problems because of a large span in the dynamic modes. In this paper we develop a numerical integration scheme for systems that include a gas phase and solid and liquid phases, such as a gas-solid reactor. The method is based on neglecting fast dynamic modes and exploiting the structure of the algebraic equations. The integration method is suitable for a large class of industrially relevant systems. The methodology has proven remarkably efficient: in practice it has performed excellently and has been a key factor in the success of the industrial simulator for electrochemical furnaces for ferro-alloy production.
Fast Proton Titration Scheme for Multiscale Modeling of Protein Solutions.
Teixeira, Andre Azevedo Reis; Lund, Mikael; da Silva, Fernando Luís Barroso
2010-10-12
Proton exchange between titratable amino acid residues and the surrounding solution gives rise to exciting electric processes in proteins. We present a proton titration scheme for studying acid-base equilibria in Metropolis Monte Carlo simulations where salt is treated at the Debye-Hückel level. The method, rooted in the Kirkwood model of impenetrable spheres, is applied to the three milk proteins α-lactalbumin, β-lactoglobulin, and lactoferrin, for which we investigate the net charge, molecular dipole moment, and charge capacitance. Over a wide range of pH and salt conditions, excellent agreement is found with more elaborate simulations where salt is explicitly included. The implicit salt scheme is orders of magnitude faster than the explicit analog and allows for transparent interpretation of physical mechanisms. It is shown how the method can be expanded to multiscale modeling of aqueous salt solutions of many biomolecules with nonstatic charge distributions. Important examples are protein-protein aggregation, protein-polyelectrolyte complexation, and protein-membrane association.
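A stripped-down, single-site version of such a titration move reproduces the Henderson-Hasselbalch curve. This toy ignores the electrostatic interactions and the Kirkwood sphere geometry of the actual method and keeps only the pH/pKa bookkeeping of the Metropolis move:

```python
import math
import random

def titrate(pH, pKa, n_steps=20000, seed=42):
    # Metropolis sampling of one titratable site: the reduced energy change
    # for protonation is ln(10) * (pH - pKa); interactions are ignored here
    random.seed(seed)
    protonated, count = False, 0
    for _ in range(n_steps):
        sign = -1.0 if protonated else 1.0        # trial move: flip the state
        dU = sign * math.log(10.0) * (pH - pKa)
        if dU <= 0.0 or random.random() < math.exp(-dU):
            protonated = not protonated
        count += protonated
    return count / n_steps  # average protonation fraction
```

Henderson-Hasselbalch predicts a protonation fraction of 1 / (1 + 10**(pH - pKa)), which the sampler recovers; the full scheme adds Debye-Hückel screened interactions between many such sites on the protein.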
Study on noise prediction model and control schemes for substation.
Chen, Chuanmin; Gao, Yang; Liu, Songtao
2014-01-01
With the government's emphasis on the environmental issues of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation has a negative effect on the ambient environment. This paper focuses on using acoustic software for simulation and calculation to control substation noise. According to the characteristics of substation noise and the techniques of noise reduction, a substation's acoustic field model was established with the SoundPLAN software to predict the extent of substation noise. On this basis, four noise control schemes were proposed to provide helpful references for noise control during the design and construction of new substations. The feasibility and effect of these control schemes were verified using simulation modeling. The simulation results show that, under conventional measures, the substation always exceeds the noise limit at its boundary; the excess noise can be efficiently reduced by the corresponding noise reduction methods.
An intracloud lightning parameterization scheme for a storm electrification model
Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.
1992-01-01
The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.
Dynamics Model Abstraction Scheme Using Radial Basis Functions
Directory of Open Access Journals (Sweden)
Silvia Tolu
2012-01-01
Full Text Available This paper presents a control model for object manipulation. Properties of objects and environmental conditions influence motor control and learning. System dynamics depend on an unobserved external context, for example, the work load of a robot manipulator. The dynamics of a robot arm change as it manipulates objects with different physical properties, for example, mass, shape, or mass distribution. We address active sensing strategies to acquire object dynamical models with a radial basis function (RBF) neural network. Experiments are done using a real robot arm, and trajectory data are gathered during various trials manipulating different objects. Biped robots do not have high-force joint servos, and in dynamic gait control the control system can hardly compensate for all the inertia variation of adjacent joints and the disturbance torque. In order to achieve smoother control and more reliable sensorimotor complexes, we evaluate and compare a sparse velocity-driven versus a dense position-driven control scheme.
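Fitting an RBF network to a measured signal by least squares can be sketched as follows. The 1-D target, the center spacing, and the width are illustrative choices, not the paper's robot-arm setup:

```python
import numpy as np

def rbf_design(x, centers, width=0.4):
    # Gaussian radial basis features: phi_j(x) = exp(-(x - c_j)^2 / (2 width^2))
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width * width))

x = np.linspace(-2.0, 2.0, 80)
centers = np.linspace(-2.0, 2.0, 11)
target = np.sin(2.0 * x)               # stand-in for a measured dynamics signal
Phi = rbf_design(x, centers)
weights, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # output-layer weights
approx = Phi @ weights
```

With fixed centers, only the linear output weights are trained, so the fit reduces to a least-squares problem; the paper's contribution lies in choosing which trajectory data to feed such a model.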
Algorithms for Optimal Model Distributions in Adaptive Switching Control Schemes
Directory of Open Access Journals (Sweden)
Debarghya Ghosh
2016-03-01
Full Text Available Several multiple model adaptive control architectures have been proposed in the literature. Despite many advances in theory, the crucial question of how to synthesize the model/controller pairs in a structurally optimal way is to a large extent not addressed. In particular, it is not clear how to place the model/controller pairs in such a way that the properties of the switching algorithm (e.g., number of switches, learning transient, final performance) are optimal with respect to some criteria. In this work, we focus on the so-called multi-model unfalsified adaptive supervisory switching control (MUASSC) scheme; we define a suitable structural optimality criterion and develop algorithms for synthesizing the model/controller pairs in such a way that they are optimal with respect to the structural optimality criterion we defined. The peculiarity of the proposed optimality criterion and algorithms is that the optimization is carried out so as to optimize the entire behavior of the adaptive algorithm, i.e., both the learning transient and the steady-state response. A comparison is made with respect to the model distribution of the robust multiple model adaptive control (RMMAC), where the optimization considers only the steady-state ideal response and neglects any learning transient.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observation points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
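The two-step estimation process (logistic regression for occurrence, then a separate regression for amounts on wet days) can be sketched on synthetic data. The gradient-descent logistic fit and the single synthetic predictor are simplifying assumptions, not the paper's covariates:

```python
import numpy as np

def fit_two_step(x, y, lr=0.5, iters=2000):
    """Step 1: logistic regression for occurrence; step 2: amounts on wet days."""
    Xb = np.column_stack([np.ones(len(x)), x])
    occ = (y > 0).astype(float)
    w = np.zeros(2)
    for _ in range(iters):                        # plain gradient ascent
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (occ - p) / len(y)
    wet = y > 0
    b, *_ = np.linalg.lstsq(Xb[wet], y[wet], rcond=None)
    return w, b

def predict(x, w, b, thresh=0.5):
    Xb = np.column_stack([np.ones(len(x)), x])
    p_wet = 1.0 / (1.0 + np.exp(-Xb @ w))
    return np.where(p_wet > thresh, Xb @ b, 0.0)  # dry points get zero

x = np.linspace(0.0, 1.0, 50)                # synthetic predictor (e.g. elevation)
y = np.where(x > 0.5, 2.0 * x, 0.0)          # wet above 0.5, amount grows with x
w, b = fit_two_step(x, y)
est = predict(np.array([0.1, 0.9]), w, b)
```

Separating occurrence from amount lets the amount model ignore the many zero observations, which is the mechanism behind the improved spatial variability reported above.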
Two nonlinear control schemes contrasted on a hydrodynamiclike model
Keefe, Laurence R.
1993-01-01
The principles of two flow control strategies, those of Huebler (Luescher and Huebler, 1989) and of Ott et al. (1990) are discussed, and the two schemes are compared for their ability to control shear flow, using fully developed and transitional solutions of the Ginzburg-Landau equation as models for such flows. It was found that the effectiveness of both methods in obtaining control of fully developed flows depended strongly on the 'distance' in state space between the uncontrolled flow and goal dynamics. There were conceptual difficulties in applying the Ott et al. method to transitional convectively unstable flows. On the other hand, the Huebler method worked well, within certain limitations, although at a large cost in energy terms.
Radiolytic oxidation of propane: computer modeling of the reaction scheme
International Nuclear Information System (INIS)
Gupta, A.K.; Hanrahan, R.J.
1991-01-01
The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25 °C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases. (author)
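The computer-modeling approach, integrating mass-action rate equations for a reaction mechanism, can be sketched for a two-reaction toy subset of the scheme. The rate constants and concentrations are arbitrary illustrative values; the actual study used a 28-reaction mechanism:

```python
def simulate(k1=1.0, k2=0.5, dt=0.001, steps=5000):
    # toy subset of the mechanism: R + O2 -> RO2 (k1); RO2 + RO2 -> P (k2),
    # integrated with explicit Euler on the mass-action rate equations
    c = {"R": 1.0, "O2": 2.0, "RO2": 0.0, "P": 0.0}
    for _ in range(steps):
        r1 = k1 * c["R"] * c["O2"]
        r2 = k2 * c["RO2"] ** 2
        c["R"] -= dt * r1
        c["O2"] -= dt * r1
        c["RO2"] += dt * (r1 - 2.0 * r2)
        c["P"] += dt * r2
    return c

final = simulate()
```

The stoichiometry conserves the radical-bearing carbon (R + RO2 + 2P stays constant), a useful sanity check when fitting computed yields against measured G-values.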
Educational NASA Computational and Scientific Studies (enCOMPASS)
Memarsadeghi, Nargess
2013-01-01
Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches used and often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and
Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan
2018-05-01
A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (the Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmosphere Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate through a series of simulations with a simplified standalone radiative transfer model, the atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that the Baum-Yang scheme yields generally weaker ice cloud effects on temperature profiles in both the shortwave and longwave spectra. CAM5 simulations indicate that the Baum-Yang scheme, in place of the Mitchell/Fu schemes, tends to cool the upper atmosphere and strengthen the thermodynamic instability in low- and mid-latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, the reduced downward longwave flux to the surface in the Baum-Yang scheme mitigates the ice-albedo feedback in the Arctic as well as the water vapor and cloud feedbacks in low- and mid-latitudes, resulting in an overall temperature decrease of 3.0/1.4 °C globally compared with the Mitchell/Fu schemes. The radiative effects and climate feedbacks of the three ice cloud optical schemes documented in this study can serve as a reference for future improvements to ice cloud simulation in CAM5.
Radiolytic oxidation of propane: Computer modeling of the reaction scheme
Gupta, Avinash K.; Hanrahan, Robert J.
The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25°C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. Minor products include i-butyl alcohol, t-amyl alcohol, n-butyl alcohol, n-amyl alcohol, and i-amyl alcohol. Small yields of i-hexyl alcohol and n-hexyl alcohol were also observed. There was no apparent difference in the G-values at pressures of 50, 100 and 150 torr. When the oxygen concentration was decreased below 5%, the yields of acetone, i-propyl alcohol, and n-propyl alcohol increased, the propionaldehyde yield decreased, and the yields of other products remained constant. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases.
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
Nitrogen and Phosphorus Biomass-Kinetic Model for Chlorella vulgaris in a Biofuel Production Scheme
2010-03-01
Thesis, AFIT/GES/ENV/10-M04: Nitrogen and Phosphorus Biomass-Kinetic Model for Chlorella vulgaris in a Biofuel Production Scheme. William M. Rowley, BS, Major, USMC.
Armand J, K. M.
2017-12-01
In this study, version 4 of the Regional Climate Model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC) and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that regardless of period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of the regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that using the BATS scheme instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.
On usage of CABARET scheme for tracer transport in INM ocean model
International Nuclear Information System (INIS)
Diansky, Nikolay; Kostrykin, Sergey; Gusev, Anatoly; Salnikov, Nikolay
2010-01-01
The contemporary state of ocean numerical modelling sets some requirements for the numerical advection schemes used in ocean general circulation models (OGCMs). The most important requirements are conservation, monotonicity and numerical efficiency, including good parallelization properties. Investigation of several advection schemes shows that one of the best schemes satisfying these criteria is the CABARET scheme. A 3D modification of the CABARET scheme was used to develop a new transport module (for temperature and salinity) for the Institute of Numerical Mathematics ocean model (INMOM). Testing of this module on some common benchmarks shows high accuracy in comparison with the second-order advection scheme used in the INMOM. The new module was incorporated into the INMOM, and experiments with the modified model showed a better simulation of oceanic circulation than its previous version.
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block diagram based schemes are proposed in this paper for modeling and simulating fractional-order control systems. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is rather complicated, an integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed through examples; the computational results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
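As background to the fractional operator block described above: a standard way to evaluate a fractional-order derivative numerically is the Grünwald-Letnikov construction, sketched below. This is an illustrative stand-in, not the authors' Simulink implementation; for functions with f(0) = 0 it agrees with the Caputo derivative assumed in the paper.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), built recursively."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(f, alpha, t, h):
    """Order-alpha Grunwald-Letnikov derivative of f at t, step h,
    using the full history back to time 0 (first-order accurate in h)."""
    n = round(t / h)
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h**alpha

# Sanity checks: alpha = 1 reproduces the ordinary derivative; the
# half-derivative of t^2 at t = 1 is Gamma(3)/Gamma(2.5) ~ 1.505.
print(gl_derivative(lambda t: t, 1.0, 1.0, 1e-3))      # ~ 1.0
print(gl_derivative(lambda t: t * t, 0.5, 1.0, 1e-3))  # ~ 1.50
```

The recursive weight update avoids overflowing binomial coefficients and is the usual basis for discrete fractional operator blocks.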
BOT schemes as financial model of hydro power projects
International Nuclear Information System (INIS)
Grausam, A.
1997-01-01
Build-operate-transfer (BOT) schemes are among the latest methods adopted in developing infrastructure projects. This paper outlines project financing through BOT schemes and briefly focuses on the factors particularly relevant to hydro power projects. Hydro power development not only provides one of the best ways to produce electricity, it can also solve problems in other fields, such as navigation in the case of run-of-the-river plants, ground water management and flood control. These additional functions do not make hydro power projects cheaper, but hydro energy is a clean and renewable energy, and the hydro potential worldwide will play a major role in meeting increased demand in the future. 5 figs
Hérivaux, Cécile; Orban, Philippe; Brouyère, Serge
2013-10-15
In Europe, 30% of groundwater bodies are considered to be at risk of not achieving the Water Framework Directive (WFD) 'good status' objective by 2015, and 45% are in doubt of doing so. Diffuse agricultural pollution is one of the main pressures affecting groundwater bodies. To tackle this problem, the WFD requires Member States to design and implement cost-effective programs of measures to achieve the 'good status' objective by 2027 at the latest. Hitherto, action plans have mainly consisted of promoting the adoption of Agri-Environmental Schemes (AES). This raises a number of questions concerning the effectiveness of such schemes for improving groundwater status, and the economic implications of their implementation. We propose a hydro-economic model that combines a hydrogeological model to simulate groundwater quality evolution with agronomic and economic components to assess the expected costs, effectiveness, and benefits of AES implementation. This hydro-economic model can be used to identify cost-effective AES combinations at groundwater-body scale and to show the benefits to be expected from the resulting improvement in groundwater quality. The model is applied here to a rural area encompassing the Hesbaye aquifer, a large chalk aquifer which supplies about 230,000 inhabitants in the city of Liege (Belgium) and is severely contaminated by agricultural nitrates. We show that the time frame within which improvements in the Hesbaye groundwater quality can be expected may be much longer than that required by the WFD. Current WFD programs based on AES may be inappropriate for achieving the 'good status' objective in the most productive agricultural areas, in particular because these schemes are insufficiently attractive. Achieving 'good status' by 2027 would demand a substantial change in the design of AES, involving costs that may not be offset by benefits in the case of chalk aquifers with long renewal times. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modeling and Simulation of Downlink Subcarrier Allocation Schemes in LTE
DEFF Research Database (Denmark)
Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars
2012-01-01
The efficient utilization of the air interface in the LTE standard is achieved through a combination of subcarrier allocation schemes, adaptive modulation and coding, and transmission power allotment. The scheduler in the base station has a major role in achieving the required QoS and the overall...
Analyses of models for promotion schemes and ownership arrangements
DEFF Research Database (Denmark)
Hansen, Lise-Lotte Pade; Schröder, Sascha Thorsten; Münster, Marie
2011-01-01
as increase the national competitiveness. The stationary fuel cell technology is still in a rather early stage of development and faces a long list of challenges and barriers of which some are linked directly to the technology through the need of cost decrease and reliability improvements. Others are linked...... countries should opt to support stationary fuel cells, we find that in Denmark it would be promising to apply the net metering based support scheme for households with an electricity consumption exceeding the electricity production from the fuel cell. In France and Portugal the most promising support scheme...... is price premium when the fuel cell is run as a part of a virtual power plant. From a system perspective, it appears that it is more important which kind of energy system (represented by country) the FC’s are implemented in, rather than which operation strategy is used. In an energy system with lots...
Unconditionally energy stable numerical schemes for phase-field vesicle membrane model
Guillén-González, F.; Tierra, G.
2018-02-01
Numerical schemes to simulate the deformation of vesicle membranes by minimizing the bending energy have been widely studied in recent times due to their connection with many biologically motivated problems. In this work we propose a new unconditionally energy stable numerical scheme for a vesicle membrane model that satisfies exactly the conservation of volume constraint and penalizes the surface area constraint. Moreover, we extend these ideas to present an unconditionally energy stable splitting scheme decoupling the interaction of the vesicle with a surrounding fluid. Finally, the good behavior of the proposed schemes is illustrated through several computational experiments.
A New Key Predistribution Scheme for Multiphase Sensor Networks Using a New Deployment Model
Directory of Open Access Journals (Sweden)
Boqing Zhou
2014-01-01
Full Text Available During the lifecycle of sensor networks, when the existing key predistribution schemes using deployment knowledge are used for pairwise key establishment and authentication between nodes, a new challenge arises: either the resilience against node capture attacks or the global connectivity will significantly decrease over time. In this paper, a new deployment model is developed for multiphase deployment sensor networks, and a new key management scheme is then proposed. Compared with the existing schemes using deployment knowledge, our scheme has better performance in global connectivity and in resilience against node capture attacks throughout the network lifecycle.
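Schemes of this family build on random key predistribution. As background (not the paper's multiphase scheme), the basic Eschenauer-Gligor connectivity calculation, which deployment-knowledge schemes refine, can be sketched as follows; the pool and ring sizes are illustrative.

```python
from math import comb

def share_prob(pool, ring):
    """Probability that two nodes, each holding `ring` distinct keys drawn
    uniformly at random from a pool of `pool` keys, share at least one key
    (basic Eschenauer-Gligor analysis, no deployment knowledge)."""
    return 1.0 - comb(pool - ring, ring) / comb(pool, ring)

# Larger key rings raise local connectivity, at the cost of resilience:
# each captured node then exposes a larger fraction of the key pool.
for ring in (50, 100, 200):
    print(ring, round(share_prob(10_000, ring), 3))
```

With a 10,000-key pool, a 100-key ring already gives roughly a 63% chance that two neighbours can establish a pairwise key directly.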
New Identity-Based Blind Signature and Blind Decryption Scheme in the Standard Model
Phong, Le Trieu; Ogata, Wakaha
We explicitly describe and analyse blind hierarchical identity-based encryption (blind HIBE) schemes, which are natural generalizations of blind IBE schemes [20]. We then use the blind HIBE schemes to construct: (1) an identity-based blind signature scheme secure in the standard model, under the computational Diffie-Hellman (CDH) assumption, and with much shorter signature size and lower communication cost compared to existing proposals; (2) a new mechanism supporting a user to buy digital information over the Internet without revealing what he/she has bought, while protecting the providers from cheating users.
An improved snow scheme for the ECMWF land surface model: Description and offline validation
Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder
2010-01-01
A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...
Eliseev, A. V.; Coumou, D.; Chernokulsky, A. V.; Petoukhov, V.; Petri, S.
2013-01-01
In this study we present a scheme for calculating the characteristics of multi-layer cloudiness and precipitation for Earth system models of intermediate complexity (EMICs). This scheme considers three-layer stratiform cloudiness and single-column convective clouds. It distinguishes between ice and
Enhanced Physics-Based Numerical Schemes for Two Classes of Turbulence Models
Directory of Open Access Journals (Sweden)
Leo G. Rebholz
2009-01-01
Full Text Available We present enhanced physics-based finite element schemes for two families of turbulence models, the NS-α models and the Stolz-Adams approximate deconvolution models. These schemes are delicate extensions of a method created for the Navier-Stokes equations in Rebholz (2007) that achieve high physical fidelity by admitting balances of both energy and helicity that match the true physics. The schemes' development requires carefully chosen discrete curl, discrete Laplacian, and discrete filtering operators, in order to permit the necessary differential operator commutations.
Godunov-type schemes for hydrodynamic and magnetohydrodynamic modeling
International Nuclear Information System (INIS)
Vides-Higueros, Jeaniffer
2014-01-01
The main objective of this thesis concerns the study, design and numerical implementation of finite volume schemes based on the so-called Godunov-type solvers for hyperbolic systems of nonlinear conservation laws, with special attention given to the Euler equations and ideal MHD equations. First, we derive a simple and genuinely two-dimensional Riemann solver for general conservation laws that can be regarded as an actual 2D generalization of the HLL approach, relying heavily on the consistency with the integral formulation and on the proper use of Rankine-Hugoniot relations to yield expressions that are simple enough to be applied in structured and unstructured contexts. Then, a comparison between two methods aiming to numerically maintain the divergence constraint of the magnetic field for the ideal MHD equations is performed, and we show how the 2D Riemann solver can be employed to obtain robust divergence-free simulations. Next, we derive a relaxation scheme that incorporates gravity source terms derived from a potential into the hydrodynamic equations, an important problem in astrophysics. Finally, we review the design of finite volume approximations in curvilinear coordinates, providing a fresher view on an alternative discretization approach. Throughout this thesis, numerous numerical results are shown. (author) [fr
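As a minimal illustration of the Godunov-type/HLL machinery discussed above (in one dimension and for the scalar Burgers equation, rather than the Euler or MHD systems treated in the thesis), a conservative finite volume update with an HLL flux can be sketched as:

```python
def hll_flux(uL, uR, f=lambda u: 0.5 * u * u):
    """HLL numerical flux for a 1D scalar conservation law (Burgers by
    default), with simple Davis-type wave-speed estimates."""
    sL, sR = min(uL, uR), max(uL, uR)
    if sL >= 0.0:
        return f(uL)
    if sR <= 0.0:
        return f(uR)
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

def step(u, dx, dt):
    """One conservative finite-volume update; boundary cells held fixed."""
    n = len(u)
    flux = [hll_flux(u[i], u[i + 1]) for i in range(n - 1)]
    unew = u[:]
    for i in range(1, n - 1):
        unew[i] = u[i] - dt / dx * (flux[i] - flux[i - 1])
    return unew

# Riemann data uL=1, uR=0: Burgers shock travelling at speed 1/2.
u = [1.0] * 50 + [0.0] * 50
for _ in range(100):
    u = step(u, dx=0.01, dt=0.005)  # Courant number 0.5
```

After 100 steps (t = 0.5) the shock, initially at x = 0.5, sits near x = 0.75 and stays sharp; the thesis generalizes the flux construction to two dimensions, not this scalar driver.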
An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model
Directory of Open Access Journals (Sweden)
Guomin Zhou
2017-01-01
Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to sign a message anonymously within a group of signers (also known as a ring). Most such schemes rest on number-theoretic problems; while these problems are still secure at the time of this research, the situation could change with advances in quantum computing. There is a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed, who chooses some shared parameters for other signers to participate in the signing process. This leader-participant model enhances the performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis to construct other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.
SEMPATH Ontology: modeling multidisciplinary treatment schemes utilizing semantics.
Alexandrou, Dimitrios Al; Pardalis, Konstantinos V; Bouras, Thanassis D; Karakitsos, Petros; Mentzas, Gregoris N
2012-03-01
A dramatic increase in the demand for treatment quality has occurred during recent decades. The main challenge to be confronted in increasing treatment quality is the personalization of treatment, since each patient constitutes a unique case. Healthcare provision encloses a complex environment, since healthcare provision organizations are highly multidisciplinary. In this paper, we present a conceptualization of the domain of clinical pathways (CP). The SEMPATH (SEMantic PATHways) ontology comprises three main parts: 1) the CP part; 2) the business and finance part; and 3) the quality assurance part. Our implementation achieves the conceptualization of the multidisciplinary domain of healthcare provision, to be further utilized for the implementation of a Semantic Web Rule Language (SWRL) rules repository. Finally, the SEMPATH ontology is utilized for the definition of a set of SWRL rules for the human papillomavirus (HPV) disease and its treatment scheme. © 2012 IEEE
Soft rotator model and ²⁴⁶Cm low-lying level scheme
Energy Technology Data Exchange (ETDEWEB)
Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)
1997-03-01
A non-axial soft-rotator nuclear model is suggested as a self-consistent approach for the interpretation of level schemes, γ-transition probabilities and neutron interactions with even-even nuclei. (author)
Directory of Open Access Journals (Sweden)
Ku David N
2010-07-01
Full Text Available Abstract Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated either against benchmark theoretical solutions or against experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user, including the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Péclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140
Carroll, Gráinne T; Devereux, Paul D; Ku, David N; McGloughlin, Timothy M; Walsh, Michael T
2010-07-19
The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated either against benchmark theoretical solutions or against experimentally obtained results. An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user, including the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Péclet number of 2,560,000, indicating strongly convection-dominated flow. The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140% and 116% was demonstrated between the experimental
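The scheme sensitivity reported in this record can be reproduced in miniature: for steady 1D convection-diffusion with a boundary layer, a first-order upwind discretisation stays bounded and monotone, while a central (second-order) discretisation oscillates once the cell Péclet number exceeds 2. The sketch below is a generic finite difference illustration, not Fluent's actual Power Law or QUICK implementations.

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def convection_diffusion(n, pe, scheme):
    """Solve pe * dphi/dx = d2phi/dx2 on [0,1], phi(0)=0, phi(1)=1,
    with either 'upwind' or 'central' convection differencing (pe > 0)."""
    h = 1.0 / n
    sub, diag, sup, rhs = [], [], [], []
    for _ in range(n - 1):  # interior nodes 1..n-1
        if scheme == "central":
            sub.append(-1.0 / h**2 - pe / (2 * h))
            diag.append(2.0 / h**2)
            sup.append(-1.0 / h**2 + pe / (2 * h))
        else:  # first-order upwind
            sub.append(-1.0 / h**2 - pe / h)
            diag.append(2.0 / h**2 + pe / h)
            sup.append(-1.0 / h**2)
        rhs.append(0.0)
    rhs[-1] -= sup[-1] * 1.0  # fold in the phi(1) = 1 boundary value
    return [0.0] + solve_tridiag(sub, diag, sup, rhs) + [1.0]

# Cell Peclet number = 40 * 0.1 = 4 > 2: central oscillates, upwind does not.
upwind = convection_diffusion(10, 40.0, "upwind")
central = convection_diffusion(10, 40.0, "central")
```

At the Péclet number of the study (millions), upwind-family schemes damp the boundary layer through numerical diffusion while higher-order schemes keep it sharp, which is exactly why the contour plots diverge.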
Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.
Banks, R F; Baldasano, J M
2016-12-01
Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well for daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperatures closer to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against network urban, suburban, and rural background stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m⁻³) with respect to surface ozone at urban stations, while the YSU scheme performed best with simulated nitrogen dioxide (-6.48 μg m⁻³). The poorest results were with simulated particulate matter, with similar results found for all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
A hybrid convection scheme for use in non-hydrostatic numerical weather prediction models
Directory of Open Access Journals (Sweden)
Volker Kuell
2008-12-01
Full Text Available The correct representation of convection in numerical weather prediction (NWP models is essential for quantitative precipitation forecasts. Due to its small horizontal scale convection usually has to be parameterized, e.g. by mass flux convection schemes. Classical schemes originally developed for use in coarse grid NWP models assume zero net convective mass flux, because the whole circulation of a convective cell is confined to the local grid column and all convective mass fluxes cancel out. However, in contemporary NWP models with grid sizes of a few kilometers this assumption becomes questionable, because here convection is partially resolved on the grid. To overcome this conceptual problem we propose a hybrid mass flux convection scheme (HYMACS in which only the convective updrafts and downdrafts are parameterized. The generation of the larger scale environmental subsidence, which may cover several grid columns, is transferred to the grid scale equations. This means that the convection scheme now has to generate a net convective mass flux exerting a direct dynamical forcing to the grid scale model via pressure gradient forces. The hybrid convection scheme implemented into the COSMO model of Deutscher Wetterdienst (DWD is tested in an idealized simulation of a sea breeze circulation initiating convection in a realistic manner. The results are compared with analogous simulations with the classical Tiedtke and Kain-Fritsch convection schemes.
Transfer Scheme Evaluation Model for a Transportation Hub based on Vectorial Angle Cosine
Directory of Open Access Journals (Sweden)
Li-Ya Yao
2014-07-01
Full Text Available As the most important nodes in a public transport network, transport hubs determine the efficiency of the entire network. In order to put forward effective transfer schemes, a comprehensive evaluation index system for the transfer efficiency of urban transport hubs was built, the evaluation indexes were quantified, and a multi-objective decision evaluation model for hub transfer schemes was established based on the vectorial angle cosine. Qualitative and quantitative analyses of the factors affecting transfer efficiency are conducted, covering passenger satisfaction, transfer coordination, transfer efficiency, smoothness, economy, etc. Thus, a new approach to transfer scheme evaluation is proposed.
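The vectorial angle cosine at the heart of such an evaluation model measures how closely a scheme's quantified index vector aligns with an ideal vector. A toy version, with invented index scores for two hypothetical schemes, might look like:

```python
import math

def cosine(u, v):
    """Cosine of the angle between two index vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical normalized scores for the five index groups named in the
# abstract: satisfaction, coordination, efficiency, smoothness, economy.
ideal = [1.0, 1.0, 1.0, 1.0, 1.0]
schemes = {
    "A": [0.9, 0.8, 0.7, 0.9, 0.6],
    "B": [0.5, 0.9, 0.9, 0.4, 0.8],
}
ranked = sorted(schemes, key=lambda k: cosine(schemes[k], ideal), reverse=True)
print(ranked)  # scheme "A" aligns more closely with the ideal here
```

Because the cosine ignores vector length, a scheme with uniformly balanced scores can outrank one with a higher total but a lopsided profile, which is the point of an angle-based criterion.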
Energy Technology Data Exchange (ETDEWEB)
Silva, Filipe da, E-mail: tanatos@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Pinto, Martin Campos, E-mail: campos@ann.jussieu.fr [CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Després, Bruno, E-mail: despres@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Heuraux, Stéphane, E-mail: stephane.heuraux@univ-lorraine.fr [Institut Jean Lamour, UMR 7198, CNRS – University Lorraine, Vandoeuvre (France)
2015-08-15
This work analyzes the stability of the Yee scheme for the non-stationary Maxwell equations coupled with a linear current model with density fluctuations. We show that the usual procedure may yield an unstable scheme for physical situations that correspond to strongly magnetized plasmas in X-mode (TE) polarization. We propose to use a first-order clustered discretization of the vectorial product that restores a stable coupling. We validate the schemes on test cases representative of direct numerical simulations of the X-mode in a magnetic fusion plasma, including turbulence.
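For reference, the uncoupled Yee scheme whose coupling stability is at issue here is a staggered leapfrog update. A 1D vacuum version (normalized units c = 1, with the current-model coupling terms analyzed in the paper omitted) is:

```python
import math

# Minimal 1D Yee/FDTD update: Ey on integer grid points, Bz staggered on
# half-integer points, leapfrogged in time. No current coupling.
n, steps = 200, 150
dx = 1.0
dt = 0.5 * dx  # Courant number 0.5 < 1: the uncoupled scheme is stable
Ey = [math.exp(-(((i - 50) * dx) / 10.0) ** 2) for i in range(n)]  # Gaussian pulse
Bz = [0.0] * n

for _ in range(steps):
    for i in range(n - 1):          # dBz/dt = -dEy/dx at half-points
        Bz[i] -= dt / dx * (Ey[i + 1] - Ey[i])
    for i in range(1, n):           # dEy/dt = -dBz/dx at integer points
        Ey[i] -= dt / dx * (Bz[i] - Bz[i - 1])
```

The initial pulse splits into two half-amplitude waves travelling at speed 1; the field stays bounded, which is the baseline behaviour the coupled current terms can destroy in the strongly magnetized X-mode case.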
Modeling stable orographic precipitation at small scales. The impact of the autoconversion scheme
Energy Technology Data Exchange (ETDEWEB)
Zaengl, Guenther; Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Wobrock, Wolfram [Clermont Univ., Univ. Blaise Pascal, Lab. de Meteorologie Physique, Clermont-Ferrand (France); CNRS, INSU, UMR, LaMP, Aubiere (France)
2010-10-15
This study presents idealized numerical simulations of moist airflow over a narrow isolated mountain in order to investigate the impact of the autoconversion scheme on simulated precipitation. The default setup generates an isolated water cloud over the mountain, implying that autoconversion of cloud water into rain is the only process capable of initiating precipitation. For comparison, a set of sensitivity experiments considers the classical seeder-feeder configuration, which means that ambient precipitation generated by large-scale lifting is intensified within the orographic cloud. Most simulations have been performed with the nonhydrostatic COSMO model developed at the German Weather Service (DWD), comparing three different autoconversion schemes of varying sophistication. For reference, a subset of experiments has also been performed with a spectral (bin) microphysics model. While precipitation enhancement via the seeder-feeder mechanism turns out to be relatively insensitive against the autoconversion scheme because accretion is the leading process in this case, simulated precipitation amounts can vary by 1-2 orders of magnitude for purely orographic precipitation. By comparison to the reference experiments conducted with the bin model, the Seifert-Beheng autoconversion scheme (which is the default in the COSMO model) and the Berry-Reinhardt scheme are found to represent the nonlinear behaviour of orographic precipitation reasonably well, whereas the linear approach of the Kessler scheme appears to be less adequate. (orig.)
Post-processing scheme for modelling the lithospheric magnetic field
Directory of Open Access Journals (Sweden)
V. Lesur
2013-03-01
Full Text Available We investigated how the noise in satellite magnetic data affects magnetic lithospheric field models derived from these data in the special case where this noise is correlated along satellite orbit tracks. For this we describe the satellite data noise as a perturbation magnetic field scaled independently for each orbit, where the scaling factor is a random variable, normally distributed with zero mean. Under this assumption, we have been able to derive a model for errors in lithospheric models generated by the correlated satellite data noise. Unless the perturbation field is known, estimating the noise in the lithospheric field model is a non-linear inverse problem. We therefore proposed an iterative post-processing technique to estimate both the lithospheric field model and its associated noise model. The technique has been successfully applied to derive a lithospheric field model from CHAMP satellite data up to spherical harmonic degree 120. The model is in agreement with other existing models. The technique can, in principle, be extended to all sorts of potential field data with "along-track" correlated errors.
Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd
2012-09-01
The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports for the years 1972-2010. The claims comprise all claim amounts from the seven benefits offered by SOCSO: Invalidity Pension, Invalidity Grant, Survivors Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Predictions of future Invalidity Pension Scheme claims will be made using univariate forecasting models to predict future claims among the workforce in Malaysia.
Adaptive multiresolution WENO schemes for multi-species kinematic flow models
International Nuclear Information System (INIS)
Buerger, Raimund; Kozakevicius, Alice
2007-01-01
Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high resolution schemes. A fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic and sedimentation models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme.
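The core idea of the sparse point representation (keep a coarse grid, and retain fine-grid points only where an interpolation detail is significant) can be sketched in a few lines. This is a one-level, illustrative Harten-style thresholding, not the authors' WENO-MRS implementation; the tolerance and the tanh test profile are arbitrary choices.

```python
import numpy as np

def sparse_point_flags(u, tol=1e-3):
    """One level of Harten-style multiresolution thresholding: keep all
    even (coarse) points, and keep an odd point only if its detail, the
    error of linear interpolation from its even neighbours, exceeds tol.
    The surviving points form a sparse point representation (SPR)."""
    flags = np.zeros(len(u), dtype=bool)
    flags[::2] = True                     # coarse grid is always kept
    for i in range(1, len(u) - 1, 2):
        detail = abs(u[i] - 0.5 * (u[i - 1] + u[i + 1]))
        flags[i] = detail > tol           # keep only significant details
    return flags

# A shock-like profile: details are significant only near the jump.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)
kept = sparse_point_flags(u)
```

Away from the shock the details are negligible, so most fine-grid points are dropped and effort concentrates near the zone of strong variation, as in the abstract.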
A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model
Directory of Open Access Journals (Sweden)
Lili Zhang
2014-01-01
Full Text Available A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of the original signer without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number-theoretic problems, such as the discrete logarithm, and on bilinear pairings. Unfortunately, these underlying number-theoretic problems will be solvable in the post-quantum era. Lattice-based cryptography is enjoying great interest these days, due to implementation simplicity and provable security reductions. Moreover, lattice-based cryptography is believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven to be strongly unforgeable under the standard hardness assumptions of the short integer solution problem (SIS) and the inhomogeneous small integer solution problem (ISIS). Furthermore, the secret key size and the signature length of our scheme are invariant and much shorter than those of previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.
A Scratchpad Memory Allocation Scheme for Dataflow Models
2008-08-25
perform via static analysis of C/C++. We use the heterochronous dataflow (HDF) model of computation [16, 39] in Ptolemy II [11] as a means to specify the...buffer data) as the key memory requirements [9]. 4.1 Structure of an HDF Model We use Ptolemy II’s graphical interface and the HDF domain to specify...algorithm. The allocation algorithm was implemented in Ptolemy II [11], a Java-based framework for studying modeling, simulation and design of concurrent
A seawater desalination scheme for global hydrological models
Hanasaki, Naota; Yoshikawa, Sayaka; Kakinuma, Kaoru; Kanae, Shinjiro
2016-10-01
Seawater desalination is a practical technology for providing fresh water to coastal arid regions. Indeed, the use of desalination is rapidly increasing due to growing water demand in these areas and decreases in production costs due to technological advances. In this study, we developed a model to estimate the areas where seawater desalination is likely to be used as a major water source and the likely volume of production. The model was designed to be incorporated into global hydrological models (GHMs) that explicitly include human water usage. The model requires spatially detailed information on climate, income levels, and industrial and municipal water use, which represent standard input/output data in GHMs. The model was applied to a specific historical year (2005) and showed fairly good reproduction of the present geographical distribution and national production of desalinated water in the world. The model was applied globally to two periods in the future (2011-2040 and 2041-2070) under three distinct socioeconomic conditions, i.e., SSP (shared socioeconomic pathway) 1, SSP2, and SSP3. The results indicate that the usage of seawater desalination will have expanded considerably in geographical extent, and that production will have increased by 1.4-2.1-fold in 2011-2040 compared to the present (from 2.8 × 10⁹ m³ yr⁻¹ in 2005 to 4.0-6.0 × 10⁹ m³ yr⁻¹), and 6.7-17.3-fold in 2041-2070 (from 18.7 to 48.6 × 10⁹ m³ yr⁻¹). The estimated global costs for production for each period are USD 1.1-10.6 × 10⁹ (0.002-0.019 % of the total global GDP), USD 1.6-22.8 × 10⁹ (0.001-0.020 %), and USD 7.5-183.9 × 10⁹ (0.002-0.100 %), respectively. The large spreads in these projections are primarily attributable to variations within the socioeconomic scenarios.
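As a rough illustration of the kind of screening rule such a model might apply to each grid cell, consider the sketch below. All four thresholds are invented for illustration; the paper's actual criteria are calibrated against observed desalination production, not these numbers.

```python
def likely_desalination_cell(aridity_index, distance_to_coast_km,
                             gdp_per_capita_usd, municipal_demand_m3):
    """Toy screening rule in the spirit of the paper's model: a grid cell
    is a desalination candidate if it is arid, near the coast, wealthy
    enough to afford production, and actually demands water.
    The thresholds below are illustrative assumptions, not the published
    criteria."""
    is_arid = aridity_index < 0.2
    is_coastal = distance_to_coast_km <= 100.0
    can_afford = gdp_per_capita_usd >= 14000.0
    has_demand = municipal_demand_m3 > 0.0
    return is_arid and is_coastal and can_afford and has_demand
```

Because the inputs (climate, income, water use) are standard GHM fields, a rule of this shape can be evaluated cell by cell inside an existing global hydrological model, which is the design point the abstract emphasizes.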
Extension of the time-average model to Candu refueling schemes involving reshuffling
International Nuclear Information System (INIS)
Rouben, Benjamin; Nichita, Eleodor
2008-01-01
Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows the treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model that allows for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)
A Reconfiguration Control Scheme for a Quadrotor Helicopter via Combined Multiple Models
Directory of Open Access Journals (Sweden)
Fuyang Chen
2014-08-01
Full Text Available In this paper, an optimal reconfiguration control scheme is proposed for a quadrotor helicopter with actuator faults via adaptive control and combined multiple models. The combined model set contains several fixed models, an adaptive model and a reinitialized adaptive model. The fixed models and the adaptive model can describe the failed system under different fault conditions. Moreover, the proposed reinitialized adaptive model is reset to the model closest to the current system and can improve the speed of convergence effectively. In addition, the reference model is designed in consideration of an optimal control performance index and the principle of minimum cost to achieve perfect tracking performance. Finally, simulation results demonstrate the effectiveness of the proposed reconfiguration control scheme for faulty cases.
A novel interacting multiple model based network intrusion detection scheme
Xin, Ruichi; Venkatasubramanian, Vijay; Leung, Henry
2006-04-01
In today's information age, information and network security are of primary importance to any organization. Network intrusion is a serious threat to the security of computers and data networks. In internet protocol (IP) based networks, intrusions originate in different kinds of packets/messages contained in the open system interconnection (OSI) layer 3 or higher layers. Network intrusion detection and prevention systems observe the layer 3 packets (or layer 4 to 7 messages) to screen for intrusions and security threats. Signature-based methods use a pre-existing database that documents intrusion patterns as perceived in the layer 3 to 7 protocol traffic and match the incoming traffic against potential intrusion attacks. Alternatively, network traffic data can be modeled and any large anomaly from the established traffic pattern can be detected as a network intrusion. The latter method, also known as anomaly-based detection, is gaining popularity for its versatility in learning new patterns and discovering new attacks. It is apparent that for reliable performance, an accurate model of the network data needs to be established. In this paper, we illustrate using collected data that network traffic is seldom stationary. We propose the use of multiple models to accurately represent the traffic data. The improvement in reliability of the proposed model is verified by measuring the detection and false alarm rates on several datasets.
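The multiple-model idea can be illustrated with a toy example: fit one simple statistical model per traffic regime and declare an anomaly only when no model explains a sample. This Gaussian sketch with invented day/night packet-rate data is far simpler than the interacting-multiple-model filter the paper proposes, but it shows why one model of non-stationary traffic is not enough.

```python
import numpy as np

def fit_models(segments):
    """Fit one Gaussian (mean, std) per traffic regime segment."""
    return [(float(np.mean(s)), float(np.std(s)) + 1e-9) for s in segments]

def is_anomaly(x, models, z_thresh=4.0):
    """Flag x as intrusion-like only if it is far (in z-score) from
    *every* fitted regime model: with multiple models, non-stationary
    traffic is normal as long as some regime explains the sample."""
    z = min(abs(x - m) / s for m, s in models)
    return z > z_thresh

# Invented packet-rate data with two regimes (day/night).
rng = np.random.default_rng(0)
day = rng.normal(100.0, 10.0, 500)
night = rng.normal(20.0, 5.0, 500)
models = fit_models([day, night])
```

A single Gaussian fitted to the pooled data would flag ordinary nighttime traffic as anomalous; the two-model detector accepts both regimes while still catching a genuinely extreme rate.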
Combining modelling tools to evaluate a goose management scheme
Baveco, Hans; Bergjord, Anne Kari; Bjerke, Jarle W.; Chudzińska, Magda E.; Pellissier, Loïc; Simonsen, Caroline E.; Madsen, Jesper; Tombre, Ingunn M.; Nolet, Bart A.
2017-01-01
Many goose species feed on agricultural land, and with growing goose numbers, conflicts with agriculture are increasing. One possible solution is to designate refuge areas where farmers are paid to leave geese undisturbed. Here, we present a generic modelling tool that can be used to designate the
Combining modelling tools to evaluate a goose management scheme.
Baveco, J.M.; Bergjord, A.K.; Bjerke, J.W.; Chudzińska, M.E.; Pellissier, L.; Simonsen, C.E.; Madsen, J.; Tombre, Ingunn M.; Nolet, B.A.
2017-01-01
Many goose species feed on agricultural land, and with growing goose numbers, conflicts with agriculture are increasing. One possible solution is to designate refuge areas where farmers are paid to leave geese undisturbed. Here, we present a generic modelling tool that can be used to designate the
Ensemble-based data assimilation schemes for atmospheric chemistry models
Barbu, A.L.
2010-01-01
The atmosphere is a complex system which includes physical, chemical and biological processes. Many of these processes affecting the atmosphere are subject to various interactions and can be highly nonlinear. This complexity makes it necessary to apply computer models in order to understand the
Multi-model ensemble schemes for predicting northeast monsoon ...
Indian Academy of Sciences (India)
Northeast monsoon; multi-model ensemble; rainfall; prediction; principal component regression; singular value decomposition. J. Earth Syst. Sci. 120, No. 5, October 2011, pp. 795-805. © Indian Academy of Sciences.
A seawater desalination scheme for global hydrological models
Directory of Open Access Journals (Sweden)
N. Hanasaki
2016-10-01
Full Text Available Seawater desalination is a practical technology for providing fresh water to coastal arid regions. Indeed, the use of desalination is rapidly increasing due to growing water demand in these areas and decreases in production costs due to technological advances. In this study, we developed a model to estimate the areas where seawater desalination is likely to be used as a major water source and the likely volume of production. The model was designed to be incorporated into global hydrological models (GHMs) that explicitly include human water usage. The model requires spatially detailed information on climate, income levels, and industrial and municipal water use, which represent standard input/output data in GHMs. The model was applied to a specific historical year (2005) and showed fairly good reproduction of the present geographical distribution and national production of desalinated water in the world. The model was applied globally to two periods in the future (2011–2040 and 2041–2070) under three distinct socioeconomic conditions, i.e., SSP (shared socioeconomic pathway) 1, SSP2, and SSP3. The results indicate that the usage of seawater desalination will have expanded considerably in geographical extent, and that production will have increased by 1.4–2.1-fold in 2011–2040 compared to the present (from 2.8 × 10⁹ m³ yr⁻¹ in 2005 to 4.0–6.0 × 10⁹ m³ yr⁻¹), and 6.7–17.3-fold in 2041–2070 (from 18.7 to 48.6 × 10⁹ m³ yr⁻¹). The estimated global costs for production for each period are USD 1.1–10.6 × 10⁹ (0.002–0.019 % of the total global GDP), USD 1.6–22.8 × 10⁹ (0.001–0.020 %), and USD 7.5–183.9 × 10⁹ (0.002–0.100 %), respectively. The large spreads in these projections are primarily attributable to variations within the socioeconomic scenarios.
Central upwind scheme for a compressible two-phase flow model.
Ahmed, Munshoor; Saleem, M Rehan; Zia, Saqib; Qamar, Shamsul
2015-01-01
In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to the KFVS scheme.
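As a scalar analogue of the numerical method, a minimal first-order central upwind (Kurganov-type) step for Burgers' equation shows how the flux is assembled from one-sided local speeds, with no Riemann solver. This is only a sketch of the flux construction; the paper's five-equation two-phase system is of course far richer.

```python
import numpy as np

def central_upwind_step(u, dx, dt):
    """One forward-Euler step of the first-order central-upwind scheme
    for Burgers' equation u_t + (u^2/2)_x = 0.  Only the one-sided local
    wave speeds a+ >= 0 and a- <= 0 enter the interface flux."""
    f = 0.5 * u ** 2
    ul, ur = u[:-1], u[1:]                                # interface states
    ap = np.maximum.reduce([ul, ur, np.zeros_like(ul)])   # a+ (f'(u) = u)
    am = np.minimum.reduce([ul, ur, np.zeros_like(ul)])   # a-
    denom = ap - am
    denom[denom == 0.0] = 1.0        # both speeds zero: flux is 0 anyway
    flux = (ap * f[:-1] - am * f[1:]) / denom + ap * am / denom * (ur - ul)
    un = u.copy()
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return un

# Riemann data: a right-moving shock starting at x = 0 with speed 1/2.
x = np.linspace(-1.0, 1.0, 201)
u = np.where(x < 0.0, 1.0, 0.0)
dx, dt = x[1] - x[0], 0.004          # CFL number 0.4
for _ in range(100):
    u = central_upwind_step(u, dx, dt)
```

After t = 0.4 the shock has moved to x = 0.2 and is captured without oscillations, illustrating the non-oscillatory, Riemann-solver-free property quoted in the abstract.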
Central upwind scheme for a compressible two-phase flow model.
Directory of Open Access Journals (Sweden)
Munshoor Ahmed
Full Text Available In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to the KFVS scheme.
Internal validation of risk models in clustered data: a comparison of bootstrap schemes
Bouwmeester, W.; Moons, K.G.M.; Kappen, T.H.; van Klei, W.A.; Twisk, J.W.R.; Eijkemans, M.J.C.; Vergouwe, Y.
2013-01-01
Internal validity of a risk model can be studied efficiently with bootstrapping to assess possible optimism in model performance. Assumptions of the regular bootstrap are violated when the development data are clustered. We compared alternative resampling schemes in clustered data for the estimation
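A minimal sketch of one of the resampling schemes compared, the cluster bootstrap, resamples whole clusters with replacement rather than individual records, which respects the correlation structure the abstract warns about. The data and cluster names are invented.

```python
import random

def cluster_bootstrap(data_by_cluster, rng):
    """One cluster-bootstrap replicate: sample clusters (e.g. hospitals)
    with replacement and keep each sampled cluster's records intact.
    An ordinary record-level bootstrap would break the within-cluster
    correlation and violate the i.i.d. assumption."""
    ids = list(data_by_cluster)
    sampled = [rng.choice(ids) for _ in ids]
    # A cluster drawn twice contributes all of its records twice.
    return [rec for cid in sampled for rec in data_by_cluster[cid]]

# Invented example data: three clusters of unequal size.
clusters = {"hospA": [1, 2, 3], "hospB": [4, 5], "hospC": [6]}
replicate = cluster_bootstrap(clusters, random.Random(42))
```

The optimism estimate is then obtained as usual: refit the risk model on each replicate and evaluate it on the original data, averaging the performance drop over replicates.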
The two-dimensional Godunov scheme and what it means for macroscopic pedestrian flow models
Van Wageningen-Kessels, F.L.M.; Daamen, W.; Hoogendoorn, S.P.
2015-01-01
An efficient simulation method for two-dimensional continuum pedestrian flow models is introduced. It is a two-dimensional and multi-class extension of the Godunov scheme for one-dimensional road traffic flow models introduced in the mid-1990s. The method can be applied to continuum pedestrian
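The one-dimensional building block being extended can be sketched directly: a first-order Godunov scheme for the LWR traffic model with a normalized Greenshields flux. This is the classical single-class 1-D scheme, not the paper's two-dimensional multi-class extension.

```python
import numpy as np

def godunov_flux(rl, rr):
    """Exact Godunov interface flux for the concave Greenshields
    fundamental diagram q(rho) = rho (1 - rho), which peaks at rho = 1/2."""
    q = lambda r: r * (1.0 - r)
    if rl <= rr:
        return min(q(rl), q(rr))        # minimum over [rl, rr]
    if rr <= 0.5 <= rl:
        return q(0.5)                   # sonic point inside [rr, rl]
    return max(q(rl), q(rr))            # maximum over [rr, rl]

def lwr_godunov(rho, dx, dt, steps):
    """First-order Godunov scheme for rho_t + (rho (1 - rho))_x = 0."""
    for _ in range(steps):
        fl = [godunov_flux(rho[j], rho[j + 1]) for j in range(len(rho) - 1)]
        rho[1:-1] -= dt / dx * np.diff(fl)
    return rho

# 'Traffic light turns green': jammed (rho = 1) upstream, empty downstream.
rho = np.where(np.linspace(-1.0, 1.0, 201) < 0.0, 1.0, 0.0)
rho = lwr_godunov(rho, dx=0.01, dt=0.004, steps=100)
```

The jam dissolves into a rarefaction fan, and the density at the light settles at the capacity value 1/2, which is the exact entropy solution the Godunov scheme is built to capture.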
A low-bias simulation scheme for the SABR stochastic volatility model
B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide
2012-01-01
The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)
A dynamic neutral fluid model for the PIC scheme
Wu, Alan; Lieberman, Michael; Verboncoeur, John
2010-11-01
Fluid diffusion is an important aspect of plasma simulation. A new dynamic model is implemented using the continuity and boundary equations in OOPD1, an object-oriented one-dimensional particle-in-cell code developed at UC Berkeley. The model is described and compared with analytical methods given in [1]. A boundary absorption parameter can be adjusted from ideal absorption to ideal reflection. Simulations exhibit good agreement with analytic time-dependent solutions for the two ideal cases, as well as steady-state solutions for mixed cases. As the next step, fluid sources and sinks due to particle-particle or particle-fluid collisions within the simulation volume and to surface reactions resulting in emission or absorption of fluid species will be implemented. The resulting dynamic interaction between particle and fluid species will be an improvement over the static fluid in the existing code. As the final step in the development, diffusion for multiple fluid species will be implemented. [1] M.A. Lieberman and A.J. Lichtenberg, Principles of Plasma Discharges and Materials Processing, 2nd ed., Wiley, 2005.
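A hedged sketch of the underlying idea: a 1-D fluid diffusion step with a wall parameter alpha that blends ideal absorption (alpha = 1, density forced to zero at the wall) and ideal reflection (alpha = 0, zero flux). It mimics the adjustable boundary-absorption parameter described in the abstract but is not the OOPD1 implementation.

```python
import numpy as np

def diffuse(n, D, dx, dt, alpha, steps):
    """Explicit FTCS steps for the continuity equation n_t = D n_xx with
    an adjustable wall: alpha = 1 gives ideal absorption (n = 0 at the
    wall), alpha = 0 ideal reflection (zero-flux ghost copy).
    Requires D dt / dx^2 <= 1/2 for stability."""
    for _ in range(steps):
        n[1:-1] += D * dt / dx**2 * (n[2:] - 2.0 * n[1:-1] + n[:-2])
        # Wall update: linear blend of absorbing and reflecting behaviour.
        n[0] = (1.0 - alpha) * n[1]
        n[-1] = (1.0 - alpha) * n[-2]
    return n

# Narrow density pulse diffusing for t = 0.2 in a domain [-1, 1].
n0 = np.exp(-np.linspace(-1.0, 1.0, 101) ** 2 / 0.01)
absorbed = diffuse(n0.copy(), D=1.0, dx=0.02, dt=1e-4, alpha=1.0, steps=2000)
reflected = diffuse(n0.copy(), D=1.0, dx=0.02, dt=1e-4, alpha=0.0, steps=2000)
```

With reflecting walls the total fluid content is conserved, while with absorbing walls it decays, reproducing the two ideal limits the abstract validates against analytic solutions.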
Algebraic K-theory of generalized schemes
DEFF Research Database (Denmark)
Anevski, Stella Victoria Desiree
Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry and ge...
Algebraic K-theory of generalized schemes
DEFF Research Database (Denmark)
Anevski, Stella Victoria Desiree
Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry...
Modelling of Substrate Noise and Mitigation Schemes for UWB Systems
DEFF Research Database (Denmark)
Shen, Ming; Mikkelsen, Jan H.; Larsen, Torben
2012-01-01
In mixed-mode designs, digital switching noise is an ever-present problem that needs to be taken into consideration. This is of particular importance when low-cost implementation technologies, e.g. lightly doped substrates, are aimed for. For traditional narrow-band designs much of the issue can be mitigated using tuned elements in the signal paths. However, for UWB designs this is not a viable option and other means are therefore required. Moreover, owing to the ultra-wideband nature and low power spectral density of the signal, UWB mixed-signal integrated circuits are more sensitive to substrate noise compared with narrow-band circuits. This chapter presents a study on the modeling and mitigation of substrate noise in mixed-signal integrated circuits (ICs), focusing on UWB system/circuit designs. Experimental impact evaluation of substrate noise on UWB circuits is presented. It shows how a wide-band circuit can...
A gradient stable scheme for a phase field model for the moving contact line problem
Gao, Min
2012-02-01
In this paper, an efficient numerical scheme is designed for a phase field model for the moving contact line problem, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,4]. The nonlinear version of the scheme is semi-implicit in time and is based on a convex splitting of the Cahn-Hilliard free energy (including the boundary energy) together with a projection method for the Navier-Stokes equations. We show, under certain conditions, that the scheme has the total-energy-decaying property and is unconditionally stable. The linearized scheme is easy to implement and introduces only a mild CFL time constraint. Numerical tests are carried out to verify the accuracy and stability of the scheme. The behavior of the solution near the contact line is examined. It is verified that, when the interface intersects the boundary, the consistent splitting scheme [21,22] for the Navier-Stokes equations has better accuracy for pressure. © 2011 Elsevier Inc.
A method of LED free-form tilted lens rapid modeling based on scheme language
Dai, Yidan
2017-10-01
According to nonimaging optical principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a method of rapid modeling based on the Scheme language was proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and Scheme-language programming were used to generate the lens model, respectively. With the help of optical simulation software, a light source of 1 mm × 1 mm × 1 mm was used in the experiment, the lateral migration distance of the illumination area was 0.5 m, and a total of one million rays were traced. We could acquire the simulated results of both models. The simulated output shows that the Scheme language can prevent the model deformation problems caused by model transfer, the degree of illumination uniformity reaches 82%, and the offset angle is 26°. Also, the efficiency of the modeling process is greatly increased by using the Scheme language.
Evaluation of nourishment schemes based on long-term morphological modeling
DEFF Research Database (Denmark)
Grunnet, Nicholas; Kristensen, Sten Esbjørn; Drønen, Nils
2012-01-01
A recently developed long-term morphological modeling concept is applied to evaluate the impact of nourishment schemes. The concept combines detailed two-dimensional morphological models and simple one-line models for the coastline evolution and is particularly well suited for long-term simulations ... site. This study strongly indicates that the hybrid model may be used as an engineering tool to predict shoreline response following the implementation of a nourishment project.
Difference schemes for numerical solutions of lagging models of heat conduction
Cabrera Sánchez, Jesús; Castro López, María Ángeles; Rodríguez Mateo, Francisco; Martín Alustiza, José Antonio
2013-01-01
Non-Fourier models of heat conduction are increasingly being considered in the modeling of microscale heat transfer in engineering and biomedical heat transfer problems. The dual-phase-lagging model, incorporating time lags in the heat flux and the temperature gradient, and some of its particular cases and approximations, result in heat conduction modeling equations in the form of delayed or hyperbolic partial differential equations. In this work, the application of difference schemes for the...
SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES
Directory of Open Access Journals (Sweden)
S.ZIBAEI
2016-12-01
Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors of the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
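The flavour of an NSFD discretization can be shown on the classical (integer-order) Lotka-Volterra system: nonlinear terms are approximated non-locally (Mickens' rules) so that positivity holds for any step size. This is a simplified analogue for illustration, not the paper's fractional-order scheme.

```python
def nsfd_lotka_volterra(x0, y0, a, b, c, d, h, steps):
    """Positivity-preserving NSFD update for the classical system
        x' = x (a - b y),   y' = y (d x - c).
    The loss terms are taken implicitly, so each update is a ratio of
    positive quantities and x, y stay positive for any step size h."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        x = (x + h * a * x) / (1.0 + h * b * y)   # predation loss implicit
        y = (y + h * d * x * y) / (1.0 + h * c)   # death loss implicit, new x
        traj.append((x, y))
    return traj

# A deliberately large step: forward Euler could go negative here.
traj = nsfd_lotka_volterra(2.0, 1.0, a=1.0, b=1.0, c=1.0, d=1.0,
                           h=0.5, steps=200)
```

With h = 0.5 the populations oscillate around the coexistence point (1, 1) and never turn negative, which is precisely the qualitative robustness NSFD schemes are designed for.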
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
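A minimal Peaceman-Rachford ADI step for the 2-D heat equation illustrates the dimensional-splitting mechanics: each half-step is implicit in one direction only, so it reduces to a set of tridiagonal solves. The osmosis drift terms of the actual model are omitted in this sketch.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_heat_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on the unit
    square with u = 0 on the boundary; r = dt/dx^2.  Each half-step is
    implicit in one direction, explicit in the other."""
    n = u.shape[0]
    a = np.full(n - 2, -r / 2)
    b = np.full(n - 2, 1 + r)
    c = np.full(n - 2, -r / 2)
    half = u.copy()
    for j in range(1, n - 1):          # implicit in x, explicit in y
        rhs = u[1:-1, j] + r / 2 * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        half[1:-1, j] = thomas(a, b, c, rhs)
    out = half.copy()
    for i in range(1, n - 1):          # implicit in y, explicit in x
        rhs = half[i, 1:-1] + r / 2 * (half[i + 1, 1:-1] - 2 * half[i, 1:-1] + half[i - 1, 1:-1])
        out[i, 1:-1] = thomas(a, b, c, rhs)
    return out

# Decay of the first Laplacian eigenmode sin(pi x) sin(pi y).
n = 41
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)
r = 0.4                               # dt = 2.5e-4 with dx = 0.025
for _ in range(100):
    u = adi_heat_step(u, r)
```

Since each tridiagonal factorisation costs O(n) per line, a full 2-D implicit step costs O(n²) instead of solving one large sparse system, which is the performance point made in the abstract.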
White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
Calibration of highly-parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred-parameter-values scheme or of preferred relations between parameters, such as the preferred-equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly-parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited
International Nuclear Information System (INIS)
Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P
2017-01-01
An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using a scheme that involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example the ground state. Some low-lying level energies of the model are calculated for several sets of parameters, of which one set of results is compared to that obtained from Braak's recently proposed exact solution. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach a maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as the quantum Rabi model. (paper)
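For context, the brute-force alternative that a progressive scheme avoids is full diagonalization of the Rabi Hamiltonian in a truncated photon basis; a compact sketch serving only as a reference computation:

```python
import numpy as np

def rabi_ground_energy(omega, omega0, g, nmax):
    """Ground-state energy of the quantum Rabi Hamiltonian
        H = omega a^dag a + (omega0/2) sigma_z + g sigma_x (a + a^dag)
    by dense diagonalization in a photon basis truncated at nmax levels.
    This is the standard reference computation, not the paper's
    progressive polynomial-equation scheme."""
    n = np.arange(nmax)
    a = np.diag(np.sqrt(n[1:]), k=1)          # photon annihilation operator
    I2, If = np.eye(2), np.eye(nmax)
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = (omega * np.kron(I2, a.T @ a)
         + 0.5 * omega0 * np.kron(sz, If)
         + g * np.kron(sx, a + a.T))
    return np.linalg.eigvalsh(H)[0]
```

For g = 0 the exact ground energy is -omega0/2, and increasing nmax until the result stops changing provides the usual truncation-convergence check against which a progressive scheme can be validated.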
Directory of Open Access Journals (Sweden)
B. Ervens
2012-07-01
Full Text Available Ice nucleation in clouds is often observed at temperatures >235 K, pointing to heterogeneous freezing as a predominant mechanism. Many models deterministically predict the number concentration of ice particles as a function of temperature and/or supersaturation. Several laboratory experiments, at constant temperature and/or supersaturation, report heterogeneous freezing as a stochastic, time-dependent process that follows classical nucleation theory; this might appear to contradict deterministic models that predict singular freezing behavior.
We explore the extent to which the choice of nucleation scheme (deterministic/stochastic, single/multiple contact angles θ) affects the prediction of the fraction of frozen ice nuclei (IN) and cloud evolution for a predetermined maximum IN concentration. A box model with constant temperature and supersaturation is used to mimic published laboratory experiments of immersion freezing of monodisperse (800 nm) kaolinite particles (~243 K) and to assess the fitness of different nucleation schemes. Sensitivity studies show that agreement of all five schemes is restricted to the narrow parameter range (time, temperature, IN diameter) in the original laboratory studies, and that model results diverge for a wider range of conditions.
The schemes are implemented in an adiabatic parcel model that includes feedbacks of the formation and growth of drops and ice particles on supersaturation during ascent. Model results for the monodisperse IN population (800 nm show that these feedbacks limit ice nucleation events, often leading to smaller differences in number concentration of ice particles and ice water content (IWC between stochastic and deterministic approaches than expected from the box model studies. However, because the different parameterizations of θ distributions and time-dependencies are highly sensitive to IN size, simulations using polydisperse IN result in great differences in predicted ice number
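As a minimal illustration of the contrast explored above, the sketch below compares a time-dependent (stochastic, classical-nucleation-theory-like) frozen fraction with a singular (deterministic) one. The rate J, surface site density ns, and particle geometry are illustrative assumptions, not values from the study.

```python
import math

def frozen_fraction_stochastic(J, area, t):
    """Time-dependent (CNT-like) frozen fraction: f = 1 - exp(-J * A * t).
    J: nucleation rate per unit surface area [1/(m^2 s)]; area: IN surface [m^2]."""
    return 1.0 - math.exp(-J * area * t)

def frozen_fraction_singular(ns, area):
    """Singular (deterministic) frozen fraction at fixed temperature:
    f = 1 - exp(-ns(T) * A); no time dependence."""
    return 1.0 - math.exp(-ns * area)

# Illustrative (not measured) values for an 800 nm particle at ~243 K
area = math.pi * (800e-9) ** 2   # assumed spherical surface area, m^2
J = 1e11                         # assumed nucleation rate coefficient
f10 = frozen_fraction_stochastic(J, area, 10.0)
f100 = frozen_fraction_stochastic(J, area, 100.0)
# the stochastic fraction grows with residence time; the singular one does not
```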
Change in Farm Production Structure Within Different CAP Schemes – an LP Modelling Approach
Directory of Open Access Journals (Sweden)
Jaka ŽGAJNAR
2008-01-01
After accession to the European Union in 2004, direct payments became a very important income source also for farmers in Slovenia. But the agricultural policy in place at accession changed significantly in 2007 as a result of CAP reform implementation. The objective of this study was to evaluate the decision-making impacts of the direct payments scheme implemented with the reform: a regional or, more likely, a hybrid scheme. The change in farm production structure was simulated with a model, applying gross margin maximisation, based on a static linear programming approach. The model has been developed in a spreadsheet framework on the MS Excel platform. A hypothetical farm has been chosen to analyse different scenarios and specializations. The focus of the analysis was on the cattle sector, since it is expected that decoupling is going to have a significant influence on its optimal production structure. The reason is the high level of direct payments, which could in the pre-reform scheme rise up to 70% of total gross margin. Model results confirm that the reform should have unfavourable impacts on cattle farms with intensive production practice. The results show that the hybrid scheme has minor negative impacts in all cattle specializations, while the regional scheme would be a better option for a sheep-specialized farm. The analysis has also shown the growing importance of CAP pillar II payments, among them particularly agri-environmental measures. In all three schemes budgetary payments enable farmers to improve financial results, and in both reform schemes they alleviate the economic impacts of the CAP reform.
Performance of the Goddard multiscale modeling framework with Goddard ice microphysical schemes
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L. F.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-03-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
Chern, J.; Tao, W.; Lang, S. E.; Matsui, T.
2012-12-01
The accurate representation of clouds and cloud processes in atmospheric general circulation models (GCMs) with relatively coarse resolution (~100 km) has been a long-standing challenge. With the rapid advancement in computational technology, a new breed of GCMs capable of explicitly resolving clouds has been developed. Though still computationally very expensive, global cloud-resolving models (GCRMs) with horizontal resolutions of 3.5 to 14 km are already being run in an exploratory manner. Another, less computationally demanding approach is the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the GEOS global model. In recent years a few new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns. It is important to evaluate these microphysical schemes for global applications such as MMFs and GCRMs. Two-year (2007-2008) MMF sensitivity experiments have been carried out with different cloud microphysical schemes. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water contents in different geographic locations and climate regimes are evaluated against TRMM, CloudSat, and CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to evaluate the performance of different cloud microphysical schemes. We will assess the strengths and/or deficiencies of these microphysics schemes and provide guidance on how to improve
Fuzzy Multiple Criteria Decision Making Model with Fuzzy Time Weight Scheme
Chin-Yao Low; Sung-Nung Lin
2013-01-01
In this study, we propose a general fuzzy multiple criteria decision making model. A new concept, the fuzzy time weighted scheme, is adopted in the model to establish a fuzzy multiple criteria decision making with time weight (FMCDMTW) model. A real case of a fuzzy multiple criteria decision making (FMCDM) problem is considered in this study: the performance evaluation of auction websites based on the criteria proposed in the related literature. Obviously, the problem under in...
Jha, Pradeep Kumar
Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial differential equations using a second-order accurate, fully coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow
End-point parametrization and guaranteed stability for a model predictive control scheme
Weiland, Siep; Stoorvogel, Antonie Arij; Tiagounov, Andrei A.
2001-01-01
In this paper we consider the closed-loop asymptotic stability of the model predictive control scheme which involves the minimization of a quadratic criterion with a varying weight on the end-point state. In particular, we investigate the stability properties of the (MPC-) controlled system as
DEFF Research Database (Denmark)
Hyun, Jaeyub; Kook, Junghwan; Wang, Semyung
2015-01-01
and basis vectors for use according to the target system. The proposed model reduction scheme is applied to the numerical simulation of the simple mass-damping-spring system and the acoustic metamaterial systems (i.e., acoustic lens and acoustic cloaking device) for the first time. Through these numerical...
RELAP5 two-phase fluid model and numerical scheme for economic LWR system simulation
International Nuclear Information System (INIS)
Ransom, V.H.; Wagner, R.J.; Trapp, J.A.
1981-01-01
The RELAP5 two-phase fluid model and the associated numerical scheme are summarized. The experience accrued in the development of a fast-running light water reactor system transient analysis code is reviewed, and examples of the code application are given.
A hybrid scheme for absorbing edge reflections in numerical modeling of wave propagation
Liu, Yang
2010-03-01
We propose an efficient scheme to absorb reflections from the model boundaries in numerical solutions of wave equations. This scheme divides the computational domain into boundary, transition, and inner areas. The wavefields within the inner and boundary areas are computed by the wave equation and the one-way wave equation, respectively. The wavefields within the transition area are determined by a weighted combination of the wavefields computed by the wave equation and the one-way wave equation to obtain a smooth variation from the inner area to the boundary via the transition zone. The results from our finite-difference numerical modeling tests of the 2D acoustic wave equation show that the absorption enforced by this scheme gradually increases with increasing width of the transition area. We obtain equally good performance using pseudospectral and finite-element modeling with the same scheme. Our numerical experiments demonstrate that use of 10 grid points for absorbing edge reflections attains nearly perfect absorption. © 2010 Society of Exploration Geophysicists.
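A minimal sketch of the weighting idea described above, dividing the grid (one edge shown) into boundary, transition, and inner areas with a linear ramp. The function names and the linear weight profile are assumptions; the paper's scheme applies such weights to wavefields computed by the full and one-way wave equations.

```python
def transition_weights(n_boundary, n_transition, n_total):
    """Weight w(i) applied to the full (two-way) wavefield: 0 in the
    boundary area, ramping linearly up across the transition area,
    and 1 in the inner area (left model edge shown)."""
    w = []
    for i in range(n_total):
        if i < n_boundary:
            w.append(0.0)                  # one-way wave equation only
        elif i < n_boundary + n_transition:
            w.append((i - n_boundary + 1) / (n_transition + 1))
        else:
            w.append(1.0)                  # full wave equation only
    return w

def blend(u_full, u_oneway, w):
    """Hybrid wavefield: weighted combination inside the transition zone."""
    return [wi * uf + (1.0 - wi) * uo
            for wi, uf, uo in zip(w, u_full, u_oneway)]

w = transition_weights(n_boundary=2, n_transition=10, n_total=20)
```

Widening `n_transition` smooths the change from the inner area to the boundary, which is the mechanism the abstract credits for the gradually increasing absorption.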
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks
DEFF Research Database (Denmark)
Hagen, Espen; Dahmen, David; Stavrinou, Maria L
2016-01-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...
Korpusik, Adam
2017-02-01
We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
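The positivity-preserving idea can be sketched for a standard target-cell-limited infection model; the model form and parameter values below are illustrative assumptions, not necessarily those of the paper. Loss terms are discretized implicitly, a typical nonstandard finite difference construction, so non-negativity holds for any step size.

```python
def nsfd_step(T, I, V, h, lam=10.0, d=0.01, beta=5e-4, delta=0.5, p=10.0, c=3.0):
    """One positivity-preserving nonstandard finite difference update for
    a basic infection model (illustrative parameters):
        T' = lam - d*T - beta*T*V   (target cells)
        I' = beta*T*V - delta*I     (infected cells)
        V' = p*I - c*V              (virus)
    Treating loss terms implicitly keeps T, I, V non-negative for any h."""
    T_new = (T + h * lam) / (1.0 + h * (d + beta * V))
    I_new = (I + h * beta * T_new * V) / (1.0 + h * delta)
    V_new = (V + h * p * I_new) / (1.0 + h * c)
    return T_new, I_new, V_new

state = (1000.0, 0.0, 1.0)
for _ in range(200):
    state = nsfd_step(*state, h=5.0)   # deliberately large step size
```

Unlike explicit Euler, this update cannot produce negative populations, mirroring the step-size independence claimed in the abstract.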
A new numerical scheme for bounding acceleration in the LWR model
LECLERCQ, L; ELSEVIER
2005-01-01
This paper deals with the numerical resolution of bounded acceleration extensions of the LWR model. Two different ways of bounding acceleration in the LWR model will be presented: introducing a moving boundary condition in front of an accelerating flow, or defining a field of constraints on the maximum allowed speed in the (x,t) plane. Both extensions lead to the same solutions if the declining branch of the fundamental diagram is linear. The existing numerical scheme for the latter exte...
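The bounded-acceleration extensions build on the classical discretization of the LWR model. Below is a minimal Godunov step with demand/supply interface fluxes for a triangular fundamental diagram; parameter values and function names are assumptions, and the bounded-acceleration modifications themselves are not shown.

```python
def godunov_lwr_step(k, dx, dt, vf=30.0, w=5.0, kj=0.2):
    """One Godunov update of the LWR model k_t + q(k)_x = 0 with a
    triangular fundamental diagram (free speed vf, backward wave speed w,
    jam density kj; illustrative values). The interface flux is
    min(upstream demand, downstream supply). Requires dt <= dx/vf (CFL)."""
    kc = kj * w / (vf + w)          # critical density
    qmax = vf * kc                  # capacity
    def demand(ki):                 # sending capacity of the upstream cell
        return min(vf * ki, qmax)
    def supply(ki):                 # receiving capacity of the downstream cell
        return min(w * (kj - ki), qmax)
    flux = [min(demand(k[i]), supply(k[i + 1])) for i in range(len(k) - 1)]
    k_new = list(k)                 # boundary cells kept fixed for brevity
    for i in range(1, len(k) - 1):
        k_new[i] = k[i] - dt / dx * (flux[i] - flux[i - 1])
    return k_new
```

A uniform state at the critical density is a fixed point of this update, a quick sanity check on the flux construction.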
Additive operator-difference schemes: splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, verification of large scale features we suggest can be done in a manner that may be more physically meaningful than verifications via spectral decomposition that are a standard RCM verification method. The results we have at this point are somewhat
A study of the spreading scheme for viral marketing based on a complex network model
Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong
2010-02-01
Buzzword-based viral marketing, known also as digital word-of-mouth marketing, is a marketing mode attached to some carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing actually uses a pre-existing social network whose scale, however, is believed to be so large and so random that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling into the research of general viral marketing and develops a specific spreading scheme for viral marketing and an approach to design the scheme based on a real complex network of the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
Directory of Open Access Journals (Sweden)
Chang-bae Moon
2010-12-01
Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a human co-existing real environment. Reliability of localization is highly dependent upon the developer's experience because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented by focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.
Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik
2010-06-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.
A gas dynamics scheme for a two moments model of radiative transfer
International Nuclear Information System (INIS)
Buet, Ch.; Despres, B.
2007-01-01
We address the discretization of Levermore's two-moment entropy model of the radiative transfer equation. We present a new approach for the discretization of this model: first we rewrite the moment equations as a compressible gas dynamics equation by introducing an additional quantity that plays the role of a density. After that we discretize using a Lagrange-projection scheme. The Lagrange-projection scheme permits us to incorporate the source terms into the fluxes of an acoustic solver in the Lagrange step, using the well-known piecewise steady approximation, and thus to capture correctly the diffusion regime. Moreover we show that the discretization is entropic and preserves the flux-limited property of the moment model. Numerical examples illustrate the feasibility of our approach.
A theoretical framework for an access programme encompassing ...
African Journals Online (AJOL)
A theoretical framework for an access programme encompassing further education training: remedy for educational wastage? ... learners who have dropped out of school without completing their secondary-school education, there are the special needs of adult learners in the workplace that must be taken into consideration.
Encompassing Sexual Medicine within Psychiatry: Pros and Cons
Segraves, Robert Taylor
2010-01-01
Objective: This article examines the positive and negative aspects of psychiatry encompassing sexual medicine within its purview. Methods: MEDLINE searches for the period between 1980 to the present were performed with the terms "psychiatry," "sexual medicine," and "sexual dysfunction." In addition, sexual medicine texts were reviewed for chapters…
Contemporary Christian Spirituality: An “Encompassing Field ...
African Journals Online (AJOL)
Contemporary Christian spirituality, understood as both an experiential, lived-life phenomenon and an academic discipline gives a new-found universal perspective to the reflective Christian. It constitutes an encompassing, incorporative “field” through occupying a “give-and-take” inter-disciplinary place in a general ...
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
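The temporal head interpolation used to drive the child-model boundary can be sketched as linear interpolation within one parent time step; the function below is an illustrative stand-in for the paper's approach, and all names are assumptions.

```python
def interp_parent_heads(t, t0, t1, h0, h1):
    """Linear-in-time interpolation of parent-model heads onto a child
    boundary node, for a child sub-step time t within the parent
    step [t0, t1] with parent heads h0 (at t0) and h1 (at t1)."""
    theta = (t - t0) / (t1 - t0)
    return (1.0 - theta) * h0 + theta * h1

# child-model sub-steps within one parent step of length 4 time units,
# with the parent head falling from 10.0 to 8.0 at this boundary node
boundary_heads = [interp_parent_heads(t, 0.0, 4.0, 10.0, 8.0)
                  for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
```

Refining the child sub-steps between the two parent solutions is one way the abstract's adaptive local time-stepping reduces temporal truncation error at the interface.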
Reinharz, Vladimir; Dahari, Harel; Barash, Danny
2018-03-15
Age-structured PDE models have been developed to study viral infection and treatment. However, they are notoriously difficult to solve. Here, we investigate the numerical solutions of an age-based multiscale model of hepatitis C virus (HCV) dynamics during antiviral therapy and compare them with an analytical approximation, namely its long-term approximation. First, starting from a simple yet flexible numerical solution that also considers an integral approximated over previous iterations, we show that the long-term approximation is an underestimate of the PDE model solution, as expected, since some infection events are being ignored. We then argue for the importance of having a numerical solution that takes into account previous iterations for the associated integral, making the use of canned solvers problematic. Second, we demonstrate that the governing differential equations are stiff and that the stability of the numerical scheme should be considered. Third, we show that considerable gain in efficiency can be achieved by using adaptive stepsize methods over fixed stepsize methods for simulating realistic scenarios when solving multiscale models numerically. Finally, we compare several numerical schemes for the solution of the equations and demonstrate the use of a numerical optimization scheme for the parameter estimation performed directly from the equations. Copyright © 2018 Elsevier Inc. All rights reserved.
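The efficiency gain from adaptive stepsize control can be illustrated with a toy embedded Euler/Heun pair on a stiff linear decay problem; the controller below is a deliberately simple stand-in for production solvers (e.g. RK45-type methods), and the problem and tolerances are assumptions.

```python
import math

def f(y, k=50.0):
    """Stiff linear test problem y' = -k*y (illustrative)."""
    return -k * y

def adaptive_heun(y0, t_end, h0=1e-3, tol=1e-4, hmax=0.02):
    """Embedded 1st/2nd-order (Euler/Heun) pair with proportional
    step-size control: accept if the local error estimate is below tol,
    then rescale h; reject and retry otherwise."""
    t, y, h, steps = 0.0, y0, h0, 0
    while t < t_end:
        h = min(h, t_end - t, hmax)
        y_euler = y + h * f(y)                        # 1st-order step
        y_heun = y + 0.5 * h * (f(y) + f(y_euler))    # 2nd-order step
        err = abs(y_heun - y_euler)                   # local error estimate
        if err <= tol:                                # accept
            t, y = t + h, y_heun
            steps += 1
        # shrink h on rejection, grow it on easy steps
        h = 0.9 * h * math.sqrt(tol / max(err, 1e-12))
    return y, steps

y_end, n_adaptive = adaptive_heun(1.0, 1.0)
n_fixed = int(1.0 / 1e-3)   # steps a fixed run at the initial h would need
```

The controller takes small steps while the solution changes rapidly and large ones once it flattens, so far fewer steps are needed than with a uniformly small fixed step.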
Model Building by Coset Space Dimensional Reduction Scheme Using Ten-Dimensional Coset Spaces
Jittoh, T.; Koike, M.; Nomura, T.; Sato, J.; Shimomura, T.
2008-12-01
We investigate the gauge-Higgs unification models within the scheme of the coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime where the extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidates for the coset spaces, the gauge group in fourteen dimensions, and the fermion representation. Of the seventeen, ten models led to SO(10) (× U(1)) GUT-like models after dimensional reduction, three models led to SU(5) × U(1) GUT-like models, and four to SU(3) × SU(2) × U(1) × U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion contents of such models are listed.
Nazarova, G.; Ivashkina, E.; Ivanchina, E.; Kiseleva, S.; Stebeneva, V.
2015-11-01
The issue of improving the energy and resource efficiency of advanced petroleum processing can be addressed by the development of an adequate mathematical model with high predictive potential, based on the physical and chemical regularities of the process reactions. In this work, a formalized hydrocarbon conversion scheme for catalytic cracking was developed using thermodynamic parameters of reactions defined by Density Functional Theory. The list of reactions was compiled according to the results of the feedstock structural-group composition definition, which was done by the n-d-m method and the Hazelvuda method, the qualitative composition of the feedstock defined by gas chromatography-mass spectrometry, and the individual composition of the catalytic cracking gasoline fraction. The formalized hydrocarbon conversion scheme of catalytic cracking will become the basis for the development of a catalytic cracking kinetic model.
A New Repeating Color Watermarking Scheme Based on Human Visual Model
Directory of Open Access Journals (Sweden)
Chang Chin-Chen
2004-01-01
This paper proposes a human-visual-model-based scheme that effectively protects the intellectual copyright of digital images. In the proposed method, the theory of the visual secret sharing scheme is used to create a master watermark share and a secret watermark share. The secret watermark share is kept by the owner. The master watermark share is embedded into the host image to generate a watermarked image based on the human visual model. The proposed method conforms to all necessary conditions of an image watermarking technique. After the watermarked image is put under various attacks such as lossy compression, rotating, sharpening, blurring, and cropping, the experimental results show that the extracted digital watermark from the attacked watermarked images can still be robustly detected using the proposed method.
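The two-share construction can be sketched with a simple XOR stand-in for the visual secret sharing step (real visual cryptography typically uses OR-stacking with subpixel expansion, and the paper additionally embeds the master share into the host image, which is omitted here); all names and the bitwise formulation are assumptions.

```python
import random

def make_shares(watermark_bits, seed=7):
    """Split a binary watermark into a random secret share (kept by the
    owner) and a master share (to be embedded in the host image).
    XOR-based stand-in for the visual-secret-sharing construction."""
    rng = random.Random(seed)
    secret = [rng.randint(0, 1) for _ in watermark_bits]
    master = [w ^ s for w, s in zip(watermark_bits, secret)]
    return master, secret

def recover(master, secret):
    """Combining the extracted master share with the owner's secret
    share reveals the watermark."""
    return [m ^ s for m, s in zip(master, secret)]

wm = [1, 0, 1, 1, 0, 0, 1, 0]
master, secret = make_shares(wm)
recovered = recover(master, secret)
```

Either share alone is a random-looking bit pattern; only the combination reveals the watermark, which is what lets the owner keep one share private.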
Model-based fault diagnosis techniques design schemes, algorithms, and tools
Ding, Steven
2008-01-01
The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.
Circuit QED scheme for realization of the Lipkin-Meshkov-Glick model
Larson, Jonas
2010-01-01
We propose a scheme in which the Lipkin-Meshkov-Glick model is realized within a circuit QED system. An array of N superconducting qubits interacts with a driven cavity mode. In the dispersive regime, the cavity mode is adiabatically eliminated generating an effective model for the qubits alone. The characteristic long-range order of the Lipkin-Meshkov-Glick model is here mediated by the cavity field. For a closed qubit system, the inherent second order phase transition of the qubits is refle...
Implementation of a gust front head collapse scheme in the WRF numerical model
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2018-05-01
Gust fronts are thunderstorm-related phenomena usually associated with severe winds which are of great importance in theoretical meteorology, weather forecasting, cloud dynamics and precipitation, and wind engineering. An important feature of gust fronts demonstrated through both theoretical and observational studies is the periodic collapse and rebuild of the gust front head. This cyclic behavior of gust fronts results in periodic forcing of vertical velocity ahead of the parent thunderstorm, which consequently influences the storm dynamics and microphysics. This paper introduces the first gust front pulsation parameterization scheme in the WRF-ARW model (Weather Research and Forecasting-Advanced Research WRF). The influence of this new scheme on model performance is tested by investigating the characteristics of an idealized supercell cumulonimbus cloud, as well as by studying a real case of thunderstorms above the United Arab Emirates. In the ideal case, WRF with the gust front scheme produced more precipitation and showed a different time evolution of the mixing ratios of cloud water and rain, whereas the mixing ratios of ice and graupel are almost unchanged when compared to the default WRF run without the parameterization of gust front pulsation. The included parameterization did not disturb the general characteristics of the thunderstorm cloud, such as the location of updrafts and downdrafts, and the overall shape of the cloud. New cloud cells in front of the parent thunderstorm are also evident in both the ideal and real cases due to the included forcing of vertical velocity caused by the periodic collapse of the gust front head. Despite some differences between the two WRF simulations and satellite observations, the inclusion of the gust front parameterization scheme produced more cumuliform clouds and seems to match the real observations better. Both WRF simulations gave poor results when it comes to matching the maximum composite radar reflectivity from radar
Thermal Error Modeling of a Machine Tool Using Data Mining Scheme
Wang, Kun-Chieh; Tseng, Pai-Chang
In this paper the knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and the linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at the spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method and further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and the results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
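The final LR step of the KRL pipeline amounts to fitting a linear map from temperature ascent to spindle-nose deformation. A minimal sketch with hypothetical synthetic measurements (the KM classification and RS attribute-reduction steps are omitted here):

```python
# Hypothetical synthetic data: temperature ascent (deg C) at one characteristic
# point vs. thermal deformation (um) at the spindle nose
temps = [20.0, 25.0, 30.0, 35.0, 40.0]
defos = [4.1, 6.0, 8.1, 9.9, 12.1]

# Ordinary least squares for deformation = a * temp + b
n = len(temps)
mx, my = sum(temps) / n, sum(defos) / n
a = sum((x - mx) * (y - my) for x, y in zip(temps, defos)) / \
    sum((x - mx) ** 2 for x in temps)
b = my - a * mx

def predict(t):
    # Predicted thermal deformation for a measured temperature ascent
    return a * t + b
```

In the actual KRL method the regression is fitted per cluster found by KM, on the reduced attribute set selected by RS.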
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
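The nonlinearity of limited schemes noted above is easy to demonstrate: a flux limiter such as minmod (used here as a generic illustration, not necessarily the limiter in GEOS-5) fails superposition, which is exactly what breaks tangent linear and adjoint consistency.

```python
def minmod(a, b):
    # A common flux limiter: zero when the slopes disagree in sign,
    # otherwise the one of smaller magnitude
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

# Limiters fail superposition: minmod(x1 + y1, x2 + y2) is generally not
# minmod(x1, x2) + minmod(y1, y2), so the scheme has no exact tangent linear
lhs = minmod(1.0 + (-0.5), 2.0 + 1.0)        # minmod(0.5, 3.0)
rhs = minmod(1.0, 2.0) + minmod(-0.5, 1.0)   # 1.0 + 0.0
```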
Analyzing numerics of bulk microphysics schemes in community models: warm rain processes
Directory of Open Access Journals (Sweden)
I. Sednev
2012-08-01
Full Text Available Implementation of bulk cloud microphysics (BLK) parameterizations in atmospheric models of different scales has gained momentum in the last two decades. Utilization of these parameterizations in cloud-resolving models, when timesteps used for the host model integration are a few seconds or less, is justified from the point of view of cloud physics. However, mechanistic extrapolation of the applicability of BLK schemes to the regional or global scales and the utilization of timesteps of hundreds up to thousands of seconds affect both physics and numerics.
We focus on the mathematical aspects of BLK schemes, such as stability and positive-definiteness. We provide a strict mathematical definition for the problem of warm rain formation. We also derive a general analytical condition (SM-criterion) that remains valid regardless of parameterizations for warm rain processes in an explicit Eulerian time integration framework used to advance the finite-difference equations, which govern warm rain formation processes in microphysics packages in the Community Atmosphere Model and the Weather Research and Forecasting model. The SM-criterion allows for the existence of a unique positive-definite stable mass-conserving numerical solution, imposes an additional constraint on the timestep permitted due to the microphysics (like the Courant-Friedrichs-Lewy condition for the advection equation, and prohibits use of any additional assumptions not included in the strict mathematical definition of the problem under consideration.
By analyzing the numerics of warm rain processes in source codes of BLK schemes implemented in community models we provide general guidelines regarding the appropriate choice of time steps in these models.
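The abstract does not state the SM-criterion itself, but the flavour of such a positivity constraint can be sketched on a toy sink term: for explicit Euler applied to dq/dt = -q/tau, positive-definiteness of the update bounds the admissible microphysics timestep, analogous to a CFL condition.

```python
def step(q, dt, tau):
    # Explicit Euler for the toy decay process dq/dt = -q / tau gives
    # q_new = q * (1 - dt / tau), so q_new >= 0 requires dt <= tau
    return q * (1.0 - dt / tau)

tau = 100.0                  # hypothetical conversion timescale (s)
ok = step(1.0, 50.0, tau)    # dt within the constraint: mass stays non-negative
bad = step(1.0, 300.0, tau)  # dt too large: mass goes (unphysically) negative
```

The actual SM-criterion is derived for the coupled warm-rain equations, not this single decay term; the sketch only shows why large host-model timesteps can break positive-definiteness.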
Production and reception of meaningful sound in Foville's 'encompassing convolution'.
Schiller, F
1999-04-01
In the history of neurology, Achille Louis Foville (1799-1879) is a name deserving to be remembered. In the course of time, his circonvolution d'enceinte of 1844 (surrounding the Sylvian fissure) became the 'encompassing convolution' of every aspect of aphasiology, including amusia, i.e., the localization in a coherent semicircle of cerebral cortex serving the production and perception of language, song and instrumental music in health and disease.
Directory of Open Access Journals (Sweden)
Mohammad Iranmanesh
2014-12-01
Full Text Available Many standard brands sell products under the volume discount scheme (VDS) as more and more consumers are fond of purchasing products under this scheme. Despite volume discounts being commonly practiced, there is a dearth of research, both conceptual and empirical, focusing on purchase characteristics factors and consumer internal evaluation concerning the purchase of products under VDS. To attempt to fill this void, this article develops a conceptual model of VDS with the intention of delineating the influence of purchase characteristics factors on the consumer intention to purchase products under VDS and provides an explanation of their effects through consumer internal evaluation. Finally, the authors discuss the managerial implications of their research and offer guidelines for future empirical research.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
International Nuclear Information System (INIS)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-01-01
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
Energy Technology Data Exchange (ETDEWEB)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M. [School of Production Engineering and Management, Technical University of Crete, University Campus, Chania 73100, Crete (Greece)
2015-03-10
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
Conti, Costanza; Romani, Lucia
2010-09-01
Univariate subdivision schemes are efficient iterative methods to generate smooth limit curves starting from a sequence of arbitrary points. The aim of this paper is to present and investigate a new family of 6-point interpolatory non-stationary subdivision schemes capable of reproducing important curves of great interest in geometric modeling and engineering applications, if starting from uniformly spaced initial samples. This new family can reproduce conic sections since it is obtained by a parameter-dependent affine combination of the cubic exponential B-spline symbols generating functions in the space V_{4,γ} = {1, x, e^{tx}, e^{-tx}} with t∈{0, s, is | s>0}. Moreover, the free parameter can be chosen to reproduce also other interesting analytic curves by imposing the algebraic conditions for the reproduction of an additional pair of exponential polynomials, giving rise to different extensions of the space V_{4,γ}.
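For orientation, the classic stationary 4-point interpolatory scheme (Deslauriers-Dubuc, a simpler relative of the 6-point non-stationary family studied in the paper) shows the basic refine-and-retain iteration; points of a closed curve are encoded as complex numbers.

```python
def four_point_step(pts):
    # Classic stationary 4-point interpolatory rule (tension weight 1/16):
    # keep every old point and insert a new point between each adjacent pair
    n = len(pts)
    new = []
    for i in range(n):
        new.append(pts[i])  # interpolatory: old points are retained
        p0, p1 = pts[(i - 1) % n], pts[i]
        p2, p3 = pts[(i + 1) % n], pts[(i + 2) % n]
        new.append(9 / 16 * (p1 + p2) - 1 / 16 * (p0 + p3))
    return new

# A closed polygon as complex numbers (x + iy); repeated refinement
# converges to a smooth interpolating limit curve
curve = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
for _ in range(3):
    curve = four_point_step(curve)
```

The paper's schemes differ in using level-dependent (non-stationary) rules derived from exponential B-spline symbols, which is what enables exact conic reproduction.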
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models
van Elburg, R.A.J.; van Ooyen, A.
2009-01-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on
Boosting flood warning schemes with fast emulator of detailed hydrodynamic models
Bellos, V.; Carbajal, J. P.; Leitao, J. P.
2017-12-01
Floods are among the most destructive catastrophic events and their frequency has increased over the last decades. To reduce flood impact and risks, flood warning schemes are installed in flood prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or in floodplains hinder the use of detailed simulators. This sets the difficulty: we need fast predictions that meet the accuracy requirements. Most physics-based detailed simulators, although accurate, will not fulfill the speed demand, even if High Performance Computing techniques are used (the required simulation time is of the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad-hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian Process based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain given by the emulator also allows uncertainty in predictions to be quantified using ensemble methods. The above methodology is applied in real
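A minimal sketch of the two-stage idea, with a hand-solved two-point Gaussian-process emulator (the input/output values and length scale below are hypothetical; a real emulator would use many simulator runs and a fitted kernel):

```python
import math

def rbf(x1, x2, ell):
    # Squared-exponential (RBF) kernel
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

# Offline stage: two expensive detailed-simulator runs (hypothetical values):
# rainfall intensity (mm/h) -> water depth (m) at one critical site
X = [10.0, 30.0]
y = [0.2, 1.1]
ell = 10.0

# Solve the 2x2 system K * alpha = y by Cramer's rule
k11, k12, k22 = rbf(X[0], X[0], ell), rbf(X[0], X[1], ell), rbf(X[1], X[1], ell)
det = k11 * k22 - k12 * k12
alpha = [(k22 * y[0] - k12 * y[1]) / det, (k11 * y[1] - k12 * y[0]) / det]

def emulate(x):
    # Online stage: fast GP mean prediction in place of the detailed model
    return alpha[0] * rbf(x, X[0], ell) + alpha[1] * rbf(x, X[1], ell)
```

With a noise-free kernel the emulator interpolates the training runs exactly and gives smooth, cheap predictions in between, which is what makes real-time warning feasible.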
SMAFS, Steady-state analysis Model for Advanced Fuel cycle Schemes
International Nuclear Information System (INIS)
LEE, Kwang-Seok
2006-01-01
1 - Description of program or function: The model was developed as a part of the study, 'Advanced Fuel Cycles and Waste Management', which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee in the OECD/NEA. The model was designed for the efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent and offers users the capability to track down the cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study, including mass flows, waste generation, cost data, and other data such as activities, decay heat and neutron sources of spent fuel and high-level waste over time, are included in the model and can be displayed. The user can easily modify the values of mass flows and/or cost parameters and see the corresponding changes in the results. The model calculates: front-end fuel cycle mass flows such as requirements for enrichment and conversion services and natural uranium; the mass of waste based on the waste generation parameters and the mass flow; and all costs. It performs Monte Carlo simulations, changing the values of all unit costs within their respective ranges (from lower to upper bounds). 2 - Methods: In the Monte Carlo simulation, it is assumed that all unit costs follow a triangular probability distribution function, i.e., the probability that the unit cost has a given value increases linearly from its lower bound to the nominal value and then decreases linearly to its upper bound. 3 - Restrictions on the complexity of the problem: The limit for the Monte Carlo iterations is that of an Excel worksheet, i.e. 65,536
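The Monte Carlo step can be sketched directly with the standard library's triangular sampler; the unit costs, ranges, and mass flows below are hypothetical placeholders, not values from the model:

```python
import random

random.seed(42)

# Hypothetical unit-cost ranges (lower, nominal, upper) and mass flows
unit_costs = {"enrichment": (80.0, 110.0, 130.0),
              "disposal": (300.0, 500.0, 800.0)}
masses = {"enrichment": 10.0, "disposal": 2.0}

def sample_total_cost():
    # Draw each unit cost from its triangular distribution (peak at the
    # nominal value) and sum unit_cost * mass over the fuel cycle steps
    return sum(random.triangular(lo, hi, mode) * masses[step]
               for step, (lo, mode, hi) in unit_costs.items())

samples = [sample_total_cost() for _ in range(10000)]
mean_cost = sum(samples) / len(samples)
```

Note `random.triangular(low, high, mode)` takes the mode as its third argument; the mean of each draw is (low + mode + high) / 3.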
APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study
Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak
2017-04-01
In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted due to transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence it may still contain useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in the transmitted packet, in which two received copies are XORed to obtain the bit locations of erroneous bits. Thereafter, the packet is corrected by bit inversion of the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC does not correct double bit errors if they occur in the same bit location of the erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC on the Gilbert two-state model has been studied. The simulation results show that the proposed technique offers better throughput than the conventional APC and a lower packet error rate than the PC scheme.
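A toy version of the PC step, including its documented limitation: XOR-ing two copies flags the disagreeing bit positions, and candidate inversions are tried until an integrity check accepts. The oracle checker below stands in for a real CRC (an assumption for brevity); same-position double errors cancel in the XOR and go undetected, as the abstract notes.

```python
from itertools import product

def pc_correct(copy1, copy2, checker):
    # XOR the two received copies: 1-bits mark positions where they disagree
    suspects = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a ^ b]
    # Invert subsets of the suspect bits until the integrity check accepts
    for flips in product([0, 1], repeat=len(suspects)):
        trial = list(copy1)
        for pos, f in zip(suspects, flips):
            trial[pos] ^= f
        if checker(trial):
            return trial
    return None

packet = [1, 0, 1, 1, 0, 1, 0, 0]
c1 = packet[:]; c1[2] ^= 1   # error in copy 1
c2 = packet[:]; c2[5] ^= 1   # a different error in copy 2
fixed = pc_correct(c1, c2, lambda p: p == packet)

c3 = packet[:]; c3[3] ^= 1   # same-position double error:
c4 = packet[:]; c4[3] ^= 1   # the XOR cancels, so PC cannot locate it
missed = pc_correct(c3, c4, lambda p: p == packet)
```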
Energy Technology Data Exchange (ETDEWEB)
Zubov, V.A.; Rozanov, E.V. [Main Geophysical Observatory, St.Petersburg (Russian Federation); Schlesinger, M.E.; Andronova, N.G. [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Atmospheric Sciences
1997-12-31
The problems of ozone depletion, climate change and atmospheric pollution strongly depend on the processes of production, destruction and transport of chemical species. A hybrid transport scheme was developed, consisting of the semi-Lagrangian scheme for horizontal advection and the Prather scheme for vertical transport, which has been used in the Atmospheric Chemical Transport model to calculate the distributions of different chemical species. The performance of the new hybrid scheme has been evaluated in comparison with other transport schemes on the basis of specially designed tests. The seasonal cycle of the distribution of N{sub 2}O simulated by the model, as well as the dispersion of NO{sub x} exhausted from subsonic aircraft, are in good agreement with published data. (author) 8 refs.
Energy Technology Data Exchange (ETDEWEB)
Mengelkamp, H.T.; Warrach, K.; Raschke, E. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik
1997-12-31
A soil-vegetation-atmosphere-transfer scheme is presented here which solves the coupled system of the Surface Energy and Water Balance (SEWAB) equations considering partly vegetated surfaces. It is based on the one-layer concept for vegetation. In the soil, the diffusion equations for heat and moisture are solved on a multi-layer grid. SEWAB has been developed to serve as a land-surface scheme for atmospheric circulation models. Being forced with atmospheric data from either simulations or measurements, it calculates surface and subsurface runoff that can serve as input to hydrologic models. The model has been validated with field data from the FIFE experiment and has participated in the PILPS project for the intercomparison of land-surface parameterization schemes. From these experiments we feel that SEWAB reasonably well partitions the radiation and precipitation into sensible and latent heat fluxes as well as into runoff and soil moisture storage. (orig.)
Impact of an improved shortwave radiation scheme in the MAECHAM5 General Circulation Model
Directory of Open Access Journals (Sweden)
J. J. Morcrette
2007-05-01
Full Text Available In order to improve the representation of ozone absorption in the stratosphere of the MAECHAM5 general circulation model, the spectral resolution of the shortwave radiation parameterization used in the model has been increased from 4 to 6 bands. Two 20-year simulations with the general circulation model have been performed, one with the standard parameterization and the other with the newly introduced one, to evaluate the temperature and dynamical changes arising from the two different representations of the shortwave radiative transfer. In the simulation with the increased spectral resolution in the radiation parameterization, a significant warming of almost the entire model domain is reported. At the summer stratopause the temperature increase is about 6 K and alleviates the cold bias present in the model when the standard radiation scheme is used. These general circulation model results are consistent both with previous validation of the radiation scheme and with the offline clear-sky comparison performed in the current work with a discrete-ordinate 4-stream scattering line-by-line radiative transfer model. The offline validation shows a substantial reduction of the daily averaged shortwave heating rate bias (a 1–2 K/day cooling) that occurs with the standard radiation parameterization in the upper stratosphere, present under a range of atmospheric conditions. Therefore, the 6-band shortwave radiation parameterization is considered to be better suited for the representation of ozone absorption in the stratosphere than the 4-band parameterization. Concerning the dynamical response in the general circulation model, it is found that the reported warming at the summer stratopause induces stronger zonal mean zonal winds in the middle atmosphere. These stronger zonal mean zonal winds thereafter appear to produce a dynamical feedback that results in a dynamical warming (cooling) of the polar winter (summer) mesosphere, caused by an
Riaz, Faisal; Niazi, Muaz A
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies. The practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876% as compared with the IEEE 802.11n-based existing state-of-the-art mirroring-neuron-based collision avoidance scheme.
Niazi, Muaz A.
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interacts with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing and copying the actions of each other, i.e. mirroring. Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of mentalizing and mirroring modules of proposed social agent, a tailored mathematical model of the Richardson’s arms race model has also been presented. The performance of the proposed social agent has been validated at two levels–firstly it has been simulated using NetLogo, a standard agent-based modeling tool and also, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than Random walk based collision avoidance strategy in congested flock-like topologies. Whereas practical results have confirmed that the proposed scheme can avoid rear end and lateral collisions with the efficiency of 99.876% as compared with the IEEE 802.11n-based existing state of the art mirroring neuron-based collision avoidance scheme. PMID:29040294
Directory of Open Access Journals (Sweden)
Faisal Riaz
Full Text Available This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interacts with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing and copying the actions of each other, i.e. mirroring. Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of mentalizing and mirroring modules of proposed social agent, a tailored mathematical model of the Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels-firstly it has been simulated using NetLogo, a standard agent-based modeling tool and also, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than Random walk based collision avoidance strategy in congested flock-like topologies. Whereas practical results have confirmed that the proposed scheme can avoid rear end and lateral collisions with the efficiency of 99.876% as compared with the IEEE 802.11n-based existing state of the art mirroring neuron-based collision avoidance scheme.
A design of mathematical modelling for the mudharabah scheme in shariah insurance
Cahyandari, R.; Mayaningsih, D.; Sukono
2017-01-01
Indonesian Shariah Insurance Association (AASI) believes that 2014 was the year of Indonesian shariah insurance, since its growth was above that of conventional insurance. In December 2013, 43% growth was recorded for shariah insurance, while conventional insurance only reached 20%. This means that shariah insurance has tremendous potential to keep growing in the future. In addition, the growth can be predicted from the number of conventional insurance companies that open a sharia division, along with the development of Islamic banking, which automatically demands the role of shariah insurance to protect assets and banking transactions. The development of shariah insurance should be accompanied by the development of the premium fund management mechanism, in order to create innovation in shariah insurance products which benefit society. The development of premium fund management models shows positive progress through the emergence of Mudharabah, Wakala, Hybrid (Mudharabah-Wakala), and Wakala-Waqf. However, the term 'model' in this paper refers to an operational model in the form of a scheme of the management mechanism. Therefore, this paper describes a mathematical model for a premium fund management scheme, especially for the Mudharabah concept. Mathematical modeling is required for an analysis process that can be used to predict risks that could be faced by a company in the future, so that the company can take precautionary policies to minimize those risks.
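As a bare-bones illustration of a Mudharabah-style operational scheme (the sharing ratio and all figures below are hypothetical, not from the paper): investment profit earned on the participants' premium pool is split between the participants and the operator at an agreed ratio.

```python
def mudharabah_split(premium_pool, profit_rate, participant_share=0.6):
    # Profit on the invested pool is shared at the agreed (hypothetical) ratio;
    # here 60% to participants and 40% to the operator
    profit = premium_pool * profit_rate
    to_participants = profit * participant_share
    to_operator = profit - to_participants
    return to_participants, to_operator

# Hypothetical pool of 1,000,000 earning an 8% investment return
p, o = mudharabah_split(1_000_000.0, 0.08)
```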
Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods
Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard
2013-04-01
Thermal and chemical evolution of Earth's deep mantle can be studied by modeling vigorous convection in a chemically heterogeneous fluid. Numerical modeling of such a system poses several computational challenges. The dominance of heat advection over diffusive heat transport, and a negligible amount of chemical diffusion, result in sharp gradients of the thermal and chemical fields. The exponential dependence of the viscosity of mantle materials on temperature also leads to high gradients of the velocity field. The accuracy of many numerical advection schemes degrades quickly with increasing gradient of the solution, while the computational effort, in terms of scheme complexity and required resolution, grows. Additional numerical challenges arise due to the large range of length-scales characteristic of a thermochemical convection system with highly variable viscosity. For example, the thickness of the stem of a rising thermal plume may be a few percent of the mantle thickness. An even thinner filament of an anomalous material that is entrained by that plume may constitute less than a tenth of a percent of the mantle thickness. We have developed a two-dimensional FEM code to model thermochemical convection in a hollow cylinder domain, with a depth- and temperature-dependent viscosity representative of the mantle (Steinberger and Calderwood, 2006). We use the marker-in-cell method for advection of the chemical and thermal fields. The main advantage of performing advection using markers is the absence of numerical diffusion during the advection step, as opposed to the more diffusive field methods. However, in the common implementation of marker methods, the solution of the momentum and energy equations takes place on a computational grid, and the nodes do not generally coincide with the positions of the markers. Transferring velocity, temperature, and chemistry information between nodes and markers introduces errors inherent to inter- and extrapolation. In the numerical scheme
From Balancing the Numbers to an Encompassing Business Case
DEFF Research Database (Denmark)
Labucay, Inéz
2013-01-01
The Business Case of Diversity Management has evolved as the predominant concept underlying many diversity studies and practices in the field. In this line of reasoning, corporate bottom line results like an increased return on investment (ROI) are partially explained by the existence of Diversity......, Diversity measurement) are presented in more detail, followed by a summary and conclusion on its applicability and relevance for diversity practitioners. An outlook on further research ensues. The paper aims at delineating an approach to building a more encompassing Business Case....
Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.
2017-12-01
Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
Development and evaluation of a building energy model integrated in the TEB scheme
Directory of Open Access Journals (Sweden)
B. Bueno
2012-03-01
Full Text Available The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. BEM allows for previously unavailable sophistication in the modelling of air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.
Development and evaluation of a building energy model integrated in the TEB scheme
Bueno, B.; Pigeon, G.; Norford, L. K.; Zibouche, K.; Marchadier, C.
2012-03-01
The use of air-conditioning systems is expected to increase as a consequence of global-scale and urban-scale climate warming. In order to represent future scenarios of urban climate and building energy consumption, the Town Energy Balance (TEB) scheme must be improved. This paper presents a new building energy model (BEM) that has been integrated in the TEB scheme. BEM-TEB makes it possible to represent the energy effects of buildings and building systems on the urban climate and to estimate the building energy consumption at city scale (~10 km) with a resolution of a neighbourhood (~100 m). The physical and geometric definition of buildings in BEM has been intentionally kept as simple as possible, while maintaining the required features of a comprehensive building energy model. The model considers a single thermal zone, where the thermal inertia of building materials associated with multiple levels is represented by a generic thermal mass. The model accounts for heat gains due to transmitted solar radiation, heat conduction through the enclosure, infiltration, ventilation, and internal heat gains. BEM allows for previously unavailable sophistication in the modelling of air-conditioning systems. It accounts for the dependence of the system capacity and efficiency on indoor and outdoor air temperatures and solves the dehumidification of the air passing through the system. Furthermore, BEM includes specific models for passive systems, such as window shadowing devices and natural ventilation. BEM has satisfactorily passed different evaluation processes, including testing its modelling assumptions, verifying that the chosen equations are solved correctly, and validating the model with field data.
Shirmin, G. I.
1980-08-01
In the present paper, an averaging on the basis of Fatou's (1931) scheme is obtained within the framework of a version of the doubly restricted problem of four bodies. A proof is obtained for the existence of particular solutions that are analogous to the Eulerian and Lagrangian solutions. The solutions are applied to an analysis of first-order secular disturbances in the positions of libration points, caused by the influence of a body whose attraction is neglected in the classical model of the restricted three-body problem. These disturbances are shown to lead to continuous displacements of the libration points.
A Certificateless Ring Signature Scheme with High Efficiency in the Random Oracle Model
Directory of Open Access Journals (Sweden)
Yingying Zhang
2017-01-01
Full Text Available Ring signature is a kind of digital signature that can protect the identity of the signer. Certificateless public key cryptography not only overcomes the key escrow problem but also retains some of the advantages of identity-based cryptography. A certificateless ring signature integrates ring signatures with certificateless public key cryptography. In this paper, we propose an efficient certificateless ring signature scheme; it requires only three bilinear pairing operations in the verification algorithm. The scheme is proved to be unforgeable in the random oracle model.
Energy Technology Data Exchange (ETDEWEB)
Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-02-01
An orthogonal 2-dimensional numerical code has been developed. The code contains 9 widely used turbulence models: a standard k-ε model and 8 low-Reynolds-number variants. It also includes 6 numerical schemes: 5 low-order schemes and 1 high-order scheme, QUICK. To verify the code, pipe flow, channel flow and expansion-pipe flow are solved with various options of turbulence models and numerical schemes, and the calculated outputs are compared to experimental data. Furthermore, the discretization error that originates from the use of the standard k-ε turbulence model with a wall function is substantially reduced by introducing a new grid system in place of the conventional one. 23 refs., 58 figs., 6 tabs. (Author)
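As a small illustration of the high-order scheme named above: the QUICK face interpolation fits a quadratic through the two upstream nodes and one downstream node and evaluates it at the cell face, which makes it exact for quadratic profiles. This sketch assumes a uniform grid and is not taken from the code described in the abstract.

```python
def quick_face_value(phi_UU, phi_U, phi_D):
    """QUICK value at the face between nodes U and D for flow from U to D:
    quadratic fit through the far-upstream (UU), upstream (U) and
    downstream (D) nodes, evaluated midway between U and D (uniform grid).
    Standard weights: 6/8 upstream + 3/8 downstream - 1/8 far-upstream.
    """
    return 0.75 * phi_U + 0.375 * phi_D - 0.125 * phi_UU
```

For a linear profile the face value reduces to the simple midpoint average; the -1/8 far-upstream term is the curvature correction that raises the interpolation to third-order accuracy.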
Deschamps, Kevin; Eerdekens, Maarten; Desmet, Dirk; Matricali, Giovanni Arnoldo; Wuite, Sander; Staes, Filip
2017-08-16
Recent studies that estimated foot segment kinetic patterns reported inconclusive data and did not dissociate the kinetics of the Chopart and Lisfranc joints. The current study therefore aimed at reproducing independent, recently published three-segment foot kinetic data (Study 1) and, in a second stage, expanding the estimation towards a four-segment model (Study 2). Concerning the reproducibility study, two recently published three-segment foot models (Bruening et al., 2014; Saraswat et al., 2014) were reproduced and kinetic parameters were incorporated in order to calculate joint moments and powers of paediatric cohorts during gait. Ground reaction forces were measured with an integrated force/pressure plate measurement set-up and a recently published proportionality scheme was applied to determine subarea total ground reaction forces. Regarding Study 2, moments and powers were estimated with respect to the Istituto Ortopedico Rizzoli four-segment model. The proportionality scheme was expanded in this study and the impact of joint centre location on kinetic data was evaluated. Findings related to Study 1 showed in general good agreement with the kinetic data published by Bruening et al. (2014). In contrast, the peak ankle, midfoot and hallux powers published by Saraswat et al. (2014) are disputed. Findings of Study 2 revealed that the Chopart joint encompasses both power absorption and generation, whereas the Lisfranc joint mainly contributes to power generation. The results highlight the necessity for further studies in the field of foot kinetic models and provide a first estimation of the kinetic behaviour of the Lisfranc joint. Copyright © 2017 Elsevier Ltd. All rights reserved.
Comprehending isospin breaking effects of X (3872 ) in a Friedrichs-model-like scheme
Zhou, Zhi-Yong; Xiao, Zhiguang
2018-02-01
Recently, we have shown that the X(3872) state can be naturally generated as a bound state by incorporating the hadron interactions into the Godfrey-Isgur quark model using a Friedrichs-like model combined with the quark pair creation model, in which the wave function for the X(3872) as a combination of the bare cc̄ state and the continuum states can also be obtained. Under this scheme, we now investigate the isospin-breaking effect of X(3872) in its decays to J/ψ π+π- and J/ψ π+π-π0. By coupling its dominant continuum parts to J/ψ ρ and J/ψ ω through the quark rearrangement process, one could obtain the reasonable ratio of B(X(3872)→J/ψ π+π-π0)/B(X(3872)→J/ψ π+π-) ≃ 0.58-0.92. It is also shown that the D̄D* invariant mass distributions in the B→D̄D*K decays could be understood qualitatively at the same time. This scheme may provide more insight into the enigmatic nature of the X(3872) state.
Koster, Rindal D.; Milly, P. C. D.
1997-01-01
The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) has shown that different land surface models (LSMs) driven by the same meteorological forcing can produce markedly different surface energy and water budgets, even when certain critical aspects of the LSMs (vegetation cover, albedo, turbulent drag coefficient, and snow cover) are carefully controlled. To help explain these differences, the authors devised a monthly water balance model that successfully reproduces the annual and seasonal water balances of the different PILPS schemes. Analysis of this model leads to the identification of two quantities that characterize an LSM's formulation of soil water balance dynamics: (1) the efficiency of the soil's evaporation sink integrated over the active soil moisture range, and (2) the fraction of this range over which runoff is generated. Regardless of the LSM's complexity, the combination of these two derived parameters with rates of interception loss, potential evaporation, and precipitation provides a reasonable estimate for the LSM's simulated annual water balance. The two derived parameters shed light on how evaporation and runoff formulations interact in an LSM, and the analysis as a whole underscores the need for compatibility in these formulations.
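The two derived parameters lend themselves to a compact bucket-style sketch. The toy monthly water balance below is an illustration in the spirit of that analysis, not the authors' model: evaporation efficiency rises linearly with relative soil moisture, and a fixed fraction at the wet end of the moisture range generates runoff. The functional forms, parameter names, and defaults are all assumptions.

```python
def monthly_water_balance(precip, pot_evap, months,
                          w0=0.5, wmax=1.0, runoff_frac=0.3):
    """Toy bucket model: monthly soil moisture w driven by fixed
    precipitation and potential evaporation.

    - evaporation sink efficiency is w / wmax (linear in soil wetness);
    - runoff is generated only over the wettest `runoff_frac` of the
      moisture range, where any excess above the threshold runs off.
    Returns accumulated evaporation, accumulated runoff, final storage.
    """
    w = w0
    evap = runoff = 0.0
    for _ in range(months):
        e = pot_evap * (w / wmax)          # sink efficiency ~ soil wetness
        w = w + precip - e
        r = 0.0
        thresh = (1.0 - runoff_frac) * wmax
        if w > thresh:                     # runoff-generating range
            r = w - thresh
            w = thresh
        w = max(w, 0.0)
        evap += e
        runoff += r
    return evap, runoff, w

evap, runoff, w_end = monthly_water_balance(0.1, 0.08, 12)
```

Because each monthly update is a pure budget, precipitation in equals evaporation plus runoff plus storage change, which is the kind of closure the derived-parameter analysis exploits.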
A self-organized internal models architecture for coding sensory-motor schemes
Directory of Open Access Journals (Sweden)
Esaú eEscobar Juárez
2016-04-01
Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach addresses integrally the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
Directory of Open Access Journals (Sweden)
Tao Chen
2017-05-01
Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, a hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
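The regression-plus-residual idea can be sketched compactly. Below is an illustrative implementation, not the paper's code: gauge rainfall is regressed on terrain covariates through their leading principal components, and the regression residuals are interpolated by IDW and added back. The covariate matrix, number of components, and IDW power are assumptions.

```python
import numpy as np

def idw(xy_obs, val_obs, xy_new, power=2.0):
    """Inverse-distance-weighted interpolation of scattered values."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at gauges
    w = 1.0 / d ** power
    return (w * val_obs).sum(axis=1) / w.sum(axis=1)

def pcrr(X_obs, y_obs, xy_obs, X_new, xy_new, n_pc=2):
    """Principal component regression with IDW residual correction.

    X_* are covariate matrices (e.g. elevation, slope); xy_* are gauge /
    target coordinates. Structure and defaults are illustrative only.
    """
    mean = X_obs.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_obs - mean, full_matrices=False)
    P = Vt[:n_pc].T                             # leading principal directions
    Z = (X_obs - mean) @ P
    A = np.c_[np.ones(len(Z)), Z]
    beta, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
    trend_obs = A @ beta
    Zn = (X_new - mean) @ P
    trend_new = np.c_[np.ones(len(Zn)), Zn] @ beta
    resid_new = idw(xy_obs, y_obs - trend_obs, xy_new)  # residual correction
    return trend_new + resid_new
```

Because the residual field is interpolated separately, the estimate honours the gauge observations exactly while the PC regression removes collinearity among terrain covariates, which is the stated advantage of the method.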
An all-encompassing study of an authentic court setting
DEFF Research Database (Denmark)
Christensen, Tina Paulsen
Most professional interpreters and interpreting researchers probably see quality or "professionalism" as the main goal of interpreting in general, but still there is no agreement within the interpreting community on how to define interpreting quality. Facing the fact that interpreting can...... necessarily be judged from a particular (subjective) perspective on the communicative event. In this paper I shall address the issue of interpreting quality in an all-encompassing perspective on an authentic Danish courtroom setting. The aim of the empirical case-based survey is unlike that of most existing...... studies which generally have taken either one particular perspective - that of interpreters, clients or users - or been experimental in nature - to investigate to which extent different users (judge, defence counsel, prosecutor and non-majority-language speaking user) in a specific courtroom setting......
Development of a Multi-Model Ensemble Scheme for the Tropical Cyclone Forecast
Jun, S.; Lee, W. J.; Kang, K.; Shin, D. H.
2015-12-01
A Multi-Model Ensemble (MME) prediction scheme using a selection-and-weighting method was developed and evaluated for tropical cyclone forecasts. The analyzed tropical cyclone track and intensity data set provided by the Korea Meteorological Administration and 11 numerical model outputs - GDAPS, GEPS, GFS (data resolution: 50 and 100 km), GFES, HWRF, IFS (data resolution: 50 and 100 km), IFS EPS, JGSM, and TEPS - during 2011-2014 were used for this study. The procedure suggested in this study was divided into two stages: selecting and weighting. First, several numerical models were chosen based on their past performance in the selecting stage. Next, weights, referred to as regression coefficients, for each model forecast were calculated by applying linear and nonlinear regression techniques to past model forecast data in the weighting stage. Finally, tropical cyclone forecasts were determined by using both selected and weighted multi-model values at that forecast time. The preliminary result showed that the selected MME improved on the non-selected MME by more than 5% for the 72-h track forecast.
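The two stages above can be sketched directly. This is an illustrative toy, not the operational scheme: models are ranked by past RMSE (selecting stage), then the surviving forecasts are combined by ordinary least squares against the analysed values (weighting stage). The choice of RMSE, plain linear regression, and `n_select` are assumptions.

```python
import numpy as np

def build_mme(past_fcsts, past_truth, n_select=5):
    """Select the best past-performing models, then weight them by
    least-squares regression against the analysed quantity.

    past_fcsts: (n_cases, n_models) past forecasts of one scalar
    (e.g. central pressure or a track coordinate).
    Returns the kept model indices and a predictor for new forecasts.
    """
    rmse = np.sqrt(((past_fcsts - past_truth[:, None]) ** 2).mean(axis=0))
    keep = np.argsort(rmse)[:n_select]                 # selecting stage
    A = np.c_[np.ones(len(past_truth)), past_fcsts[:, keep]]
    coef, *_ = np.linalg.lstsq(A, past_truth, rcond=None)  # weighting stage
    def predict(new_fcsts):
        return coef[0] + new_fcsts[keep] @ coef[1:]
    return keep, predict
```

Dropping chronically poor models before regressing keeps the weight estimation from spending degrees of freedom on noise, which is the rationale for separating the two stages.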
DEFF Research Database (Denmark)
Primdahl, Jorgen; Vesterager, Jens Peter; Finn, John A.
2010-01-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation...... and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental...... schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly...
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
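For orientation, the EnKF kernel underlying such a scheme is short. The sketch below is the plain stochastic EnKF analysis step applied to an augmented state (e.g. concentrations plus sorption parameters); the paper's hybrid EnKF-OI blending and the transport model itself are not reproduced, and the linear observation operator is an assumption.

```python
import numpy as np

def enkf_update(ens, obs, obs_op, obs_err_std, rng):
    """Stochastic EnKF analysis step.

    ens:    (n_ens, n_state) forecast ensemble of the augmented state.
    obs_op: (n_obs, n_state) linear observation operator H.
    Observations are perturbed per member (Burgers et al. style).
    """
    n_ens = ens.shape[0]
    X = ens - ens.mean(axis=0)                  # state anomalies
    HX = ens @ obs_op.T
    HXp = HX - HX.mean(axis=0)                  # observed anomalies
    R = obs_err_std ** 2 * np.eye(len(obs))
    Pxy = X.T @ HXp / (n_ens - 1)               # cross covariance
    Pyy = HXp.T @ HXp / (n_ens - 1) + R         # innovation covariance
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(obs)))  # Kalman gain
    perturbed = obs + obs_err_std * rng.standard_normal((n_ens, len(obs)))
    return ens + (perturbed - HX) @ K.T

rng = np.random.default_rng(2)
ens = rng.standard_normal((500, 1))             # prior: N(0, 1)
ana = enkf_update(ens, np.array([3.0]), np.array([[1.0]]), 0.1, rng)
```

With a small ensemble the sample covariances in `Pxy`/`Pyy` are noisy, which is exactly the under-sampling problem the hybrid EnKF-OI formulation is designed to mitigate.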
DEFF Research Database (Denmark)
Avolio, E.; Federico, S.; Miglietta, M.
2017-01-01
The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated in an experimental site in Calabria region (southern...... the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes and a possible role of the simulated sensible heat fluxes for this mismatching...... is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first order non-local schemes, ACM2 and YSU, are the schemes...
Ambara, M. D.; Gunawan, P. H.
2018-03-01
The impact of a dam-break wave on an erodible embankment with a steep slope has been studied recently using both experimental and numerical approaches. In this paper, the semi-implicit staggered scheme for approximating the shallow water-Exner model is elaborated to describe erodible sediment on a steep slope. This scheme is known to be robust for approximating the shallow water-Exner model. The results show good agreement with the experimental data. Comparisons of the numerical results with experimental data for slopes Φ = 59.04 and Φ = 41.42, using Grass-formula coefficients Ag = 2 × 10⁻⁵ and Ag = 10⁻⁵ respectively, are found to be closest to the experiment. This paper can be seen as an additional validation of the semi-implicit staggered scheme of Gunawan et al. (2015).
Directory of Open Access Journals (Sweden)
U.N. Band
Full Text Available A transition element is developed for the local-global analysis of laminated composite beams. It bridges one part of the domain modelled with a higher order theory and another with a 2D mixed layerwise theory (LWT) used at the critical zone of the domain. The use of the developed transition element makes the analysis of interlaminar stresses possible with significant accuracy. The mixed 2D model incorporates the transverse normal and shear stresses as nodal degrees of freedom (DOF), which inherently ensures continuity of these stresses. Non-critical zones are modelled with a higher order equivalent single layer (ESL) theory, leading to a global mesh with multiple models applied simultaneously. Use of the higher order ESL in non-critical zones reduces the total number of elements required to map the domain. A substantial reduction in DOF as compared to a complete 2D mixed model is obvious. This computationally economical multiple-modelling scheme using the transition element is applied to static and free vibration analyses of laminated composite beams. Results obtained are in good agreement with benchmarks available in the literature.
Zhang, Yong; Meerschaert, Mark M.; Baeumer, Boris; LaBolle, Eric M.
2015-08-01
This study develops an explicit two-step Lagrangian scheme based on the renewal-reward process to capture transient anomalous diffusion with mixed retention and early arrivals in multidimensional media. The resulting 3-D anomalous transport simulator provides a flexible platform for modeling transport. The first step explicitly models retention due to mass exchange between one mobile zone and any number of parallel immobile zones. The mobile component of the renewal process can be calculated as either an exponential random variable or a preassigned time step, and the subsequent random immobile time follows a Hyper-exponential distribution for finite immobile zones or a tempered stable distribution for infinite immobile zones with an exponentially tempered power-law memory function. The second step describes well-documented early arrivals which can follow streamlines due to mechanical dispersion using the method of subordination to regional flow. Applicability and implementation of the Lagrangian solver are further checked against transport observed in various media. Results show that, although the time-nonlocal model parameters are predictable for transport with retention in alluvial settings, the standard time-nonlocal model cannot capture early arrivals. Retention and early arrivals observed in porous and fractured media can be efficiently modeled by our Lagrangian solver, allowing anomalous transport to be incorporated into 2-D/3-D models with irregular flow fields. Extensions of the particle-tracking approach are also discussed for transport with parameters conditioned on local aquifer properties, as required by transient flow and nonstationary media.
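The first of the two steps, retention by mobile/immobile mass exchange, reduces to a renewal-reward random walk that is easy to sketch. The 1-D toy below is an illustration of the idea, not the authors' 3-D simulator: each cycle draws an exponential mobile time (advection at constant speed) followed by a hyperexponential immobile time from parallel immobile zones. The subordination step for early arrivals is omitted, and all parameters are illustrative.

```python
import numpy as np

def mobile_immobile_arrivals(n_particles, x_target, v, rate_mobile,
                             immo_rates, immo_probs, rng):
    """Renewal-reward walk: exponential mobile episodes (advection at
    speed v) alternate with hyperexponential immobile episodes drawn
    from parallel immobile zones. Returns first-arrival times at
    x_target for each particle.
    """
    arrivals = np.empty(n_particles)
    for p in range(n_particles):
        x = t = 0.0
        while x < x_target:
            tm = rng.exponential(1.0 / rate_mobile)        # mobile step
            if x + v * tm >= x_target:
                t += (x_target - x) / v                    # finish leg
                break
            x += v * tm
            t += tm
            zone = rng.choice(len(immo_rates), p=immo_probs)
            t += rng.exponential(1.0 / immo_rates[zone])   # retention
        arrivals[p] = t
    return arrivals

rng = np.random.default_rng(3)
t_arr = mobile_immobile_arrivals(300, 10.0, 1.0, 1.0, [0.5], [1.0], rng)
```

The total mobile time is exactly the advective travel time `x_target / v`, so every immobile draw adds a pure delay; heavier-tailed immobile-time distributions (the tempered stable case in the abstract) stretch the late tail of the breakthrough curve.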
Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies
Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger
2018-02-01
Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
Full Text Available The Saint-Venant equation/Shallow Water Equation is used to simulate river flow, liquid flow in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), and hence can be used to solve the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended into a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulations in order to check the suitability of the KP schemes for solving hyperbolic-type PDEs. The simulation results indicated that the 3rd order KP scheme shows better stability compared to the 2nd order scheme. Computational time for the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is less than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators essentially should be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme shows a more accurate solution.
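The semi-discrete structure described above (spatial discretisation producing an ODE system) can be shown on a scalar stand-in. The sketch below is a 2nd-order central scheme of the KP family for Burgers' equation with a minmod-limited reconstruction and a Rusanov-type simplification of the central-upwind flux (a single local speed per interface instead of the one-sided speeds of the full KP scheme); it is written in Python rather than the paper's MATLAB, and is not the river model itself.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kp_rhs(u, dx, f=lambda u: 0.5 * u ** 2, df=lambda u: u):
    """2nd-order central semi-discretisation of u_t + f(u)_x = 0
    (Burgers here), periodic boundaries. Returns du/dt."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
    uL = u + 0.5 * s                   # left state at interface i+1/2
    uR = np.roll(u - 0.5 * s, -1)      # right state at interface i+1/2
    a = np.maximum(np.abs(df(uL)), np.abs(df(uR)))      # local speeds
    F = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)     # numerical flux
    return -(F - np.roll(F, 1)) / dx

# Integrate the resulting ODE system with Heun's method (2nd order),
# matching the advice that the time integrator order stay at or below
# the spatial order.
N = 100
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
dt = 0.4 * dx
for _ in range(50):
    k1 = kp_rhs(u, dx)
    k2 = kp_rhs(u + dt * k1, dx)
    u = u + 0.5 * dt * (k1 + k2)
```

Because the update is in flux form, the cell sum is conserved to round-off even as the sine profile steepens toward a shock.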
Modeling the mechanics of HMX detonation using a Taylor–Galerkin scheme
Directory of Open Access Journals (Sweden)
Adam V. Duran
2016-05-01
Full Text Available Design of energetic materials is an exciting area in mechanics and materials science. Energetic composite materials are used as propellants, explosives, and fuel cell components. Energy release in these materials is accompanied by extreme events: shock waves travel at typical speeds of several thousand meters per second and peak pressures can reach hundreds of gigapascals. In this paper, we develop a reactive dynamics code for modeling detonation wave features in one such material. The key contribution of this paper is an integrated algorithm that incorporates equations of state, Arrhenius kinetics, and mixing rules for particle detonation in a Taylor–Galerkin finite element simulation. We show that the scheme captures the distinct features of detonation waves, and the detonation velocity compares well with experiments reported in the literature.
Incorporation of UK Met Office's radiation scheme into CPTEC's global model
Chagas, Júlio C. S.; Barbosa, Henrique M. J.
2009-03-01
The current parameterization of radiation in CPTEC's (Center for Weather Forecast and Climate Studies, Cachoeira Paulista, SP, Brazil) operational AGCM has its origins in the work of Harshvardhan et al. (1987) and uses the formulation of Ramaswamy and Freidenreich (1992) for short-wave absorption by water vapor. The UK Met Office's radiation code (Edwards and Slingo, 1996) was incorporated into CPTEC's global model, initially for short-wave only, and some impacts of that were shown by Chagas and Barbosa (2006). The current paper presents some impacts of the complete incorporation (both short-wave and long-wave) of the UK Met Office's scheme. Selected results from off-line comparisons with line-by-line benchmark calculations are shown. Impacts on the AGCM's climate are assessed by comparing output of climate runs of the current and modified AGCM with products from the GEWEX/SRB (Surface Radiation Budget) project.
Investigation of thermalization in giant-spin models by different Lindblad schemes
Energy Technology Data Exchange (ETDEWEB)
Beckmann, Christian; Schnack, Jürgen, E-mail: jschnack@uni-bielefeld.de
2017-09-01
Highlights: • The non-equilibrium magnetization is investigated with quantum master equations that rest on Lindblad schemes. • It is studied how different couplings to the bath modify the magnetization. • Various field protocols are employed; relaxation times are deduced. • Result: the time evolution depends strongly on the details of the transition operator used in the Lindblad term. - Abstract: The theoretical understanding of time dependence in magnetic quantum systems is of great importance, in particular for cases where a unitary time evolution is accompanied by relaxation processes. A key example is given by the dynamics of single-molecule magnets, where quantum tunneling of the magnetization competes with thermal relaxation over the anisotropy barrier. In this article we investigate how well a Lindblad approach describes the relaxation in giant-spin models and how the result depends on the employed operator that transmits the action of the thermal bath.
Hyperbolic reformulation of a 1D viscoelastic blood flow model and ADER finite volume schemes
International Nuclear Information System (INIS)
Montecinos, Gino I.; Müller, Lucas O.; Toro, Eleuterio F.
2014-01-01
The applicability of ADER finite volume methods to solve hyperbolic balance laws with stiff source terms in the context of well-balanced and non-conservative schemes is extended to solve a one-dimensional blood flow model for viscoelastic vessels, reformulated as a hyperbolic system, via a relaxation time. A criterion for selecting relaxation times is found and an empirical convergence rate assessment is carried out to support this result. The proposed methodology is validated by applying it to a network of viscoelastic vessels for which experimental and numerical results are available. The agreement between the results obtained in the present paper and those available in the literature is satisfactory. Key features of the present formulation and numerical methodologies, such as accuracy, efficiency and robustness, are fully discussed in the paper
An implicit turbulence model for low-Mach Roe scheme using truncated Navier-Stokes equations
Li, Chung-Gang; Tsubokura, Makoto
2017-09-01
The original Roe scheme is well known to be unsuitable for simulations of turbulence because the dissipation that develops is unsatisfactory. Simulations of turbulent channel flow for Reτ = 180 show that, with the 'low-Mach-fix for Roe' (LMRoe) proposed by Rieper [J. Comput. Phys. 230 (2011) 5263-5287], the Roe dissipation term potentially equates the simulation to an implicit large eddy simulation (ILES) at low Mach number. Thus inspired, a new implicit turbulence model for low Mach numbers is proposed that controls the Roe dissipation term appropriately. Referred to as the automatic dissipation adjustment (ADA) model, the method of solution follows procedures developed previously for the truncated Navier-Stokes (TNS) equations and, without tuning of parameters, uses the energy ratio as a criterion to automatically adjust the upwind dissipation. Simulations of turbulent channel flow at two different Reynolds numbers and of the Taylor-Green vortex were performed to validate the ADA model. In simulations of turbulent channel flow for Reτ = 180 at a Mach number of 0.05 using the ADA model, the mean velocity and turbulence intensities are in excellent agreement with DNS results. With Reτ = 950 at a Mach number of 0.1, the results are also consistent with DNS, indicating that the ADA model is reliable at higher Reynolds numbers as well. In simulations of the Taylor-Green vortex at Re = 3000, the kinetic energy is consistent with the power law of decaying turbulence with a -1.2 exponent for LMRoe both with and without the ADA model. With the ADA model, however, the dissipation rate is significantly improved near the dissipation peak region and the peak duration is captured more accurately. With a firm basis in TNS theory, applicability at higher Reynolds numbers, and ease of implementation since no extra terms are needed, the ADA model promises to become a useful tool for turbulence modeling.
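The core idea of an energy-ratio criterion for automatic dissipation adjustment can be sketched in a few lines. This is an illustrative toy, not the ADA model of Li and Tsubokura: the spectral cutoff, coefficient bounds, and target ratio below are assumed placeholder values, and a real implementation would act on the Roe dissipation term inside the flux evaluation.

```python
import numpy as np

def energy_ratio(u, cutoff_frac=0.9):
    """Fraction of kinetic energy in the highest-wavenumber band (1-D sketch)."""
    spectrum = np.abs(np.fft.rfft(u)) ** 2
    k_cut = int(cutoff_frac * len(spectrum))
    total = spectrum.sum()
    return spectrum[k_cut:].sum() / total if total > 0 else 0.0

def adjust_dissipation(u, eps_min=0.05, eps_max=1.0, target=1e-3):
    """Scale an upwind-dissipation coefficient by the small-scale energy content.
    More energy piling up near the grid cutoff -> more dissipation requested."""
    r = energy_ratio(u)
    return eps_min + (eps_max - eps_min) * min(r / target, 1.0)
```

A smooth velocity field keeps the coefficient near its floor, while a field contaminated with grid-scale noise drives it to the ceiling.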
Günther, T; Büttner, C; Käsbohrer, A; Filter, M
2015-01-01
Mathematical models on the properties and behavior of harmful organisms in the food chain are an increasingly relevant approach for the agriculture and food industry. As a consequence, there are many efforts to develop biological models in science, economics and risk assessment nowadays. However, there is a lack of internationally harmonized standards on model annotation and model formats, which would be necessary to set up efficient tools supporting broad model application and information exchange. There are some established standards in the field of systems biology, but there is currently no corresponding provision in the area of plant protection. This work therefore aimed at the development of an annotation scheme using domain-specific metadata. The proposed scheme has been validated in a prototype implementation of a web-database model repository. This prototypic community resource currently contains models on the aflatoxin-secreting fungus Aspergillus flavus in maize, as these models have high relevance to food safety and economic impact. Specifically, models describing biological processes of the fungus (growth, aflatoxin secretion), as well as dose-response and carry-over models, were included. Furthermore, phenological models for maize were integrated as well. The developed annotation scheme is based on the well-established data exchange format SBML, which is broadly applied in the field of systems biology. The identified example models were annotated according to the developed scheme and entered into a web table (Google Sheets), which was transferred to a web-based demonstrator available at https://sites.google.com/site/test782726372685/. The implementation of a software demonstrator made clear that the proposed annotation scheme can be applied to models on plant pathogens and that broad adoption within the domain could promote communication and application of mathematical models.
Modelling tools for managing Induced RiverBank Filtration MAR schemes
De Filippis, Giovanna; Barbagli, Alessio; Marchina, Chiara; Borsi, Iacopo; Mazzanti, Giorgio; Nardi, Marco; Vienken, Thomas; Bonari, Enrico; Rossetto, Rudy
2017-04-01
Induced RiverBank Filtration (IRBF) is a widely used technique in Managed Aquifer Recharge (MAR) schemes, when aquifers are hydraulically connected with surface water bodies, with proven positive effects on the quality and quantity of groundwater. IRBF allows abstraction of a large volume of water while avoiding large decreases in groundwater heads. Moreover, thanks to the filtration process through the soil, the concentration of chemical species in surface water can be reduced, making it an excellent resource for the production of drinking water. Within the FP7 MARSOL project (demonstrating Managed Aquifer Recharge as a SOLution to water scarcity and drought; http://www.marsol.eu/), the Sant'Alessio IRBF (Lucca, Italy) was used to demonstrate the feasibility and the technical and economic benefits of managing IRBF schemes (Rossetto et al., 2015a). The Sant'Alessio IRBF along the Serchio river allows abstraction of an overall amount of about 0.5 m3/s, providing drinking water for 300,000 people in coastal Tuscany (mainly the towns of Lucca, Pisa and Livorno). The supplied water is made available by enhancing riverbank infiltration into a high-yield (10-2 m2/s transmissivity) sandy-gravelly aquifer by raising the river head and using ten vertical wells along the river embankment. A Decision Support System, consisting of connected measurements from an advanced monitoring network and modelling tools, was set up to manage the IRBF. The modelling system is based on spatially distributed and physically based coupled ground-/surface-water flow and solute transport models integrated in the FREEWAT platform (developed within the H2020 FREEWAT project - FREE and Open Source Software Tools for WATer Resource Management; Rossetto et al., 2015b), an open source and public domain GIS-integrated modelling environment for the simulation of the hydrological cycle. The platform aims at improving water resource management by simplifying the application of EU water-related Directives and at
Zeroual, Abdelhafid
2017-08-19
Monitoring vehicle traffic flow plays a central role in enhancing traffic management, transportation safety and cost savings. In this paper, we propose an innovative approach for the detection of traffic congestion. Specifically, we combine the flexibility and simplicity of a piecewise switched linear (PWSL) macroscopic traffic model with the greater capacity of the exponentially-weighted moving average (EWMA) monitoring chart. Macroscopic models, which have few, easily calibrated parameters, are employed to describe free traffic flow at the macroscopic level. Then, we apply the EWMA monitoring chart to the uncorrelated residuals obtained from the constructed PWSL model to detect congested situations. In this strategy, wavelet-based multiscale filtering of the data is used before the application of the EWMA scheme to further improve the robustness of the method to measurement noise and to reduce false alarms due to modeling errors. The performance of the PWSL-EWMA approach is successfully tested on traffic data from the three-lane portion of the Interstate 210 (I-210) highway in the west of California and the four-lane portion of the State Route 60 (SR60) highway in the east of California, provided by the Caltrans Performance Measurement System (PeMS). Results show the ability of the PWSL-EWMA approach to monitor vehicle traffic, confirming the promising application of this statistical tool to the supervision of traffic flow congestion.
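The monitoring side of this approach, an EWMA chart applied to model residuals, can be sketched as follows. The smoothing weight, control-limit multiplier, and baseline window below are conventional textbook choices, not the parameters used in the paper, and the residuals are assumed zero-mean and fault-free over the baseline window:

```python
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0, baseline=50):
    """EWMA statistic and time-varying control limits for model residuals.
    Returns (z, upper, lower, alarms); sigma is estimated from an assumed
    fault-free baseline window, and the target mean is taken as zero."""
    mu = 0.0
    sigma = np.std(residuals[:baseline])
    z = np.empty(len(residuals), dtype=float)
    prev = mu
    for i, r in enumerate(residuals):
        prev = lam * r + (1.0 - lam) * prev   # EWMA recursion
        z[i] = prev
    i = np.arange(1, len(residuals) + 1)
    # exact (non-asymptotic) EWMA standard deviation at step i
    width = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i)))
    alarms = (z > mu + width) | (z < mu - width)
    return z, mu + width, mu - width, alarms
```

A sustained shift in the residual mean, the signature of a model/traffic mismatch, drives the EWMA statistic outside the limits within a few samples.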
Yasunari, Teppei
2012-01-01
Recently the issue of glacier retreat has come up, and many factors may be relevant to it. Absorbing aerosols such as dust and black carbon (BC) are considered to be one of those factors. After they are deposited onto the snow surface, they reduce the snow albedo (the so-called snow darkening effect) and probably contribute to further melting of glaciers. The Goddard Earth Observing System version 5 (GEOS-5) has been developed at NASA/GSFC. However, the original snowpack model used in the land surface model of GEOS-5 did not consider the snow darkening effect. Here we developed a new snow albedo scheme which can account for the snow darkening effect. In addition, another scheme for calculating the mass concentrations of the absorbing aerosols in the snowpack was also developed, in which the direct aerosol depositions from the chemical transport model in GEOS-5 were used. The scheme has been validated with observed data obtained in the backyard of the Institute of Low Temperature Science, Hokkaido University, by Dr. Teruo Aoki (Meteorological Research Institute) et al., including the author. The observed data were obtained when the author was a Ph.D. candidate. The original GEOS-5, run during 2007-2009 over the Himalayas and Tibetan Plateau region, showed larger reductions of snow than the new GEOS-5 because the original one used lower albedo settings. For snow cover fraction, the new GEOS-5 simulated a more realistic snow-covered area compared to the MODIS snow cover fraction. Statistically significant reductions in snow albedo, snow cover fraction, and snow water equivalent were seen when the snow darkening effect was considered, compared to the results without it. In the real world, debris cover, internal refreezing processes, surface flow of glaciers, etc. affect glacier mass balance, and the simulated results do not immediately translate to whole-glacier retreat. However, our results indicate that some surface melting over non-debris-covered parts of the glacier would be
Iyer, Subramaniam
2017-01-01
Among the systems in place in different countries for the protection of the population against the long-term contingencies of old-age (or retirement), disability and death (or survivorship), defined-benefit social security pension schemes, i.e. social insurance pension schemes, by far predominate, despite the recent trend towards defined-contribution arrangements in social security reforms. Actuarial valuations of these schemes, unlike other branches of insurance, continue to be carried out a...
Yarrow, Maurice; VanderWijngaart, Rob; Kutler, Paul (Technical Monitor)
1997-01-01
The first release of the MPI version of the LU NAS Parallel Benchmark (NPB2.0) performed poorly compared to its companion NPB2.0 codes. The later LU release (NPB2.1 & 2.2) runs up to two and a half times faster, thanks to a revised point access scheme and a related communications scheme. The new scheme sends substantially fewer messages, is cache-friendly, and has a better load balance. We detail the observations and modifications that resulted in this efficiency improvement, and show that the poor behavior of the original code resulted from deriving a message passing scheme from an algorithm originally devised for a vector architecture.
Directory of Open Access Journals (Sweden)
C. Bommaraju
2005-01-01
Full Text Available Numerical methods are extremely useful in solving real-life problems with complex materials and geometries. However, numerical methods in the time domain suffer from artificial numerical dispersion. Standard numerical techniques which are second-order in space and time, like the conventional Finite Difference 3-point (FD3) method, the Finite-Difference Time-Domain (FDTD) method, and the Finite Integration Technique (FIT), provide estimates of the error of discretized numerical operators rather than the error of the numerical solutions computed using these operators. Here, optimally accurate time-domain FD operators which are second-order in time as well as in space are derived. Optimal accuracy means the greatest attainable accuracy for a particular type of scheme, e.g., second-order FD, for some particular grid spacing. The modified operators lead to an implicit scheme. Using the first-order Born approximation, this implicit scheme is transformed into a two-step explicit scheme, namely a predictor-corrector scheme. The stability condition (maximum time step for a given spatial grid interval) for the various modified schemes is roughly equal to that for the corresponding conventional scheme. The modified FD scheme (FDM) reduces numerical dispersion by almost a factor of 40 in the 1-D case, compared to the FD3, FDTD, and FIT. The CPU time for the FDM scheme is twice that required by the FD3 method. The simulated synthetic data for a 2-D P-SV (elastodynamics) problem computed using the modified scheme are 30 times more accurate than synthetics computed using a conventional scheme, at a cost of only 3.5 times as much CPU time. The FDM is of particular interest in the modeling of large-scale (spatial dimension greater than or equal to one thousand wavelengths, or observation time interval very large compared to the reference time step) wave propagation and scattering problems, for instance, in ultrasonic antenna and synthetic scattering data modeling for Non
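The two-step predictor-corrector structure mentioned above can be illustrated with a generic Heun-type scheme for 1-D linear advection on a periodic grid. This is a plain textbook predictor-corrector, not the optimally accurate operators derived in the paper; grid spacing, time step, and wave speed are arbitrary illustrative values:

```python
import numpy as np

def advect_pc(u, c=1.0, dx=0.01, dt=0.005, steps=100):
    """Heun predictor-corrector for u_t + c*u_x = 0 with periodic centered
    differences: an explicit Euler predictor followed by a trapezoidal corrector."""
    def rhs(v):
        # centered second-order spatial derivative on a periodic domain
        return -c * (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
    for _ in range(steps):
        u_pred = u + dt * rhs(u)                       # predictor step
        u = u + 0.5 * dt * (rhs(u) + rhs(u_pred))      # corrector step
    return u
```

Because the periodic centered difference sums to zero, the scheme conserves the discrete integral of u exactly, which makes a convenient sanity check.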
Maity, S.; Satyanarayana, A. N. V.; Mandal, M.; Nayak, S.
2017-11-01
In this study, an attempt has been made to investigate the sensitivity of land surface models (LSM) and cumulus convection schemes (CCS) using a regional climate model, RegCM Version-4.1, in simulating the Indian Summer Monsoon (ISM). Numerical experiments were conducted on the seasonal scale (May-September) for three consecutive years, 2007, 2008 and 2009, with two LSMs (Biosphere Atmosphere Transfer Scheme (BATS) and Community Land Model (CLM 3.5)) and five CCSs (MIT, KUO, GRELL, GRELL over land and MIT over ocean (GL_MO), and GRELL over ocean and MIT over land (GO_ML)). Important synoptic features are validated using various reanalysis datasets, satellite-derived products from TRMM, and CRU data. Seasonally averaged surface temperature is reasonably well simulated by the model using both LSMs along with the MIT, GO_ML and GL_MO schemes. Model simulations reveal a slight warm bias with these schemes, whereas a significant cold bias is seen with the KUO and GRELL schemes during all three years. It is noticed that the simulated Somali Jet (SJ) is weak in all simulations except those with the MIT scheme (with both BATS and CLM), in which the strength of the SJ is reasonably well captured. Although the model is able to simulate the Tropical Easterly Jet (TEJ) and Sub-Tropical Westerly Jet (STWJ) with all the CCSs in terms of their location and strength, the performance of the MIT scheme seems to be better than the rest of the CCSs. Seasonal rainfall is not well simulated by the model. Significant underestimation of Indian Summer Monsoon Rainfall (ISMR) is observed over Central and North West India. The spatial distribution of seasonal ISMR is comparatively better simulated by the model with the MIT scheme, followed by the GO_ML scheme, in combination with CLM, although it overestimates rainfall over heavy precipitation zones. Overall statistical analysis indicates that RegCM4 shows better skill in simulating the ISM with the MIT scheme using CLM.
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...
The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes are used, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA). To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial and error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools
Ding, Steven X
2013-01-01
Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: · new material on fault isolation and identification, and fault detection in feedback control loops; · extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and · enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...
Incompressible Turbulent Flow Simulation Using the κ-ɛ Model and Upwind Schemes
Directory of Open Access Journals (Sweden)
V. G. Ferreira
2007-01-01
Full Text Available In the computation of turbulent flows via turbulence modeling, the treatment of the convective terms is a key issue. In the present work, we present a numerical technique for simulating two-dimensional incompressible turbulent flows. In particular, the performance of the high-Reynolds-number κ-ɛ model and a new high-order upwind scheme (the adaptative QUICKEST of Kaibara et al. (2005)) is assessed for 2D confined and free-surface incompressible turbulent flows. The model equations are solved with the fractional-step projection method in primitive variables. Solutions are obtained by using an adaptation of the front-tracking GENSMAC (Tomé and McKee (1994)) methodology for calculating fluid flows at high Reynolds numbers. The calculations are performed using the 2D version of the Freeflow simulation system (Castello et al. (2000)). A specific way of implementing wall functions is also tested and assessed. The numerical procedure is tested by solving three fluid flow problems, namely, turbulent flow over a backward-facing step, a turbulent boundary layer over a flat plate under zero-pressure gradient, and a turbulent free jet impinging onto a flat surface. The numerical method is then applied to solve the flow of a horizontal jet penetrating a quiescent fluid from an entry port beneath the free surface.
An iterative representer-based scheme for data inversion in reservoir modeling
International Nuclear Information System (INIS)
Iglesias, Marco A; Dawson, Clint
2009-01-01
In this paper, we develop a mathematical framework for data inversion in reservoir models. A general formulation is presented for the identification of uncertain parameters in an abstract reservoir model described by a set of nonlinear equations. Given a finite number of measurements of the state and prior knowledge of the uncertain parameters, an iterative representer-based scheme (IRBS) is proposed to find improved parameters. In this approach, the representer method is used to solve a linear data assimilation problem at each iteration of the algorithm. We apply the theory of iterative regularization to establish conditions for which the IRBS will converge to a stable approximation of a solution to the parameter identification problem. These theoretical results are applied to the identification of the second-order coefficient of a forward model described by a parabolic boundary value problem. Numerical results are presented to show the capabilities of the IRBS for the reconstruction of hydraulic conductivity from the steady-state of groundwater flow, as well as the absolute permeability in the single-phase Darcy flow through porous media
Directory of Open Access Journals (Sweden)
C. A. Randles
2013-03-01
Full Text Available In this study we examine the performance of 31 global model radiative transfer schemes in cloud-free conditions with prescribed gaseous absorbers and no aerosols (a Rayleigh atmosphere), with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-the-atmosphere aerosol radiative forcing ranges from roughly −10 to 20%, with over- and underestimates of radiative cooling at lower and higher solar zenith angle, respectively. Inter-model diversity (relative standard deviation) increases from ~10 to 15% as the solar zenith angle decreases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple scattering is more variable than aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than those from the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same as or slightly better than results from previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple scattering. The results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers; relevant studies on scheme optimization for the natural ecology planning of rivers are therefore crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural ecology planning of urban rivers can be made by the ANP method. This method can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.
Directory of Open Access Journals (Sweden)
Richard Yao Kuma Agyeman
2017-01-01
Full Text Available Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF) model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November) were performed for two different years: a dry year (2001) and a wet year (2008). A double-nested approach was used, with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated) precipitation over the coastal (northern) zones of Ghana for both years, but estimated precipitation reasonably well over the forest and transitional zones. On the whole, the combination of the WRF Single-Moment 6-Class Microphysics Scheme, the Grell-Devenyi Ensemble Cumulus Scheme, and the Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and is therefore recommended for Ghana.
Directory of Open Access Journals (Sweden)
P. Chitra
2017-04-01
Full Text Available Recently, wireless network technologies have been designed for most applications. Congestion in a wireless network degrades performance and reduces throughput. A congestion-free network is quite essential in the transport layer to prevent performance degradation. Game theory is a branch of applied mathematics and applied sciences used in wireless networks, political science, biology, computer science, philosophy and economics. A great challenge of wireless networks is congestion caused by various factors, and effective congestion-free alternate path routing is essential to increase network performance. The Stackelberg game theory model is currently employed as an effective tool to design and formulate congestion issues in wireless networks. This work uses a Stackelberg game to design an alternate path model to avoid congestion. In this game, leaders and followers are selected to choose an alternate routing path. The correlated equilibrium is used in the Stackelberg game for making better decisions between non-cooperation and cooperation. Congestion was continuously monitored to increase the throughput in the network. Simulation results show that the proposed scheme could extensively improve the network performance by reducing congestion with the help of the Stackelberg game and thereby enhance throughput.
Khairoutdinov, M.
2015-12-01
The representation of microphysics, especially ice microphysics, remains one of the major uncertainties in cloud-resolving models (CRMs). Most cloud schemes use the so-called bulk microphysics approach, in which a few moments of the particle size distributions are used as the prognostic variables. The System for Atmospheric Modeling (SAM) is a CRM that employs two such schemes: a single-moment scheme, which uses only the mass for each of the water phases, and a two-moment scheme, which adds the particle concentration for each hydrometeor category. Of the two, the single-moment scheme is much more computationally efficient, as it uses only two prognostic microphysics variables compared to the ten variables used by the two-moment scheme. The efficiency comes from a rather considerable oversimplification of the microphysical processes. For instance, only the sum of the liquid and icy cloud water is predicted, with the temperature used to diagnose the mixing ratios of the different hydrometeors. The main motivation for using such simplified microphysics has been computational efficiency, especially in applications of SAM as the super-parameterization in global climate models. Recently, we have extended the single-moment microphysics by adding only one additional prognostic variable, which has, nevertheless, allowed us to separate the cloud ice from the liquid water. We made use of recent observations of ice microphysics collected in various parts of the world to parameterize several aspects of ice microphysics that had not been explicitly represented before in our single-moment scheme. For example, we use the observed broad dependence of ice concentration on temperature to diagnose the ice concentration in addition to the prognostic mass. Also, there is no artificial separation between pristine ice and snow, as often used by bulk models. Instead we prescribe the ice size spectrum as the gamma distribution, with the distribution shape parameter controlled by the
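The two diagnostic steps described, a temperature-based ice number concentration and a gamma-shaped size spectrum closed by the prognostic mass, can be sketched as below. The Fletcher-type constants, the shape parameter, and the mass-diameter prefactor are illustrative placeholders, not the fits actually used in SAM:

```python
import math

def ice_number_conc(T_kelvin, n0=1e-2, beta=0.6):
    """Fletcher-type diagnostic: ice crystal number (m^-3) grows exponentially
    with supercooling. Constants are illustrative, not the scheme's actual fit."""
    supercool = max(273.15 - T_kelvin, 0.0)
    return n0 * math.exp(beta * supercool)

def gamma_spectrum_slope(q_ice, N_ice, rho_air=1.0,
                         a=math.pi / 6 * 917.0, mu=2.0):
    """Slope lam of an assumed gamma spectrum N(D) = N0 * D^mu * exp(-lam*D),
    closed by prognostic mass q_ice (kg/kg) and diagnosed number N_ice (m^-3),
    with spherical-ice mass m(D) = a*D^3:
        rho*q / N = a * Gamma(mu+4)/Gamma(mu+1) / lam^3."""
    if q_ice <= 0.0 or N_ice <= 0.0:
        return float("inf")
    moment_ratio = math.gamma(mu + 4) / math.gamma(mu + 1)
    return (a * N_ice * moment_ratio / (rho_air * q_ice)) ** (1.0 / 3.0)
```

Colder air diagnoses more crystals, and for fixed number a larger ice mass yields a smaller slope, i.e. a spectrum shifted toward larger particles.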
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. We have therefore optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers, although getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Kumar, R.; Samaniego, L. E.
2011-12-01
Spatially distributed hydrologic models at the mesoscale level are based on the conceptualization and generalization of hydrological processes. Therefore, such models require parameter adjustment for their successful application at a given scale. Automatic computer-based algorithms are commonly used for calibration. While such algorithms can provide much faster and more efficient results than the traditional manual calibration method, they are also prone to overtraining of a parameter set for a given catchment. As a result, the transferability of model parameters from a calibration site to an un-calibrated site is limited. In this study, we propose a regional multi-basin calibration scheme to prevent the overtraining of model parameters in a specific catchment. The idea is to split the available catchments into two disjoint groups in such a way that catchments belonging to the first group are used for calibration (i.e. for minimization or maximization of objective functions), while catchments belonging to the other group are used for cross-validation of the model performance for each generated parameter set. The calibration process should be stopped if the model shows a significant decrease in its performance at the cross-validation catchments while performance at the calibration sites is still increasing. Hydrologically diverse catchments were selected as members of the calibration and cross-validation groups to obtain a regional set of robust parameters. A dissimilarity measure based on runoff and antecedent precipitation copulas was used for the selection of the disjoint sets. The proposed methodology was used to calibrate the transfer function parameters of a distributed mesoscale hydrologic model (mHM), whose parameter fields are linked to catchment characteristics through a set of transfer functions using a multiscale parameter regionalisation method. This study was carried out in 106 south German catchments ranging in size from 4 km2 to 12 700 km2. Initial test results
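The split-sample stopping rule described above can be sketched as a generic calibration loop. The proposal and scoring functions below are placeholders for illustration, not the actual mHM optimizer or objective functions of the study.

```python
# Anti-overtraining calibration sketch: parameter sets are proposed for a
# calibration group of catchments, and the search stops early once
# performance on a held-out cross-validation group stops improving while
# calibration performance keeps rising (a sign of overtraining).
def calibrate(propose, score_cal, score_val, n_iter=100, patience=5):
    best, best_cal = None, float("-inf")
    best_val, stall = float("-inf"), 0
    for _ in range(n_iter):
        theta = propose()
        cal = score_cal(theta)
        if cal <= best_cal:
            continue                      # not an improvement on calibration group
        best, best_cal = theta, cal
        val = score_val(theta)
        if val > best_val:
            best_val, stall = val, 0
        else:
            stall += 1                    # calibration improves, validation does not
            if stall >= patience:         # -> likely overtraining; stop early
                break
    return best, best_cal, best_val
```

The `patience` counter plays the role of the "significant decrease at cross-validation catchments" criterion; a real implementation would use hydrological skill scores (e.g. NSE) for both groups.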
Evaluation of European air quality modelled by CAMx including the volatility basis set scheme
Directory of Open Access Journals (Sweden)
G. Ciarelli
2016-08-01
Full Text Available Four periods of EMEP (European Monitoring and Evaluation Programme) intensive measurement campaigns (June 2006, January 2007, September–October 2008 and February–March 2009) were modelled using the regional air quality model CAMx with the VBS (volatility basis set) approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analyses and sensitivity tests were performed for the periods of February–March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA). Model performance for selected gas-phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2) and ozone (O3) were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January–February 2007 periods (8.9 ppb and 12.3 ppb mean biases, respectively). In contrast, nitrogen dioxide (NO2) and carbon monoxide (CO) were found to be underestimated for all four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all four periods, with average biases ranging from −2.1 to 1.0 µg m−3. Comparisons with AMS (aerosol mass spectrometer) measurements at different sites in Europe during February–March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurement data were performed. For February–March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period bringing
International Nuclear Information System (INIS)
Moiseenko, Vitali; Battista, Jerry; Van Dyk, Jake
2000-01-01
Purpose: To evaluate the impact of dose-volume histogram (DVH) reduction schemes and models of normal tissue complication probability (NTCP) on ranking of radiation treatment plans. Methods and Materials: Data for liver complications in humans and for spinal cord in rats were used to derive input parameters of four different NTCP models. DVH reduction was performed using two schemes: 'effective volume' and 'preferred Lyman'. DVHs for competing treatment plans were derived from a sample DVH by varying dose uniformity in a high dose region so that the obtained cumulative DVHs intersected. Treatment plans were ranked according to the calculated NTCP values. Results: Whenever the preferred Lyman scheme was used to reduce the DVH, competing plans were indistinguishable as long as the mean dose was constant. The effective volume DVH reduction scheme did allow us to distinguish between these competing treatment plans. However, plan ranking depended on the radiobiological model used and its input parameters. Conclusions: Dose escalation will be a significant part of radiation treatment planning using new technologies, such as 3-D conformal radiotherapy and tomotherapy. Such dose escalation will depend on how the dose distributions in organs at risk are interpreted in terms of expected complication probabilities. The present study indicates considerable variability in predicted NTCP values because of the methods used for DVH reduction and radiobiological models and their input parameters. Animal studies and collection of standardized clinical data are needed to ascertain the effects of non-uniform dose distributions and to test the validity of the models currently in use
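As a concrete illustration of one model/reduction pair of the kind compared above, here is a minimal sketch of the Lyman NTCP model combined with the Kutcher-Burman "effective volume" DVH reduction. The parameter values (n, m, TD50) are illustrative placeholders, not the fitted input parameters of the study.

```python
from math import erf, sqrt

def effective_volume(dvh, n):
    """dvh: list of (dose_Gy, fractional_volume) bins of a differential DVH.
    Kutcher-Burman reduction of a non-uniform dose distribution to an
    effective volume irradiated uniformly at the maximum dose."""
    d_max = max(d for d, _ in dvh)
    v_eff = sum(v * (d / d_max) ** (1.0 / n) for d, v in dvh)
    return v_eff, d_max

def lyman_ntcp(dvh, n=0.32, m=0.15, td50_1=40.0):
    """Lyman sigmoid NTCP for the reduced (uniform-dose) distribution;
    n, m, td50_1 are illustrative parameters, not fitted values."""
    v_eff, d_max = effective_volume(dvh, n)
    td50_v = td50_1 * v_eff ** (-n)          # volume-dependent tolerance dose
    t = (d_max - td50_v) / (m * td50_v)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF
```

For a uniform whole-organ dose equal to TD50 the model returns NTCP = 0.5 by construction; restricting the high dose to part of the organ lowers the predicted complication probability, which is the behaviour that lets the effective-volume reduction distinguish competing plans.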
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
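The decomposition into a baseline residence time and environmental scalars can be written compactly. The sketch below uses the standard first-order traceability relations; the numbers are illustrative, not BEPS output.

```python
# Residence-time decomposition as used in model-traceability analyses:
# the actual ecosystem residence time tau_E is the baseline residence
# time tau'_E (set by pool structure and allocation/decomposition
# parameters) inflated by the environmental scalar xi = xi_t * xi_w,
# which lies in (0, 1] and slows decomposition.
def residence_time(tau_baseline, xi_t, xi_w):
    xi = xi_t * xi_w
    if not 0.0 < xi <= 1.0:
        raise ValueError("environmental scalars must lie in (0, 1]")
    return tau_baseline / xi

# Carbon storage capacity follows from first-order donor-pool theory:
# capacity = residence time * carbon input (e.g., NPP).
def storage_capacity(npp, tau_e):
    return npp * tau_e
```

This makes the point of the abstract explicit: for a fixed biome-specific baseline, the spatiotemporal variation of tau_E is carried entirely by xi_t and xi_w, so errors in the moisture scalar propagate directly into predicted storage.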
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-01-01
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ′E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ′E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models. PMID:26541245
2009-09-01
[Extraction residue from a table of contents and body text; the recoverable content follows.] FVCOM; ICOM (Imperial College Ocean Model), a finite-element (CG and DG) model for coastal regions featuring complex irregular geometry and steep bottom topography, with code written in C; Finel, a three-dimensional non-hydrostatic finite-element code developed by the same group.
Temimi, Marouane; Chaouch, Naira; Weston, Michael; Ghedira, Hosni
2017-04-01
This study covers five fog events reported in 2014 at Abu Dhabi International Airport in the United Arab Emirates (UAE). We assess the performance of the WRF-ARW model during fog conditions, and we intercompare seven different PBL schemes and assess their impact on the performance of the simulations. Seven PBL schemes, namely Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3, were tested. Radiosonde data from the Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles were used to assess the performance of the model. All PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and bias for all PBLs were 15.75 % and -9.07 %, respectively, whereas the RMSE and bias obtained when QNSE was used were 14.65 % and -6.3 %, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface level than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained for lead times between 12 and 18 hours. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
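The verification statistics quoted above are the standard RMSE and mean bias between simulated and observed series; a plain sketch:

```python
from math import sqrt

def rmse(sim, obs):
    """Root mean square error between simulated and observed values."""
    return sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def bias(sim, obs):
    """Mean bias (simulated minus observed); negative = underestimation."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)
```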
Francisco, R. V.; Argete, J.; Giorgi, F.; Pal, J.; Bi, X.; Gutowski, W. J.
2006-09-01
The latest version of the Abdus Salam International Centre for Theoretical Physics (ICTP) regional model RegCM is used to investigate summer monsoon precipitation over the Philippine archipelago and surrounding ocean waters, a region where regional climate models have not been applied before. The sensitivity of simulated precipitation to driving lateral boundary conditions (NCEP and ERA40 reanalyses) and ocean surface flux scheme (BATS and Zeng) is assessed for 5 monsoon seasons. The ability of the RegCM to simulate the spatial patterns and magnitude of monsoon precipitation is demonstrated, both in response to the prominent large scale circulations over the region and to the local forcing by the physiographical features of the Philippine islands. This provides encouraging indications concerning the development of a regional climate modeling system for the Philippine region. On the other hand, the model shows a substantial sensitivity to the analysis fields used for lateral boundary conditions as well as the ocean surface flux schemes. The use of ERA40 lateral boundary fields consistently yields greater precipitation amounts compared to the use of NCEP fields. Similarly, the BATS scheme consistently produces more precipitation compared to the Zeng scheme. As a result, different combinations of lateral boundary fields and surface ocean flux schemes provide a good simulation of precipitation amounts and spatial structure over the region. The response of simulated precipitation to using different forcing analysis fields is of the same order of magnitude as the response to using different surface flux parameterizations in the model. As a result it is difficult to unambiguously establish which of the model configurations is best performing.
Energy Technology Data Exchange (ETDEWEB)
Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)
2010-12-15
This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats are discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass-filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to the design of an objective scheme easily applicable
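The "reversal of the meridional height gradient" ingredient that such absolute-flow indices build on can be illustrated with a Tibaldi-Molteni-type point test. The latitudes and the -10 m/deg northern-gradient threshold below are the commonly used values; the sketch deliberately omits the anomaly, persistence, and 2-D extension criteria that the full index adds.

```python
# Point test for a blocked longitude at 500 hPa: the southern height
# gradient reverses (heights increase poleward) while strong westerlies
# remain to the north. z_n, z_0, z_s are geopotential heights (m) at a
# northern, central and southern latitude for one longitude.
def is_blocked(z_n, z_0, z_s, lat_n=80.0, lat_0=60.0, lat_s=40.0):
    ghgs = (z_0 - z_s) / (lat_0 - lat_s)   # southern gradient (reversed if > 0)
    ghgn = (z_n - z_0) / (lat_n - lat_0)   # northern gradient (westerlies if strongly negative)
    return ghgs > 0.0 and ghgn < -10.0
```

A zonal flow (heights decreasing poleward everywhere) fails the first condition; a high centred near 60° N with low heights to its north and south passes both.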
From Balancing the Numbers to an Encompassing Business Case
DEFF Research Database (Denmark)
Labucay, Inéz
2013-01-01
and Horwitz 2007). The focus of the paper is on further developing and building on theoretical concepts of diversity. It also establishes links to non-mainstream theories like social network theory. After a short introduction to the model, the three stages of the model (Diversity concept, Diversity goals...
Reddy, Sunita; Mary, Immaculate
2013-01-01
The Rajiv Aarogyasri Community Health Insurance (RACHI) scheme in Andhra Pradesh (AP) has been a very popular social insurance scheme with a public-private partnership model to deal with the problem of catastrophic medical expenditure on tertiary-level care for poor households. A brief analysis of the RACHI scheme, based on officially available data and media reports, has been undertaken from a public health perspective to understand the nature and financing of the partnership and the lessons it provides. The analysis of the annual budget spent on surgeries in private hospitals compared to tertiary public hospitals shows that the current scheme is not sustainable and poses a huge burden on the state exchequer. The private hospital associations in AP further act as pressure groups to increase the budget or threaten to withdraw services. Thus, profits are privatized and losses are socialized.
Zhu, Guangpu
2018-01-26
In this paper, a fully discrete scheme which considers temporal and spatial discretizations is presented for the coupled Cahn-Hilliard equation in conserved form with the dynamic contact line condition and the Navier-Stokes equation with the generalized Navier boundary condition. Variable densities and viscosities are incorporated in this model. A rigorous proof of energy stability is provided for the fully discrete scheme based on a semi-implicit temporal discretization and a finite difference method on the staggered grids for the spatial discretization. A splitting method based on the pressure stabilization is implemented to solve the Navier-Stokes equation, while the stabilization approach is also used for the Cahn-Hilliard equation. Numerical results in both 2-D and 3-D demonstrate the accuracy, efficiency and decaying property of discrete energy of the proposed scheme.
Directory of Open Access Journals (Sweden)
Isaac Osei
2016-11-01
Techno-economic models for optimised utilisation of jatropha oil under an out-grower farming scheme were developed based on different considerations for oil and by-product utilisation. Model 1: an out-grower scheme where oil is exported and press cake is utilised for compost. Model 2: an out-grower scheme with six scenarios considered for the utilisation of oil and by-products. Linear programming models were developed based on the outcomes of these models to optimise the use of the oil through profit maximisation. The findings revealed that Model 1 was financially viable from the processors' perspective but not for the farmer at a seed price of $0.07/kg. All scenarios considered under Model 2 were financially viable from the processors' perspective but not for the farmer at a seed price of $0.07/kg; however, at a seed price of $0.085/kg, financial viability was achieved for both parties. Optimising the utilisation of the oil resulted in an annual maximum profit of $123,300.
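When the only binding constraint is the total oil supply and each use has a demand cap, a profit-maximising linear programme of this kind reduces to a greedy fill in order of unit profit. The sketch below illustrates that special case; the use names, prices and caps are invented for illustration and are not the study's data.

```python
# Greedy solution of a single-constraint profit-maximisation LP:
# allocate a fixed annual oil output among competing uses, each with a
# per-litre profit and a maximum saleable quantity.
def allocate_oil(total_litres, uses):
    """uses: list of (name, profit_per_litre, max_litres) tuples."""
    plan, profit, left = {}, 0.0, total_litres
    for name, p, cap in sorted(uses, key=lambda u: -u[1]):
        q = min(cap, left)               # fill the most profitable use first
        plan[name], profit, left = q, profit + p * q, left - q
        if left <= 0:
            break
    return plan, profit
```

With several resource constraints (seed supply, press capacity, labour) the greedy argument no longer applies and a general LP solver would be needed; the structure of the objective is the same.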
International Nuclear Information System (INIS)
Chinese, D.; Patrizio, P.; Nardin, G.
2014-01-01
Italy has witnessed an extraordinary growth in biogas generation from livestock effluents and agricultural activities in the last few years, as well as a severe isomorphic process leading to a market dominance of 999 kW power plants owned by “entrepreneurial farms”. Under the pressure of the economic crisis in the country, the Italian government has restructured renewable energy support schemes, introducing a new program in 2013. In this paper, the effects of the previous and current support schemes on the optimal plant size, feedstock mix and profitability were investigated by introducing a spatially explicit biogas supply chain optimization model, which accounts for different incentive structures. By applying the model to a regional case study, the homogenization observed to date is recognized as a result of former incentive structures. Considerable reductions in the local economic potential of agricultural biogas power plants without external heat use are estimated. New plants are likely to be manure-based and, due to the lower energy density of such feedstock, wider supply chains are expected although the optimal plant size will be smaller. The new support scheme will therefore most likely eliminate past distortions but also slow down investments in agricultural biogas plants. - Highlights: • We review the evolution of agricultural biogas support schemes in Italy over the last 20 years. • A biogas supply chain optimization model which accounts for feed-in tariffs is introduced. • The model is applied to a regional case study under the two most recent support schemes. • Incentives in force until 2013 caused homogenization towards maize-based 999 kW el plants. • Wider, manure-based supply chains feeding smaller plants are expected with future incentives
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing, either to significantly improve their computational efficiency or to make kinematic wave routing computationally feasible for high-resolution modeling.
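A minimal sketch of a MacCormack predictor-corrector step for the 1-D kinematic wave equation, the equation class solved here for overland flow routing, is given below. The grid handling, zero-gradient boundaries and rating parameters (alpha, m) are simplifications for illustration, not the DHSVM-MacCormack implementation.

```python
# One MacCormack time step for dh/dt + dq/dx = 0 with the kinematic
# rating q = alpha * h**m (m = 5/3 corresponds to Manning-type flow).
# Predictor uses a forward spatial difference, corrector a backward
# difference on the predicted field; the two are averaged.
def maccormack_step(h, dt, dx, alpha=1.0, m=5.0 / 3.0):
    n = len(h)
    q = [alpha * hi ** m for hi in h]
    # predictor: forward difference (zero-gradient at the downstream end)
    hp = [h[i] - dt / dx * (q[min(i + 1, n - 1)] - q[i]) for i in range(n)]
    qp = [alpha * hi ** m for hi in hp]
    # corrector: backward difference on predicted fluxes, then average
    return [0.5 * (h[i] + hp[i] - dt / dx * (qp[i] - qp[max(i - 1, 0)]))
            for i in range(n)]
```

The scheme is explicit and second-order accurate in smooth regions; a uniform depth field is an exact steady state, and stability requires the usual Courant condition on dt.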
Energy Technology Data Exchange (ETDEWEB)
Beliaev, J.; Trunov, N.; Tschekin, I. [OKB Gidropress (Russian Federation); Luther, W. [GRS Garching (Germany); Spolitak, S. [RNC-KI (Russian Federation)
1995-12-31
Currently the ATHLET code is widely applied for modelling of several Power Plants of WWER type with horizontal steam generators. A main drawback of all these applications is the insufficient verification of the models for the steam generator. This paper presents the nodalization schemes for the secondary side of the steam generator, the results of stationary calculations, and preliminary comparisons to experimental data. The consideration of circulation in the water inventory of the secondary side is proved to be necessary. (orig.). 3 refs.
A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRs
International Nuclear Information System (INIS)
Clifford, I; Ivanov, K; Avramova, M.
2011-01-01
Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi-scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi-scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where the void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional, time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine mesh single-phase flows as well as coarse mesh flows in homogenized regions containing both fluid and solid materials. (author)
Diagnosis and Modeling of the Explosive Development of Winter Storms: Sensitivity to PBL Schemes
Liberato, Margarida L. R.; Pradhan, Prabodha K.
2014-05-01
The correct representation of extreme windstorms in regional models is of great importance for impact studies of climate change. The Iberian Peninsula has recently witnessed major damage from intense winter extratropical cyclones like Klaus (January 2009), Xynthia (February 2010) and Gong (January 2013), which formed over the mid-Atlantic and experienced explosive intensification while travelling eastwards at lower latitudes than usual [Liberato et al. 2011; 2013]. In this paper the explosive development of these storms is simulated with the advanced mesoscale Weather Research and Forecasting model (WRF v 3.4.1), initialized with NCEP Final Analysis (FNL) data as initial and lateral boundary conditions (boundary conditions updated at 3-hour intervals). The simulation experiments are conducted with two domains, a coarse (25 km) and a nested (8.333 km) domain, covering the entire North Atlantic and the Iberian Peninsula region. The characteristics of these storms (e.g. wind speed, precipitation) are studied with the WRF model and compared with multiple observations. In this context, simulations with different Planetary Boundary Layer (PBL) schemes are performed. This approach aims at understanding which mechanisms favor the explosive intensification of these storms at lower than usual latitudes, thus improving the knowledge of the atmospheric dynamics (including small-scale processes) controlling the life cycle of midlatitude extreme storms and contributing to the improvement of predictability and of our ability to forecast storms' impacts over the Iberian Peninsula. Acknowledgments: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). References: Liberato M.L.R., J.G. Pinto, I.F. Trigo, R.M. Trigo (2011) Klaus - an
Hydrodynamic modelling of the shock ignition scheme for inertial confinement fusion
International Nuclear Information System (INIS)
Vallet, Alexandra
2014-01-01
That significant pressure enhancement is explained by the contribution of hot electrons generated by non-linear laser/plasma interaction in the corona. The proposed analytical models allow optimization of the shock ignition scheme, including the influence of the implosion parameters. Analytical, numerical and experimental results are mutually consistent. (author) [fr]
Avolio, E.; Federico, S.; Miglietta, M. M.; Lo Feudo, T.; Calidonna, C. R.; Sempreviva, A. M.
2017-08-01
The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated at an experimental site in the Calabria region (southern Italy), in an area characterized by complex orography near the sea. Results of 1 km × 1 km grid spacing simulations are compared with the data collected during a measurement campaign in summer 2009, considering hourly model outputs. Measurements from several instruments are taken into account for the performance evaluation: near-surface variables (2 m temperature and relative humidity, downward shortwave radiation, 10 m wind speed and direction) from a surface station and a meteorological mast; vertical wind profiles from Lidar and Sodar; and the aerosol backscattering from a ceilometer to estimate the PBL height. Results covering the whole measurement campaign show a cold and moist bias near the surface, mostly during daytime, for all schemes, as well as an overestimation of the downward shortwave radiation and wind speed. Wind speed and direction are also verified at vertical levels above the surface, where the model uncertainties are usually smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes, and a possible role of the simulated sensible heat fluxes in this mismatch is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first-order non-local schemes, ACM2 and YSU, perform best in representing parameters near the surface and in the boundary layer during the analyzed campaign.
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2016-04-01
This contribution presents a framework which enables the use of an Evolutionary Algorithm (EA) for the calibration and regionalization of the hydrological model COSEROreg. COSEROreg uses an updated version of the HBV-type model COSERO (Kling et al. 2014) for the modelling of hydrological processes and is embedded in a parameter regionalization scheme based on Samaniego et al. (2010). The latter uses sub-scale information to estimate model parameters via a priori chosen transfer functions (often derived from pedotransfer functions). However, the transferability of the regionalization scheme to different model concepts and the integration of new forms of sub-scale information are not straightforward. (i) The usefulness of (new) single sub-scale information layers is unknown beforehand. (ii) Additionally, the establishment of functional relationships between these (possibly meaningless) sub-scale information layers and the distributed model parameters remains a central challenge in the implementation of a regionalization procedure. The proposed method theoretically provides a framework to overcome this challenge. The implementation of the EA encompasses the following procedure: First, a formal grammar is specified (Ryan et al., 1998). The construction of the grammar thereby defines the set of possible transfer functions and also allows hydrological domain knowledge to be incorporated into the search itself. The EA iterates over the given space by combining parameterized basic functions (e.g. linear or exponential functions) and sub-scale information layers into transfer functions, which are then used in COSEROreg. However, a pre-selection model is applied beforehand to sort out infeasible proposals by the EA and to reduce the number of necessary model runs. A second optimization routine is used to optimize the parameters of the transfer functions proposed by the EA. This concept, namely using two nested optimization loops, is inspired by the ideas of Lamarckian Evolution and the Baldwin Effect
Modelling the mortality of members of group schemes in South Africa
African Journals Online (AJOL)
In this paper, the methodology underlying the graduation of the mortality of members of group schemes in South Africa underwritten by life insurance companies under group life-insurance arrangements is described and the results are presented. A multivariate parametric curve was fitted to the data for the working ages 25 ...
Directory of Open Access Journals (Sweden)
Thang M. Luong
2018-01-01
A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10 s to 100 s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2 and a regional climate simulation is performed, by dynamically downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
Tang, Yu Jia; Li, Ling Jun; Zhou, Yi Ming; Zhang, Da Wei; Yin, Wen Jun; Zhang, Meng; Xie, Bao Guo; Cheng, Nianliang
2017-04-01
Dust produced by wind erosion is a major source of atmospheric dust pollution, which has impacts on air quality, weather and climate. It is difficult to calculate dust concentration in the atmosphere with certainty unless the dust-emission rate can be estimated with accuracy. Hence, due to the unreliable estimation of the dust-emission rate flux from the ground surface, the dust forecast accuracy in air quality models is low. The main reason is that the parameter that describes the dust-emission rate in the regional air quality model is constant and cannot reflect the reality of surface dust-emission changes. A new scheme, which uses vegetation information from satellite remote sensing data and meteorological conditions provided by a meteorological forecast model, is developed to estimate the actual dust-emission rate from the ground surface. The results show that the new scheme can improve dust simulation and forecast performance significantly and reduce the root mean square error by 25% to 68%. The DDR scheme can be coupled with any current air quality model (e.g. WRF-Chem, CMAQ, CAMx) to produce more accurate dust forecasts.
Luong, Thang
2018-01-22
A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10 s to 100 s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF). A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2 and a regional climate simulation is performed, by dynamically downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
Kannan, Kidambi S.; Dasgupta, Abhijit
1998-04-01
Deformation control of smart structures and damage detection in smart composites by magneto-mechanical tagging are just a few of the increasing number of applications of polydomain, polycrystalline magnetostrictive materials currently being researched. Robust computational models of bulk magnetostriction will be of great assistance to designers of smart structures for optimization of performance and development of control strategies. This paper discusses the limitations of existing tools and reports on the work of the authors in developing a 3D nonlinear continuum finite element scheme for magnetostrictive structures, based on an appropriate Galerkin variational principle and incremental constitutive relations. The unique problems posed by the form of the equations governing magneto-mechanical interactions, as well as their impact on the proper choice of variational and finite element discretization schemes, are discussed. An adaptation of vectorial edge functions for the interpolation of the magnetic field in hexahedral elements is outlined. The differences between the proposed finite element scheme and available formulations are also discussed in this paper. Computational results obtained from the newly proposed scheme will be presented in a future paper.
Chuvatin, Alexandre S.; Rudakov, Leonid I.; Kokshenev, Vladimir A.; Aranchuk, Leonid E.; Huet, Dominique; Gasilov, Vladimir A.; Krukovskii, Alexandre Yu.; Kurmaev, Nikolai E.; Fursov, Fiodor I.
2002-12-01
This work introduces an inductive energy storage (IES) scheme which aims at pulsed-power conditioning at multi-MJ energies. The key element of the scheme is an additional plasma volume, where a magnetically accelerated wire array is used for inductive current switching. This plasma acceleration volume is connected in parallel to a microsecond capacitor bank and to a 100-ns current rise-time useful load. Simple estimates suggest that optimized scheme parameters could be reachable even when operating at ultra-high currents. We describe first proof-of-principle experiments carried out on the GIT12 generator [1] at a wire-array current level of 2 MA. The obtained confirmation of the concept consists in the generation of a 200 kV voltage directly at an inductive load. This load voltage can already be sufficient to transfer the available magnetic energy into kinetic energy of a liner at this current level. Two-dimensional modeling with the radiational MHD numerical tool Marple [2] confirms the development of inductive voltage in the system. However, the average voltage increase is accompanied by short-duration voltage drops due to interception of the current by the low-density upstream plasma. In our view, this instability of the current distribution represents the main physical limitation on the scheme's performance.
Directory of Open Access Journals (Sweden)
Yanxue Yu
2017-01-01
As a basic building block in power systems, the three-phase voltage-source inverter (VSI) connects distributed energy to the grid. For the three-phase VSI with an inductor-capacitor-inductor (LCL) filter, there are four main control schemes, according to the current sampling position and the reference frame used. Different control schemes present different impedance characteristics in their corresponding frequency ranges. To analyze the resonance phenomena due to the variation of grid impedances, the sequence impedance models of LCL-type grid-connected three-phase inverters under different control schemes are presented using the harmonic linearization method. The impedance-based stability analysis approach is then applied to compare the relative stability issues due to the impedance differences at some frequencies and to choose the best control scheme and the better controller-parameter regulating method for the LCL-type three-phase VSI. Simulations and experiments both validate the resonance analysis results.
Energy Technology Data Exchange (ETDEWEB)
Goudon, Thierry, E-mail: thierry.goudon@inria.fr [Team COFFEE, INRIA Sophia Antipolis Mediterranee (France); Labo. J.A. Dieudonne CNRS and Univ. Nice-Sophia Antipolis (UMR 7351), Parc Valrose, 06108 Nice cedex 02 (France); Parisot, Martin, E-mail: martin.parisot@gmail.com [Project-Team SIMPAF, INRIA Lille Nord Europe, Park Plazza, 40 avenue Halley, F-59650 Villeneuve d' Ascq cedex (France)
2012-10-15
In the so-called Spitzer-Haerm regime, the equations of plasma physics reduce to a nonlinear parabolic equation for the electronic temperature. Coming back to the derivation of this limiting equation through hydrodynamic regime arguments, one is led to construct a hierarchy of models where the heat fluxes are defined through a non-local relation, which can equally be reinterpreted by introducing coupled diffusion equations. We address the question of designing numerical methods to simulate these equations. The basic requirement for the scheme is to be asymptotically consistent with the Spitzer-Haerm regime. Furthermore, the constraints of physically realistic simulations make the use of unstructured meshes unavoidable. We develop a finite volume scheme, based on vertex-based discretization, which meets these objectives. We discuss, on numerical grounds, the efficiency of the method and the ability of the generalized models to capture relevant phenomena missed by the asymptotic problem.
Feiccabrino, James; Lundberg, Angela; Sandström, Nils
2013-04-01
Many hydrological models determine precipitation phase using surface weather station data. However, there is a declining number of augmented weather stations reporting manually observed precipitation phases, and a large number of automated observing systems (AOS) which do not report precipitation phase. Automated precipitation phase determination suffers from low accuracy in the precipitation phase transition zone (PPTZ), i.e. the temperature range -1 °C to 5 °C where rain, snow and mixed precipitation are possible. Therefore, it is valuable to revisit surface-based precipitation phase determination schemes (PPDS) while manual verification is still widely available. Hydrological and meteorological approaches to PPDS are vastly different. Most hydrological models apply surface meteorological data to one of two main PPDS approaches. The first is a single rain/snow threshold temperature (TRS); the second uses a formula to describe how the mixed precipitation phase changes between the threshold temperatures TS (below this temperature all precipitation is considered snow) and TR (above this temperature all precipitation is considered rain). However, both approaches ignore the effect of lower tropospheric conditions on surface precipitation phase. An alternative could be to apply a meteorological approach in a hydrological model. Many meteorological approaches rely on weather balloon data to determine initial precipitation phase, and on latent heat transfer for the melting or freezing of precipitation falling through the lower troposphere. These approaches can improve hydrological PPDS, but would require additional input data. Therefore, it would be beneficial to link expected lower tropospheric conditions to AOS data already used by the model. In a single air mass, rising air can be assumed to cool at a steady rate due to a decrease in atmospheric pressure. When two air masses meet, warm air is forced to ascend the more dense cold air. This causes a thin sharp warming (frontal
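The second hydrological approach mentioned above (a mixed-phase formula between the thresholds TS and TR) is commonly a linear interpolation; a minimal sketch, with illustrative threshold values not taken from any particular model:

```python
def snow_fraction(t_air, t_snow=-1.0, t_rain=3.0):
    """Linear mixed-phase scheme: all snow below t_snow (TS), all rain above
    t_rain (TR), linear interpolation inside the transition zone.
    Temperatures in deg C; threshold values here are illustrative."""
    if t_air <= t_snow:
        return 1.0
    if t_air >= t_rain:
        return 0.0
    return (t_rain - t_air) / (t_rain - t_snow)
```

A single-threshold TRS scheme is the degenerate case t_snow == t_rain, which is exactly what makes it inaccurate inside the PPTZ.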
Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.
2015-12-01
In this study, we have evaluated the performance of size distribution functions (SDFs) with 2 and 3 moments in fitting the observed size distribution of rain droplets at three different heights. The goal is to improve the microphysics schemes in meso-scale models, such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs in this study were the M-P distribution (a Gamma SDF with a fixed shape parameter, FSP), Gamma SDFs with the shape parameter diagnosed following Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08) or obtained by solving for the shape parameter (SSP), and the Lognormal SDF. Based on the preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying a FSP was 10^-2 for fitting the 0-order moment of the observed rain droplet distribution, and changed to 10^-1 and 10^0, respectively, for the 1-4 order moments and the 5-6 order moments. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods could improve the fitting accuracies for the 0-6 order moments, especially the one coupling the SSP and DSPS08 methods, which gave average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments, respectively. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. There was strong sensitivity of the moment group in fitting accuracy. When the ensemble method coupling SSP and DSPS08 was used, a better fit
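As a minimal illustration of moment-based fitting, a gamma shape-scale pair can be diagnosed from the first two sample moments of the observed drop diameters; this is a simplified stand-in for the diagnosis methods compared above, with the 0-8 shape-parameter guard mirroring the overflow threshold noted in the abstract:

```python
def gamma_moment_fit(diameters):
    """Two-moment (mean/variance) fit of a gamma shape-scale pair to drop
    diameters (mm); a toy analogue of moment-based shape diagnosis."""
    n = len(diameters)
    mean = sum(diameters) / n
    var = sum((d - mean) ** 2 for d in diameters) / n
    shape = mean * mean / var          # dimensionless shape parameter
    shape = min(max(shape, 0.0), 8.0)  # guard: values outside 0-8 overflow
    scale = var / mean                 # mm
    return shape, scale
```

Real diagnosis methods relate higher-order moment ratios to the shape parameter of N(D) = N0 D^mu exp(-lambda D); the mean/variance version above only shows the mechanics.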
International Nuclear Information System (INIS)
Mukhamedov, Farrukh; Saburov, Mansoor
2010-06-01
In the present paper we study forward Quantum Markov Chains (QMC) defined on a Cayley tree. Using the tree structure of graphs, we give a construction of quantum Markov chains on a Cayley tree. By means of such constructions we prove the existence of a phase transition for the XY-model on a Cayley tree of order three in the QMC scheme. By the phase transition we mean the existence of two distinct QMCs for the given family of interaction operators {K}. (author)
Peng, Qiujin
2017-09-18
In this paper, we present two second-order numerical schemes to solve the fourth-order parabolic equation derived from a diffuse interface model with the Peng-Robinson equation of state (EOS) for a pure substance. The mass conservation, energy decay property, unique solvability and L-infinity convergence of these two schemes are proved. Numerical results demonstrate the good approximation of the fourth-order equation and confirm the reliability of these two schemes.
Xie, Zhipeng; Hu, Zeyong; Xie, Zhenghui; Jia, Binghao; Sun, Genhou; Du, Yizhen; Song, Haiqing
2018-02-01
This paper presents the impact of two snow cover schemes (NY07 and SL12) in the Community Land Model version 4.5 (CLM4.5) on the snow distribution and surface energy budget over the Tibetan Plateau. The simulated snow cover fraction (SCF), snow depth, and snow cover days were evaluated against in situ snow depth observations and a satellite-based snow cover product and snow depth dataset. The results show that the SL12 scheme, which considers snow accumulation and snowmelt processes separately, has a higher overall accuracy (81.8%) than NY07 (75.8%); however, SL12 yields a 15.1% underestimation rate, while NY07 overestimates the SCF with a 15.2% overestimation rate. Both schemes capture the distribution of the maximum snow depth well but show large positive biases in the average value through all periods (3.37, 3.15, and 1.48 cm for NY07; 3.91, 3.52, and 1.17 cm for SL12) and overestimate snow cover days compared with the satellite-based product and in situ observations. Higher altitudes show larger root-mean-square errors (RMSEs) in the simulations of snow depth and snow cover days during the snow-free period. Moreover, the surface energy flux estimations from the SL12 scheme are generally superior to those from NY07 when evaluated against ground-based observations, in particular for net radiation and sensible heat flux. This study has great implications for further improvement of the subgrid-scale snow variations over the Tibetan Plateau.
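For orientation, an NY07-style snow cover fraction grows with snow depth and declines as the snowpack densifies, roughly as below; the functional form follows Niu and Yang (2007), but the parameter values are typical defaults and not necessarily those used in CLM4.5:

```python
import math

def scf_ny07(snow_depth, snow_density, z0g=0.01, rho_new=100.0, m=1.6):
    """NY07-style snow cover fraction: tanh of snow depth (m) scaled by the
    ground roughness length z0g (m) and a densification factor, where
    snow_density (kg/m^3) is compared with fresh-snow density rho_new.
    Parameter values here are illustrative defaults."""
    return math.tanh(snow_depth / (2.5 * z0g * (snow_density / rho_new) ** m))
```

The sensitivity to snow_density is what distinguishes this family of diagnostics from depth-only SCF formulas.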
Hamdi, R.; Degrauwe, D.; Duerinckx, A.; Cedilnik, J.; Costa, V.; Dalkilic, T.; Essaouini, K.; Jerczynki, M.; Kocaman, F.; Kullmann, L.; Mahfouf, J.-F.; Meier, F.; Sassi, M.; Schneider, S.; Váňa, F.; Termonia, P.
2014-01-01
The newly developed land surface scheme SURFEX (SURFace EXternalisée) is implemented into a limited-area numerical weather prediction model running operationally in a number of countries of the ALADIN and HIRLAM consortia. The primary question addressed is the ability of SURFEX to be used as a new land surface scheme and thus assessing its potential use in an operational configuration instead of the original ISBA (Interactions between Soil, Biosphere, and Atmosphere) scheme. The results show that the introduction of SURFEX either shows improvement for or has a neutral impact on the 2 m temperature, 2 m relative humidity and 10 m wind. However, it seems that SURFEX has a tendency to produce higher maximum temperatures at high-elevation stations during winter daytime, which degrades the 2 m temperature scores. In addition, surface radiative and energy fluxes improve compared to observations from the Cabauw tower. The results also show that promising improvements with a demonstrated positive impact on the forecast performance are achieved by introducing the town energy balance (TEB) scheme. It was found that the use of SURFEX has a neutral impact on the precipitation scores. However, the implementation of TEB within SURFEX for a high-resolution run tends to cause rainfall to be locally concentrated, and the total accumulated precipitation obviously decreases during the summer. One of the novel features developed in SURFEX is the availability of a more advanced surface data assimilation using the extended Kalman filter. The results over Belgium show that the forecast scores are similar between the extended Kalman filter and the classical optimal interpolation scheme. Finally, concerning the vertical scores, the introduction of SURFEX either shows improvement for or has a neutral impact in the free atmosphere.
Directory of Open Access Journals (Sweden)
Saulo Frietas
2012-01-01
An advection scheme, which maintains the initial monotonic characteristics of a transported tracer field and at the same time produces low numerical diffusion, is implemented in the Coupled Chemistry-Aerosol-Tracer Transport model of the Brazilian developments on the Regional Atmospheric Modeling System (CCATT-BRAMS). Several comparisons of transport modeling using the new and original (non-monotonic) CCATT-BRAMS formulations are performed. Idealized 2-D non-divergent or divergent and stationary or time-dependent wind fields are used to transport sharply localized tracer distributions, as well as to verify whether an existing correlation of the mass mixing ratios of two interrelated tracers is kept during the transport simulation. Further comparisons are performed using realistic 3-D wind fields. We then perform full simulations of real cases using data assimilation and complete atmospheric physics. In these simulations, we address the impacts of both advection schemes on the transport of biomass burning emissions and the formation of secondary species from non-linear chemical reactions of precursors. The results show that the new scheme produces much more realistic transport patterns, without generating spurious oscillations and under- and overshoots or spreading mass away from the local peaks. Increasing the numerical diffusion in the original scheme in order to remove the spurious oscillations and maintain the monotonicity of the transported field causes excessive smoothing of the tracer distribution, reducing the local gradients and maximum values and unrealistically spreading mass away from the local peaks. As a result, huge differences (hundreds of %) for relatively inert tracers (like carbon monoxide) are found in the smoke plume cores. In terms of the secondary chemical species formed by non-linear reactions (like ozone), we found differences of up to 50% in our simulations.
Verification of a Higher-Order Finite Difference Scheme for the One-Dimensional Two-Fluid Model
Directory of Open Access Journals (Sweden)
William D. Fullmer
2013-06-01
The one-dimensional two-fluid model is widely acknowledged as the most detailed and accurate macroscopic formulation of the thermo-fluid dynamics in nuclear reactor safety analysis. Currently the prevailing one-dimensional thermal hydraulics codes are only first-order accurate. The benefit of first-order schemes is numerical viscosity, which serves as a regularization mechanism for many otherwise ill-posed two-fluid models. However, excessive diffusion in regions of large gradients leads to poor resolution of phenomena related to void wave propagation. In this work, a higher-order shock-capturing method is applied to the basic equations for incompressible and isothermal flow of the one-dimensional two-fluid model. The higher-order accuracy is gained by a strong stability preserving multi-step scheme for the time discretization and a minmod flux limiter scheme for the convection terms. Additionally, the use of a staggered grid allows for several second-order centered terms, when available. The continuity equations are first tested by manipulating the two-fluid model into a pair of linear wave equations and tested for smooth and discontinuous initial data. The two-fluid model is benchmarked with the water faucet problem. With the higher-order method, the ill-posed nature of the governing equations presents severe challenges due to a growing void fraction jump in the solution. Therefore the initial and boundary conditions of the problem are modified in order to eliminate a large counter-current flow pattern that develops. With the modified water faucet problem the numerical models behave well and allow a convergence study. Using the L1 norm of the liquid fraction, it is verified that the first and higher-order numerical schemes converge to the quasi-analytical solution at rates of O(1/2) and O(2/3), respectively. It is also shown that the growing void jump is a contact discontinuity, i.e. it is a linearly degenerate wave. The sub
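The minmod flux limiter mentioned above selects the smaller-magnitude of two neighbouring slopes and falls back to first order at extrema, which is what suppresses spurious oscillations; a minimal sketch:

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema (opposite-sign slopes),
    otherwise the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_face_value(u_left, u_center, u_right):
    """Second-order MUSCL-type reconstruction of the right face value of the
    center cell, limited so the reconstruction stays non-oscillatory."""
    slope = minmod(u_center - u_left, u_right - u_center)
    return u_center + 0.5 * slope
```

At a local maximum the limited slope is zero and the face value reduces to the first-order (cell-average) value, which is exactly the behaviour that gives the scheme its TVD character.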
DEFF Research Database (Denmark)
van Leeuwen, Theo
2013-01-01
This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.
Chu, Chunlei
2009-01-01
We present two Lax-Wendroff type high-order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, but with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
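For reference, the classical second-order Lax-Wendroff update for 1D linear advection, which the modified temporal extrapolation coefficients generalize, reads as follows (periodic grid, illustrative only, not the 3D elastic scheme of the abstract):

```python
def lax_wendroff_step(u, c):
    """One Lax-Wendroff step for u_t + a u_x = 0 on a periodic grid;
    c = a*dt/dx is the Courant number (stable for |c| <= 1).
    Second-order in space and time via the Taylor expansion
    u^{n+1} = u + dt u_t + dt^2/2 u_tt with u_t, u_tt replaced by
    centered spatial differences."""
    n = len(u)
    return [
        u[i]
        - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
        + 0.5 * c * c * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
        for i in range(n)
    ]
```

At c = 1 the scheme reproduces the exact shift of the profile by one cell, a standard sanity check.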
Bonne, François; Alamir, Mazen; Bonnay, Patrick
2014-01-01
In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches, designed from user experience and usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (the controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
Directory of Open Access Journals (Sweden)
R. Sitharthan
2016-09-01
This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid, namely grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto reclosures, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time domain simulations carried out in the PSCAD/EMTDC software environment.
Fan, Xiaolin
2017-01-19
This paper presents a componentwise convex splitting scheme for numerical simulation of multicomponent two-phase fluid mixtures in a closed system at constant temperature, which is modeled by a diffuse interface model equipped with the Van der Waals and the Peng-Robinson equations of state (EoS). The Van der Waals EoS has a rigorous foundation in physics, while the Peng-Robinson EoS is more accurate for hydrocarbon mixtures. First, the phase field theory of thermodynamics and variational calculus are applied to a functional minimization problem of the total Helmholtz free energy. Mass conservation constraints are enforced through Lagrange multipliers. A system of chemical equilibrium equations is obtained which is a set of second-order elliptic equations with extremely strong nonlinear source terms. The steady state equations are transformed into a transient system as a numerical strategy on which the scheme is based. The proposed numerical algorithm avoids the indefiniteness of the Hessian matrix arising from the second-order derivative of homogeneous contribution of total Helmholtz free energy; it is also very efficient. This scheme is unconditionally componentwise energy stable and naturally results in unconditional stability for the Van der Waals model. For the Peng-Robinson EoS, it is unconditionally stable through introducing a physics-preserving correction term, which is analogous to the attractive term in the Van der Waals EoS. An efficient numerical algorithm is provided to compute the coefficient in the correction term. Finally, some numerical examples are illustrated to verify the theoretical results and efficiency of the established algorithms. The numerical results match well with laboratory data.
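The convex-splitting idea (treat the convex part of the energy implicitly and the concave part explicitly, giving unconditional energy stability) can be shown on a scalar toy gradient flow rather than the full multicomponent Peng-Robinson model; everything below is an illustrative analogue, not the scheme of the paper:

```python
def convex_splitting_step(u, dt, newton_iters=20):
    """One Eyre-type convex-splitting step for the gradient flow
    u' = -(u^3 - u), whose energy u^4/4 - u^2/2 splits into a convex part
    (u^4/4, treated implicitly) and a concave part (-u^2/2, treated
    explicitly). The update solves v + dt*v^3 = u + dt*u for v by Newton's
    method; the derivative 1 + 3*dt*v^2 is strictly positive, so the solve
    is well defined for any dt."""
    rhs = u + dt * u
    v = u
    for _ in range(newton_iters):
        f = v + dt * v ** 3 - rhs
        fp = 1.0 + 3.0 * dt * v * v
        v -= f / fp
    return v
```

The fixed points u = 0, +1, -1 of the flow are preserved exactly, and iterates started between 0 and 1 move monotonically toward 1 regardless of the step size, which is the unconditional-stability behaviour the splitting is designed to deliver.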
Ferrero, Enrico; Alessandrini, Stefano; Vandenberghe, Francois
2018-03-01
We tested several planetary-boundary-layer (PBL) schemes available in the Weather Research and Forecasting (WRF) model against measured wind speed and direction, temperature and turbulent kinetic energy (TKE) at three levels (5, 9, 25 m). The Urban Turbulence Project dataset, gathered on the outskirts of Turin, Italy and used for the comparison, provides measurements made by sonic anemometers for more than 1 year. In contrast to other similar studies, which have mainly focused on short time periods, we considered 2 months of measurements (January and July) representing both the seasonal and the daily variability. To understand how the WRF-model PBL schemes perform in an urban environment, often characterized by low wind-speed conditions, we first compared six PBL schemes against observations taken by the highest anemometer, located in the inertial sub-layer. The availability of the TKE measurements allows us to evaluate the performance of the model directly; results of the model evaluation are presented in terms of quantile-quantile plots and statistical indices. Secondly, we considered the WRF-model PBL schemes that can be coupled to the urban surface exchange parametrizations and compared the simulation results with measurements from the two lower anemometers, located inside the canopy layer. We find that the PBL schemes accounting for TKE are more accurate, and the model representation of the roughness sub-layer improves when the urban model is coupled to each PBL scheme.
A hybrid finite-volume and finite difference scheme for depth-integrated non-hydrostatic model
Yin, Jing; Sun, Jia-wen; Wang, Xing-gang; Yu, Yong-hai; Sun, Zhao-chen
2017-06-01
A depth-integrated, non-hydrostatic model with a hybrid finite difference and finite volume numerical algorithm is proposed in this paper. By utilizing a fractional step method, the governing equations are decomposed into hydrostatic and non-hydrostatic parts. The first part is solved using the finite volume conservative discretization method, whilst the latter is handled by solving discretized Poisson-type equations with the finite difference method. Second-order accuracy, both in time and space, of the finite volume scheme is achieved by using an explicit predictor-corrector step and linear construction of the variable state in cells. The fluxes across the cell faces are computed in a Godunov-based manner using the MUSTA scheme. A slope- and flux-limiting technique is used to equip the algorithm with the total variation diminishing (TVD) property for shock-capturing purposes. Wave breaking is treated as a shock by switching off the non-hydrostatic pressure in the steep wave front locally. The model deals with moving wet/dry fronts in a simple way. Numerical experiments are conducted to verify the proposed model.
Ricciuto, D. M.; Yang, X.; Thornton, P. E.
2015-12-01
Soils contain the largest pool of carbon in terrestrial ecosystems. Soil carbon dynamics and the associated nutrient dynamics play significant roles in regulating the global carbon cycle and atmospheric CO2 concentrations. Our capability to predict future climate change depends to a large extent on a well-constrained representation of soil carbon dynamics in ESMs. Here we evaluate two decomposition schemes - converging trophic cascade (CTC) and Century - in CLM4.5/ACME V0 using data from the Long-term Intersite Decomposition Experiment Team (LIDET), radiocarbon (14C) observations, and the Harmonized World Soil Database (HWSD). For the evaluation against LIDET, we exercise the full CLM4.5/ACME V0 land model, including seasonal variability in nitrogen limitation and environmental scalars (temperature, moisture, O2), in order to represent the LIDET experiment in a realistic way. We show that the proper design of model experiments is crucial to model evaluation using data from field experiments such as LIDET. We also use 14C profile data at 10 sites to evaluate the performance of the CTC and Century decomposition schemes. We find that the 14C profiles at these sites are most sensitive to the depth-dependent decomposition parameters, consistent with previous studies.
International Nuclear Information System (INIS)
Guertin, Chantal
1995-01-01
This thesis is part of the validation process for using coupled 3D neutronics and thermal-hydraulics codes to study accidental situations with boiling. The first part is dedicated to a numerical stability analysis of coupled neutronics and thermal-hydraulics schemes. Both explicit and semi-implicit coupling schemes were applied to solve the set of equations describing the linearized neutronics and thermal-hydraulics of a point reactor. Point-reactor modelling was preferred in order to obtain analytical expressions for the eigenvalues of the discretized systems. Stability criteria based on these eigenvalues were calculated, as well as the neutronic and thermal-hydraulic responses of the system following insertion of a reactivity step. Results show no severe stability restriction on the time step. Actual transient calculations using coupled neutronics and thermal-hydraulics codes, such as COCCINELLE and THYC developed at Electricite de France, do not show stability problems. The second part introduces surface splines as a new neutronic feedback model. The cross-influences of the feedback parameters are now taken into account. Moderator temperature and density were modeled. This method, simple and accurate, allows a homogeneous description of cross-sections over all reactor operating situations, including accidents with boiling. (author) [fr
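The eigenvalue-based stability check described here can be sketched on a toy 2x2 linearized point-reactor system with thermal feedback (the coupling matrix below is illustrative, not from the thesis): an explicit coupling step is stable exactly when every eigenvalue of the amplification matrix lies in the unit disc.

```python
import cmath

def eig2(m):
    """Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the characteristic polynomial."""
    a, b, c, d = m[0][0], m[0][1], m[1][0], m[1][1]
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def explicit_coupling_stable(A, dt):
    """Explicit (Euler) coupling x^{k+1} = (I + dt*A) x^k is stable iff both
    eigenvalues of the amplification matrix G = I + dt*A lie in the unit disc."""
    G = [[1 + dt * A[0][0], dt * A[0][1]],
         [dt * A[1][0], 1 + dt * A[1][1]]]
    return all(abs(lam) <= 1 + 1e-12 for lam in eig2(G))
```

For a stable continuous system the explicit coupling remains stable only up to a maximum time step set by the fastest eigenvalue, which is the kind of restriction the thesis quantifies analytically.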
Directory of Open Access Journals (Sweden)
Xiangyang Zhou
2016-01-01
Full Text Available This paper describes a method to suppress the effect of nonlinear and time-varying mass unbalance torque disturbance on the dynamic performance of an aerial inertially stabilized platform (ISP). To improve the tracking accuracy and robustness of the ISP, a compound control scheme based on both model reference adaptive control (MRAC) and PID control is proposed. The dynamic model is first developed, which reveals the nonlinear and time-varying character of the unbalance torque disturbance. Then, the MRAC/PID compound controller is designed, in which the PID parameters are adaptively adjusted based on the output errors between the reference model and the actual system. In this way, the position errors derived from the prominent unbalance torque disturbance are corrected in real time so that the tracking accuracy is improved. To verify the method, simulations and experiments are carried out. The results show that the compound scheme rejects the mass unbalance disturbance well, by which the system obtains higher stability accuracy compared with the PID method.
International Nuclear Information System (INIS)
Chang, Chih-Hao; Liou, Meng-Sing
2007-01-01
In this paper, we propose a new approach to computing the compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in fluid density and velocity, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosions.
Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei
2015-01-01
Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium point (i.e., phenotype of an attractor) shifting, the process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton-Jacobi inequality-constrained optimization problem. The simulation revealed that the phenotypic shift of the lung cancer
Choudhury, Devanil; Das, Someshwar
2017-06-01
The Advanced Research WRF (ARW) model is used to simulate the Very Severe Cyclonic Storms (VSCS) Hudhud (7-13 October 2014), Phailin (8-14 October 2013) and Lehar (24-29 November 2013), to investigate the sensitivity of the forecast skill for tropical cyclone track and intensity to the microphysical schemes, for high-resolution (9 and 3 km) 120-hr model integrations. For cloud resolving grid scale (CONTROL forecast. This study aims to investigate the sensitivity to microphysics of the track and intensity with an explicitly resolved convection scheme. It shows that the Goddard one-moment bulk liquid-ice microphysical scheme provided the highest skill for the track, whereas for intensity both the Thompson and Goddard microphysical schemes perform better. The Thompson scheme indicates the highest skill in intensity at 48, 96 and 120 hr, whereas at 24 and 72 hr the Goddard scheme provides the highest skill in intensity. It is known that a higher-resolution domain produces better intensity and structure of the cyclones, and it is desirable to resolve the convection with sufficiently high resolution and with the use of explicit cloud physics. This study suggests that the Goddard cumulus ensemble microphysical scheme is suitable for high-resolution ARW simulation of tropical cyclone track and intensity over the Bay of Bengal (BoB). Although the present study is based on only three cyclones, it could be useful for planning real-time predictions using the ARW modelling system.
Energy Technology Data Exchange (ETDEWEB)
Buet, Ch.; Despres, B
2007-07-01
We address the discretization of Levermore's two-moment entropy model of the radiative transfer equation. We present a new approach for the discretization of this model: first we rewrite the moment equations as a compressible gas dynamics equation by introducing an additional quantity that plays the role of a density. After that we discretize using a Lagrange-projection scheme. The Lagrange-projection scheme permits us to incorporate the source terms in the fluxes of an acoustic solver in the Lagrange step, using the well-known piecewise steady approximation, and thus to capture correctly the diffusion regime. Moreover we show that the discretization is entropic and preserves the flux-limited property of the moment model. Numerical examples illustrate the feasibility of our approach. (authors)
Directory of Open Access Journals (Sweden)
Yandy G. Mayor
2015-01-01
Full Text Available This paper evaluates the sensitivity to cumulus and microphysics schemes, as represented in numerical simulations of the Weather Research and Forecasting model, in characterizing a deep convection event over the island of Cuba on 1 May 2012. To this end, 30 experiments combining five cumulus and six microphysics schemes, in addition to two experiments in which the cumulus parameterization was turned off, are tested in order to choose the combination that represents the event precipitation most accurately. ERA-Interim is used as lateral boundary condition data for the downscaling procedure. Results show that convective schemes are more important than microphysics schemes for determining the precipitation areas within a high-resolution domain simulation. Also, while one cumulus scheme captures the overall spatial convective structure of the event more accurately than others, it fails to capture the precipitation intensity. This apparent discrepancy leads to sensitivity related to the verification method used to rank the scheme combinations. This sensitivity is also observed in a comparison between parameterized and explicit cumulus formation when the Kain-Fritsch scheme was used. A loss of added value is also found when the Grell-Freitas cumulus scheme was activated at 1 km grid spacing.
Physics beyond the standard model in the non-perturbative unification scheme
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1990-01-01
The non-perturbative unification scenario predicts reasonably well the low-energy gauge couplings of the standard model. Agreement with the measured low-energy couplings is obtained by assuming a certain kind of physics beyond the standard model. A number of possibilities for physics beyond the standard model are examined. The best candidates so far are the standard model with eight fermionic families and a similar number of Higgs doublets, and the supersymmetric standard model with five families. (author)
High-order scheme for the source-sink term in a one-dimensional water temperature model.
Directory of Open Access Journals (Sweden)
Zheng Jing
Full Text Available The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.
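The two-step splitting can be sketched in a minimal form (grid, conductivity, and the simple explicit source update below are illustrative; the paper's source step uses a high-order undetermined-coefficient scheme): step 1 applies the source-sink term, step 2 advances diffusion with Crank-Nicolson via a tridiagonal (Thomas) solve.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def split_step(T, source, kappa, dt, dz):
    """One operator-split step: (1) explicit source-sink update,
    (2) Crank-Nicolson diffusion with fixed (Dirichlet) end temperatures."""
    n = len(T)
    T1 = [t + dt * s for t, s in zip(T, source)]   # step 1: source-sink term
    T1[0], T1[-1] = T[0], T[-1]                    # boundaries held fixed
    r = kappa * dt / (2.0 * dz * dz)
    m = n - 2                                      # interior unknowns
    a, b, c = [-r] * m, [1 + 2 * r] * m, [-r] * m
    d = [T1[i] + r * (T1[i - 1] - 2 * T1[i] + T1[i + 1]) for i in range(1, n - 1)]
    d[0] += r * T1[0]
    d[-1] += r * T1[-1]
    return [T1[0]] + thomas(a, b, c, d) + [T1[-1]]
```

A steady linear profile with zero source is an exact solution of the diffusion operator, so the scheme should leave it unchanged; a positive interior source should warm the interior nodes.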
Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin
2016-09-28
Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among the possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing security against the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs while maintaining the requirement of the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model is put forward, and a tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security-attribute failure is evaluated. Through the tradeoff analysis, we show that our scheme can enhance the security of WSNs, and that the optimal rekeying rate for the performance-security tradeoff can be obtained.
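The mean time to security failure of a CTMC can be computed by solving Q_T t = -1 over the transient states. A two-state sketch (secure / under-attack, with hypothetical attack, rekeying, and breach rates; not the paper's full hybrid model) solved in closed form by Cramer's rule:

```python
def mean_time_to_failure(qt):
    """Mean time to absorption of a CTMC from each transient state: solve
    Q_T * t = -1, where Q_T (2x2 here) is the generator restricted to the
    transient states; solved in closed form via Cramer's rule."""
    (a, b), (c, d) = qt
    det = a * d - b * c
    t_secure = (b - d) / det       # first column of Q_T replaced by [-1, -1]
    t_attacked = (c - a) / det     # second column of Q_T replaced by [-1, -1]
    return t_secure, t_attacked
```

With attack rate 0.2, rekeying (recovery) rate 1.0, and breach rate 0.1, raising the rekeying rate lengthens the mean time to failure at the cost of more rekeying energy, which is exactly the tradeoff lever such an analysis optimizes.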
J.K. Hoogland (Jiri); C.D.D. Neumann
2000-01-01
textabstractIn this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing
Energy Technology Data Exchange (ETDEWEB)
Kim, M; Rockhill, J; Phillips, M [University Washington, Seattle, WA (United States)
2016-06-15
Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46 Gy in 23 fractions to GTV1+2cm margin and an additional 14 Gy in 7 fractions to GTV2+2cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for a GTV2 with a radius of 1 and 2 cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm, respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small) - 3.5 (large) cm margins to GTV1, and a 1.5 cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0 cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5 cm (average-move) and 1.5–3.0 cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, increased tumor EUD by 25.3–49.3%, and increased the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to
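The EUD metric used throughout is commonly computed as the generalized EUD (gEUD); a short sketch of the textbook formula (the volume parameter a below is a hypothetical choice, and this is the standard definition rather than the authors' code):

```python
def gEUD(doses, a):
    """Generalized equivalent uniform dose: gEUD = (mean(d_i ** a)) ** (1/a)
    over the voxel doses d_i. a = 1 gives the mean dose; large negative a
    (tumors) is dominated by cold spots; large positive a (serial organs)
    approaches the maximum dose. Assumes strictly positive doses for a < 0."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)
```

A uniform dose distribution returns that same dose for any a, which is the defining property of an equivalent uniform dose; a single cold voxel pulls the tumor gEUD down sharply.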
Conceptual design and modeling of a six-dimensional bunch merging scheme for a muon collider
Directory of Open Access Journals (Sweden)
Yu Bao
2016-03-01
Full Text Available A high luminosity muon collider requires single, intense, muon bunches with small emittances: just one of each sign. An efficient front end and a cooling channel have been designed and simulated within the collaboration of the Muon Accelerator Program. The muons are first bunched and phase rotated into 21 bunches, and then cooled in six dimensions. When they are cool enough, they are merged into single bunches: one of each sign. The bunch merging scheme has been outlined with preliminary simulations in previous studies. In this paper we present a comprehensive design with its end-to-end simulation. The 21 bunches are first merged in longitudinal phase space into seven bunches. These are directed into seven “trombone” paths with different lengths, to bring them to the same time, and then merged transversely in a collecting “funnel” into the required single larger bunches. Detailed numerical simulations show that the 6D emittance of the resulting bunch reaches the parameters needed for high acceptance into the downstream cooling channel.
Directory of Open Access Journals (Sweden)
A. Roy
2013-06-01
Full Text Available Snow grain size is a key parameter for modeling microwave snow emission properties and the surface energy balance because of its influence on the snow albedo, thermal conductivity and diffusivity. A model of the specific surface area (SSA) of snow was implemented in the one-layer snow model in the Canadian LAnd Surface Scheme (CLASS) version 3.4. This offline multilayer model (CLASS-SSA) simulates the decrease of SSA based on snow age, snow temperature and the temperature gradient under dry snow conditions, while it considers the liquid water content of the snowpack for wet snow metamorphism. We compare the model with ground-based measurements from several sites (alpine, arctic and subarctic) with different types of snow. The model provides simulated SSA in good agreement with measurements, with an overall point-to-point comparison root mean square error (RMSE) of 8.0 m² kg⁻¹ and an RMSE of 5.1 m² kg⁻¹ for the snowpack average SSA. The model, however, is limited under wet conditions due to the single-layer nature of the CLASS model, leading to a single liquid water content value for the whole snowpack. The SSA simulations are of great interest for satellite passive microwave brightness temperature assimilation, snow mass balance retrievals and surface energy balance calculations with associated climate feedbacks.
The global increase of noxious bloom occurrences has increased the need for phytoplankton management schemes. Such schemes require the ability to predict phytoplankton succession. Equilibrium Resources Competition theory, which is popular for predicting succession in lake systems...
Modeling the Structural Response of Reinforced Glass Beams using an SLA Scheme
Louter, P.C.; Graaf, van de Anne; Rots, J.G.; Bos, Freek; Louter, Pieter Christiaan; Veer, Fred
2010-01-01
This paper investigates whether a novel computational sequentially linear analysis (SLA) technique, which is especially developed for modeling brittle material response, is applicable for modeling the structural response of metal reinforced glass beams. To do so, computational SLA results are
Talib, A.; Desai, A. R.
2017-12-01
The Central Sands region of Wisconsin is characterized by productive trout streams, lakes, farmland and forest. However, stream channelization, past wetland drainage, and groundwater withdrawals have disrupted the hydrology of the Central Sands region. Climatically driven conditions in the last decade (2000-2008) alone are unable to account for the severely depressed water levels. Increased interception and evapotranspiration from afforested areas in the Central Sands may also be a culprit for reduced water recharge. Hence, there is a need to study the cumulative effects of changing precipitation patterns, groundwater withdrawals, and forest evapotranspiration to improve projections of the future of lake levels and water availability in this region. Here, the SWAT-MODFLOW coupled model approach was applied at a large spatio-temporal scale. The coupled model fully integrates a watershed model (SWAT) with a groundwater flow model (MODFLOW). Surface water and groundwater flows were simulated integratively at a daily time step to estimate the groundwater discharge to the stream network in the Central Sands, which encompasses high-capacity wells. The model was calibrated (2010-2013) and validated (2014-2017) based on streamflow, groundwater extraction, and water table elevation. As the long-term trends in some of the primary drivers are presently ambiguous in the Central Sands under future climate, as is the case for total precipitation or the timing of precipitation, we relied on a sensitivity study to quantitatively assess how primary and secondary drivers may influence future net groundwater recharge. We demonstrate how such an approach could then be coupled with decision-making models to evaluate the effectiveness of groundwater withdrawal policies under a changing climate.
Tateo, Andrea; Marcello Miglietta, Mario; Fedele, Francesca; Menegotto, Micaela; Monaco, Alfonso; Bellotti, Roberto
2017-04-01
The Weather Research and Forecasting mesoscale model (WRF) was used to simulate hourly 10 m wind speed and direction over the city of Taranto, Apulia region (south-eastern Italy). This area is characterized by a large industrial complex, including the largest European steel plant, and is subject to a Regional Air Quality Recovery Plan. This plan constrains industries in the area to reduce the mean daily emissions from diffuse and point sources by 10 % during specific meteorological conditions named wind days. According to the Recovery Plan, the Regional Environmental Agency ARPA-PUGLIA is responsible for forecasting these specific meteorological conditions 72 h in advance and possibly issuing an early warning. In particular, an accurate wind simulation is required. Unfortunately, numerical weather prediction models suffer from errors, especially in near-surface fields. These errors depend primarily on uncertainties in the initial and boundary conditions provided by global models and secondly on the model formulation, in particular the physical parametrizations used to represent processes such as turbulence, radiation exchange, cumulus and microphysics. In our work, we tried to compensate for the latter limitation by using different Planetary Boundary Layer (PBL) parameterization schemes. Five combinations of PBL and Surface Layer (SL) schemes were considered. Simulations are implemented in a real-time configuration, since our intention is to analyze the same configuration implemented by ARPA-PUGLIA for operational runs; the validation is focused on a time range extending from 49 to 72 h with hourly time resolution. The assessment of the performance was computed by comparing the WRF model output with ground data measured at a weather monitoring station in Taranto, near the steel plant. After the analysis of the simulations performed with different PBL schemes, both simple (e.g. average) and more complex post-processing methods (e.g. weighted average
Line Differential Protection Scheme Modelling for Underground 420 kV Cable Systems
DEFF Research Database (Denmark)
Sztykiel, Michal; Bak, Claus Leth; Dollerup, Sebastian
2011-01-01
models can be applied with various systems, allowing to obtain the most optimal configuration of the protective relaying. The present paper describes modelling methodology on the basis of Siemens SIPROTEC 4 7SD522/610. Relay model was verified experimentally with its real equivalent by both EMTP...
Asinari, Pietro
2009-11-01
A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported to demonstrate the full capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.
Li, Haifeng; Cui, Guixiang; Zhang, Zhaoshun
2018-04-01
A coupling scheme is proposed for the simulation of microscale flow and dispersion in which both the mesoscale field and small-scale turbulence are specified at the boundary of a microscale model. The small-scale turbulence is obtained individually in the inner and outer layers by the transformation of pre-computed databases, and then combined in a weighted sum. Validation of the results of a flow over a cluster of model buildings shows that the inner- and outer-layer transition height should be located in the roughness sublayer. Both the new scheme and the previous scheme are applied in the simulation of the flow over the central business district of Oklahoma City (a point source during intensive observation period 3 of the Joint Urban 2003 experimental campaign), with results showing that the wind speed is well predicted in the canopy layer. Compared with the previous scheme, the new scheme improves the prediction of the wind direction and turbulent kinetic energy (TKE) in the canopy layer. The flow field influences the scalar plume in two ways, i.e. the averaged flow field determines the advective flux and the TKE field determines the turbulent flux. Thus, the mean, root-mean-square and maximum of the concentration agree better with the observations with the new scheme. These results indicate that the new scheme is an effective means of simulating the complex flow and dispersion in urban canopies.
Proposed Robot Scheme with 5 DoF and Dynamic Modelling Using Maple Software
Shala Ahmet; Bruçi Mirlind
2017-01-01
This paper presents the dynamical modelling of robots, which is commonly the first important step in the modelling, analysis and control of robotic systems. The paper is focused on using the Denavit-Hartenberg (DH) convention for the kinematics and the Newton-Euler formulation for the dynamic modelling of a 5-DoF (degree of freedom) 3D robot. The dynamical model is derived using the Maple software. The derived dynamical model of the 5-DoF robot is converted for Matlab use for future analysis, control ...
Proposed Robot Scheme with 5 DoF and Dynamic Modelling Using Maple Software
Directory of Open Access Journals (Sweden)
Shala Ahmet
2017-11-01
Full Text Available This paper presents the dynamical modelling of robots, which is commonly the first important step in the modelling, analysis and control of robotic systems. The paper is focused on using the Denavit-Hartenberg (DH) convention for the kinematics and the Newton-Euler formulation for the dynamic modelling of a 5-DoF (degree of freedom) 3D robot. The dynamical model is derived using the Maple software. The derived dynamical model of the 5-DoF robot is converted for Matlab use for future analysis, control and simulations.
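The DH convention assembles the kinematics from one homogeneous transform per joint; a minimal sketch (a planar two-link arm with unit link lengths as a hypothetical example, not the paper's 5-DoF robot):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform from frame i-1 to frame i,
    for joint angle theta, link offset d, link length a, and link twist alpha."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul4(A, B):
    """4x4 matrix product, used to chain the per-joint DH transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Chaining one matrix per row of the DH table yields the end-effector pose; the dynamics would then follow from the Newton-Euler recursion over the same frames.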
Zhang, Bin; Deng, Congying; Zhang, Yi
2018-03-01
Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of the rolling element bearing, the degradation needs to be well represented and modelled. For this purpose, the degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of the bearing health relevant information from condition monitoring sensor data. Effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
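In the classic delay-time formulation, a defect arises, survives a random "delay time" h, and becomes a failure unless an inspection finds it first. A minimal sketch (the exponential delay distribution and the rates are illustrative assumptions, since the abstract does not give the authors' parameterization): for defects arriving uniformly over an inspection interval T with delay-time CDF F, the fraction becoming failures is b(T) = (1/T) * integral of F(T - u) du over [0, T].

```python
import math

def failure_fraction(interval, mean_delay):
    """Delay-time model: expected fraction of defects that turn into failures
    before the next inspection, assuming defects arrive uniformly over the
    inspection interval and the delay time is exponential with the given mean:
    b(T) = 1 - (mu / T) * (1 - exp(-T / mu))."""
    T, mu = interval, mean_delay
    return 1.0 - (mu / T) * (1.0 - math.exp(-T / mu))
```

Shorter inspection intervals catch more defects while still in the defective-but-working state, which is precisely what an earlier-responding health indicator buys in practice.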
Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.
Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J
2016-01-01
Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are
International Nuclear Information System (INIS)
Hua Jinsong; Lin Ping; Liu Chun; Wang Qi
2011-01-01
Highlights: → We study phase-field models for multi-phase flow computation. → We develop an energy-law preserving C0 FEM. → We show that the energy-law preserving method works better. → We overcome unphysical oscillation associated with the Cahn-Hilliard model. - Abstract: We use the idea in to develop the energy-law preserving method and compute the diffusive interface (phase-field) models of Allen-Cahn and Cahn-Hilliard type, respectively, governing the motion of two-phase incompressible flows. We discretize these two models using a C0 finite element in space and a modified midpoint scheme in time. To increase the stability in the pressure variable we treat the divergence-free condition by a penalty formulation, under which the discrete energy law can still be derived for these diffusive interface models. Through an example we demonstrate that the energy-law preserving method is beneficial for computing these multi-phase flow models. We also demonstrate that when applying the energy-law preserving method to the model of Cahn-Hilliard type, unphysical interfacial oscillations may occur. We examine the source of such oscillations and present a remedy to eliminate them. A few two-phase incompressible flow examples are computed to show the good performance of our method.
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
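The GLUE workflow this abstract applies to CSLM can be sketched in a few lines: Monte Carlo sampling of a parameter, an informal likelihood from model-observation misfit, a behavioural threshold, and likelihood-weighted uncertainty bounds. The toy "lake model", the prior range and the likelihood choice below are illustrative assumptions, not parts of CSLM or PSUADE.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)

def toy_lake_model(kd):
    # hypothetical stand-in for CSLM: a flux response decaying with Kd
    return np.exp(-5.0 * kd * t)

# synthetic "observations" generated from a known Kd plus noise
obs = toy_lake_model(0.4) + rng.normal(0.0, 0.02, t.size)

# 1. Monte Carlo sampling of the parameter from its prior range
kd_samples = rng.uniform(0.1, 1.0, 2000)
sims = np.array([toy_lake_model(k) for k in kd_samples])

# 2. informal likelihood: inverse sum of squared errors
likelihood = 1.0 / ((sims - obs) ** 2).sum(axis=1)

# 3. retain "behavioural" parameter sets, normalise their likelihoods
behavioural = likelihood > np.percentile(likelihood, 90)
w = likelihood[behavioural] / likelihood[behavioural].sum()

# 4. likelihood-weighted posterior mean and 5-95 % prediction bounds
kd_post = float(np.sum(w * kd_samples[behavioural]))
lo, hi = [], []
for j in range(t.size):
    vals = sims[behavioural][:, j]
    idx = np.argsort(vals)
    cdf = np.cumsum(w[idx])
    lo.append(vals[idx][np.searchsorted(cdf, 0.05)])
    hi.append(vals[idx][np.searchsorted(cdf, 0.95)])
print(f"posterior Kd estimate: {kd_post:.2f}")
```

The behavioural threshold and the informal likelihood are subjective choices, which is exactly the equifinality issue the abstract raises: many parameter sets survive the threshold, and the spread of `lo`/`hi` quantifies the resulting flux uncertainty.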
Directory of Open Access Journals (Sweden)
Shahriar Afandizadeh
2016-02-01
Full Text Available Congestion pricing has been recognized as an effective countermeasure for mitigating urban traffic congestion. Despite its positive effects, its implementation has faced problems. This paper investigates the issue of environmental equity in cordon pricing and a park-and-ride scheme. Although pollution decreases inside the cordon when cordon pricing is implemented, air pollutant emissions may increase on some links and in the network as a whole. An increase in emissions in the network therefore means more emissions outside the cordon. In effect, this policy may transfer air pollutant emissions from inside to outside the cordon, creating a type of environmental inequity. To reduce this inequity, a bi-level optimization model with an equity constraint is developed. The proposed solution algorithm, based on the second version of the strength Pareto evolutionary algorithm (SPEA2), is applied to the city network of Tehran. The results revealed that it is reasonable to consider environmental equity as an objective function in cordon pricing. In addition, a sustainable situation for the transportation system can be created by improving environmental inequity with a relatively small reduction in social welfare. Moreover, environmental inequity impacts exist in real networks and should be considered in the cordon pricing scheme.
Kiessling, Jonas
2014-05-06
Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition, it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.
Yao, Weiguang; Merchant, Thomas E; Farr, Jonathan B
2016-10-03
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
Gao, Min
2014-09-01
In this paper, we develop an efficient numerical method for the two-phase moving contact line problem with variable density, viscosity, and slip length. The physical model is based on a phase field approach, which consists of a coupled system of the Cahn-Hilliard and Navier-Stokes equations with the generalized Navier boundary condition [1,2,5]. To overcome the difficulties due to the large density and viscosity ratio, the Navier-Stokes equations are solved by a splitting method based on a pressure Poisson equation [11], while the Cahn-Hilliard equation is solved by a convex splitting method. We show that the method is stable under certain conditions. The linearized schemes are easy to implement and introduce only a mild CFL time constraint. Numerical tests are carried out to verify the accuracy, stability and efficiency of the schemes. The method allows us to simulate interface problems with extremely small interface thickness. Three-dimensional simulations are included to validate the efficiency of the method. © 2014 Elsevier Inc.
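A minimal illustration of the implicit-explicit idea behind such schemes, not the paper's coupled Cahn-Hilliard/Navier-Stokes solver: a one-dimensional Cahn-Hilliard equation advanced with a linearly implicit Fourier scheme that treats the stiff fourth-order term implicitly and the nonlinearity explicitly. All parameters are illustrative.

```python
import numpy as np

N, L = 128, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
eps, dt = 0.1, 1e-3

def step(u):
    # stiff biharmonic term implicit, double-well nonlinearity explicit:
    # u_t = Laplacian(u^3 - u) - eps^2 * Biharmonic(u)
    nl_hat = np.fft.fft(u ** 3 - u)
    u_hat = (np.fft.fft(u) - dt * k ** 2 * nl_hat) / (1.0 + dt * eps ** 2 * k ** 4)
    return np.real(np.fft.ifft(u_hat))

u = 0.05 * np.cos(3 * x)   # small perturbation of the unstable mixed state
mass0 = u.mean()
for _ in range(2000):
    u = step(u)

# the k = 0 mode is untouched, so total mass is conserved exactly
print(f"mass drift: {abs(u.mean() - mass0):.1e}, max |u|: {np.abs(u).max():.2f}")
```

The implicit treatment of the fourth-order term removes the severe `dt ~ dx^4` restriction of a fully explicit method, which is the same motivation the abstract gives for its splitting approach.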
A New Scheme for Experimental-Based Modeling of a Traveling Wave Ultrasonic Motor
DEFF Research Database (Denmark)
Mojallali, Hamed; Amini, R.; Izadi-Zamanabadi, Roozbeh
2005-01-01
In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit...
Line Differential Protection Scheme Modelling for Underground 420 kV Cable Systems
DEFF Research Database (Denmark)
Sztykiel, Michal; Bak, Claus Leth; Wiechowski, Wojciech
2010-01-01
can be applied with various systems, allowing the optimal configuration of the protective relaying to be obtained. The present paper describes the modelling methodology on the basis of the Siemens SIPROTEC 4 7SD522/610. The relay model was verified experimentally against its real equivalent by both EMTP...
A general scheme for training and optimization of the Grenander deformable template model
DEFF Research Database (Denmark)
Fisker, Rune; Schultz, Nette; Duta, N.
2000-01-01
parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using...
Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.
2014-01-01
Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to samplers based on random walk chains (such as DREAM-zs): a sampler based on independence chains with an embedded feature of
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require long rainfall data at fine time scales, varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall over a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
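The adjusting step at the heart of such disaggregation schemes can be sketched as a proportional rescaling: the synthetic fine-scale depths are scaled so that they aggregate exactly to the observed coarser-scale total. The exponential draw below is only a placeholder for a Bartlett-Lewis simulation, and the daily value is made up.

```python
import numpy as np

rng = np.random.default_rng(2)

def proportional_adjust(fine, coarse_total):
    """Scale fine-scale depths so they aggregate exactly to the coarser total."""
    s = fine.sum()
    return fine * (coarse_total / s) if s > 0 else fine

daily_obs = 12.0                             # observed daily depth in mm (made up)
synthetic_hourly = rng.exponential(0.4, 24)  # stand-in for a Bartlett-Lewis draw
adjusted = proportional_adjust(synthetic_hourly, daily_obs)
print(f"adjusted daily sum: {adjusted.sum():.6f} mm")
```

The rescaling preserves the within-day temporal pattern of the synthetic event while enforcing consistency with the higher-level variable, which is the role the adjusting procedures play in the scheme described above.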
Directory of Open Access Journals (Sweden)
Prokhorov V.B.,
2018-04-01
Full Text Available The important problem of developing low-cost technologies that can provide a deep decrease in the concentration of nitrogen oxides while maintaining fuel burn-up efficiency is considered. This paper presents the results of a study of the aerodynamics of the furnace of boiler TPP-210A, on the basis of physical and mathematical models, for the case when the boiler is retrofitted from liquid to solid slag removal, with a two- to threefold reduction of nitrogen oxide emissions, and the vortex burners are replaced with direct-flow burners. The need for these studies is due to the fact that direct-flow burners are "collective action" burners, and efficient fuel combustion can be provided only by the interaction of fuel jets and secondary and tertiary air jets in the furnace volume. A new scheme of air-staged combustion in a system of vertical vortexes of opposite rotation, with direct-flow burners and nozzles and direct injection of Kuznetsky lean coal dust, was developed. In order to test the functional ability and efficiency of the proposed combustion scheme, studies on the physical model of the boiler furnace and the mathematical model of the experimental furnace bench were carried out for the case of an isothermal fluid flow. Comparison showed an acceptable degree of agreement between these results. In all studied regimes, pronounced vortices remain in both the vertical and horizontal planes, which indicates a high degree of mass exchange between jets and combustion products and the stability of the furnace aerodynamics to changes in regime factors.
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
Directory of Open Access Journals (Sweden)
J.-L. Guerrero
2017-12-01
Full Text Available Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere – heat-exchange fluxes – is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue – different parameter-value combinations yielding equivalent results – the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
DEFF Research Database (Denmark)
Mashayekhi, Sima; Hugger, Jens
2015-01-01
Several nonlinear Black-Scholes models have been proposed to take transaction cost, large investor performance and illiquid markets into account. One of the most comprehensive models introduced by Barles and Soner in [4] considers transaction cost in the hedging strategy and risk from an illiquid...... market. In this paper, we compare several finite difference methods for the solution of this model with respect to precision and order of convergence within a computationally feasible domain allowing at most 200 space steps and 10000 time steps. We conclude that standard explicit Euler comes out...
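As a baseline for the comparison described above, the standard explicit Euler finite difference method can be sketched on the linear Black-Scholes equation (the nonlinear Barles-Soner volatility adjustment is omitted here). The market parameters are illustrative; the grid uses the 200 space steps and 10000 time steps the abstract states as the computational budget.

```python
import numpy as np

# illustrative parameters (not from the paper); European call option
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
M, N = 200, 10000                  # space and time steps, per the stated budget
S = np.linspace(0.0, 4 * K, M + 1)
dS, dt = S[1] - S[0], T / N        # dt = 1e-4 < dS**2 / (sigma * S.max())**2

V = np.maximum(S - K, 0.0)         # terminal payoff, marched backward in time
for _ in range(N):
    d1 = (V[2:] - V[:-2]) / (2 * dS)              # central first derivative
    d2 = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2   # central second derivative
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * d2
                     + r * S[1:-1] * d1 - r * V[1:-1])
    V[0], V[-1] = 0.0, S[-1] - K * np.exp(-r * T)  # crude far-field boundary

i_atm = int(np.argmin(np.abs(S - K)))
print(f"at-the-money price: {V[i_atm]:.2f}")  # analytic value is about 10.45
```

With these grid sizes the explicit scheme satisfies its stability restriction; a nonlinear model in the Barles-Soner spirit would replace `sigma**2` with a volatility that depends on the second derivative `d2`.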
Coupled Atmospheric Chemistry Schemes for Modeling Regional and Global Atmospheric Chemistry
Saunders, E.; Stockwell, W. R.
2016-12-01
Atmospheric chemistry models require chemical reaction mechanisms to simulate the production of air pollution. GACM (Global Atmospheric Chemistry Mechanism) is intended for use in global-scale atmospheric chemistry models to provide chemical boundary conditions for regional-scale simulations by models such as CMAQ. GACM includes additional chemistry for marine environments while reducing its treatment of the chemistry needed for highly polluted urban regions. This keeps GACM's size small enough to allow it to be used efficiently in global models. GACM's chemistry of volatile organic compounds (VOC) is highly compatible with the VOC chemistry in RACM2, allowing a global model with GACM to provide VOC boundary conditions to a regional-scale model with RACM2 with reduced error. The GACM-RACM2 system of mechanisms should yield more accurate forecasts by regional air quality models such as CMAQ. Chemical box models coupled with the regional and global atmospheric chemistry mechanisms (RACM2 & GACM) will be used to simulate tropospheric ozone, nitrogen oxides, and volatile organic compounds produced in regional and global domains. The simulations will focus on Los Angeles' South Coast Air Basin (SoCAB), where the Pacific Ocean meets a highly polluted urban area. The two mechanisms will be compared on the basis of simulated ozone concentrations over this marine-urban region. Simulations made with the more established RACM2 will be compared with simulations made with the newer GACM. In addition, WRF-Chem will be used to simulate how RACM2 produces regional simulations of tropospheric ozone and NOx, which can then be further analyzed for air quality impacts. Both the regional and global models in WRF-Chem will be used to predict how the concentrations of ozone and nitrogen oxides change over land and ocean. The air quality model simulation results will be applied to EPA's BenMAP-CE (Environmental Benefits Mapping & Analysis Program-Community Edition)
Model Reference Adaptive Scheme for Multi-drug Infusion for Blood Pressure Control
Enbiya, Saleh; Mahieddine, Fatima; Hossain, Alamgir
2011-01-01
Using multiple interacting drugs to control both the mean arterial pressure (MAP) and cardiac output (CO) of patients with different sensitivity to drugs is a challenging task, which this paper attempts to address. A multivariable model reference adaptive control (MRAC) algorithm is developed using a two-input, two-output patient model. The control objective is to maintain the hemodynamic variables MAP and CO at normal values by simultaneously administering two drugs: sodium nitroprusside ...
DEFF Research Database (Denmark)
Clausen, Bjørn; Lorentzen, Torben
1997-01-01
The uniaxial behavior of aluminum polycrystals is simulated using a rate-independent incremental self-consistent elastic-plastic polycrystal deformation model, and the results are evaluated by neutron diffraction measurements. The elastic strains deduced from the model show good agreement...... with the experimental results for the 111 and 220 reflections, whereas the predicted elastic strain level for the 200 reflection is, in general, approximately 10 pct too low in the plastic regime....
Pontes, J.; Walgraef, D.; Christov, C. I.
2010-11-01
Strain localization and dislocation pattern formation are typical features of plastic deformation in metals and alloys. Glide and climb dislocation motion, along with the accompanying production/annihilation processes of dislocations, lead to instabilities of initially uniform dislocation distributions. These instabilities result in the development of various types of dislocation micro-structures, such as dislocation cells, slip and kink bands, persistent slip bands, labyrinth structures, etc., depending on the externally applied loading and the intrinsic lattice constraints. The Walgraef-Aifantis (WA) model (Walgraef and Aifantis, J. Appl. Phys., 58, 668, 1985) is an example of a reaction-diffusion model of coupled nonlinear equations which describe the formation of forest (immobile) and gliding (mobile) dislocation densities in the presence of cyclic loading. This paper discusses two versions of the WA model and focuses on a finite difference, second-order-in-time Crank-Nicolson semi-implicit scheme, with internal iterations at each time step and a spatial splitting using the Stabilizing Correction scheme (Christov and Pontes, Mathematical and Computer Modelling, 35, 87, 2002), for solving the model evolution equations in two dimensions. The results of two simulations are presented. More complete results will appear in a forthcoming paper.
Fuentes-Franco, Ramón; Giorgi, Filippo; Coppola, Erika; Zimmermann, Klaus
2017-07-01
The sensitivity of simulated tropical cyclones (TCs) to resolution, convection scheme and ocean surface flux parameterization is investigated with a regional climate model (RegCM4) over the CORDEX Central America domain, including the Tropical North Atlantic (TNA) and Eastern Tropical Pacific (ETP) basins. Simulations for the TC seasons of the ten-year period (1989-1998) driven by ERA-Interim reanalysis fields are completed using 50 and 25 km grid spacing, two convection schemes (Emanuel, Em; and Kain-Fritsch, KF) and two ocean surface flux representations, a Monin-Obukhov scheme available in the BATS land surface package (Dickinson et al. 1993), and the scheme of Zeng et al. (J Clim 11(10):2628-2644, 1998). The model performance is assessed against observed TC characteristics for the simulation period. In general, different sensitivities are found over the two basins investigated. The simulations using the KF scheme show higher TC density, longer TC duration (up to 15 days) and stronger peak winds (>50 m s^{−1}) than those using Em (<40 m s^{−1}). All simulations show a better spatial representation of simulated TC density and interannual variability over the TNA than over the ETP. The 25 km resolution simulations show greater TC density, duration and intensity compared to the 50 km resolution ones, especially over the ETP basin, and generally more in line with observations. Simulated TCs show a strong sensitivity to ocean fluxes, especially over the TNA basin, with the Monin-Obukhov scheme leading to an overestimate of the TC number, and the Zeng scheme being closer to observations. All simulations capture the density of cyclones during active TC seasons over the TNA; however, without data assimilation, the tracks of individual events do not match closely the corresponding observed ones. Overall, the best model performance is obtained when using the KF and Zeng schemes at 25 km grid spacing.
Directory of Open Access Journals (Sweden)
O. Yu. Mukhomorova
2015-01-01
Full Text Available Anti-angiogenesis therapy is an alternative and successfully employed method for treating cancerous tumours. However, this therapy is not widely used in medicine because the drugs are expensive, which naturally leads to the elaboration of treatment regimens that use a minimum amount of drugs. The aim of the paper is to investigate a model of the development of the illness and to elaborate appropriate treatment regimens for the case of early diagnosis of the disease. The given model reflects the therapy at an intermediate stage of treatment. Further treatment is aimed at destroying cancer cells and may be continued by other means, which are not reflected in the model. Analysis of the main properties of the model was carried out by considering two types of auxiliary systems. In the first case, the system is considered without control, as a model of tumour development in the absence of medical treatment. The study of the equilibrium point and the determination of its type allowed us to describe the disease dynamics and to determine the tumour size resulting in death. In the second case, a model with a constant control was investigated. The study of its equilibrium point showed that continuous control is not sufficient to support a satisfactory patient condition, and more complex treatment regimens are necessary. For this purpose, we used the method of terminal problems, which consists of searching for a program control that drives the system to a given final state. The initial and final states are selected on medical grounds. As a result, we found two treatment regimens: a one-stage regimen and a multi-stage one. The properties of each regimen are analyzed and compared, with the total amount of drugs used as the criterion for comparison. The theoretical conclusions obtained in this work are supported by computer modeling in the MATLAB environment.
Energy Technology Data Exchange (ETDEWEB)
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
Directory of Open Access Journals (Sweden)
Pascalle C. Smith
2012-11-01
Full Text Available This paper presents a simple approach for estimating the spatial and temporal variability of the seasonal net irrigation water requirement (IWR) at the catchment scale, based on gridded land use, soil and daily weather data at 500 × 500 m resolution. In this approach, IWR is expressed as a bounded, linear function of the atmospheric water budget, whereby the latter is defined as the difference between seasonal precipitation and reference evapotranspiration. To account for the effects of soil and crop properties on the soil water balance, the coefficients of the linear relation are expressed as a function of the soil water holding capacity and the so-called crop coefficient. The 12 parameters defining the relation were estimated with good coefficients of determination from a systematic analysis of simulations performed at a daily time step with an FAO-type point-scale model for five climatically contrasted sites around the River Rhone and for combinations of six crop and ten soil types. The simple scheme was found to reproduce well the results obtained with the daily model at six additional verification sites. We applied the simple scheme to the assessment of irrigation requirements in the whole Swiss Rhone catchment. The results suggest seasonal requirements of 32 × 10^{6} m^{3} per year on average over 1981–2009, half of which at altitudes above 1500 m. They also disclose a positive trend in the intensity of extreme events over the study period, with an estimated total IWR of 55 × 10^{6} m^{3} in 2009, and indicate a 45% increase in the water demand of grasslands during the 2003 European heat wave in the driest area of the studied catchment. In view of its simplicity, the approach can be extended to other applications, including assessments of the impacts of climate and land-use change.
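The bounded linear relation described above can be sketched as follows. The coefficient values and their dependence on the crop coefficient and soil water holding capacity are purely illustrative placeholders; in the paper the 12 parameters were fitted against a daily FAO-type point-scale model.

```python
def net_iwr(precip_mm, et0_mm, kc, whc_mm,
            a0=40.0, a1=0.1, b0=-0.7, b1=0.0005):
    """Bounded linear net irrigation water requirement (mm per season).

    Illustrative sketch: coefficients a0, a1, b0, b1 are made-up
    placeholders, not the fitted parameters from the study.
    """
    budget = precip_mm - et0_mm          # atmospheric water budget P - ET0
    intercept = a0 * kc - a1 * whc_mm    # modulated by crop coefficient
    slope = b0 * kc + b1 * whc_mm        # and soil water holding capacity
    iwr = intercept + slope * budget
    # bounded below by zero and above by the crop's potential demand
    return min(max(iwr, 0.0), kc * et0_mm)

print(net_iwr(300.0, 600.0, 1.0, 100.0))  # drier budget -> higher requirement
```

The clamping reproduces the "bounded" part of the relation: a wet season yields zero requirement, and the requirement can never exceed the crop's potential evapotranspiration.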
CSIR Research Space (South Africa)
Jovanovic, Nebo
2017-01-01
Full Text Available Water SA Vol. 43 No. 1, January 2017 (ISSN 1816-7950, online; http://dx.doi.org/10.4314/wsa.v43i1.15). Published under a Creative Commons Attribution Licence. Hydrogeological modelling of the Atlantis aquifer for management support and the delineation of groundwater protection zones. Keywords: groundwater abstraction; managed aquifer recharge; MODFLOW; particle tracking; scenario modelling
Experimental Modeling of Monolithic Resistors for Silicon ICS with a Robust Optimizer-Driving Scheme
Directory of Open Access Journals (Sweden)
Philippe Leduc
2002-06-01
Full Text Available Today, an exhaustive library of models describing the electrical behavior of integrated passive components in the radio-frequency range is essential for the simulation and optimization of complex circuits. In this work, a preliminary study has been carried out on Tantalum Nitride (TaN) resistors integrated on silicon, leading to a single π-type lumped-element circuit. An efficient extraction technique is presented to provide a computer-driven optimizer with relevant initial model parameter values (the "guess-timate"). The results show the uniqueness, in most cases, of the lumped-element determination, which leads to a precise simulation of self-resonant frequencies.
Directory of Open Access Journals (Sweden)
P. Jiménez-Guerrero
2011-05-01
Full Text Available A number of attempts have been made to incorporate sea-salt aerosol (SSA) source functions in chemistry transport models, with varying results according to the complexity of the scheme considered. This contribution compares the inclusion of two different SSA algorithms in two chemistry transport models: CMAQ and CHIMERE. The main goal is to examine the differences in average SSA mass and composition and to study the seasonality of the prediction of SSA when applied to the Mediterranean area with high resolution for a reference year. Dry and wet deposition schemes are also analyzed to better understand the differences observed between both models in the target area. The emission algorithm applied in CHIMERE uses a semi-empirical formulation which obtains the surface emission rate of SSA as a function of the particle size and the surface wind speed raised to the power 3.41. The emission parameterization included within CMAQ is somewhat more sophisticated, since fluxes of SSA are corrected with relative humidity. In order to evaluate their strengths and weaknesses, the participating algorithms as implemented in the chemistry transport models were evaluated against AOD measurements from AERONET and available surface measurements in Southern Europe and the Mediterranean area, showing biases around −0.002 and −1.2 μg m^{−3}, respectively. The results indicate that both models accurately represent the patterns and dynamics of SSA and its non-uniform behavior in the Mediterranean basin, showing a strong seasonality. The levels of SSA vary strongly across the Western and the Eastern Mediterranean, with CHIMERE reproducing higher annual levels in the Aegean Sea (12 μg m^{−3}) and CMAQ in the Gulf of Lion (9 μg m^{−3}). The large difference found for the ratio PM_{2.5}/total SSA in CMAQ and CHIMERE is also investigated. The dry and wet removal rates are very similar for both models despite the different schemes
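The two emission formulations contrasted above can be sketched schematically: a flux proportional to the 10 m wind speed raised to the power 3.41 (the CHIMERE-style formulation), optionally multiplied by a relative-humidity factor (the CMAQ-style correction). The prefactor and the RH factor below are placeholders, not the models' actual formulas.

```python
def ssa_flux(u10, c=1.0):
    """Sea-salt emission flux as a power law of the 10 m wind speed.

    The exponent 3.41 is the one quoted in the abstract; the prefactor
    c is an illustrative placeholder.
    """
    return c * u10 ** 3.41

def rh_growth(rh):
    """Placeholder monotone relative-humidity factor standing in for
    CMAQ's hygroscopic correction (not the model's actual formula)."""
    rh = min(rh, 0.95)
    return ((2.0 - rh) / (2.0 * (1.0 - rh))) ** (1.0 / 3.0)

def ssa_flux_rh(u10, rh, c=1.0):
    return ssa_flux(u10, c) * rh_growth(rh)

# doubling the wind raises the flux by 2**3.41, roughly a factor of 10.6
print(ssa_flux(10.0) / ssa_flux(5.0))
```

The steep wind exponent is what makes SSA predictions so sensitive to the surface wind field, and the RH factor is one way the two models' fluxes can diverge even for identical winds.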
A new adaptive control scheme based on the interacting multiple model (IMM) estimation
International Nuclear Information System (INIS)
Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid
2016-01-01
In this paper, an interacting multiple model (IMM) adaptive estimation approach is used to design an optimal adaptive control law for stabilizing an unmanned vehicle. Because the forward velocity of the unmanned vehicle varies, its aerodynamic derivatives are constantly changing. To stabilize the vehicle and achieve the control objectives across in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes. Each operating mode represents particular dynamics at a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KFs) operating in parallel. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the active mode based on the stored models and in-flight input-output measurements. The LQR design likewise provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted sum of all individual controllers, where the weights are the mode probabilities of each operating mode.
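The final blending step, a weighted sum u = Σ_i μ_i u_i of per-mode LQR laws u_i = −K_i x, can be sketched as follows; the gains and mode probabilities are illustrative, not the paper's values.

```python
import numpy as np

def imm_control(mode_probabilities, mode_gains, x):
    """Blend per-mode LQR laws u_i = -K_i @ x into one control input
    u = sum_i mu_i * u_i, weighted by the IMM mode probabilities mu_i.
    Gains and probabilities here are illustrative stand-ins."""
    mu = np.asarray(mode_probabilities, dtype=float)
    mu = mu / mu.sum()                      # normalize mode probabilities
    u = np.zeros(mode_gains[0].shape[0])    # one input per LQR output row
    for p, K in zip(mu, mode_gains):
        u += p * (-K @ x)                   # per-mode state feedback
    return u
```

In the IMM filter the probabilities μ_i come from the mode-matched Kalman filter likelihoods, so the blended law smoothly tracks the current forward-velocity regime.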
Noh, S.J.; Rakovec, O.; Weerts, A.H.; Tachikawa, Y.
2014-01-01
We investigate the effects of noise specification on the quality of hydrological forecasts via an advanced data assimilation (DA) procedure using a distributed hydrological model driven by numerical weather predictions. The sequential DA procedure is based on (1) a multivariate rainfall ensemble
DEFF Research Database (Denmark)
Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob
2014-01-01
A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics present in large-scale refrigeration plants challenge the predicti...
International Nuclear Information System (INIS)
Georges, Gabriel
2016-01-01
High Energy Density Physics (HEDP) flows are multi-material flows characterized by strong shock waves and large changes in the domain shape due to rarefaction waves. Numerical schemes based on the Lagrangian formalism are good candidates for modeling this kind of flow, since the computational grid follows the fluid motion. This provides accurate results around the shocks as well as natural tracking of multi-material interfaces and free surfaces. In particular, cell-centered finite volume Lagrangian schemes such as GLACE (Godunov-type Lagrangian scheme Conservative for total Energy) and EUCCLHYD (Explicit Unstructured Cell-Centered Lagrangian Hydrodynamics) provide good results on both the modeling of gas dynamics and elastic-plastic equations. The work produced during this PhD thesis is in continuity with the work of Maire and Nkonga [JCP, 2009] for the hydrodynamic part and the work of Kluth and Despres [JCP, 2010] for the hyperelasticity part. More precisely, the aim of this thesis is to develop robust and accurate methods for the 3D extension of the EUCCLHYD scheme with a second-order extension based on MUSCL (Monotonic Upstream-centered Scheme for Conservation Laws) and GRP (Generalized Riemann Problem) procedures. Particular care is taken over the preservation of symmetries and the monotonicity of the solutions. The scheme's robustness and accuracy are assessed on numerous Lagrangian test cases for which the 3D extensions are very challenging. (author)
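The second-order MUSCL idea can be illustrated in 1D with a minmod limiter; this is a generic textbook sketch, not the 3D EUCCLHYD implementation.

```python
def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope when the arguments
    agree in sign, zero otherwise (preserves monotonicity at extrema)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u):
    """Second-order MUSCL reconstruction of left/right states at the faces
    between cells of a 1D array of cell averages u. First-order (zero slope)
    is used in the boundary cells for simplicity."""
    n = len(u)
    slopes = [0.0] * n
    for i in range(1, n - 1):
        slopes[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    left, right = [], []
    for i in range(n - 1):  # face between cells i and i+1
        left.append(u[i] + 0.5 * slopes[i])
        right.append(u[i + 1] - 0.5 * slopes[i + 1])
    return left, right
```

On smooth monotone data the reconstructed face states agree to second order; at a local extremum the limiter returns zero slope, which is exactly the monotonicity property the thesis is careful to preserve.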
Energy Technology Data Exchange (ETDEWEB)
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of the relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor, since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization (MFO) algorithm designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computational-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient-based or gradient-free optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Integrating a reservoir regulation scheme into a spatially distributed hydrological model
Energy Technology Data Exchange (ETDEWEB)
Zhao, Gang; Gao, Huilin; Naz, Bibi S.; Kao, Shih-Chieh; Voisin, Nathalie
2016-12-01
During the past several decades, numerous reservoirs have been built across the world for a variety of purposes such as flood control, irrigation, municipal/industrial water supplies, and hydropower generation. Consequently, natural streamflow timing and magnitude have been altered significantly by reservoir operations. In addition, the hydrological cycle can be modified by land use/land cover and climate changes. To understand the fine-scale feedback between hydrological processes and water management decisions, a distributed hydrological model with an embedded reservoir component is desirable. In this study, a multi-purpose reservoir module with predefined complex operational rules was integrated into the Distributed Hydrology Soil Vegetation Model (DHSVM). Conditional operating rules, which are designed to reduce flood risk and enhance water supply reliability, were adopted in this module. The performance of the integrated model was tested over the upper Brazos River Basin in Texas, where two U.S. Army Corps of Engineers reservoirs, Lake Whitney and Aquilla Lake, are located. The integrated DHSVM model was calibrated and validated using observed reservoir inflow, outflow, and storage data. The error statistics were summarized for both reservoirs on a daily, weekly, and monthly basis. Using the weekly reservoir storage for Lake Whitney as an example, the coefficient of determination (R²) and the Nash-Sutcliffe efficiency (NSE) are 0.85 and 0.75, respectively. These results suggest that this reservoir module has promise for use in sub-monthly hydrological simulations. Enabled with the new reservoir component, the DHSVM model provides a platform to support adaptive water resources management under the impacts of evolving anthropogenic activities and substantial environmental changes.
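The reported skill metric can be reproduced in a few lines; this is a generic Nash-Sutcliffe efficiency computation on made-up series, not the study's data.

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    NSE = 1 means perfect agreement; NSE <= 0 means the simulation is no
    better than simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var
```

Applied to weekly storage series as in the study, NSE = 0.75 means the model explains three quarters of the variance that the climatological mean leaves unexplained.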
Lapenta, William M.; Suggs, Ron; McNider, Richard T.; Jedlovec, Gary; Dembek, Scott R.; Goodman, H. Michael (Technical Monitor)
2000-01-01
A technique has been developed for assimilating GOES-derived skin temperature tendencies and insolation into the surface energy budget equation of a mesoscale model so that the simulated rate of temperature change closely agrees with the satellite observations. A critical assumption of the technique is that the availability of moisture (either from the soil or vegetation) is the least-known term in the model's surface energy budget. Therefore, the simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. An advantage of this technique is that satellite temperature tendencies are assimilated in an energetically consistent manner that avoids the energy imbalances and surface stability problems that arise from direct assimilation of surface shelter temperatures. The fact that the rate of change of the satellite skin temperature is used rather than the absolute temperature means that sensor calibration is not as critical. The technique has been employed on a semi-operational basis at the GHCC within the PSU/NCAR MM5. Assimilation has been performed on a grid centered over the Southeastern US since November 1998. Results from the past year show that assimilation of the satellite data reduces both the bias and RMSE for simulations of surface air temperature and relative humidity. These findings are based on comparison of assimilation runs with a control using the simple 5-layer soil model available in MM5. A significant development in the past several months was the inclusion of the detailed Oregon State University land surface model (OSU/LSM) as an option within MM5. One of our working hypotheses has been that the assimilation technique, although simple, may provide better short-term forecasts than a detailed LSM that requires a significant number of initialized parameters. Preliminary results indicate that the assimilation outperforms the OSU
Huang, Yao-Hsien; Tsai, Yuan-Yu
2015-06-01
Reversibility is the ability to recover the stego media back to the cover media without any error after correctly extracting the secret message. This study proposes a reversible data hiding scheme for 3D polygonal models based on histogram shifting. Specifically, the histogram construction is based on the geometric similarity between neighboring vertices. The distances between neighboring vertices and a given point in 3D space are usually similar, especially for a high-resolution 3D model. Therefore, the difference between these distances for neighboring vertices is small with high probability. This study uses a modified breadth-first search to traverse each vertex once in sequential order and determine the unique referencing neighbor for each vertex. The histogram is then constructed based on the normalized distance differences of neighboring vertices. This approach significantly increases embedding capacity. Experimental results show that the proposed algorithm can achieve higher embedding capacity than existing algorithms while still maintaining acceptable model distortion. The algorithm also provides greater robustness against similarity transformation attacks and vertex reordering attacks. The proposed technique is feasible for 3D reversible data hiding.
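The histogram-shifting embedding step can be sketched on plain integers standing in for the quantized distance differences; the bin choices here are a minimal illustration, not the paper's exact procedure.

```python
from collections import Counter

def hs_embed(values, bits):
    """Minimal histogram-shifting embed on non-negative integers (stand-ins
    for quantized, normalized distance differences). Find the peak bin p and
    the first empty bin z > p, shift the bins strictly between them up by one
    to free p+1, then encode each bit at a peak-valued element (1 -> p+1,
    0 -> p). The embedding capacity equals the peak-bin count, which is why a
    sharply peaked difference histogram yields high capacity."""
    hist = Counter(values)
    p = max(hist, key=hist.get)          # peak bin
    z = p + 1
    while hist.get(z, 0) > 0:            # first empty bin above the peak
        z += 1
    it = iter(bits)
    out = []
    for v in values:
        if p < v < z:
            out.append(v + 1)            # shift to free the p+1 slot
        elif v == p:
            out.append(p + 1 if next(it, 0) else p)
        else:
            out.append(v)
    return out, p
```

Extraction reverses the process: values equal to p or p+1 decode a bit, and the shifted bins are moved back down, restoring the cover exactly.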
A Bayesian spatial assimilation scheme for snow coverage observations in a gridded snow model
Directory of Open Access Journals (Sweden)
S. Kolberg
2006-01-01
Full Text Available A method for assimilating remotely sensed snow covered area (SCA) into the snow subroutine of a grid-distributed precipitation-runoff model (PRM) is presented. The PRM is assumed to simulate the snow state in each grid cell by a snow depletion curve (SDC), which relates that cell's SCA to its snow cover mass balance. The assimilation is based on Bayes' theorem, which requires a joint prior distribution of the SDC variables in all the grid cells. In this paper we propose a spatial model for this prior distribution, and include similarities and dependencies among the grid cells. Used to represent the PRM-simulated snow cover state, our joint prior model regards two elevation gradients and a degree-day factor as global variables, rather than describing their effect separately for each cell. This transformation results in smooth normalised surfaces for the two related mass balance variables, supporting a strong inter-cell dependency in their joint prior model. The global features and spatial interdependency in the prior model cause each SCA observation to provide information for many grid cells. The spatial approach similarly facilitates the utilisation of observed discharge. Assimilation of SCA data using the proposed spatial model is evaluated in a 2400 km² mountainous region in central Norway (61° N, 9° E), based on two Landsat 7 ETM+ images generalized to 1 km² resolution. An image acquired on 11 May, a week before the peak flood, removes 78% of the variance in the remaining snow storage. Even an image from 4 May, less than a week after the melt onset, reduces this variance by 53%. These results are a large improvement over a cell-by-cell independent assimilation routine previously reported. Including observed discharge in the updating information improves the 4 May results, but has a weak effect on 11 May. Estimated elevation gradients are shown to be sensitive to informational deficits occurring at high altitude, where snowmelt has not started
Time-Varying Scheme for Noncentralized Model Predictive Control of Large-Scale Systems
Directory of Open Access Journals (Sweden)
Alfredo Núñez
2015-01-01
Full Text Available The noncentralized model predictive control (NC-MPC) framework in this paper refers to any distributed, hierarchical, or decentralized model predictive controller (or a combination of them) whose structure can change over time and whose control actions are not obtained based on a centralized computation. Within this framework, we propose suitable online methods to decide which information is shared and how this information is used between the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way. Evaluating all the possible structures of the NC-MPC controller leads to a combinatorial optimization problem. Therefore, we also propose heuristic reduction methods to keep the number of NC-MPC problems to be solved tractable. To show the benefits of the proposed framework, a case study of a set of coupled water tanks is presented.
Network Regulation and Support Schemes
DEFF Research Database (Denmark)
Ropenus, Stephanie; Schröder, Sascha Thorsten; Jacobsen, Henrik
2009-01-01
At present, there exists no explicit European policy framework on distributed generation. Various Directives encompass distributed generation; inherently, their implementation is at the discretion of the Member States. The latter have adopted different kinds of support schemes, ranging from feed-in tariffs to market-based quota systems, and network regulation approaches, comprising rate-of-return and incentive regulation. National regulation and the vertical structure of the electricity sector shape the incentives of market agents, notably of distributed generators and network operators...
Roulet, Yves-Alain F.; Clappier, Alain
2005-01-01
Growing population, extensive use (and abuse) of natural resources, increasing pollutant emissions into the atmosphere: these are a few of the obstacles (and not the least) one has to face nowadays to ensure the sustainability of our planet in general, and of air quality in particular. In the case of air pollution, the processes that govern the transport and chemical transformation of pollutants are highly complex and non-linear. The use of numerical models for simulating meteorologi...
Directory of Open Access Journals (Sweden)
Zhaohui Cen
2015-01-01
Full Text Available Maximum power point tracking (MPPT) for photovoltaic (PV) arrays is essential to optimize conversion efficiency under variable and nonuniform irradiance conditions. Unfortunately, conventional MPPT algorithms such as perturb and observe (P&O), incremental conductance, and the current sweep method need to iterate the command current or voltage and frequently operate power converters, with associated losses. Under partially overcast conditions, tracking the real MPP in a multipeak P-I or P-V curve model becomes highly challenging, with an associated increase in search time and converter operation, leading to unnecessary power being lost in the MPP tracking process. In this paper, the noted drawbacks of MPPT-controlled converters are addressed. In order to separate the search algorithms from converter operation, a model parameter identification approach is presented to estimate the insolation conditions of each PV panel and build a real-time overall P-I curve of the PV array. Subsequently, a simple but effective global MPPT algorithm is proposed to track the MPP in the overall P-I curve obtained from the identified PV array model, ensuring that the converter works at the MPP. The novel MPPT is ultrafast, resulting in power conserved in the tracking process. Finally, simulations in different scenarios are executed to validate the novel scheme's effectiveness and advantages.
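For reference, the conventional perturb-and-observe loop that the paper improves upon can be sketched as follows; the step size and the power curve are illustrative stand-ins for converter measurements.

```python
def perturb_and_observe(power_at, v0=10.0, step=0.5, iters=50):
    """Classic perturb-and-observe MPPT: perturb the operating voltage and
    keep moving in the direction that increased the measured power.
    `power_at` stands in for a measured P-V curve; on a unimodal curve the
    loop climbs to the peak and then oscillates around it by +/- one step,
    which is the converter-operation overhead the paper seeks to avoid."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

On a multipeak P-V curve (partial shading) this loop can lock onto a local peak, which motivates the model-identification-based global search proposed in the paper.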
A stable and robust calibration scheme of the log-periodic power law model
Filimonov, V.; Sornette, D.
2013-09-01
We present a simple transformation of the formulation of the log-periodic power law formula of the Johansen-Ledoit-Sornette (JLS) model of financial bubbles that reduces it to a function of only three nonlinear parameters. The transformation significantly decreases the complexity of the fitting procedure and improves its stability tremendously, because the modified cost function is now characterized by good smoothness properties with, in general, a single minimum in the case where the model is appropriate to the empirical data. We complement the approach with an additional subordination procedure that slaves two of the nonlinear parameters to the most crucial nonlinear parameter, the critical time tc, defined in the JLS model as the end of the bubble and the most probable time for a crash to occur. This further decreases the complexity of the search and provides an intuitive representation of the results of the calibration. With our proposed methodology, metaheuristic searches are no longer necessary and one can rely solely on rigorous, controlled local search algorithms, leading to a dramatic increase in efficiency. Empirical tests on the Shanghai Composite index (SSE) from January 2007 to March 2008 illustrate our findings.
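The reduction rests on the fact that, for fixed nonlinear parameters (tc, m, ω), the LPPL expression ln p(t) = A + B(tc−t)^m + C1(tc−t)^m cos(ω ln(tc−t)) + C2(tc−t)^m sin(ω ln(tc−t)) is linear in (A, B, C1, C2) and can be solved by ordinary least squares. A minimal sketch of this inner linear solve (a generic implementation of the idea, not the authors' code):

```python
import numpy as np

def lppl_linear_fit(t, log_price, tc, m, omega):
    """For fixed (tc, m, omega), solve for the linear LPPL parameters
    (A, B, C1, C2) by ordinary least squares. The outer calibration then
    only has to search over the three nonlinear parameters."""
    dt = tc - np.asarray(t, dtype=float)          # requires t < tc
    f = dt ** m
    X = np.column_stack([np.ones_like(f), f,
                         f * np.cos(omega * np.log(dt)),
                         f * np.sin(omega * np.log(dt))])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(log_price, float), rcond=None)
    residual = np.asarray(log_price, float) - X @ coeffs
    return coeffs, float(residual @ residual)     # parameters and SSE
```

The outer cost function, now a smooth function of (tc, m, ω) alone, is what the paper reports as having, in general, a single minimum on data that fit the model.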
Gijben, Morné; Dyson, Liesl L.; Loots, Mattheus T.
2017-09-01
Cloud-to-ground lightning data from the Southern Africa Lightning Detection Network and numerical weather prediction model parameters from the Unified Model are used to develop a lightning threat index (LTI) for South Africa. The aim is to predict lightning for austral summer days (September to February) by means of a statistical approach. The austral summer months are divided into spring and summer seasons and analysed separately. Stepwise logistic regression techniques are used to select the most appropriate model parameters to predict lightning. These parameters are then utilized in a rare-event logistic regression analysis to produce equations for the LTI that predicts the probability of the occurrence of lightning. Results show that LTI forecasts have a high sensitivity and specificity for spring and summer. The LTI is less reliable during spring, since it over-forecasts the occurrence of lightning. However, during summer, the LTI forecast is reliable, only slightly over-forecasting lightning activity. The LTI produces sharp forecasts during spring and summer. These results show that the LTI will be useful early in the morning in areas where lightning can be expected during the day.
Clancy, Colm; Lynch, Peter
2010-05-01
A filtering numerical time-integration scheme is being developed. Using a modified inversion to the Laplace Transform (LT), the scheme is designed to remove spurious noise while faithfully simulating low frequency atmospheric modes. The method has been compared with traditional semi-implicit schemes in a shallow water framework and shows a number of advantages. In particular we are investigating the behaviour of a semi-Lagrangian formulation of the LT scheme in the presence of orography. We will also discuss its effects on the energy spectra of atmospheric simulations.
High-resolution weather forecasting is affected by many factors, e.g. model initial conditions, subgrid-scale cumulus convection, and cloud microphysics schemes. Recent 12 km grid studies using the Weather Research and Forecasting (WRF) model have identified the importance of inco...
Gómez, I.; Ronda, R.J.; Caselles, V.; Estrela, M.J.
2016-01-01
This paper proposes the implementation of different non-local planetary boundary layer (PBL) schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU)
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical system is investigated by using the adaptive control method. Different from some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate the effect of unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple updated laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Lode, Axel U.J.
2013-06-03
This thesis explores the quantum many-body tunneling dynamics of open ultracold bosonic systems with the recently developed multiconfigurational time-dependent Hartree for bosons (MCTDHB) method. The capabilities of MCTDHB to provide solutions to the full time-dependent many-body problem are assessed in a benchmark using the analytically solvable harmonic interaction Hamiltonian and a generalization of it with both time-dependent one- and two-body potentials. In a comparison with numerically exact MCTDHB results, it is shown that, for example, lattice methods fail qualitatively to describe the tunneling dynamics. A model assembling the many-body physics of the process from basic, simultaneously occurring single-particle processes is derived and verified with a numerically exact MCTDHB description. The generality of the model is demonstrated even for strong interactions and large particle numbers. The ejection of the bosons from the source occurs with characteristic velocities. These velocities are defined by the chemical potentials of systems with different particle numbers, which are converted to kinetic energy. The tunneling process is accompanied by fragmentation: the ejected bosons lose their coherence with the source and among each other. It is shown that the various aspects of the tunneling dynamics can be controlled well via the interaction and the potential threshold.
A Fovea Localization Scheme Using Vessel Origin-Based Parabolic Model
Directory of Open Access Journals (Sweden)
Chun-Yuan Yu
2014-09-01
Full Text Available At the center of the macula, the fovea plays an important role in computer-aided diagnosis. To locate the fovea, this paper proposes a vessel origin (VO)-based parabolic model, which takes the VO as the vertex of the parabola-like vasculature. Image processing steps are applied to accurately locate the fovea on retinal images. First, the morphological gradient and the circular Hough transform are used to find the optic disc. The vessel structure is then segmented with a line detector. Based on the characteristics of the VO, four features of the VO are extracted, followed by a Bayesian classification procedure. Once the VO is identified, the VO-based parabolic model locates the fovea. To find the best-fitting parabola and the symmetry axis of the retinal vasculature, a shift-and-rotation (SR) Hough transform that combines the Hough transform with shifts and rotations of the coordinates is presented. Two public databases of retinal images, DRIVE and STARE, are used to evaluate the proposed method. The experimental results show that the average Euclidean distances between the located fovea and the fovea marked by experts in the two databases are 9.8 pixels and 30.7 pixels, respectively. These results are better than those of other methods and thus provide better macular detection for further disease discovery.
Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2017-10-01
Current modelling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding. In PIC simulations the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the meter range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for laser pulse propagation. Therefore only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)². This allows a wide range of parameter studies to be performed and its use for λ0 … This work was supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.
Performance Assessment of the VSC Using Two Model Predictive Control Schemes
DEFF Research Database (Denmark)
Al hasheem, Mohamed; Abdelhakim, Ahmed; Dragicevic, Tomislav
2018-01-01
Finite control set model predictive control (FCS-MPC) methods in different power electronics applications are gaining high attention due to their simplicity and fast dynamics. This paper introduces an experimental assessment of the two-level three-phase voltage source converter (2L-VSC) using two FCS-MPC algorithms. In order to perform such a comparative evaluation, the 2L-VSC efficiency and the total harmonic distortion of the voltage (THDv) have been measured, considering both a linear load and a non-linear load. The new algorithm gives better results than the conventional algorithm in terms of the THD and 2L-VSC efficiency. The results also demonstrate the performance of the system using carrier-based pulse width modulation (CB-PWM). These findings have been validated for both linear and non-linear loads through experimental verification on a 4 kW 2L-VSC prototype. It can be concluded that a comparable...
Prototype-based Models for the Supervised Learning of Classification Schemes
Biehl, Michael; Hammer, Barbara; Villmann, Thomas
2017-06-01
An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable, discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, standard Euclidean distance is employed as a distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
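The basic LVQ1 update can be sketched as follows (Euclidean distance, illustrative learning rate); relevance learning would replace the fixed distance with a parameterized, learned one.

```python
import numpy as np

def lvq1_update(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 step: find the nearest prototype (Euclidean distance) and
    move it toward the sample x if its label matches y, away otherwise.
    Mutates `prototypes` in place; returns the winning prototype index."""
    d = np.linalg.norm(prototypes - x, axis=1)
    j = int(np.argmin(d))
    sign = 1.0 if labels[j] == y else -1.0
    prototypes[j] += sign * lr * (x - prototypes[j])
    return j

def lvq1_classify(prototypes, labels, x):
    """Nearest-prototype classification."""
    return labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]
```

After training, classification requires only a distance computation against a handful of prototypes, which is what makes the framework interpretable and cheap at test time.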
Fault detection in processes represented by PLS models using an EWMA control scheme
Harrou, Fouzi
2016-10-20
Fault detection is important for effective and safe process operation. Partial least squares (PLS) has been used successfully in fault detection for multivariate processes with highly correlated variables. However, the conventional PLS-based detection metrics, such as Hotelling's T² and the Q statistic, are not well suited to detect small faults, because they only use information about the process in the most recent observation. The exponentially weighted moving average (EWMA), however, has been shown to be more sensitive to small shifts in the mean of process variables. In this paper, a PLS-based EWMA fault detection method is proposed for monitoring processes represented by PLS models. The performance of the proposed method is compared with that of the traditional PLS-based fault detection method through a simulated example involving various fault scenarios that could be encountered in real processes. The simulation results clearly show the effectiveness of the proposed method over the conventional PLS method.
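A generic EWMA monitoring statistic with time-varying control limits, of the kind that would be applied to PLS residuals or scores (the chart parameters here are illustrative):

```python
import math

def ewma_monitor(x, lam=0.2, mu0=0.0, sigma=1.0, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, started at the
    in-control mean mu0, with time-varying control limits
    mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2t))).
    Returns the statistic series and a per-sample out-of-control flag.
    Generic EWMA chart; the paper applies the idea to PLS-model outputs."""
    z, zs, flags = mu0, [], []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1.0 - lam) * z
        width = L * sigma * math.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
        zs.append(z)
        flags.append(abs(z - mu0) > width)
    return zs, flags
```

Because z_t accumulates information from past samples, a small sustained mean shift drives it across the limits even when each individual observation stays within 3σ, which is exactly the sensitivity advantage over single-observation statistics.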
Wong, Tony E.; Nusbaumer, Jesse; Noone, David C.
2017-06-01
All physical process models and field observations are inherently imperfect, so there is a need to both (1) obtain measurements capable of constraining quantities of interest and (2) develop frameworks for assessment in which the desired processes and their uncertainties may be characterized. Incorporation of stable water isotopes into land surface schemes offers a complementary approach to constrain hydrological processes such as evapotranspiration, and yields acute insight into the hydrological and biogeochemical behaviors of the domain. Here a stable water isotopic scheme in the National Center for Atmospheric Research's version 4 of the Community Land Model (CLM4) is presented. An overview of the isotopic methods is given. Isotopic model results are compared to available data sets on site-level and global scales for validation. Comparisons of site-level soil moisture and isotope ratios reveal that surface water does not percolate as deeply into the soil as observed in field measurements. The broad success of the new model provides confidence in its use for a range of climate and hydrological studies, while the sensitivity of simulation results to kinetic processes stands as a reminder that new theoretical development and refinement of kinetic effect parameterizations is needed to achieve further improvements.
Development of Non-staggered, SMAC numerical scheme for a two-fluid model
International Nuclear Information System (INIS)
Yoon, H. Y.; Jeong, Jae Jun
2007-06-01
The SMAC (Simplified Marker And Cell) method, along with SIMPLE, has long been used efficiently in computational fluid dynamics. Most applications involve single-phase compressible or incompressible fluids, and the numerical methods must be modified to implement the following items for the analysis of two-phase flows: - Non-staggered grid for the analysis of a complex geometry - Application of the two-phase models - Coupling of the energy conservation equations - Two-phase flows with phase change. In this report, the SMAC method is reviewed and extended to compressible two-phase flows with phase change. A pilot code, CUPID-M, is developed using the proposed numerical method. A set of verification calculations is carried out for CUPID-M. First, isothermal air-water flow is simulated to verify the numerical method against two-phase flow problems. Next, two-phase flows with phase change are calculated using CUPID-M and the results are compared with those of CUPID-I, which is based on the coupled ICE method. The calculation time is shorter with CUPID-M than with CUPID-I, while CUPID-M is unstable for rapid phase-change problems. Thus, the choice between CUPID-M and CUPID-I can be left as a user input, depending on the application problem.
Phase transitions in two-dimensional uniformly frustrated XY models. II. General scheme
International Nuclear Information System (INIS)
Korshunov, S.E.
1986-01-01
For two-dimensional uniformly frustrated XY models, the group of symmetry spontaneously broken in the ground state is a cross product of the group of two-dimensional rotations with some discrete group of finite order. Different possibilities for phase transitions in such systems are investigated. The transformation to a Coulomb gas with noninteger charges is widely used when analyzing the properties of the relevant topological excitations. These excitations include not only domain walls and traditional (integer) vortices, but also vortices with a fractional number of circulation quanta, which are localized at bends and intersections of domain walls. The types of possible phase transitions prove to depend on their relative sequence: in the case where the vanishing of the domain-wall free energy occurs earlier (with increasing temperature) than the dissociation of pairs of ordinary vortices, the second phase transition is to be associated with the dissociation of pairs of fractional vortices. The general statements are illustrated with a number of examples.
International Nuclear Information System (INIS)
Pin, F.G.
1993-01-01
Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.
Pei, Jin-Song; Mai, Eric C.
2007-04-01
This paper describes a continuing effort towards the development of a heuristic initialization methodology for constructing multilayer feedforward neural networks to model nonlinear functions. In this and the previous studies that this work builds upon, including the one presented at SPIE 2006, the authors do not presume to provide a universal method to approximate arbitrary functions; rather, the focus is on the development of a rational and unambiguous initialization procedure that applies to the approximation of nonlinear functions in the specific domain of engineering mechanics. The applications of this exploratory work can be numerous, including those associated with potential correlation and interpretation of the inner workings of neural networks, such as damage detection. The goal of this study is fulfilled by utilizing the governing physics and mathematics of nonlinear functions and the strength of the sigmoidal basis function. A step-by-step graphical procedure utilizing a few neural network prototypes as "templates" to approximate commonly seen memoryless nonlinear functions of one or two variables is further developed in this study. Decomposition of complex nonlinear functions into a summation of simpler nonlinear functions is utilized to exploit this prototype-based initialization methodology. Training examples are presented to demonstrate the rationality and efficiency of the proposed methodology when compared with the popular Nguyen-Widrow initialization algorithm. Future work is also identified.
CLSM: COUPLE LAYERED SECURITY MODEL, A HIGH-CAPACITY DATA HIDING SCHEME USING STEGANOGRAPHY
Directory of Open Access Journals (Sweden)
Cemal Kocak
2017-03-01
Full Text Available Cryptography and steganography are the two significant techniques used in secret communications and safe message transfer. In this study, CLSM (Couple Layered Security Model) is suggested, a hybrid structure enhancing information security using features of both cryptography and steganography. In the CLSM system, information that has first been cryptographically encrypted is steganographically embedded in an image at the next step. The information is encrypted by means of a Text Keyword of at most 16 digits determined by the user in the cryptography step. Similarly, the encrypted information is processed, during the embedding stage, using a 16-digit pin (I-PIN), again determined by the user. The carrier images utilized in the study are 24 bit/pixel colour. Support for images in .jpeg, .tiff, and .png formats is also provided. The performance of the CLSM method has been evaluated according to the objective quality measurement criteria of PSNR (Peak Signal-to-Noise Ratio, in dB) and SSIM (Structural Similarity Index). In the study, 12 pieces of information of different sizes, between 1000 and 609,129 bits, were embedded into images. PSNR values between 34.14 and 65.8 dB and SSIM values between 0.989 and 0.999 were obtained. CLSM showed better results compared to the Pixel Value Differencing (PVD) method, the Simulated Annealing (SA) algorithm, and mix-column-transform methods based on irreducible polynomial mathematics.
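The two-layer idea (encrypt first, then hide) can be illustrated with a toy round-trip. This sketch is not the CLSM algorithm: a keyword-based XOR cipher stands in for the cryptographic layer, and 1-bit least-significant-bit (LSB) substitution stands in for the embedding layer; all names and values are hypothetical.

```python
# Illustrative two-layer sketch in the spirit of CLSM: XOR "encryption"
# followed by LSB embedding in a flat list of 8-bit pixel values.
# The real CLSM cipher and 16-digit keys are not reproduced here.
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key; applying it twice recovers the input
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_lsb(pixels, payload: bytes):
    # overwrite the least significant bit of one pixel per payload bit
    bits = [(byte >> k) & 1 for byte in payload for k in range(8)]
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, nbytes: int) -> bytes:
    out = bytearray()
    for j in range(nbytes):
        byte = 0
        for k in range(8):
            byte |= (pixels[j * 8 + k] & 1) << k
        out.append(byte)
    return bytes(out)
```

Changing only the LSB shifts each carrier pixel by at most one grey level, which is why LSB-style schemes can reach high PSNR values such as those reported in the abstract.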
DEFF Research Database (Denmark)
Uthes, Sandra; Sattler, Claudia; Piorr, Annette
2010-01-01
production orientations and grassland types was modeled under the presence and absence of the grassland extensification scheme using the bio-economic model MODAM. Farms were based on available accountancy data and surveyed production data, while information on farm location within the district was derived...... from a spatial allocation procedure. The reduction in total gross margin per unit area was used to measure on-farm compliance costs. A dimensionless environmental index was used to assess the suitability of the scheme to reduce the risk of nitrate-leaching. Calculated on-farm compliance costs...
Wu, Fu-Chun; Shao, Yun-Chuan; Chen, Yu-Chen
2011-09-01
The forcing effect of channel width variations on free bars is investigated in this study using a two-dimensional depth-averaged morphodynamic model. The novel feature of the model is the incorporation of a characteristic dissipative Galerkin (CDG) upwinding scheme in the bed evolution module. A correction for the secondary flows induced by streamline curvature is also included, allowing for simulations of bar growth and migration in channels with width variations beyond the small-amplitude regimes. The model is tested against a variety of experimental data ranging from purely forced and free bars to coexisting bed forms in the variable-width channel. The CDG scheme effectively dissipates local bed oscillations, thus sustaining numerical stability. The results show that the global effect of width variations on bar height is invariably suppressive. This effect increases with the dimensionless amplitude AC and wave number λC of width variations. For small AC, λC has little effect on bar height; for AC beyond small amplitudes, however, the suppressing effect depends on both AC and λC. The suppressing effect on bar length also increases with both AC and λC, but is much weaker than that on bar height. The global effect of width variations on bar celerity can be suppressive or enhancive, depending on the combination of AC and λC. For smaller λC, the effect on bar celerity is enhancive; for larger λC, bar celerity tends to increase at small AC but decreases for AC beyond small amplitudes. We present herein an unprecedented data set verifying the theoretical prediction of celerity enhancement. Full suppression of bar growth above the theoretically predicted threshold AC was not observed, regardless of the adopted amplitude of initial bed perturbation A. The global effects of width variations on free bars can be quantified using a forcing factor FC that integrates the effects of AC and λC. The suppressing effects on bar height and length are both proportional to FC
Alternative health insurance schemes
DEFF Research Database (Denmark)
Keiding, Hans; Hansen, Bodil O.
2002-01-01
In this paper, we present a simple model of health insurance with asymmetric information, where we compare two alternative ways of organizing the insurance market. Either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where however, the level...... competitive insurance; this situation turns out to be at least as good as either of the alternatives...
DEFF Research Database (Denmark)
Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela
2013-01-01
of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-01-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290
Energy Technology Data Exchange (ETDEWEB)
McMillan, K; Bostani, M; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States); McCollough, C [Mayo Clinic, Rochester, MN (United States)
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not
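The core of the estimation step is mapping per-slice patient attenuation to a relative tube current. The toy sketch below assumes current is simply scaled in proportion to attenuation and clipped to scanner limits; the actual Siemens TCM algorithm is proprietary and is not reproduced here, and all parameter values are hypothetical.

```python
# Toy sketch of converting a per-slice attenuation profile into a
# relative tube-current (TCM) profile; assumes proportional scaling
# about a reference attenuation, clipped to hypothetical mA limits.
def tcm_profile(attenuation, ref_atten, ref_mA=200.0, min_mA=20.0, max_mA=500.0):
    profile = []
    for a in attenuation:
        mA = ref_mA * (a / ref_atten)        # scale with attenuation
        profile.append(min(max_mA, max(min_mA, mA)))  # clip to limits
    return profile
```

A profile like this, derived from either the actual topogram or a simulated one, is what a Monte Carlo dose simulation would then apply slice by slice in place of a fixed tube current.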
The coupling of land surface models and hydrological models potentially improves the land surface representation, benefiting both the streamflow prediction capabilities as well as providing improved estimates of water and energy fluxes into the atmosphere. In this study, the simple biosphere model 2...
Hejazi, A.; Woodbury, A. D.; Loukili, Y.; Akinremi, W.
2012-12-01
The main goal of the present research is to contribute to the understanding of nutrient transport and transformations in soil and their impact on groundwater at a large scale. This paper specifically integrates the physical, chemical, and biochemical nitrogen transport processes with a spatial and temporal Land Surface Scheme (LSS). Because solute transport depends strongly on soil moisture and soil temperature, a vertical soil nitrogen transport and transformations model was coupled with the SABAE-HW model. Since manure is one of the most commonly available sources of nutrients, it is assumed in this study that the main source of organic N is animal manure. A single-pool nitrogen transformation model is designed to simulate nitrogen dynamics. Mineralization and nitrification are modeled using first-order kinetics. The performance of the integrated model (SABAE-HWS) is calibrated and verified using three years of field data from the Carberry site in Manitoba, central Canada. Two rates of hog manure (2500 and 7500 gal/acre) were investigated to study the distribution of soil ammonium and soil nitrate within the top 120 cm of the soil profile. The results clearly showed good agreement between observed and simulated soil ammonium and soil nitrate for the two manure application rates in the first two years of study. However, there were significant differences between observations and simulations at lower depths with 7500 gal/acre by the end of the 2004 growing season. In addition, 10 years of climate data were used to evaluate the effect of manure rates on nitrate leaching at the Carberry site. The results indicated that to minimize the risk of nitrate leaching, the rate of manure application, accumulated soil nitrogen from earlier applications, and the atmospheric conditions should all be taken into account at the same time. The simulations clearly showed that to keep the nitrate concentration in the leachate below 10 mg/kg, the manure rate should not exceed 2500 gal/acre. The model is
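The single-pool, first-order transformation chain the abstract describes (organic N mineralizes to ammonium, which nitrifies to nitrate) can be sketched as one explicit time step. The rate constants here are illustrative placeholders, not the calibrated SABAE-HWS parameters, which also vary with soil moisture and temperature.

```python
# Sketch of first-order nitrogen transformations: organic N -> NH4 -> NO3.
# k_min and k_nit (per day) are illustrative, not calibrated values.
def step_nitrogen(org_n, nh4, no3, k_min=0.01, k_nit=0.05, dt=1.0):
    mineralized = k_min * org_n * dt   # organic N mineralized to ammonium
    nitrified = k_nit * nh4 * dt       # ammonium nitrified to nitrate
    return (org_n - mineralized,
            nh4 + mineralized - nitrified,
            no3 + nitrified)
```

Note that the step conserves total nitrogen by construction; in the full model, plant uptake and leaching terms would remove mass from the profile.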
Scheme Program Documentation Tools
DEFF Research Database (Denmark)
Nørmark, Kurt
2004-01-01
This paper describes and discusses two different Scheme documentation tools. The first is SchemeDoc, which is intended for documentation of the interfaces of Scheme libraries (APIs). The second is the Scheme Elucidator, which is for internal documentation of Scheme programs. Although the tools...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource....
Park, S.; Lee, S.; Park, J.; Kim, J.; Kihm, J.
2013-12-01
The objectives of this study are to predict quantitatively groundwater and carbon dioxide flow in deep saline sandstone aquifers under various carbon dioxide injection schemes (injection rate, injection period) and to analyze integratively impacts of such carbon dioxide injection schemes on deep groundwater (brine) and carbon dioxide leakage risk through abandoned wells or faults. In order to achieve the first objective, a series of process-level prediction modeling of groundwater and carbon dioxide flow in a deep saline sandstone aquifer under several carbon dioxide injection schemes was performed using a multiphase thermo-hydrological numerical model TOUGH2 (Pruess et al., 1999). The prediction modeling results show that the extent of carbon dioxide plume is significantly affected by such carbon dioxide injection schemes. In order to achieve the second objective, a series of system-level analysis modeling of deep groundwater and carbon dioxide leakage risk through an abandoned well or a fault under several carbon dioxide injection schemes was then performed using a brine and carbon dioxide leakage risk analysis model CO2-LEAK (Kim, 2012). The analysis modeling results show that the rates and amounts of deep groundwater and carbon dioxide leakage through an abandoned well or a fault increase as the carbon dioxide injection rate increases. However, the rates and amounts of deep groundwater and carbon dioxide leakage through an abandoned well or a fault decrease as the carbon dioxide injection period increases. These system-level analysis modeling results for deep groundwater and carbon dioxide leakage risk can be utilized as baseline data for establishing guidelines to mitigate anticipated environmental adverse effects on shallower groundwater systems (aquifers) when deep groundwater and carbon dioxide leakage occur. This work was supported by the Geo-Advanced Innovative Action (GAIA) Program funded by the Korea Environmental Industry and Technology Institute
Li, Ping
2016-01-13
To meet electromagnetic interference regulations, the radiated emission from a device under test, such as an electronic device, must be carefully manipulated and accurately characterized. Instead of resorting to direct far-field measurement, in this paper a novel approach is proposed to model the radiated emission from electronic devices placed in shielding enclosures using the near electric field only. Based on the Schelkunoff equivalence principle and the Rayleigh-Carson reciprocity theorem, only the tangential components of the electric field over the ventilation slots and apertures of the shielding enclosure are needed to obtain the radiated emissions outside the shielding box, as if the inside of the shielding enclosure were filled with perfect electric conductor (PEC). In order to efficiently model wideband emission, a time-domain sampling scheme is employed. Due to the lack of an analytical Green's function for arbitrary PEC boxes, the radiated emission must be obtained via full-wave numerical methods, considering the total radiated emission as the superposition of the direct radiation from the equivalent magnetic currents in free space and the scattered field generated by the PEC shielding box. In this study, the state-of-the-art discontinuous Galerkin time-domain (DGTD) method is utilized, which has the flexibility to model irregular geometries, maintains high-order accuracy, and, more importantly, involves only local operations. For open-region problems, a hybridized DGTD and time-domain boundary integration method is applied to rigorously truncate the computational domain. To validate the proposed approach, several representative examples are presented and compared with both analytical and numerical results.
Soini, Erkki; Asseburg, Christian; Taiha, Maarit; Puolakka, Kari; Purcaru, Oana; Luosujärvi, Riitta
2017-10-01
To model the American College of Rheumatology (ACR) outcomes, cost-effectiveness, and budget impact of certolizumab pegol (CZP) (with and without a hypothetical risk-sharing scheme at treatment initiation for biologic-naïve patients) versus the current mix of reimbursed biologics for the treatment of moderate-to-severe rheumatoid arthritis (RA) in Finland. A probabilistic model with 12-week cycles and a societal approach was developed for the years 2015-2019, accounting for differences in ACR responses (meta-analysis), mortality, and persistence. The risk-sharing scheme included a treatment switch and refund of the costs associated with CZP acquisition if patients failed to achieve ACR20 response at week 12. For the current treatment mix, ACR20 at week 24 determined treatment continuation. Quality-adjusted life years were derived on the basis of the Health Utilities Index. In the Finnish target population, CZP treatment with a risk-sharing scheme led to an estimated annual net expenditure decrease ranging from 1.7% in 2015 to 5.6% in 2019 compared with the current treatment mix. Per patient over the 5 years, CZP risk sharing was estimated to decrease the time without ACR response by 5 percentage points, decrease work absenteeism by 24 days, and increase the time with ACR20, ACR50, and ACR70 responses by 5, 6, and 1 percentage points, respectively, with a gain of 0.03 quality-adjusted life years. The modeled risk-sharing scheme showed reduced costs of €7866 per patient, with a more than 95% probability of cost-effectiveness when compared with the current treatment mix. The present analysis estimated that CZP, with or without the risk-sharing scheme, is a cost-effective alternative treatment for RA patients in Finland. The surplus provided by the CZP risk-sharing scheme could fund treatment for 6% more Finnish RA patients. UCB Pharma.
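The payer-side arithmetic of the risk-sharing scheme reduces to a simple expectation: non-responders at week 12 switch treatment and their acquisition cost up to that point is refunded. The sketch below is a back-of-the-envelope illustration only; the probabilities and costs are invented, not the Finnish model's inputs.

```python
# Back-of-the-envelope sketch of the refund-at-week-12 arithmetic.
# p_acr20, cost_12wk, and cost_rest_of_year are hypothetical inputs.
def expected_drug_cost(p_acr20, cost_12wk, cost_rest_of_year):
    # Responders (probability p_acr20) continue and incur the full-year
    # cost; non-responders switch and their 12-week cost is refunded,
    # so they contribute nothing to the expected acquisition cost.
    return p_acr20 * (cost_12wk + cost_rest_of_year)

# Without the scheme, every initiated patient pays the first 12 weeks:
def expected_cost_no_scheme(p_acr20, cost_12wk, cost_rest_of_year):
    return cost_12wk + p_acr20 * cost_rest_of_year
```

The difference between the two expectations, `(1 - p_acr20) * cost_12wk`, is the per-patient surplus the refund generates, which is the mechanism behind the budget-impact reduction reported in the abstract.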
Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.
2018-02-01
In conventional planar radiography, image visibility is often limited, mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, and phase-contrast imaging as another image-contrast modality, have been investigated extensively in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme in which the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the original image from the degraded one. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by the proposed method, which considerably improves image visibility in radiography.
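A common way to write such a scatter-degradation model is per pixel as I = T·I0 + S, where T is the object transmission, I0 the incident intensity, and S the scatter contribution. The sketch below assumes S and I0 are already estimated and only inverts the model; estimating them from the single image is the paper's contribution and is not reproduced here.

```python
# Minimal sketch of inverting a per-pixel scatter-degradation model
# I = T * I0 + S, assuming the scatter estimate and incident intensity
# are known; images are flat lists of pixel intensities.
def restore_transmission(image, scatter, i0):
    # subtract the scatter estimate, then normalize by the incident
    # intensity to recover the transmission map T
    return [(i - s) / i0 for i, s in zip(image, scatter)]
```

Subtracting the scatter term restores contrast because scatter adds a roughly uniform offset that compresses the dynamic range of the primary signal.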
Performance modeling of a two-tier primary-secondary network with IEEE 802.11 broadcast scheme
Khabazian, Mehdi
2011-03-01
In this paper, we study the performance of a two-tier primary-secondary network based on the IEEE 802.11 broadcast scheme. We assume that a number of primary and secondary users coexist in the radio environment and share a single band. To protect the primary users' priority, the secondary users are allowed to contend for the channel only if they sense it idle for a certain sensing time. Considering an exponential packet inter-arrival time for the primary network, we model each primary user as an independent M/G/1 queue. Subsequently, we determine the primary users' average medium access delay in the presence of secondary users as well as the hybrid network's throughput. Numerical results and discussions show the effects of parameters pertaining to the secondary users, such as sensing time, packet payload size, and population size, on the performance of the primary network. Furthermore, we provide simulation results which confirm the accuracy of the proposed analysis. © 2011 IEEE.
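The M/G/1 building block used in this kind of analysis has a closed-form mean waiting time given by the Pollaczek-Khinchine formula, W = λ·E[S²] / (2(1 − ρ)) with utilization ρ = λ·E[S]. The sketch below is just this standard formula, not the paper's full 802.11 delay analysis, where the service time itself depends on channel contention.

```python
# Pollaczek-Khinchine mean waiting time for an M/G/1 queue.
# arrival_rate is lambda, mean_service is E[S], and
# second_moment_service is E[S^2].
def mg1_mean_wait(arrival_rate, mean_service, second_moment_service):
    rho = arrival_rate * mean_service
    assert rho < 1.0, "queue must be stable (rho < 1)"
    return arrival_rate * second_moment_service / (2.0 * (1.0 - rho))
```

For exponential service with mean 1 (so E[S²] = 2) and λ = 0.5, the formula reproduces the familiar M/M/1 mean wait of 1.0.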
DEFF Research Database (Denmark)
Bak, Claus Leth; Sztykiel, Michal; Dollerup, Sebastian
2011-01-01
can be applied with various systems, allowing obtaining the most optimal configuration of the protective relaying. The present paper describes modelling methodology on the basis of Siemens SIPROTEC 4 7SD522/610. Relay model was verified experimentally with its real equivalent by both EMTP...
Schouten, M.A.H.; Polman, N.B.P.; Westerhof, E.J.G.M.; Opdam, P.
2011-01-01
This paper proposes a spatial explicit agent-based model to evaluate the impact of agri-environment schemes on the spatial cohesion of agricultural landscapes in the light of habitat network patterns. Networks of nature reserves are being proposed as a solution when the degree of fragmentation is
Sikder, Safat; Hossain, Faisal
2016-09-01
Some of the world's largest and flood-prone river basins experience a seasonal flood regime driven by the monsoon weather system. Highly populated river basins with extensive rain-fed agricultural productivity such as the Ganges, Indus, Brahmaputra, Irrawaddy, and Mekong are examples of monsoon-driven river basins. It is therefore appropriate to investigate how precipitation forecasts from numerical models can advance flood forecasting in these basins. In this study, the Weather Research and Forecasting model was used to evaluate downscaling of coarse-resolution global precipitation forecasts from a numerical weather prediction model. Sensitivity studies were conducted using the TOPSIS analysis to identify the likely best set of microphysics and cumulus parameterization schemes, and spatial resolution from a total set of 15 combinations. This identified best set can pinpoint specific parameterizations needing further development to advance flood forecasting in monsoon-dominated regimes. It was found that the Betts-Miller-Janjic cumulus parameterization scheme with WRF Single-Moment 5-class, WRF Single-Moment 6-class, and Thompson microphysics schemes exhibited the most skill in the Ganges-Brahmaputra-Meghna basins. Finer spatial resolution (3 km) without cumulus parameterization schemes did not yield significant improvements. The short-listed set of the likely best microphysics-cumulus parameterization configurations was found to also hold true for the Indus basin. The lesson learned from this study is that a common set of model parameterization and spatial resolution exists for monsoon-driven seasonal flood regimes at least in South Asian river basins.
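The TOPSIS analysis used to short-list scheme combinations ranks alternatives by their closeness to an ideal solution. A compact sketch is given below; it assumes all criteria are benefit-type (higher is better) and equally weighted, whereas the paper's actual criteria, weights, and skill scores are not reproduced.

```python
# Compact TOPSIS sketch: rank alternatives (rows) over criteria
# (columns), all benefit-type and equally weighted.
def topsis(matrix):
    ncrit = len(matrix[0])
    # vector-normalize each criterion column
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncrit)]
    normed = [[row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    ideal = [max(col) for col in zip(*normed)]       # best value per criterion
    anti = [min(col) for col in zip(*normed)]        # worst value per criterion
    scores = []
    for row in normed:
        d_best = sum((v - i) ** 2 for v, i in zip(row, ideal)) ** 0.5
        d_worst = sum((v - a) ** 2 for v, a in zip(row, anti)) ** 0.5
        scores.append(d_worst / (d_best + d_worst))  # closeness coefficient
    return scores
```

The alternative with the highest closeness coefficient, here the configuration scoring best on every criterion, would be the recommended microphysics-cumulus combination.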
Majasalmi, Titta; Eisner, Stephanie; Astrup, Rasmus; Fridman, Jonas; Bright, Ryan M.
2018-01-01
Forest management affects the distribution of tree species and the age class of a forest, shaping its overall structure and functioning and, in turn, the surface-atmosphere exchanges of mass, energy, and momentum. In order to attribute climate effects to anthropogenic activities like forest management, good accounts of forest structure are necessary. Here, using Fennoscandia as a case study, we make use of Fennoscandic National Forest Inventory (NFI) data to systematically classify forest cover into groups of similar aboveground forest structure. An enhanced forest classification scheme and related lookup table (LUT) of key forest structural attributes (i.e., maximum growing season leaf area index (LAImax), basal-area-weighted mean tree height, tree crown length, and total stem volume) was developed, and the classification was applied for multisource NFI (MS-NFI) maps from Norway, Sweden, and Finland. To provide a complete surface representation, our product was integrated with the European Space Agency Climate Change Initiative Land Cover (ESA CCI LC) map of present-day land cover (v.2.0.7). Comparison of the ESA LC and our enhanced LC products (https://doi.org/10.21350/7zZEy5w3) showed that forest extent differed notably (κ = 0.55, accuracy 0.64) between the two products. To demonstrate the potential of our enhanced LC product to improve the description of the maximum growing season LAI (LAImax) of managed forests in Fennoscandia, we compared our LAImax map with reference LAImax maps created using the ESA LC product (and related cross-walking table) and PFT-dependent LAImax values used in three leading land models. Comparison of the LAImax maps showed that our product provides a spatially more realistic description of LAImax in managed Fennoscandian forests compared to the reference maps. This study presents an approach to account for the transient nature of forest structural attributes due to human intervention in different land models.
Chaouch, Naira; Temimi, Marouane; Weston, Michael; Ghedira, Hosni
2017-05-01
In this study, we intercompare seven different PBL schemes in WRF over the United Arab Emirates (UAE) and assess their impact on the performance of the simulations. The study covered five fog events reported in 2014 at Abu Dhabi International Airport. The analysis of synoptic conditions indicated that during all examined events, the UAE was under high geopotential pressure with light winds not exceeding 7 m/s at 850 hPa (~1.5 km). Seven PBL schemes, namely Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM), and MYNN level 3, were tested. In situ observations used in the model assessment included radiosonde data from Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles. Overall, all the tested PBL schemes showed comparable skill, with relatively better performance from the QNSE scheme. The average RH Root Mean Square Error (RMSE) and BIAS across all PBL schemes were 15.75% and -9.07%, respectively, whereas the RMSE and BIAS obtained with QNSE were 14.65% and -6.3%, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained for lead times between 12 and 18 h. In addition, the simulations performed better when the starting conditions were dry.
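The two skill scores used in the comparison above, RMSE and bias, have standard definitions and can be sketched in a few lines of NumPy; the humidity values below are invented for illustration:

```python
import numpy as np

def rmse(sim, obs):
    """Root mean square error between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def bias(sim, obs):
    """Mean error (simulated minus observed); negative means the model is too dry."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.mean(sim - obs))

# Illustrative relative-humidity series (percent); values are made up
obs = [78.0, 85.0, 90.0, 95.0]
sim = [70.0, 80.0, 85.0, 88.0]
err = rmse(sim, obs)
b = bias(sim, obs)
```

A negative bias, as reported for the RH simulations in the study, indicates systematic underestimation of humidity.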
Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan
2016-10-01
A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function, which includes the rotational velocity of a particle, is decoupled into two parts: the local equilibrium distribution function of the translational velocity and that of the rotational velocity. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, one related to the translational velocity and the other to the rotational velocity. Accordingly, the distribution function is also decoupled. The evolution equation then splits into an evolution equation for the translational velocity and one for the rotational velocity, and the two evolve separately. Because the lattice Boltzmann models used in the proposed scheme are constructed via the Hermite expansion, it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.
DEFF Research Database (Denmark)
Larsen, Morten Andreas Dahl; Højmark Rasmussen, Søren; Drews, Martin
2016-01-01
The land surface-atmosphere interaction is described differently in large scale surface schemes of regional climate models and small scale spatially distributed hydrological models. In particular, the hydrological models include the influence of shallow groundwater on evapotranspiration during dry… The experiments include five simulations. First, MIKE SHE is forced by observed climate data in two versions: i) with groundwater at a fixed uniform depth, and ii) with a dynamical groundwater component simulating shallow groundwater conditions in river valleys. iii) In a third simulation MIKE SHE is forced by HIRHAM simulated precipitation. The last two simulations include iv) a standard HIRHAM simulation, and v) a fully coupled HIRHAM-MIKE SHE simulation locally replacing the land surface scheme by MIKE SHE for the FIFE area, while HIRHAM in standard configuration is used for the remaining model area…
For debate: consensus injury definitions in team sports should focus on encompassing all injuries.
Hodgson, Lisa; Gissane, Conor; Gabbett, Tim J; King, Doug A
2007-05-01
The purpose of this paper is to highlight the most effective method of collecting injury data: using a definition that encompasses all injuries in the data collection system. Such a definition provides an accurate picture of injury incidence and also allows filtering of records so that data can be reported in a variety of comparable ways. A qualitative review of the literature in team sports, plus expert opinion, served as the basis for data collection strategies. Articles were retrieved from SportsDiscus and PubMed using the terms "sports injury definition" and "injury definition" for the period 1966 to November 2006. A major result supporting the use of an all-encompassing injury definition is that 70% to 92% of all injuries sustained fall into the transient category; that is, by only recording injuries that result in missed matches, the majority of injuries are missed and injury rates are therefore underreported. An injury definition should be as encompassing as possible, enabling a true, global picture of injury incidence in any team sport.
Morrison, James L.; Oladunjoye, Ganiyu Titi
2002-01-01
A survey of 287 business faculty found that few were infusing electronic commerce topics into existing curricula despite its growing use in business. Responses were similar regardless of faculty gender, region, and program size or level. (SK)
Analogical Argument Schemes and Complex Argument Structure
Directory of Open Access Journals (Sweden)
Andre Juthe
2015-09-01
Full Text Available This paper addresses several issues in argumentation theory. The overarching goal is to discuss how a theory of analogical argument schemes fits the pragma-dialectical theory of argument schemes and argument structures, and how one should properly reconstruct both single and complex argumentation by analogy. I also propose a unified model that explains how formally valid deductive argumentation relates to argument schemes in general and to analogical argument schemes in particular. The model suggests “scheme-specific validity”, i.e. that there are contrasting species of validity for each type of argument scheme that derive from one generic conception of validity.
Directory of Open Access Journals (Sweden)
D. Boutelier
2011-05-01
Full Text Available We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subduction, arc-continent or continental collision. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is subjected to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which allows maintaining an unobstructed view of the model surface and the use of a high resolution optical strain monitoring technique (Particle Image Velocimetry). Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.
Shepherd, Tristan J.; Walsh, Kevin J.
2017-08-01
This study investigates the effect of the choice of convective parameterization (CP) scheme on the simulated tracks of three intense tropical cyclones (TCs), using the Weather Research and Forecasting (WRF) model. We focus on diagnosing the competing influences of large-scale steering flow, beta drift and convectively induced changes in track, as represented by four different CP schemes: Kain-Fritsch (KF), Betts-Miller-Janjic (BMJ), Grell-3D (G-3), and Tiedtke (TD). The sensitivity of the results to initial conditions, model domain size and shallow convection is also tested. We employ a diagnostic technique by Chan et al. (J Atmos Sci 59:1317-1336, 2002) that separates the influence of the large-scale steering flow, beta drift and the modifications of the steering flow by the storm-scale convection. The combined effect of the steering flow and the beta drift causes TCs typically to move in the direction of the wavenumber-1 (WN-1) cyclonic potential vorticity tendency (PVT). For asymmetrical TCs, the simulated TC motion does not necessarily match the motion expected from the WN-1 PVT, due to changes in the convective pattern. In the present study, we test this concept in the WRF simulations and investigate whether, when the diagnosed motion from the WN-1 PVT and the TC motion do not match, this can be related to the emerging evolution of changes in convective structure. Several systematic results are found across the three cyclone cases. The sensitivity of TC track to initial conditions (the initialisation time and model domain size) is less than the sensitivity of TC track to changing the CP scheme. The simulated track is not overly sensitive to shallow convection in the KF, BMJ, and TD schemes, compared to the track differences between CP schemes. The G-3 scheme, however, is highly sensitive to whether shallow convection is used. Furthermore, while agreement between the simulated TC track direction and the WN-1 diagnostic is usually good, there are
1981-03-25
JAYCOR, 300 Unicorn Park Drive, Woburn, Massachusetts. …for the growth of the tropical cyclone, and leads to a gradual shift of the storm center toward the warm ocean. "Test of a Planetary Boundary Layer…" …growth characteristics because gravity waves and model physics act to smooth them. Besides, random observational errors are not the major problem with…
Directory of Open Access Journals (Sweden)
Yongkai An
2015-07-01
Full Text Available This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county, respectively, so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating a high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. It can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
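The Latin Hypercube Sampling step described above can be sketched in a few lines: each input dimension is divided into equal-probability strata, one sample is drawn per stratum, and the strata are shuffled across dimensions. This is a generic stand-in, not the authors' code, and the bounds on the well pumping rates are invented:

```python
import numpy as np

def latin_hypercube(n_samples: int, bounds, rng=None):
    """Draw n_samples points via Latin Hypercube Sampling.

    bounds: list of (low, high) tuples, one per input variable
    (e.g. pumping rates of the four exploitation wells).
    """
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # one stratified uniform draw per (sample, dimension) cell
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):              # decorrelate dimensions by shuffling strata
        rng.shuffle(u[:, j])
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

# Illustrative bounds for four well pumping rates (m^3/day); made-up numbers
samples = latin_hypercube(10, [(100.0, 500.0)] * 4, rng=0)
```

By construction, every column of `samples` contains exactly one point in each of the ten equal-width strata, which is what makes LHS more space-filling than plain random sampling for a given budget.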
Directory of Open Access Journals (Sweden)
B. M. Monge-Sanz
2013-09-01
Full Text Available This study evaluates effects and applications of a new linear parameterisation for stratospheric methane and water vapour. The new scheme (CoMeCAT) is derived from a 3-D full-chemistry-transport model (CTM). It is suitable for any global model, and is shown here to produce realistic profiles in the TOMCAT/SLIMCAT 3-D CTM and the ECMWF (European Centre for Medium-Range Weather Forecasts) general circulation model (GCM). Results from the new scheme are in good agreement with the full-chemistry CTM CH4 field and with observations from the Halogen Occultation Experiment (HALOE). The scheme is also used to derive stratospheric water increments, which in the CTM produce vertical and latitudinal H2O variations in fair agreement with satellite observations. Stratospheric H2O distributions in the ECMWF GCM show realistic overall features, although concentrations are smaller than in the CTM run (up to 0.5 ppmv smaller above 10 hPa). The potential of the new CoMeCAT tracer for evaluating stratospheric transport is exploited to assess the impacts of nudging the free-running GCM to ERA-40 and ERA-Interim reanalyses. The nudged GCM shows similar transport patterns to the offline CTM forced by the corresponding reanalysis data. The new scheme also impacts radiation and temperature in the model. Compared to the default CH4 climatology and H2O used by the ECMWF radiation scheme, the main effect on ECMWF temperatures when considering both CH4 and H2O from CoMeCAT is a decrease of up to 1.0 K over the tropical mid/low stratosphere. The effect of using the CoMeCAT scheme for radiative forcing (RF) calculations is investigated using the offline Edwards–Slingo radiative transfer model. Compared to the default model option of a tropospheric global 3-D CH4 value, the CoMeCAT distribution produces an overall change in the annual mean net RF of up to −30 mW m−2.
Spectral scheme for spacetime physics
International Nuclear Information System (INIS)
Seriu, Masafumi
2002-01-01
Based on the spectral representation of spatial geometry, we construct an analysis scheme for spacetime physics and cosmology which enables us to compare two or more universes with each other. In this scheme the spectral distance plays a central role: it is the measure of closeness between two geometries defined in terms of their spectra. We apply this scheme to the averaging problem in cosmology; we explicitly investigate the time evolution of the spectral distance between two nearby spatial geometries, simulating the relation between the real Universe and its model. We then formulate the criteria for a model to be a suitable one
Energy Technology Data Exchange (ETDEWEB)
Eum, Hyung-Il; Laprise, Rene [University of Quebec at Montreal, ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal, QC (Canada); Gachon, Philippe [University of Quebec at Montreal, ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal, QC (Canada); Environment Canada, Adaptation and Impacts Research Section, Climate Research Division, Montreal, QC (Canada); Ouarda, Taha [University of Quebec, INRS-ETE (Institut National de la Recherche Scientifique, Centre Eau-Terre-Environnement), Quebec, QC (Canada)
2012-04-15
This study presents a combined weighting scheme containing five attributes that reflect the accuracy of climate data on short-term (daily), mid-term (annual), and long-term (decadal) timescales, as well as in spatial pattern and extreme values, as simulated from Regional Climate Models (RCMs) with respect to observed and regional reanalysis products. Southern areas of the Quebec and Ontario provinces in Canada are used as the study area. Three series of simulations from two different versions of the Canadian RCM (CRCM4.1.1 and CRCM4.2.3) are employed over 23 years from 1979 to 2001, driven by both NCEP and ERA40 global reanalysis products. One regional reanalysis dataset over North America (NARR) is also used as a reference for comparison and validation purposes, along with gridded historical observed daily data of precipitation and temperatures; both series were interpolated beforehand onto the CRCM 45-km grid. Monthly weighting factors are calculated and then combined into four seasons to reflect the seasonal variability of climate data accuracy. In addition, this study generates weighted average references (WARs) with different weighting factors and ensemble sizes as a new reference climate data set. The simulation results indicate that the NARR is in general superior to the CRCM simulated precipitation values, but the CRCM4.1.1 provides the highest weighting factors during the winter season. For minimum and maximum temperature, the CRCM4.1.1 and the NARR products provide the highest weighting factors, respectively. The NARR provides more accurate short- and mid-term climate data, but the two versions of the CRCM provide more precise long-term data, spatial patterns and extreme events. Our study also confirms that the global reanalysis data (i.e. NCEP vs. ERA40) used as boundary conditions in the CRCM runs have non-negligible effects on the accuracy of CRCM simulated precipitation and temperature values. In addition, this study demonstrates
Faustin, J. M.; Graves, J. P.; Cooper, W. A.; Lanthaler, S.; Villard, L.; Pfefferlé, D.; Geiger, J.; Kazakov, Ye O.; Van Eester, D.
2017-08-01
Absorption of ion-cyclotron range of frequencies waves at the fundamental resonance is an efficient source of plasma heating and fast ion generation in tokamaks and stellarators. This heating method is planned to be exploited as a fast ion source in the Wendelstein 7-X stellarator. The work presented here assesses the possibility of using the newly developed three-ion species scheme (Kazakov et al (2015) Nucl. Fusion 55 032001) in tokamak and stellarator plasmas, which could offer the capability of generating more energetic ions than the traditional minority heating scheme with moderate input power. Using the SCENIC code, it is found that fast ions in the MeV range of energy can be produced in JET-like plasmas. The RF-induced particle pinch is seen to strongly impact the fast ion pressure profile in particular. Our results show that in typical high-density W7-X plasmas, the three-ion species scheme generates more energetic ions than the more traditional minority heating scheme, which makes the three-ion scenario promising for fast-ion confinement studies in W7-X.
Chung, Yun Won; Hwang, Ho Young
2010-01-01
In sensor networks, energy conservation is one of the most critical issues since sensor nodes should perform a sensing task for a long time (e.g., lasting a few years) but their batteries cannot be replaced in most practical situations. For this purpose, numerous energy conservation schemes have been proposed, and duty cycling is considered the most suitable power conservation technique, whereby sensor nodes alternate between states having different levels of power consumption. In order to analyze the energy consumption of an energy conservation scheme based on duty cycling, it is essential to obtain the probability of each state. In this paper, we analytically derive the steady state probabilities of the sensor node states, i.e., sleep, listen, and active, based on traffic characteristics and timer values, i.e., the sleep timer, listen timer, and active timer. The effect of traffic characteristics and timer values on the steady state probabilities and energy consumption is analyzed in detail. Our work provides sensor network operators with guidelines for selecting appropriate timer values for efficient energy conservation. The analytical methodology developed in this paper can be extended without much difficulty to other energy conservation schemes based on duty cycling with different sensor node states.
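The steady-state probabilities of a sleep/listen/active cycle can be obtained from the balance equations of a continuous-time Markov chain. A minimal sketch follows; the transition rates and per-state power draws are assumed for illustration and are not the paper's model, whose rates depend on its specific traffic and timer parameters:

```python
import numpy as np

# Hypothetical generator matrix (rates in 1/s); states: 0 = sleep, 1 = listen, 2 = active
Q = np.array([[-0.5,  0.5,  0.0],   # sleep timer expires -> listen
              [ 2.0, -3.0,  1.0],   # back to sleep, or traffic arrives -> active
              [ 0.0,  4.0, -4.0]])  # activity ends -> listen

# Solve pi @ Q = 0 together with sum(pi) = 1 as a least-squares system
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Expected power with assumed per-state draws (mW): sleep, listen, active
power = float(pi @ np.array([0.1, 10.0, 50.0]))
```

With such a chain in hand, one can sweep the timer-dependent rates and read off how the state probabilities, and hence the mean power, respond, which is the kind of guideline the paper derives analytically.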
Four-dimensional Hooke's law can encompass linear elasticity and inertia
International Nuclear Information System (INIS)
Antoci, S.; Mihich, L.
1999-01-01
The question is examined whether the formally straightforward extension of Hooke's time-honoured stress-strain relation to the four dimensions of special and of general relativity can make physical sense. The four-dimensional Hooke law is found able to account for the inertia of matter; in the flat-space, slow-motion approximation the field equations for the displacement four-vector field ξ i can encompass both linear elasticity and inertia. In this limit one just recovers the equations of motion of the classical theory of elasticity
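As a schematic rendering of the idea (the notation here is assumed for illustration, not copied from the paper), the four-dimensional extension relates a stress-momentum tensor linearly to the strain of a displacement four-vector field:

```latex
% Four-dimensional strain of the displacement field \xi_i
% (semicolon denotes the covariant derivative)
S_{ik} = \tfrac{1}{2}\left(\xi_{i;k} + \xi_{k;i}\right)
% Linear "stiffness" relation, formally extending Hooke's law to spacetime
T^{ik} = C^{iklm} S_{lm}
% Field equations from the vanishing divergence of T^{ik}
T^{ik}{}_{;k} = 0
```

In the flat-space, slow-motion limit described in the abstract, the spatial components of these equations reduce to classical linear elasticity while the time components supply the inertial terms.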
Han, Mei; Braun, Scott A.; Olson, William S.; Persson, P. Ola G.; Bao, Jian-Wen
2009-01-01
Seen by the human eye, precipitation particles are commonly drops of rain, flakes of snow, or lumps of hail that reach the ground. Remote sensors and numerical models usually deal with information about large collections of rain, snow, and hail (or graupel, also called soft hail) in a volume of air. Therefore, the size and number of the precipitation particles and how particles interact, evolve, and fall within the volume of air need to be represented using physical laws and mathematical tools, which are often implemented as cloud and precipitation microphysical parameterizations in numerical models. To account for the complexity of the precipitation physical processes, scientists have developed various types of such schemes in models. The accuracy of numerical weather forecasting may vary dramatically when different types of these schemes are employed. Therefore, systematic evaluations of cloud and precipitation schemes are of great importance for the improvement of weather forecasts. This study is one such endeavor; it pursues a quantitative assessment of all the available cloud and precipitation microphysical schemes in a weather model (MM5) through comparison with observations obtained by the National Aeronautics and Space Administration's (NASA) and Japan Aerospace Exploration Agency's (JAXA) Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and microwave imager (TMI). When satellite sensors (like the PR or TMI) detect information from precipitation particles, they cannot directly observe the microphysical quantities (e.g., water species phase, density, size, and amount). Instead, they tell how much radiation is absorbed by rain, reflected away from the sensor by snow or graupel, or reflected back to the satellite. On the other hand, the microphysical quantities in the model are well represented in microphysical schemes and can be converted to radiative properties that can be directly compared to the corresponding PR and TMI observations
de Menezes Neto, Otacilio L.; Coutinho, Mariane M.; Marengo, José A.; Capistrano, Vinícius B.
2017-08-01
Seasonal forest fires in the Amazon are the largest source of pollutants in South America. The impacts of biomass-burning aerosols on the temperature and energy balance in South America are investigated using climate simulations from 1979 to 2005 with HadGEM2-ES, which includes the hot plume-rise scheme (HPR) developed by Freitas et al. (Estudos Avançados 19:167-185, 2005, Atmos Chem Phys 7:3385-3398, 2007, Atmos Chem Phys 10:585-594, 2010). The HPR scheme estimates the vertical injection heights of biomass-burning aerosols based on the thermodynamic characteristics of the underlying model. Three experiments are performed. The first includes the HPR scheme, the second turns off both the HPR scheme and the effects of biomass aerosols (BIOMASS OFF), and the final experiment assumes that all biomass aerosols are released at the surface (HPR OFF). Relative to the BIOMASS OFF experiment, the temperature decreased in the HPR experiment as the net shortwave radiation at the surface decreased in the region with a large amount of biomass aerosols. Comparing the HPR and HPR OFF experiments shows that releasing biomass aerosols higher in the atmosphere affects temperature and the energy budget because the aerosols are transported by strong winds at upper atmospheric levels.
Häfliger, V.; Martin, E.; Boone, A. A.; Habets, F.; David, C. H.; Garambois, P. A.; Roux, H.; Ricci, S. M.; Thévenin, A.; Berthon, L.; Biancamaria, S.
2014-12-01
The ability of a regional hydrometeorological model to simulate water depth is assessed in order to prepare for the SWOT (Surface Water and Ocean Topography) mission, which will observe free surface water elevations for rivers wider than 50/100 m. The Garonne river (56 000 km2, in south-western France) was selected owing to the availability of operational gauges and the fact that different modeling platforms, the hydrometeorological model SAFRAN-ISBA-MODCOU and several fine scale hydraulic models, have been extensively evaluated over two reaches of the river. Several routing schemes, ranging from the simple Muskingum method to kinematic and diffusive wave schemes with time-varying parameters, are tested using predetermined hydraulic parameters. The results show that the variable flow velocity scheme is advantageous for discharge computations when compared to the original Muskingum routing method. Additionally, comparisons between water level computations and in situ observations led to root mean square errors of 50-60 cm for the improved Muskingum method and 40-50 cm for the kinematic-diffusive wave method in the downstream Garonne river. The error is larger than the anticipated SWOT resolution, showing the potential of the mission to improve knowledge of the continental water cycle. Discharge computations are also shown to be comparable to those obtained with high-resolution hydraulic models over two reaches. However, due to the high variability of river parameters (e.g. slope and river width), a robust averaging method is needed to compare the hydraulic model outputs and the regional model. Sensitivity tests are finally performed in order to better understand the mechanisms that control the key hydrological processes. The results give valuable information about the linearity, Gaussianity and symmetry of the model, in order to prepare the assimilation of river heights into the model.
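The classical Muskingum method mentioned above routes an inflow hydrograph through a reach with two parameters, a storage time constant K and a weighting factor X. A textbook sketch follows; the hydrograph and parameter values are illustrative, not the Garonne configuration:

```python
def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph (m^3/s) through a river reach.

    K: storage constant (same time unit as dt); X: weighting factor in [0, 0.5].
    The three routing coefficients sum to 1 by construction.
    """
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    outflow = [float(inflow[0])]               # assume initial steady state
    for t in range(1, len(inflow)):
        outflow.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[-1])
    return outflow

# Illustrative flood wave (m^3/s) at hourly steps; parameter values are made up
q_in = [100, 300, 680, 500, 400, 310, 230, 160, 110, 100]
q_out = muskingum_route(q_in, K=2.0, X=0.2, dt=1.0)
```

The routed hydrograph shows the two behaviours the method is built to capture: the peak is attenuated and arrives later than the inflow peak. Time-varying-parameter schemes like those tested in the study replace the constant K and X with values updated from the flow state.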
Yue, Qin
2016-01-01
We propose a modified Leslie-Gower predator-prey model with Holling-type II schemes and a prey refuge. The structure of equilibria and their linearized stability is investigated. By using the iterative technique and further precise analysis, sufficient conditions on the global attractivity of a positive equilibrium are obtained. Our results not only supplement but also improve some existing ones. Numerical simulations show the feasibility of our results.
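A numerical sketch of one common form of a modified Leslie-Gower model with a Holling-type II response and a constant-proportion prey refuge m follows; the functional form and all parameter values here are illustrative assumptions from the broader literature and may differ from the paper's exact formulation:

```python
def leslie_gower_holling2(x0, y0, t_end=200.0, dt=0.01,
                          r1=1.0, b1=0.1, a1=1.0, k1=1.0,
                          r2=0.5, a2=1.0, k2=1.0, m=0.3):
    """Forward-Euler integration of a modified Leslie-Gower predator-prey
    model with Holling-type II predation on the prey outside the refuge.
    Returns the final (prey, predator) state.
    """
    x, y = float(x0), float(y0)
    for _ in range(int(t_end / dt)):
        xa = (1 - m) * x                          # prey available outside the refuge
        dx = x * (r1 - b1 * x) - a1 * xa * y / (xa + k1)
        dy = y * (r2 - a2 * y / (xa + k2))        # Leslie-Gower predator growth
        x = max(x + dt * dx, 0.0)                 # clamp to keep states non-negative
        y = max(y + dt * dy, 0.0)
    return x, y

# Two different initial conditions approaching the same interior equilibrium,
# consistent with the global attractivity discussed in the abstract
x1, y1 = leslie_gower_holling2(5.0, 2.0)
x2, y2 = leslie_gower_holling2(12.0, 0.5)
```

With these particular parameters the interior equilibrium can be computed by hand (x* = 6.5, y* = 2.775), so the simulation doubles as a check on the integrator.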
Potocki-Shaffer deletion encompassing ALX4 in a patient with frontonasal dysplasia phenotype.
Ferrarini, Alessandra; Gaillard, Muriel; Guerry, Frederic; Ramelli, Gianpaolo; Fodstad, Heidi; Keddache, Caroline Verley; Wieland, Ilse; Beckmann, Jacques S; Jaquemont, Sébastien; Martinet, Danielle
2014-02-01
Frontonasal dysplasia (FND) is a genetically heterogeneous malformation spectrum with marked hypertelorism, broad nasal tip and bifid nose. Only a small number of genes have been associated with FND phenotypes until now, the first being EFNB1, related to craniofrontonasal syndrome (CFNS) with craniosynostosis in addition, and more recently the aristaless-like homeobox genes ALX3, ALX4, and ALX1, which have been related to distinct phenotypes named FND1, FND2, and FND3 respectively. We here report on a female patient presenting with severe FND features along with partial alopecia, hypogonadism and intellectual disability. While molecular investigations did not reveal mutations in any of the known genes ALX4, ALX3, ALX1 and EFNB1, comparative genomic hybridization (array CGH) showed a large heterozygous de novo deletion at 11p11.12p12, encompassing the ALX4 gene. Deletions in this region have been described in patients with Potocki-Shaffer syndrome (PSS), characterized by biparietal foramina, multiple exostoses, and intellectual disability. Although the patient reported herein manifests some overlapping features of FND and PSS, the observed phenotype may be due to a second, unidentified mutation in the ALX4 gene. The phenotype will be discussed in view of the deleted region encompassing the ALX4 gene. © 2013 Wiley Periodicals, Inc.
Petrova, Desislava; Koopman, Siem Jan; Ballester, Joan; Rodó, Xavier
2017-02-01
El Niño (EN) is a dominant feature of climate variability on inter-annual time scales, driving changes in climate throughout the globe and having widespread natural and socio-economic consequences. Its forecast is therefore an important task, and predictions are issued on a regular basis by a wide array of prediction schemes and climate centres around the world. This study explores a novel method for EN forecasting: the advantageous statistical technique of unobserved components time series modeling, also known as structural time series modeling, which has not previously been applied to this problem. We have therefore developed such a model, in which the statistical analysis, including parameter estimation and forecasting, is based on state space methods and includes the celebrated Kalman filter. The distinguishing feature of this dynamic model is the decomposition of a time series into a range of stochastically time-varying components such as level (or trend), seasonal, cycles of different frequencies, irregular, and regression effects incorporated as explanatory covariates. These components are modeled separately and ultimately combined in a single forecasting scheme. Customary statistical models for EN prediction essentially use SST and wind stress in the equatorial Pacific. In addition to these, we introduce a new set of regression variables accounting for the state of the subsurface ocean temperature in the western and central equatorial Pacific, motivated by our analysis as well as by recent and classical research showing that subsurface processes and heat accumulation there are fundamental for the genesis of EN. An important feature of the scheme is that different regression predictors are used at different lead months, thus capturing the dynamical evolution of the system and rendering more efficient forecasts. The new model has been tested with the prediction of all warm events that occurred in the period 1996-2015. Retrospective forecasts of these
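The unobserved-components idea, decomposing a series into latent components and filtering them with the Kalman filter, can be sketched with the simplest member of that family, the local level model. This toy omits the seasonal, cycle, and regression components of the authors' full scheme, and the noise variances below are assumed rather than estimated:

```python
import numpy as np

def local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.1):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t.
    Returns filtered level estimates and one-step-ahead predictions of y_t.
    """
    n = len(y)
    mu = np.zeros(n)        # filtered state estimates
    pred = np.zeros(n)      # one-step-ahead predictions
    a, p = y[0], 1e6        # quasi-diffuse initialisation of the state
    for t in range(n):
        pred[t] = a
        f = p + sigma_eps2              # prediction-error variance
        k = p / f                       # Kalman gain
        a = a + k * (y[t] - a)          # update with the new observation
        p = p * (1 - k) + sigma_eta2    # variance of the next prediction
        mu[t] = a
    return mu, pred

# Illustrative noisy series around a slowly drifting level
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0, 0.1, 120)) + rng.normal(0, 1.0, 120)
level, forecasts = local_level_filter(y)
```

Richer structural models add seasonal and cyclical state components and regression covariates (such as the subsurface temperature predictors above) to the same recursion, which is why a single filtering scheme can combine them all.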
Energy Technology Data Exchange (ETDEWEB)
Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)
2017-04-15
The present contribution focuses on the accuracy of reflection-type boundary conditions in the Stokes–Brinkman–Darcy modeling of porous flows solved with the lattice Boltzmann method (LBM), which we operate with the two-relaxation-time (TRT) collision and the Brinkman-force based scheme (BF), together called the BF-TRT scheme. In parallel, we compare it with the Stokes–Brinkman–Darcy linear finite element method (FEM), where the Dirichlet boundary conditions are enforced on grid vertices. In bulk, both BF-TRT and FEM share the same defect: in their discretization a correction to the modeled Brinkman equation appears, given by the discrete Laplacian of the velocity-proportional resistance force. This correction modifies the effective Brinkman viscosity and plays a crucial role in triggering spurious oscillations in the bulk solution. While the exact form of this defect is available for lattice-aligned (straight or diagonal) flows, for arbitrary flow/lattice orientations an approximation is constructed. At boundaries, we verify that such a Brinkman viscosity correction has an even more harmful impact: already at first order, it shifts the location of the no-slip wall condition supported by traditional LBM boundary schemes, such as the bounce-back rule. For that reason, this work develops a new class of boundary schemes to prescribe the Dirichlet velocity condition at an arbitrary wall/boundary-node distance and to support higher order accuracy in the accommodation of the TRT-Brinkman solutions. For their modeling, we consider the standard BF scheme and its improved version, called IBF; the latter is generalized in this work to suppress or reduce the viscosity correction in arbitrarily oriented flows. Our framework extends the one- and two-point families of linear and parabolic link-wise boundary schemes, respectively called B-LI and B-MLI, which avoid the interference of the Brinkman viscosity correction in their closure relations. The performance of LBM
Energy Technology Data Exchange (ETDEWEB)
Mihailovic, D.T.; Pielke, R.A.; Rajkovic, B.; Lee, T.J.; Jeftic, M. (Novi Sad Univ. (Yugoslavia); Colorado State Univ., Fort Collins (United States); Belgrade Univ. (Yugoslavia))
1993-06-01
In the parameterization of land surface processes, attention must be devoted to surface evaporation, one of the main processes in the air-land energy exchange. One of the most widely used approaches is the resistance representation, which requires the calculation of aerodynamic resistances. These resistances are calculated using K theory for different morphologies of plant communities; the performance of the evaporation schemes within the alpha, beta, and combined alpha-beta approaches, which parameterize evaporation from bare and partly plant-covered soil surfaces, is then discussed. Additionally, a new alpha scheme is proposed, based on an assumed power-law dependence of alpha on the volumetric soil moisture content and its saturated value. Finally, the performance of the considered and proposed schemes is tested in time integrations using real data: the first dataset is for 4 June 1982 and the second for 3 June 1981, both from the experimental site at Rimski Sancevi, Yugoslavia, on chernozem soil, representative of a bare and a partly plant-covered surface, respectively. 63 refs.
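The proposed power-law alpha scheme can be sketched as follows; the exponent p and the bulk formula E = alpha * E_potential are illustrative assumptions, not the paper's exact formulation:

```python
def alpha_power(theta, theta_sat, p=2.0):
    """Illustrative power-law alpha: surface humidity factor as a
    function of volumetric soil moisture theta and its saturated
    value theta_sat (exponent p is a free parameter)."""
    return min(max(theta / theta_sat, 0.0), 1.0) ** p

def bare_soil_evaporation(E_pot, theta, theta_sat, p=2.0):
    """Alpha-type scheme (bulk sketch): scale potential evaporation
    by the soil-moisture-dependent alpha factor."""
    return alpha_power(theta, theta_sat, p) * E_pot

# Dry soil (theta well below saturation) strongly limits evaporation
print(bare_soil_evaporation(E_pot=5.0, theta=0.15, theta_sat=0.45))
```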
Tightly Secure Signatures From Lossy Identification Schemes
Abdalla, Michel; Fouque, Pierre-Alain; Lyubashevsky, Vadim; Tibouchi, Mehdi
2015-01-01
In this paper, we present three digital signature schemes with tight security reductions in the random oracle model. Our first signature scheme is a particularly efficient version of the short-exponent discrete-log-based scheme of Girault et al. (J Cryptol 19(4):463–487, 2006). Our scheme has a tight reduction to the decisional short discrete logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the or...
Kumar, Sivakumar Prasanth; Jha, Prakash C; Jasrai, Yogesh T; Pandya, Himanshu A
2016-01-01
The estimation of atomic partial charges of small molecules for calculating molecular interaction fields (MIFs) is an important step in field-based quantitative structure-activity relationship (QSAR) modeling. Several studies have shown that the choice of partial charge scheme drastically affects the prediction accuracy of the QSAR model, and have focused on selecting charge models that provide the highest cross-validated correlation coefficient (q²) to explain the variation in chemical structures against biological endpoints. This study shifts the focus toward understanding which molecular regions explain the SAR under various charge models, and toward recognizing a consensus picture of activity-correlating molecular regions. We selected eleven diverse datasets and developed MIF-based QSAR models using various charge schemes including Gasteiger-Marsili, Del Re, Merck Molecular Force Field, Hückel, Gasteiger-Hückel, and Pullman. The resulting QSAR models were then compared with the Open3DQSAR model to interpret the MIF descriptors decisively. We suggest that the regions for activity contribution or optimization can be effectively determined by studying various charge-based models to understand the SAR precisely.
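The leave-one-out q² statistic by which the charge models are ranked can be computed as below; the linear model and synthetic data are illustrative stand-ins for a real MIF-descriptor matrix:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 for a least-squares model:
    q^2 = 1 - PRESS / sum((y - mean(y))^2)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i          # drop observation i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ coef) ** 2
    return 1.0 - press / ((y - y.mean()) ** 2).sum()

# Synthetic example: 30 compounds, 2 descriptors plus intercept
rng = np.random.default_rng(0)
X = np.c_[np.ones(30), rng.normal(size=(30, 2))]
y = X @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.normal(size=30)
print(round(loo_q2(X, y), 3))
```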
Pereira, Fabio F.; Farinosi, Fabio; Arias, Mauricio E.; Lee, Eunjee; Briscoe, John; Moorcroft, Paul R.
2017-09-01
Land surface models are excellent tools for studying how climate change and land use affect surface hydrology. However, in order to assess the impacts of Earth processes on river flows, simulated changes in runoff need to be routed through the landscape. In this technical note, we describe the integration of the Ecosystem Demography (ED2) model with a hydrological routing scheme. The purpose of the study was to create a tool capable of incorporating into hydrological predictions the terrestrial ecosystem responses to climate, carbon dioxide, and land-use change, as simulated with terrestrial biosphere models. The resulting ED2+R model calculates the lateral routing of surface and subsurface runoff resulting from the terrestrial biosphere model's vertical water balance in order to determine spatiotemporal patterns of river flows within the simulated region. We evaluated the ED2+R model in the Tapajós, a 476 674 km2 river basin in the southeastern Amazon, Brazil. The results showed that the integration of ED2 with the lateral routing scheme results in an adequate representation (Nash-Sutcliffe efficiency up to 0.76, Kling-Gupta efficiency up to 0.86, Pearson's R up to 0.88, and volume ratio up to 1.06) of daily to decadal river flow dynamics in the Tapajós. These results are a consistent step forward with respect to the lack of river representation common among terrestrial biosphere models, such as the initial version of ED2.
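The skill scores quoted above are standard hydrological metrics and can be computed as follows (Kling-Gupta in its 2009 formulation; the observation vector is illustrative):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus residual sum of squares over
    the variance of the observations (1.0 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation r, variability
    ratio alpha, and bias ratio beta (1.0 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])   # illustrative daily flows
print(nse(obs, obs), kge(obs, obs))  # a perfect simulation scores 1.0
```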
Energy Technology Data Exchange (ETDEWEB)
Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)
2007-10-15
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques; despite many attempts, there is to date no widely accepted and readily available non-invasive monitoring technique. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.
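As a hedged illustration of the underlying spectroscopic idea (not the paper's adaptive modelling scheme), concentrations of absorbing species can be recovered from a multi-wavelength absorbance measurement via the Beer-Lambert law, A(lambda) = sum_j eps_j(lambda) * c_j * l, solved as a linear least-squares problem; the extinction coefficients below are made-up numbers:

```python
import numpy as np

# Hypothetical molar extinction coefficients (rows: wavelengths,
# columns: species, e.g. glucose and a background absorber).
E = np.array([[0.9, 0.2],
              [0.4, 0.7],
              [0.1, 1.1],
              [0.6, 0.5]])
path_length = 1.0               # optical path length, cm
c_true = np.array([0.8, 0.3])   # true concentrations (arbitrary units)

# Noiseless absorbance spectrum from the Beer-Lambert law
A = E @ c_true * path_length

# Inversion: least-squares estimate of the concentrations
c_est, *_ = np.linalg.lstsq(E * path_length, A, rcond=None)
print(c_est)
```

In practice the measured spectrum is noisy and the effective extinction spectra drift with the biological medium, which is what motivates an adaptive scheme rather than a fixed calibration.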
Ramezanpour, H R; Setayeshi, S; Akbari, M E
2011-01-01
Determining an optimal and effective scheme for administering chemotherapy agents in breast cancer is the main goal of this research. The central issue is the amount of drug or radiation administered in chemotherapy and radiotherapy to increase the patient's survival: the therapy not only kills tumor cells but also kills some healthy tissue and causes serious damage. We investigate the effect of optimal drug scheduling in a breast cancer model that consists of nonlinear ordinary differential time-delay equations. A mathematical model of breast cancer tumors is discussed, and optimal control theory is then applied to find the optimal drug adjustment as an input control of the system; the resulting optimal control problem is solved with a Sensitivity Approach (SA). The goal is a scheme under which the tumor is eradicated while the immune system remains above a suitable level. Simulation results confirm the effectiveness of the proposed procedure. In contrast to traditional pulse drug delivery, a continuous delivery process is proposed and optimized according to optimal control theory for time-delay systems.
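A minimal sketch of the kind of dynamics involved (a toy logistic tumor with a drug-kill term and a continuous infusion schedule, not the paper's time-delay model), integrated with forward Euler:

```python
def simulate_tumor(u, N0=1.0, r=0.3, K=10.0, d=0.5, dt=0.01, T=50.0):
    """Toy tumor dynamics dN/dt = r*N*(1 - N/K) - d*u(t)*N under a
    continuous drug schedule u(t); forward-Euler integration.
    (Illustrative only; not the paper's delay-differential model.)"""
    N, t, traj = N0, 0.0, []
    while t < T:
        N += dt * (r * N * (1 - N / K) - d * u(t) * N)
        N = max(N, 0.0)          # cell count cannot go negative
        t += dt
        traj.append(N)
    return traj

untreated = simulate_tumor(lambda t: 0.0)
treated = simulate_tumor(lambda t: 1.0)   # constant continuous infusion
print(untreated[-1], treated[-1])
```

In the optimal control setting, u(t) would additionally be penalized for toxicity and chosen to keep healthy-tissue and immune compartments above prescribed levels.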
Zheng, Yang; Zhou, Jianzhong; Xu, Yanhe; Zhang, Yuncheng; Qian, Zhongdong
2017-05-01
This paper proposes a distributed model predictive control based load frequency control (MPC-LFC) scheme to improve control performance in the frequency regulation of power systems. In order to reduce the computational burden of the rolling optimization with a sufficiently large prediction horizon, orthonormal Laguerre functions are utilized to approximate the predicted control trajectory. Closed-loop stability of the proposed MPC scheme is achieved by adding a terminal equality constraint to the online quadratic optimization and taking the cost function as the Lyapunov function. Furthermore, the treatment of some typical constraints in load frequency control is studied based on the specific Laguerre-based formulations. Simulations have been conducted in two different interconnected power systems to validate the effectiveness of the proposed distributed MPC-LFC as well as its superiority over comparative methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
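A stripped-down sketch of receding-horizon MPC on a toy scalar frequency-deviation model (without the Laguerre parameterization, constraints, or distributed coordination, which are the paper's contributions; the model coefficients are made up):

```python
import numpy as np

# Toy discrete model of frequency deviation: x[k+1] = a*x[k] + b*u[k]
a, b = 0.95, 0.1
Np = 10        # prediction horizon
lam = 0.01     # control-effort weight

# Prediction matrices: x_pred = F*x0 + Phi @ u, with u the future inputs
F = np.array([a ** (i + 1) for i in range(Np)])
Phi = np.array([[a ** (i - j) * b if j <= i else 0.0
                 for j in range(Np)] for i in range(Np)])

def mpc_step(x0):
    """Minimize sum(x_pred^2) + lam*sum(u^2) in closed form and apply
    only the first input (receding horizon)."""
    H = Phi.T @ Phi + lam * np.eye(Np)
    u = np.linalg.solve(H, -Phi.T @ (F * x0))
    return u[0]

x = 1.0  # initial frequency deviation
for _ in range(30):
    x = a * x + b * mpc_step(x)
print(x)
```

Approximating u over the horizon with a few Laguerre coefficients, as in the paper, shrinks the decision vector from Np entries to the number of basis functions, which is what cuts the online QP cost.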
Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.
2017-09-01
How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass, and representing this allocation correctly is thus a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. a dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allows C allocation to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but which, unlike (i), includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.-iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen forests (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (by between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (by between 1222 and 7557 g C m-2) for both evergreen and deciduous sites, due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C
Differential Chromatin Structure Encompassing Replication Origins in Transformed and Normal Cells
Di Paola, Domenic; Rampakakis, Emmanouil; Chan, Man Kid
2012-01-01
This study examines the chromatin structure encompassing replication origins in transformed and normal cells. Analysis of the global levels of histone H3 acetylated at K9&14 (open chromatin) and histone H3 trimethylated at K9 (closed chromatin) revealed a higher ratio of open to closed chromatin in the transformed cells. Also, the trithorax and polycomb group proteins, Brg-1 and Bmi-1, respectively, were overexpressed and more abundantly bound to chromatin in the transformed cells. Quantitative comparative analyses of episomal and in situ chromosomal replication origin activity as well as chromatin immunoprecipitation (ChIP) assays, using specific antibodies targeting members of the pre-replication complex (pre-RC) as well as open/closed chromatin markers encompassing both episomal and chromosomal origins, revealed that episomal origins had similar levels of in vivo activity, nascent DNA abundance, pre-RC protein association, and elevated open chromatin structure at the origin in both cell types. In contrast, the chromosomal origins corresponding to 20mer1, 20mer2, and c-myc displayed a 2- to 3-fold higher activity and pre-RC protein abundance as well as higher ratios of open to closed chromatin and of Brg-1 to Bmi-1 in the transformed cells, whereas the origin associated with the housekeeping lamin B2 gene exhibited similar levels of activity, pre-RC protein abundance, and higher ratios of open to closed chromatin and of Brg-1 to Bmi-1 in both cell types. Nucleosomal positioning analysis, using an MNase-Southern blot assay, showed that all the origin regions examined were situated within regions of inconsistently positioned nucleosomes, with the nucleosomes being spaced farther apart from each other prior to the onset of S phase in both cell types. Overall, the results indicate that cellular transformation is associated with differential epigenetic regulation, whereby chromatin structure is more open, rendering replication origins more accessible to initiator
Cultural Respect Encompassing Simulation Training: Being Heard About Health Through Broadband
Min-Yu Lau, Phyllis; Woodward-Kron, Robyn; Livesay, Karen; Elliott, Kristine; Nicholson, Patricia
2016-01-01
Background: Cultural Respect Encompassing Simulation Training (CREST) is a learning program that uses simulation to provide health professional students and practitioners with strategies to communicate sensitively with culturally and linguistically diverse (CALD) patients. It consists of training modules with a cultural competency evaluation framework and CALD simulated patients who interact with trainees in immersive simulation scenarios. The aim of this study was to test the feasibility of expanding the delivery of CREST to rural Australia using live video streaming, and to investigate the fidelity of cultural sensitivity – defined within the process of cultural competency, which includes awareness, knowledge, skills, encounters and desire – of the streamed simulations. Design and Methods: In this mixed-methods evaluative study, health professional trainees were recruited at three rural academic campuses and one rural hospital to pilot CREST sessions delivered via live video streaming and simulation from the city campus in 2014. Cultural competency, teaching and learning evaluations were conducted. Results: Forty-five participants rated 26 reliable items before and after each session and reported statistically significant improvement in 4 of 5 cultural competency domains, particularly in cultural skills. Conclusions: The Cultural Respect Encompassing Simulation Training (CREST) program offers opportunities for health professional students and practitioners to learn and develop communication skills with professionally trained, culturally and linguistically diverse simulated patients who contribute their experiences and health perspectives. It has already been shown to contribute to health professionals' learning and to be effective in improving cultural competency in urban settings. This study demonstrates that CREST, when delivered via live video streaming and simulation, can achieve similar results in rural settings. PMID:27190975
Zhang, Chunxi; Wang, Yuqing
2018-01-01
The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted moisture space. By transporting moist static energy from dry to moist region, the low-level circulation is important to convective self-aggregation which is believed to be related to genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and horizontal distribution of precipitation are trivial, indicating that the CMT modulates the TCLVs/TCs activities in the model by mechanisms other than the horizontal transport of moist static energy.
Simple scheme for gauge mediation
International Nuclear Information System (INIS)
Murayama, Hitoshi; Nomura, Yasunori
2007-01-01
We present a simple scheme for constructing models that achieve successful gauge mediation of supersymmetry breaking. In addition to our previous work [H. Murayama and Y. Nomura, Phys. Rev. Lett. 98, 151803 (2007)] that proposed drastically simplified models using metastable vacua of supersymmetry breaking in vectorlike theories, we show there are many other successful models using various types of supersymmetry-breaking mechanisms that rely on enhanced low-energy U(1)_R symmetries. In models where supersymmetry is broken by elementary singlets, one needs to assume U(1)_R-violating effects are accidentally small, while in models where composite fields break supersymmetry, the emergence of approximate low-energy U(1)_R symmetries can be understood simply on dimensional grounds. Even though the scheme still requires somewhat small parameters to sufficiently suppress gravity mediation, we discuss their possible origins due to dimensional transmutation. The scheme accommodates a wide range of the gravitino mass to avoid cosmological problems.
TVD schemes for open channel flow
Delis, A. I.; Skeels, C. P.
1998-04-01
The Saint Venant equations for modelling flow in open channels are solved in this paper, using a variety of total variation diminishing (TVD) schemes. The performance of second- and third-order-accurate TVD schemes is investigated for the computation of free-surface flows, in predicting dam-breaks and extreme flow conditions created by the river bed topography. Convergence of the schemes is quantified by comparing error norms between subsequent iterations. Automatically calculated time steps and entropy corrections allow high CFL numbers and smooth transition between different conditions. In order to compare different approaches with TVD schemes, the most accurate of each type was chosen. All four schemes chosen proved acceptably accurate. However, there are important differences between the schemes in the occurrence of clipping, overshooting and oscillating behaviour and in the highest CFL numbers allowed by a scheme. These variations in behaviour stem from the different orders and inherent properties of the four schemes.
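A minimal TVD finite-volume sketch for linear advection (a toy scalar stand-in for the Saint Venant system) using a minmod-limited MUSCL reconstruction; the defining property is that the total variation does not grow between time steps:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, else the smaller slope."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def tvd_advection_step(q, cfl):
    """One step of limited MUSCL + upwind flux for q_t + q_x = 0
    (unit advection speed, periodic boundaries)."""
    dq_l = q - np.roll(q, 1)          # backward differences
    dq_r = np.roll(q, -1) - q         # forward differences
    slope = minmod(dq_l, dq_r)
    # Second-order upwind state at each cell's right interface
    q_face = q + 0.5 * (1 - cfl) * slope
    flux = q_face                     # flux = u*q with u = 1
    return q - cfl * (flux - np.roll(flux, 1))

def total_variation(q):
    return abs(q - np.roll(q, 1)).sum()

q = np.where(np.arange(100) < 50, 1.0, 0.0)   # step (dam-break-like) profile
tv0 = total_variation(q)
for _ in range(40):
    q = tvd_advection_step(q, cfl=0.5)
print(total_variation(q) <= tv0 + 1e-12)
```

The limiter is what suppresses the clipping/overshooting behaviour the abstract compares across schemes: an unlimited second-order update would oscillate at the discontinuities and increase the total variation.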
International Nuclear Information System (INIS)
Galán, J; Verleysen, P; Lebensohn, R A
2014-01-01
A new algorithm for the solution of the deformation of a polycrystalline material using a self-consistent scheme, and its integration as part of the finite element software Abaqus/Standard are presented. The method is based on the original VPSC formulation by Lebensohn and Tomé and its integration with Abaqus/Standard by Segurado et al. The new algorithm has been implemented as a set of Fortran 90 modules, to be used either from a standalone program or from Abaqus subroutines. The new implementation yields the same results as VPSC7, but with a significantly better performance, especially when used in multicore computers. (paper)
Numerical schemes for explosion hazards
International Nuclear Information System (INIS)
Therme, Nicolas
2015-01-01
In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level-set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the construction of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme, and a source term is added to the discrete internal energy balance equation to preserve the exact total energy balance in the limit. High-order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Moreover, it satisfies a weak entropy inequality in the limit. Concerning the front propagation, we transform the flame front evolution equation (the so
Good governance for pension schemes
Thornton, Paul
2011-01-01
Regulatory and market developments have transformed the way in which UK private sector pension schemes operate. This has increased demands on trustees and advisors and the trusteeship governance model must evolve in order to remain fit for purpose. This volume brings together leading practitioners to provide an overview of what today constitutes good governance for pension schemes, from both a legal and a practical perspective. It provides the reader with an appreciation of the distinctive characteristics of UK occupational pension schemes, how they sit within the capital markets and their social and fiduciary responsibilities. Providing a holistic analysis of pension risk, both from the trustee and the corporate perspective, the essays cover the crucial role of the employer covenant, financing and investment risk, developments in longevity risk hedging and insurance de-risking, and best practice scheme administration.
Directory of Open Access Journals (Sweden)
Kowal Robert
2016-12-01
A simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the many fundamental questions about the model concerns proving the efficiency of the commonly used OLS estimators and examining their properties. The literature contains certain solutions in this regard, methodically borrowed from the multiple regression model or from a boundary partial model; not everything there, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints; choosing an appropriate calibrator for each variable then leads directly to showing this property. The choice of such a calibrator is a separate matter. These deliberations, on account of their volume and the kinds of calibration involved, were divided into several parts. In this one, the efficiency of OLS estimators is proven in a mixed scheme of calibration by averages, that is, a preliminary scheme within the most basic frame of the proposed methodology; within this frame, the outlines and general premises underlying further generalizations are developed.
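The efficiency claim at stake (Gauss-Markov: OLS has the smallest sampling variance among linear unbiased estimators) can be checked numerically; the competing unbiased estimator below, the slope through the first and last points, is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 20)
beta0, beta1, sigma = 1.0, 2.0, 0.5

ols_slopes, alt_slopes = [], []
for _ in range(5000):
    y = beta0 + beta1 * x + sigma * rng.normal(size=x.size)
    # OLS slope estimator
    ols_slopes.append(((x - x.mean()) * (y - y.mean())).sum()
                      / ((x - x.mean()) ** 2).sum())
    # An alternative linear unbiased slope estimator:
    # the slope through the first and last observations only
    alt_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))

# Both are unbiased, but OLS has the smaller sampling variance
print(np.var(ols_slopes) < np.var(alt_slopes))
```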
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and of flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and of its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction of computer memory and an improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling intervals in the discretization.
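A compact unpreconditioned BiCGSTAB sketch in plain numpy (the paper's multigrid preconditioner is omitted, and the tiny test system is illustrative rather than a Helmholtz matrix):

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, maxiter=200):
    """Unpreconditioned BiCGSTAB for a nonsymmetric system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()               # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v          # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)  # stabilization step
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
    return x

# Small nonsymmetric, diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b))
```

In the paper's setting, each iteration would additionally apply one multigrid cycle on the preconditioning matrix; the smoother choice within that cycle is where the anisotropic-sampling issue discussed above enters.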
Deletion 1q43 encompassing only CHRM3 in a patient with autistic disorder.
Petersen, Andrea Klunder; Ahmad, Ausaf; Shafiq, Mustafa; Brown-Kipphut, Brigette; Fong, Chin-To; Anwar Iqbal, M
2013-02-01
Deletions on the distal portion of the long arm of chromosome 1 result in complex and highly variable clinical phenotypes which include intellectual disability, autism, seizures, microcephaly/craniofacial dysmorphology, corpus callosal agenesis/hypogenesis, cardiac and genital anomalies, hand and foot abnormalities and short stature. Genotype-phenotype correlation reported a minimum region of 2 Mb at 1q43-q44. We report on a 3 ½ year old male patient diagnosed with autistic disorder who has social withdrawal, eating problems, repetitive stereotypic behaviors including self-injurious head banging and hair pulling, and no seizures, anxiety, or mood swings. Array comparative genomic hybridization (aCGH) showed an interstitial deletion of 473 kb at 1q43 region (239,412,391-239,885,394; NCBI build37/hg19) harboring only CHRM3 (Acetylcholine Receptor, Muscarinic, 3; OMIM: 118494). Recently, another case with a de novo interstitial deletion of 911 kb at 1q43 encompassing three genes including CHRM3 was reported. The M3 muscarinic receptor influences a multitude of central and peripheral nervous system processes via its interaction with acetylcholine and may be an important modulator of behavior, learning and memory. We propose CHRM3 as a candidate gene responsible for our patient's specific phenotype as well as the overlapping phenotypic features of other patients with 1q43 or 1q43-q44 deletions. Copyright © 2013. Published by Elsevier Masson SAS.
Malkin, Tamsin L; Heard, Dwayne E; Hood, Christina; Stocker, Jenny; Carruthers, David; MacKenzie, Ian A; Doherty, Ruth M; Vieno, Massimo; Lee, James; Kleffmann, Jörg; Laufs, Sebastian; Whalley, Lisa K
2016-07-18
Air pollution is the environmental factor with the greatest impact on human health in Europe. Understanding the key processes driving air quality across the relevant spatial scales, especially during pollution exceedances and episodes, is essential to provide effective predictions for both policymakers and the public. It is particularly important for policy regulators to understand the drivers of local air quality that can be regulated by national policies versus the contribution from regional pollution transported from mainland Europe or elsewhere. One of the main objectives of the Coupled Urban and Regional processes: Effects on AIR quality (CUREAIR) project is to determine local and regional contributions to ozone events. A detailed zero-dimensional (0-D) box model run with the Master Chemical Mechanism (MCMv3.2) is used as the benchmark model against which the less explicit chemistry mechanisms of the Generic Reaction Set (GRS) and the Common Representative Intermediates (CRIv2-R5) schemes are evaluated. GRS and CRI are used by the Atmospheric Dispersion Modelling System (ADMS-Urban) and the regional chemistry transport model EMEP4UK, respectively. The MCM model uses a near-explicit chemical scheme for the oxidation of volatile organic compounds (VOCs) and is constrained to observations of VOCs, NOx, CO, HONO (nitrous acid), photolysis frequencies and meteorological parameters measured during the ClearfLo (Clean Air for London) campaign. The sensitivity of the less explicit chemistry schemes to different model inputs has been investigated: Constraining GRS to the total VOC observed during ClearfLo as opposed to VOC derived from ADMS-Urban dispersion calculations, including emissions and background concentrations, led to a significant increase (674% during winter) in modelled ozone. The inclusion of HONO chemistry in this mechanism, particularly during wintertime when other radical sources are limited, led to substantial increases in the ozone levels predicted
Amorati, Roberta; Rizzi, Rolando
2002-03-20
A fast-forward radiative transfer (RTF) model is presented that includes cloud-radiation interaction for any number of cloud layers. Layer cloud fraction and transmittance are treated separately and combined with that of gaseous transmittances. RTF is tested against a reference procedure that uses line-by-line gaseous transmittances and solves the radiative transfer equation by use of the adding-doubling method to handle multiple-scattering conditions properly. The comparison is carried out for channels 8, 12, and 14 of the High Resolution Infrared Radiation Sounder (HIRS/2) and for the geostationary satellite METEOSAT thermal infrared and water vapor channels. Fairly large differences in simulated radiances by the two schemes are found in clear conditions for upper- and mid-tropospheric channels; the cause of the differences is discussed. For cloudy situations an improved layer source function is shown to be required when rapid changes in atmospheric transmission are experienced within the model layers. The roles of scattering processes are discussed; results with and without scattering, both obtained by use of a reference code, are compared. Overall, the presented results show that the fast model is capable of reproducing the cloudy results of the much more complex and time-consuming reference scheme.
International Nuclear Information System (INIS)
Li, R.
2012-01-01
The aim of this research dissertation is to study natural and mixed convection of fluid flows, and to develop and validate numerical schemes for interface tracking, in order later to treat incompressible and immiscible fluid flows. In a first step, an original numerical method, based on Finite Volume discretizations, is developed for modeling low Mach number flows with large temperature gaps. Three physical applications of air flowing through vertical heated parallel plates were investigated. We showed that the optimum spacing corresponding to the peak heat flux transferred from an array of isothermal parallel plates cooled by mixed convection is smaller than that for natural or forced convection when the pressure drop at the outlet is kept constant. We also proved that mixed convection flows resulting from an imposed flow rate may exhibit unexpected physical solutions; an alternative model based on a prescribed total pressure at the inlet and a fixed pressure at the outlet sections gives more realistic results. For channels heated by a heat flux on one wall only, surface radiation tends to suppress the onset of recirculations at the outlet and to equalise the wall temperatures. In a second step, the mathematical model coupling the incompressible Navier-Stokes equations and the Level-Set method for interface tracking is derived. Improvements in fluid volume conservation obtained by using high-order discretization (ENO-WENO) schemes for the transport equation and variants of the signed distance equation are discussed. (author)
Energy Technology Data Exchange (ETDEWEB)
Yang, Ben; Qian, Yun; Berg, Larry K.; Ma, Po-Lun; Wharton, Sonia; Bulaevskaya, Vera; Yan, Huiping; Hou, Zhangshuan; Shaw, William J.
2016-07-21
We evaluate the sensitivity of simulated turbine-height winds to 26 parameters applied in a planetary boundary layer (PBL) scheme and a surface layer scheme of the Weather Research and Forecasting (WRF) model over an area of complex terrain during the Columbia Basin Wind Energy Study. An efficient sampling algorithm and a generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of modeled turbine-height winds. The results indicate that most of the variability in the ensemble simulations is contributed by parameters related to the dissipation of the turbulence kinetic energy (TKE), Prandtl number, turbulence length scales, surface roughness, and the von Kármán constant. The relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability. The parameter associated with the TKE dissipation rate is found to be the most important one, and a larger dissipation rate can produce larger hub-height winds. A larger Prandtl number results in weaker nighttime winds. Increasing surface roughness reduces the frequencies of both extremely weak and strong winds, implying a reduction in the variability of the wind speed. All of the above parameters can significantly affect the vertical profiles of wind speed, the altitude of the low-level jet and the magnitude of the wind shear strength. The wind direction is found to be modulated by the same subset of influential parameters. Remainder of abstract is in attachment.
Breeding schemes in reindeer husbandry
Directory of Open Access Journals (Sweden)
Lars Rönnegård
2003-04-01
Full Text Available The objective of the paper was to investigate annual genetic gain from selection (G), and the influence of selection on the inbreeding effective population size (Ne), for different possible breeding schemes within a reindeer herding district. The breeding schemes were analysed for different proportions of the population within a herding district included in the selection programme. Two different breeding schemes were analysed: an open nucleus scheme where males mix and mate between owner flocks, and a closed nucleus scheme where the males in non-selected owner flocks are culled to maximise G in the whole population. The theory of expected long-term genetic contributions was used and maternal effects were included in the analyses. Realistic parameter values were used for the population, modelled with 5000 reindeer in the population and a sex ratio of 14 adult females per male. The standard deviation of calf weights was 4.1 kg. Four different situations were explored and the results showed: 1. When the population was randomly culled, Ne equalled 2400. 2. When the whole population was selected on calf weights, Ne equalled 1700 and the total annual genetic gain (direct + maternal) in calf weight was 0.42 kg. 3. For the open nucleus scheme, G increased monotonically from 0 to 0.42 kg as the proportion of the population included in the selection programme increased from 0 to 1.0, and Ne decreased correspondingly from 2400 to 1700. 4. In the closed nucleus scheme the lowest value of Ne was 1300. For a given proportion of the population included in the selection programme, the difference in G between a closed nucleus scheme and an open one was up to 0.13 kg. We conclude that for mass selection based on calf weights in herding districts with 2000 animals or more, there are no risks of inbreeding effects caused by selection.
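The study's figures come from long-term genetic contribution theory with maternal effects; as a hedged sketch, the classical first-order formulas such analyses start from can be written down directly. The heritability, selection intensity, and generation interval below are assumed values for illustration, not parameters from the paper.

```python
# Classical starting-point formulas for breeding-scheme calculations.
# h2, intensity and gen_interval are illustrative assumptions.

def effective_population_size(n_males, n_females):
    """Wright's formula for unequal sex ratio under random mating."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def annual_genetic_gain(intensity, h2, sd_phenotype, gen_interval):
    """Breeder's equation per generation, divided by the generation interval."""
    return intensity * h2 * sd_phenotype / gen_interval

# 5000 reindeer with 14 adult females per adult male (from the abstract):
n_m = 5000 / 15.0
n_f = 5000 * 14.0 / 15.0
ne = effective_population_size(n_m, n_f)   # about 1244 under this idealisation

# sigma_P = 4.1 kg is from the abstract; i, h^2 and L are assumed:
gain = annual_genetic_gain(intensity=1.0, h2=0.3, sd_phenotype=4.1,
                           gen_interval=4.5)  # kg per year
```

Note that Wright's idealised formula gives a lower Ne than the 2400 reported above; the paper's contribution-theory treatment, which accounts for overlapping generations and maternal effects, is what produces the reported values.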
DEFF Research Database (Denmark)
Njage, Patrick Murigu Kamau; Sawe, Chemutai Tonui; Onyango, Cecilia Moraa
2017-01-01
Current approaches such as inspections, audits, and end product testing cannot detect the distribution and dynamics of microbial contamination. Despite the implementation of current food safety management systems, foodborne outbreaks linked to fresh produce continue to be reported. A microbial assessment scheme and statistical modeling were used to systematically assess the microbial performance of core control and assurance activities in five Kenyan fresh produce processing and export companies. Generalized linear mixed models and correlated random-effects joint models for multivariate clustered data, followed by empirical Bayes estimates, enabled the analysis of the probability of contamination across critical sampling locations (CSLs) and factories as a random effect. Salmonella spp. and Listeria monocytogenes were not detected in the final products. However, none of the processors attained...
Familial X/Y Translocation Encompassing ARSE in Two Moroccan Siblings with Sensorineural Deafness.
Amasdl, Saadia; Smaili, Wiam; Natiq, Abdelhafid; Hassani, Amale; Sbiti, Aziza; Agadr, Aomar; Sanlaville, Damien; Sefiani, Abdelaziz
2017-01-01
Unbalanced translocations involving X and Y chromosomes are rare and associated with a contiguous gene syndrome. The clinical phenotype is heterogeneous including mainly short stature, chondrodysplasia punctata, ichthyosis, hypogonadism, and intellectual disability. Here, we report 2 brothers with peculiar gestalt, short stature, and hearing loss, who harbor an X/Y translocation. Physical examination, brainstem acoustic potential evaluation, bone age, hormonal assessment, and X-ray investigations were performed. Because of their dysmorphic features, karyotyping, FISH, and aCGH were carried out. The probands had short stature, hypertelorism, midface hypoplasia, sensorineural hearing loss, normal intelligence as well as slight radial and ulnar bowing with brachytelephalangy. R-banding identified a derivative X chromosome with an abnormally expanded short arm. The mother was detected as a carrier of the same aberrant X chromosome. aCGH disclosed a 3.1-Mb distal deletion of chromosome region Xp22.33pter. This interval encompasses several genes, especially the short stature homeobox (SHOX) and arylsulfatase (ARSE) genes. The final karyotype of the probands was: 46,Y,der(X),t(X;Y)(p22;q12).ish der(X)(DXYS129-,DXYS153-)mat.arr[hg19] Xp22.33(61091_2689408)×1mat,Xp22.33(2701273_3258404)×0mat,Yq11.222q12 (21412851_59310245)×2. Herein, we describe a Moroccan family with a maternally inherited X/Y translocation and discuss the genotype-phenotype correlations according to the deleted genes. © 2017 S. Karger AG, Basel.
Diagnosis and treatment of a C2-osteoblastoma encompassing the vertebral artery.
Stavridis, Stavros I; Pingel, Andreas; Schnake, Klaus John; Kandziora, Frank
2013-11-01
Osteoblastoma is a rare, benign bone tumor that accounts for approximately 1% of all primary bone tumors and 5% of spinal tumors, mostly arising within the posterior elements of the spine in the second and third decades of life. Nonspecific initial symptoms, mainly neck or back pain and stiffness of the spine, often remain undiagnosed, and the destructive nature of the expanding tumor can even cause neurological deficits. CT and MRI scans constitute the basic imaging modalities employed in diagnosis and preoperative planning, with the former delineating the location and osseous involvement of the mass and the latter providing appreciation of the effect on soft tissues and neural elements. In our case, a 23-year-old male presented with persisting head and neck pain after being involved in a car collision a month earlier. Although the initial diagnostic imaging, including plain X-rays and an MRI scan, failed to reveal any pathological findings, the persistence of the symptoms led to repeat imaging (CT and MRI) that showed a benign osseous tumor of the C2 lamina that was destroying the surrounding osseous structures and encompassing the right vertebral artery. The suspicion of an osteoblastoma was raised and the decision for surgical removal of the tumor was made, to treat the persistent symptoms and prevent a possible neurological deficit or vascular lesion. A marginal tumor resection was performed through a posterior approach, followed by an anterior instrumented fusion. Histological examination confirmed the diagnosis of an osteoblastoma. The recovery of the patient was uneventful and significant symptom subsidence was reported following surgery. Eighteen months postoperatively the patient remains pain free without any indication of tumor recurrence. This case delineates the difficulties in diagnosing this tumor, as well as the challenges and problems encountered in its surgical management, and also the favorable prognosis when adequately
Cultural respect encompassing simulation training: being heard about health through broadband
Directory of Open Access Journals (Sweden)
Phyllis Min-yu Lau
2016-04-01
Full Text Available Background. Cultural Respect Encompassing Simulation Training (CREST) is a learning program that uses simulation to provide health professional students and practitioners with strategies to communicate sensitively with culturally and linguistically diverse (CALD) patients. It consists of training modules with a cultural competency evaluation framework and CALD simulated patients to interact with trainees in immersive simulation scenarios. The aim of this study was to test the feasibility of expanding the delivery of CREST to rural Australia using live video streaming; and to investigate the fidelity of cultural sensitivity – defined within the process of cultural competency which includes awareness, knowledge, skills, encounters and desire – of the streamed simulations. Design and Methods. In this mixed-methods evaluative study, health professional trainees were recruited at three rural academic campuses and one rural hospital to pilot CREST sessions via live video streaming and simulation from the city campus in 2014. Cultural competency, teaching and learning evaluations were conducted. Results. Forty-five participants rated 26 reliable items before and after each session and reported statistically significant improvement in 4 of 5 cultural competency domains, particularly in cultural skills (P<0.05. Qualitative data indicated an overall acknowledgement amongst participants of the importance of communication training and the quality of the simulation training provided remotely by CREST. Conclusions. Cultural sensitivity education using live video-streaming and simulation can contribute to health professionals’ learning and is effective in improving cultural competency. CREST has the potential to be embedded within health professional curricula across Australian universities to address issues of health inequalities arising from a lack of cultural sensitivity training.
Directory of Open Access Journals (Sweden)
Sergi Pérez-Jorge
Full Text Available Along the East African coast, marine top predators are facing an increasing number of anthropogenic threats which requires the implementation of effective and urgent conservation measures to protect essential habitats. Understanding the role that habitat features play in marine top predators' distribution and abundance is a crucial step to evaluate the suitability of an existing Marine Protected Area (MPA), originally designated for the protection of coral reefs. We developed species distribution models (SDMs) on the IUCN data-deficient Indo-Pacific bottlenose dolphin (Tursiops aduncus) in southern Kenya. We followed a comprehensive ecological modelling approach to study the environmental factors influencing the occurrence and abundance of dolphins while developing SDMs. Through the combination of ensemble prediction maps, we defined recurrent, occasional and unfavourable habitats for the species. Our results showed the influence of dynamic and static predictors on the dolphins' spatial ecology: dolphins may select shallow areas (5-30 m), close to the reefs (<500 m) and oceanic fronts (<10 km), and adjacent to the 100 m isobath (<5 km). We also predicted a significantly higher occurrence and abundance of dolphins within the MPA. Recurrent and occasional habitats were identified in large percentages of the existing MPA (47% and 57% using presence-absence and abundance models, respectively). However, the MPA does not adequately encompass all occasional and recurrent areas and, within this context, we propose to extend the MPA to incorporate all of them, as they are likely key habitats for this highly mobile species. The results from this study provide two key conservation and management tools: (i) an integrative habitat modelling approach to predict key marine habitats, and (ii) the first study evaluating the effectiveness of an existing MPA for marine mammals in the Western Indian Ocean.
Directory of Open Access Journals (Sweden)
M. Spada
2013-12-01
Full Text Available One of the major sources of uncertainty in model estimates of the global sea-salt aerosol distribution is the emission parameterization. We evaluate a new sea-salt aerosol life cycle module coupled to the online multiscale chemical transport model NMMB/BSC-CTM. We compare 5 yr global simulations using five state-of-the-art sea-salt open-ocean emission schemes with monthly averaged coarse aerosol optical depth (AOD) from selected AERONET sun photometers, surface concentration measurements from the University of Miami's Ocean Aerosol Network, and measurements from two NOAA/PMEL cruises (AEROINDOEX and ACE1). Model results are highly sensitive to the introduction of sea-surface-temperature (SST)-dependent emissions and to the accounting of spume particle production. Emission ranges from 3888 Tg yr−1 to 8114 Tg yr−1, lifetime varies between 7.3 h and 11.3 h, and the average column mass load is between 5.0 Tg and 7.2 Tg. Coarse AOD is reproduced with an overall correlation of around 0.5 and with normalized biases ranging from +8.8% to +38.8%. Surface concentration is simulated with normalized biases ranging from −9.5% to +28% and the overall correlation is around 0.5. Our results indicate that SST-dependent emission schemes improve the overall model performance in reproducing surface concentrations. On the other hand, they lead to an overestimation of the coarse AOD at tropical latitudes, although this may be affected by uncertainties in the comparison due to the use of all-sky model AOD, the treatment of water uptake, deposition and optical properties in the model, and/or an inaccurate size distribution at emission.
Xu, Jianhui; Zhang, Feifei; Zhao, Yi; Shu, Hong; Zhong, Kaiwen
2016-07-01
For the large-area snow depth (SD) data sets with high spatial resolution in the Altay region of Northern Xinjiang, China, we present a deterministic ensemble Kalman filter (DEnKF)-albedo assimilation scheme that considers the common land model (CoLM) subgrid heterogeneity. In the albedo assimilation of DEnKF-albedo, the assimilated albedos over each subgrid tile are estimated with the MCD43C1 bidirectional reflectance distribution function (BRDF) parameters product and the CoLM-calculated solar zenith angle. The BRDF parameters are hypothesized to be consistent over all subgrid tiles within a specified grid. In the snow cover fraction (SCF) assimilation of DEnKF-albedo, a DEnKF combined with a snow density-based observation operator accounts for the effects of the CoLM subgrid heterogeneity and is employed to assimilate MODIS SCF to update SD states over all subgrid tiles. The MODIS SCF over a grid is compared with the area-weighted sum of model-predicted SCF over all the subgrid tiles within the grid. The results are validated with in situ SD measurements and the AMSR-E product. Compared with the model-only simulations, the DEnKF-albedo scheme can reduce errors in the SD simulations and accurately capture the seasonal variability of SD. Furthermore, it can improve simulations of the SD spatiotemporal distribution in the Altay region, which are more accurate and show more detail than the AMSR-E product.
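The DEnKF analysis step referred to above can be illustrated with a minimal scalar sketch, following Sakov and Oke's deterministic EnKF: the ensemble mean is updated with the full Kalman gain and the anomalies with half the gain. The snow-depth ensemble, the saturating depth-to-SCF operator, and the error values below are illustrative assumptions, not the paper's configuration.

```python
# Scalar DEnKF analysis step: full gain for the mean, half gain for anomalies.
# Numbers and the depth-to-SCF operator are illustrative assumptions.

def denkf_update(ensemble, obs, h, obs_var):
    n = len(ensemble)
    xm = sum(ensemble) / n
    hx = [h(x) for x in ensemble]                 # predicted observations
    hm = sum(hx) / n
    ax = [x - xm for x in ensemble]               # state anomalies
    ah = [v - hm for v in hx]                     # predicted-obs anomalies
    pxy = sum(a * b for a, b in zip(ax, ah)) / (n - 1)
    pyy = sum(b * b for b in ah) / (n - 1)
    gain = pxy / (pyy + obs_var)                  # Kalman gain
    xm_a = xm + gain * (obs - hm)                 # mean: full gain
    ax_a = [a - 0.5 * gain * b for a, b in zip(ax, ah)]  # anomalies: half gain
    return [xm_a + a for a in ax_a]

# Snow-depth ensemble (m) and a saturating depth-to-SCF operator (assumed):
sd_ens = [0.2, 0.3, 0.4, 0.5]
scf = lambda sd: min(1.0, sd / 0.4)
analysis = denkf_update(sd_ens, obs=1.0, h=scf, obs_var=0.01)
# An observed full snow cover pulls the ensemble mean upward and
# shrinks the ensemble spread.
```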
Yang, Ben; Qian, Yun; Berg, Larry K.; Ma, Po-Lun; Wharton, Sonia; Bulaevskaya, Vera; Yan, Huiping; Hou, Zhangshuan; Shaw, William J.
2017-01-01
We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. The relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.
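The sampling-plus-generalized-linear-model sensitivity analysis described in the two records above can be sketched on a toy response. The linear response function, parameter names, and coefficients below are assumptions for illustration, not the WRF/MYNN configuration.

```python
# Variance-based parametric sensitivity via a fitted linear (main-effects)
# model on independently sampled inputs. The toy response is assumed.
import random

random.seed(0)
n = 2000
names = ["tke_dissipation", "prandtl_number", "roughness_length"]
true_coef = {"tke_dissipation": 3.0, "prandtl_number": 1.0,
             "roughness_length": 0.2}

samples = {k: [random.uniform(-1, 1) for _ in range(n)] for k in names}
y = [sum(true_coef[k] * samples[k][i] for k in names) + random.gauss(0, 0.1)
     for i in range(n)]

def mean(v):
    return sum(v) / len(v)

ym = mean(y)
yvar = mean([(yi - ym) ** 2 for yi in y])

# With independently sampled inputs, each main-effect coefficient is
# cov(x_j, y) / var(x_j); its variance share is beta^2 * var(x_j) / var(y).
share = {}
for k in names:
    xs = samples[k]
    xm = mean(xs)
    xvar = mean([(xi - xm) ** 2 for xi in xs])
    cov = mean([(xi - xm) * (yi - ym) for xi, yi in zip(xs, y)])
    beta = cov / xvar
    share[k] = beta ** 2 * xvar / yvar

ranked = sorted(share, key=share.get, reverse=True)
# ranked[0] recovers the dominant parameter ("tke_dissipation" here)
```

This mirrors the study's finding qualitatively: the parameter with the largest coefficient dominates the output variance of the ensemble.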
An improved numerical scheme with the fully-implicit two-fluid model for a fast-running system code
International Nuclear Information System (INIS)
Jeong, J.J.; No, H.C.
1987-01-01
A new computational method is implemented in the FISA-2 (Fully-Implicit Safety Analysis-2) code to simulate the thermal-hydraulic response to hypothetical accidents in nuclear power plants. The basic field equations of FISA-2 consist of the mixture continuity equation, the void propagation equation, two phasic momentum equations, and two phasic energy equations. The fully-implicit scheme is used to eliminate the time step limitation, and the computation time per time step is minimized as much as possible by reducing the size of the matrix to be solved. The phasic energy equations, written in nonconservation form, are solved after they are set up to be decoupled from the other field equations. The void propagation equation is solved to obtain the void fraction. Spatial acceleration terms in the phasic momentum equations are manipulated with the phasic continuity equations so that the pseudo-phasic mass flux may be expressed in terms of pressure only. Putting the pseudo-phasic mass flux into the mixture continuity equation, we obtain linear equations with pressure variables as the only unknowns. By solving the linear equations, pressures at all the nodes are obtained, and in turn the other variables are obtained by back-substitution. The above procedure is repeated until the convergence criterion is satisfied. Reasonable accuracy, no stability limitation, and fast running are confirmed by comparing results from FISA-2 with experimental data and results from other codes. (orig.)
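The pressure-elimination step described above can be sketched schematically; the notation below (fluxes G, coefficients a and b, phases k) is assumed for illustration and is not taken from the FISA-2 paper:

```latex
% Discretised momentum gives a pseudo-phasic mass flux affine in pressure:
G_{k,\,j+1/2}^{\,n+1} = a_{k,\,j+1/2} \;-\; b_{k,\,j+1/2}\,\bigl(p_{j+1}^{\,n+1} - p_{j}^{\,n+1}\bigr),
\qquad k = \ell,\, v .
% Substituting into the mixture continuity equation at node j,
\frac{\rho_{m,j}^{\,n+1}-\rho_{m,j}^{\,n}}{\Delta t}
+ \frac{1}{\Delta x}\sum_{k}\Bigl(G_{k,\,j+1/2}^{\,n+1}-G_{k,\,j-1/2}^{\,n+1}\Bigr) = 0,
% leaves a tridiagonal linear system A\,p^{n+1} = s in the nodal pressures only;
% all other variables then follow by back-substitution, as the abstract states.
```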
Hanasaki, N.; Yoshikawa, S.; Pokhrel, Y. N.; Kanae, S.
2017-12-01
Humans abstract water from various sources to sustain their livelihood and society. Some global hydrological models (GHMs) include explicit schemes of human water management, but the representation and performance of these schemes remain limited. We substantially enhanced the human water management schemes of the H08 GHM by incorporating the latest data and techniques. The model enables us to estimate water abstraction from six major water sources, namely, river flow regulated by global reservoirs (i.e., reservoirs regulating the flow of the world's major rivers), aqueduct water transfer, local reservoirs, seawater desalination, renewable groundwater, and nonrenewable groundwater. All the interactions were simulated in a single computer program, and the water balance was always strictly closed at any place and time during the simulation period. Using this model, we first conducted a historical global hydrological simulation at a spatial resolution of 0.5° × 0.5° to specify the sources of water for humanity. The results indicated that, in 2000, of the 3628 km³ yr⁻¹ global freshwater requirement, 2839 km³ yr⁻¹ was taken from surface water and 789 km³ yr⁻¹ from groundwater. Streamflow, aqueduct water transfer, local reservoirs, and seawater desalination accounted for 1786, 199, 106, and 1.8 km³ yr⁻¹ of the surface water, respectively. The remaining 747 km³ yr⁻¹ of the freshwater requirement was unmet, i.e., surface water was not available when and where it was needed in our simulation. Renewable and nonrenewable groundwater accounted for 607 and 182 km³ yr⁻¹ of the groundwater total, respectively. Second, we evaluated water stress using our simulations and contrasted it with earlier global assessments based on empirical water scarcity indicators, namely, the Withdrawal to Availability ratio and the Falkenmark index (annual renewable water resources per capita). We found that inclusion of water infrastructures in our model diminished water stress in some parts of the world, on
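The source-by-source figures quoted in the abstract close to within rounding; a quick arithmetic check (values copied from the abstract, in km³ yr⁻¹):

```python
# Budget check of the water-source figures quoted in the abstract (km^3/yr).
surface = {"streamflow": 1786, "aqueduct_transfer": 199,
           "local_reservoirs": 106, "desalination": 1.8, "unmet": 747}
groundwater = {"renewable": 607, "nonrenewable": 182}

assert abs(sum(surface.values()) - 2839) < 1   # surface-water total (rounding)
assert sum(groundwater.values()) == 789        # groundwater total
assert 2839 + 789 == 3628                      # global freshwater requirement
```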
Directory of Open Access Journals (Sweden)
M. Cassiani
2016-11-01
Full Text Available The offline FLEXible PARTicle (FLEXPART) stochastic dispersion model is currently a community model used by many scientists. Here, an alternative FLEXPART model version has been developed and tailored for use with the meteorological output data generated by the CMIP5 version of the Norwegian Earth System Model (NorESM1-M). The atmospheric component of NorESM1-M is based on the Community Atmosphere Model (CAM4); hence, this FLEXPART version could be widely applicable, and it provides a new advanced tool to directly analyse and diagnose atmospheric transport properties of the state-of-the-art climate model NorESM in a reliable way. The adaptation of FLEXPART to NorESM required new routines to read meteorological fields, new post-processing routines to obtain the vertical velocity in the FLEXPART coordinate system, and other changes. These are described in detail in this paper. To validate the model, several tests were performed that offered the possibility to investigate some aspects of offline global dispersion modelling. First, a comprehensive comparison was made between the tracer transport from several point sources around the globe calculated online by the transport scheme embedded in CAM4 and by the FLEXPART model applied offline on output data. The comparison allowed investigating several aspects of the transport schemes, including the approximation introduced by using an offline dispersion model with the need to transform the vertical coordinate system, the influence on the model results of the sub-grid-scale parameterisations of convection and boundary layer height, and the possible advantage entailed in using a numerically non-diffusive Lagrangian particle solver. Subsequently, a comparison between the reference FLEXPART model and the FLEXPART–NorESM/CAM version was performed to compare the well-mixed state of the atmosphere in a 1-year global simulation. The two model versions use different methods to obtain the vertical velocity but no
Directory of Open Access Journals (Sweden)
J.-P. Vergnes
2012-10-01
Full Text Available Groundwater is a non-negligible component of the global hydrological cycle, and its interaction with overlying unsaturated zones can influence water and energy fluxes between the land surface and the atmosphere. Despite its importance, groundwater is not yet represented in most climate models. In this paper, the simple groundwater scheme implemented in the Total Runoff Integrating Pathways (TRIP) river routing model is applied in off-line mode at global scale using a 0.5° model resolution. The simulated river discharges are evaluated against a large dataset of about 3500 gauging stations compiled from the Global Runoff Data Centre (GRDC) and other sources, while the terrestrial water storage (TWS) variations derived from the Gravity Recovery and Climate Experiment (GRACE) satellite mission help to evaluate the simulated TWS. The forcing fields (surface runoff and deep drainage) come from an independent simulation of the Interactions between Soil-Biosphere-Atmosphere (ISBA) land surface model covering the period from 1950 to 2008. Results show that groundwater improves the efficiency scores for about 70% of the gauging stations and deteriorates them for 15%. The simulated TWS are also in better agreement with the GRACE estimates. These results are mainly explained by the lag introduced by the low-frequency variations of groundwater, which tend to shift and smooth the simulated river discharges and TWS. A sensitivity study on the global precipitation forcing used in ISBA to produce the forcing fields is also proposed. It shows that the groundwater scheme is not influenced by the uncertainties in precipitation data.
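The shift-and-smooth behaviour attributed to the groundwater scheme can be illustrated with a minimal linear-reservoir sketch; the time constant and forcing below are assumptions for illustration, and TRIP's actual scheme is more elaborate.

```python
# Linear-reservoir routing sketch: storage S obeys dS/dt = R - Q with
# outflow Q = S / tau, which lags and smooths the recharge signal.

def linear_reservoir(recharge, tau, dt=1.0, s0=0.0):
    """Explicit-Euler routing; requires dt < tau for stability."""
    s, outflow = s0, []
    for r in recharge:
        s += dt * (r - s / tau)
        outflow.append(s / tau)
    return outflow

# A single recharge pulse is released slowly over many time steps:
q = linear_reservoir([10.0] + [0.0] * 19, tau=5.0)
# q[0] == 2.0, then a geometric recession q[t+1] == (1 - dt/tau) * q[t] == 0.8 * q[t]
```

The same mechanism is what shifts and damps the simulated discharges and TWS in the abstract: high-frequency recharge variability is attenuated by the reservoir's low-pass response.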
Fuentes-Franco, Ramon; Giorgi, Filippo; Coppola, Erika; Zimmermann, Klaus
2016-04-01
The sensitivity of simulated tropical cyclones (TCs) to resolution and convection scheme parameterization is investigated over the CORDEX Central America domain. The performance of the simulations, performed for a ten-year period (1989-1998) using ERA-Interim reanalysis as boundary and initial conditions, is assessed considering 50 km and 25 km resolutions and the use of two different convection schemes: Emanuel (Em) and Kain-Fritsch (KF). Two ocean surface flux schemes are also compared: the Monin-Obukhov scheme and the one proposed by Zeng et al. (1998). By comparing with observations for the whole period, we assess the spatial representation of the TCs and their intensity. At the interannual scale we assess the representation of their variability, and at the daily scale we compare observed and simulated tracks in order to establish a measure of how similar the simulated tracks are to those observed. In general, the simulations using the KF convection scheme show higher TC density, as well as longer-duration TCs (up to 15 days) with stronger winds (>50 m s−1), than those using Em (<40 m s−1). Similar results were found for simulations at 25 km compared with 50 km resolution. All simulations show a better spatial representation of simulated TC density and its interannual variability over the Tropical North Atlantic Ocean (TNA) than over the Eastern Tropical Pacific Ocean (ETP). The 25 km resolution simulations show an overestimation of TC density compared to observations over the ETP off the coast of Mexico. The duration of the TCs in simulations at 25 km resolution is similar to the observations, while it is underestimated at 50 km resolution. The Monin-Obukhov ocean flux scheme overestimates the number of TCs, while the Zeng parameterization gives a number similar to observations in both oceans. At the daily scale, in general all simulations capture the density of cyclones during highly active TC seasons over the TNA; however, the tracks generally are not coincident with observations, except for highly
International Nuclear Information System (INIS)
Pustelny, Szymon; Jackson Kimball, Derek F.; Pankow, Chris; Ledbetter, Micah P.; Wlodarczyk, Przemyslaw; Wcislo, Piotr; Pospelov, Maxim; Smith, Joshua R.; Read, Jocelyn; Gawlik, Wojciech; Budker, Dmitry
2013-01-01
A novel experimental scheme enabling the investigation of transient exotic spin couplings is discussed. The scheme is based on synchronous measurements of optical-magnetometer signals from several devices operating in magnetically shielded environments in distant locations (≳100 km apart). Although signatures of such exotic couplings may be present in the signal from a single magnetometer, it would be challenging to distinguish them from noise. By analyzing the correlation between signals from multiple, geographically separated magnetometers, it is not only possible to identify the exotic transient but also to investigate its nature. The ability of the network to probe presently unconstrained physics beyond the Standard Model is examined by considering the spin coupling to stable topological defects (e.g., domain walls) of axion-like fields. In the spirit of this research, a brief (≈2 hours) demonstration experiment involving two magnetometers located in Krakow and Berkeley (≈9000 km separation) is presented, and data-analysis approaches that may allow identification of transient signals are discussed. The prospects of the network are outlined in the last part of the paper. (copyright 2013 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
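The core idea above — a genuine transient appears coherently in distant sensors while local noise does not — can be sketched with a toy cross-correlation. All signal parameters below are invented for illustration and are not taken from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.arange(n)

# Hypothetical common transient (e.g. a domain-wall crossing) buried in
# independent sensor noise at two distant stations.
transient = 5.0 * np.exp(-0.5 * ((t - 1200) / 20.0) ** 2)
sig_a = transient + rng.normal(0, 1, n)   # magnetometer A
sig_b = transient + rng.normal(0, 1, n)   # magnetometer B (distant site)

# Cross-correlate the two records; a shared transient produces a peak
# near zero lag that independent noise cannot mimic on average.
a = sig_a - sig_a.mean()
b = sig_b - sig_b.mean()
xcorr = np.correlate(a, b, mode="full") / (n * a.std() * b.std())
lags = np.arange(-(n - 1), n)
peak_lag = int(lags[np.argmax(xcorr)])
print(peak_lag)  # expected near zero lag
```

A real network analysis would in addition account for light-travel-time lags between stations and for non-Gaussian sensor noise.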
Rao, M; Ramachandra, S S; Bandyopadhyay, S; Chandran, A; Shidhaye, R; Tamisettynarayana, S; Thippaiah, A; Sitamma, M; Sunil George, M; Singh, V; Sivasankaran, S; Bangdiwala, S I
2011-01-01
Families living below the poverty line in countries which do not have universal healthcare coverage are drawn into indebtedness and bankruptcy. The state of Andhra Pradesh in India established the Rajiv Aarogyasri Community Health Insurance Scheme (RACHIS) in 2007 with the aim of breaking this cycle by improving the access of below the poverty line (BPL) families to secondary and tertiary healthcare. It covered a wide range of surgical and medical treatments for serious illnesses requiring specialist healthcare resources not always available at district-level government hospitals. The impact of this scheme was evaluated by a rapid assessment commissioned by the government of Andhra Pradesh. The aim of the assessment was to explore the contribution of the scheme to the reduction of catastrophic health expenditure among the poor and to recommend ways by which delivery of the scheme could be improved. We report the findings of this assessment. Two types of data were used for the assessment. First, patient data pertaining to 89 699 treatment requests approved by the scheme during its first 18 months were examined. Second, surveys of scheme beneficiaries and providers were undertaken in 6 randomly selected districts of Andhra Pradesh. This novel scheme was beginning to reach the BPL households in the state and to provide access to free secondary and tertiary healthcare to seriously ill poor people. An integrated model encompassing primary, secondary and tertiary care would be of greater benefit to families below the poverty line and more cost-effective for the government. There is considerable potential for the government to build on this successful start and to strengthen equity of access and the quality of care provided by the scheme. Copyright 2011, NMJI.
Electrical injection schemes for nanolasers
DEFF Research Database (Denmark)
Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten
2013-01-01
The performance of injection schemes among recently demonstrated electrically pumped photonic crystal nanolasers has been investigated numerically. The computation has been carried out at room temperature using a commercial semiconductor simulation software. For the simulations, the following electrical injection schemes have been compared: a vertical p-i-n junction through a current post structure as in [1], and a lateral p-i-n junction with either uniform material as in [2] or with a buried heterostructure (BH) as in [3]. To allow a direct comparison of the three schemes, the same active material composition was used ... The lowest threshold current has been achieved with the lateral electrical injection through the BH, while the lowest resistance has been obtained from the current post structure, even though this model shows a higher threshold current because of the lack of carrier confinement. The final scope of the simulations ...
de Bont, Chris
2018-01-01
This booklet was written to share research results with farmers and practitioners in Tanzania. It gives a summary of the empirical material collected during three months of fieldwork in the Mawala irrigation scheme (Kilimanjaro Region), and includes maps, tables and photos. It describes the history of the irrigation scheme, as well as current irrigation and farming practices. It especially focuses on the different kinds of infrastructural improvement in the scheme (by farmers and the government...
Anekawati, Anik; Widjanarko Otok, Bambang; Purhadi; Sutikno
2017-06-01
Research in education often involves latent variables. A statistical technique able to analyze the pattern of relationships among latent variables, as well as between latent variables and their indicators, is Structural Equation Modeling (SEM). Partial least squares SEM (PLS-SEM) was developed as an alternative for conditions where the theory underlying the design of the model is weak, no particular measurement scale is assumed, the sample size need not be large, and the data do not follow a multivariate normal distribution. The purpose of this paper is to compare the results of modeling the quality of education at the senior high school level (SMA/MA) in Sumenep Regency using the PLS-SEM approach with three estimation schemes for factor scores. This paper reports explanatory research using secondary data from the Sumenep Education Department and Badan Pusat Statistik (BPS) Sumenep, namely Sumenep in Figures and the Districts of Sumenep in Figures for the year 2015. The units of observation were the districts of Sumenep, consisting of 18 districts on the mainland and 9 districts on the islands. There were two endogenous variables and one exogenous variable: the endogenous variables are the quality of education at the SMA/MA level (Y1) and school infrastructure (Y2), whereas the exogenous variable is socio-economic condition (X1). One improved model, represented by the model from the path scheme, is consistent, has all indicators valid, and has an increased R-square value: Y1 = 0.651 Y2. In this model, the quality of education is influenced only by school infrastructure (0.651); socio-economic condition affects neither school infrastructure nor the quality of education. If school infrastructure increases by 1 point, the quality of education increases by 0.651 points. The quality of education had an R2 of 0
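As a hedged illustration of how an inner-model coefficient such as Y1 = 0.651 Y2 is read: with standardized scores, the single-predictor path coefficient equals the correlation, and its square gives the reported R-square. The data below are synthetic (actual PLS estimation iterates over outer and inner weights, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 27  # number of districts, as in the study

# Hypothetical standardized latent scores: infrastructure (y2) driving quality (y1)
y2 = rng.normal(0.0, 1.0, n)
y1 = 0.651 * y2 + rng.normal(0.0, 0.6, n)

# Standardize both scores; the OLS slope is then the path coefficient
z1 = (y1 - y1.mean()) / y1.std()
z2 = (y2 - y2.mean()) / y2.std()
path = float(z1 @ z2) / n   # correlation = standardized inner-model slope
r_squared = path ** 2       # variance of Y1 explained by Y2
```

A 1-point rise in the (standardized) infrastructure score then predicts a `path`-point rise in the quality score, which is exactly how the 0.651 coefficient is interpreted in the abstract.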
Directory of Open Access Journals (Sweden)
Qingwei Liu
2014-01-01
Full Text Available The initial step towards a nondestructive technique that estimates grain orientation in an anisotropic weld is presented in this paper. The purpose is to aid future forward simulations of ultrasonic NDT of this kind of weld, to achieve a better result. A forward model that consists of a weld model, a transmitter model, a receiver model, and a 2D ray tracing algorithm is introduced. An inversion based on a multiobjective genetic algorithm is also presented. Experiments are conducted for both P and SV waves in order to collect enough data for use in the inversion. Calculations are conducted to perform the estimation with both the synthetic data and the experimental data. Concluding remarks are presented at the end of the paper.
Yao, Yao
2014-01-01
The deep sub-Ohmic spin-boson model shows a longstanding non-Markovian coherence at low temperature. Aiming to quench this robust coherence, the thermal effect is unitarily incorporated into the time evolution of the model, which is calculated by the adaptive time-dependent density matrix renormalization group algorithm combined with the orthogonal polynomials theory. Via introducing a unitary heating operator to the bosonic bath, the bath is heated up so that a majority portion of the bo...
Directory of Open Access Journals (Sweden)
Luigi Vecchione
2015-07-01
Full Text Available One of the most important issues in biomass biocatalytic gasification is the correct prediction of gasification products, with particular attention to the Topping Atmosphere Residues (TARs). In this work, performed within the European 7FP UNIfHY project, we develop and validate experimentally a model capable of predicting the outputs, including TARs, of a steam-fluidized bed biomass gasifier. Pine wood was chosen as biomass feedstock: the products obtained in pyrolysis tests are the relevant model input. Hydrodynamic and chemical properties of the reacting system are considered: the hydrodynamic approach is based on the two-phase theory of fluidization, while the chemical model is based on the kinetic equations for the heterogeneous and homogeneous reactions. The derived differential equations for the gasifier at steady state were implemented in MATLAB. The solution was then carried out using the Boubaker Polynomials Expansion Scheme by varying the steam/biomass ratio (0.5-1) and operating temperature (750-850°C). The comparison between model and experimental results showed that the model is able to predict gas mole fractions and production rates, including most of the representative TAR compounds.
International Nuclear Information System (INIS)
Bouzereau, Emmanuel
2004-01-01
A two-moment semi-spectral warm micro-physical scheme has been implemented inside the meteorological model 'MERCURE'. A new formulation of the buoyancy flux (
Directory of Open Access Journals (Sweden)
Bosneaga V.A
2013-08-01
Full Text Available A model is proposed for the calculation and study of steady-state asymmetric modes and transients in three-phase, three-leg transformer devices with an arbitrary winding connection diagram, taking into account the electromagnetic coupling of windings located on different legs. Using a 10/0.4 kV distribution transformer as an example, calculations and analysis were performed for the most characteristic steady asymmetric modes that occur during short circuit, phase failure and unbalanced load, for the most common winding connections and, in particular, those associated with the occurrence of zero-sequence magnetic flux. For the considered regimes and schemes, vector diagrams were constructed for currents and voltages as well as for the relative values of magnetic flux, which give a clear idea of their particular features.
Zouheir Habbal, Mohammad; Bou-Assi, Tarek; Zhu, Jun; Owen, Renius; Chehab, Farid F
2014-01-01
Alkaptonuria is often diagnosed clinically with episodes of dark urine, biochemically by the accumulation of peripheral homogentisic acid, and molecularly by the presence of mutations in the homogentisate 1,2-dioxygenase gene (HGD). Alkaptonuria is invariably associated with HGD mutations, which consist of single nucleotide variants and small insertions/deletions. Surprisingly, the presence of deletions beyond a few nucleotides among over 150 reported deleterious mutations has not been described, raising the suspicion that this gene might be protected against the detrimental mechanisms of gene rearrangements. The quest for an HGD mutation in a proband with AKU revealed, with a SNP array, five large regions of homozygosity (5-16 Mb), one of which includes the HGD gene. A homozygous 649 bp deletion that encompasses the 72 nucleotides of exon 2 and surrounding DNA sequences in the flanking introns of the HGD gene was unveiled in the proband. The nature of this deletion suggests that this in-frame deletion could generate a protein without exon 2. Thus, we modeled the tertiary structure of the mutant protein to determine the effect of the exon 2 deletion. While the two β-pleated sheets encoded by exon 2 were missing in the mutant structure, the other β-pleated sheets are largely unaffected by the deletion. However, nine novel α-helical coils substituted for the eight coils present in the native HGD crystal structure. Thus, this deletion results in a defective enzyme, which is consistent with the proband's phenotype. Screening for mutations in the HGD gene, particularly in the Middle East, ought to include this exon 2 deletion in order to determine its frequency and uncover its origin.
International Nuclear Information System (INIS)
Yao, Yao
2015-01-01
The deep sub-Ohmic spin–boson model shows a longstanding non-Markovian coherence at low temperature. Aiming to quench this robust coherence, the thermal effect is unitarily incorporated into the time evolution of the model, which is calculated by the adaptive time-dependent density matrix renormalization group algorithm combined with the orthogonal polynomials theory. Via introducing a unitary heating operator to the bosonic bath, the bath is heated up so that a majority portion of the bosonic excited states is occupied. It is found that in this situation the coherence of the spin is quickly quenched even in the coherent regime, in which the non-Markovian feature dominates. With this finding we come up with a novel way to implement the unitary equilibration, the essential term of the eigenstate-thermalization hypothesis, through a short-time evolution of the model
On Optimal Designs of Some Censoring Schemes
Directory of Open Access Journals (Sweden)
Dr. Adnan Mohammad Awad
2016-03-01
Full Text Available The main objective of this paper is to explore the suitability of some entropy-information measures for introducing a new optimality censoring criterion and to apply it to some censoring schemes from some underlying lifetime models. In addition, the paper investigates four related issues, namely: the effect of the parameter of the parent distribution on the optimal scheme; the equivalence of schemes based on Shannon and Awad sup-entropy measures; the conjecture that the optimal scheme is a one-stage scheme; and a conjecture by Cramer and Bagh (2011) about Shannon minimum and maximum schemes when the parent distribution is reflected power. Guidelines for designing an optimal censoring plan are reported together with theoretical and numerical results and illustrations.
DEFF Research Database (Denmark)
Draxl, Caroline; Hahmann, Andrea N.; Pena Diaz, Alfredo
2014-01-01
... regarding wind energy at these levels partly depends on the formulation and implementation of planetary boundary layer (PBL) parameterizations in these models. This study evaluates wind speeds and vertical wind shears simulated by the Weather Research and Forecasting model using seven sets of simulations with different PBL parameterizations at one coastal site over western Denmark. The evaluation focuses on determining which PBL parameterization performs best for wind energy forecasting, and on presenting a validation methodology that takes into account wind speed at different heights. Wind speeds at heights ...
Singh, K. S.; Bhaskaran, Prasad K.
2017-12-01
This study evaluates the performance of the Advanced Research Weather Research and Forecasting (WRF-ARW) model for the prediction of land-falling Bay of Bengal (BoB) tropical cyclones (TCs). Model integration was performed using two-way interactive double nested domains at 27 and 9 km resolutions. The present study comprises two major components. First, the study explores the impact of five different planetary boundary layer (PBL) and six cumulus convection (CC) schemes on seven land-falling BoB TCs. A total of 85 numerical simulations were studied in detail, and the results signify that the model simulated both track and intensity better using a combination of the Yonsei University (YSU) PBL scheme and the old simplified Arakawa-Schubert CC scheme. Second, the study investigated model performance based on the best possible combinations of model physics for real-time forecasts of four BoB cyclones (Phailin, Helen, Lehar, and Madi) that made landfall during 2013, based on another 15 numerical simulations. The predicted mean track error during 2013 was about 71 km, 114 km, 133 km, 148 km, and 130 km respectively from day-1 to day-5. The Root Mean Square Error (RMSE) for Minimum Central Pressure (MCP) was about 6 hPa, while that for Maximum Surface Wind (MSW) was about 4.5 m s-1 over the entire simulation period. In addition, the study reveals that the predicted track errors for the 2013 cyclones improved by 43%, 44%, and 52% respectively from day-1 to day-3 as compared to cyclones simulated during the period 2006-2011. The improvements can be attributed to the relatively better quality data specified at initialization (initial mean position error of about 48 km) during 2013. Overall the study signifies that the track and intensity forecasts for the 2013 cyclones using the combinations listed in the first part of this study performed relatively better than other NWP (Numerical Weather Prediction) models, and thereby finds
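The bias and RMSE verification measures quoted above have standard definitions; a minimal sketch follows, where the pressure values are made-up placeholders rather than the study's data:

```python
import math

def bias(forecast, observed):
    """Mean forecast-minus-observation error."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

def rmse(forecast, observed):
    """Root mean square error of the forecasts."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast))

# Hypothetical minimum-central-pressure forecasts vs. observations (hPa)
mcp_fc = [990, 972, 965, 958, 950]
mcp_ob = [985, 970, 960, 955, 948]
print(round(bias(mcp_fc, mcp_ob), 2))  # 3.4
print(round(rmse(mcp_fc, mcp_ob), 2))  # 3.66
```

Track error is computed analogously, but from the great-circle distance between forecast and best-track cyclone positions rather than a scalar difference.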
CSIR Research Space (South Africa)
Heyns, T
2012-10-01
Full Text Available This paper investigates how Gaussian mixture models (GMMs) may be used to detect and trend fault induced vibration signal irregularities, such as those which might be indicative of the onset of gear damage. The negative log likelihood (NLL...
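The GMM/NLL idea can be sketched generically: fit a mixture to healthy-condition vibration features, then trend the negative log likelihood of new data, where a rising NLL flags irregularity. The following 1-D EM implementation is an illustration of the technique only, not the paper's implementation, and all data are synthetic:

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200, seed=0):
    """Fit a k-component 1-D Gaussian mixture with plain EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)            # random initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: component responsibilities for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

def nll(x, w, mu, var):
    """Average negative log likelihood: high values flag atypical data."""
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return -np.log((w * pdf).sum(axis=1)).mean()

rng = np.random.default_rng(1)
healthy = np.concatenate([rng.normal(-1, 0.3, 500), rng.normal(1, 0.3, 500)])
w, mu, var = fit_gmm_1d(healthy)
print(nll(healthy, w, mu, var) < nll(healthy + 2.5, w, mu, var))  # True: shifted data scores worse
```

Trending the NLL of successive measurement windows then gives a scalar condition indicator that grows as the vibration signature drifts away from the healthy baseline.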
DEFF Research Database (Denmark)
Juhl, Hans Jørn; Stacey, Julia
2001-01-01
to carry out a campaign targeted at this segment. The awareness percentage is already 92 % and 67% of the respondents believe they know the meaning of the scheme. But it stands to reason to study whether the respondents actually know what the labelling scheme stands for or if they just think they do...
Yang, Ben; Zhou, Yang; Zhang, Yaocun; Huang, Anning; Qian, Yun; Zhang, Lujun
2018-03-01
Closure assumption in convection parameterization is critical for reasonably modeling the precipitation diurnal variation in climate models. This study evaluates the precipitation diurnal cycles over East Asia during the summer of 2008 simulated with three convective available potential energy (CAPE) based closure assumptions, i.e. CAPE-relaxing (CR), quasi-equilibrium (QE), and free-troposphere QE (FTQE), and investigates the impacts of planetary boundary layer (PBL) mixing, advection, and radiation on the simulation using the Weather Research and Forecasting model. The sensitivity of the precipitation diurnal cycle to PBL vertical resolution is also examined. Results show that the precipitation diurnal cycles simulated with the different closures all exhibit large biases over land, and the simulation with the FTQE closure agrees best with observations. In the simulation with the QE closure, the intensified PBL mixing after sunrise is responsible for the late-morning peak of convective precipitation, while in the simulation with the FTQE closure, convective precipitation is mainly controlled by advection cooling. The relative contributions of different processes to precipitation formation are functions of rainfall intensity. In the simulation with the CR closure, dynamical equilibrium in the free troposphere can still be reached, implying a complex cause-effect relationship between atmospheric motion and convection. For simulations in which total CAPE is consumed for the closures, daytime precipitation decreases with increased PBL resolution, because a thinner model layer produces a lower convection starting layer, leading to stronger downdraft cooling and CAPE consumption. The sensitivity of the diurnal peak time of precipitation to the closure assumption can also be modulated by changes in PBL vertical resolution. The results of this study help us better understand the impacts of various processes on the simulation of the precipitation diurnal cycle.
International Nuclear Information System (INIS)
Eshraghi, Hadi; Ahadi, Mohammad Sadegh
2016-01-01
Decision making in Iran's energy- and environment-related issues has always been tied to complexities. Discussing these complexities and the necessity of dealing with them, this paper strives to help the country with a tool by introducing Richest Alternatives for Implementation to Supply Energy (RAISE), a mixed integer linear programming model developed by means of the GNU MathProg mathematical programming language. The paper fully elaborates the authors' modeling approach and the formulations on which RAISE is programmed to work, and verifies its structure by running a widely known sample case named "UTOPIA" and comparing the results with other works including OSeMOSYS and Temoa. The paper applies the RAISE model to the Iranian energy sector to elicit the optimal policy without and with a CO2 emission cap. The results suggest promotion of energy efficiency through investment in combined cycle power plants as the key to optimal policy in the power generation sector. Regarding the oil refining sector, investment in condensate refineries and advanced refineries equipped with Residual Fluid Catalytic Cracking (RFCC) units is suggested. Results also undermine the prevailing supposition that climate change mitigation deteriorates the economic efficiency of the energy system and suggest that there is a strong synergy between them. In the case of imposing a CO2 cap that aims at maintaining CO2 emissions from electricity production activities at 2012 levels, a shift to renewable energies occurs. - Highlights: • Combined cycle power plants are the best option to meet base load requirements. • There is synergy between climate change mitigation and economic affordability. • The power sector reacts to an emission cap by moving towards renewable energies. • Instead of being exported, condensates should be refined by condensate refineries. • Iran's refining sector should be advanced by shifting to RFCC-equipped refineries.
International Nuclear Information System (INIS)
McGuffie, K.; Henderson-Sellers, A.
2002-01-01
Global climate model (GCM) predictions of the impact of large-scale land-use change date back to 1984, as do the earliest isotopic studies of large-basin hydrology. Despite this coincidence in interest and geography, with both papers focussed on the Amazon, there have been few studies that have tried to exploit isotopic information with the goal of improving climate model simulations of the land surface. In this paper we analyze isotopic results from the IAEA global database, specifically with the goal of identifying signatures of potential value for improving global and regional climate model simulations of the land surface. Evaluation of climate model predictions of the impacts of deforestation of the Amazon has been shown to be significant by recent results which indicate impacts occurring distant from the Amazon, i.e. tele-connections causing climate change elsewhere around the globe. It is suggested that these could be similar in magnitude and extent to the global impacts of ENSO events. Validation of GCM predictions associated with Amazonian deforestation is increasingly urgently required because of the additional effects of other aspects of climate change, particularly synergies occurring between forest removal and greenhouse gas increases, especially CO2. Here we examine three decades of deuterium-excess distributions across the Amazon and use the results to evaluate the relative importance of the fractionating (partial evaporation) and non-fractionating (transpiration) processes. These results illuminate GCM scenarios of importance to the regional climate and hydrology: (i) the possible impact of increased stomatal resistance in the rainforest caused by higher levels of atmospheric CO2 [4]; and (ii) the consequences of the combined effects of deforestation and global warming on the region's climate and hydrology
Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia
2017-12-01
Due to the uncertainty (i.e., fuzziness, stochasticity and imprecision) that exists simultaneously during the process of groundwater remediation, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo simulation for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities, as it can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by the weight cloud module. A case study on remedial alternative selection for a contaminated site suffering from a 1,1,1-trichloroethylene leakage problem in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, an attribute system consisting of ten attributes, including daily total pumping rate, total cost and cloud model based health risk, was used for evaluating each alternative under uncertainty. Results indicated that A14 was evaluated to be the most preferred alternative for the 5-year remediation period, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year.
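The backward cloud generator mentioned above has a standard sample-moment form in the cloud-model literature: the expectation Ex from the sample mean, the entropy En from the first absolute moment, and the hyper-entropy He from the residual variance. A sketch under that assumption, with invented numerical parameters:

```python
import numpy as np

def backward_cloud(samples):
    """Backward cloud generator: recover (Ex, En, He) from cloud drops."""
    x = np.asarray(samples, dtype=float)
    ex = x.mean()                                       # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()   # entropy En
    he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))     # hyper-entropy He
    return ex, en, he

# Forward generator for a sanity check: drops x ~ N(Ex, En'^2) with En' ~ N(En, He^2)
rng = np.random.default_rng(0)
en_prime = rng.normal(2.0, 0.3, 10000)        # true En = 2.0, He = 0.3
drops = rng.normal(10.0, np.abs(en_prime))    # true Ex = 10.0
ex, en, he = backward_cloud(drops)
print(round(ex, 1), round(en, 1))  # close to 10.0 and 2.0
```

In a MADM setting, running the generator on Monte Carlo samples of each attribute yields one cloud (Ex, En, He) per attribute and alternative, which the decision framework then aggregates and ranks.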
Ciliberti, Stefania Angela; Peneva, Elisaveta; Storto, Andrea; Rostislav, Kandilarov; Lecci, Rita; Yang, Chunxue; Coppini, Giovanni; Masina, Simona; Pinardi, Nadia
2016-04-01
This study describes a new model implementation for the Black Sea, which uses data assimilation, towards operational forecasting, based on NEMO (Nucleus for European Modelling of the Ocean, Madec et al., 2012). The Black Sea domain is resolved with 1/27°×1/36° horizontal resolution (~3 km) and 31 z-levels with partial steps, based on the GEBCO bathymetry data (Grayek et al., 2010). The model is forced by momentum, water and heat fluxes interactively computed by bulk formulae using high resolution atmospheric forcing provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The initial condition is calculated from long-term climatological temperature and salinity 3D fields. The precipitation field over the basin has been computed from the climatological GPCP rainfall monthly data (Adler et al., 2003; Huffman et al., 2009), while the evaporation is derived from the latent heat flux. The climatological monthly mean runoff of the major rivers in the Black Sea is computed using the hydrological dataset provided by the SESAME project (Ludvig et al., 2009). The exchange with the Mediterranean Sea through the Bosporus Strait is represented by a surface boundary condition taking into account the barotropic transport, calculated to balance the fresh water fluxes on a monthly basis (Stanev and Beckers, 1999; Peneva et al., 2001). A multi-annual run for 2011-2015 has been completed in order to describe the main characteristics of the Black Sea circulation dynamics and thermohaline structure, and the numerical results have been validated using in-situ (ARGO) and satellite (SST, SLA) data. The Black Sea model also represents the core of the new Black Sea Forecasting System, implemented operationally at CMCC since January 2016, which produces daily 10-day forecasts, 3-day analyses and a 1-day simulation. Once a week, the system is run 15 days into the past in analysis mode to compute the new optimal initial condition for the forecast cycle. The assimilation is performed by a
Meimon, Serge; Petit, Cyril; Fusco, Thierry; Kulcsar, Caroline
2010-11-01
Adaptive optics (AO) systems have to correct tip-tilt (TT) disturbances down to a fraction of the diffraction-limited spot. This becomes a key issue for very or extremely large telescopes affected by mechanical vibration peaks or wind shake effects. Linear quadratic Gaussian (LQG) control achieves optimal TT correction when provided with the temporal model of the disturbance. We propose a nonsupervised identification procedure that does not require any auxiliary system or loop opening and validate it on synthetic profile as well as on experimental data.
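One common way to obtain a temporal disturbance model for LQG control without an auxiliary sensor is to fit an autoregressive model to recorded data. The sketch below fits an AR(2) model to a synthetic mechanical vibration peak via the Yule-Walker equations; the 30 Hz resonance, damping, and sampling rate are invented for illustration, and the paper's actual nonsupervised identification procedure may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic tip-tilt disturbance: a lightly damped resonance (an AR(2) process)
# driven by white noise, a common model for a mechanical vibration peak.
f0, damp, fs, n = 30.0, 0.02, 1000.0, 20000   # hypothetical 30 Hz peak, 1 kHz sampling
w0 = 2 * np.pi * f0 / fs
a1 = 2 * np.exp(-damp * w0) * np.cos(w0)
a2 = -np.exp(-2 * damp * w0)
x = np.zeros(n)
e = rng.normal(0, 1, n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

# Yule-Walker estimate of (a1, a2) from the empirical autocovariance
def autocov(x, lag):
    return np.dot(x[: len(x) - lag], x[lag:]) / len(x)

r0, r1, r2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
A = np.array([[r0, r1], [r1, r0]])
a_hat = np.linalg.solve(A, np.array([r1, r2]))

# Recover the resonance frequency from the estimated AR(2) pole pair
freq_hat = fs / (2 * np.pi) * np.arccos(a_hat[0] / (2 * np.sqrt(-a_hat[1])))
print(round(freq_hat, 1))  # close to 30 Hz
```

The identified AR coefficients define exactly the kind of state-space disturbance model an LQG controller needs to predict, and hence optimally reject, the vibration.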
Directory of Open Access Journals (Sweden)
I. G. Bakulin
2016-01-01
Full Text Available Currently in the Russian Federation, interferon-based regimens for chronic hepatitis C (CHC) are still relevant. The purpose of this study is to investigate the influence of baseline characteristics of patients with HCV genotype 1 on the development of leukopenia (LP) and neutropenia (NP). We investigated factors such as sex, age, body mass index (BMI), viral load, Interleukin-28B (IL-28B) genotype, the initial levels of leukocytes and neutrophils, alanine aminotransferase (ALT), fibrosis, duration of infection, and presence of previous therapy. Absolute values of leukocytes and neutrophils were analyzed at 4, 12, 24 and 48 weeks of therapy, and at 4, 12 and 24 weeks after antiviral treatment with first- and second-generation protease inhibitors (PIs). Prognostic criteria were identified, indicating the possible development of pronounced LP and NP during treatment with interferon: female gender, low initial viral load, TT genotype of IL-28B, and initial levels of white blood cells and neutrophils below 5.7×10⁹/L and 3.4×10⁹/L, respectively. Mathematical models predicting the onset of LP and NP, formalized in the form of decision trees, were also constructed. These models showed the greatest potential for practical use in view of their high accuracy and reliability.
Directory of Open Access Journals (Sweden)
I. Gouttevin
2012-04-01
Full Text Available Soil freezing is a major feature of boreal regions with substantial impact on climate. The present paper describes the implementation of the thermal and hydrological effects of soil freezing in the land surface model ORCHIDEE, which includes a physical description of continental hydrology. The new soil freezing scheme is evaluated against analytical solutions and in-situ observations at a variety of scales in order to test its numerical robustness, explore its sensitivity to parameterization choices and confront its performance to field measurements at typical application scales.
Our soil freezing model exhibits a low sensitivity to the vertical discretization for spatial steps in the range of a few millimetres to a few centimetres. It is however sensitive to the temperature interval around the freezing point where phase change occurs, which should be 1 °C to 2 °C wide. Furthermore, linear and thermodynamical parameterizations of the liquid water content lead to similar results in terms of water redistribution within the soil and thermal evolution under freezing. Our results do not allow firm discrimination of the performance of one parameterization over the other.
The new soil freezing scheme considerably improves the representation of runoff and river discharge in regions underlain by permafrost or subject to seasonal freezing. A thermodynamical parameterization of the liquid water content appears more appropriate for an integrated description of the hydrological processes at the scale of the vast Siberian basins. The use of a subgrid variability approach and the representation of wetlands could help capture the features of the Arctic hydrological regime with more accuracy.
The modeling of the soil thermal regime is generally improved by the representation of soil freezing processes. In particular, the dynamics of the active layer is captured with more accuracy, which is of crucial importance in the prospect of
Almazroui, Mansour; Islam, Md. Nazrul; Al-Khalaf, A. K.; Saeed, Fahad
2016-05-01
A suitable convective parameterization scheme within Regional Climate Model version 4.3.4 (RegCM4), developed by the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, is investigated through 12 sensitivity runs for the period 2000-2010. RegCM4 is driven with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim 6-hourly boundary condition fields for the CORDEX-MENA/Arab domain. Besides the ERA-Interim lateral boundary condition data, the Climatic Research Unit (CRU) data is also used to assess the performance of RegCM4. Different statistical measures are taken into consideration in assessing model performance for 11 sub-domains throughout the analysis domain, of which 7 (4) sub-domains give drier (wetter) conditions for the area of interest. There is no common best option for the simulation of both rainfall and temperature (with lowest bias); however, one option each for temperature and rainfall has been found to be superior among the 12 options investigated in this study. These best options for the two variables vary from region to region as well. Overall, RegCM4 simulates large pressure and water vapor values along with lower wind speeds compared to the driving fields, which are the key sources of bias in simulating rainfall and temperature. Based on the climatic characteristics of most of the Arab countries located within the study domain, the drier sub-domains are given priority in the selection of a suitable convective scheme, albeit with a compromise for both rainfall and temperature simulations. The most suitable option, Grell over Land and Emanuel over Ocean in wet mode (GLEO wet), delivers a rainfall wet bias of 2.96 % and a temperature cold bias of 0.26 °C compared to CRU data. An ensemble derived from all 12 runs provides unsatisfactory results for rainfall (28.92 %) and temperature (-0.54 °C) bias in the drier region, because some options highly overestimate rainfall (reaching up to 200 %) and underestimate
Analysis of Program Obfuscation Schemes with Variable Encoding Technique
Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi
Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
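The idea of variable encoding can be made concrete with a rough sketch of an XOR-based encoding. This is a minimal illustration, not the scheme analyzed in the paper: the fixed key, the helper functions and the example computation are all hypothetical.

```python
# Illustrative sketch of variable encoding via XOR. A variable is stored
# in masked form and decoded only at the point of use; since XOR is its
# own inverse, encode and decode are the same operation.

KEY = 0x5A5A5A5A  # hypothetical fixed 32-bit key

def encode(x: int) -> int:
    """Store a variable in obfuscated (XOR-masked) form."""
    return x ^ KEY

def decode(x_enc: int) -> int:
    """Recover the plain value; XOR with the same key is the inverse."""
    return x_enc ^ KEY

def obfuscated_sum(values):
    """Accumulate in encoded space, decoding operands only when needed."""
    acc = encode(0)
    for v in values:
        acc = encode(decode(acc) + v)
    return decode(acc)

print(obfuscated_sum([1, 2, 3]))  # 6
```

Because XOR masking is a bijection on fixed-width integers, it does not shrink the set of representable values, which echoes the second advantage claimed above for the XOR-encoding scheme over linear and affine encodings.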
Directory of Open Access Journals (Sweden)
Georgios S. Stamatakos
2009-10-01
Full Text Available The tremendous rate of accumulation of experimental and clinical knowledge pertaining to cancer dictates the development of a theoretical framework for the meaningful integration of such knowledge at all levels of biocomplexity. In this context our research group has developed and partly validated a number of spatiotemporal simulation models of in vivo tumour growth and in particular tumour response to several therapeutic schemes. Most of the modeling modules have been based on discrete mathematics and therefore have been formulated in terms of rather complex algorithms (e.g. in pseudocode and actual computer code). However, such lengthy algorithmic descriptions, although sufficient from the mathematical point of view, may render it difficult for an interested reader to readily identify the sequence of the very basic simulation operations that lie at the heart of the entire model. In order to both alleviate this problem and at the same time provide a bridge to symbolic mathematics, we propose the introduction of the notion of hypermatrix in conjunction with that of a discrete operator into the already developed models. Using a radiotherapy response simulation example we demonstrate how the entire model can be considered as the sequential application of a number of discrete operators to a hypermatrix corresponding to the dynamics of the anatomic area of interest. Subsequently, we investigate the operators’ commutativity and outline the “summarize and jump” strategy aimed at efficiently and realistically addressing multilevel biological problems such as cancer. In order to clarify the actual effect of the composite discrete operator we present further simulation results which are in agreement with the outcome of the clinical study RTOG 83–02, thus strengthening the reliability of the model developed.
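This record duplicates the Stamatakos and Dionysiou abstract below; no separate edit is needed beyond the grammar fixes applied in place.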
Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin
2014-11-01
Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
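The effect that separates SADM from IDM can be reproduced with a toy calculation: because the light response of photosynthesis is concave, driving a model once with daily-mean forcing overestimates the integral of the instantaneous response (Jensen's inequality). The response curve and its parameters below are hypothetical stand-ins, not the coupled photosynthesis-stomatal conductance model used in the study.

```python
import math

def photosynthesis(par):
    """Hypothetical light-response curve (rectangular hyperbola):
    nonlinear in PAR, saturating at pmax."""
    alpha, pmax = 0.05, 20.0
    return alpha * par * pmax / (alpha * par + pmax)

# Half-hourly PAR over a 12-h day (sinusoidal daylight course)
steps = 24
par_series = [1500.0 * math.sin(math.pi * (i + 0.5) / steps) for i in range(steps)]

# IDM-like: evaluate the instantaneous response at each step, then average
gpp_integrated = sum(photosynthesis(p) for p in par_series) / steps

# SADM-like: drive the model once with the daily-mean PAR
gpp_from_mean = photosynthesis(sum(par_series) / steps)

# Because the response is concave, the mean-input estimate exceeds the
# integrated one, mirroring SADM's overestimation of daily GPP.
print(gpp_integrated < gpp_from_mean)  # True
```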
Stamatakos, Georgios S; Dionysiou, Dimitra D
2009-10-21
The tremendous rate of accumulation of experimental and clinical knowledge pertaining to cancer dictates the development of a theoretical framework for the meaningful integration of such knowledge at all levels of biocomplexity. In this context our research group has developed and partly validated a number of spatiotemporal simulation models of in vivo tumour growth and in particular tumour response to several therapeutic schemes. Most of the modeling modules have been based on discrete mathematics and therefore have been formulated in terms of rather complex algorithms (e.g. in pseudocode and actual computer code). However, such lengthy algorithmic descriptions, although sufficient from the mathematical point of view, may render it difficult for an interested reader to readily identify the sequence of the very basic simulation operations that lie at the heart of the entire model. In order to both alleviate this problem and at the same time provide a bridge to symbolic mathematics, we propose the introduction of the notion of hypermatrix in conjunction with that of a discrete operator into the already developed models. Using a radiotherapy response simulation example we demonstrate how the entire model can be considered as the sequential application of a number of discrete operators to a hypermatrix corresponding to the dynamics of the anatomic area of interest. Subsequently, we investigate the operators' commutativity and outline the "summarize and jump" strategy aimed at efficiently and realistically addressing multilevel biological problems such as cancer. In order to clarify the actual effect of the composite discrete operator we present further simulation results which are in agreement with the outcome of the clinical study RTOG 83-02, thus strengthening the reliability of the model developed.
Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua
2018-01-01
Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data using the dynamic Bayesian model averaging (BMA) algorithm. The blended experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibrated sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. Thus, the merging data were produced by weighted sums of the individuals over the plateau. The dynamic BMA approach showed better performance with a smaller root-mean-square error (RMSE) of 6.77 mm/day, higher correlation coefficient of 0.592, and closer Euclid value of 0.833, compared to the individuals at 15 validated sites. Moreover, BMA has proven to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods including simple model averaging (SMA) and one-outlier removed (OOR). Error analysis between BMA and the state-of-the-art IMERG in the summer of 2014 further proved that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data in regions with limited gauges.
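The merging step itself is a weighted sum of the member estimates at each grid cell. A minimal sketch, assuming the BMA weights have already been optimized by the EM step; the numbers below are illustrative, not the study's calibrated weights.

```python
# Minimal sketch of the BMA merging step: a weighted sum of member
# estimates at one grid cell on one day. Weights are assumed to come
# from an EM fit against gauge data; the values here are made up.

members = {          # daily precipitation estimates (mm/day) at one cell
    "3B42RT":       4.1,
    "3B42V7":       3.6,
    "CMORPH":       5.0,
    "PERSIANN-CDR": 2.8,
}
weights = {          # hypothetical EM-derived BMA weights (sum to 1)
    "3B42RT":       0.15,
    "3B42V7":       0.40,
    "CMORPH":       0.25,
    "PERSIANN-CDR": 0.20,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9

# The blended estimate is the probability-weighted average of the members
blended = sum(weights[k] * members[k] for k in members)
print(round(blended, 3))  # 3.865
```

In the study these weights vary dynamically (per member, per day) and are interpolated over the plateau by ordinary kriging before the weighted sum is taken.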
Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.
2017-06-01
In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ~7 m³ smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ~4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10⁻¹¹ to 4.0 × 10⁻¹¹ cm³ molec⁻¹ s⁻¹. The average enthalpy of vaporization of secondary organic aerosol
Threshold Signature Schemes Application
Directory of Open Access Journals (Sweden)
Anastasiya Victorovna Beresneva
2015-10-01
Full Text Available This work is devoted to the investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given; pursuing them could reduce the level of counterfeit electronic documents signed by a group of users.
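The Lagrange-interpolation construction mentioned above underlies most threshold schemes. Below is a minimal (t, n) secret-sharing sketch over a prime field, the building block of such schemes rather than a complete threshold signature; the prime and parameters are illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field for the shares (illustrative)

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it:
    the secret is the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # True with any 3 of the 5 shares
```

In a threshold signature scheme the same interpolation is performed "in the exponent" (e.g. on an elliptic-curve group), so the secret key itself is never reassembled.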
Directory of Open Access Journals (Sweden)
V. N. Romaniuk
2016-01-01
Full Text Available Further improvement of natural gas usage in the power industry is associated with the transition to combined-cycle gas technology, primarily at combined heat and power plants (CHP). Renovating the technology that converts fuel energy into heat and electricity flows is effective when it is performed simultaneously with reworking the thermal circuits of the CHP by inserting heat accumulators and absorption lithium bromide heat pumps (ALBHP) into the structure of the CHP; this insertion improves the thermodynamic as well as the economic and environmental indicators of CHP renovation and also increases CHP maneuverability. The ability of CHP to provide heat in the required quantity, and their capacity to change electricity generation output without excessive fuel consumption, is extremely relevant for an energy system in which thermal power plants are the dominating component. At the same time, traditional electrical power regulators are displaced. Implementing projects of this kind requires methods for calculating CHP flow diagrams and for determining the relevant indicators. The article presents the results of a numerical study of the energy characteristics of CHP, performed with the aid of topological models of the existing heat flow diagrams of CHP that incorporate ALBHP for the recovery of low-temperature waste heat from circulating cooling water systems. An example calculation is given, together with the results of the CHP thermodynamic efficiency evaluation and the change of the energy characteristics for different modes of CHP operation caused by the implementation of ALBHP. The conditions for the effective application of lithium bromide absorption heat pumps are specified, as is the rate of increase of thermodynamic efficiency; the changes in maneuverability of CHP with high initial parameters are identified, and the natural gas savings in the Republic of Belarus are determined.
Brinkman, Daniel
2013-05-01
We present and discuss a mathematical model for the operation of bilayer organic photovoltaic devices. Our model couples drift-diffusion-recombination equations for the charge carriers (specifically, electrons and holes) with a reaction-diffusion equation for the excitons/polaron pairs and Poisson's equation for the self-consistent electrostatic potential. The material difference (i.e. the HOMO/LUMO gap) of the two organic substrates forming the bilayer device is included as a work-function potential. Firstly, we perform an asymptotic analysis of the scaled one-dimensional stationary state system: (i) with focus on the dynamics on the interface and (ii) with the goal of simplifying the bulk dynamics away from the interface. Secondly, we present a two-dimensional hybrid discontinuous Galerkin finite element numerical scheme which is very well suited to resolve: (i) the material changes, (ii) the resulting strong variation over the interface, and (iii) the necessary upwinding in the discretization of drift-diffusion equations. Finally, we compare the numerical results with the approximating asymptotics. © 2013 World Scientific Publishing Company.
Wang, Tingting; Chen, Yi-Ping Phoebe; MacLeod, Iona M; Pryce, Jennie E; Goddard, Michael E; Hayes, Ben J
2017-08-15
Using whole genome sequence data might improve genomic prediction accuracy, when compared with high-density SNP arrays, and could lead to identification of causal mutations affecting complex traits. For some traits, the most accurate genomic predictions are achieved with non-linear Bayesian methods. However, as the number of variants and the size of the reference population increase, the computational time required to implement these Bayesian methods (typically with Markov chain Monte Carlo sampling) becomes unfeasibly long. Here, we applied a new method, HyB_BR (for Hybrid BayesR), which implements a mixture model of normal distributions and hybridizes an Expectation-Maximization (EM) algorithm followed by Markov Chain Monte Carlo (MCMC) sampling, to genomic prediction in a large dairy cattle population with imputed whole genome sequence data. The imputed whole genome sequence data included 994,019 variant genotypes of 16,214 Holstein and Jersey bulls and cows. Traits included fat yield, milk volume, protein kg, fat% and protein% in milk, as well as fertility and heat tolerance. HyB_BR achieved genomic prediction accuracies as high as the full MCMC implementation of BayesR, both for predicting a validation set of Holstein and Jersey bulls (multi-breed prediction) and a validation set of Australian Red bulls (across-breed prediction). HyB_BR had a tenfold reduction in compute time, compared with the MCMC implementation of BayesR (48 hours versus 594 hours). We also demonstrate that in many cases HyB_BR identified sequence variants with a high posterior probability of affecting the milk production or fertility traits that were similar to those identified in BayesR. For heat tolerance, both HyB_BR and BayesR found variants in or close to promising candidate genes associated with this trait and not detected by previous studies. The results demonstrate that HyB_BR is a feasible method for simultaneous genomic prediction and QTL mapping with whole genome sequence in
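The EM stage of such a hybrid can be illustrated on a toy mixture of two zero-mean normals, echoing the BayesR-style mixture-of-normals prior on variant effects. This is a schematic sketch with hypothetical parameters, not the HyB_BR implementation.

```python
import math, random

random.seed(1)

# Toy data: effects drawn from a two-component zero-mean normal mixture
# (a narrow "near-null" spike and a wider component). The weights and
# standard deviations below are illustrative only.
truth = [(0.7, 0.0, 0.1), (0.3, 0.0, 1.0)]   # (weight, mean, sd)
data = [random.gauss(m, s) for w, m, s in truth for _ in range(int(w * 2000))]

def em_two_normals(xs, s1=0.1, s2=1.0, iters=50):
    """EM for the mixing weight of two zero-mean normals with known sds."""
    pi1 = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        resp = []
        for x in xs:
            d1 = pi1 * math.exp(-x * x / (2 * s1 * s1)) / s1
            d2 = (1 - pi1) * math.exp(-x * x / (2 * s2 * s2)) / s2
            resp.append(d1 / (d1 + d2))
        # M-step: update the mixing weight from the mean responsibility
        pi1 = sum(resp) / len(resp)
    return pi1

pi1_hat = em_two_normals(data)
print(round(pi1_hat, 2))  # close to the true mixing weight of 0.7
```

In HyB_BR the point of the EM pass is exactly this kind of fast deterministic fit, which then seeds the much shorter MCMC stage.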
Universal health coverage in Latin American countries: how to improve solidarity-based schemes.
Titelman, Daniel; Cetrángolo, Oscar; Acosta, Olga Lucía
2015-04-04
In this Health Policy we examine the association between the financing structure of health systems and universal health coverage. Latin American health systems encompass a wide range of financial sources, which translate into different solidarity-based schemes that combine contributory (payroll taxes) and non-contributory (general taxes) sources of financing. To move towards universal health coverage, solidarity-based schemes must heavily rely on countries' capacity to increase public expenditure in health. Improvement of solidarity-based schemes will need the expansion of mandatory universal insurance systems and strengthening of the public sector including increased fiscal expenditure. These actions demand a new model to integrate different sources of health-sector financing, including general tax revenue, social security contributions, and private expenditure. The extent of integration achieved among these sources will be the main determinant of solidarity and universal health coverage. The basic challenges for improvement of universal health coverage are not only to spend more on health, but also to reduce the proportion of out-of-pocket spending, which will need increased fiscal resources. Copyright © 2015 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
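The core idea, tracking the set of active calls in addition to memoizing results, can be sketched in Python. The paper's implementation is in Scheme with continuation-passing style; the cycle-cutting approximation below is a simplification that suffices for monotone queries such as graph reachability.

```python
# A hedged analogue of tabled execution: memoize results and track the
# set of active calls, so a recursion that revisits the same call (here,
# reachability in a cyclic graph) terminates instead of looping forever.
# A full tabling engine resolves re-entrant calls via continuations and
# fixpoint computation; this sketch simply assumes failure on re-entry,
# which is sound for monotone queries like reachability.

GRAPH = {"a": ["b"], "b": ["c", "a"], "c": []}  # note the a -> b -> a cycle

def tabled_reaches(graph):
    table = {}
    def reaches(x, y, active=frozenset()):
        key = (x, y)
        if key in table:
            return table[key]
        if key in active:          # re-entrant call: cut the cycle
            return False
        if x == y:
            table[key] = True
            return True
        result = any(reaches(z, y, active | {key}) for z in graph[x])
        table[key] = result
        return result
    return reaches

reaches = tabled_reaches(GRAPH)
print(reaches("a", "c"))  # True: a -> b -> c, despite the a/b cycle
print(reaches("c", "a"))  # False: no edges leave c
```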
International Nuclear Information System (INIS)
2002-04-01
This scheme defines the objectives for renewable energies and the rational use of energy within the framework of the national energy policy. It evaluates the needs and the potential of the regions and recommends joint actions between the government and the territorial organizations. The document is presented in four parts: the situation, stakes and forecasts; possible actions for new measures; the scheme management; and the analysis of the regional contributions. (A.L.B.)
Ponomarev, Yury K.
2018-01-01
The mathematical model of deformation of a cable (rope) vibration insulator consisting of two identical clips connected by elastic elements with a complex axial line is developed in detail. The axial line of the element is symmetric relative to the horizontal axis of the shape and is made up of five rectilinear sections of arbitrary lengths a, b, c, joined to four radiused sections with parameters R1 and R2, each with an angular extent of 90°. On the basis of linear representations of the theory of bending and torsion from mechanics of materials, applied mechanics and linear algebra, a mathematical model of the loading of an element, and of the vibration insulator as a whole, in the direction of the vertical Y axis has been developed. Generalized characteristics of the friction and elastic forces are obtained for an elastic element with the complete set of the listed sections. Further, by nullifying certain parameters in the generalized model, special cases of the friction and elastic forces are obtained that do not account for the nullified parameters. Simultaneously, volumetric models of the simplified structures given in the work were created on the basis of a 3D computer-aided design system. It is shown that, by varying the five parameters of the axial scheme of the element, in combination with varying the moment of inertia of the rope section and the number of elements in the ensemble, the load characteristics and stiffness of the vibration insulators can be changed by tens or hundreds of times. This opens up broad possibilities for the optimal design of vibration protection systems in terms of weight, cost, vibration intensity and overall dimensions in different directions, which is very important for aerospace and transport engineering.
Sud, Y. C.; Mocko, David M.; Lin, S. J.
2006-01-01
An objective assessment of the impact of a new cloud scheme, called Microphysics of Clouds with Relaxed Arakawa-Schubert Scheme (McRAS) (together with its radiation modules), on the finite volume general circulation model (fvGCM) was made with a set of ensemble forecasts that invoke performance evaluation over both weather and climate timescales. The performance of McRAS (and its radiation modules) was compared with that of the National Center for Atmospheric Research Community Climate Model (NCAR CCM3) cloud scheme (with its NCAR physics radiation). We specifically chose the boreal summer months of May and June 2003, which were characterized by an anomalously wet eastern half of the continental United States as well as northern regions of Amazonia. The evaluation employed an ensemble of 70 daily 10-day forecasts covering the 61 days of the study period. Each forecast was started from the analyzed initial state of the atmosphere and spun-up soil moisture from the first-day forecasts with the model. Monthly statistics of these forecasts with up to 10-day lead time provided a robust estimate of the behavior of the simulated monthly rainfall anomalies. Patterns of simulated versus observed rainfall, 500-hPa heights, and top-of-the-atmosphere net radiation were recast into regional anomaly correlations. The correlations were compared among the simulations with each of the schemes. The results show that fvGCM with McRAS and its radiation package performed discernibly better than the original fvGCM with CCM3 cloud physics plus its radiation package. The McRAS cloud scheme also showed a reasonably positive response to the observed sea surface temperature on mean monthly rainfall fields at different time leads. This analysis represents a method for helpful systematic evaluation prior to selection of a new scheme in a global model.
Kelly, Frank; Anderson, H Ross; Armstrong, Ben; Atkinson, Richard; Barratt, Ben; Beevers, Sean; Derwent, Dick; Green, David; Mudway, Ian; Wilkinson, Paul
2011-04-01
On February 17, 2003, a congestion charging scheme (CCS) was introduced in central London along with a program of traffic management measures. The scheme operated Monday through Friday, 7 AM to 6 PM. This program resulted in an 18% reduction in traffic volume and a 30% reduction in traffic congestion in the first year (2003). We developed methods to evaluate the possible effects of the scheme on air quality: We used a temporal-spatial design in which modeled and measured air quality data from roadside and background monitoring stations were used to compare time periods before (2001-2002) and after (2003-2004) the CCS was introduced and to compare the spatial area of the congestion charging zone (CCZ) with the rest of London. In the first part of this project, we modeled changes in concentrations of oxides of nitrogen (NOx), nitrogen dioxide (NO2), and PM10 (particles with a mass median aerodynamic diameter ≤ 10 μm) resulting from changes in traffic and public transport associated with the CCS. In the second part of the project, we established a CCS Study Database from measurements obtained from the London Air Quality Network (LAQN) for air pollution monitors sited to measure roadside and urban background concentrations. Fully ratified (validated) 15-minute mean carbon monoxide (CO), nitric oxide (NO), NO2, NOx, PM10, and PM2.5 data from each chosen monitoring site for the period from February 17, 2001, to February 16, 2005, were transferred from the LAQN database. In the third part of our project, these data were used to compare geometric means for the 2 years before and the 2 years after the CCS was introduced. Temporal changes within the CCZ were compared with changes, over the same period, at similarly sited (roadside or background) monitors in a control area 8 km distant from the center of the CCZ. The analysis was confined to measurements obtained during the hours and days on which the scheme was in operation and focused on pollutants derived from vehicles (NO, NO2, NOx, PM10, and CO). This set of
Relaxation schemes for the shallow water equations
Delis, A. I.; Katsaounis, Th.
2003-03-01
We present a class of first and second order in space and time relaxation schemes for the shallow water (SW) equations. A new approach of incorporating the geometrical source term in the relaxation model is also presented. The schemes are based on classical relaxation models combined with Runge-Kutta time stepping mechanisms. Numerical results are presented for several benchmark test problems with or without the source term present.
Need for Modifying Modeling Scheme
Indian Academy of Sciences (India)
Source and drain depletion widths are to be accounted for. The Poisson equation is to be solved in the source, drain, pocket, channel under the cavity and channel under the gate. The effect of Vgs on the depletion width is to be included ...
Zhu, Wenting; Leng, Xiangzi; Li, Huiming; Zhang, Ruibin; Ye, Rui; Qian, Xin
2015-01-01
Treated effluent from wastewater treatment plants has become an important source of excess nutrients causing eutrophication in water. In this study, an ecological purification method was used to further treat eutrophic water. A three-season ecological purification scheme which comprised an emergent plant (Eme.), a submerged plant (Sub.) and a novel biological rope (Bio.), was designed for the treated effluent canal of a wastewater treatment plant. The removal parameters determined from the experiment were input into a QUAL2K model to simulate downstream water quality of the treated effluent canal. Respective removal rates of total nitrogen and total phosphorus of the Eme., Sub. and Bio. were 32.48-37.33% and 31.63-39.86% in summer, 14.12-33.61% and 17.74-23.80% in autumn, and 14.13-18.03% and 10.05-12.75% in winter, with 1-day reaction time. Optimal combinations for summer, autumn/spring, and winter are Eme. + Bio., Eme. + Bio. + Sub., and Sub. + Bio., respectively. Simulated load reduction rates of total nitrogen and total phosphorus for the treated effluent canal were 42.64-78.40% and 30.98-78.29%, respectively, year round with 2.5-day reaction time. This study provides an efficient evaluation and design method for ecological purification engineering.
Directory of Open Access Journals (Sweden)
Arcangeli Giorgio
2009-08-01
Full Text Available Abstract Background Recently, the use of hypo-fractionated treatment schemes for prostate cancer has been encouraged by the fact that the α/β ratio for prostate cancer should be low. However, a major concern with hypofractionation is late rectal toxicity, so it is important to be able to predict the risk of toxicity for alternative treatment schemes with the best possible accuracy. The main purpose of this study is to evaluate the response of the rectal wall to changes in fractionation and to quantify the α/β ratio for late rectal toxicity. Methods 162 patients with localized prostate cancer, treated with conformal radiotherapy, were enrolled in a phase II randomized trial. The patients were randomly assigned to 80 Gy in 40 fractions over 8 weeks (arm A) or 62 Gy in 20 fractions over 5 weeks (arm B). The median follow-up was 30 months. Late rectal toxicity was evaluated using the Radiation Therapy Oncology Group (RTOG) scale, with ≥ Grade 2 (G2) toxicity incidence assumed as the primary end point. The toxicity incidence was fitted with the Lyman-Kutcher-Burman (LKB) model. Results The crude incidence of late rectal toxicity ≥ G2 was 14.0% and 12.3% for arm A (standard fractionation) and arm B (hypofractionation), respectively. For arm A, the volumes receiving ≥ 50 Gy (V50) and 70 Gy (V70) were 38.3 ± 7.5% and 23.4 ± 5.5%; for arm B, V38 and V54 were 40.9 ± 6.8% and 24.5 ± 4.4%. An α/β ratio for late rectal toxicity very close to 3 Gy was found. Conclusion The ≥ G2 late toxicities in both arms were comparable, indicating the feasibility of hypofractionated regimes in prostate cancer. An α/β ratio for late rectal toxicity very close to 3 Gy was found.
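The role of the α/β ratio can be made concrete with the linear-quadratic model: at α/β = 3 Gy the two arms deliver similar biologically effective doses (BED), consistent with the comparable toxicity rates reported. The BED formula below is the standard one; its use here as a side-by-side comparison of the two arms is our own illustration.

```python
def bed(total_dose, n_fractions, alpha_beta):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta)), with d the dose per fraction."""
    d = total_dose / n_fractions
    return total_dose * (1 + d / alpha_beta)

AB_LATE = 3.0  # Gy: the alpha/beta for late rectal toxicity found above

bed_a = bed(80.0, 40, AB_LATE)   # arm A: 2.0 Gy x 40 fractions
bed_b = bed(62.0, 20, AB_LATE)   # arm B: 3.1 Gy x 20 fractions

print(round(bed_a, 1))  # 133.3
print(round(bed_b, 1))  # 126.1
```

The two BEDs differ by only a few percent, which is why the late-toxicity rates of the standard and hypofractionated arms are expected to be, and were observed to be, comparable.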
The mathematics of Ponzi schemes
Artzrouni, Marc
2009-01-01
A first order linear differential equation is used to describe the dynamics of an investment fund that promises more than it can deliver, also known as a Ponzi scheme. The model is based on a promised, unrealistic interest rate; on the actual, realized nominal interest rate; on the rate at which new deposits are accumulated and on the withdrawal rate. Conditions on these parameters are given for the fund to be solvent or to collapse. The model is fitted to data available on Charles...
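This kind of model can be sketched numerically. The functional forms below (a constant deposit rate and withdrawals that compound at the promised rate) are assumptions made for illustration, not Artzrouni's exact equations.

```python
import math

def simulate(rn, rp, s0, w0, s_init=0.0, t_end=30.0, dt=0.001):
    """Forward-Euler integration of the hypothetical fund dynamics
    S'(t) = rn*S(t) + s0 - w0*exp(rp*t): the fund earns the realized
    nominal rate rn on its balance, takes in deposits at a constant
    rate s0, and pays withdrawals that compound at the promised rate rp.
    Returns the collapse time (first t with S < 0), or None if the fund
    stays solvent up to t_end."""
    s, t = s_init, 0.0
    while t < t_end:
        s += dt * (rn * s + s0 - w0 * math.exp(rp * t))
        t += dt
        if s < 0:
            return t
    return None

# Promising 20% while earning 2%: obligations outgrow the fund.
print(simulate(rn=0.02, rp=0.20, s0=10.0, w0=1.0) is not None)  # True (collapses)
# Promising no more than the fund earns: solvent over the horizon.
print(simulate(rn=0.05, rp=0.05, s0=10.0, w0=1.0) is None)      # True (solvent)
```

The qualitative conclusion matches the abstract: whether the fund is solvent or collapses depends on the gap between the promised and realized rates relative to the deposit inflow.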
Energy Technology Data Exchange (ETDEWEB)
Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, offer more modest financial involvements both in construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.
Energy Technology Data Exchange (ETDEWEB)
Placidi, M.; Jung, J.-Y.; Ratti, A.; Sun, C., E-mail: csun@lbl.gov
2014-12-21
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery to multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer higher repetition capabilities and likely better amplitude reproducibility than FKs, which, in turn, are less expensive to build and operate. Both solutions represent an ideal approach for the design of compact beam distribution systems, resulting in space and cost savings while preserving flexibility and beam quality.
Ponzi scheme diffusion in complex networks
Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue
2017-08-01
Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both a higher average degree and a power-law degree distribution reduce the critical spreading threshold and speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund flux function we introduce proves to be a feasible index for reflecting the fast-worsening fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
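A toy version of such a potential-investor-divestor diffusion can be sketched on a random network. The update rule, rates, and network parameters below are assumptions for illustration, not the PID model of the paper:

```python
import random

# Illustrative potential-investor-divestor (PID) diffusion on a random graph;
# rates and the update rule are assumed, not taken from the paper.
random.seed(1)
N, P_EDGE = 200, 0.04          # Erdos-Renyi-style random network
BETA, DELTA = 0.3, 0.05        # per-contact investing rate, divesting rate

adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            adj[i].append(j)
            adj[j].append(i)

state = ["P"] * N              # P = potential investor, I = investor, D = divestor
for s in range(5):
    state[s] = "I"             # seed the scheme with a few investors

history = []
for step in range(60):
    nxt = state[:]
    for v in range(N):
        if state[v] == "P":
            k = sum(1 for u in adj[v] if state[u] == "I")
            if random.random() < 1 - (1 - BETA) ** k:  # pressure from investing neighbours
                nxt[v] = "I"
        elif state[v] == "I" and random.random() < DELTA:
            nxt[v] = "D"       # divestors withdraw and never reinvest
    state = nxt
    history.append(state.count("I"))

peak = max(history)
print("peak number of investors:", peak)
```

The investor count rises to a peak and then declines as divesting sets in, echoing the rise-and-collapse pattern the abstract describes.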
Directory of Open Access Journals (Sweden)
S. Ghosh
Full Text Available Many Large Eddy Simulation (LES models use the classic Kessler parameterisation either as it is or in a modified form to model the process of cloud water autoconversion into precipitation. The Kessler scheme, being linear, is particularly useful and is computationally straightforward to implement. However, a major limitation with this scheme lies in its inability to predict different autoconversion rates for maritime and continental clouds. In contrast, the Berry formulation overcomes this difficulty, although it is cubic. Due to their different forms, it is difficult to match the two solutions to each other. In this paper we single out the processes of cloud conversion and accretion operating in a deep model cloud and neglect the advection terms for simplicity. This facilitates exact analytical integration and we are able to derive new expressions for the time of onset of precipitation using both the Kessler and Berry formulations. We then discuss the conditions when the two schemes are equivalent. Finally, we also critically examine the process of droplet evaporation within the framework of the classic Kessler scheme. We improve the existing parameterisation with an accurate estimation of the diffusional mass transport of water vapour. We then demonstrate the overall robustness of our calculations by comparing our results with the experimental observations of Beard and Pruppacher, and find excellent agreement.
Key words. Atmospheric composition and structure · Cloud physics and chemistry · Pollution · Meteorology and atmospheric dynamics · Precipitation
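The linear-versus-cubic contrast between the two autoconversion formulations discussed in the abstract above can be illustrated numerically. The functional forms below are schematic stand-ins with placeholder coefficients, not the actual Kessler or Berry parameterisations:

```python
# Schematic comparison of a linear (Kessler-type) and a cubic (Berry-type)
# autoconversion rate; all coefficients are placeholders, not the paper's.

def kessler_rate(qc, k=1e-3, qc0=0.5e-3):
    """Linear Kessler-type autoconversion: k * (qc - qc0) above a threshold."""
    return k * max(qc - qc0, 0.0)

def berry_rate(qc, c=60.0):
    """Cubic Berry-type autoconversion (schematic form only)."""
    return c * qc ** 3

def onset_time(rate, source=1e-6, qr_onset=1e-5, dt=1.0, t_max=7200.0):
    """Integrate cloud water qc with a constant condensation source and the
    given autoconversion sink; return the time at which rain water qr first
    exceeds qr_onset (the 'onset of precipitation')."""
    qc = qr = t = 0.0
    while t < t_max:
        a = rate(qc)
        qc += dt * (source - a)
        qr += dt * a
        t += dt
        if qr > qr_onset:
            return t
    return None

t_k = onset_time(kessler_rate)
t_b = onset_time(berry_rate)
print("onset, linear Kessler-type rate:", t_k, "s")
print("onset, cubic Berry-type rate:  ", t_b, "s")
```

The linear scheme produces no conversion until the threshold is crossed and then responds gently, while the cubic scheme stays negligible at low cloud water and accelerates sharply, which is why the two are hard to match analytically.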
Finite-volume scheme for anisotropic diffusion
Energy Technology Data Exchange (ETDEWEB)
Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]
2016-02-01
In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
Indian Academy of Sciences (India)
Electronic Commerce - Payment Schemes. V Rajaraman. Series Article, Resonance – Journal of Science Education, Volume 6, Issue 2, February 2001, pp 6-13. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/02/0006-0013 ...
Link Monotonic Allocation Schemes
Slikker, M.
1999-01-01
A network is a graph where the nodes represent players and the links represent bilateral interaction between the players. A reward game assigns a value to every network on a fixed set of players. An allocation scheme specifies how to distribute the worth of every network among the players. This
Indian Academy of Sciences (India)
Electronic Commerce - Payment Schemes. V Rajaraman. Series Article, Resonance – Journal of Science Education, Volume 6, Issue 2, February 2001, pp 6-13. Author affiliation: V Rajaraman, IBM Professor of Information Technology, JNCASR, Bangalore 560 064, India.
Simple monotonic interpolation scheme
International Nuclear Information System (INIS)
Greene, N.M.
1980-01-01
A procedure for presenting tabular data, such as are contained in the ENDF/B files, that is simpler, more general, and potentially much more compact than the present schemes used with ENDF/B is presented. The method has been successfully used for Bondarenko interpolation in a module of the AMPX system. 1 figure, 1 table
Spada, Michele; Jorba, Oriol; Pérez García-Pando, Carlos; Tsigaridis, Kostas; Soares, Joana; Obiso, Vincenzo; Janjic, Zavisa; Baldasano, Jose M.
2015-04-01
We develop and evaluate a fully online-coupled model simulating the life-cycle of the most relevant global aerosols (i.e. mineral dust, sea-salt, black carbon, primary and secondary organic aerosols, and sulfate) and their feedbacks upon atmospheric chemistry and radiative balance. Following the capabilities of its meteorological core, the model has been designed to simulate both global and regional scales with unvaried parameterizations: this allows detailed investigation on the aerosol processes bridging the gap between global and regional models. Since the strong uncertainties affecting aerosol models are often unresponsive to model complexity, we choose to introduce complexity only when it clearly improves results and leads to a better understanding of the simulated aerosol processes. We test two important sources of uncertainty - the fires injection height and secondary organic aerosol (SOA) production - by comparing a baseline simulation with experiments using more advanced approaches. First, injection heights prescribed by Dentener et al. (2006, ACP) are compared with climatological injection heights derived from satellite measurements and produced through the Integrated Monitoring and Modeling System For Wildland Fires (IS4FIRES). Also global patterns of SOA produced by the yield conversion of terpenes as prescribed by Dentener et al. (2006, ACP) are compared with those simulated by the two-product approach of Tsigaridis et al. (2003, ACP). We evaluate our simulations using a variety of observations and measurement techniques. Additionally, we discuss our results in comparison to other global models within AEROCOM and ACCMIP.
3n-Point Quaternary Shape Preserving Subdivision Schemes
Directory of Open Access Journals (Sweden)
MEHWISH BARI
2017-07-01
Full Text Available In this paper, an algorithm is defined to construct 3n-point quaternary approximating subdivision schemes, which are useful for designing different geometric objects in the field of geometric modeling. We establish a family of approximating schemes because approximating schemes provide maximum smoothness compared to interpolating schemes. We observe that the proposed schemes, satisfying the basic sum rules with bell-shaped masks, lead to convergent subdivision schemes which preserve monotonicity. We analyze shape-preserving properties of the proposed schemes, such as convexity and concavity. We also show that quaternary schemes associated with certain refinable functions with dilation 4 have higher order shape-preserving properties. We also calculate the polynomial reproduction of the proposed quaternary approximating subdivision schemes. The proposed schemes have a tension parameter, so by choosing different values of the tension parameter we can obtain different limit curves of the initial control polygon. We show in tabular form that the proposed schemes are better than existing schemes by comparing their support and continuity. The visual quality of the proposed schemes is demonstrated by different snapshots.
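A generic quaternary subdivision step can be sketched as follows. The mask used here is the quaternary linear B-spline mask (1, 2, 3, 4, 3, 2, 1)/4, a simple bell-shaped mask whose four coset sums equal 1 (the basic sum rules); it is a stand-in for the paper's parametrised 3n-point masks, not one of them:

```python
# One generic quaternary (arity-4) subdivision step on a closed control
# polygon (n >= 2 points). The mask below is the quaternary linear B-spline
# mask -- a bell-shaped mask satisfying the basic sum rules -- used as a
# stand-in for the paper's tension-parameter masks.

MASK = [1/4, 2/4, 3/4, 4/4, 3/4, 2/4, 1/4]

def subdivide_quaternary(points, mask=MASK):
    """Refine a closed polygon: each step multiplies the point count by 4."""
    n = len(points)
    refined = []
    for m in range(4 * n):
        q = 0.0
        for i in range(n):
            # try both the direct index and its wrap-around (closed polygon)
            for rep in (m - 4 * i, m - 4 * (i - n)):
                if 0 <= rep < len(mask):
                    q += mask[rep] * points[i]
        refined.append(q)
    return refined

# Partition of unity: a constant polygon is reproduced exactly, because each
# of the four coset sums of the mask equals 1.
const = subdivide_quaternary([2.0] * 5)
assert all(abs(v - 2.0) < 1e-12 for v in const)
```

Repeated application drives the polygon toward the smooth limit curve; checking that every coset of the mask sums to 1 is the sum-rule test the abstract refers to.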
Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings
Directory of Open Access Journals (Sweden)
Caixue Zhou
2017-01-01
Full Text Available Generalized signcryption (GSC) can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH) assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL) assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is more suitable for users who communicate with the cloud using mobile devices.
Energy Technology Data Exchange (ETDEWEB)
Billette, E.
1997-06-23
Complex chemical kinetics modelling is relevant in numerous fields related to the petroleum industry, for instance engine combustion, petrochemistry and atmospheric pollution. Many numerical difficulties are encountered in the computation of these models, mainly due to the large size, the non-linearity and the stiffness of the associated ordinary differential systems. We first studied systems that have an asymptotic behaviour which may be derived from an algebraic analysis. Then we reviewed different methods that make possible the reduction of size and stiffness for chemical kinetics-related differential systems, and suggest possible improvements for some of those methods. We also studied their application to atmospheric chemistry models. Finally, we started to extend those reduction methods to partial differential systems that include, in addition to chemical kinetics, other phenomena such as species emission, advection or diffusion. (author) 44 refs.
Qiu, Shanwen
2012-07-01
In this article, we propose a new grid-free and exact solution method for computing solutions associated with a hybrid traffic flow model based on the Lighthill-Whitham-Richards (LWR) partial differential equation. In this hybrid flow model, the vehicles satisfy the LWR equation whenever possible, and have a fixed acceleration otherwise. We first present a grid-free solution method for the LWR equation based on the minimization of component functions. We then show that this solution method can be extended to compute the solutions to the hybrid model by proper modification of the component functions, for any concave fundamental diagram. We derive these functions analytically for the specific case of a triangular fundamental diagram. We also show that the proposed computational method can handle fixed or moving bottlenecks.
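The triangular fundamental diagram mentioned above has a simple closed form. The numeric parameters below are illustrative, and the demand/supply functions sketch how such a diagram is typically used in discretised LWR solvers, not the paper's grid-free method:

```python
# Triangular fundamental diagram for the LWR model; free-flow speed, backward
# wave speed, and jam density are illustrative values.

V_F = 30.0     # free-flow speed [m/s]
W = 5.0        # backward (congested) wave speed [m/s]
RHO_J = 0.15   # jam density [veh/m]

def flux(rho):
    """Triangular fundamental diagram Q(rho) = min(v_f*rho, w*(rho_j - rho))."""
    return min(V_F * rho, W * (RHO_J - rho))

# Critical density and capacity follow from intersecting the two branches:
rho_c = W * RHO_J / (V_F + W)
capacity = V_F * rho_c
assert abs(flux(rho_c) - capacity) < 1e-12
assert flux(0.0) == 0.0 and abs(flux(RHO_J)) < 1e-12

# Godunov-style sending/receiving functions used in discretised LWR solvers:
def demand(rho):   # maximum flow the upstream cell can send
    return flux(rho) if rho <= rho_c else capacity

def supply(rho):   # maximum flow the downstream cell can receive
    return capacity if rho <= rho_c else flux(rho)
```

Concavity of the diagram (trivially satisfied by the triangle) is exactly the property the paper's component-function construction requires.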
Song, Y.; Yao, Q.; Wang, G.; Yang, X.; Pan, C.; Johnston, E.; Kim, M.; Konstantinidis, K.; Hazen, T.; Mayes, M. A.
2016-12-01
Soil microorganisms and their activities, which play a significant role in regulating carbon (C) and nutrient biogeochemical cycles, are highly responsive to changes in climate. The diversity of microorganisms, however, complicates the explicit representation of microbial and enzymatic processes in biogeochemical or earth system models. Uncertainties in accounting for microbial diversity therefore limit our ability to incorporate microbial functions into models. However, 'omics technology provides abundant information to identify the structure and function of the microbial community and strengthens our ability to understand microbially mediated C and nutrient cycles and their climate feedbacks. We collected soils from control and phosphorus (P) fertilized plots at the Gigante Peninsula long-term fertilization experiment at the Smithsonian Tropical Research Institute in Panama, an ecosystem where P limitation constrains primary productivity and microbial activities. We monitored the effects of P addition on soil carbon decomposition with respiration measurements and investigated the responsible microbial mechanisms with metagenomics, metatranscriptomics, metaproteomics, and enzyme activity assays. We integrated the P dynamics into the C-N coupled Microbial Enzyme Decomposition (MEND) model, and integrated the 'omics data with the new microbially-enabled C-N-P model to examine the mechanistic responses of soil microbial activity and heterotrophic respiration to P availability. Our findings indicate that increases in soil P availability can alter both the abundance and activity of enzymes related to soil carbon decomposition and P mineralization in tropical soil, leading to increased CO2 emissions to the atmosphere. Integrating the 'omics data into the biogeochemical model enabled scaling of complex ecosystem functions from genes to functional groups, enabling predictions of microbial controls on C, N and P cycles.
Cavazos Guerra, C.; Lauer, A.; Herber, A. B.; Butler, T. M.
2016-12-01
Realistic simulation of the physical and dynamical processes occurring at the Arctic surface and in the atmosphere, and the feedbacks between these processes, is still a challenge for Arctic climate modelers. This is critical when further studies, involving for example the transport mechanisms and pathways of pollutants from lower latitudes into the Arctic, rely on the ability of the model to represent atmospheric circulation, especially given the complexity of the Arctic atmosphere. In this work we evaluate the performance of the Weather Research and Forecasting (WRF) model according to the choice of two land surface model schemes (Noah and NoahMP) and two reanalysis datasets used for initialization and lateral boundary conditions (ERA-Interim and ASR) to simulate surface and atmosphere dynamics, including the location and displacement of the polar dome and other features of the atmospheric circulation associated with sea ice maxima/minima extent within the Eurasian Arctic, comprising the Nordic countries in Northern Europe and part of West Russia. Sensitivity analyses include simulations at 15 km horizontal resolution over a period of five years, from 2008 to 2012. The WRF model simulations are evaluated against surface meteorological data from automated weather stations and atmospheric profiles from radiosondes. Results show that the model is able to reproduce the main features of the atmospheric dynamics and the vertical structure of the Arctic atmosphere reasonably well. The model is, however, sensitive to the choice of the reanalysis used for initialization and of the land surface scheme, with significant biases in the simulated surface meteorology and in the wind, moisture and temperature profiles. The best choice of physical parameterization is then used in WRF with coupled chemistry (WRF-Chem) to simulate BC concentrations in several case studies within the analyzed period in our domain and to assess the role of modeled circulation in concentrations of BC
On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme
Directory of Open Access Journals (Sweden)
Wang Daoshun
2010-01-01
Full Text Available Abstract Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original one but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however, the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to the more general case of a greyscale -VSS scheme.
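The XOR-based share generation can be sketched directly. This toy (2, 2) example on a byte array stands in for the paper's matrix-concatenation construction:

```python
import random

# Toy XOR-based (2, 2) share generation for a greyscale "image" (a byte
# array), with exact XOR reconstruction and lossy OR stacking. This is a
# stand-in sketch, not the paper's matrix-concatenation construction.
random.seed(0)
secret = bytes([10, 200, 33, 47, 128, 255, 0, 77])     # toy greyscale pixels

share1 = bytes(random.randrange(256) for _ in secret)  # random share
share2 = bytes(s ^ r for s, r in zip(secret, share1))  # secret XOR share1

# XOR decoding is exact ...
assert bytes(a ^ b for a, b in zip(share1, share2)) == secret
# ... while OR stacking (as in conventional VSS decoding) only approximates it.
stacked = bytes(a | b for a, b in zip(share1, share2))
print("secret :", list(secret))
print("stacked:", list(stacked))
```

Each share alone is statistically independent of the secret, which is the security property shared by SS and VSS constructions alike.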
Directory of Open Access Journals (Sweden)
Tsung-Han Lee
2013-01-01
Full Text Available 6LoWPAN technology has attracted extensive attention recently, because 6LoWPAN is one of the Internet of Things standards and it adapts the IPv6 protocol stack to low-rate wireless personal area networks, such as IEEE 802.15.4. One view is that the IP architecture is not suitable for low-rate wireless personal area networks. It is a challenge to implement the IPv6 protocol stack in IEEE 802.15.4 devices because the size of an IPv6 packet is much larger than the maximum packet size of IEEE 802.15.4 in the data link layer. In order to solve this problem, 6LoWPAN provides header compression to reduce the transmission overhead for IP packets. In addition, two routing schemes, the mesh-under and route-over routing schemes, are also proposed in 6LoWPAN to forward IP fragments over the IEEE 802.15.4 radio link. The distinction is based on which layer of the 6LoWPAN protocol stack is in charge of routing decisions: in the route-over routing scheme, the routing decision is made at the network layer and, in mesh-under, by the adaptation layer. Thus, the goal of this research is to understand the performance of the two routing schemes in 6LoWPAN under error-prone channel conditions.
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; Fisher, H.; Pepin, J. [Los Alamos National Lab., NM (United States); Gillmann, R. [Federal Highway Administration, Washington, DC (United States)
1996-07-01
Traffic classification techniques were evaluated using data from a 1993 investigation of the traffic flow patterns on I-20 in Georgia. First we improved the data by sifting through the data base, checking against the original video for questionable events and removing and/or repairing them. We used this data base to critique quantitatively the performance of a classification method known as Scheme F. As a context for improving the approach, we show in this paper that Scheme F can be represented as a McCulloch-Pitts neural network, or as an equivalent decomposition of the plane. We found that Scheme F, among other things, severely misrepresents the number of vehicles in Class 3 by labeling them as Class 2. After discussing the basic classification problem in terms of what is measured and what is the desired prediction goal, we set forth desirable characteristics of the classification scheme and describe a recurrent neural network system that partitions the high-dimensional space into bins for each axle separation. The collection of bin numbers, one for each of the axle separations, specifies a region in the axle space called a hyper-bin. All the vehicles counted that have the same set of bin numbers are in the same hyper-bin. The probability of the occurrence of a particular class in that hyper-bin is the relative frequency with which that class occurs in that set of bin numbers. This type of algorithm produces classification results that are much more balanced and uniform with respect to Classes 2 and 3 and Class 10. In particular, the cancellation of classification errors that occurs is for many applications the ideal classification scenario. The neural network results are presented in the form of a primary classification network and a reclassification network, the performance matrices for which are presented.
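The hyper-bin idea above, one bin number per axle separation and classification by relative class frequency within the hyper-bin, can be sketched as follows; the bin width, classes, and training data are fabricated for illustration:

```python
from collections import Counter, defaultdict

# Toy sketch of hyper-bin classification: quantise each axle separation into
# a bin, use the tuple of bin numbers as a key, and classify by the most
# frequent class observed in that hyper-bin. All data here are made up.

BIN_WIDTH = 0.5  # metres per bin (assumed)

def hyper_bin(separations):
    """Map a vehicle's axle separations to its hyper-bin (tuple of bin numbers)."""
    return tuple(int(s / BIN_WIDTH) for s in separations)

# (axle separations in metres, vehicle class) -- fabricated training data
training = [
    ([2.8], 2), ([2.9], 2), ([3.1], 3), ([3.0], 3), ([2.7], 2),
    ([4.1, 9.0], 10), ([4.2, 9.3], 10),
]

counts = defaultdict(Counter)
for seps, cls in training:
    counts[hyper_bin(seps)][cls] += 1

def classify(separations):
    """Return the most frequent class in the vehicle's hyper-bin (None if unseen)."""
    bucket = counts.get(hyper_bin(separations))
    return bucket.most_common(1)[0][0] if bucket else None

print(classify([2.85]))        # same hyper-bin as the 2.7/2.8/2.9 vehicles
print(classify([4.15, 9.1]))   # same hyper-bin as the two-separation vehicles
```

Because the prediction is the empirical class frequency within each hyper-bin, misclassifications in one direction tend to be balanced by misclassifications in the other, the error-cancellation property the abstract highlights.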
Scalable Nonlinear Compact Schemes
Energy Technology Data Exchange (ETDEWEB)
Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)
2014-04-01
In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
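The single-processor baseline referred to above, the Thomas algorithm for tridiagonal systems, can be sketched as:

```python
# Serial Thomas algorithm for a tridiagonal system -- the uniprocessor
# baseline whose complexity the parallel solver described above aims to match.

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d.

    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Returns x without modifying inputs.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant 1-D Poisson-like system, the regime in which the
# reduced system of the parallel scheme converges in a few iterations.
n = 5
a = [0.0] + [-1.0] * (n - 1)
b = [4.0] * n
c = [-1.0] * (n - 1) + [0.0]
d = [1.0] * n
x = thomas(a, b, c, d)
for i in range(n):                             # verify A x = d
    r = b[i] * x[i]
    if i > 0:
        r += a[i] * x[i - 1]
    if i < n - 1:
        r += c[i] * x[i + 1]
    assert abs(r - d[i]) < 1e-12
```

Diagonal dominance of compact-scheme systems is also what lets the parallel substructured solver reach machine-zero accuracy in a fixed, small number of iterations.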
Hameed, Imad Hadi; Abdulzahra, Ameer Ibrahim; Jebor, Mohammed Abdullah; Kqueen, Cheah Yoke; Ommer, Aamera Jaber
2015-08-01
This study evaluates the mitochondrial noncoding regions by using the Sanger sequencing method for application in forensic science. FTA® technology (FTA™ paper DNA extraction) was utilized to extract DNA. A portion of the coding region encompassing positions 10,716 to 11,184 was amplified in accordance with the Anderson reference sequence. PCR products purified by EZ-10 spin column were then sequenced and detected using the ABI 3730xL DNA Analyzer. The new polymorphic positions 10,750 and 10,790 described here may be suitable markers for future identification purposes. The data obtained can be used to identify variable nucleotide positions characterized by frequent occurrence, which are the most promising identification variants.
Vincent, Marie; Collet, Corinne; Verloes, Alain; Lambert, Laetitia; Herlin, Christian; Blanchet, Catherine; Sanchez, Elodie; Drunat, Séverine; Vigneron, Jacqueline; Laplanche, Jean-Louis; Puechberty, Jacques; Sarda, Pierre; Geneviève, David
2014-01-01
Mandibulofacial dysostosis is part of a clinically and genetically heterogeneous group of disorders of craniofacial development, which lead to malar and mandibular hypoplasia. Treacher Collins syndrome is the major cause of mandibulofacial dysostosis and is due to mutations in the TCOF1 gene. Usually patients with Treacher Collins syndrome do not present with intellectual disability. Recently, the EFTUD2 gene was identified in patients with mandibulofacial dysostosis associated with microcephaly, intellectual disability and esophageal atresia. We report on two patients presenting with mandibulofacial dysostosis characteristic of Treacher Collins syndrome, but associated with unexpected intellectual disability, due to a large deletion encompassing several genes including the TCOF1 gene. We discuss the involvement of the other deleted genes such as CAMK2A or SLC6A7 in the cognitive development delay of the patients reported, and we propose the systematic investigation for 5q32 deletion when intellectual disability is associated with Treacher Collins syndrome.
Energy Technology Data Exchange (ETDEWEB)
Stiles, J.I.; Friedman, L.R.; Sherman, F.
1980-01-01
It has been recently found that a specific chromosomal segment, in certain but not all laboratory strains of Saccharomyces cerevisiae, is deleted and transposed at high frequencies. This segment, denoted COR, encompasses the three closely linked loci CYC1, OSM1 and RAD7 which control iso-1-cytochrome c, osmotic sensitivity and UV-sensitivity, respectively. Two types of apparently normal laboratory strains of yeast designated COR1 and COR2, were uncovered after the examination of the frequencies and types of mutations causing either deficiencies or overproduction of iso-1-cytochrome c; in contrast to COR1 strains which give predominantly point mutations causing deficiencies of iso-1-cytochrome c, COR2 strains give rise to deletions and transpositions of the COR segment. We have undertaken a systematic investigation of the physical structure and genetic properties of the COR region and of the aberrations arising in COR2 strains.
Luo, Hong; Xie, Li; Wang, Shou-Zheng; Chen, Jin-Lan; Huang, Can; Wang, Jian; Yang, Jin-Fu; Zhang, Wei-Zhi; Yang, Yi-Feng; Tan, Zhi-Ping
2012-11-01
Interstitial duplications of 8q12 encompassing CHD7 have recently been described as a new microduplication syndrome. Three 8q12 duplications have been reported with shared recognizable phenotype: Duane anomaly, developmental delay and dysmorphic facial features. We identified a 2.7 Mb duplication on chromosome 8q12 with SNP-array in a patient with growth delay, congenital heart defects, ear anomalies and torticollis. To our knowledge, this is the smallest duplication reported to date. Our findings support the notion that increased copy number of CHD7 may underlie the phenotype of the 8q12 duplication. Our study together with previous studies suggest that the 8q12 duplication could be defined as a novel syndrome. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
DEFF Research Database (Denmark)
Roos, L; Jønch, A E; Kjaergaard, S
2009-01-01
BACKGROUND: The use of array comparative genome hybridisation (CGH) analyses for investigation of children with mental retardation has led to the identification of a growing number of new microdeletion and microduplication syndromes, some of which have become clinically well characterised and some that await further delineation. This report describes three children with de novo 17p13.1 duplications encompassing the PAFAH1B1 gene, who had similar phenotypic features, including mild to moderate developmental delay, hypotonia and facial dysmorphism, and compares them to the few previously reported cases. Compared to patients with deletion of the region (Miller-Dieker syndrome), the patients reported here had mild to moderate retardation and displayed no lissencephaly or gross brain malformations. Further cases with similar duplications are expected to be diagnosed, and will contribute to the delineation of a potential...
Quantum identification schemes with entanglements
International Nuclear Information System (INIS)
Mihara, Takashi
2002-01-01
We need secure identification schemes because many situations exist in which a person must be identified. In this paper, we propose three quantum identification schemes with entanglements. First, we propose a quantum one-time pad password scheme. In this scheme, entanglements play the role of a one-time pad password. Next, we propose a quantum identification scheme that requires a trusted authority. Finally, we propose a quantum message authentication scheme that is constructed by combining a different quantum cryptosystem with an ordinary authentication tag
Yasas, F M
1977-01-01
In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in its being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass roots experiences, they made an analysis of the job, determining what knowledge, attitude and skills it required. Analysis of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. How to consider the problems encountered through government structures for policy making and decisions was also learned. Tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries.
Bonus Schemes and Trading Activity
Pikulina, E.S.; Renneboog, L.D.R.; Ter Horst, J.R.; Tobler, P.N.
2013-01-01
Abstract: Little is known about how different bonus schemes affect traders’ propensity to trade and which bonus schemes improve traders’ performance. We study the effects of linear versus threshold (convex) bonus schemes on traders’ behavior. Traders purchase and sell shares in an experimental stock
International Nuclear Information System (INIS)
Grashilin, V.A.; Karyshev, Yu.Ya.
1982-01-01
A 6-cycle control scheme for a step motor is described. The block diagram and the basic circuit of the step motor control are presented. The step motor control comprises a pulse shaper, an electronic commutator and power amplifiers. Supplying the step motor from a 6-cycle electronic commutator provides higher reliability and accuracy than a 3-cycle commutator. The step motor is controlled by a program supplied by an external source of control signals. Time-dependent diagrams for step motor control are presented. The specifications of the step motor are given
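The abstract does not give the commutator's winding sequence, so the table below is a hypothetical six-step excitation pattern for a three-phase step motor. It illustrates one common reason a 6-cycle commutator improves accuracy over a 3-cycle one: interleaving dual-phase states between the single-phase ones halves the step angle.

```python
# Hypothetical six-step excitation sequence for a three-phase step motor,
# alternating single- and dual-phase energisation. The actual winding
# scheme of the commutator in the paper is not specified in the abstract;
# this table is only an illustration.
SIX_STEP_SEQUENCE = [
    ("A",),        # step 1: phase A alone
    ("A", "B"),    # step 2: phases A and B together
    ("B",),        # step 3: phase B alone
    ("B", "C"),    # step 4: phases B and C together
    ("C",),        # step 5: phase C alone
    ("C", "A"),    # step 6: phases C and A together
]

def energised_phases(step_count):
    """Return the tuple of phases energised after a given number of pulses."""
    return SIX_STEP_SEQUENCE[step_count % len(SIX_STEP_SEQUENCE)]
```

A program driving the motor, as the abstract describes, would simply advance `step_count` once per control pulse.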
A numerical scheme for the generalized Burgers–Huxley equation
Directory of Open Access Journals (Sweden)
Brajesh K. Singh
2016-10-01
Full Text Available In this article, a numerical solution of the generalized Burgers–Huxley (gBH) equation is approximated by using a new scheme: the modified cubic B-spline differential quadrature method (MCB-DQM). The scheme is based on the differential quadrature method, in which the weighting coefficients are obtained by using modified cubic B-splines as a set of basis functions. This scheme reduces the equation to a system of first-order ordinary differential equations (ODEs), which is solved by adopting the SSP-RK43 scheme. Further, it is shown that the proposed scheme is stable. The efficiency of the proposed method is illustrated by four numerical experiments, which confirm that the obtained results are in good agreement with earlier studies. This scheme is an easy, economical and efficient technique for finding numerical solutions of various kinds of nonlinear physical models as compared to earlier schemes.
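The core differential-quadrature step can be sketched without the B-spline machinery: the derivative at each grid point is approximated as a weighted sum of the function values at all grid points. The sketch below computes first-derivative weights from Lagrange polynomials rather than the paper's modified cubic B-splines, so it illustrates the DQM idea only, not MCB-DQM itself.

```python
import numpy as np

def dq_weights(x):
    """First-derivative differential-quadrature weights a[i, j] such that
    u'(x[i]) ~ sum_j a[i, j] * u(x[j]). Weights come from Lagrange basis
    polynomials here; the paper uses modified cubic B-splines instead."""
    n = len(x)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # a[i, j] = L_j'(x_i) for the Lagrange basis polynomial L_j
                num = np.prod([x[i] - x[k] for k in range(n) if k not in (i, j)])
                den = np.prod([x[j] - x[k] for k in range(n) if k != j])
                a[i, j] = num / den
        a[i, i] = -a[i].sum()  # each row of a derivative matrix sums to zero

    return a

x = np.linspace(0.0, 1.0, 7)
A = dq_weights(x)
# Polynomial-based DQ weights are exact for polynomials of degree < n:
err = np.max(np.abs(A @ x**2 - 2 * x))  # d/dx x^2 = 2x
```

In the method of lines used by the paper, applying such a matrix to the unknowns turns the gBH PDE into the ODE system that SSP-RK43 then integrates.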
Packet reversed packet combining scheme
International Nuclear Information System (INIS)
Bhunia, C.T.
2006-07-01
The packet combining scheme is a well-defined, simple error-correction scheme that works on erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols alone. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that corrects errors even when they occur at the same bit location of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, offers higher throughput. (author)
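A minimal sketch of the reversal idea, under assumptions the abstract leaves open (a CRC travels with the packet, and the receiver brute-forces candidate bit flips): if the second copy is transmitted bit-reversed, channel errors hitting identical channel positions in both transmissions no longer collide at the same logical bit, so the copies still disagree and the error positions remain locatable.

```python
import itertools
import zlib

def crc(bits):
    """Checksum assumed to accompany the packet (CRC-32 here)."""
    return zlib.crc32(bytes(bits))

def corrupt(bits, positions):
    """Flip the bits at the given channel positions."""
    return [b ^ (i in positions) for i, b in enumerate(bits)]

packet = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
check = crc(packet)

# Copy A is sent as-is; copy B is sent bit-reversed. Channel errors at the
# SAME positions in both transmissions therefore land on DIFFERENT logical
# bits once copy B is reversed back at the receiver.
rx_a = corrupt(packet, {3, 7})
rx_b = corrupt(packet[::-1], {3, 7})[::-1]

# Positions where the copies disagree are the candidate error locations.
candidates = [i for i in range(len(packet)) if rx_a[i] != rx_b[i]]

# Brute-force the subset of candidate flips that restores the checksum.
recovered = None
for r in range(len(candidates) + 1):
    for subset in itertools.combinations(candidates, r):
        trial = corrupt(rx_a, set(subset))
        if crc(trial) == check:
            recovered = trial
            break
    if recovered is not None:
        break
```

With plain (unreversed) combining, errors at positions {3, 7} in both copies would produce no disagreements at all, which is exactly the failure case the paper addresses.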
Vande Perre, P; Zazo Seco, C; Patat, O; Bouneau, L; Vigouroux, A; Bourgeois, D; El Hout, S; Chassaing, N; Calvas, P
2018-02-01
Axenfeld-Rieger syndrome (ARS) is a heterogeneous clinical entity transmitted in an autosomal dominant manner. The main feature, Axenfeld-Rieger Anomaly (ARA), is a malformation of the anterior segment of the eye that can lead to glaucoma and impair vision. Extra-ocular defects have also been reported. Point mutations of FOXC1 and PITX2 are responsible for about 40% of the ARS cases. We describe the phenotype of a patient carrying a deletion encompassing the 4q25 locus containing PITX2 gene. This child presented with a congenital heart defect (Tetralogy of Fallot, TOF) and no signs of ARA. He is the first patient described with TOF and a complete deletion of PITX2 (arr[GRCh37]4q25(110843057-112077858)x1, involving PITX2, EGF, ELOVL6 and ENPEP) inherited from his ARS affected mother. In addition, to our knowledge, he is the first patient reported with no ocular phenotype associated with haploinsufficiency of PITX2. We compare the phenotype and genotype of this patient to those of five other patients carrying 4q25 deletions. Two of these patients were enrolled in the university hospital in Toulouse, while the other three were already documented in DECIPHER. This comparative study suggests both an incomplete penetrance of the ocular malformation pattern in patients carrying PITX2 deletions and a putative association between TOF and PITX2 haploinsufficiency. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Lin, Shaobin; Zheng, Xiaohe; Gu, Heng; Li, Mingzhen
2017-06-10
To delineate the phenotypic characteristics of 22q11.2 deletion syndrome and the role of the CRKL gene in the pathogenesis of cardiac abnormalities. G-banded karyotyping, single nucleotide polymorphism (SNP) array analysis and fluorescence in situ hybridization (FISH) were performed on a fetus with tetralogy of Fallot detected by ultrasound. The correlation between genotype and phenotype was explored after precise mapping of the breakpoints on chromosome 22q11.2. SNP array analysis was also performed on peripheral blood samples from both parents to clarify the origin of the deletion. The fetus showed a normal karyotype of 46,XY. SNP array analysis performed on a fetal blood sample revealed a 749 kb deletion (chr22: 20 716 876-21 465 659) at 22q11.21, which encompassed the CRKL gene but not TBX1, HIRA, COMT or MAPK1. Precise mapping of the breakpoints suggested that the deleted region overlapped with that of central 22q11.2 deletion syndrome. SNP array analysis of the parental blood samples suggested that the 22q11.21 deletion was de novo in origin. The presence of the 22q11.21 deletion in the fetus was also confirmed by FISH analysis. Central 22q11.21 deletion probably accounts for the cardiac abnormalities in the fetus, for which the CRKL gene should be considered an important candidate.
Black Box Traceable Ciphertext Policy Attribute-Based Encryption Scheme
Directory of Open Access Journals (Sweden)
Xingbing Fu
2015-08-01
Full Text Available In the existing attribute-based encryption (ABE) scheme, the authority (i.e., the private key generator (PKG)) is able to calculate and issue any user's private key, which makes it completely trusted and severely limits the applications of the ABE scheme. To mitigate this problem, we propose the black box traceable ciphertext policy attribute-based encryption (T-CP-ABE) scheme, in which if the PKG re-distributes users' private keys for malicious uses, it might be caught and sued. We provide a construction to realize the T-CP-ABE scheme in a black box model. Our scheme is based on the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. In our scheme, we employ a pair (ID, S) to identify a user, where ID denotes the identity of a user and S denotes the attribute set associated with her.
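The pairing-based construction cannot be reproduced from the abstract, but the access-control logic it enforces can: decryption succeeds exactly when the attribute set S in a user's pair (ID, S) satisfies the policy embedded in the ciphertext. A toy policy-satisfaction check follows; the tuple-based policy encoding is an assumption for illustration, not the paper's.

```python
# Toy evaluation of a ciphertext-policy access structure over attributes.
# The actual T-CP-ABE scheme realises this check cryptographically with
# bilinear pairings under the DBDH assumption; this sketch only shows the
# boolean condition that decides whether a key for attribute set S decrypts.
def satisfies(policy, attrs):
    """policy: an attribute name, or ("AND", p1, p2), or ("OR", p1, p2)."""
    if isinstance(policy, str):
        return policy in attrs
    op, left, right = policy
    if op == "AND":
        return satisfies(left, attrs) and satisfies(right, attrs)
    if op == "OR":
        return satisfies(left, attrs) or satisfies(right, attrs)
    raise ValueError(f"unknown operator: {op}")

# Example: readable by a doctor who is in cardiology or radiology.
policy = ("AND", "doctor", ("OR", "cardiology", "radiology"))
```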
Mase, Nobuko; Sawamura, Yutaka; Yamamoto, Toshiya; Takada, Norio; Nishio, Sogo; Saito, Toshihiro; Iketani, Hiroyuki
2014-01-01
Self-compatible mutants of self-incompatible crops have been extensively studied for research and agricultural purposes. Until now, the only known pollen-part self-compatible mutants in Rosaceae subtribe Pyrinae, which contains many important fruit trees, were polyploid. This study revealed that the pollen-part self-compatibility of breeding selection 415-1, a recently discovered mutant of Japanese pear (Pyrus pyrifolia) derived from γ-irradiated pollen, is caused by a duplication of an S-haplotype. In the progeny of 415-1, some plants had three S-haplotypes, two of which were from the pollen parent. Thus, 415-1 was able to produce pollen with two S-haplotypes, even though it was found to be diploid: the relative nuclear DNA content measured by flow cytometry showed no significant difference from that of a diploid cultivar. Inheritance patterns of simple sequence repeat (SSR) alleles in the same linkage group as the S-locus (LG 17) showed that some SSRs closely linked to S-haplotypes were duplicated in progeny containing the duplicated S-haplotype. These results indicate that the pollen-part self-compatibility of 415-1 is not caused by a mutation of pollen S factors in either one of the S-haplotypes, but by a segmental duplication encompassing the S-haplotype. Consequently, 415-1 can produce S-heteroallelic pollen grains that are capable of breaking down self-incompatibility (SI) by competitive interaction between the two different S factors in the pollen grain. 415-1 is the first diploid pollen-part self-compatible mutant with a duplicated S-haplotype to be discovered in the Pyrinae. The fact that 415-1 is not polyploid makes it particularly valuable for further studies of SI mechanisms.
Verification of an objective analysis scheme
International Nuclear Information System (INIS)
Cats, G.J.; Haan, B.J. de; Hafkenscheid, L.M.
1987-01-01
An intermittent data assimilation scheme has been used to produce wind and precipitation fields for the 10 days after the explosion at the Chernobyl nuclear power plant on 25 April 1986. The wind fields are analyses; the precipitation fields were generated by the forecast-model part of the scheme. The precipitation fields are of fair quality. The quality of the wind fields has been monitored via the ensuing trajectories, which were found to describe the arrival times of radioactive air in good agreement with most observational data taken all over Europe. The wind analyses are therefore considered reliable. 25 refs.; 13 figs
Low overhead slipless carrier phase estimation scheme.
Cheng, Haiquan; Li, Yan; Kong, Deming; Zang, Jizhao; Wu, Jian; Lin, Jintong
2014-08-25
Two slipless schemes are compared in a single-carrier 30 Gbaud quadrature phase shift keying (QPSK) system. An equivalent linewidth model considering the phase noise induced by both the laser linewidth and fiber nonlinearity is applied in the performance analysis. The simulation results show that it is possible to mitigate cycle slips (CS) using only 0.39% pilot overhead for the proposed blind carrier phase recovery (CPR) + pilot-symbols-aided phase unwrapping (PAPU) scheme within a 1 dB signal-to-noise ratio (SNR) penalty limit at a bit error ratio (BER) of 10^-3 with 4 MHz equivalent linewidth.
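The abstract does not spell out the blind CPR algorithm, so as a representative example the sketch below implements the classic Viterbi-Viterbi fourth-power estimator for QPSK. Its pi/2 phase ambiguity is precisely what makes cycle slips possible and pilot-aided unwrapping useful.

```python
import cmath
import math
import random

def qpsk_symbol(rng):
    """Random QPSK point from {1, j, -1, -j} on the unit circle."""
    return cmath.exp(1j * rng.randrange(4) * math.pi / 2)

def fourth_power_phase(received):
    """Blind Viterbi-Viterbi estimate: raising QPSK symbols to the 4th
    power strips the modulation, leaving 4x the common phase offset,
    resolvable only modulo pi/2 (hence the risk of cycle slips)."""
    return cmath.phase(sum(z ** 4 for z in received)) / 4.0

rng = random.Random(1)
true_phase = 0.1  # rad; small enough to stay inside the pi/2 window
rx = [qpsk_symbol(rng) * cmath.exp(1j * true_phase) for _ in range(200)]
est = fourth_power_phase(rx)
```

A carrier phase rotated by a further pi/2 would yield the same estimate, which is why a few pilot symbols suffice to unwrap the ambiguity.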
Transmission usage cost allocation schemes
International Nuclear Information System (INIS)
Abou El Ela, A.A.; El-Sehiemy, R.A.
2009-01-01
This paper presents different suggested transmission usage cost allocation (TCA) schemes for the system individuals. Different independent system operator (ISO) visions are presented using the pro rata and flow-based TCA methods. Two flow-based TCA (FTCA) schemes are proposed. The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concept to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from actual measurements of power flows in transmission lines and power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)
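As a baseline, the pro rata method simply divides the total network cost among participants in proportion to their power. A minimal sketch with illustrative numbers (not taken from the paper):

```python
# Toy pro rata transmission usage cost allocation: the total network cost
# is split among participants in proportion to their power, ignoring how
# the flows actually use individual lines (which is what the flow-based
# FTCA schemes in the paper account for).
def pro_rata_allocation(total_cost, powers):
    """Allocate total_cost proportionally to each participant's power (MW)."""
    total_power = sum(powers.values())
    return {name: total_cost * p / total_power for name, p in powers.items()}

loads = {"bus1": 50.0, "bus2": 30.0, "bus3": 20.0}  # illustrative loads
shares = pro_rata_allocation(1000.0, loads)
```

The flow-based schemes differ in that a lightly used remote line no longer charges every bus equally, but the allocation still sums to the total cost.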
Patrick Honohan
1987-01-01
A Ponzi scheme is an arrangement whereby a promoter offers an investment opportunity with attractive dividends, but where the only basis for the dividends is the future receipts from new investors. The first of these two notes explores some of the analytical properties of a Ponzi scheme, addressing in particular the question whether it is possible for a Ponzi scheme to exist if all the participants are rational. The second note briefly examines the collapse of the PMPA insurance company whose...
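The analytical point of the first note can be illustrated with a minimal cash-flow simulation (parameters are illustrative assumptions, not from the paper): dividends are funded solely by new subscriptions, so once the promised dividend rate outruns the growth of inflows, collapse is only a matter of time.

```python
# Minimal Ponzi-scheme cash-flow sketch: dividends are paid only out of
# new subscriptions. If inflows grow more slowly than the accumulating
# dividend obligations, the fund eventually runs dry.
def simulate_ponzi(dividend_rate, inflow, inflow_growth, periods):
    """Return the period at which the scheme collapses (fund < 0), or None."""
    fund = 0.0
    principal = 0.0  # total invested principal on which dividends are owed
    for t in range(1, periods + 1):
        fund += inflow
        principal += inflow
        inflow *= 1.0 + inflow_growth
        fund -= dividend_rate * principal  # pay the promised dividends
        if fund < 0:
            return t
    return None

# 20% promised dividends against only 5% inflow growth: doomed.
collapse_at = simulate_ponzi(0.20, 100.0, 0.05, 200)
```

Conversely, while inflow growth matches or exceeds the dividend obligations the fund stays afloat, which is the knife-edge on which any "rational" Ponzi participation argument must rest.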
Entropy conservative finite element schemes
Tadmor, E.
1986-01-01
The question of entropy stability for discrete approximations to hyperbolic systems of conservation laws is studied. The amount of numerical viscosity present in such schemes is quantified and related to their entropy stability by means of comparison. To this end, two main ingredients are used: entropy variables and the construction of certain entropy conservative schemes in terms of piecewise-linear finite element approximations. It is then shown that conservative schemes are entropy stable if, and (for three-point schemes) only if, they contain more numerical viscosity than the aforementioned entropy conservative ones.
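A compact way to state the entropy-conservation requirement underlying this comparison, reproduced here from the general literature on Tadmor-type schemes rather than from the abstract itself: for an entropy pair $(\eta, q)$ with entropy variables $v = \eta'(u)$ and entropy potential $\psi = v^{\top} f(u) - q(u)$, a two-point numerical flux $F_{j+1/2}$ is entropy conservative when

$$(v_{j+1} - v_j)^{\top} F_{j+1/2} \;=\; \psi_{j+1} - \psi_j,$$

and a conservative scheme is entropy stable when its numerical viscosity is at least that of such a flux, which is the comparison principle the abstract refers to.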
DEFF Research Database (Denmark)
Rotbart, Noy Galil
in a distributed fashion increases. Second, attempting to answer queries on vertices of a graph stored in a distributed fashion can be significantly more complicated. In order to lay theoretical foundations for the first penalty mentioned, a large body of work has concentrated on labeling schemes. A labeling scheme...... evaluation of fully dynamic labeling schemes. Due to a connection between adjacency labeling schemes and the graph-theoretical study of induced universal graphs, we study these in depth and show novel results for bounded-degree graphs and power-law graphs. We also survey and make progress on the related...
International Nuclear Information System (INIS)
Katata, G.; Chino, M.; Kobayashi, T.
2015-01-01
Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative
Energy Technology Data Exchange (ETDEWEB)
Katata, G.; Chino, M.; Kobayashi, T. [Japan Atomic Energy Agency (JAEA), Ibaraki (Japan); and others
2015-07-01
Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water d