Graphical Derivatives and Stability Analysis for Parameterized Equilibria with Conic Constraints
Czech Academy of Sciences Publication Activity Database
Mordukhovich, B. S.; Outrata, Jiří; Ramírez, H. C.
2015-01-01
Roč. 23, č. 4 (2015), s. 687-704 ISSN 1877-0533 R&D Projects: GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985556 Keywords : Variational analysis and optimization * Parameterized equilibria * Conic constraints * Sensitivity and stability analysis * Solution maps * Graphical derivatives * Normal and tangent cones Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/outrata-0449259.pdf
Investigation of previously derived Hyades, Coma, and M67 reddenings
International Nuclear Information System (INIS)
Taylor, B.J.
1980-01-01
New Hyades polarimetry and field star photometry have been obtained to check the Hyades reddening, which was found to be nonzero in a previous paper. The new Hyades polarimetry implies essentially zero reddening; this is also true of polarimetry published by Behr (which was incorrectly interpreted in the previous paper). Four photometric techniques which are presumed to be insensitive to blanketing are used to compare the Hyades to nearby field stars; these four techniques also yield essentially zero reddening. When all of these results are combined with others which the author has previously published and a simultaneous solution for the Hyades, Coma, and M67 reddenings is made, the results are E(B-V) = 3 ± 2 (σ) mmag, −1 ± 3 (σ) mmag, and 46 ± 6 (σ) mmag, respectively. No support for a nonzero Hyades reddening is offered by the new results. When the newly obtained reddenings for the Hyades, Coma, and M67 are compared with results from techniques given by Crawford and by users of the David Dunlap Observatory photometric system, no differences between the new and other reddenings are found which are larger than about 2σ. The author had previously found that the M67 main-sequence stars have about the same blanketing as that of Coma and less blanketing than the Hyades; this conclusion is essentially unchanged by the revised reddenings.
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of the empty space is found to be a useful descriptive quantity of the forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that log(log(L(r))) transformation is suitable for analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube then has been logarithm-transformed and the resulting values became the input of parameter estimation at each point (point of interest, POI). This way at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can be typically approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates both in horizontal and in vertical directions, of them the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest. In low relief areas horizontal similarity is more typical, in higher relief areas
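The fitting procedure described in this record can be sketched for a single POI. The lacunarity curve below is synthetic and its power-law parameters are invented for illustration; real L(r) values would come from the laser-scanned point clouds:

```python
import numpy as np

# Synthetic lacunarity function with power-law character
# (illustrative values only)
beta_true, c_true = 0.8, 0.5
r = np.geomspace(5.0, 50.0, 60)
L = 1.0 + c_true * r ** (-beta_true)

# log(log(L)) transform used for the spatial-pattern analysis
# (valid where L > 1)
y = np.log(np.log(L))

# In this regime the transformed curve is nearly linear in log(r),
# so a straight-line fit recovers the power-law exponent -beta
slope, intercept = np.polyfit(np.log(r), y, 1)
residuals = y - (slope * np.log(r) + intercept)
```

A parameter pair like (slope, intercept), estimated at every POI, would then serve as the input to the horizontal and vertical spatial analysis described above.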
Jo, Bum Seak; Myong, Jun Pyo; Rhee, Chin Kook; Yoon, Hyoung Kyu; Koo, Jung Wan; Kim, Hyoung Ryoul
2018-01-15
The present study aimed to update the prediction equations for spirometry and their lower limits of normal (LLN) by using the lambda, mu, sigma (LMS) method and to compare the outcomes with the values of previous spirometric reference equations. Spirometric data of 10,249 healthy non-smokers (8,776 females) were extracted from the fourth and fifth versions of the Korea National Health and Nutrition Examination Survey (KNHANES IV, 2007-2009; V, 2010-2012). Reference equations were derived using the LMS method which allows modeling skewness (lambda [L]), mean (mu [M]), and coefficient of variation (sigma [S]). The outcome equations were compared with previous reference values. Prediction equations were presented in the following form: predicted value = e^(a + b × ln(height) + c × ln(age) + M-spline). The new predicted values for spirometry and their LLN derived using the LMS method were shown to more accurately reflect transitions in pulmonary function in young adults than previous prediction equations derived using conventional regression analysis in 2013. There were partial discrepancies between the new reference values and the reference values from the Global Lung Function Initiative in 2012. The results should be interpreted with caution for young adults and elderly males, particularly in terms of the LLN for forced expiratory volume in one second/forced vital capacity in elderly males. Serial spirometry follow-up, together with correlations with other clinical findings, should be emphasized in evaluating the pulmonary function of individuals. Future studies are needed to improve the accuracy of reference data and to develop continuous reference values for spirometry across all ages. © 2018 The Korean Academy of Medical Sciences.
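The quoted functional form can be sketched directly in code. The coefficients below are hypothetical placeholders for illustration, not the published sex- and index-specific KNHANES-derived values:

```python
import math

def predicted_spirometry(height_cm, age_yr, a, b, c, m_spline=0.0):
    """Predicted value = e^(a + b*ln(height) + c*ln(age) + M-spline).

    The coefficients a, b, c and the age-specific M-spline term are
    sex- and index-specific in the LMS fits; the values used in the
    example below are hypothetical.
    """
    return math.exp(a + b * math.log(height_cm)
                    + c * math.log(age_yr) + m_spline)

# Hypothetical coefficients, for illustration only
fev1 = predicted_spirometry(170.0, 40.0, a=-10.0, b=2.2, c=-0.1)
```

With b > 0 and c < 0, predictions rise with height and decline with age, matching the qualitative behavior of spirometric reference equations.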
Li, Hong-Lei; Li, Xiao-Ming; Mándi, Attila; Antus, Sándor; Li, Xin; Zhang, Peng; Liu, Yang; Kurtán, Tibor; Wang, Bin-Gui
2017-10-06
Four new cladosporol derivatives, cladosporols F-I (1-4), the known cladosporol C (5), and its new epimer, cladosporol J (6), were isolated and identified from the marine algal-derived endophytic fungus Cladosporium cladosporioides EN-399. Their structures were determined by detailed interpretation of NMR and MS data, and the absolute configurations were established on the basis of TDDFT-ECD and OR calculations. The configurational assignment of cladosporols F (1) and G (2) showed that the previously reported absolute configuration of cladosporol A and all the related cladosporols need to be revised from (4'R) to (4'S). Compounds 1-6 showed antibacterial activity against Escherichia coli, Micrococcus luteus, and Vibrio harveyi with MIC values ranging from 4 to 128 μg/mL. Compound 3 showed significant cytotoxicity against A549, Huh7, and LM3 cell lines with IC 50 values of 5.0, 1.0, and 4.1 μM, respectively, and compound 5 showed activity against H446 cell line with IC 50 value of 4.0 μM.
Parameterized post-Newtonian cosmology
International Nuclear Information System (INIS)
Sanghai, Viraj A A; Clifton, Timothy
2017-01-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)
Parameterized post-Newtonian cosmology
Sanghai, Viraj A. A.; Clifton, Timothy
2017-03-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
Inheritance versus parameterization
DEFF Research Database (Denmark)
Ernst, Erik
2013-01-01
This position paper argues that inheritance and parameterization differ in their fundamental structure, even though they may emulate each other in many ways. Based on this, we claim that certain mechanisms, e.g., final classes, are in conflict with the nature of inheritance, and hence cause...
Gain scheduling using the Youla parameterization
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Stoustrup, Jakob
1999-01-01
Gain scheduling controllers are considered in this paper. The gain scheduling problem where the scheduling parameter vector cannot be measured directly, but needs to be estimated, is considered. An estimate of the scheduling vector has been derived by using the Youla parameterization. The use...... in connection with H_inf gain scheduling controllers....
Nass, A.; D'Amore, M.; Helbert, J.
2018-04-01
Based on recent discussions within Information Science and Management, an archiving structure and a reference level for derived, already published data significantly support the scientific community by enabling a steady growth of knowledge and understanding.
Parameterized examination in econometrics
Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi
2018-01-01
The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.
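The idea of dynamic, automatically assessed questions can be sketched as follows. The question template, parameter ranges, and rounding are invented here and do not reflect DisPeL's internal format:

```python
import random

def make_elasticity_question(seed=None):
    """Generate one dynamic question on point price elasticity of demand.

    Randomized coefficients yield different but equivalent questions,
    and the stored answer allows automatic assessment. The template
    and ranges are illustrative only.
    """
    rng = random.Random(seed)
    a = rng.randint(60, 100)   # demand intercept
    b = rng.randint(2, 4)      # demand slope
    p = rng.randint(5, 10)     # price at which elasticity is evaluated
    q = a - b * p              # quantity demanded (positive by construction)
    elasticity = -b * p / q    # point elasticity: (dQ/dP) * (P/Q)
    text = (f"Given the demand function Q = {a} - {b}P, compute the "
            f"point price elasticity of demand at P = {p}.")
    return text, round(elasticity, 3)

question, answer = make_elasticity_question(seed=1)
```

Seeding makes a student's generated variant reproducible for later review, while unseeded calls produce fresh variants of the same question type.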
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-05-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full-waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces a considerable trade-off and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted ϵd, ηd, and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters which the data are sensitive to, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of three parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ϵ1, which provides scattering mainly near
Parameterized Linear Longitudinal Airship Model
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
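The kind of model being described is a standard linear longitudinal state-space system. The sketch below shows its shape only; every numeric entry is an invented placeholder, not a value for any real airship:

```python
import numpy as np

# Linear longitudinal dynamics dx/dt = A x + B u with state
# x = [u, w, q, theta]: axial velocity, normal velocity, pitch rate,
# pitch angle. In the approach described above, the entries of A (the
# stability derivatives) would be estimated from geometric and
# aerodynamic data for a specific airship; these are placeholders.
A = np.array([
    [-0.02,   0.05,   0.0, -9.81],
    [-0.10,  -0.30,  30.0,  0.0],
    [ 0.001, -0.01,  -0.5,  0.0],
    [ 0.0,    0.0,    1.0,  0.0],
])
B = np.array([[0.1], [-1.0], [-0.2], [0.0]])  # single elevator-like input

# Local stability at this operating point is read off the eigenvalues
eigvals = np.linalg.eigvals(A)
stable = bool(np.all(eigvals.real < 0))
```

Specializing the generic model to a particular airship and operating point then amounts to filling in A and B, after which standard linear control design applies.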
Directory of Open Access Journals (Sweden)
Paulo Fernandes Saad
2012-12-01
Full Text Available CONTEXT: Non-derivative surgical techniques are the treatment of choice for the control of upper digestive tract hemorrhages after schistosomotic portal hypertension. However, recurrent hemorrhaging due to gastroesophagic varices is frequent. OBJECTIVE: To evaluate the outcome of treatment based on embolization of the left gastric vein to control the reoccurrence of hemorrhages caused by gastroesophagic varices in patients with schistosomiasis previously submitted to non-derivative surgery. METHODS: Rates of reoccurrence of hemorrhages and the qualitative and quantitative reduction of gastroesophagic varices in patients undergoing transhepatic embolization of the left gastric vein between December 1999 and January 2009 were studied based on medical charts and follow-up reports. RESULTS: Seven patients with a mean age of 39.3 years underwent percutaneous transhepatic embolization of the left gastric vein. The mean time between azygoportal disconnections employed in combination with splenectomy and the percutaneous approach was 8.4 ± 7.3 years, and the number of episodes of digestive hemorrhaging ranged from 1 to 7. No episodes of reoccurrence of hemorrhaging were found during a follow-up period which ranged from 6 months to 7 years. Endoscopic postembolization studies revealed reductions in gastroesophagic varices in all patients compared to preembolization endoscopy. CONCLUSIONS: Percutaneous transhepatic embolization of the left gastric vein in patients with schistosomiasis previously submitted to surgery resulted in a decrease in gastroesophagic varices and was shown to be effective in controlling hemorrhage reoccurrence.
Parameterized Concurrent Multi-Party Session Types
Directory of Open Access Journals (Sweden)
Minas Charalambides
2012-08-01
Full Text Available Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.
Spectral cumulus parameterization based on cloud-resolving model
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated the features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were traced to the modified parameterization of the entrainment rate, which suppressed the excessive increase of entrainment and thus of low-level clouds.
DEFF Research Database (Denmark)
Niemann, Hans Henrik
2003-01-01
A different aspect of using the parameterisation of all systems stabilised by a given controller, i.e. the dual Youla parameterisation, is considered. The relation between system change and the dual Youla parameter is derived in explicit form. A number of standard uncertain model descriptions...... are considered and the relation with the dual Youla parameter given. Some applications of the dual Youla parameterisation are considered in connection with the design of controllers and model/performance validation....
Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization
Oh, Juwon; Alkhalifah, Tariq Ali
2016-01-01
The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen with the best resolution obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuth variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry
Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization
Oh, Juwon
2016-09-15
The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen with the best resolution obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuth variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry
Sims, Lynn M; Garvey, Dennis; Ballantyne, Jack
2007-01-01
Single nucleotide polymorphisms on the Y chromosome (Y-SNPs) have been widely used in the study of human migration patterns and evolution. Potential forensic applications of Y-SNPs include their use in predicting the ethnogeographic origin of the donor of a crime scene sample, or exclusion of suspects of sexual assaults (the evidence of which often comprises male/female mixtures and may involve multiple perpetrators), paternity testing, and identification of non- and half-siblings. In this study, we used a population of 118 African- and 125 European-Americans to evaluate 12 previously phylogenetically undefined Y-SNPs for their ability to further differentiate individuals who belong to the major African (E3a)- and European (R1b3, I)-derived haplogroups. Ten of these markers define seven new sub-clades (equivalent to E3a7a, E3a8, E3a8a, E3a8a1, R1b3h, R1b3i, and R1b3i1 using the Y Chromosome Consortium nomenclature) within haplogroups E and R. Interestingly, during the course of this study we evaluated M222, a sub-R1b3 marker rarely used, and found that this sub-haplogroup in effect defines the Y-STR Irish Modal Haplotype (IMH). The new bi-allelic markers described here are expected to find application in human evolutionary studies and forensic genetics. (c) 2006 Wiley-Liss, Inc.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
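The Bayesian machinery can be illustrated with a deliberately tiny example: a Metropolis sampler constraining the exponent of a single power-law process-rate term against synthetic observations. All numbers here are invented; BOSS itself predicts full sets of prognostic moments and constrains many structural and parametric choices at once:

```python
import math
import random

random.seed(0)

# Synthetic "observations" of a power-law process rate, rate = 2 * M**b,
# where M stands in for a drop size distribution moment
b_true = 1.7
obs = [(m, 2.0 * m ** b_true + random.gauss(0.0, 0.05))
       for m in (0.5, 1.0, 1.5, 2.0)]

def log_likelihood(b):
    """Gaussian log-likelihood (up to a constant) with noise sigma = 0.05."""
    return -sum((y - 2.0 * m ** b) ** 2 for m, y in obs) / (2.0 * 0.05 ** 2)

# Metropolis sampler with a symmetric Gaussian proposal
b, samples = 1.0, []
for _ in range(5000):
    proposal = b + random.gauss(0.0, 0.05)
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(b):
        b = proposal
    samples.append(b)

posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
</imports>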
A subgrid parameterization scheme for precipitation
Directory of Open Access Journals (Sweden)
S. Turner
2012-04-01
Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II and fair weather cumulus (RICO and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
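The diagnostic step can be sketched generically: given a prescribed subgrid PDF of cloud water and a collection threshold, the rain fraction is the probability mass above the threshold. A Gaussian PDF is used below purely for illustration; the scheme described above prescribes its own PDF shape from the DYCOMS-II and RICO analyses:

```python
import math

def rain_fraction(qc_mean, qc_sigma, qc_crit):
    """Fraction of the grid box where cloud water q_c exceeds the
    threshold for droplet collection, assuming (for illustration)
    q_c ~ N(qc_mean, qc_sigma**2) within the grid box.
    """
    z = (qc_crit - qc_mean) / qc_sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# A grid-box mean below the threshold still yields a nonzero rain
# fraction, which is the point of the subgrid treatment
# (units are arbitrary here, e.g. g/kg)
f = rain_fraction(qc_mean=0.3, qc_sigma=0.15, qc_crit=0.5)
```

In a full scheme this fraction would then be combined with an assumption about the overlap of cloud and rain fractions, the concern raised in the abstract.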
Application of the dual Youla parameterization
DEFF Research Database (Denmark)
Niemann, Hans Henrik
1999-01-01
Different applications of the parameterization of all systems stabilized by a given controller, i.e. the dual Youla parameterization, are considered in this paper. It will be shown how the parameterization can be applied in connection with controller design, adaptive controllers, model validation...
Parameterization of mixing by secondary circulation in estuaries
Basdurak, N. B.; Huguenard, K. D.; Valle-Levinson, A.; Li, M.; Chant, R. J.
2017-07-01
Eddy viscosity parameterizations that depend on a gradient Richardson number Ri have been most pertinent to the open ocean. Parameterizations applicable to stratified coastal regions typically require implementation of a numerical model. Two novel parameterizations of the vertical eddy viscosity, based on Ri, are proposed here for coastal waters. One turbulence closure considers temporal changes in stratification and bottom stress and is coined the "regular fit." The alternative approach, named the "lateral fit," incorporates variability of lateral flows that are prevalent in estuaries. The two turbulence parameterization schemes are tested using data from a Self-Contained Autonomous Microstructure Profiler (SCAMP) and an Acoustic Doppler Current Profiler (ADCP) collected in the James River Estuary. The "regular fit" compares favorably to SCAMP-derived vertical eddy viscosity values but only at relatively small values of gradient Ri. On the other hand, the "lateral fit" succeeds at describing the lateral variability of eddy viscosity over a wide range of Ri. The modifications proposed to Ri-dependent eddy viscosity parameterizations allow applicability to stratified coastal regions, particularly in wide estuaries, without requiring implementation of a numerical model.
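As a generic illustration of a Ri-dependent closure (not the "regular fit" or "lateral fit" themselves, whose fitted forms are given in the paper), the classic Pacanowski-Philander shape shows how eddy viscosity decays as the gradient Richardson number grows:

```python
def gradient_richardson(n2, shear2):
    """Gradient Richardson number Ri = N^2 / S^2, from the squared
    buoyancy frequency and the squared vertical shear."""
    return n2 / shear2

def eddy_viscosity_pp(ri, nu0=5e-3, nu_b=1e-4, alpha=5.0, n=2):
    """Pacanowski-Philander (1981)-style eddy viscosity: a background
    value plus a term damped with increasing Ri. The coefficient
    values here are illustrative defaults, not the paper's fits."""
    return nu0 / (1.0 + alpha * ri) ** n + nu_b

# Weakly stratified, strongly sheared water column (illustrative numbers)
ri = gradient_richardson(n2=1e-4, shear2=4e-4)  # Ri = 0.25
nu = eddy_viscosity_pp(ri)                      # eddy viscosity, m^2/s
```

The "lateral fit" described above would additionally make such a dependence vary with the lateral flow structure rather than with Ri alone.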
A new parameterization for waveform inversion in acoustic orthorhombic media
Masmoudi, Nabil
2016-05-26
Orthorhombic anisotropic model inversion is extra challenging because of the multiple parameter nature of the inversion problem. The high number of parameters required to describe the medium exerts considerable trade-off and additional nonlinearity to a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations starting with the often used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns, but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters, is the azimuthal independency of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of
Cumulus parameterizations in chemical transport models
Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.
1995-12-01
Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parametrization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasting) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992). These convective schemes vary in the closure used to derive the mass fluxes, as well as the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the
Neutrosophic Parameterized Soft Relations and Their Applications
Directory of Open Access Journals (Sweden)
Irfan Deli
2014-06-01
The aim of this paper is to introduce the concept of relation on neutrosophic parameterized soft sets (NP-soft sets). We study some related properties and put forward some propositions on neutrosophic parameterized soft relations, with proofs and examples. The notions of symmetric, transitive, reflexive, and equivalence neutrosophic parameterized soft set relations are then established. Finally, a decision-making method on NP-soft sets is presented.
Infrared radiation parameterizations in numerical climate models
Chou, Ming-Dah; Kratz, David P.; Ridgway, William
1991-01-01
This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations
Cesana, Grégory; Suselj, Kay; Brient, Florent
2017-10-01
We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.
Parameterization Of Solar Radiation Using Neural Network
International Nuclear Information System (INIS)
Jiya, J. D.; Alfa, B.
2002-01-01
This paper presents a neural network technique for the parameterization of global solar radiation. Data from twenty-one stations are used to train the neural network, and data from ten other stations are used to validate the neural model. The neural network uses latitude, longitude, altitude, sunshine duration, and period number to parameterize solar radiation values. The testing data were withheld from training to demonstrate the performance of the neural network at unknown stations. The results indicate good agreement between the parameterized solar radiation values and the actual measured values.
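The paper does not specify the network architecture, so the following is only a minimal sketch of the idea: a small one-hidden-layer regression network mapping five station inputs to a radiation-like target. The data are synthetic stand-ins, not the station measurements used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for station data: inputs are latitude, longitude,
# altitude, sunshine duration, and period number (all scaled to [0, 1]);
# the target is a made-up smooth function, not real radiation data.
X = rng.random((300, 5))
y = 0.5 * X[:, 3] + 0.2 * np.sin(np.pi * X[:, 0]) + 0.1 * X[:, 2]

# One hidden layer of 8 tanh units trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, n = 0.1, len(y)
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    err = (H @ W2 + b2).ravel() - y           # prediction error
    dH = err[:, None] @ W2.T * (1.0 - H**2)   # backprop through tanh
    W2 -= lr * (H.T @ err[:, None]) / n; b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * (X.T @ dH) / n;          b1 -= lr * dH.mean(axis=0)

H = np.tanh(X @ W1 + b1)
mse = np.mean(((H @ W2 + b2).ravel() - y) ** 2)   # training fit
```

Validation on withheld stations, as in the paper, would simply evaluate the trained network on inputs never seen during training.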
Tuning controllers using the dual Youla parameterization
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Stoustrup, Jakob
2000-01-01
This paper describes the application of the Youla parameterization of all stabilizing controllers and the dual Youla parameterization of all systems stabilized by a given controller in connection with tuning of controllers. In the uncertain case, it is shown that the use of the Youla parameterization...
Parameterization of planetary wave breaking in the middle atmosphere
Garcia, Rolando R.
1991-01-01
A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.
CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW
International Nuclear Information System (INIS)
LIU, Y.; DAUM, P.H.; CHAI, S.K.; LIU, F.
2002-01-01
This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments
Parameterization of solar flare dose
International Nuclear Information System (INIS)
Lamarche, A.H.; Poston, J.W.
1996-01-01
A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP)
A scheme for parameterizing ice cloud water content in general circulation models
Heymsfield, Andrew J.; Donner, Leo J.
1989-01-01
A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given and the aircraft flights used to characterize the ice mass distribution in deep ice clouds is discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.
Parameterization of a fuzzy classifier for the diagnosis of an industrial process
International Nuclear Information System (INIS)
Toscano, R.; Lyonnet, P.
2002-01-01
The aim of this paper is to present a classifier based on a fuzzy inference system. For this classifier, we propose a parameterization method which is not necessarily based on iterative training. This approach can be seen as a pre-parameterization, which allows the determination of the rule base and the parameters of the membership functions. We also present a continuous and differentiable version of the previous classifier and suggest an iterative learning algorithm based on a gradient method. An example using the IRIS data set, a benchmark for classification problems, is presented to show the performance of this classifier. This classifier is then applied to the diagnosis of a DC motor, showing the utility of the method. In many cases, however, the total knowledge necessary for the synthesis of the fuzzy diagnosis system (FDS) is not directly available and must be extracted from an often considerable mass of information. For this reason, a general methodology for the design of a FDS is presented and illustrated on a non-linear plant
A new simple parameterization of daily clear-sky global solar radiation including horizon effects
International Nuclear Information System (INIS)
Lopez, Gabriel; Javier Batlles, F.; Tovar-Pescador, Joaquin
2007-01-01
Estimation of clear-sky global solar radiation is usually an important preliminary stage for calculating global solar radiation under all sky conditions. This is, for instance, a common procedure for deriving incoming solar radiation from remote sensing or with digital elevation models. In this work, we present a new model to calculate daily values of clear-sky global solar irradiation. Its main feature is a simple parameterization in terms of atmospheric temperature and relative humidity, Angstroem's turbidity coefficient, ground albedo, and site elevation, including a factor that accounts for horizon obstructions. This allows estimates to be obtained even when a free horizon is not present, as is the case at mountainous locations. Comparisons of calculated daily values with measured data show that the model provides accurate estimates using either daily or mean monthly values of the input parameters. The new model is also shown to improve daily estimates over those obtained with the clear-sky model of the European Solar Radiation Atlas and other accurate parameterized daily irradiation models. The introduction of Angstroem's turbidity coefficient and ground albedo should allow use of the increasing worldwide aerosol information available and easy, fast handling of sites affected by snow cover. In addition, the proposed model is intended to be a useful tool for selecting clear-sky conditions
Data-driven RBE parameterization for helium ion beams
Mairani, A; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $(\alpha/\beta)_{\mathrm{ph}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\mathrm{RBE}_{\alpha} = \alpha_{\mathrm{He}}/\alpha_{\mathrm{ph}}$ and $R_{\beta} = \beta_{\mathrm{He}}/\beta_{\mathrm{ph}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\mathrm{RBE}_{10}$) are compared with the experimental ones. Pearson's correlati...
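As a sketch of how such a parameterization is used, the code below evaluates an LQ-model RBE at a fixed survival level from the ratios $\mathrm{RBE}_\alpha$ and $R_\beta$: the dose reaching survival $s$ solves $\alpha D + \beta D^2 = -\ln s$, and the RBE is the ratio of photon to helium dose at equal survival. The numerical values are placeholders, not the published fits.

```python
import math

def dose_for_survival(alpha, beta, survival):
    """Dose D solving exp(-(alpha*D + beta*D^2)) = survival (LQ model)."""
    ln_s = -math.log(survival)  # = alpha*D + beta*D^2 > 0
    # positive root of beta*D^2 + alpha*D - ln_s = 0
    return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * ln_s)) / (2.0 * beta)

def rbe_at_survival(alpha_ph, beta_ph, rbe_alpha, r_beta, survival=0.10):
    """RBE_10-style ratio: photon dose over helium dose at equal survival.

    rbe_alpha = alpha_He/alpha_ph and r_beta = beta_He/beta_ph are the
    LET-dependent ratios the parameterization provides (values used below
    are illustrative placeholders).
    """
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph
    d_ph = dose_for_survival(alpha_ph, beta_ph, survival)
    d_he = dose_for_survival(alpha_he, beta_he, survival)
    return d_ph / d_he

rbe10 = rbe_at_survival(alpha_ph=0.15, beta_ph=0.05, rbe_alpha=1.5, r_beta=1.0)
```

With $\mathrm{RBE}_\alpha > 1$ and $R_\beta = 1$, the helium beam needs less dose for the same cell kill, so the computed RBE exceeds one.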
Parameterization and measurements of helical magnetic fields
International Nuclear Information System (INIS)
Fischer, W.; Okamura, M.
1997-01-01
Magnetic fields with helical symmetry can be parameterized using multipole coefficients $(a_n, b_n)$. We present a parameterization that gives the familiar multipole coefficients $(a_n, b_n)$ for straight magnets when the helical wavelength tends to infinity. To measure helical fields, all methods used for straight magnets can be employed. We show how to convert the results of those measurements to obtain the desired helical multipole coefficients $(a_n, b_n)$
Menangkal Serangan SQL Injection Dengan Parameterized Query
Directory of Open Access Journals (Sweden)
Yulianingsih Yulianingsih
2016-06-01
As the growth of information services increases, so does the level of security vulnerability of an information source. This paper presents an experimental study of database attacks via SQL injection. The attack is carried out through the authentication page, since this page is the first door of access and should therefore have adequate defenses. Experiments were then performed with the parameterized query method to obtain a solution to this problem. Keywords: information services, attacks, experiment, SQL injection, parameterized query.
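The defense the paper tests can be illustrated with a minimal sketch using Python's sqlite3 module; the table, credentials, and injection payload below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # String concatenation: attacker input becomes part of the SQL itself.
    query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
             % (username, password))
    return conn.execute(query).fetchone() is not None

def login_parameterized(username, password):
    # Placeholders: the driver sends values separately from the SQL text,
    # so injected quotes are treated as data, never as syntax.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

payload = "' OR '1'='1"
# login_vulnerable("x", payload) bypasses authentication, because the query
# becomes ... password = '' OR '1'='1', which is always true;
# login_parameterized("x", payload) correctly rejects the same input.
```

The parameterized version never interpolates user input into the SQL string, which is exactly the mitigation evaluated in the paper.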
Parameterizing Coefficients of a POD-Based Dynamical System
Kalb, Virginia L.
2010-01-01
Numerical continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
Directory of Open Access Journals (Sweden)
A. Endalamaw
2017-09-01
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes, since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented by the sub-grid parameterization method than by the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins.
Directory of Open Access Journals (Sweden)
D. Barahona
2009-01-01
We present a parameterization of cirrus cloud formation that computes the ice crystal number and size distribution in the presence of homogeneous and heterogeneous freezing. The parameterization is very simple to apply and is derived from the analytical solution of the cloud parcel equations, assuming that the ice nuclei population is monodisperse and chemically homogeneous. In addition to the ice distribution, an analytical expression is provided for the limiting ice nuclei number concentration that suppresses ice formation from homogeneous freezing. The parameterization is evaluated against a detailed numerical parcel model, and reproduces the numerical simulations over a wide range of conditions with an average error of 6±33%. The parameterization also compares favorably against other formulations that require some form of numerical integration.
Robust parameterization of elastic and absorptive electron atomic scattering factors
International Nuclear Information System (INIS)
Peng, L.M.; Ren, G.; Dudarev, S.L.; Whelan, M.J.
1996-01-01
A robust algorithm and computer program have been developed for the parameterization of elastic and absorptive electron atomic scattering factors. The algorithm is based on a combined modified simulated-annealing and least-squares method, and the computer program works well for fitting both elastic and absorptive atomic scattering factors with five Gaussians. As an application of this program, the elastic electron atomic scattering factors have been parameterized for all neutral atoms and for s up to 6 Å⁻¹. Error analysis shows that the present results are considerably more accurate than the previous analytical fits in terms of the mean square value of the deviation between the numerical and fitted scattering factors. Parameterization of absorptive atomic scattering factors has been made for 17 important materials with the zinc blende structure over the temperature range 1 to 1000 K, where appropriate, and for temperature ranges for which accurate Debye-Waller factors are available. For other materials, the parameterization of the absorptive electron atomic scattering factors can be made using the program by supplying the atomic number of the element, the Debye-Waller factor and the acceleration voltage. For ions, or when more accurate numerical results for neutral atoms are available, the program can read in the numerical values of the elastic scattering factors and return the parameters for both the elastic and absorptive scattering factors. The computer routines developed have been tested both on computer workstations and desktop PC computers, and will be made freely available via electronic mail or on floppy disk upon request. (orig.)
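A minimal sketch of the five-Gaussian fitting step, using plain nonlinear least squares via scipy rather than the paper's combined simulated-annealing/least-squares algorithm, and a synthetic target in place of tabulated scattering factors (all parameter values below are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def five_gaussians(s, *p):
    """f(s) = sum_i a_i * exp(-b_i * s^2), the standard analytic form
    used for electron atomic scattering factors (5 Gaussians, 10 params)."""
    a, b = np.array(p[:5]), np.array(p[5:])
    return np.sum(a[:, None] * np.exp(-b[:, None] * s[None, :] ** 2), axis=0)

# Synthetic "numerical" scattering factor on s in [0, 6] per angstrom
# (real fits use tabulated f(s) for each element).
s = np.linspace(0.0, 6.0, 121)
true_p = [1.2, 0.8, 0.5, 0.3, 0.1, 0.3, 1.0, 3.0, 8.0, 20.0]
f_num = five_gaussians(s, *true_p)

p0 = [1.0] * 5 + [0.5, 1.0, 2.0, 5.0, 10.0]   # rough starting guess
p_fit, _ = curve_fit(five_gaussians, s, f_num, p0=p0, maxfev=20000)
rms = np.sqrt(np.mean((five_gaussians(s, *p_fit) - f_num) ** 2))
```

The paper's simulated-annealing stage addresses exactly the weakness of such a plain fit: sensitivity of the Gaussian parameters to the starting guess.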
International Nuclear Information System (INIS)
Yan, Y.T.
1991-01-01
The transverse motion of charged particles in a circular accelerator can be well represented by a one-turn high-order Taylor map. For particles without energy deviation, the one-turn Taylor map is a 4-dimensional polynomial map of four variables. The four variables are the transverse canonical coordinates and their conjugate momenta. To include energy-deviation (off-momentum) effects, the map has to be parameterized with a smallness factor representing the off-momentum, and so the Taylor map becomes a 4-dimensional polynomial map of five variables. It is for this type of parameterized Taylor map that a method is presented for converting it into a parameterized Dragt-Finn factorization map. A parameterized nonlinear normal form and parameterized kick factorization can thus be obtained with suitable modification of the existing technique
A parameterization method and application in breast tomosynthesis dosimetry
Energy Technology Data Exchange (ETDEWEB)
Li, Xinhua; Zhang, Da; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States)
2013-09-15
Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: MGD conversion factors are usually listed in lookup tables against factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA protocol and in the UK, European, and IAEA dosimetry protocols.
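The core of such a fit can be sketched as follows: build a polynomial design matrix over the table's axes and solve the least-squares problem through the SVD. The table values and the 6-term basis below are synthetic illustrations, not the published conversion factors.

```python
import numpy as np

# Hypothetical tabulated MGD conversion factors over breast thickness (cm)
# and glandularity (fraction) -- stand-ins for a published lookup table.
thickness = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
glandularity = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
T, G = np.meshgrid(thickness, glandularity, indexing="ij")
table = 0.40 - 0.03 * T + 0.002 * T**2 - 0.08 * G + 0.01 * T * G  # synthetic

# Design matrix for a 2-D polynomial with 6 terms: 1, T, T^2, G, T*G, G^2.
t, g = T.ravel(), G.ravel()
A = np.column_stack([np.ones_like(t), t, t**2, g, t * g, g**2])

# Least-squares fit via the SVD: coef = V @ diag(1/s) @ U^T @ y.
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
coef = Vt.T @ ((U.T @ table.ravel()) / sv)

max_err = np.max(np.abs(A @ coef - table.ravel()))
```

The "variable substitution" mentioned in the abstract corresponds to replacing T or G with transformed variables before building the design matrix, which can markedly reduce the residuals for a fixed number of terms.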
A simple parameterization for the rising velocity of bubbles in a liquid pool
Energy Technology Data Exchange (ETDEWEB)
Park, Sung Hoon [Dept. of Environmental Engineering, Sunchon National University, Suncheon (Korea, Republic of); Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul [FNC Technology, Co., Ltd., Yongin (Korea, Republic of)
2017-06-15
The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, which is very similar to Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the E_o–R_e plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth.
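The article's explicit formula is not reproduced in the abstract, so the sketch below only illustrates the surrounding machinery: computing the Eötvös and Reynolds numbers from the volume-equivalent diameter and classifying the shape regime with rough, Grace-diagram-style thresholds (the thresholds and fluid properties are illustrative, not the article's boundaries).

```python
def eotvos(d_e, rho_l=998.0, rho_g=1.2, sigma=0.072, g=9.81):
    """Eotvos number Eo = g*(rho_l - rho_g)*d_e^2 / sigma (air/water defaults)."""
    return g * (rho_l - rho_g) * d_e**2 / sigma

def reynolds(d_e, v, rho_l=998.0, mu_l=1.0e-3):
    """Bubble Reynolds number Re = rho_l * v * d_e / mu_l."""
    return rho_l * v * d_e / mu_l

def shape_regime(eo, re):
    """Rough classification in the E_o-R_e plane; thresholds are
    illustrative approximations of Grace's diagram."""
    if eo < 0.25:
        return "spherical"
    if eo > 40.0 and re > 1.2:
        return "spherical-cap"
    return "ellipsoidal"

# A 2 mm air bubble rising at roughly 0.25 m/s in water:
eo = eotvos(2e-3)
re = reynolds(2e-3, 0.25)
regime = shape_regime(eo, re)
```

A single explicit velocity formula v(d_e), as the article proposes, removes the need to know the regime in advance: the diagram above can then be generated from the formula itself rather than consulted beforehand.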
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes the absorption by water vapor, O3, O2, CO2, clouds, and aerosols and the scattering by clouds, aerosols, and gases. Depending upon the nature of absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
Active Subspaces of Airfoil Shape Parameterizations
Grey, Zachary J.; Constantine, Paul G.
2018-05-01
Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
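The active-subspace computation itself can be sketched under the standard gradient-based formulation: eigenvectors of the average outer product of gradients with the dominant eigenvalues span the active subspace. The toy quantity of interest below is illustrative, not a transonic lift or drag model.

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate an active subspace from gradient samples.

    grads : (N, m) array of gradients of the quantity of interest at N
            sampled designs. Eigenvectors of C = E[grad grad^T] with the
            largest eigenvalues span the active subspace.
    Returns the sorted eigenvalues and the first k eigenvectors.
    """
    C = grads.T @ grads / grads.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)       # ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Toy quantity of interest f(x) = (w^T x)^2: every gradient is parallel
# to w, so a one-dimensional active subspace is recovered exactly.
rng = np.random.default_rng(0)
w = np.array([3.0, 1.0, 0.0])
X = rng.standard_normal((200, 3))
grads = (2.0 * (X @ w))[:, None] * w[None, :]  # grad of (w^T x)^2 at each x
eigvals, W1 = active_subspace(grads, k=1)
```

For lift or drag, the gradients would come from an adjoint CFD solver evaluated at sampled PARSEC or CST parameter vectors; a sharp drop in the eigenvalue spectrum signals a usable low-dimensional approximation.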
Model parameterization as method for data analysis in dendroecology
Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita
2017-04-01
The usefulness of process-based models in ecological studies is not in question; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulating tree-ring growth from climate provides valuable information on the response of tree-ring growth to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated day length, and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain, and Tunisia. However, such models are mostly used one-sidedly, to better understand different tree growth processes, in contrast to statistical methods of analysis (e.g., Generalized Linear Models, Mixed Models, Structural Equations), which can be used for reconstruction and forecast. Usually the models are used either for testing new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the limiting climatic growth factors. Precise parameterization with the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g., day of growing season onset, length of the season, minimum/maximum temperatures for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Invariant box-parameterization of neutrino oscillations
International Nuclear Information System (INIS)
Weiler, Thomas J.; Wagner, DJ
1998-01-01
The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged out. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield the magnitudes of mixing-matrix elements.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities which yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
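In bidomain and monodomain models, conduction velocity scales approximately with the square root of the bulk conductivity, so an iterative velocity-matching procedure of the kind described can be sketched as below. This is a simplified illustration, not the authors' actual algorithm; `measure_velocity` stands in for running a tissue simulation and measuring the resulting velocity.

```python
import math

def tune_conductivity(target_v, sigma0, measure_velocity, tol=1e-3, max_iter=20):
    """Iteratively adjust a bulk conductivity until the simulated
    conduction velocity matches target_v, exploiting v ~ sqrt(sigma)."""
    sigma = sigma0
    for _ in range(max_iter):
        v = measure_velocity(sigma)
        if abs(v - target_v) / target_v < tol:
            break
        sigma *= (target_v / v) ** 2   # v scales as sqrt(sigma)
    return sigma

# Toy stand-in "simulation": v = 0.5 * sqrt(sigma) (hypothetical units)
sigma = tune_conductivity(0.6, 0.1, lambda s: 0.5 * math.sqrt(s))
```

Because of the square-root scaling, the update converges in very few iterations when the toy velocity model is exact.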
Directory of Open Access Journals (Sweden)
D. Barahona
2009-08-01
This study presents a comprehensive ice cloud formation parameterization that computes the ice crystal number, size distribution, and maximum supersaturation from precursor aerosol and ice nuclei. The parameterization provides an analytical solution of the cloud parcel model equations and accounts for the competition effects between homogeneous and heterogeneous freezing, and between heterogeneous freezing in different modes. The diversity of heterogeneous nuclei is described through a nucleation spectrum function which is allowed to follow any form (i.e., derived from classical nucleation theory or from observations). The parameterization reproduces the predictions of a detailed numerical parcel model over a wide range of conditions and several expressions for the nucleation spectrum. The average error in ice crystal number concentration was −2.0±8.5% for conditions of pure heterogeneous freezing, and 4.7±21% when both homogeneous and heterogeneous freezing were active. The formulation presented is fast and free from requirements of numerical integration.
Directory of Open Access Journals (Sweden)
N. Kuba
2006-01-01
First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only CCN input these parameterizations require is a few values of the CCN spectrum (as given by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities characterizing the CCN spectrum, such as C and k in the equation N = CS^k, or its breadth, total number, and median radius. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforementioned hybrid microphysical model. The newly developed parameterizations will save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
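The conventional CCN-spectrum form referred to above, N = CS^k (the Twomey power law), gives the number of CCN active at supersaturation S. A minimal sketch with illustrative values of C and k (not values from the paper):

```python
def ccn_activated(S, C=100.0, k=0.7):
    """Twomey power-law CCN spectrum: number concentration of CCN
    (per cm^3) active at supersaturation S (in percent), N = C * S**k.
    C and k here are illustrative placeholders."""
    return C * S ** k

# Doubling the supersaturation raises N by a factor 2**k, not 2
n_half = ccn_activated(0.5)
n_one = ccn_activated(1.0)
```

The sub-linear exponent k is why droplet number responds weakly to updraft-driven increases in peak supersaturation.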
Parameterized and resolved Southern Ocean eddy compensation
Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman
2018-04-01
The ability to parameterize Southern Ocean eddy effects in a forced coarse-resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model at 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation, while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high-resolution overturning decreases by 20%, but increases by 100% in the coarse-resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high-resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.
Parameterization of ion channeling half-angles and minimum yields
Energy Technology Data Exchange (ETDEWEB)
Doyle, Barney L.
2016-03-15
An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc, and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be looked up and read manually were programmed into Excel as handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program thus offers an extremely convenient way to calculate axial and planar half-angles and minimum yields, as well as the effects of amorphous overlayers on both. The program can calculate these half-angles and minimum yields for 〈u v w〉 axes and (h k l) planes up to (5 5 5). The program is open source and available at http://www.sandia.gov/pcnsc/departments/iba/ibatable.html.
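For axial channeling, tabulated half-angles are conventionally anchored to the Lindhard characteristic angle ψ₁ = sqrt(2 Z₁ Z₂ e² / (E d)). A minimal sketch in practical units (e² = 14.4 eV·Å); this is the standard textbook estimate, not the fitted functions of the Excel program itself:

```python
import math

def lindhard_psi1_deg(Z1, Z2, E_eV, d_angstrom):
    """Lindhard characteristic axial channeling angle, in degrees.
    Z1, Z2: atomic numbers of the ion and the lattice atoms;
    E_eV: ion energy in eV; d_angstrom: atomic spacing along the row.
    Uses e^2 = 14.4 eV*Angstrom (Gaussian units)."""
    e2 = 14.4  # eV * Angstrom
    psi1_rad = math.sqrt(2.0 * Z1 * Z2 * e2 / (E_eV * d_angstrom))
    return math.degrees(psi1_rad)

# 2 MeV He (Z1 = 2) along a Si <110> row (Z2 = 14, d ~ 3.84 Angstrom)
psi1 = lindhard_psi1_deg(2, 14, 2.0e6, 3.84)
```

The characteristic angle falls off as E^(-1/2), which is why channeling measurements at higher beam energies require tighter angular alignment.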
Laparoscopy After Previous Laparotomy
Directory of Open Access Journals (Sweden)
Zulfo Godinjak
2006-11-01
Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section, or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we converted to laparotomy following diagnostic laparoscopy. In all patients, the Veres needle and trocar were inserted in the umbilical region, i.e., the closed laparoscopy technique. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.
Energy Technology Data Exchange (ETDEWEB)
Liou, Kuo-Nan [Univ. of California, Los Angeles, CA (United States)
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions; and (2) a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding of and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and the coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon-tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer programs readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components were carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transitions, and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the
Application of a planetary wave breaking parameterization to stratospheric circulation statistics
Randel, William J.; Garcia, Rolando R.
1994-01-01
The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data, a planetary wave breaking criterion (based on the ratio of the eddy to zonal-mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
Parameterization of ion-induced nucleation rates based on ambient observations
Directory of Open Access Journals (Sweden)
T. Nieminen
2011-04-01
Atmospheric ions participate in the formation of new atmospheric aerosol particles, yet their exact role in this process has remained unclear. Here we derive a new simple parameterization for ion-induced nucleation or, more precisely, for the formation rate of charged 2-nm particles. The parameterization is semi-empirical in the sense that it is based on comprehensive results of one-year-long atmospheric cluster and particle measurements in the size range ~1–42 nm within the EUCAARI (European Integrated project on Aerosol Cloud Climate and Air Quality interactions) project. Data from 12 field sites across Europe, measured with different types of air ion and cluster mobility spectrometers, were used in our analysis, with more in-depth analysis made using data from four stations with concomitant sulphuric acid measurements. The parameterization is given in two slightly different forms: a more accurate one that requires information on sulphuric acid and nucleating organic vapour concentrations, and a simpler one in which this information is replaced with the global radiation intensity. These new parameterizations are applicable to all large-scale atmospheric models containing size-resolved aerosol microphysics together with a scheme to calculate concentrations of sulphuric acid, condensing organic vapours, and cluster ions.
Evaluating parameterizations of aerodynamic resistance to heat transfer using field measurements
Directory of Open Access Journals (Sweden)
Shaomin Liu
2007-01-01
Parameterizations of aerodynamic resistance to heat and water transfer have a significant impact on the accuracy of models of land–atmosphere interactions and of surface fluxes estimated from spectro-radiometric data collected by aircraft and satellites. We used measurements from an eddy correlation system to derive the aerodynamic resistance to heat transfer over a bare soil surface as well as over a maize canopy. Diurnal variations of aerodynamic resistance were analyzed. The results showed that the diurnal variation of aerodynamic resistance during daytime (07:00 h–18:00 h) was significant for both the bare soil surface and the maize canopy, although the range of variation was limited. Based on the measurements made by the eddy correlation system, a comprehensive evaluation of eight popular parameterization schemes of aerodynamic resistance was carried out. The roughness length for heat transfer is a crucial parameter in the estimation of aerodynamic resistance to heat transfer and can neither be taken as a constant nor be neglected. Compared with the measurements, the parameterizations by Choudhury et al. (1986), Viney (1991), and Yang et al. (2001), and the forms of Verma et al. (1976) and Mahrt and Ek (1984) modified by inclusion of the roughness length for heat transfer, gave good agreement with the measurements, while the parameterizations by Hatfield et al. (1983) and Xie (1988) showed larger errors even though the roughness length for heat transfer was taken into account.
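Under neutral stratification, most of the schemes compared above reduce to the log-profile form with separate roughness lengths for momentum (z0m) and heat (z0h). A minimal sketch of that neutral-limit expression (stability corrections, on which the cited schemes differ, are omitted; the example heights are illustrative):

```python
import math

KARMAN = 0.4  # von Karman constant

def aerodynamic_resistance_neutral(u, z, d, z0m, z0h):
    """Neutral-stratification aerodynamic resistance to heat transfer
    (s m^-1). u: wind speed (m s^-1) at measurement height z (m);
    d: displacement height (m); z0m, z0h: roughness lengths for
    momentum and heat (m)."""
    return (math.log((z - d) / z0m) * math.log((z - d) / z0h)) / (KARMAN ** 2 * u)

# Example: a maize canopy, u = 3 m/s measured at z = 10 m
r_ah = aerodynamic_resistance_neutral(u=3.0, z=10.0, d=1.4, z0m=0.2, z0h=0.02)
```

Shrinking z0h relative to z0m raises the resistance, which is why treating the two roughness lengths as equal (or ignoring z0h) biases the estimated heat flux.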
Climate impacts of parameterized Nordic Sea overflows
Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.
2010-11-01
A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North
A parameterization of cloud droplet nucleation
International Nuclear Information System (INIS)
Ghan, S.J.; Chuang, C.; Penner, J.E.
1993-01-01
Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parameterization that offers certain advantages over the popular Twomey (1959) parameterization.
Parameterization of MARVELS Spectra Using Deep Learning
Gilda, Sankalp; Ge, Jian; MARVELS
2018-01-01
Like many large-scale surveys, the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) was designed to operate at a moderate spectral resolution (~12,000) for efficiency in observing large samples, which makes stellar parameterization difficult due to the high degree of blending of spectral features. Two extant solutions to this issue are spectral synthesis and spectral indices (Ghezzi et al. 2014). While the former is a powerful and tested technique, it can often yield strongly coupled atmospheric parameters and often requires high spectral resolution (Valenti & Piskunov 1996). The latter, though a promising technique utilizing measurements of equivalent widths of spectral indices, has only been employed for FGK dwarfs and sub-giants, not for red-giant-branch stars, which constitute ~30% of MARVELS targets. In this work, we tackle this problem using a convolutional neural network (CNN). In particular, we train a one-dimensional CNN on appropriately processed PHOENIX synthetic spectra using supervised training to automatically distinguish the features relevant for the determination of each of the three atmospheric parameters – T_eff, log(g), and [Fe/H] – and use the knowledge thus gained by the network to parameterize 849 MARVELS giants. When tested on the synthetic spectra themselves, our estimates of the parameters were consistent to within 11 K, 0.02 dex, and 0.02 dex (in terms of mean absolute errors), respectively. For MARVELS dwarfs, the accuracies are 80 K, 0.16 dex, and 0.10 dex, respectively.
Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.
Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong
2017-12-01
Land surface albedo is a significant parameter in maintaining the surface energy balance. Parameterizing bare soil surface albedo is also important for developing land surface process models that accurately reflect the diurnal variation characteristics and the mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface, and to investigate the relationships between the land surface solar spectral radiation albedo, the solar elevation angle, and soil moisture. Based on simultaneous measurements of both solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme depicts the diurnal variation characteristics of bare soil surface albedo more accurately than previous schemes. The solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful for developing land surface process models, weather models, and climate models.
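A two-factor scheme of the kind proposed typically writes the albedo as a moisture-dependent base value modulated by a solar-elevation factor that raises albedo at low sun angles. A hypothetical illustration of that functional form (the functions and all coefficients here are placeholders, not the authors' fitted scheme):

```python
import math

def bare_soil_albedo(sun_elev_deg, soil_moisture, a_dry=0.35, a_wet=0.15, c=0.1):
    """Hypothetical two-factor bare-soil albedo: an exponential
    moisture dependence between wet and dry limits, multiplied by a
    factor that increases albedo at low solar elevation."""
    moisture_term = a_wet + (a_dry - a_wet) * math.exp(-5.0 * soil_moisture)
    elevation_term = 1.0 + c * (1.0 - math.sin(math.radians(sun_elev_deg)))
    return moisture_term * elevation_term

# Diurnal asymmetry: same dry soil, low sun vs. high sun
noon = bare_soil_albedo(sun_elev_deg=70.0, soil_moisture=0.05)
morning = bare_soil_albedo(sun_elev_deg=10.0, soil_moisture=0.05)
```

The product form reproduces the observed U-shaped diurnal albedo curve over dry soil, which a moisture-only scheme cannot.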
A simple parameterization of aerosol emissions in RAMS
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered much attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided estimates of the sensitivity of orographic snowfall to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense, which was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of it organic material), the representation of this process is highly complex and highly expensive within a numerical
Ozonolysis of α-pinene: parameterization of secondary organic aerosol mass fraction
Directory of Open Access Journals (Sweden)
R. K. Pathak
2007-07-01
Existing parameterizations tend to underpredict the α-pinene aerosol mass fraction (AMF, or yield) by a factor of 2–5 at low organic aerosol concentrations (<5 µg m^-3). A wide range of smog chamber results obtained under various conditions (low/high NOx, presence/absence of UV radiation, dry/humid conditions, and temperatures ranging from 15–40°C), collected by various research teams during the last decade, are used to derive new parameterizations of SOA formation from α-pinene ozonolysis. Parameterizations are developed by fitting experimental data to a basis set of saturation concentrations (from 10^-2 to 10^4 µg m^-3) using an absorptive equilibrium partitioning model. Separate parameterizations for α-pinene SOA mass fractions are developed for: (1) low NOx, dark, and dry conditions; (2) low NOx, UV, and dry conditions; (3) low NOx, dark, and high-RH conditions; (4) high NOx, dark, and dry conditions; (5) high NOx, UV, and dry conditions. According to the proposed parameterizations, the α-pinene SOA mass fractions in an atmosphere with 5 µg m^-3 of organic aerosol range from 0.032 to 0.1 for reacted α-pinene concentrations in the 1 ppt to 5 ppb range.
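The basis-set fit described above uses absorptive equilibrium partitioning: each surrogate product i with mass yield α_i and saturation concentration C*_i partitions into the aerosol phase with fraction ξ_i = (1 + C*_i/C_OA)^-1, so the aerosol mass fraction is AMF = Σ α_i ξ_i. A minimal sketch of that calculation (the α_i values shown are placeholders, not the fitted coefficients of the paper):

```python
def aerosol_mass_fraction(alphas, c_stars, c_oa):
    """SOA mass fraction from a saturation-concentration basis set.
    alphas: mass yields for each saturation concentration in c_stars
    (ug m^-3); c_oa: absorbing organic aerosol concentration (ug m^-3)."""
    return sum(a / (1.0 + cs / c_oa) for a, cs in zip(alphas, c_stars))

# Hypothetical 4-bin fit over the basis C* = 1, 10, 100, 1000 ug m^-3
alphas = [0.03, 0.06, 0.15, 0.30]
amf_low = aerosol_mass_fraction(alphas, [1, 10, 100, 1000], c_oa=5.0)
amf_high = aerosol_mass_fraction(alphas, [1, 10, 100, 1000], c_oa=50.0)
```

The yield rises with the ambient organic aerosol concentration because more of the semivolatile products condense, which is exactly the low-concentration regime where single-product fits underpredict.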
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
Parameterizing Size Distribution in Ice Clouds
Energy Technology Data Exchange (ETDEWEB)
DeSlover, Daniel; Mitchell, David L.
2009-09-25
An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m^-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed-phase clouds. Mixed-phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their life cycle. Colder high clouds between -20 and -36°C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their life cycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSDs in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice
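For context, the effective size quoted above is a ratio of PSD moments; for a gamma PSD n(D) = N0 D^μ exp(-λD), the spherical-particle effective diameter reduces to (μ+3)/λ, so adding spurious small crystals shifts it directly. A minimal illustration using the generic third-to-second moment ratio (not the ice-specific definition based on mass and projected area):

```python
from math import exp

def effective_diameter(diams, conc):
    """Effective diameter as the third-to-second moment ratio of a
    discretized size distribution (diams in micrometers, conc in
    number per unit-width size bin)."""
    m3 = sum(n * d ** 3 for d, n in zip(diams, conc))
    m2 = sum(n * d ** 2 for d, n in zip(diams, conc))
    return m3 / m2

# Gamma PSD with mu = 2, lambda = 0.05 um^-1 -> De = (mu+3)/lambda = 100 um
diams = [float(i) for i in range(1, 2001)]         # 1..2000 um, 1-um bins
conc = [d ** 2 * exp(-0.05 * d) for d in diams]    # n(D) ~ D^2 exp(-0.05 D)
de = effective_diameter(diams, conc)
```

Inflating the D < 100 µm bins (as probe shattering does) pulls the moment ratio downward, mimicking the underestimated effective sizes described in the abstract.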
International Nuclear Information System (INIS)
Novoselov, G.M.; Litvinskij, L.L.
2001-01-01
Different cross-section parameterization methods in the low-energy region are considered. It is shown that the value of the potential scattering parameter derived from analysis of experimental cross-section data depends essentially on the method used to take account of the nearest resonances. A formula describing this dependence is obtained. The results are verified by numerical model calculations. (author)
Parameterized combinatorial geometry modeling in Moritz
International Nuclear Information System (INIS)
Van Riper, K.A.
2005-01-01
We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved either with the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes to room dimensions, shielding layer widths, and cell compositions can be made quickly by changing a few numbers, without requiring knowledge of the input syntax for the transport code or the tedious and error-prone work of recalculating many surface or solid body coefficients. (author)
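The variable mechanism described in this record can be sketched as follows. The variable names, their values, and the use of Python expression evaluation are illustrative assumptions, not Moritz internals:

```python
import re

# Hypothetical named variables for a PET-clinic shielding model: each value is
# a constant or an arithmetic expression over other variables (the names and
# numbers are invented for illustration, not taken from Moritz).
variables = {
    "room_w": "600",                    # room width, cm
    "wall_t": "50",                     # shielding layer thickness, cm
    "outer_w": "room_w + 2 * wall_t",   # derived outer width
}

def evaluate(name, env):
    """Resolve a variable reference to a number, expanding nested references."""
    expr = env[name]
    scope = {k: evaluate(k, env) for k in env if re.search(rf"\b{k}\b", expr)}
    return eval(expr, {"__builtins__": {}}, scope)

# Changing wall_t alone would update every coefficient that references it,
# which is the point of parameterizing the geometry.
print(evaluate("outer_w", variables))   # 700
```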
Phenomenology of convection-parameterization closure
Directory of Open Access Journals (Sweden)
J.-I. Yano
2013-04-01
Full Text Available Closure is the problem of defining the convective intensity in a given parameterization. In spite of many years of effort and progress, it is still considered an overall unresolved problem. The present article reviews this problem from phenomenological perspectives. The physical variables that may contribute to defining the convective intensity are listed, and their statistical significances, as identified by observational data analyses, are reviewed. A possibility is discussed for identifying a correct closure hypothesis by performing a linear stability analysis of tropical convectively coupled waves under various different closure hypotheses. Individual theoretical issues are considered from several different perspectives. The review also emphasizes that the dominant physical factors controlling convection differ between the tropics and extra-tropics, as well as between oceanic and land areas. Both observational and theoretical analyses, often focused on the tropics, do not necessarily lead to conclusions consistent with our operational experiences focused on midlatitudes. Though we emphasize the importance of the interplay between these observational, theoretical and operational perspectives, we also face challenges in establishing a solid research framework that is universally applicable. An energy cycle framework is suggested as such a candidate.
Stellar Atmospheric Parameterization Based on Deep Learning
Pan, Ru-yang; Li, Xiang-ru
2017-07-01
Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with node numbers 3821-500-100-50-1. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretic spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metallic abundance [Fe/H]. The results show that the stacked-autoencoder deep neural network achieves better estimation accuracy. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s-2)), and 0.1294 dex for [Fe/H], respectively; on the theoretic spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s-2)), and 0.0121 dex for [Fe/H], respectively.
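A minimal sketch of a network with the quoted 3821-500-100-50-1 layout, assuming plain feed-forward layers with sigmoid hidden activations and a linear output; the stacked-autoencoder pretraining used in the paper is omitted, and the initialization is a generic choice:

```python
import numpy as np

# Layer widths quoted in the abstract: 3821 spectrum pixels in, 1 parameter out.
LAYER_SIZES = [3821, 500, 100, 50, 1]

def init_network(rng, sizes=LAYER_SIZES):
    """Random weights and zero biases for each layer (illustrative init)."""
    return [(rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in),
             np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: sigmoid hidden layers, linear output layer."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:            # hidden layers only
            x = 1.0 / (1.0 + np.exp(-x))   # sigmoid activation
    return x

rng = np.random.default_rng(0)
params = init_network(rng)
spectra = rng.standard_normal((4, 3821))   # a batch of 4 mock spectra
pred = forward(params, spectra)            # one parameter (e.g. Teff) each
print(pred.shape)                          # (4, 1)
```

In practice one such network (trained per parameter) maps a normalized spectrum to Teff, lg g, or [Fe/H].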
Mireles James, J. D.; Murray, Maxime
2017-12-01
This paper develops a Chebyshev-Taylor spectral method for studying stable/unstable manifolds attached to periodic solutions of differential equations. The work exploits the parameterization method — a general functional analytic framework for studying invariant manifolds. Useful features of the parameterization method include the fact that it can follow folds in the embedding, recovers the dynamics on the manifold through a simple conjugacy, and admits a natural notion of a posteriori error analysis. Our approach begins by deriving a recursive system of linear differential equations describing the Taylor coefficients of the invariant manifold. We represent periodic solutions of these equations as solutions of coupled systems of boundary value problems. We discuss the implementation and performance of the method for the Lorenz system, and for the planar circular restricted three- and four-body problems. We also illustrate the use of the method as a tool for computing cycle-to-cycle connecting orbits.
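To illustrate the style of computation (for an equilibrium rather than a periodic orbit, and only to second order), consider a hypothetical planar field u' = l·u, v' = m·v + u² with l < 0 < m: seeking the stable manifold as W(t) = (t, a·t²), the invariance equation f(W(t)) = DW(t)·(l·t) reduces to the linear homological equation m·a + 1 = 2·l·a for the Taylor coefficient a. Higher-order coefficients solve analogous linear equations order by order, which is the recursive structure the paper exploits:

```python
# Toy instance of the parameterization method: solve the order-2 homological
# equation m*a + 1 = 2*l*a for the stable-manifold coefficient, then verify
# the invariance equation f(W(t)) = DW(t) * (l*t) numerically.
l, m = -1.0, 2.0                   # stable and unstable rates (illustrative)
a = 1.0 / (2.0 * l - m)            # solves m*a + 1 = 2*l*a

def f(u, v):
    """The vector field u' = l*u, v' = m*v + u**2."""
    return l * u, m * v + u ** 2

def W(t):
    """Second-order parameterization of the stable manifold."""
    return t, a * t ** 2

t = 0.3
fu, fv = f(*W(t))
assert abs(fu - l * t) < 1e-12                 # first invariance component
assert abs(fv - 2 * a * t * (l * t)) < 1e-12   # second component: DW(t)*(l*t)
print(a)   # -0.25
```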
On quaternion based parameterization of orientation in computer vision and robotics
Directory of Open Access Journals (Sweden)
G. Terzakis
2014-04-01
Full Text Available The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first scheme concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest that there are several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
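The second scheme can be sketched as a map between the unit quaternion sphere and R³. The projection pole at q = (-1, 0, 0, 0) is an assumed convention for illustration; the paper's exact formulas may differ:

```python
import numpy as np

def quat_to_stereo(q):
    """Stereographic projection of a unit quaternion (w, x, y, z), w != -1, to R^3."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    return v / (1.0 + w)

def stereo_to_quat(u):
    """Inverse projection: any point of R^3 maps back to a unit quaternion."""
    u = np.asarray(u, dtype=float)
    s = float(u @ u)
    return np.concatenate([[(1.0 - s) / (1.0 + s)], 2.0 * u / (1.0 + s)])

# Round trip through the three parameters; both maps are rational in their
# arguments (no trigonometric calls), which is one source of the rational
# rotation-matrix derivatives noted in the abstract.
q = np.array([0.5, 0.5, 0.5, 0.5])   # 120-degree rotation about (1,1,1)/sqrt(3)
u = quat_to_stereo(q)
assert np.allclose(stereo_to_quat(u), q)
```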
Carbody structural lightweighting based on implicit parameterized model
Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen
2014-05-01
Most recent research on carbody lightweighting has focused on material substitution and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs, and material and processing lightweighting must still be realized through body structural profiles and locations. In the huge conventional workload of lightweight optimization, model modifications involve heavy manual work, which always leads to a large number of iteration calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with the traditional structural finite element method (FEM) without parameterization. The SFE structural parameterized model is built in accordance with the car structural FE model in the concept development stage, and it is validated against some structural performance data. The validated SFE structural parameterized model can then be used to rapidly and automatically generate FE models and evaluate different design variable groups in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance in carbody structural lightweighting design.
Factors influencing the parameterization of anvil clouds within GCMs
International Nuclear Information System (INIS)
Leone, J.M. Jr.; Chin, Hung-Neng.
1993-03-01
The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). The authors have concentrated on a small portion of the overall goal: the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved, they have been using a multi-scale attack. For the early-time generation and development of the cirrus anvil they are using a cloud-scale model with a horizontal resolution of 1-2 kilometers, while for transport by the larger-scale flow they are using a mesoscale model with a horizontal resolution of 20-60 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to derive improved cloud parameterizations for use in GCMs. This paper presents results from their cloud-scale studies and describes a new tool, a cirrus generator, that they have developed to aid in their mesoscale studies.
An energetically consistent vertical mixing parameterization in CCSM4
DEFF Research Database (Denmark)
Nielsen, Søren Borg; Jochum, Markus; Eden, Carsten
2018-01-01
An energetically consistent stratification-dependent vertical mixing parameterization is implemented in the Community Climate System Model 4 and forced with energy conversion from the barotropic tides to internal waves. The structures of the resulting dissipation and diffusivity fields are compared......, however, depends greatly on the details of the vertical mixing parameterizations, where the new energetically consistent parameterization results in low thermocline diffusivities and a sharper and shallower thermocline. It is also investigated if the ocean state is more sensitive to a change in forcing...
Test Driven Development of a Parameterized Ice Sheet Component
Clune, T.
2011-12-01
Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability to scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need for simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to the far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
Automatic Generation of Symbolic Model for Parameterized Synchronous Systems
Institute of Scientific and Technical Information of China (English)
Wei-Wen Xu
2004-01-01
With the purpose of making the verification of parameterized systems more general and easier, this paper proposes a new and intuitive language, PSL (Parameterized-system Specification Language), to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent the collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained, and it has been tested on some cases. The experimental results show that the method is satisfactory.
Kim, M.; Harman, C. J.; Troch, P. A. A.
2017-12-01
Hillslopes have been extensively explored as a natural fundamental unit for spatially-integrated hydrologic models. Much of this attention has focused on their use in predicting the quantity of discharge, but hillslope-based models can potentially be used to predict the composition of discharge (in terms of age and chemistry) if they can be parameterized in terms of measurable physical properties. Here we present advances in the use of rank StorAge Selection (rSAS) functions to parameterize transport through hillslopes. These functions provide a mapping between the distribution of water ages in storage and in outfluxes in terms of a probability distribution over storage. It has previously been shown that rSAS functions are related to the relative partitioning and arrangement of flow pathways (and variabilities in that arrangement), while separating out the effect of changes in the overall rate of fluxes in and out. This suggests that rSAS functions should have a connection to the internal organization of flow paths in a hillslope. Using a combination of numerical modeling and theoretical analysis we examined: first, the controls of physical properties on the internal spatial organization of age (time since entry), life expectancy (time to exit), and the emergent transit time distribution and rSAS functions; second, the possible parameterization of the rSAS function using the physical properties. The numerical modeling results showed the clear dependence of the rSAS function forms on the physical properties, and relations between the internal organization and the rSAS functions. For different rates of exponential decline of saturated hydraulic conductivity with depth, the spatial organization of life expectancy varied dramatically and determined the rSAS function forms, while the organization of age showed fewer qualitative differences. Analytical solutions predicting this spatial organization and the resulting rSAS function were derived for simplified systems. These
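As a schematic of what an rSAS function does, consider a discrete toy (invented numbers, not the authors' hillslope model): water in storage is ranked by age, and a probability distribution over ranked storage selects which ages leave in discharge.

```python
import numpy as np

# One toy rSAS timestep: a pdf over age-ranked storage (the rSAS function)
# decides how much of discharge Q is drawn from each age bin.
def rsas_step(age_mass, Q, omega):
    """Remove discharge Q from age-ranked storage according to rSAS pdf omega."""
    draw = Q * omega / omega.sum()        # discharge drawn from each age bin
    draw = np.minimum(draw, age_mass)     # cannot remove more than is stored
    return age_mass - draw, draw

storage = np.array([10.0, 10.0, 10.0, 10.0])   # equal mass in 4 age bins
young_biased = np.array([0.4, 0.3, 0.2, 0.1])  # rSAS pdf favoring young water
remaining, outflow = rsas_step(storage, Q=10.0, omega=young_biased)
print(outflow)   # the young bins contribute most of the discharge
```

A uniform pdf would reproduce a well-mixed store; the bias toward rank zero is what "preference for young water" means in this framework.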
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Parameterization of a ruminant model of phosphorus digestion and metabolism.
Feng, X; Knowlton, K F; Hanigan, M D
2015-10-01
The objective of the current work was to parameterize the digestive elements of the model of Hill et al. (2008) using data collected from animals that were ruminally, duodenally, and ileally cannulated, thereby providing a better understanding of the digestion and metabolism of P fractions in growing and lactating cattle. The model of Hill et al. (2008) was fitted and evaluated for adequacy using the data from 6 animal studies. We hypothesized that sufficient data would be available to estimate P digestion and metabolism parameters and that these parameters would be sufficient to derive P bioavailabilities of a range of feed ingredients. Inputs to the model were dry matter intake; total feed P concentration (fPtFd); phytate (Pp), organic (Po), and inorganic (Pi) P as fractions of total P (fPpPt, fPoPt, fPiPt); microbial growth; amount of Pi and Pp infused into the omasum or ileum; milk yield; and BW. The available data were sufficient to derive all model parameters of interest. The final model predicted that given 75 g/d of total P input, the total-tract digestibility of P was 40.8%, Pp digestibility in the rumen was 92.4%, and in the total-tract was 94.7%. Blood P recycling to the rumen was a major source of Pi flow into the small intestine, and the primary route of excretion. A large proportion of Pi flowing to the small intestine was absorbed; however, additional Pi was absorbed from the large intestine (3.15%). Absorption of Pi from the small intestine was regulated, and given the large flux of salivary P recycling, the effective fractional small intestine absorption of available P derived from the diet was 41.6% at requirements. Milk synthesis used 16% of total absorbed P, and less than 1% was excreted in urine. The resulting model could be used to derive P bioavailabilities of commonly used feedstuffs in cattle production. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model
Bachman, Scott D.; Anstey, James A.; Zanna, Laure
2018-06-01
A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models which might provide insight into numerical implementation, the input and dissipation of kinetic energy between these two turbulent models differ.
CloudSat 2C-ICE product update with a new Ze parameterization in lidar-only region.
Deng, Min; Mace, Gerald G; Wang, Zhien; Berry, Elizabeth
2015-12-16
The CloudSat 2C-ICE data product is derived from a synergetic ice cloud retrieval algorithm that takes as input a combination of CloudSat radar reflectivity (Ze) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation lidar attenuated backscatter profiles. The algorithm uses a variational method for retrieving profiles of visible extinction coefficient, ice water content, and ice particle effective radius in ice or mixed-phase clouds. Because of the nature of the measurements and to maintain consistency in the algorithm numerics, we choose to parameterize (with appropriately large specification of uncertainty) Ze and lidar attenuated backscatter in the regions of a cirrus layer where only the lidar provides data and where only the radar provides data, respectively. To improve the Ze parameterization in the lidar-only region, the relations among Ze, extinction, and temperature have been more thoroughly investigated using Atmospheric Radiation Measurement long-term millimeter cloud radar and Raman lidar measurements. This Ze parameterization provides a first-order estimate of Ze as a function of extinction and temperature in the lidar-only regions of cirrus layers. The effects of this new parameterization have been evaluated for consistency using radiation closure methods, where the radiative fluxes derived from retrieved cirrus profiles compare favorably with Clouds and the Earth's Radiant Energy System measurements. Results will be made publicly available for the entire CloudSat record (since 2006) in the most recent product release, known as R05.
Tsunami damping by mangrove forest: a laboratory study using parameterized trees
Directory of Open Access Journals (Sweden)
A. Strusińska-Correia
2013-02-01
Full Text Available Tsunami attenuation by coastal vegetation was examined under laboratory conditions for mature mangroves Rhizophora sp. The novel tree parameterization concept developed here, accounting for both bio-mechanical and structural tree properties, allowed the complex tree structure to be substituted by a simplified tree model of identical hydraulic resistance. The most representative parameterized mangrove model was selected among the tested models with different frontal area and root density, based on hydraulic test results. The selected parameterized tree models were arranged in a forest model of different width and further tested systematically under varying incident tsunami conditions (solitary waves and tsunami bores). The damping performance of the forest models under these two flow regimes was compared in terms of wave height and force envelopes, wave transmission coefficient, as well as drag and inertia coefficients. Unlike previous studies, the results indicate a significant contribution of the foreshore topography to solitary wave energy reduction through wave breaking, in comparison to that attributed to the forest itself. A similar rate of tsunami transmission (ca. 20%) was achieved for both flow conditions (solitary waves and tsunami bores) and the widest forest (75 m in prototype) investigated. The drag coefficient C_D attributed to the solitary waves tends to be constant (C_D = 1.5) over the investigated range of the Reynolds number.
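With the reported C_D of about 1.5, the drag contribution of a parameterized tree follows the standard quadratic drag law; the flow speed and frontal area below are invented for illustration, not values from the experiments:

```python
RHO = 1000.0   # water density, kg/m^3
C_D = 1.5      # drag coefficient reported for the solitary-wave tests

def drag_force(u, frontal_area):
    """Quadratic (Morison-type) drag term F = 0.5 * rho * C_D * A * u * |u|."""
    return 0.5 * RHO * C_D * frontal_area * u * abs(u)

# Hypothetical case: 2 m/s depth-averaged flow against 0.4 m^2 of
# parameterized-tree frontal area gives roughly 1.2 kN of drag.
print(drag_force(2.0, 0.4))
```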
Parameterization of Mixed Layer and Deep-Ocean Mesoscales Including Nonlinearity
Canuto, V. M.; Cheng, Y.; Dubovikov, M. S.; Howard, A. M.; Leboissetier, A.
2018-01-01
In 2011, Chelton et al. carried out a comprehensive census of mesoscales using altimetry data and reached the following conclusions: "essentially all of the observed mesoscale features are nonlinear" and "mesoscales do not move with the mean velocity but with their own drift velocity," which is "the most germane of all the nonlinear metrics." Accounting for these results in a mesoscale parameterization presents conceptual and practical challenges, since linear analysis is no longer usable and one needs a model of nonlinearity. A mesoscale parameterization is presented that has the following features: 1) it is based on the solutions of the nonlinear mesoscale dynamical equations, 2) it describes arbitrary tracers, 3) it includes adiabatic (A) and diabatic (D) regimes, 4) the eddy-induced velocity is the sum of a Gent and McWilliams (GM) term plus a new term representing the difference between drift and mean velocities, 5) the new term lowers the transfer of mean potential energy to mesoscales, 6) the isopycnal slopes are not as flat as in the GM case, 7) deep-ocean stratification is enhanced compared to previous parameterizations in which being more weakly stratified allowed a large heat uptake that is not observed, 8) the strength of the Deacon cell is reduced. The numerical results are from a stand-alone ocean code with Coordinated Ocean-Ice Reference Experiment I (CORE-I) normal-year forcing.
The package PAKPDF 1.1 of parameterizations of parton distribution functions in the proton
International Nuclear Information System (INIS)
Charchula, K.
1992-01-01
A FORTRAN package containing parameterizations of parton distribution functions (PDFs) in the proton is described. It allows easy access to the PDFs provided by several recent parameterizations and to some parameters characterizing each particular parameterization. Some comments on the use of the various parameterizations are also included. (orig.)
A new parameterization for ice cloud optical properties used in BCC-RAD and its radiative impact
International Nuclear Information System (INIS)
Zhang, Hua; Chen, Qi; Xie, Bing
2015-01-01
A new parameterization of the solar and infrared optical properties of ice clouds that considers the multiple habits of ice particles was developed on the basis of a prescribed dataset. First, fitting formulae for the bulk extinction coefficient, single-scatter albedo, asymmetry factor, and δ-function forward-peak factor at the given 65 wavelengths, as functions of effective radius, were created for common scenarios; these consider a greater number of wavelengths and are more accurate than those used previously. Then, the band-averaged volume extinction and absorption coefficients, asymmetry factor and forward-peak factor of ice cloud were derived for the BCC-RAD (Beijing Climate Center radiative transfer model) using a parameter reference table. Finally, the newly developed scheme, the original scheme, and the commonly used Fu scheme for ice clouds were all applied to the BCC-RAD. Their influences on radiation calculations were compared using the mid-latitude summer atmospheric profile with ice clouds under no-aerosol conditions, and produced a maximum difference of approximately 30.0 W/m2 for the radiative flux, and 4.0 K/d for the heating rate. Additionally, a sensitivity test was performed to investigate the impact of the ice crystal density on radiation calculations using the three schemes. The results showed that the maximum difference was 68.1 W/m2 for the shortwave downward radiative flux (for the case of perpendicular solar insolation), and 4.2 K/d for the longwave heating rate, indicating that the ice crystal density exerts a significant effect on radiation calculations for a cloudy atmosphere. - Highlights: • A new parameterization of the radiative properties of ice cloud was obtained. • More accurate fitting formulae were created for common scenarios. • The band-averaged properties were derived for our radiation model, BCC-RAD. • We found that there exist large differences among the results of different ice schemes. • We found
Impact of Physics Parameterization Ordering in a Global Atmosphere Model
Donahue, Aaron S.; Caldwell, Peter M.
2018-02-01
Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.
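The noncommutativity of sequential splitting is easy to demonstrate with two toy processes; these are arbitrary stand-ins, not E3SM physics:

```python
# Two toy "parameterizations" acting on a temperature state T. Sequential
# splitting applies one after the other, so each process feels the output
# of the preceding one, and the calling order changes the final state.
def convection(T):
    return 0.5 * T      # stand-in process: relaxation by half

def radiation(T):
    return T - 2.0      # stand-in process: fixed cooling increment

T0 = 300.0
convection_first = radiation(convection(T0))   # 0.5*300 - 2   = 148.0
radiation_first = convection(radiation(T0))    # 0.5*(300 - 2) = 149.0
assert convection_first != radiation_first     # splitting is order-dependent
```

In a GCM the per-step differences are small, but as the abstract notes they accumulate into climate-scale differences, including in the net climate feedback.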
Polynomial parameterized representation of macroscopic cross section for PWR reactor
International Nuclear Information System (INIS)
Fiel, Joao Claudio B.
2015-01-01
The purpose of this work is to describe, by means of Tchebychev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and 235U enrichment. The analyzed cross sections are: fission, scattering, total, transport, absorption and capture. This parameterization enables a quick and easy determination of the problem-dependent cross sections to be used in few-group calculations. The methodology presented here will make it possible to provide cross-section values for PWR core calculations without the need to generate them from computer code calculations using standard steps. The results obtained from the parameterized cross-section functions, when compared with the cross sections generated by SCALE code calculations, or with k-inf generated by MCNPX code calculations, show a difference of less than 0.7 percent. (author)
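A one-dimensional sketch of the approach using NumPy's Chebyshev class; the paper parameterizes over five state variables, and the sample values below are mock numbers, not SCALE output:

```python
import numpy as np

# Mock absorption cross-section samples versus soluble boron concentration;
# in the paper these would come from lattice-code (SCALE) calculations.
boron_ppm = np.linspace(0.0, 2000.0, 21)
sigma_abs = 0.025 + 1.2e-5 * boron_ppm     # invented linear dependence

# Least-squares Chebyshev fit; .fit() maps the data window onto [-1, 1],
# which keeps the basis well conditioned over the physical range.
cheb = np.polynomial.Chebyshev.fit(boron_ppm, sigma_abs, deg=4)

# The fitted series then replaces a full lattice calculation at any
# problem-dependent state point.
print(cheb(1000.0))   # recovers ~0.037 for this mock data
```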
Development of a parameterization scheme of mesoscale convective systems
International Nuclear Information System (INIS)
Cotton, W.R.
1994-01-01
The goal of this research is to develop a parameterization scheme for mesoscale convective systems (MCS), including diabatic heating, moisture and momentum transports, cloud formation, and precipitation. The approach is to: perform explicit cloud-resolving simulations of MCSs; perform statistical analyses of simulated MCSs to assist in fabricating a parameterization, calibrating coefficients, etc.; and test the parameterization scheme against independent field data measurements and in numerical weather prediction (NWP) models emulating general circulation model (GCM) grid resolution. Thus far we have formulated, calibrated, implemented and tested a deep convective engine against explicit Florida sea breeze convection and in coarse-grid regional simulations of mid-latitude and tropical MCSs. Several explicit simulations of MCSs have been completed, and several others are in progress. Analysis code is being written and run on the explicitly simulated data.
Parameterized Analysis of Paging and List Update Algorithms
DEFF Research Database (Denmark)
Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro
2015-01-01
… set model and express the performance of well-known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
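Denning's working set measure, the parameter this analysis is built on, is simple to compute; a minimal sketch (the request sequence is invented for illustration):

```python
def working_set_sizes(requests, tau):
    """Denning's working set size |W(t, tau)|: the number of distinct
    pages referenced in the window of the last tau requests ending at t."""
    return [len(set(requests[max(0, t - tau + 1):t + 1]))
            for t in range(len(requests))]

# High-locality sequences keep the working set well below the window size,
# which is exactly the structure a parameterized analysis can exploit.
seq = [1, 2, 1, 2, 1, 3, 3, 3, 2, 1]
print(working_set_sizes(seq, tau=4))  # [1, 2, 2, 2, 2, 3, 3, 2, 2, 3]
```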
Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation
Directory of Open Access Journals (Sweden)
Steve Ghan
2011-10-01
One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically-based parameterizations of droplet nucleation are designed to quickly estimate the number of droplets nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity, and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and compare their performance under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations to cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
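The simplest member of this family of parameterizations is the classic Twomey power-law activation spectrum; a toy sketch, with illustrative (not fitted) coefficients and the peak supersaturation assumed rather than computed from an updraft balance as the fuller schemes do:

```python
def n_activated(c, k, s_max, n_total):
    """Droplet number nucleated at peak supersaturation s_max (in %),
    using the classic Twomey CCN spectrum N(S) = c * S**k, capped by the
    total aerosol number so the activated fraction stays physical."""
    return min(c * s_max ** k, n_total)

# Illustrative marine-aerosol values: c = 100 cm^-3 %^-k, k = 0.5.
for s in (0.1, 0.3, 1.0):
    print(s, n_activated(100.0, 0.5, s, 500.0))
```

Modern schemes (e.g. those compared in this paper) replace the two-parameter spectrum with full lognormal size distributions and hygroscopicity, but the mapping from supersaturation to activated number is the same bottleneck they parameterize.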
Parameterization of radiocaesium soil-plant transfer using soil characteristics
International Nuclear Information System (INIS)
Konoplev, A. V.; Drissner, J.; Klemt, E.; Konopleva, I. V.; Zibold, G.
1996-01-01
A model of radionuclide soil-plant transfer is proposed to parameterize the transfer factor by soil and soil-solution characteristics. The model is tested with experimental data on the aggregated transfer factor T_ag and soil parameters for eight forest sites in Baden-Wuerttemberg. It is shown that the integral soil-plant transfer factor can be parameterized through radiocaesium exchangeability, the capacity of selective sorption sites, and the ion composition of the soil solution or the water extract. A modified technique for measuring the frayed edge sites (FES) in soils with interlayer collapse is proposed. (author)
On parameterized deformations and unsupervised learning
DEFF Research Database (Denmark)
Hansen, Michael Sass
… on an unrestricted linear parameter space, where all derivatives are defined, is introduced. Furthermore, it is shown that the L2-norm on the parameter space introduces a reasonable metric in the actual space of modelled diffeomorphisms. A new parametrization of 3D deformation fields, using potentials and Helmholtz … matrix. Spline approximations of functions, and in particular of image registration warp fields, are discussed. It is shown how spline bases may be learned from the optimization process, i.e. image registration optimization, and how this may contribute a reasonable prior, or regularization, in the method … of the multivariate B-splines, the warp field is automatically refined in areas where it results in the minimization of the registration cost function.
Gravitational wave tests of general relativity with the parameterized post-Einsteinian framework
International Nuclear Information System (INIS)
Cornish, Neil; Sampson, Laura; Yunes, Nicolas; Pretorius, Frans
2011-01-01
Gravitational wave astronomy has tremendous potential for studying extreme astrophysical phenomena and exploring fundamental physics. The waves produced by binary black hole mergers will provide a pristine environment in which to study strong-field dynamical gravity. Extracting detailed information about these systems requires accurate theoretical models of the gravitational wave signals. If gravity is not described by general relativity, analyses that are based on waveforms derived from Einstein's field equations could result in parameter biases and a loss of detection efficiency. A new class of "parameterized post-Einsteinian" waveforms has been proposed to cover this eventuality. Here, we apply the parameterized post-Einsteinian approach to simulated data from a network of advanced ground-based interferometers and from a future space-based interferometer. Bayesian inference and model selection are used to investigate parameter biases, and to determine the level at which departures from general relativity can be detected. We find that in some cases the parameter biases from assuming the wrong theory can be severe. We also find that gravitational wave observations will beat the existing bounds on deviations from general relativity derived from the orbital decay of binary pulsars by a large margin across a wide swath of parameter space.
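The ppE family modifies a GR frequency-domain waveform with amplitude and phase deformation parameters (α, a, β, b); a minimal sketch of that modification, with a toy stationary-phase amplitude standing in for a real template:

```python
import numpy as np

def ppe_modify(h_gr, f, mc_sec, alpha, a, beta, b):
    """Leading-order ppE deformation of a GR frequency-domain waveform:
        h(f) = h_GR(f) * (1 + alpha * u**a) * exp(1j * beta * u**b),
    with u = (pi * Mc * f)**(1/3) and Mc the chirp mass in seconds.
    alpha = beta = 0 recovers general relativity exactly."""
    u = (np.pi * mc_sec * f) ** (1.0 / 3.0)
    return h_gr * (1.0 + alpha * u ** a) * np.exp(1j * beta * u ** b)

f = np.linspace(20.0, 300.0, 100)            # Hz
h_gr = f ** (-7.0 / 6.0) * np.exp(-1j * f)   # toy stationary-phase template
mc = 1.2e-4                                  # ~25 solar-mass chirp mass, in s

h_check = ppe_modify(h_gr, f, mc, 0.0, 1.0, 0.0, 1.0)  # GR limit
h_dev = ppe_modify(h_gr, f, mc, 0.0, 1.0, 0.5, -3.0)   # phase-only deviation
```

Bayesian model selection between the GR waveform and its ppE deformations is then a comparison over (α, a, β, b), which is how the bounds quoted in the abstract are obtained.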
A parameterization of nuclear track profiles in CR-39 detector
Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.
2012-11-01
No. of lines in distributed program, including test data, etc.: 15598. No. of bytes in distributed program, including test data, etc.: 3933244. Distribution format: tar.gz. Programming language: MATLAB. Computer: any desktop or laptop. Operating system: Windows 98 or above (with MATLAB R13 or above installed). RAM: 512 MB or higher. Classification: 17.5. Nature of problem: A new semispherical parameterization of charged-particle tracks in the CR-39 SSNTD was carried out in a previous paper. This parameterization is developed here into MATLAB-based software to calculate the track length and track profile for any proton or alpha-particle energy or etching time. This software is intended to compete with the TRACK_TEST [1] and TRACK_VISION [2] software currently in use by workers in the field of SSNTDs. Solution method: Based on fitting experimental track lengths of protons and alpha particles, for various energies and etching times, to a new semispherical formula with four free fitting parameters, the best set of energy-independent parameters was found. These parameters are built into the software, which solves the set of equations to calculate the track depth; the track etching rate as a function of both time and residual range for particles at normal and oblique incidence; the track longitudinal profile at both normal and oblique incidence; and the three-dimensional track profile at normal incidence. Running time: 1-8 s on a Pentium 4 2 GHz CPU with 3 GB of RAM, depending on the etching time. References: [1] ADWT_v1_0: Computer program TRACK_TEST for calculating parameters and plotting profiles for etch pits in nuclear track materials, D. Nikezic, K.N. Yu, Comput. Phys. Commun. 174 (2006) 160. [2] AEAF
Impact of cloud microphysics and cumulus parameterization on ...
Indian Academy of Sciences (India)
2007-10-09
Oct 9, 2007 … Bangladesh. Weather Research and Forecast (WRF-ARW version) modelling system with six dif- … …tem intensified rapidly into a land depression over southern part of … …tent and temperature and is represented as a sum.
Parameterized representation of macroscopic cross section for PWR reactor
International Nuclear Information System (INIS)
Fiel, João Cláudio Batista; Carvalho da Silva, Fernando; Senra Martinez, Aquilino; Leal, Luiz C.
2015-01-01
Highlights: • This work describes a parameterized representation of the homogenized macroscopic cross section for a PWR reactor. • The parameterization enables a quick determination of problem-dependent cross sections to be used in few-group calculations. • This work allows group cross-section data to be generated for PWR core calculations without computer code calculations. - Abstract: The purpose of this work is to describe, by means of Chebyshev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and ²³⁵U enrichment. The cross-section data analyzed are fission, scattering, total, transport, absorption and capture. The parameterization enables a quick and easy determination of problem-dependent cross sections to be used in few-group calculations. The methodology presented in this paper allows group cross-section data to be generated from stored polynomials for PWR core calculations without the need to generate them from computer code calculations using the standard steps. The results obtained by the proposed methodology, when compared with results from SCALE code calculations, show very good agreement.
Stable Kernel Representations and the Youla Parameterization for Nonlinear Systems
Paice, A.D.B.; Schaft, A.J. van der
1994-01-01
In this paper a general approach is taken to yield a characterization of the class of stable plant-controller pairs, which is a generalization of the Youla parameterization for linear systems. This is based on the idea of representing the input-output pairs of the plant and controller as elements of …
Lv, M.; Li, C.; Lu, H.; Yang, K.; Chen, Y.
2017-12-01
The parameterization of vegetation cover fraction (VCF) is an important component of land surface models. This paper investigates the impacts of three VCF parameterization schemes on land surface temperature (LST) simulation by the Common Land Model (CoLM) over the Tibetan Plateau (TP). The first scheme is a simple land cover (LC) based method (the control run, hereafter CTL); the second is based on remote sensing observations (hereafter RNVCF), in which multi-year climatological VCFs are derived from Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI (Normalized Difference Vegetation Index); the third derives VCF at every model time step from the LAI simulated by the land surface model and a clumping index (hereafter SMVCF). The LST and soil temperature simulated by CoLM with the three VCF schemes were evaluated against satellite LST observations and in situ soil temperature observations, respectively, over the period 2010 to 2013. The comparison against MODIS Aqua LST indicates that (1) CTL produces large biases in all four seasons in the early afternoon (about 13:30 local solar time), with the mean bias in spring reaching 12.14 K; (2) RNVCF and SMVCF reduce the mean bias significantly, by about 6.5 K in spring. Surface soil temperature observed at 5 cm depth from three soil moisture and temperature monitoring networks is also employed to assess the skill of the three VCF schemes. The three networks, crossing the TP from west to east, have different climate and vegetation conditions. In the Ngari network, located in the western TP with an arid climate, there are no obvious differences among the three schemes. In the Naqu network, located in the central TP with a semi-arid climate, CTL shows a severe overestimation (12.1 K), which is reduced by 79% by RNVCF and by 87% by SMVCF. In the third, humid, network (Maqu in the eastern TP), CoLM performs similarly to Naqu. However, at both the Naqu and Maqu networks …
Elvidge, A. D.; Renfrew, I. A.; Weiss, A. I.; Brooks, I. M.; Lachlan-Cope, T. A.; King, J. C.
2015-10-01
Comprehensive aircraft observations are used to characterise surface roughness over the Arctic marginal ice zone (MIZ) and consequently make recommendations for the parameterization of surface momentum exchange in the MIZ. These observations were gathered in the Barents Sea and Fram Strait from two aircraft as part of the Aerosol-Cloud Coupling And Climate Interactions in the Arctic (ACCACIA) project. They represent a doubling of the total number of such aircraft observations currently available over the Arctic MIZ. The eddy covariance method is used to derive estimates of the 10 m neutral drag coefficient (CDN10) from turbulent wind velocity measurements, and a novel method using albedo and surface temperature is employed to derive ice fraction. Peak surface roughness is found at ice fractions in the range 0.6 to 0.8 (with a mean interquartile range in CDN10 of 1.25 to 2.85 × 10⁻³). CDN10 as a function of ice fraction is found to be well approximated by the negatively skewed distribution provided by a leading parameterization scheme (Lüpkes et al., 2012) tailored for sea ice drag over the MIZ, in which the two constituent components of drag - skin drag and form drag - are separately quantified. Current parameterization schemes used in weather and climate models are compared with our results, and the majority are found to be physically unjustified and unrepresentative. The Lüpkes et al. (2012) scheme is recommended in a computationally simple form, with adjusted parameter settings. A good agreement is found to hold for subsets of the data from different locations despite differences in sea ice conditions. Ice conditions in the Barents Sea, characterised by small, unconsolidated ice floes, are found to be associated with higher CDN10 values - especially at the higher ice fractions - than those of Fram Strait, where typically larger, smoother floes are observed. Consequently, the important influence of sea ice morphology and floe size on surface roughness is …
Wood, Eric F.
1993-01-01
The objectives of the research were as follows: (1) extend the Representative Elementary Area (REA) concept, first proposed and developed in Wood et al. (1988), to the water balance fluxes of the interstorm period (redistribution, evapotranspiration and baseflow) necessary for the analysis of long-term water balance processes; (2) derive spatially averaged water balance model equations for spatially variable soil, topography and vegetation over a range of climates, a necessary step toward our goal of deriving consistent hydrologic results up to the GCM grid scales needed for global climate modeling; (3) apply the above macroscale water balance equations with remotely sensed data and begin to explore the feasibility of parameterizing the water balance constitutive equations at GCM grid scale.
Benilov, E. S.
2018-05-01
This paper examines quasigeostrophic flows in an ocean that can be subdivided into an upper active layer (AL) and a lower passive layer (PL), with the flow and density stratification mainly confined to the former. Under this assumption, an asymptotic model is derived parameterizing the effect of the PL on the AL. The model depends only on the PL's depth, whereas its Väisälä-Brunt frequency turns out to be unimportant (as long as it is small). Under the additional assumption that the potential vorticity field in the PL is well diffused and thus uniform, the derived model reduces to a simple boundary condition. This condition is to be applied at the AL/PL interface, after which the PL can be excluded from consideration.
Structural test of the parameterized-backbone method for protein design.
Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom
2004-09-03
Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.
Directory of Open Access Journals (Sweden)
Weijian Guo
2015-05-01
Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitation of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performance. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias in the estimation of precipitation, runoff, evapotranspiration and soil moisture at the different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of properly representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of their similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in ungauged basins.
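The effect of a beta-distributed subgrid storage capacity on the saturated (runoff-contributing) fraction of a grid cell can be sketched by Monte Carlo; the shape parameters and capacities below are illustrative assumptions, not calibrated values from the Yanduhe Basin:

```python
import numpy as np

rng = np.random.default_rng(42)

# Subgrid soil storage capacity (mm) drawn from a scaled beta distribution;
# wm_max and the shape parameters (2, 4) are illustrative.
wm_max = 120.0
cap = wm_max * rng.beta(2.0, 4.0, size=100_000)

def saturated_fraction(storage_mm):
    # Fraction of the cell whose capacity is already filled: the area
    # contributing saturation-excess runoff at this mean storage level.
    return float(np.mean(cap <= storage_mm))

fractions = [saturated_fraction(w) for w in (10.0, 40.0, 80.0)]
print(fractions)  # increases monotonically with wetness
```

Integrating this curve over a rainfall event gives the grid cell's runoff, which is why the shape of the subgrid capacity distribution, not just its mean, matters at coarse resolutions.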
Resolving kinematic redundancy with constraints using the FSP (Full Space Parameterization) approach
International Nuclear Information System (INIS)
Pin, F.G.; Tulloch, F.A.
1996-01-01
A solution method is presented for the motion planning and control of kinematically redundant serial-link manipulators in the presence of motion constraints such as joint limits or obstacles. Given a trajectory for the end-effector, the approach utilizes the recently proposed Full Space Parameterization (FSP) method to generate a parameterized expression for the entire space of solutions of the unconstrained system. At each time step, a constrained optimization technique is then used to analytically find the specific joint motion solution that satisfies the desired task objective and all the constraints active during the time step. The method is applicable to systems operating in a priori known environments or in unknown environments with sensor-based obstacle detection. The derivation of the analytical solution is first presented for a general type of kinematic constraint and is then applied to the problem of motion planning for redundant manipulators with joint limits and obstacle avoidance. Sample results using planar and 3-D manipulators with various degrees of redundancy are presented to illustrate the efficiency and wide applicability of constrained motion planning using the FSP approach
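The solution space that FSP parameterizes can be illustrated with the usual pseudoinverse-plus-null-space decomposition; the sketch below mirrors the spirit of the method (one extra constraint selects a specific solution from the parameterized family) rather than its exact algebra:

```python
import numpy as np

# Planar task: 2-D end-effector velocity, 3 joints => one degree of redundancy.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])          # illustrative Jacobian
xdot = np.array([0.3, -0.1])             # desired task velocity

# Entire solution space of J qdot = xdot: particular solution + null space.
qdot_min = np.linalg.pinv(J) @ xdot      # minimum-norm particular solution
n = np.linalg.svd(J)[2][-1]              # null-space basis vector (3 - 2 = 1)

# Impose one constraint (freeze joint 3, a stand-in for a joint limit)
# by choosing the free coefficient z in qdot = qdot_min + z * n.
z = -qdot_min[2] / n[2]
qdot = qdot_min + z * n

print(J @ qdot, qdot[2])  # task achieved; joint 3 stationary
```

With more degrees of redundancy the null space has several basis vectors, and the coefficient vector z is found by constrained optimization at each time step, as described in the abstract.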
Chekroun, Mickaël D; Wang, Shouhong
2015-01-01
In this second volume, a general approach is developed to provide approximate parameterizations of the "small" scales by the "large" ones for a broad class of stochastic partial differential equations (SPDEs). This is accomplished via the concept of parameterizing manifolds (PMs), which are stochastic manifolds that improve, in mean square error and for a given realization of the noise, on the partial knowledge of the full SPDE solution provided by its projection onto some resolved modes. Backward-forward systems are designed to give access to such PMs in practice. The key idea consists of representing the modes with high wave numbers as a pullback limit depending on the time history of the modes with low wave numbers. Non-Markovian stochastic reduced systems are then derived based on such a PM approach. The reduced systems take the form of stochastic differential equations involving random coefficients that convey memory effects. The theory is illustrated on a stochastic Burgers-type equation.
Parameterized neural networks for high-energy physics
Energy Technology Data Exchange (ETDEWEB)
Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)
2016-05-15
We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)
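The idea reduces to appending the physics parameter (e.g. a hypothesized mass) to the input features, so one classifier covers all parameter values; a toy numpy sketch with an invented mass-dependent signal feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: signal events cluster around a mass-dependent feature value,
# background is flat. The classifier sees the measured feature AND the
# hypothesized mass, so a single model replaces one classifier per mass.
def make_events(mass, n):
    sig = rng.normal(loc=mass, scale=30.0, size=n)
    bkg = rng.uniform(200.0, 1400.0, size=n)
    x = np.concatenate([sig, bkg]) / 1000.0          # crude feature scaling
    m = np.full(2 * n, mass / 1000.0)
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return np.column_stack([x, m]), y

X1, y1 = make_events(500.0, 400)
X2, y2 = make_events(1000.0, 400)
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])

# One tanh hidden layer trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, 16);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))), h

def xent(p, y):
    p = np.clip(p, 1e-9, 1.0 - 1e-9)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

loss_before = xent(forward(X)[0], y)
lr = 0.2
for _ in range(1500):
    p, h = forward(X)
    g = (p - y) / len(y)                      # d(loss)/d(logit)
    gh = np.outer(g, W2) * (1.0 - h ** 2)     # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
loss_after = xent(forward(X)[0], y)

# The trained network can be queried at an intermediate mass (here 750)
# never seen in training, simply by setting the mass input.
p_on_peak = forward(np.array([[0.75, 0.75]]))[0][0]
print(loss_before, loss_after, p_on_peak)
```

The architecture, data and training loop are invented for illustration; the paper's point is the input structure, not any particular network.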
Reliable control using the primary and dual Youla parameterizations
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Stoustrup, J.
2002-01-01
Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again mainly from a fault modeling point of view. Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced, which can be used to analyze how large faults …
IR OPTICS MEASUREMENT WITH LINEAR COUPLING'S ACTION-ANGLE PARAMETERIZATION
International Nuclear Information System (INIS)
LUO, Y.; BAI, M.; PILAT, R.; SATOGATA, T.; TRBOJEVIC, D.
2005-01-01
A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and for the interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the Twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. Example TBT BPM data were acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics.
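The fast β* estimate rests on the drift-space optics around the waist, where β(s) = β* + (s − s_w)²/β* and the phase advance between the BPMs follows by direct integration of ds/β(s); a sketch with illustrative (not measured) numbers:

```python
import math

def beta_in_drift(s, beta_star, s_waist=0.0):
    """Beta function in a field-free drift around a waist:
    beta(s) = beta* + (s - s_w)**2 / beta*."""
    return beta_star + (s - s_waist) ** 2 / beta_star

def phase_advance(s1, s2, beta_star, s_waist=0.0):
    """Phase advance between two BPMs bracketing the waist:
    integral of ds/beta(s) = atan((s2-s_w)/beta*) - atan((s1-s_w)/beta*)."""
    return (math.atan2(s2 - s_waist, beta_star)
            - math.atan2(s1 - s_waist, beta_star))

# BPMs at +/- 8 m around an IP with beta* = 0.65 m (illustrative values).
b1 = beta_in_drift(-8.0, 0.65)
b2 = beta_in_drift(+8.0, 0.65)
dpsi = phase_advance(-8.0, 8.0, 0.65)
print(b1, b2, math.degrees(dpsi))  # symmetric betas; advance just under 180 deg
```

Inverting these relations, i.e. solving for β* and the waist location from the measured betas and phase advance at the two BPMs, is the fast estimate the abstract refers to.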
Firefly Algorithm for Polynomial Bézier Surface Parameterization
Directory of Open Access Journals (Sweden)
Akemi Gálvez
2013-01-01
… reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
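A bare-bones firefly algorithm is easy to sketch; here a simple quadratic bowl stands in for the paper's actual objective (the Bézier least-squares fitting error as a function of the data-point parameters), and all tuning constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def firefly_minimize(f, dim, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    """Bare-bones firefly algorithm (Yang): each firefly moves toward every
    brighter (lower-cost) one with attractiveness beta0 * exp(-gamma * r^2),
    plus a random step whose size decays over the iterations."""
    X = rng.uniform(-2.0, 2.0, size=(n, dim))
    F = np.array([f(x) for x in X])
    for t in range(iters):
        step = alpha * 0.97 ** t
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                             + step * rng.uniform(-0.5, 0.5, dim))
                    F[i] = f(X[i])
    best = int(np.argmin(F))
    return X[best], F[best]

x_best, f_best = firefly_minimize(lambda x: float(np.sum(x ** 2)), dim=2)
print(x_best, f_best)
```

Metaheuristics such as this one need no gradients, which is what makes them applicable to the noisy, multimodal parameterization objective described in the abstract.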
Elastic FWI for VTI media: A synthetic parameterization study
Kamath, Nishant
2016-09-06
A major challenge for multiparameter full-waveform inversion (FWI) is the inherent trade-offs (or cross-talk) between model parameters. Here, we perform FWI of multicomponent data generated for a synthetic VTI (transversely isotropic with a vertical symmetry axis) model based on a geologic section of the Valhall field. A horizontal displacement source, which excites intensive shear waves in the conventional offset range, helps provide more accurate updates to the SV-wave vertical velocity. We test three model parameterizations, which exhibit different radiation patterns and, therefore, create different parameter trade-offs. The results show that the choice of parameterization for FWI depends on the availability of long-offset data, the quality of the initial model for the anisotropy coefficients, and the parameter that needs to be resolved with the highest accuracy.
Parameterization of phase change of water in a mesoscale model
Energy Technology Data Exchange (ETDEWEB)
Levkov, L; Eppel, D; Grassl, H
1987-01-01
A parameterization scheme for the phase change of water is suggested for use in the 3-D numerical nonhydrostatic model GESIMA. The microphysical formulation follows the so-called bulk technique. With this procedure the net production rates in the balance equations for water and potential temperature are given for both the liquid and ice phases. Convectively stable as well as convectively unstable mesoscale systems are considered. (2 figs.)
Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change
Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.
2017-12-01
Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex-system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at …
Liu, Peng; Sun, Jianning; Shen, Lidu
2016-10-01
The entrainment flux ratio A_e and the inversion layer (IL) thickness are two key parameters in a mixed-layer model. A_e is defined as the ratio of the entrainment heat flux at the mixed-layer top to the surface heat flux. The IL is the layer between the mixed layer and the free atmosphere. In this study, a parameterization of A_e is derived from the TKE budget in the first-order model for a well-developed convective boundary layer (CBL) under the condition of linearly sheared geostrophic velocity with a zero value at the surface. It is also appropriate for a CBL under the condition of geostrophic velocity remaining constant with height. LESs are conducted under the above two conditions to determine the coefficients in the parameterization scheme. Results suggest that about 43% of the shear-produced TKE in the IL is available for entrainment, while the shear-produced TKE in the mixed layer and surface layer has little effect on entrainment. Based on this scheme, a new convective turbulence velocity scale is proposed and applied to parameterize the IL thickness. The LES outputs for CBLs under the condition of linearly sheared geostrophic velocity with a non-zero surface value are used to verify the performance of the parameterization scheme. It is found that the parameterized A_e and IL thickness agree well with the LES outputs.
Parameterized Shower Simulation in Lelaps: a Comparison with Geant4
International Nuclear Information System (INIS)
Langeveld, Willy G.J.
2003-01-01
The detector simulation toolkit Lelaps [1] simulates electromagnetic and hadronic showers in calorimetric detector elements of high-energy particle detectors using a parameterization based on the algorithms originally developed by Grindhammer and Peters [2] and Bock et al. [3]. The primary motivations of the present paper are to verify the implementation of the parameterization, to explore the energy regions where the parameterization is valid, and to serve as a basis for further improvement of the algorithm. To this end, we compared the Lelaps simulation to a detailed simulation provided by Geant4 [4]. A number of different calorimeters, both electromagnetic and hadronic, were implemented in both programs. Longitudinal and radial shower profiles and their fluctuations were obtained from Geant4 over a wide energy range and compared with those obtained from Lelaps. Generally, the longitudinal shower profiles are in good agreement over a large part of the energy range, with poorer results at energies below about 300 MeV. Radial profiles agree well in homogeneous detectors but are somewhat deficient in segmented ones. These deficiencies are discussed.
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
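The SMM's central object is a transition matrix between velocity classes. As a simplified stand-in (the paper's contribution is precisely to infer this matrix from two BTCs alone, without per-particle data), the sketch below estimates it from paired successive travel times; all names and distributions are illustrative assumptions.

```python
import numpy as np

def transition_matrix(t1, t2, n_classes=3):
    """Estimate an SMM velocity-class transition matrix from paired travel
    times over two successive equal-length steps (simplified illustration)."""
    # Class boundaries: equiprobable quantiles of the first-step times.
    edges = np.quantile(t1, np.linspace(0, 1, n_classes + 1))
    c1 = np.clip(np.searchsorted(edges, t1, side="right") - 1, 0, n_classes - 1)
    c2 = np.clip(np.searchsorted(edges, t2, side="right") - 1, 0, n_classes - 1)
    T = np.zeros((n_classes, n_classes))
    for i, j in zip(c1, c2):
        T[i, j] += 1
    # Row-normalize: T[i, j] = P(next class is j | current class is i).
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1)

rng = np.random.default_rng(0)
t1 = rng.lognormal(size=2000)
t2 = t1 * np.exp(0.3 * rng.normal(size=2000))  # correlated successive steps
T = transition_matrix(t1, t2)
print(T.round(2))  # rows sum to 1; a strong diagonal reflects persistence
```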
Air quality modeling: evaluation of chemical and meteorological parameterizations
International Nuclear Information System (INIS)
Kim, Youngseob
2011-01-01
The influence of chemical mechanisms and meteorological parameterizations on pollutant concentrations calculated with an air quality model is studied. The influence of the differences between two gas-phase chemical mechanisms on the formation of ozone and aerosols in Europe is low on average. For ozone, the large local differences are mainly due to the uncertainty associated with the kinetics of nitrogen monoxide (NO) oxidation reactions on the one hand, and to the representation of different pathways for the oxidation of aromatic compounds on the other. The aerosol concentrations are mainly influenced by the selection of all major precursors of secondary aerosols and the explicit treatment of chemical regimes corresponding to nitrogen oxide (NOx) levels. The influence of the meteorological parameterizations on aerosol concentrations and their vertical distribution is evaluated over the Paris region in France by comparison with lidar data. The influence of the parameterization of the dynamics in the atmospheric boundary layer is important; however, it is the use of an urban canopy model that significantly improves the modeling of the pollutant vertical distribution. (author)
Statistical dynamical subgrid-scale parameterizations for geophysical flows
International Nuclear Information System (INIS)
O'Kane, T J; Frederiksen, J S
2008-01-01
Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated with their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that governs predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse-engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that 1) estimated transition probabilities agree with simulated values and 2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN database, the parameterization includes the absorption due to the major absorbing gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either with the k-distribution method or with a table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can compute fluxes to within 1% of high-spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
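The k-distribution method mentioned above replaces a band's rapidly varying absorption spectrum with a few (k_i, w_i) quadrature pairs, so the band-averaged transmission over absorber amount u becomes a short sum of exponentials. A minimal sketch, with illustrative coefficients rather than the memorandum's fitted values:

```python
import numpy as np

def band_transmission(u, k, w):
    """Band-averaged gaseous transmission via the k-distribution method:
    T(u) = sum_i w_i * exp(-k_i * u), for absorber amount(s) u."""
    u = np.asarray(u, dtype=float)
    return np.exp(-np.outer(u, k)) @ w

k = np.array([0.01, 0.1, 1.0, 10.0])  # absorption coefficients (illustrative)
w = np.array([0.4, 0.3, 0.2, 0.1])    # quadrature weights, summing to 1
T = band_transmission([0.0, 1.0], k, w)
print(T)  # T(0) = 1 exactly; T decreases with absorber amount
```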
Model-driven harmonic parameterization of the cortical surface: HIP-HOP.
Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O
2013-05-01
In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with results from the FreeSurfer software. The evaluation involves a measure of sulcal dispersion together with angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.
A stochastic parameterization for deep convection using cellular automata
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized that it is important to introduce stochastic elements into the parameterizations (for instance Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities of interest for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, which is important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three components for the representation of cumulus convection that are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines. Probabilistic evaluation demonstrates an enhanced spread in
Previously unknown species of Aspergillus.
Gautier, M; Normand, A-C; Ranque, S
2016-08-01
The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species, frequently isolated from air samples, are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhancing the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and
Systematic Parameterization of Lignin for the CHARMM Force Field
Energy Technology Data Exchange (ETDEWEB)
Vermaas, Joshua; Petridis, Loukas; Beckham, Gregg; Crowley, Michael
2017-07-06
Plant cell walls have three primary components: cellulose, hemicellulose, and lignin, the latter of which is a recalcitrant, aromatic heteropolymer that provides structure to plants, water and nutrient transport through plant tissues, and a highly effective defense against pathogens. Overcoming the recalcitrance of lignin is key to effective biomass deconstruction, which would in turn enable the use of biomass as a feedstock for industrial processes. Our understanding of lignin structure in the plant cell wall is hampered by the limitations of the available lignin force fields, which currently account for only a single linkage between lignins and lack explicit parameterization for emerging lignin structures, both from natural variants and from engineered lignins. Since polymerization of lignin occurs via radical intermediates, multiple C-O and C-C linkages have been isolated, and the current force field represents only a small subset of the diverse lignin structures found in plants. In order to take into account the wide range of lignin polymerization chemistries, monomers and dimer combinations of C-, H-, G-, and S-lignins, as well as hydroxycinnamic acid linkages, were subjected to extensive quantum mechanical calculations to establish target data from which to build a complete molecular mechanics force field tuned specifically for diverse lignins. This was carried out in a GPU-accelerated global optimization process, whereby all molecules were parameterized simultaneously using the same internal parameter set. By parameterizing lignin specifically, we are able to more accurately represent the interactions and conformations of lignin monomers and dimers relative to a general force field. This new force field will enable computational researchers to study the effects of different linkages on the structure of lignin, as well as construct more accurate plant cell wall models based on observed statistical distributions of lignin that differ between
Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.
Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation and had Nash-Sutcliffe efficiency (NSE) > 0.72 and absolute percent bias (|PBIAS|) 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Parameterization models for solar radiation and solar technology applications
International Nuclear Information System (INIS)
Khalil, Samy A.
2008-01-01
Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. Calibration procedures for broadband solar radiation photometric instrumentation have been developed and the accuracy of broadband solar radiation measurements improved. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct-beam, total hemispherical and diffuse sky radiation and for solar radiation technology are briefly reviewed. The uncertainties of various broadband solar radiation measurements due to solar energy and atmospheric effects are discussed. The variation of solar radiation responsivities with meteorological, statistical and climatological parameters and possible atmospheric conditions is examined.
Non-perturbative Aspects of QCD and Parameterized Quark Propagator
Institute of Scientific and Technical Information of China (English)
HAN Ding-An; ZHOU Li-Juan; ZENG Ya-Guang; GU Yun-Ting; CAO Hui; MA Wei-Xing; MENG Cheng-Ju; PAN Ji-Huan
2008-01-01
Based on the Global Color Symmetry Model, the non-perturbative QCD vacuum is investigated with a parameterized, fully dressed quark propagator. Our theoretical predictions for various quantities characterizing the QCD vacuum agree with those of many other phenomenological QCD-inspired models. These successful predictions clearly indicate the broad validity of the parameterized quark propagator used here. The arbitrariness in choosing the integration cut-off parameter in calculating QCD vacuum condensates is discussed in detail, and a method that avoids the dependence of the calculated results on the cut-off parameter is strongly recommended to readers.
Parameterization models for solar radiation and solar technology applications
Energy Technology Data Exchange (ETDEWEB)
Khalil, Samy A. [National Research Institute of Astronomy and Geophysics, Solar and Space Department, Marsed Street, Helwan, 11421 Cairo (Egypt)
2008-08-15
Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. Calibration procedures for broadband solar radiation photometric instrumentation have been developed and the accuracy of broadband solar radiation measurements improved. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct-beam, total hemispherical and diffuse sky radiation and for solar radiation technology are briefly reviewed. The uncertainties of various broadband solar radiation measurements due to solar energy and atmospheric effects are discussed. The variation of solar radiation responsivities with meteorological, statistical and climatological parameters and possible atmospheric conditions is examined. (author)
Parameterization of the dielectric function of semiconductor nanocrystals
Energy Technology Data Exchange (ETDEWEB)
Petrik, P., E-mail: petrik@mfa.kfki.hu
2014-11-15
Optical methods such as spectroscopic ellipsometry are sensitive to the structural properties of semiconductor films, such as crystallinity or grain size. The imaginary part of the dielectric function is proportional to the joint density of electronic states. Consequently, the analysis of the dielectric function around the critical-point energies provides useful information about the electron band structure and all related parameters, such as the grain structure, band gap, temperature, composition, phase structure, and carrier mobility. In this work an attempt is made to present a selection of approaches to parameterizing and analyzing the dielectric function of semiconductors, as well as some applications.
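One common way to parameterize critical-point features in the dielectric function is a sum of Lorentz oscillators. The sketch below is purely illustrative (the oscillator numbers are invented, not a fit to any real semiconductor, and the review covers more sophisticated models as well):

```python
import numpy as np

def lorentz_eps(E, eps_inf, oscillators):
    """Dielectric function as a sum of Lorentz oscillators:
        eps(E) = eps_inf + sum_j A_j / (E_j**2 - E**2 - i*Gamma_j*E)
    with amplitude A_j, center energy E_j and broadening Gamma_j in eV."""
    E = np.asarray(E, dtype=complex)
    eps = np.full_like(E, eps_inf)
    for A, E0, G in oscillators:
        eps += A / (E0**2 - E**2 - 1j * G * E)
    return eps

E = np.linspace(1.0, 5.0, 401)
eps = lorentz_eps(E, eps_inf=1.5, oscillators=[(20.0, 3.4, 0.3)])
print(E[np.argmax(eps.imag)])  # Im(eps) peaks at the critical point near 3.4 eV
```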
IR Optics Measurement with Linear Coupling's Action-Angle Parameterization
Luo, Yun; Pilat, Fulvia Caterina; Satogata, Todd; Trbojevic, Dejan
2005-01-01
The interaction region (IR) optics are measured with the two DX BPMs close to the interaction points (IPs) at the Relativistic Heavy Ion Collider (RHIC). The beta functions at the IP are measured from the phase advances of the two eigenmodes between the two BPMs, and the beta waists are determined from the beta functions at the two BPMs. The coupling parameters at the IPs are obtained through the linear coupling's action-angle parameterization. All experimental data are taken during driven oscillations with the AC dipole. The methods used for these measurements are discussed, and the measurement results during the beta*
Parameterization of interatomic potential by genetic algorithms: A case study
Energy Technology Data Exchange (ETDEWEB)
Ghosh, Partha S., E-mail: psghosh@barc.gov.in; Arya, A.; Dey, G. K. [Materials Science Division, Bhabha Atomic Research Centre, Mumbai-400085 (India); Ranawat, Y. S. [Department of Ceramic Engineering, Indian Institute of Technology (BHU), Varanasi-221005 (India)
2015-06-24
A framework for a Genetic Algorithm based methodology is developed to systematically obtain and optimize parameters of interatomic force-field functions for MD simulations by fitting to a reference database. This methodology is applied to the fitting of ThO2 (CaF2 prototype), a representative of ceramic-based potential fuels for nuclear applications. The resulting GA-optimized parameterization of ThO2 captures basic structural, mechanical and thermo-physical properties and also describes defect structures within the permissible range.
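The fit-to-reference-data idea can be sketched with a deliberately tiny example: a minimal generational GA recovering the two parameters of a Lennard-Jones pair potential from reference energies. This is a hedged illustration only; the actual ThO2 force field involves many more terms and properties, and every name and number below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def lj(r, eps, sigma):
    """Lennard-Jones pair energy, a stand-in for a real ceramic force field."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

# "Reference database": pair energies generated from a known target potential.
r_ref = np.linspace(0.9, 2.0, 30)
e_ref = lj(r_ref, eps=1.0, sigma=1.0)

def fitness(p):
    return -np.sum((lj(r_ref, *p) - e_ref) ** 2)  # higher is better

# Minimal GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.5, 1.5, size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
    w = rng.random((len(pop), 1))
    pop = w * parents + (1 - w) * parents[rng.permutation(len(pop))]
    pop += rng.normal(0, 0.02, pop.shape)  # mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)  # should approach the target (eps, sigma) = (1, 1)
```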
The causal structure of spacetime is a parameterized Randers geometry
Energy Technology Data Exchange (ETDEWEB)
Skakala, Jozef; Visser, Matt, E-mail: jozef.skakala@msor.vuw.ac.nz, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics and Operations Research, Victoria University of Wellington, PO Box 600, Wellington (New Zealand)
2011-03-21
There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries, these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes: the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.
The causal structure of spacetime is a parameterized Randers geometry
International Nuclear Information System (INIS)
Skakala, Jozef; Visser, Matt
2011-01-01
There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries, these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes: the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.
The parameterization of microchannel-plate-based detection systems
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.
2016-10-01
The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
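Item (2) in the abstract, the probability distribution of MCP gain, directly determines the fraction of incident particles counted above a pulse-height threshold. As a hedged illustration, the sketch below assumes a Gaussian gain distribution (a common model for MCPs operated in saturation; the mission's actual parameterization may differ, and the numbers are invented):

```python
import math

def counted_fraction(threshold, mean_gain, sigma):
    """Fraction of MCP pulses exceeding a counting threshold, assuming the
    per-particle gain is Gaussian with the given mean and width."""
    z = (threshold - mean_gain) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# A threshold one sigma below the mean gain counts ~84% of pulses.
print(round(counted_fraction(threshold=1e6, mean_gain=2e6, sigma=1e6), 3))
```

As the MCP gain degrades in flight, mean_gain drops toward the threshold and the counted fraction falls, which is why in-flight gain calibration matters.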
Further study on parameterization of reactor NAA: Pt. 2
International Nuclear Information System (INIS)
Tian Weizhi; Zhang Shuxin
1989-01-01
In the previous paper, the Ik0 method was proposed for fission interference corrections. Another important class of interferences in reactor NAA is due to threshold reactions induced by reactor fast neutrons. In view of the increasing importance of this kind of interference, and the difficulties encountered in using the relative comparison method, a parameterized method has been introduced. Typical channels in the heavy water reflector and the No. 2 horizontal channel of the Heavy Water Research Reactor at the Institute of Atomic Energy have been shown, using multi-threshold detectors, to have fast-neutron energy distributions (E > 4 MeV) close to the primary fission neutron spectrum. On this basis, a Ti foil is used as an instant fast-neutron flux monitor in parameterized corrections for threshold-reaction interferences in long irradiations. A constant value of φf/φs = 0.70 ± 0.02% has been obtained for the No. 2 rabbit channel. This value can be directly used for threshold-reaction interference corrections in short irradiations.
Improving microphysics in a convective parameterization: possibilities and limitations
Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak
2017-04-01
The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.
Parameterization of ionization rate by auroral electron precipitation in Jupiter
Directory of Open Access Journals (Sweden)
Y. Hiraki
2008-02-01
We simulate auroral electron precipitation into the Jovian atmosphere in which electron multi-directional scattering and energy degradation processes are treated exactly with a Monte Carlo technique. We parameterize the calculated ionization rate of the neutral gas by electron impact in a similar way as is used for the Earth's aurora. Our method allows the altitude distribution of the ionization rate to be obtained as a function of an arbitrary initial energy spectrum in the range of 1–200 keV. It also includes incident-angle dependence and an arbitrary density distribution of molecular hydrogen. We show that there is little dependence of the estimated ionospheric conductance on atomic species such as H and He. We compare our results with those of recent studies using different electron transport schemes by adapting our parameterization to their atmospheric conditions, and discuss the intrinsic problems of their simplified assumptions. The ionospheric conductance, which is important for Jupiter's magnetosphere-ionosphere coupling system, is estimated to vary by a factor depending on the electron energy spectrum based on recent observation and modeling. We discuss this difference in relation to the field-aligned current and the electron spectrum.
Rapid parameterization of small molecules using the Force Field Toolkit.
Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C
2013-12-15
The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, the General Amber Force Field and the CHARMM General Force Field, has provided guidelines for the parameterization of small molecules, many technical challenges remain that have hampered their wide-scale application. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, set up multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
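The construction above — misery index as inflation plus unemployment, correlated against a trailing moving average, with the window length scanned for the best fit — can be reproduced in outline. A minimal Python sketch, assuming synthetic series in place of the book and economic data:

```python
import numpy as np


def misery_index(inflation, unemployment):
    """Economic misery index: inflation rate plus unemployment rate."""
    return np.asarray(inflation, dtype=float) + np.asarray(unemployment, dtype=float)


def trailing_mean(x, window):
    """Moving average over the previous `window` years (trailing window;
    shorter at the start of the series)."""
    x = np.asarray(x, dtype=float)
    return np.array([x[max(0, i - window + 1): i + 1].mean()
                     for i in range(len(x))])


def best_window(literary, economic, windows=range(1, 21)):
    """Window length maximizing the Pearson correlation between a
    literary misery series and the trailing economic average
    (the paper reports a peak at 11 years)."""
    def corr(w):
        return np.corrcoef(literary, trailing_mean(economic, w))[0, 1]
    return max(windows, key=corr)
```

With real data, `literary` would come from sentiment scoring of the book corpus; here any series built from an 11-year trailing mean recovers that window.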
Capturing the Interplay of Dynamics and Networks through Parameterizations of Laplacian Operators
2016-08-24
we describe an umbrella framework that unifies some of the well-known measures, connecting the ideas of centrality, communities and dynamical processes ... change of basis. Parameterized centrality also leads to the definition of parameterized volume for subsets of vertices. Parameterized conductance ... behind this definition is to establish a direct connection between centrality and community measures, as we will later demonstrate with the notion of
A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes
International Nuclear Information System (INIS)
J. O. Pinto; A.H. Lynch
2004-01-01
The goal of this project is the development and evaluation of improved parameterizations of arctic cloud and radiation processes and their implementation in a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed-phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles.
ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM
Directory of Open Access Journals (Sweden)
Mohammed Adam Taheir Mohammed
2016-02-01
Full Text Available In this paper, the parameterization value reduction of soft sets and its algorithm for decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of the parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the values of the least significant parameters from the soft set. Through the analysis, two techniques are described. This study finds that the parameterization reduction of soft sets and its algorithm yields differing, inconsistent and suboptimal results.
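The family of techniques this paper analyzes works on choice values of a soft set (for each object, the number of parameters under which it appears). A minimal Python sketch; the dispensability test below is a simplified illustration of parameterization reduction, not the paper's exact algorithm:

```python
def choice_values(soft_set, objects):
    """Choice value of each object: the number of parameters under
    which the object appears in the soft set (a dict mapping each
    parameter to its approximate set of objects)."""
    return {o: sum(o in approx for approx in soft_set.values())
            for o in objects}


def dispensable_parameters(soft_set, objects):
    """Parameters whose individual removal leaves the set of
    optimal-choice objects unchanged -- a simplified reduction test."""
    def optimal(ss):
        cv = choice_values(ss, objects)
        best = max(cv.values())
        return {o for o, v in cv.items() if v == best}

    baseline = optimal(soft_set)
    return [p for p in soft_set
            if optimal({q: s for q, s in soft_set.items() if q != p}) == baseline]
```

Reductions built from such per-parameter tests can interact, which is one route to the inconsistent suboptimal results the paper reports.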
Stachura, M.; Herzfeld, U. C.; McDonald, B.; Weltman, A.; Hale, G.; Trantow, T.
2012-12-01
The dynamical processes that occur during the surge of a large, complex glacier system are far from being understood. The aim of this paper is to derive a parameterization of surge characteristics that captures the principal processes and can serve as the basis for a dynamic surge model. Innovative mathematical methods are introduced that facilitate derivation of such a parameterization from remote-sensing observations. Methods include automated geostatistical characterization and connectionist-geostatistical classification of dynamic provinces and deformation states, using the vehicle of crevasse patterns. These methods are applied to analyze satellite and airborne image and laser altimeter data collected during the current surge of Bering Glacier and Bagley Ice Field, Alaska.
A stratiform cloud parameterization for General Circulation Models
International Nuclear Information System (INIS)
Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.
1994-01-01
The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.
A stratiform cloud parameterization for general circulation models
International Nuclear Information System (INIS)
Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.
1994-01-01
The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.
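The two-moment idea — diagnosing particle size from predicted mass and number concentrations — can be illustrated with the standard mean-volume-radius relation. This is a generic sketch, not the project's scheme:

```python
import math

RHO_W = 1000.0  # liquid water density, kg m^-3


def mean_volume_radius(q_c, n_c, rho_air=1.0):
    """Mean-volume droplet radius (m) diagnosed from the two predicted
    moments: cloud water mixing ratio q_c (kg/kg) and droplet number
    concentration n_c (m^-3). rho_air in kg m^-3."""
    lwc = q_c * rho_air  # liquid water content, kg m^-3
    return (3.0 * lwc / (4.0 * math.pi * RHO_W * n_c)) ** (1.0 / 3.0)
```

Both the cloud radiative properties and the autoconversion of cloud water to precipitation are then parameterized as functions of this diagnosed size.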
Sensitivity of tropical cyclone simulations to microphysics parameterizations in WRF
International Nuclear Information System (INIS)
Reshmi Mohan, P.; Srinivas, C.V.; Bhaskaran, R.; Venkatraman, B.; Yesubabu, V.
2018-01-01
Tropical cyclones (TCs) cause storm surge along coastal areas where these storms cross the coast. As major nuclear facilities are usually installed in coastal regions, surge predictions are highly important for DAE. The critical TC parameters needed to estimate storm surge are intensity (winds, central pressure and radius of maximum winds) and storm track. Predictions with numerical models are generally made by representing cloud and precipitation processes using convective and microphysics parameterizations. At high spatial resolutions (1–3 km), an NWP model can act as a cloud-resolving model, explicitly resolving convective precipitation without convection schemes. Recent WRF simulation studies of severe weather phenomena such as thunderstorms and hurricanes indicated large sensitivity of predicted rainfall and hurricane tracks to microphysics, owing to variations in temperature and pressure gradients which generate the winds that determine the storm track. In the present study, the sensitivity of tropical cyclone track and intensity to different microphysics schemes has been investigated.
Examining Chaotic Convection with Super-Parameterization Ensembles
Jones, Todd R.
This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean-state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, permitting examination of the weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large scale.
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
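The parameter-storage idea behind 3DPR — persist how a volume was visualized rather than the rendered result — can be sketched as a container for the four pipeline tasks the paper identifies. The field names below are illustrative stand-ins, not actual DICOM attributes:

```python
from dataclasses import dataclass, field, asdict


@dataclass
class PresentationState3D:
    """Hypothetical container mirroring the four tasks the paper
    parameterizes; keys inside each dict are illustrative."""
    preprocessing: dict = field(default_factory=dict)   # e.g. window/level, filters
    segmentation: dict = field(default_factory=dict)    # e.g. compressed label masks
    postprocessing: dict = field(default_factory=dict)  # e.g. smoothing settings
    rendering: dict = field(default_factory=dict)       # e.g. transfer function, camera

    def to_payload(self):
        """Serializable payload stored instead of rendered snapshots,
        so the 3D view can be regenerated without repeating each task."""
        return asdict(self)
```

Storing this payload alongside the source series avoids duplicating image data the way snapshot or video export does, and preserves every user interaction for later regeneration.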
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably in representing depth-induced wave breaking in shallow waters, but each with its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) underpredicts SWHs in locally generated wave conditions and overpredicts them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization whose breaker index depends on the normalized water depth in deep waters, as in SA15. In shallow waters, the breaker index of the new parameterization depends nonlinearly on the local bottom slope, rather than linearly as in SA15. Overall, the new parameterization has the best performance, with an average scatter index of ∼8.2%, compared with the three best-performing existing parameterizations, whose average scatter indices range between 9.2% and 13.6%.
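In BJ78-type parameterizations the fraction of breaking waves Q_b follows from the implicit relation (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)^2 with H_max = gamma * depth. A Python sketch, assuming the commonly used default breaker index gamma = 0.73 and solving by bisection:

```python
import math


def breaking_fraction(h_rms, depth, gamma=0.73):
    """Fraction of breaking waves Q_b from the Battjes & Janssen (1978)
    implicit relation (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)^2, where
    H_max = gamma * depth is the depth-limited maximum wave height."""
    h_max = gamma * depth
    b2 = (h_rms / h_max) ** 2
    if b2 >= 1.0:
        return 1.0  # saturated surf zone: all waves breaking
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # residual is positive left of the root, negative right of it
        if (1.0 - mid) / math.log(mid) + b2 > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The breaker index gamma is exactly the quantity the assessed parameterizations disagree on; making it a function of normalized depth or bottom slope (as in SA15 and the new scheme) only changes the `gamma` argument here.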
Hall, Carlton Raden
thickness Ltadj, LAI, and h (m). Its function is to translate leaf level estimates of diffuse absorption and backscatter to the canopy scale allowing the leaf optical properties to directly influence above canopy estimates of reflectance. The model was successfully modified and parameterized to operate in a canopy scale and a leaf scale mode. Canopy scale model simulations produced the best results. Simulations based on leaf derived coefficients produced calculated above canopy reflectance errors of 15% to 18%. A comprehensive sensitivity analysis indicated the most important parameters were beam to diffuse conversion c(lambda, m-1), diffuse absorption a(lambda, m-1), diffuse backscatter b(lambda, m-1), h (m), Q, and direct and diffuse irradiance. Sources of error include the estimation procedure for the direct beam to diffuse conversion and attenuation coefficients and other field and laboratory measurement and analysis errors. Applications of the model include creation of synthetic reflectance data sets for remote sensing algorithm development, simulations of stress and drought on vegetation reflectance signatures, and the potential to estimate leaf moisture and chemical status.
Mckinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-01-01
A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(443) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.
Directory of Open Access Journals (Sweden)
R. C. Braga
2017-06-01
Full Text Available The objective of this study is to validate parameterizations that were recently developed for satellite retrievals of cloud condensation nuclei supersaturation spectra, NCCN(S), at cloud base, alongside more traditional parameterizations connecting NCCN(S) with cloud base updrafts and drop concentrations. This was based on the HALO aircraft measurements during the ACRIDICON–CHUVA campaign over the Amazon region, which took place in September 2014. The properties of convective clouds were measured with a cloud combination probe (CCP), a cloud and aerosol spectrometer (CAS-DPOL), and a CCN counter onboard the HALO aircraft. An intercomparison of the cloud drop size distributions (DSDs) and the cloud water content (CWC) derived from the different instruments generally shows good agreement within the instrumental uncertainties. To this end, the directly measured cloud drop concentrations (Nd) near cloud base were compared with inferred values based on the measured cloud base updraft velocity (Wb) and NCCN(S) spectra. The measurements of Nd at cloud base were also compared with drop concentrations (Na) derived on the basis of an adiabatic assumption and obtained from the vertical evolution of the cloud drop effective radius (re) above cloud base. The measurements of NCCN(S) and Wb reproduced the observed Nd within the measurement uncertainties when Twomey's old (1959) parameterization was used. The agreement between the measured and calculated Nd was only within a factor of 2 with attempts to use cloud base S, as obtained from the measured Wb, Nd, and NCCN(S). This underscores the yet unresolved challenge of aircraft measurements of S in clouds. Importantly, the vertical evolution of re with height reproduced the observation-based nearly adiabatic cloud base drop concentrations, Na. The combination of these results provides aircraft observational support for the various components of the satellite-retrieved methodology that was recently developed to
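Twomey's (1959) power-law spectrum referenced above, N_CCN(S) = C * S^k, is simple enough to sketch, together with the drop-concentration estimate at a given cloud base peak supersaturation. S_max is treated as a given input here, since its in-cloud measurement is exactly the unresolved challenge the abstract notes:

```python
import math


def ccn_power_law(s, c, k):
    """Twomey (1959) power-law CCN spectrum N_CCN(S) = C * S**k,
    with S the supersaturation in percent."""
    return c * s ** k


def fit_power_law(s1, n1, s2, n2):
    """Fit (C, k) from two points of a measured N_CCN(S) spectrum."""
    k = math.log(n2 / n1) / math.log(s2 / s1)
    return n1 / s1 ** k, k


def droplet_concentration(smax, c, k):
    """Drop concentration assuming every CCN active at the cloud base
    peak supersaturation S_max nucleates: N_d = N_CCN(S_max)."""
    return ccn_power_law(smax, c, k)
```

In the aircraft validation, N_d measured near cloud base is compared against this kind of estimate driven by the measured W_b and CCN spectrum.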
The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction
Energy Technology Data Exchange (ETDEWEB)
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2003-11-21
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.
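The verification step of such an NWP-based methodology — scoring short-range forecasts from the realistically initialized GCM against six-hourly analyses as a function of lead time — can be sketched in a few lines. The array layout is an assumption for illustration:

```python
import numpy as np


def rmse_by_lead_time(forecasts, analyses):
    """Root-mean-square forecast error at each (six-hourly) lead time,
    averaged over forecast start dates and grid points.

    forecasts, analyses: arrays of shape (n_starts, n_leads, n_points),
    `analyses` holding the verifying NWP analysis for each start/lead."""
    err = forecasts - analyses
    return np.sqrt((err ** 2).mean(axis=(0, 2)))
```

Rapid error growth at short leads flags a parameterization deficiency before it is blurred by compensating errors in a long climate integration, which is the core CAPT argument.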
A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model
Energy Technology Data Exchange (ETDEWEB)
Seaman, N.L.; Kain, J.S.; Deng, A. [Pennsylvania State Univ., University Park, PA (United States)
1996-04-01
A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.
Distance parameterization for efficient seismic history matching with the ensemble Kalman Filter
Leeuwenburgh, O.; Arts, R.
2012-01-01
The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel-times is that it requires simulation of
A test harness for accelerating physics parameterization advancements into operations
Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.
2017-12-01
The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a
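The hierarchical-testing idea — the same physics code exercised from a single-column model upward through a common driver interface — can be caricatured in a few lines of Python. The class names and the cooling scheme are hypothetical illustrations, not the actual IPD API:

```python
class PhysicsScheme:
    """Illustrative stand-in for a driver-compatible scheme: it reads
    and updates a shared state dict (NOT the real IPD interface)."""

    def run(self, state, dt):
        raise NotImplementedError


class SimpleCooling(PhysicsScheme):
    """Toy scheme: uniform cooling at a fixed rate (K/s)."""

    def __init__(self, rate=1.0e-5):
        self.rate = rate

    def run(self, state, dt):
        state["temperature"] -= self.rate * dt


def single_column_harness(schemes, state, dt, n_steps):
    """Minimal SCM loop: call each scheme in sequence every time step.
    The same `schemes` list could be handed to a global host model,
    which is the point of the common interface."""
    for _ in range(n_steps):
        for scheme in schemes:
            scheme.run(state, dt)
    return state
```

Because the scheme only ever touches the shared state through the interface, like-to-like comparisons reduce to swapping entries in the `schemes` list.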
Directory of Open Access Journals (Sweden)
P. Otero
2013-05-01
Full Text Available The estimation of sea–air CO2 fluxes depends strongly on wind speed through the gas transfer velocity parameterization. In this paper, we quantify uncertainties in the estimation of the CO2 uptake in the Bay of Biscay resulting from the use of different sources of wind speed, namely three global reanalysis meteorological models (NCEP/NCAR 1, NCEP/DOE 2 and ERA-Interim), one high-resolution regional forecast model (HIRLAM-AEMet), winds derived under the Cross-Calibrated Multi-Platform (CCMP) project, and QuikSCAT winds, in combination with some of the most widely used gas transfer velocity parameterizations. Results show that net CO2 flux estimates over an entire seasonal cycle (September 2002–September 2003) may vary by a factor of ~3 depending on the selected wind speed product and gas exchange parameterization, with the largest impact due to the latter. The comparison of satellite- and model-derived winds with buoy observations reveals a systematic overestimation by NCEP-2 and underestimation by NCEP-1, advising against their use. In the coastal region, the presence of land and the time resolution are the main constraints on QuikSCAT, which makes CCMP and ERA-Interim the preferred options.
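The quadratic wind-speed dependence of the gas transfer velocity is what amplifies wind-product differences into flux uncertainty. A sketch using the widely cited Wanninkhof (1992)-type form, one of several parameterizations such studies compare; consistent units for K0 and the pCO2 difference are assumed:

```python
def gas_transfer_velocity(u10, sc, a=0.31):
    """Quadratic wind-speed gas transfer velocity (cm/h):
    k = a * U10**2 * (Sc/660)**-0.5, with U10 in m/s and Sc the
    Schmidt number; a = 0.31 follows the Wanninkhof (1992) fit."""
    return a * u10 ** 2 * (sc / 660.0) ** -0.5


def co2_flux(u10, sc, k0, dpco2, a=0.31):
    """Sea-air CO2 flux F = k * K0 * (pCO2_sea - pCO2_air), with K0 the
    CO2 solubility; output units follow the inputs."""
    return gas_transfer_velocity(u10, sc, a) * k0 * dpco2
```

A 20% wind bias (e.g. 12 m/s instead of 10 m/s) already inflates the flux by 44% through the square, before any choice between parameterizations.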
Experimental continuously reinforced concrete pavement parameterization using nondestructive methods
Directory of Open Access Journals (Sweden)
L. S. Salles
Full Text Available ABSTRACT Four continuously reinforced concrete pavement (CRCP) sections were built at the University of São Paulo campus in order to analyze pavement performance in a tropical environment. The sections' short length, coupled with particular project aspects, made the experimental CRCP cracking differ from that of traditional CRCP. Three years after construction, a series of nondestructive tests - Falling Weight Deflectometer (FWD) loadings - was performed to verify and parameterize the pavement structural condition based on two main properties: the elasticity modulus of concrete (E) and the modulus of subgrade reaction (k). These properties were estimated through a matching process between real and EverFE-simulated deflection basins with the load at the slab center, between two consecutive cracks. The backcalculation results show that the lack of anchorage at the sections' ends decreases the E and k values and that the longitudinal reinforcement percentage provides additional stiffness to the pavement. Additionally, FWD loadings tangential to the cracks allowed estimation of the load transfer efficiency (LTE) across cracks. The LTE resulted in values above 90% for all cracks.
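The deflection-based LTE obtained from FWD loadings tangential to a crack is a simple ratio of the deflections measured on either side. A sketch; the condition thresholds in the helper are illustrative, not from the paper:

```python
def load_transfer_efficiency(d_loaded, d_unloaded):
    """Deflection-based load transfer efficiency across a crack (%):
    ratio of the deflection on the unloaded side of the crack to the
    deflection under the FWD load plate."""
    return 100.0 * d_unloaded / d_loaded


def lte_condition(lte):
    """Coarse condition rating (thresholds are illustrative)."""
    return "good" if lte >= 90.0 else "fair" if lte >= 70.0 else "poor"
```

By this measure the reported values above 90% put every crack of the experimental sections in the top band, consistent with the continuous reinforcement holding cracks tightly closed.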
Parameterization-based tracking for the P2 experiment
Sorokin, Iurii
2017-08-01
The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θW)/sin²θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10^11 detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the next detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well-suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
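The offline-fit/online-evaluate split described above can be sketched with a polynomial surrogate. A single scalar hit feature stands in for the set of weakly correlated quantities the method actually uses, and the synthetic track sample replaces the high-quality reference tracks:

```python
import numpy as np


def train_parameterization(hit_features, momenta, degree=3):
    """Offline step: fit a polynomial mapping a hit-derived quantity to
    track momentum from a sample of high-quality tracks, so the
    computation-heavy propagation/fit is done once, in advance."""
    return np.polynomial.Polynomial.fit(hit_features, momenta, degree)


def reconstruct(poly, hit_feature):
    """Online step: a single polynomial evaluation per candidate,
    cheap enough for an FPGA-style pipeline."""
    return poly(hit_feature)
```

In the real experiment the inputs are multivariate and the functions approximate the full track model, but the structure is the same: all expensive geometry lives in the pre-computed coefficients.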
Parameterization-based tracking for the P2 experiment
Energy Technology Data Exchange (ETDEWEB)
Sorokin, Iurii [Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany); Collaboration: P2-Collaboration
2016-07-01
The P2 experiment at the new MESA accelerator in Mainz aims to determine the weak mixing angle by measuring the parity-violating asymmetry in elastic electron-proton scattering at low momentum transfer. To achieve an unprecedented precision, on the order of 10^11 scattered electrons per second have to be acquired. Whereas the tracking system is not required to operate at such high rates, every attempt is made to achieve as high a rate capability as possible. The P2 tracking system will consist of four planes of high-voltage monolithic active pixel sensors (HV-MAPS). With the present preliminary design one expects about 150 signal electron tracks and 20000 background hits (from bremsstrahlung photons) per plane in every 50 ns readout frame at the full rate. In order to cope with this extreme combinatorial background in on-line mode, parameterization-based tracking is considered as a possible solution. The idea is to transform the hit positions into a set of weakly correlated quantities, and to find simple (e.g. polynomial) functions of these quantities that would give the required characteristics of the track (e.g. momentum). The parameters of the functions are determined from a sample of high-quality tracks, taken either from a simulation, or reconstructed in a conventional way from a sample of low-rate data.
Influence of Ice Nuclei Parameterization Schemes on the Hail Process
Directory of Open Access Journals (Sweden)
Xiaoli Liu
2018-01-01
Full Text Available Ice nuclei are very important factors as they significantly affect the development and evolution of convective clouds such as hail clouds. In this study, numerical simulations of hail processes in Zhejiang Province were conducted using a mesoscale numerical model (WRF v3.4). The effects of six ice nuclei parameterization schemes on the macroscopic and microscopic structures of hail clouds were compared. The effect of the ice nuclei concentration on ground hailfall is stronger than that on ground rainfall, with significant differences in the timing, location, intensity, and distribution of hailfall. Changes in the ice nuclei concentration caused different changes in hydrometeors: they directly affected the ice crystals and, hence, the spatiotemporal distribution of the other hydrometeors and the thermodynamic structure of the clouds. An increased ice nuclei concentration raises the initial concentration of ice crystals and their mixing ratio. In the developing and early maturation stages of a hail cloud, a larger number of ice crystals compete for water vapor as the ice nuclei concentration increases. This competition prevents ice crystals from growing into snow particles and inhibits the formation and growth of hail embryos. During the later maturation stages, the updraft in the cloud intensifies and more supercooled water is transported above the 0°C level, benefitting the production and growth of hail particles. An increased ice nuclei concentration therefore favors the formation of hail.
Frozen soil parameterization in a distributed biosphere hydrological model
Directory of Open Access Journals (Sweden)
L. Wang
2010-03-01
Full Text Available In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against in-situ observations from the WATER (Watershed Allied Telemetry Experimental Research) project. First, using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moisture at the upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration against the discharges at the basin outlet in July and August, a period that covers the largest annual flood peak of 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gives much better performance than the WEB-DHM without it in simulating the soil moisture profile in this cold-region catchment and the discharges at the basin outlet over the yearlong simulation.
Boundary layer parameterizations and long-range transport
International Nuclear Information System (INIS)
Irwin, J.S.
1992-01-01
A joint work group between the American Meteorological Society (AMS) and the EPA is pursuing the construction of an air quality model that incorporates boundary layer parameterizations of dispersion and transport. This model could replace the currently accepted model, the Industrial Source Complex (ISC) model. The ISC model is a Gaussian-plume multiple point-source model that provides for consideration of fugitive emissions, aerodynamic wake effects, gravitational settling and dry deposition. A work group of several Federal and State agencies is pursuing the construction of an air quality modeling system for use in assessing and tracking visibility impairment resulting from long-range transport of pollutants. The modeling system is designed to use the hourly vertical profiles of wind, temperature and moisture produced by a mesoscale meteorological processor that employs four-dimensional data assimilation (FDDA). FDDA involves adding forcing functions to the governing model equations to gradually ''nudge'' the model state toward the observations (12-hourly upper-air observations of wind, temperature and moisture, and 3-hourly surface observations of wind and moisture). In this way it is possible to generate data sets whose accuracy, in terms of transport, precipitation, and dynamic consistency, is superior both to direct interpolation of synoptic-scale analyses of observations and to purely predictive model results. (AB) (19 refs.)
Parameterization of Fuel-Optimal Synchronous Approach Trajectories to Tumbling Targets
Directory of Open Access Journals (Sweden)
David Charles Sternberg
2018-04-01
Full Text Available Docking with potentially tumbling Targets is a common element of many mission architectures, including on-orbit servicing and active debris removal. This paper studies synchronized docking trajectories as a way to ensure the Chaser satellite remains on the docking axis of the tumbling Target, thereby reducing collision risks and enabling persistent onboard sensing of the docking location. Chaser satellites have limited computational power available to them and the time allowed for the determination of a fuel optimal trajectory may be limited. Consequently, parameterized trajectories that approximate the fuel optimal trajectory while following synchronous approaches may be used to provide a computationally efficient means of determining near optimal trajectories to a tumbling Target. This paper presents a method of balancing the computation cost with the added fuel expenditure required for parameterization, including the selection of a parameterization scheme, the number of parameters in the parameterization, and a means of incorporating the dynamics of a tumbling satellite into the parameterization process. Comparisons of the parameterized trajectories are made with the fuel optimal trajectory, which is computed through the numerical propagation of Euler’s equations. Additionally, various tumble types are considered to demonstrate the efficacy of the presented computation scheme. With this parameterized trajectory determination method, Chaser satellites may perform terminal approach and docking maneuvers with both fuel and computational efficiency.
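The trade-off described above between parameter count and fidelity to the fuel-optimal trajectory can be illustrated with a toy sketch. The reference curve below is an arbitrary smooth stand-in, not a propagated solution of Euler's equations, and polynomial parameterization is only one of several schemes the paper considers.

```python
import numpy as np

# Smooth stand-in for one coordinate of a fuel-optimal approach
# trajectory versus normalized time (toy assumption).
t = np.linspace(0.0, 1.0, 200)
ref = np.sin(2.5 * t) + 0.3 * t ** 2

# Approximate with polynomial parameterizations of increasing size and
# record the worst-case deviation for each parameter count.
errs = {}
for n_par in (2, 4, 6):
    coeffs = np.polynomial.polynomial.polyfit(t, ref, n_par - 1)
    approx = np.polynomial.polynomial.polyval(t, coeffs)
    errs[n_par] = float(np.max(np.abs(approx - ref)))
```

More parameters track the reference more closely but cost more to determine onboard; choosing the parameter count is exactly the computation-versus-fuel balance the paper studies.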
van der Ent, R.; Van Beek, R.; Sutanudjaja, E.; Wang-Erlandsson, L.; Hessels, T.; Bastiaanssen, W.; Bierkens, M. F.
2017-12-01
The storage and dynamics of water in the root zone control many important hydrological processes such as saturation excess overland flow, interflow, recharge, capillary rise, soil evaporation and transpiration. These processes are parameterized in hydrological models or land-surface schemes, and their effect on runoff prediction can be large. Root zone parameters in global hydrological models are very uncertain as they cannot be measured directly at the scale on which these models operate. In this paper we calibrate the global hydrological model PCR-GLOBWB using a state-of-the-art ensemble of evaporation fields derived by solving the energy balance for satellite observations. We focus our calibration on the root zone parameters of PCR-GLOBWB and derive spatial patterns of maximum root zone storage. We find these patterns to correspond well with previous research. The parameterization of our model allows for the conversion of maximum root zone storage to root zone depth, and we find that these depths correspond quite well to the point observations where available. We conclude that climate and soil type should be taken into account when regionalizing measured root depth for a certain vegetation type. We also find that using evaporation rather than discharge better allows for local adjustment of root zone parameters within a basin and thus provides orthogonal data to diagnose and optimize hydrological models and land surface schemes.
Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.
2016-12-01
A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gas in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling from the MODIS-based instantaneous SSR estimates, and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.
Yan, Peng; Zhang, Yangming
2018-06-01
High-performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as the SPM (scanning probe microscope), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high-precision tracking of a piezoelectric nano-manipulator subject to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive inverse-model-based online compensation is derived. A robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics are eliminated by online estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where high hysteresis-modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
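For readers unfamiliar with Prandtl-Ishlinskii models, the classical rate-independent version (the paper extends this to an adaptive rate-dependent variant) is a weighted sum of backlash "play" operators. The thresholds and weights below are arbitrary illustrative values:

```python
import numpy as np

def play(x, r, y0=0.0):
    """Backlash (play) operator with threshold r, applied along sequence x."""
    y = np.empty_like(x)
    prev = y0
    for k, xk in enumerate(x):
        prev = max(xk - r, min(xk + r, prev))
        y[k] = prev
    return y

def pi_model(x, thresholds, weights):
    """Rate-independent Prandtl-Ishlinskii model: weighted sum of plays."""
    return sum(w * play(x, r) for w, r in zip(weights, thresholds))

# Toy input: a few periods of a sine drive, as for a piezo scanner reference.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 400))
y = pi_model(x, thresholds=[0.0, 0.2, 0.4], weights=[0.5, 0.3, 0.2])
```

With a zero threshold the play operator reduces to the identity; nonzero thresholds introduce the memory that produces the hysteresis loop, and the inverse of such a model is what the compensation scheme applies online.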
Parameterization of water vapor using high-resolution GPS data and empirical models
Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.
2018-03-01
The present work evaluates eleven existing empirical models to estimate Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are the water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (i.e., from 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 to 0.106 °C-1 (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
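The Td-based parameterization amounts to a linear regression of PWV on dew point temperature, with the correlation coefficient as the quality measure. A minimal sketch on synthetic data (the assumed slope of 0.09 is merely chosen to fall inside the 0.073-0.106 range reported above):

```python
import numpy as np

rng = np.random.default_rng(1)
td = rng.uniform(-25.0, 5.0, 365)                    # dew point, °C (toy values)
pwv = 0.09 * td + 4.0 + rng.normal(0.0, 0.3, 365)    # toy PWV in mm, with noise

slope, intercept = np.polyfit(td, pwv, 1)            # least-squares line
r = np.corrcoef(td, pwv)[0, 1]                       # correlation coefficient
```

With real GPSPWV data the same two-line fit, repeated per time scale (2 h to monthly), yields the slope and R ranges quoted in the abstract.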
A Parameterized Method for Air-Quality Diagnosis and Its Applications
Directory of Open Access Journals (Sweden)
J. Z. Wang
2012-01-01
Full Text Available A parameterized method is developed to diagnose the air quality in Beijing and other cities with an index termed PLAM (parameters linking air quality to meteorological elements), derived from a correlation between PM10 and relevant weather elements based on data from 2000 to 2007. Key weather factors for diagnosing air pollution intensity are identified and included in PLAM, among them atmospheric condensation of water vapour, wet potential equivalent temperature, and wind velocity. It is found that poor air quality days with elevated PM10 are usually associated with higher PLAM values, featuring higher temperature, higher humidity, lower wind velocity, and higher stability compared with the averaged values over the same period. Both 24 h and 72 h forecasts provided useful services for the day of the opening ceremony of the Beijing Olympic Games and subsequent sport events. For the summer period, a correlation coefficient of 0.82 was achieved between the forecasts and the air pollution index (API), and 0.59 between the forecasts and observed PM10, both reaching the 0.001 significance level. A correction factor was also introduced to enable PLAM to diagnose the observed PM10 concentrations all year round.
Factors influencing the parameterization of anvil clouds within general circulation models
International Nuclear Information System (INIS)
Leone, J.M. Jr.; Chin, H.N.
1994-01-01
The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). We have concentrated on a small portion of the overall goal: the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved, we have been using a multi-scale approach. For the early-time generation and development of the cirrus anvil, we are using a cloud-scale model with a horizontal resolution of 1 to 2 kilometers; for the transport by the larger-scale flow, we are using a mesoscale model with a horizontal resolution of 20 to 60 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to derive improved cloud parameterizations for use in GCMs. This paper presents a new tool, a cirrus generator, that we have developed to aid in our mesoscale studies.
Directory of Open Access Journals (Sweden)
K. M. Sakamoto
2016-06-01
Full Text Available Biomass-burning aerosols have a significant effect on global and regional aerosol climate forcings. To model the magnitude of these effects accurately requires knowledge of the size distribution of the emitted and evolving aerosol particles. Current biomass-burning inventories do not include size distributions, and global and regional models generally assume a fixed size distribution from all biomass-burning emissions. However, biomass-burning size distributions evolve in the plume due to coagulation and net organic aerosol (OA) evaporation or formation, and the plume processes occur on spatial scales smaller than global/regional-model grid boxes. The extent of this size-distribution evolution is dependent on a variety of factors relating to the emission source and atmospheric conditions. Therefore, accurately accounting for biomass-burning aerosol size in global models requires an effective aerosol size distribution that accounts for this sub-grid evolution and can be derived from available emission-inventory and meteorological parameters. In this paper, we perform a detailed investigation of the effects of coagulation on the aerosol size distribution in biomass-burning plumes. We compare the effect of coagulation to that of OA evaporation and formation. We develop coagulation-only parameterizations for effective biomass-burning size distributions using the SAM-TOMAS large-eddy simulation plume model. For the most-sophisticated parameterization, we use the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) to build a parameterization of the aged size distribution based on the SAM-TOMAS output and seven inputs: emission median dry diameter, emission distribution modal width, mass emissions flux, fire area, mean boundary-layer wind speed, plume mixing depth, and time/distance since emission. This parameterization was tested against an independent set of SAM-TOMAS simulations and yields R2 values of 0.83 and 0.89 for Dpm and modal width, respectively.
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
Directory of Open Access Journals (Sweden)
Paulo Fernandes Saad
2012-12-01
Full Text Available CONTEXT: Non-derivative surgical techniques are the treatment of choice for the control of upper digestive tract hemorrhage secondary to schistosomotic portal hypertension. However, recurrent hemorrhaging due to gastroesophagic varices is frequent. OBJECTIVE: To evaluate the outcome of treatment based on embolization of the left gastric vein to control the recurrence of hemorrhages caused by gastroesophagic varices in patients with schistosomiasis previously submitted to non-derivative surgery. METHODS: Rates of hemorrhage recurrence and the qualitative and quantitative reduction of gastroesophagic varices in patients undergoing transhepatic embolization of the left gastric vein between December 1999 and January 2009 were studied based on medical charts and follow-up reports. RESULTS: Seven patients with a mean age of 39.3 years underwent percutaneous transhepatic embolization of the left gastric vein. The mean time between azygoportal disconnection combined with splenectomy and the percutaneous approach was 8.4 ± 7.3 years, and the number of prior episodes of digestive hemorrhaging ranged from 1 to 7. No episodes of recurrent hemorrhaging occurred during a follow-up period ranging from 6 months to 7 years. Postembolization endoscopic studies revealed reductions in gastroesophagic varices in all patients compared with preembolization endoscopy. CONCLUSIONS: Percutaneous transhepatic embolization of the left gastric vein in patients with schistosomiasis previously submitted to surgery resulted in a decrease in gastroesophagic varices and was shown to be effective in controlling hemorrhage recurrence.
Tool-driven Design and Automated Parameterization for Real-time Generic Drivetrain Models
Directory of Open Access Journals (Sweden)
Schwarz Christina
2015-01-01
Full Text Available Real-time dynamic drivetrain modeling approaches have great potential for development cost reduction in the automotive industry. Even though real-time drivetrain models are available, existing solutions are specific to single transmission topologies. In this paper an environment for the parameterization of such models is proposed, based on a generic method applicable to all types of gear transmission topologies. This enables tool-guided modeling by non-experts in the fields of mechanical engineering and control theory, leading to reduced development and testing efforts. The approach is demonstrated for an exemplary automatic transmission using the environment for automated parameterization. Finally, the parameterization is validated against vehicle measurement data.
USING PARAMETERIZATION OF OBJECTS IN AUTODESK INVENTOR IN DESIGNING STRUCTURAL CONNECTORS
Directory of Open Access Journals (Sweden)
Gabriel Borowski
2015-05-01
Full Text Available The article presents the parameterization of objects used in designing elements such as structural connectors and in modifying their characteristics. The design process was carried out using Autodesk Inventor 2015. We show the latest software tools that were used for the parameterization and modeling of selected types of structural connectors. We also show examples of the use of parameterization facilities in constructing details and in making changes to geometry while preserving the shape of the element. The presented use of Inventor enabled fast and efficient creation of new objects based on previously created sketches.
Bioavailability of radiocaesium in soil: parameterization using soil characteristics
Energy Technology Data Exchange (ETDEWEB)
Syssoeva, A.A.; Konopleva, I.V. [Russian Institute of Agricultural Radiology and Agroecology, Obninsk (Russian Federation)
2004-07-01
It has been shown that radiocaesium availability to plants is strongly influenced by soil properties. For the best evaluation of TFs it is necessary to use mechanistic models that predict radionuclide uptake by plants based on consideration of sorption-desorption and fixation-remobilization of the radionuclide in the soil, as well as root uptake processes controlled by the plant. The aim of the research was to characterise typical Russian soils on the basis of radiocaesium availability. A parameter of radiocaesium availability in soils (A) has been developed, which combines the radiocaesium exchangeability; CF, the concentration factor, i.e. the ratio of radiocaesium in the plant to that in the soil solution; and K{sub Dex}, the exchangeable solid-liquid distribution coefficient of radiocaesium. The approach was tested for a wide range of Russian soils using radiocaesium uptake data from a barley pot trial and parameters of radiocaesium bioavailability. Soils were collected from the arable horizons in different soil-climatic zones of Russia and artificially contaminated with {sup 137}Cs. The classification of soils in terms of radiocaesium availability corresponds quite well to the observed linear relationship between the {sup 137}Cs TF for barley and A. K{sub Dex} is related to the soil radiocaesium interception potential (RIP), which was found to be positively and strongly related to clay and physical clay (<0.01 mm) content. The {sup 137}Cs exchangeability was found to be closely related to the soil vermiculite content, which was estimated by the method of Cs{sup +} fixation. It is shown that radiocaesium availability to plants in the soils under study can be parameterized through mineralogical soil characteristics: the percentage of clay and the soil vermiculite content. (author)
Measuring the Binary Black Hole Mass Spectrum with an Astrophysically Motivated Parameterization
Talbot, Colm; Thrane, Eric
2018-04-01
Gravitational-wave detections have revealed a previously unknown population of stellar mass black holes with masses above 20 M⊙. These observations provide a new way to test models of stellar evolution for massive stars. By considering the astrophysical processes likely to determine the shape of the binary black hole mass spectrum, we construct a parameterized model to capture key spectral features that relate gravitational-wave data to theoretical stellar astrophysics. In particular, we model the signature of pulsational pair-instability supernovae, which are expected to cause all stars with initial mass 100 M⊙ ≲ M ≲ 150 M⊙ to form ∼40 M⊙ black holes. This would cause a cutoff in the black hole mass spectrum along with an excess of black holes near 40 M⊙. We carry out a simulated data study to illustrate some of the stellar physics that can be inferred using gravitational-wave measurements of binary black holes and demonstrate several such inferences that might be made in the near future. First, we measure the minimum and maximum stellar black hole mass. Second, we infer the presence of a peak due to pair-instability supernovae. Third, we measure the distribution of black hole mass ratios. Finally, we show how inadequate models of the black hole mass spectrum lead to biased estimates of the merger rate and the amplitude of the stochastic gravitational-wave background.
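A parameterized mass spectrum of the kind described above can be sketched as a truncated power law plus a Gaussian peak near 40 solar masses. All numerical values below (slope, peak location and width, mixing fraction) are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

m = np.linspace(5.0, 45.0, 2001)          # black hole mass grid, in M_sun
dm = m[1] - m[0]

power = m ** -2.3                          # assumed power-law slope
power /= power.sum() * dm                  # normalize to unit integral

peak = np.exp(-0.5 * ((m - 40.0) / 2.0) ** 2)   # pair-instability pile-up
peak /= peak.sum() * dm

lam = 0.1                                  # assumed fraction of mass in the peak
p = (1.0 - lam) * power + lam * peak       # mixture model for p(m)
```

The minimum and maximum mass enter as the grid endpoints, and inference then amounts to fitting the handful of parameters (slope, cutoff, peak location, peak fraction) to the observed population.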
Parameterization of general Z-γ-Z' mixing in an electroweak chiral theory
International Nuclear Information System (INIS)
Zhang Ying; Wang Qing
2012-01-01
A new general parameterization with eight mixing parameters among Z, γ and an extra neutral gauge boson Z′ is proposed and subjected to phenomenological analysis. We show that in addition to the conventional Weinberg angle θ_W, there are seven other phenomenological parameters, G′, ξ, η, θ_l, θ_r, r and l, for the most general Z-γ-Z′ mixings, in which the parameter G′ arises due to the presence of an extra Stueckelberg-type mass coupling. Combined with the conventional Z-Z′ mass mixing angle θ′, the remaining six parameters, ξ, η, θ_l − θ′, θ_r − θ′, r and l, are caused by general kinetic mixing. From all eight phenomenological parameters, θ_W, G′, ξ, η, θ_l, θ_r, r and l, we can determine the Z-Z′ mass mixing angle θ′ and the mass ratio M_Z/M_Z′. The Z-γ-Z′ mixings that we discuss are based on the model-independent description of the extended electroweak chiral Lagrangian (EWCL) previously proposed by us. In addition, we show that there are eight corresponding independent theoretical coefficients in our EWCL, which are fully fixed by our eight phenomenological mixing parameters. We further find that the experimental measurability of these eight parameters does not rely on the extended neutral current for Z′, but depends on the Z-Z′ mass ratio. (authors)
Predicting non-melanoma skin cancer via a multi-parameterized artificial neural network.
Roffman, David; Hart, Gregory; Girardi, Michael; Ko, Christine J; Deng, Jun
2018-01-26
Ultraviolet radiation (UVR) exposure and family history are major risk factors for the development of non-melanoma skin cancer (NMSC). The objective of this study was to develop and validate a multi-parameterized artificial neural network, based on available personal health information, for early detection of NMSC with high sensitivity and specificity, even in the absence of known UVR exposure and family history. The 1997-2015 NHIS adult survey data used to train and validate our neural network (NN) comprised 2,056 NMSC and 460,574 non-cancer cases. We extracted 13 parameters for our NN: gender, age, BMI, diabetic status, smoking status, emphysema, asthma, race, Hispanic ethnicity, hypertension, heart disease, vigorous exercise habits, and history of stroke. This study yielded an area under the ROC curve of 0.81 for both training and validation. Our results (training sensitivity 88.5% and specificity 62.2%; validation sensitivity 86.2% and specificity 62.7%) were comparable to a previous study of basal and squamous cell carcinoma prediction that also included UVR exposure and family history information. These results indicate that our NN is robust enough to make predictions, suggesting that we have identified novel associations and potential predictive parameters of NMSC.
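The workflow above (13 tabular inputs, a trained classifier, AUC as the headline metric) can be sketched with a minimal stand-in. This uses plain logistic regression on synthetic data rather than the study's NHIS data or network architecture, both of which are not reproduced here; the AUC computation via the Mann-Whitney rank statistic is the same one behind ROC-curve scoring.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4000, 13                            # 13 inputs, as in the study (toy data)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)                # hidden "true" risk weights (toy)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(int)

# Plain batch gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(500):
    prob = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (prob - y)) / n

# Area under the ROC curve via the Mann-Whitney rank-sum statistic.
scores = X @ w
ranks = np.empty(n)
ranks[np.argsort(scores)] = np.arange(1, n + 1)
n_pos = int(y.sum())
n_neg = n - n_pos
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

On real survey data the classes are heavily imbalanced (about 0.4% NMSC cases here), which is why the study reports sensitivity and specificity alongside AUC.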
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)
2010-03-01
This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs with a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness-of-fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering.
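The Gaussian parameterization step can be illustrated with a 1D toy. The profile below is a synthetic stand-in for a PScF (the real ones come from Monte Carlo simulations of the NMIS geometry), and the fit here uses simple Gaussian moments rather than the dissertation's fitting procedure:

```python
import numpy as np

# Toy 1D point scatter function: scatter counts versus detector position,
# simulated as a Gaussian profile plus measurement noise (assumed values).
rng = np.random.default_rng(3)
x = np.linspace(-10.0, 10.0, 81)                 # detector position (arb. units)
amp, mu_true, sig_true = 50.0, 0.5, 2.0
pscf = amp * np.exp(-0.5 * ((x - mu_true) / sig_true) ** 2)
pscf_noisy = np.clip(pscf + rng.normal(0.0, 0.5, x.size), 0.0, None)

# Parameterize the profile by its Gaussian moments: once (mu, sig) are
# known, the scatter contribution can be evaluated analytically for any
# measurement instead of rerunning the simulation each time.
w = pscf_noisy / pscf_noisy.sum()
mu = float(np.sum(w * x))
sig = float(np.sqrt(np.sum(w * (x - mu) ** 2)))
```

The recovered (mu, sig) summarize the entire simulated profile in two numbers, which is precisely what makes the parameterized scatter removal cheap to apply.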
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon R. [ORNL]
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
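The core step of the PSRA, fitting each point scatter function (PScF) with a Gaussian so that the scatter for a new problem can be subtracted without rerunning the Monte Carlo simulations, can be sketched as follows. This is an illustrative Python reconstruction, not the report's code; the centered-Gaussian form and all function names are assumptions.

```python
import numpy as np

def gaussian(x, amplitude, sigma):
    """Centered Gaussian used to approximate a point scatter function (PScF)."""
    return amplitude * np.exp(-0.5 * (x / sigma) ** 2)

def fit_pscf(offsets, pscf):
    """Fit (amplitude, sigma) by linear least squares on log(PScF).

    log(PScF) = log(A) - x^2 / (2 sigma^2) is linear in x^2, so a strictly
    positive, centered PScF can be fitted without an iterative optimizer.
    """
    design = np.vstack([np.ones_like(offsets), offsets ** 2]).T
    coeff, *_ = np.linalg.lstsq(design, np.log(pscf), rcond=None)
    return np.exp(coeff[0]), np.sqrt(-0.5 / coeff[1])

def remove_scatter(measured, offsets, amplitude, sigma):
    """Subtract the parameterized scatter estimate from a measured profile."""
    return measured - gaussian(offsets, amplitude, sigma)
```

Once (amplitude, sigma) have been tabulated as functions of the problem geometry, the scatter for a new measurement is removed by evaluating the fitted Gaussian rather than by rerunning the simulation.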
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model
This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Guang J. [Univ. of California, San Diego, CA (United States)
2016-11-07
The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.
Radiative flux and forcing parameterization error in aerosol-free clear skies.
Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M
2015-07-16
Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
International Nuclear Information System (INIS)
Somerville, R.C.J.; Iacobellis, S.F.
2005-01-01
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional
Parameterization of pion production and reaction cross sections at LAMPF energies
International Nuclear Information System (INIS)
Burman, R.L.; Smith, E.S.
1989-05-01
A parameterization of pion production and reaction cross sections is developed for eventual use in modeling neutrino production by protons in a beam stop. Emphasis is placed upon smooth parameterizations for proton energies up to 800 MeV, for all pion energies and angles, and for a wide range of materials. The resulting representations of the data are well-behaved and can be used for extrapolation to regions where there are no measurements. 22 refs., 16 figs., 2 tabs
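As a toy illustration of the kind of smooth, well-behaved representation described here, one can fit a low-order polynomial to measured cross sections versus proton energy and evaluate it beyond the data. The polynomial form and all values below are illustrative assumptions, not the functional forms in energy, angle, and material actually used in the report.

```python
import numpy as np

def fit_cross_section(energies_mev, sigma_mb, degree=2):
    """Smooth least-squares polynomial representation of cross-section data.

    Returns a callable usable for interpolation and for cautious
    extrapolation to energies where no measurements exist.
    """
    return np.poly1d(np.polyfit(energies_mev, sigma_mb, degree))
```

A smooth fit of this kind is what allows the tabulated data to be extrapolated "to regions where there are no measurements," at the cost of trusting the chosen functional form there.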
Tsang, Yue-Kin; Vallis, Geoffrey K.
2018-01-01
In this paper we describe the construction of an efficient probabilistic parameterization that could be used in a coarse-resolution numerical model in which the variation of moisture is not properly resolved. An Eulerian model using a coarse-grained field on a grid cannot properly resolve regions of saturation---in which condensation occurs---that are smaller than the grid boxes. Thus, in the absence of a parameterization scheme, either the grid box must become saturated or condensation will ...
A parameterization for the absorption of solar radiation by water vapor in the earth's atmosphere
Wang, W.-C.
1976-01-01
A parameterization for the absorption of solar radiation as a function of the amount of water vapor in the earth's atmosphere is obtained. Absorption computations are based on the Goody band model and the near-infrared absorption band data of Ludwig et al. A two-parameter Curtis-Godson approximation is used to treat the inhomogeneous atmosphere. Heating rates based on a frequently used one-parameter pressure-scaling approximation are also discussed and compared with the present parameterization.
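The two-parameter Curtis-Godson approximation mentioned above replaces the inhomogeneous path by an equivalent homogeneous one, characterized by the total absorber amount and an absorber-weighted effective pressure. A minimal sketch (layer values in the test are illustrative):

```python
import numpy as np

def curtis_godson(u_layers, p_layers):
    """Two-parameter Curtis-Godson scaling of an inhomogeneous path.

    u_layers: absorber amount in each atmospheric layer
    p_layers: pressure of each layer
    Returns (total absorber amount, absorber-weighted effective pressure).
    """
    u = np.asarray(u_layers, dtype=float)
    p = np.asarray(p_layers, dtype=float)
    u_total = u.sum()
    p_eff = (p * u).sum() / u_total   # discrete form of (integral p du) / (integral du)
    return u_total, p_eff
```

The one-parameter pressure-scaling approximation discussed in the abstract corresponds to collapsing this to a single scaled absorber amount instead of carrying (u_total, p_eff) separately.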
Directory of Open Access Journals (Sweden)
Reza Mohammadyari
2015-08-01
Full Text Available The settling of a solid particle is a well-known problem in fluid mechanics. The parameterized perturbation method (PPM) is applied to analytically solve the unsteady motion of a spherical particle falling in a Newtonian fluid, using the drag of the form given by Oseen/Ferreira, for a range of Reynolds numbers. The particle equation of motion includes the added-mass term but ignores the Basset history term. Using PPM, analytical expressions for the instantaneous velocity, acceleration and position of the particle were derived. The presented results show the effectiveness of PPM and the method's rapid convergence to acceptable answers.
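The equation of motion described in the abstract (added mass retained, Basset term dropped, Oseen-type drag) can also be integrated numerically, which is a useful cross-check on analytical expressions of this kind. A hedged sketch: the 3Re/16 factor is the classical Oseen correction to Stokes drag, and the input values in the test are illustrative, not from the paper.

```python
import numpy as np

def settle(d, rho_p, rho_f, mu, t_end, dt=1e-4, g=9.81):
    """Explicit Euler integration of a sphere settling in a Newtonian fluid.

    Drag is Stokes drag with the Oseen correction (1 + 3 Re / 16); the
    added-mass term is kept and the Basset history force is ignored.
    Returns (velocity, fall distance) at t_end.
    """
    vol = np.pi * d ** 3 / 6.0
    m_eff = (rho_p + 0.5 * rho_f) * vol     # particle mass + added mass
    weight = (rho_p - rho_f) * vol * g      # buoyancy-corrected weight
    v = x = 0.0
    for _ in range(int(t_end / dt)):
        re = rho_f * abs(v) * d / mu        # instantaneous Reynolds number
        drag = 3.0 * np.pi * mu * d * v * (1.0 + 3.0 * re / 16.0)
        v += dt * (weight - drag) / m_eff
        x += dt * v
    return v, x
```

For a 100 µm sand-like particle in water, the terminal velocity falls somewhat below the pure-Stokes estimate, since the Oseen correction adds drag at finite Reynolds number.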
A review of the theoretical basis for bulk mass flux convective parameterization
Directory of Open Access Journals (Sweden)
R. S. Plant
2010-04-01
Full Text Available Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function, the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use a parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
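The bulk entraining-detraining plume at the heart of these schemes obeys, in its simplest steady form, dM/dz = (ε − δ)M for the mass flux and dφ/dz = ε(φ_env − φ) for an in-plume property. A minimal upward integration (illustrative rates; not any specific operational scheme):

```python
import numpy as np

def bulk_plume(z, phi_env, eps, delta, m0, phi0):
    """Steady bulk plume: mass flux M and in-plume property phi vs. height.

    dM/dz   = (eps - delta) * M       (entrainment minus detrainment)
    dphi/dz = eps * (phi_env - phi)   (dilution by environmental air)
    Simple explicit upward stepping on the model levels z.
    """
    m = np.empty_like(z)
    phi = np.empty_like(z)
    m[0], phi[0] = m0, phi0
    for k in range(1, len(z)):
        dz = z[k] - z[k - 1]
        m[k] = m[k - 1] * (1.0 + (eps - delta) * dz)
        phi[k] = phi[k - 1] + eps * (phi_env[k - 1] - phi[k - 1]) * dz
    return m, phi
```

With ε = δ the mass flux is constant with height, while the plume property still relaxes toward its environmental value, which is the basic dilution behavior the representative plume must reproduce.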
Parameterized Disturbance Observer Based Controller to Reduce Cyclic Loads of Wind Turbine
Directory of Open Access Journals (Sweden)
Raja M. Imran
2018-05-01
Full Text Available This paper is concerned with bump-less transfer of a parameterized disturbance observer based controller with an individual pitch control strategy to reduce cyclic loads of a wind turbine in full load operation. Cyclic loads are generated due to wind shear and tower shadow effects. Multivariable disturbance observer based linear controllers are designed with the objective of reducing output power fluctuation, tower oscillation and drive-train torsion using optimal control theory. Linear parameterized controllers are designed using a smooth scheduling mechanism between the controllers. The proposed parameterized controller with individual pitch was tested on the nonlinear Fatigue, Aerodynamics, Structures, and Turbulence (FAST) code model of the National Renewable Energy Laboratory (NREL) 5 MW wind turbine. The closed-loop system performance was assessed by comparing the simulation results of the proposed controller with a fixed gain controller and a parameterized controller with collective pitch for full load operation of the wind turbine. Simulations are performed with step wind to see the behavior of the system with wind shear and tower shadow effects. Then, turbulent wind is applied to check the smooth transition between the controllers. It can be concluded from the results that the proposed parameterized control shows smooth transition from one controller to another. Moreover, 3p and 6p harmonics are well mitigated compared to fixed gain DOBC and parameterized DOBC with collective pitch.
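The "smooth scheduling mechanism between the controllers" can be illustrated by blending two gain matrices with a C1-continuous weight, so the control action transfers bump-lessly between operating points. This is a generic gain-scheduling sketch, not the paper's actual scheduling law; all names and values are assumptions.

```python
import numpy as np

def scheduled_gain(K_low, K_high, w, w_lo, w_hi):
    """Smoothly blend two controller gain matrices by scheduling variable w.

    Uses a smoothstep weight so the gain (and its first derivative in w)
    varies continuously between the operating points w_lo and w_hi,
    avoiding a bump in the control signal at the switch.
    """
    s = np.clip((w - w_lo) / (w_hi - w_lo), 0.0, 1.0)
    alpha = s * s * (3.0 - 2.0 * s)   # smoothstep, C1-continuous
    return (1.0 - alpha) * np.asarray(K_low) + alpha * np.asarray(K_high)
```

At the endpoints the blended gain equals the corresponding designed controller exactly, so each linear controller behaves as designed at its own operating point.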
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Here, too, the combinations including the Tiedtke cumulus schemes were the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, and SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
Invariant-based reasoning about parameterized security protocols
Mooij, A.J.
2010-01-01
We explore the applicability of the programming method of Feijen and van Gasteren to the domain of security protocols. This method addresses the derivation of concurrent programs from a formal specification, and it is based on common notions like invariants and pre- and post-conditions. We show that
Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël
2014-05-20
In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allowed predicting the lidar receiver performance in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for selection of appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for evaluation of a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.
Impact of cloud parameterization on the numerical simulation of a super cyclone
Energy Technology Data Exchange (ETDEWEB)
Deshpande, M.S.; Pattnaik, S.; Salvekar, P.S. [Indian Institute of Tropical Meteorology, Pune (India)
2012-07-01
This study examines the role of parameterization of convection and explicit moisture processes on the simulated track, intensity and inner core structure of the Orissa super cyclone (1999) in the Bay of Bengal (north Indian Ocean). Sensitivity experiments are carried out to examine the impact of cumulus parameterization schemes (CPS) using the MM5 model (Version 3.7) in a two-way nested domain (D1 and D2) configuration at horizontal resolutions of 45-15 km. Three different cumulus parameterization schemes, namely Grell (Gr), Betts-Miller (BM) and updated Kain-Fritsch (KF2), are tested. It is noted that track and intensity are both very sensitive to CPS and, comparatively, KF2 predicts them reasonably well. In particular, the rapid intensification phase of the super cyclone is best simulated by KF2 compared to the other CPS. To examine the effect of the cumulus parameterization scheme at high resolution (5 km), a three-domain configuration (45-15-5 km resolution) is utilized. Based on initial results, the KF2 scheme is used for both domains (D1 and D2). Two experiments are conducted: one in which KF2 is used as CPS and another in which no CPS is used in the third domain. The intensity is well predicted when no CPS is used in the innermost domain. Sensitivity experiments are also carried out to examine the impact of microphysics parameterization schemes (MPS). Four cloud microphysics parameterization schemes, namely mixed phase (MP), Goddard microphysics with Graupel (GG), Reisner Graupel (RG) and Schultz (Sc), are tested in these experiments. It is noted that the tropical cyclone tracks and intensity variation have considerable sensitivity to the varying cloud microphysical parameterization schemes. The MPS of MP and Sc could very well capture the rapid intensification phase. The final intensity is well predicted by MP and overestimated by Sc. The MPS of GG and RG underestimate the intensity. (orig.)
Directory of Open Access Journals (Sweden)
K. Zhang
2013-05-01
Full Text Available This study uses aircraft measurements of relative humidity and ice crystal size distribution collected during the SPARTICUS (Small PARTicles In CirrUS) field campaign to evaluate and constrain ice cloud parameterizations in the Community Atmosphere Model version 5. About 200 h of data were collected during the campaign between January and June 2010, providing the longest aircraft measurements available so far for cirrus clouds in the midlatitudes. The probability density function (PDF) of ice crystal number concentration (Ni) derived from the high-frequency (1 Hz) measurements features a strong dependence on ambient temperature. As temperature decreases from −35 °C to −62 °C, the peak in the PDF shifts from 10–20 L−1 to 200–1000 L−1, while Ni shows a factor of 6–7 increase. Model simulations are performed with two different ice nucleation schemes for pure ice-phase clouds. One of the schemes can reproduce a clear increase of Ni with decreasing temperature by using either an observation-based ice nuclei spectrum or a classical-theory-based spectrum with a relatively low (5–10%) maximum freezing ratio for dust aerosols. The simulation with the other scheme, which assumes a high maximum freezing ratio (100%), shows much weaker temperature dependence of Ni. Simulations are also performed to test empirical parameters related to water vapor deposition and the autoconversion of ice crystals to snow. Results show that a value between 0.05 and 0.1 for the water vapor deposition coefficient, and 250 μm for the critical diameter that distinguishes ice crystals from snow, can produce good agreement between model simulation and the SPARTICUS measurements in terms of Ni and effective radius. The climate impact of perturbing these parameters is also discussed.
Singh, A.; Serbin, S.; Kucharik, C. J.; Townsend, P. A.
2014-12-01
Ecosystem models such as AgroIBIS require detailed parameterizations of numerous vegetation traits related to leaf structure, biochemistry and photosynthetic capacity to properly assess plant carbon assimilation and yield response to environmental variability. In general, these traits are estimated from a limited number of field measurements or sourced from the literature, but rarely is the full observed range of variability in these traits utilized in modeling activities. In addition, pathogens and pests, such as the exotic soybean aphid (Aphis glycines), which affects photosynthetic pathways in soybean plants by feeding on phloem sap, can potentially impact plant productivity and yields. Capturing plant responses to pest pressure in conjunction with environmental variability is of considerable interest to managers and the scientific community alike. In this research, we employed full-range (400-2500 nm) field and laboratory spectroscopy to rapidly characterize leaf biochemical and physiological traits, namely foliar nitrogen, specific leaf area (SLA) and the maximum rate of RuBP carboxylation by the enzyme RuBisCo (Vcmax), in soybean plants that experienced a broad range of environmental conditions and soybean aphid pressures. We utilized near-surface spectroscopic remote sensing measurements as a means to capture the spatial and temporal patterns of aphid impacts across broad aphid pressure levels. In addition, we used the spectroscopic data to generate a much larger dataset of key model parameters required by AgroIBIS than would be possible through traditional measurements of biochemistry and leaf-level gas exchange. The use of spectroscopic retrievals of soybean traits allowed us to better characterize the variability of plant responses associated with aphid pressure and to more accurately model the likely impacts of soybean aphid on soybeans. Our next steps include the coupling of the information derived from our spectral measurements with the Agro
Shallow cumuli ensemble statistics for development of a stochastic parameterization
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a
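The compound random process described at the end of the abstract, in the Craig and Cohen (2006) picture, draws a random number of clouds and gives each an exponentially distributed cloud-base mass flux. A sketch (function names and parameter values are illustrative; the paper's generalized exponential distribution is approximated here by a plain exponential):

```python
import numpy as np

def sample_ensemble(mean_total_flux, mean_per_cloud, rng):
    """Draw one sub-grid convective state as a compound random process.

    Number of clouds is Poisson with mean <M>/<m>; each cloud-base mass
    flux is exponential with mean <m> (Boltzmann distribution of
    micro-states). Returns (total mass flux, number of clouds).
    """
    n = rng.poisson(mean_total_flux / mean_per_cloud)
    fluxes = rng.exponential(mean_per_cloud, size=n)
    return fluxes.sum(), n
```

Averaged over many draws the total mass flux matches the large-scale forcing, while individual draws scatter around it; the scatter grows as fewer clouds fit in a grid column, reproducing the resolution dependence noted above.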
Preoperative screening: value of previous tests.
Macpherson, D S; Snow, R; Lofgren, R P
1990-12-15
To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests, and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count; sodium, potassium, and creatinine levels; prothrombin time; and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to but before the time of admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001, comparing the frequency of clinically important abnormalities in patients with normal previous results with that in patients with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.
Directory of Open Access Journals (Sweden)
P. Pisoft
2010-07-01
Full Text Available In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this can result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies have discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over remote ocean and makes up 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, making a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that the large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed the ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modeled values, and it was found that activating the parameterization increases the model accuracy.
Automatic electromagnetic valve for previous vacuum
International Nuclear Information System (INIS)
Granados, C. E.; Martin, F.
1959-01-01
A valve is described which maintains the vacuum in an installation when the electric current fails. It also admits air into the fore-vacuum (backing) pump to prevent oil from rising into the vacuum tubes. (Author)
Khan, Tanvir R.; Perlinger, Judith A.
2017-10-01
Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of
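The normalized mean bias factor used above as the accuracy indicator can be computed as in the following sketch, which assumes the symmetric Yu et al.-style definition; verify the exact form against the paper before reuse.

```python
import numpy as np

def nmbf(modeled, observed):
    """Normalized mean bias factor comparing modeled and observed values
    (e.g., deposition velocities Vd).

    Positive: the model overestimates by a factor (1 + NMBF) on average;
    negative: it underestimates by a factor (1 - NMBF). The piecewise form
    makes over- and underestimation penalized symmetrically.
    """
    m, o = np.mean(modeled), np.mean(observed)
    return m / o - 1.0 if m >= o else 1.0 - o / m
```

A perfect model gives NMBF = 0, and a model biased high by a factor of 2 gives +1.0 while one biased low by the same factor gives −1.0, which is what makes the factor a convenient basis for ranking the five parameterizations.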
Directory of Open Access Journals (Sweden)
T. R. Khan
2017-10-01
Full Text Available Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of
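The normalized mean bias factor used above as the accuracy indicator is, under the definition commonly used in deposition-model evaluation, a symmetric bias measure. A minimal sketch assuming that standard definition (the velocity values are illustrative, not from the study):

```python
def nmbf(modeled, observed):
    """Normalized mean bias factor (symmetric bias indicator).

    NMBF = sum(M)/sum(O) - 1  if the model overestimates on average,
    NMBF = 1 - sum(O)/sum(M)  if it underestimates, so over- and
    underestimation by the same factor give values of equal magnitude.
    """
    sm, so = sum(modeled), sum(observed)
    if sm >= so:
        return sm / so - 1.0
    return 1.0 - so / sm

# Hypothetical deposition velocities (cm/s); not values from the paper.
obs = [0.10, 0.20, 0.15, 0.30]
mod = [0.12, 0.18, 0.20, 0.33]
print(round(nmbf(mod, obs), 3))  # -> 0.107 (mild overestimation)
```

A parameterization with NMBF closest to zero across LUCs would rank as most accurate under this metric.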
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the Soil and Water Assessment Tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduced daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management.
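The behavioral screening described above can be sketched as follows; the acceptance thresholds are illustrative GLUE-style choices, not those used in the study:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 is perfect, <0 worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def percent_bias(obs, sim):
    """Average tendency of simulated values to over/underestimate, in %."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)

def is_behavioral(obs, sim, nse_min=0.5, pbias_max=25.0):
    # GLUE-style screen: keep only realizations passing all criteria.
    return (nash_sutcliffe(obs, sim) >= nse_min
            and abs(percent_bias(obs, sim)) <= pbias_max)

obs = [1.0, 2.0, 3.0, 4.0]          # daily mean streamflow (illustrative)
print(is_behavioral(obs, [1.1, 1.9, 3.2, 3.8]))  # -> True
```

Realizations passing the screen form the behavioral ensemble from which uncertainty in quantities such as the ET difference is then estimated.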
Wang, Yu; Wu, Zhijun; Ma, Nan; Wu, Yusheng; Zeng, Limin; Zhao, Chunsheng; Wiedensohler, Alfred
2018-02-01
The water uptake of aerosol particles plays an important role in heavy haze formation over the North China Plain, since it is related to particle mass concentration, visibility degradation, and particle chemistry. In the present study, we investigated the size-resolved hygroscopic growth factor (HGF) of sub-micrometer aerosol particles (smaller than 350 nm) on the basis of 9 months of Hygroscopicity-Tandem Differential Mobility Analyzer measurements in the urban background atmosphere of Beijing. The mean hygroscopicity parameter (κ) values derived from averaging over the entire sampling period for particles of 50 nm, 75 nm, 100 nm, 150 nm, 250 nm, and 350 nm in diameter were 0.14 ± 0.07, 0.17 ± 0.05, 0.18 ± 0.06, 0.20 ± 0.07, 0.21 ± 0.09, and 0.23 ± 0.12, respectively, indicating the dominance of organics in the sub-micrometer urban aerosol. In spring, summer, and autumn, the number fraction of hydrophilic particles increased with increasing particle size, resulting in an increasing trend of overall particle hygroscopicity with particle size. In contrast, the overall mean κ values peaked in the range of 75-150 nm and decreased for particles larger than 150 nm in diameter during wintertime. This size dependence of κ in winter was related to the strong primary particle emissions from coal combustion during the domestic heating period. The number fraction of hydrophobic particles such as freshly emitted soot decreased with increasing PM2.5 mass concentration, indicating that aged and internally mixed particles were dominant during severe particulate matter pollution. Parameterization schemes of the HGF as a function of relative humidity (RH) and particle size between 50 and 350 nm were determined for different seasons and pollution levels. The HGFs calculated from the parameterizations agree well with the measured HGFs at 20-90% RH. The parameterizations can be applied to determine the hygroscopic growth of aerosol particles at ambient conditions for the area
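In single-parameter κ theory, the diameter growth factor at subsaturation follows GF = (1 + κ·aw/(1 − aw))^(1/3) with water activity aw approximated by RH. A minimal sketch under that standard approximation (Kelvin effect neglected); the input values are taken from the mean κ reported above:

```python
def growth_factor(kappa, rh):
    """Hygroscopic diameter growth factor from kappa theory.

    rh: relative humidity as a fraction (0-1). The Kelvin term is
    neglected, so this is only a subsaturation approximation for
    particles large enough that curvature effects are small.
    """
    aw = rh                      # water activity approximated by RH
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# kappa = 0.18 (mean value for 100 nm particles in this study), RH = 90 %
print(round(growth_factor(0.18, 0.90), 2))  # -> 1.38
```

A particle with κ near zero (e.g. fresh soot) gives GF ≈ 1, matching the hydrophobic mode described above.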
Alfieri, J. G.; Kustas, W. P.; Gao, F.; Nieto, H.; Prueger, J. H.; Hipps, L.
2017-12-01
Because the judicious application of water is key to ensuring berry quality, information regarding evapotranspiration (ET) is critical when making irrigation and other crop management decisions for vineyards. Increasingly, wine grape producers seek to use remote sensing-based models to monitor ET and inform management decisions. However, the parameterization schemes used by these models do not fully account for the effects of the highly structured canopy architecture on either the roughness characteristics of the vineyard or the turbulent transport and exchange within and above the vines. To investigate the effects of vineyard structure on the roughness length (zo) and displacement height (do) of vineyards, data collected from 2013 to 2016 as a part of the Grape Remote Sensing Atmospheric Profiling and Evapotranspiration Experiment (GRAPEX), an ongoing multi-agency field campaign conducted in the Central Valley of California, were used. Specifically, vertical profiles (2.5 m, 3.75 m, 5 m, and 8 m agl) of wind velocity collected under near-neutral conditions were used to estimate do and zo and characterize how these roughness parameters vary in response to changing environmental conditions. The roughness length was found to vary as a function of wind direction. It increased sigmoidally from a minimum near 0.15 m when the wind direction was parallel to the vine rows to a maximum between 0.3 m and 0.4 m when the winds were perpendicular to the rows. Similarly, do was found to respond strongly to changes in vegetation density as measured via leaf area index (LAI). Although the maximum varied from year to year, do increased rapidly after bud break in all cases and then remained constant for the remainder of the growing season. A comparison of the model output from the remote sensing-based two-source energy balance (TSEB) model using the standard roughness parameterization scheme and the empirical relationships derived from observations indicates that the modeled ET
On parameterization of the inverse problem for estimating aquifer properties using tracer data
International Nuclear Information System (INIS)
Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan
2012-01-01
We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
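The pilot-point idea above, representing a heterogeneous field by values at a few estimable locations that are interpolated everywhere else, can be sketched with a simple inverse-distance interpolator. This is only an illustration: practical pilot-point implementations, including presumably the one in this study, use geostatistical interpolation (kriging) rather than inverse-distance weighting.

```python
import math

def idw_field(pilot_points, grid, power=2.0):
    """Interpolate a property (e.g. log-permeability) from pilot points.

    pilot_points: list of (x, y, value); grid: list of (x, y) nodes.
    Inverse-distance weighting stands in for the kriging interpolation
    used in real pilot-point parameterizations.
    """
    field = []
    for gx, gy in grid:
        num = den = 0.0
        for px, py, val in pilot_points:
            d = math.hypot(gx - px, gy - py)
            if d < 1e-12:              # node coincides with a pilot point
                num, den = val, 1.0
                break
            w = d ** -power
            num += w * val
            den += w
        field.append(num / den)
    return field

# Two hypothetical pilot points carrying log-permeability values.
pilots = [(0.0, 0.0, -10.0), (1.0, 0.0, -12.0)]
print(idw_field(pilots, [(0.5, 0.0), (0.0, 0.0)]))  # -> [-11.0, -10.0]
```

The inversion then adjusts only the pilot-point values, so the dimensionality of the estimation problem is set by the pilot-point configuration, which is exactly why that configuration matters for the solution.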
Directory of Open Access Journals (Sweden)
R. Sinreich
2013-06-01
Full Text Available We present a novel parameterization method to convert multi-axis differential optical absorption spectroscopy (MAX-DOAS) differential slant column densities (dSCDs) into near-surface box-averaged volume mixing ratios. The approach is applicable inside the planetary boundary layer under conditions with significant aerosol load, and builds on the increased sensitivity of MAX-DOAS near the instrument altitude. It parameterizes radiative transfer model calculations and significantly reduces the computational effort, while retrieving ~1 degree of freedom. The biggest benefit of this method is that the retrieval of an aerosol profile, which usually is necessary for deriving a trace gas concentration from MAX-DOAS dSCDs, is not needed. The method is applied to NO2 MAX-DOAS dSCDs recorded during the Mexico City Metropolitan Area 2006 (MCMA-2006) measurement campaign. The retrieved volume mixing ratios at two elevation angles (1° and 3°) are compared to volume mixing ratios measured by two long-path (LP-DOAS) instruments located at the same site. Measurements are found to agree well during times when vertical mixing is expected to be strong. However, inhomogeneities in the air mass above Mexico City can be detected by exploiting the different horizontal and vertical dimensions probed by the MAX-DOAS and LP-DOAS instruments. In particular, a vertical gradient in NO2 close to the ground can be observed in the afternoon, and is attributed to reduced mixing coupled with near-surface emission inside street canyons. The existence of a vertical gradient in the lower 250 m during parts of the day shows the general challenge of sampling the boundary layer in a representative way, and emphasizes the need for vertically resolved measurements.
A review of recent research on improvement of physical parameterizations in the GLA GCM
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effect of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.
Institute of Scientific and Technical Information of China (English)
PING Fan; TANG Xi-ba; YIN Lei
2016-01-01
According to the characteristics of organized cumulus convective precipitation in China, a cumulus parameterization scheme suitable for describing organized convective precipitation in East Asia is presented and modified. The Kain-Fritsch scheme is chosen as the scheme to be modified, based on analyses and comparisons of precipitation in East Asia simulated by several commonly used mesoscale parameterization schemes. A key dynamic parameter to dynamically control the cumulus parameterization is then proposed to improve the Kain-Fritsch scheme. Numerical simulations of a typhoon case and a Mei-yu front rainfall case are carried out with the improved scheme, and the results show that the improved version performs better than the original in simulating the track and intensity of the typhoon, as well as the distribution of Mei-yu front precipitation.
Liu, S.; Ng, G. H. C.
2017-12-01
The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential in advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
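The "stochastic in time" part of such a parameterization can be sketched with a first-order autoregressive (AR(1)) series of a trait parameter around its PFT mean; the trait name, mean, and correlation values below are illustrative assumptions, not TRY statistics from the study:

```python
import random

def ar1_trait_series(mean, sd, rho, n, seed=0):
    """AR(1) time series of a plant-trait parameter around a PFT mean.

    rho: lag-1 autocorrelation; innovations are scaled so the
    stationary standard deviation of the series equals sd.
    """
    rng = random.Random(seed)
    innov_sd = sd * (1.0 - rho ** 2) ** 0.5
    x = mean
    series = []
    for _ in range(n):
        x = mean + rho * (x - mean) + rng.gauss(0.0, innov_sd)
        series.append(x)
    return series

# Hypothetical trait (e.g. a Vcmax-like parameter): mean 60, sd 10,
# strong temporal persistence (rho = 0.8), 500 time steps.
s = ar1_trait_series(mean=60.0, sd=10.0, rho=0.8, n=500, seed=42)
print(abs(sum(s) / len(s) - 60.0) < 5.0)
```

Spatial correlation would be added analogously (e.g. via correlated random fields), and ensemble members drawn from such series can then be conditioned on observations as described above.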
Directory of Open Access Journals (Sweden)
Pedro Saa
2015-04-01
Full Text Available Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. In particular, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The platform integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol), and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only
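One plausible reading of the three elasticity regions is through the thermodynamic factor (1 − e^{ΔGr/RT}) that scales the net rate of a reversible reaction: near equilibrium it varies steeply with ΔGr, while for strongly negative ΔGr it saturates at 1. A sketch classifying ΔGr by the region boundaries stated above (the classifier thresholds come from the abstract; the flux-force interpretation is our illustrative framing, not the paper's derivation):

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def thermo_factor(dgr, temp=298.15):
    """Fraction of forward flux realized as net flux: 1 - exp(dGr/RT)."""
    return 1.0 - math.exp(dgr / (R * temp))

def elasticity_region(dgr):
    # Region boundaries (kJ/mol) as reported in the abstract.
    if dgr > -2.0:
        return "steep linear"
    if dgr > -20.0:
        return "transition"
    return "constant elasticity"

print(elasticity_region(-1.0), elasticity_region(-10.0), elasticity_region(-30.0))
print(round(thermo_factor(-30.0), 3))  # -> 1.0 (essentially irreversible)
```

At ΔGr = -30 kJ/mol the factor is already indistinguishable from 1, which is consistent with elasticities becoming insensitive to thermodynamics in the third region.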
Coherence factors beyond the BCS expressions—a derivation
International Nuclear Information System (INIS)
Gorohovsky, G; Bettelheim, E
2014-01-01
We present a derivation of a previously announced result for matrix elements between exact eigenstates of the pairing Hamiltonian. Our results, which generalize the well-known Bardeen–Cooper–Schrieffer (BCS) (Bardeen et al 1957 Phys. Rev. 108 1175; 1957 Phys. Rev. 106 162) expressions for what are known as ‘coherence factors’, are derived based on the Slavnov (1989 Theor. Math. Phys. 79 502) formula for overlaps between Bethe-ansatz states, thus making use of the known connection between the exact diagonalization of the BCS Hamiltonian, due to Richardson (1963 Phys. Lett. 3 277; 1964 Nucl. Phys. A 52 221), and the algebraic Bethe ansatz. The resulting formula has a compact form after a suitable parameterization of the energy plane. Although we apply our method here to the pairing Hamiltonian, it may be adjusted to study what is termed the ‘Sutherland limit’ (Sutherland 1995 Phys. Rev. Lett. 74 816) for exactly solvable models, namely where a macroscopic number of rapidities form a large string. (paper)
Dynamic parameterization and ladder operators for the Kratzer molecular potential
International Nuclear Information System (INIS)
Devi, O Babynanda; Singh, C Amuba
2014-01-01
Introducing independent parameters k and δ to represent the strength of the attractive and repulsive components, respectively, we write the Kratzer molecular potential as V(k,δ) = (ℏ²/2m)(−k/r + δ(δ−1)/r²). This parameterisation is not only natural, but also convenient for the construction of ladder operators for the system. Adopting the straightforward method of deriving recurrence relations among confluent hypergeometric functions, we construct seven pairs of ladder operators for the Kratzer potential system. Detailed analysis of the laddering actions of these operators is given to show that they connect eigenstates of equal energy but belonging to a hierarchy of Kratzer potential systems corresponding to different values of the parameters k and δ. Significantly, it is pointed out that it may not be possible to construct, in the position representation, a ladder operator which would connect different eigenstates belonging to the same potential V(k,δ). Transition to the hydrogen atom case is discussed. A number (14 altogether) of functional relations among the confluent hypergeometric functions have been derived and reported separately in an appendix. (paper)
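As a hedged sketch of the underlying spectrum (standard textbook treatment, not taken from the paper): with this parameterization the radial equation maps onto the Coulomb problem with the centrifugal index l replaced by δ − 1, giving hydrogen-like bound-state energies

```latex
V(k,\delta) = \frac{\hbar^{2}}{2m}\left(-\frac{k}{r} + \frac{\delta(\delta-1)}{r^{2}}\right),
\qquad
E_{n_r} = -\frac{\hbar^{2}k^{2}}{8m\,(n_r+\delta)^{2}}, \quad n_r = 0, 1, 2, \ldots
```

Setting δ = l + 1 and k = 2/a₀ recovers the hydrogen spectrum E_n = −ℏ²/(2m a₀² n²) with n = n_r + l + 1, which makes the transition to the hydrogen atom case explicit; states of equal energy but different (k, δ) belong to different potentials, consistent with the laddering behaviour described above.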
Comparison of mixing height parameterizations with profiles measurements
Energy Technology Data Exchange (ETDEWEB)
Jaquier, A.; Stuebi, R.; Tercier, P. [Swiss Meteorological Inst., SMI - MeteoSwiss, Payerne (Switzerland)
1997-10-01
Different meteorological pre-processors for dispersion studies are available to derive the atmospheric boundary layer mixing height (MH). The analysis of their performances has been reviewed in the framework of the European COST Action 710. In this project, the computed mixing height values have been compared with data derived mostly from aerological sounding analysis and Sodar measurements. Since then, a new analysis of low-tropospheric wind profiler (WP) data has been performed taking advantage of its high data sampling rate (Δt ≈ 30 s). The comparison of these recent results with aerological soundings, Sodar data, and meteorological pre-processor calculations is reported for three periods of several days corresponding to different meteorological situations. In convective conditions, the pre-processors give reasonable levels, and the mixing-height growth rate is in fair agreement with the measured one. In stable cloudy daytime conditions, the modeled mixing height does not correspond to any measured height. (LN)
77 FR 70176 - Previous Participation Certification
2012-11-23
... participants' previous participation in government programs and ensure that the past record is acceptable prior... information is designed to be 100 percent automated and digital submission of all data and certifications is... government programs and ensure that the past record is acceptable prior to granting approval to participate...
On the Tengiz petroleum deposit previous study
International Nuclear Information System (INIS)
Nysangaliev, A.N.; Kuspangaliev, T.K.
1997-01-01
A previous study of the Tengiz petroleum deposit is described. Some considerations about the structure of the productive formation and the specific characteristic properties of the petroleum-bearing collectors are presented. Recommendations are given on their detailed study and on using experience from the exploration and development of petroleum deposits that are analogous in the most important geological and industrial parameters. (author)
Subsequent pregnancy outcome after previous foetal death
Nijkamp, J. W.; Korteweg, F. J.; Holm, J. P.; Timmer, A.; Erwich, J. J. H. M.; van Pampus, M. G.
Objective: A history of foetal death is a risk factor for complications and foetal death in subsequent pregnancies as most previous risk factors remain present and an underlying cause of death may recur. The purpose of this study was to evaluate subsequent pregnancy outcome after foetal death and to
International Nuclear Information System (INIS)
Yan, Y.T.
1996-11-01
A brief review of the Zlib development is given. Emphasized is the Zlib nerve system which uses the One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA). Also emphasized is the treatment of parameterized maps with an object-oriented language (e.g. C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by an exponent of a Truncated Power Series (Tps) of which each coefficient is an object of truncated power series
Impact of model structure and parameterization on Penman-Monteith type evaporation models
Ershadi, A.; McCabe, Matthew; Evans, J.P.; Wood, E.F.
2015-01-01
Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.
Parameterization Models for Pesticide Exposure via Crop Consumption
DEFF Research Database (Denmark)
Fantke, Peter; Wieland, Peter; Juraske, Ronnie
2012-01-01
harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop...... correspond well with results from the complex framework for 1540 substance-crop combinations with total deviations between a factor 4 (potato) and a factor 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest...
Chen, Ying; Wolke, Ralf; Ran, Liang; Birmili, Wolfram; Spindler, Gerald; Schröder, Wolfram; Su, Hang; Cheng, Yafang; Tegen, Ina; Wiedensohler, Alfred
2018-01-01
The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle composition, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10-25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3-]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3-] was significantly overestimated for this period by a factor of 5-19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. NewN2O5 significantly reduces the overestimation of [NO3-] by ~35 %. In particular, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17-18 and 25 September 2013) when [NO3-] was dominated by local chemical formation. In our case, the suppression of organic coating was negligible over western and central Europe
DEFF Research Database (Denmark)
Olsen, Nils; Sabaka, T.J.; Lowes, F.
2005-01-01
When deriving spherical harmonic models of the Earth's magnetic field, low-degree external field contributions are traditionally considered by assuming that their expansion coefficient q(1)(0) varies linearly with the D-st index, while induced contributions are considered assuming a constant ratio Q(1) of induced to external coefficients. A value of Q(1) = 0.27 was found from Magsat data and has been used by several authors when deriving recent field models from Orsted and CHAMP data. We describe a new approach that considers external and induced fields based on a separation of D-st = E-st + I-st into external (E-st) and induced (I-st) parts using a 1D model of mantle conductivity. The temporal behavior of q(1)(0) and of the corresponding induced coefficient are parameterized by E-st and I-st, respectively. In addition, we account for baseline instabilities of D-st by estimating a value of q(1
International Nuclear Information System (INIS)
Tronstad, Christian; Martinsen, Ørjan G; Grimnes, Sverre; Pischke, Soeren E; Holhjem, Lars; Tønnessen, Tor Inge
2010-01-01
For detection of cardiac ischemia based on regional pCO2 measurement, sensor drift becomes a problem when monitoring over several hours. A real-time drift correction algorithm was developed based on utilization of the time-derivative to distinguish between physiological responses and the drift, customized by measurements from a myocardial infarction porcine model (6 pigs, 23 sensors). IscAlert(TM) conductometric pCO2 sensors were placed in the myocardial regions supplied by the left anterior descending coronary artery (LAD) and the left circumflex artery (LCX) while the LAD artery was fully occluded for 1, 3, 5 and 15 min, leading to ischemia in the LAD-dependent region. The measured pCO2, the drift-corrected pCO2 (ΔpCO2) and its time-derivative (TDpCO2) were compared with respect to detection ability. Baseline stability in the ΔpCO2 led to earlier, more accurate detection. The TDpCO2 featured the earliest sensitivity, but with a lower specificity. Combining ΔpCO2 and TDpCO2 enables increased accuracy. Suggestions are given for the utilization of the parameters for an automated early warning and alarming system. In conclusion, early detection of cardiac ischemia is feasible using the conductometric pCO2 sensor together with parameterization methods
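A derivative-based drift correction of the kind described can be sketched as: attribute slow signal changes (derivative magnitude below a threshold) to drift and subtract the accumulated drift, while fast changes are assumed physiological. The threshold and drift rate below are illustrative, not values from the porcine data:

```python
def drift_corrected(signal, dt, max_drift_rate=0.02):
    """Remove slow sensor drift from a pCO2 trace.

    Samples whose time-derivative magnitude stays below max_drift_rate
    (signal units per second) are treated as drift and accumulated;
    faster changes are assumed physiological and left untouched.
    """
    corrected = [signal[0]]
    drift = 0.0
    for i in range(1, len(signal)):
        slope = (signal[i] - signal[i - 1]) / dt
        if abs(slope) <= max_drift_rate:   # slow change -> attribute to drift
            drift += slope * dt
        corrected.append(signal[i] - drift)
    return corrected

# A pure linear drift of 0.01 units/s is removed almost entirely.
trace = [5.0 + 0.01 * t for t in range(100)]
out = drift_corrected(trace, dt=1.0)
print(round(out[-1], 2))  # -> 5.0
```

A rapid ischemic pCO2 rise would exceed the threshold, pass through uncorrected, and remain detectable against the now-stable baseline.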
Subsequent childbirth after a previous traumatic birth.
Beck, Cheryl Tatano; Watson, Sue
2010-01-01
Nine percent of new mothers in the United States who participated in the Listening to Mothers II Postpartum Survey screened positive for meeting the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for posttraumatic stress disorder after childbirth. Women who have had a traumatic birth experience report fewer subsequent children and a longer length of time before their second baby. Childbirth-related posttraumatic stress disorder impacts couples' physical relationship, communication, conflict, emotions, and bonding with their children. The purpose of this study was to describe the meaning of women's experiences of a subsequent childbirth after a previous traumatic birth. Phenomenology was the research design used. An international sample of 35 women participated in this Internet study. Women were asked, "Please describe in as much detail as you can remember your subsequent pregnancy, labor, and delivery following your previous traumatic birth." Colaizzi's phenomenological data analysis approach was used to analyze the stories of the 35 women. Data analysis yielded four themes: (a) riding the turbulent wave of panic during pregnancy; (b) strategizing: attempts to reclaim their body and complete the journey to motherhood; (c) bringing reverence to the birthing process and empowering women; and (d) still elusive: the longed-for healing birth experience. Subsequent childbirth after a previous birth trauma has the potential to either heal or retraumatize women. During pregnancy, women need permission and encouragement to grieve their prior traumatic births to help remove the burden of their invisible pain.
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the curvature of the cross-sections in the Z-direction. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV) with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements, fewer coefficients are required. At higher energies the parameterization uses fewer coefficients still, with only two needed at megavoltage energies.
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over southeast India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted with various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS against ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of the different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes captures storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
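In WRF, such a scheme combination is selected in the &physics block of namelist.input. The index values below are the standard WRF Registry codes for these schemes, shown as an illustrative fragment that should be checked against the model version in use:

```
&physics
 mp_physics        = 8,   ! Thompson microphysics (THM)
 bl_pbl_physics    = 2,   ! Mellor-Yamada-Janjic PBL (MYJ)
 sf_sfclay_physics = 2,   ! Eta similarity surface layer, paired with MYJ
 cu_physics        = 2,   ! Betts-Miller-Janjic cumulus (BMJ)
/
```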
Zedler, S. E.
2009-04-25
The drag coefficient parameterization of wind stress is investigated for tropical storm conditions using model sensitivity studies. The Massachusetts Institute of Technology (MIT) Ocean General Circulation Model was run in a regional setting with realistic stratification and forcing fields representing Hurricane Frances, which in early September 2004 passed east of the Caribbean Leeward Island chain. The model was forced with a NOAA-HWIND wind speed product after converting it to wind stress using four different drag coefficient parameterizations. Respective model results were tested against in situ measurements of temperature profiles and velocity, available from an array of 22 surface drifters and 12 subsurface floats. Changing the drag coefficient parameterization from one that saturated at a value of 2.3 × 10−3 to a constant drag coefficient of 1.2 × 10−3 reduced the standard deviation of the difference between the simulated and measured sea surface temperature change from 0.8°C to 0.3°C. Additionally, the standard deviation of the difference between simulated and measured high-pass-filtered 15-m current speed reduced from 15 cm/s to 5 cm/s. The maximum difference in sea surface temperature response when two different turbulent mixing parameterizations were implemented was 0.3°C, i.e., only 11% of the maximum change of sea surface temperature caused by the storm. Copyright 2009 by the American Geophysical Union.
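The sensitivity can be illustrated with the standard bulk formula τ = ρ_air · C_d · U₁₀², comparing a wind-speed-dependent drag coefficient capped ("saturated") at 2.3 × 10−3 against the constant 1.2 × 10−3. The linear C_d(U₁₀) form below is a generic open-ocean choice, not the paper's exact parameterization.

```python
import numpy as np

RHO_AIR = 1.2  # air density, kg m^-3 (approximate)

def cd_saturating(u10, cd_max=2.3e-3):
    # illustrative: Cd grows linearly with wind speed, capped at cd_max
    cd = (0.49 + 0.065 * u10) * 1e-3
    return np.minimum(cd, cd_max)

def cd_constant(u10, cd=1.2e-3):
    return np.full_like(np.asarray(u10, dtype=float), cd)

def wind_stress(u10, cd_func):
    # bulk formula: tau = rho_air * Cd * U10^2  (N m^-2)
    u10 = np.asarray(u10, dtype=float)
    return RHO_AIR * cd_func(u10) * u10**2
```

At hurricane-strength winds the saturating form imparts nearly twice the stress of the constant coefficient, which is the mechanism behind the differing simulated cooling.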
Multi-sensor remote sensing parameterization of heat fluxes over heterogeneous land surfaces
Faivre, R.D.
2014-01-01
The parameterization of heat transfer by remote sensing, and based on SEBS scheme for turbulent heat fluxes retrieval, already proved to be very convenient for estimating evapotranspiration (ET) over homogeneous land surfaces. However, the use of such a method over heterogeneous landscapes (e.g.
Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D.
2012-01-01
This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…
Energy Technology Data Exchange (ETDEWEB)
Evans, J.L.; Frank, W.M.; Young, G.S. [Pennsylvania State Univ., University Park, PA (United States)
1996-04-01
Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.
Accurate parameterization of reference evapotranspiration (ET0) is necessary for optimizing irrigation scheduling and avoiding costs associated with over-irrigation (water expense, loss of water productivity, energy costs, pollution) or with under-irrigation (crop stress and suboptimal yields or qua...
Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems
DEFF Research Database (Denmark)
Blanke, Mogens; Knudsen, Morten Haack
2006-01-01
Grey box model identification preserves known physical structures in a model but with limits to the possible excitation, all parameters are rarely identifiable, and different parametrizations give significantly different model quality. Convenient methods to show which parameterizations are the be...... that need be constrained to achieve satisfactory convergence. Identification of nonlinear models for a ship illustrate the concept....
Aerosol-Cloud-Precipitation Interactions in WRF Model:Sensitivity to Autoconversion Parameterization
Institute of Scientific and Technical Information of China (English)
解小宁; 刘晓东
2015-01-01
The cloud-to-rain autoconversion process is an important player in aerosol loading, cloud morphology, and precipitation variations because it can modulate cloud microphysical characteristics depending on the participation of aerosols, and affects the spatio-temporal distribution and total amount of precipitation. By applying the Kessler, the Khairoutdinov-Kogan (KK), and the Dispersion autoconversion parameterization schemes in a set of sensitivity experiments, the indirect effects of aerosols on clouds and precipitation are investigated for a deep convective cloud system in Beijing under various aerosol concentration backgrounds from 50 to 10000 cm−3. Numerical experiments show that aerosol-induced precipitation change is strongly dependent on the autoconversion parameterization scheme. For the Kessler scheme, the average cumulative precipitation is enhanced slightly with increasing aerosols, whereas surface precipitation is reduced significantly with increasing aerosols for the KK scheme. Moreover, precipitation varies non-monotonically for the Dispersion scheme, increasing with aerosols at lower concentrations and decreasing at higher concentrations. These different trends of aerosol-induced precipitation change are mainly ascribed to differences in rain water content under the three autoconversion parameterization schemes. Therefore, this study suggests that accurate parameterization of cloud microphysical processes, particularly the cloud-to-rain autoconversion process, is needed for improving the scientific understanding of aerosol-cloud-precipitation interactions.
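For context, the two simplest of these schemes can be sketched as follows: Kessler's rate is a linear function of cloud water above a threshold and ignores aerosols, while the Khairoutdinov-Kogan rate decreases with droplet number concentration, a proxy for aerosol loading. The KK exponents follow the published scheme; the Kessler constants are typical illustrative values, not necessarily those used in this study.

```python
def kessler_autoconversion(qc, k1=1e-3, qc0=5e-4):
    # Kessler (1969): linear above a cloud-water threshold qc0 (kg/kg);
    # no dependence on droplet number, hence no aerosol sensitivity
    return max(0.0, k1 * (qc - qc0))

def kk_autoconversion(qc, nc):
    # Khairoutdinov & Kogan (2000): qc in kg/kg, nc in cm^-3;
    # rate falls as droplet number (aerosol loading) rises
    return 1350.0 * qc**2.47 * nc**(-1.79)
```

This aerosol sensitivity (or lack of it) is what drives the opposite precipitation trends reported for the two schemes.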
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds, and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
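A singular ice nucleation active site (INAS) density description of the kind fitted to AIDA data can be sketched as follows. The exponential coefficients are the desert-dust fit of Niemand et al. (2012) and are shown only as an illustration, not as the new scheme of this abstract.

```python
import math

def frozen_fraction(t_kelvin, surface_area_m2=1e-11, a=-0.517, b=8.934):
    """Singular INAS-density description of immersion freezing:
    n_s(T) = exp(a*T_C + b) sites per m^2 of dust surface (illustrative
    desert-dust coefficients). Returns the expected frozen fraction of
    droplets that each contain one particle of the given surface area."""
    t_c = t_kelvin - 273.15
    n_s = math.exp(a * t_c + b)
    return 1.0 - math.exp(-n_s * surface_area_m2)
```

Cooling increases the active site density exponentially, so the frozen fraction rises steeply toward the homogeneous freezing limit near -37°C.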
Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.
2017-12-01
The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004), in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.
Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool
Guidelines have been developed to parameterize and validate the Agricultural Policy Environmental eXtender (APEX) to support the Nutrient Tracking Tool (NTT). This follow-up paper presents 1) a case study to illustrate how the developed guidelines are applied in a headwater watershed located in cent...
Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...
A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone
Filipot, J.-F.; Ardhuin, F.
2012-11-01
A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep- or shallow-water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
A new albedo parameterization for use in climate models over the Antarctic ice sheet
Kuipers Munneke, P.|info:eu-repo/dai/nl/304831891; van den Broeke, M.R.|info:eu-repo/dai/nl/073765643; Lenaerts, J.T.M.|info:eu-repo/dai/nl/314850163; Flanner, M.G.; Gardner, A.S.; van de Berg, W.J.|info:eu-repo/dai/nl/304831611
2011-01-01
A parameterization for broadband snow surface albedo, based on snow grain size evolution, cloud optical thickness, and solar zenith angle, is implemented into a regional climate model for Antarctica and validated against field observations of albedo for the period 1995–2004. Over the Antarctic
Directory of Open Access Journals (Sweden)
J. Arteta
2009-09-01
The general objective of this series of papers is to evaluate long-duration limited-area simulations with idealised tracers as a tool to assess tracer transport in chemistry-transport models (CTMs). In this first paper, we analyse the results of six simulations using different convection closures and parameterizations. The simulations use the Grell and Dévényi (2002) mass-flux framework for the convection parameterization with different closures (Grell = GR, Arakawa-Schubert = AS, Kain-Fritsch = KF, Low omega = LO, Moisture convergence = MC) and an ensemble parameterization (EN) based on the other five closures. The simulations are run for one month during the SCOUT-O3 field campaign led from Darwin (Australia). They have a 60 km horizontal resolution and a fine vertical resolution in the upper troposphere/lower stratosphere. Meteorological results are compared with satellite products, radiosoundings and SCOUT-O3 aircraft campaign data. They show that the model is generally in good agreement with the measurements, with less variability in the model. Except for the precipitation field, the differences between the six simulations are small on average with respect to the differences with the meteorological observations. The comparison with TRMM rain rates shows that the six parameterizations or closures have similar behaviour concerning convection triggering times and locations. However, the six simulations exhibit two different behaviours for rainfall values, with the EN, AS and KF parameterizations (Group 1) modelling better rain fields than LO, MC and GR (Group 2). The vertical distribution of tropospheric tracers is very different for the two groups, showing significantly more transport into the TTL for Group 1, related to the larger average values of the upward velocities. Nevertheless, the low values of the Group 1 fluxes at and above the cold-point level indicate that the model does not simulate significant overshooting.
Chen, T. H.; Henderson-Sellers, A.; Milly, P. C. D.; Pitman, A. J.; Beljaars, A. C. M.; Polcher, J.; Abramopoulos, F.; Boone, A.; Chang, S.; Chen, F.; Dai, Y.; Desborough, C. E.; Dickinson, R. E.; Dümenil, L.; Ek, M.; Garratt, J. R.; Gedney, N.; Gusev, Y. M.; Kim, J.; Koster, R.; Kowalczyk, E. A.; Laval, K.; Lean, J.; Lettenmaier, D.; Liang, X.; Mahfouf, J.-F.; Mengelkamp, H.-T.; Mitchell, K.; Nasonova, O. N.; Noilhan, J.; Robock, A.; Rosenzweig, C.; Schaake, J.; Schlosser, C. A.; Schulz, J.-P.; Shao, Y.; Shmakin, A. B.; Verseghy, D. L.; Wetzel, P.; Wood, E. F.; Xue, Y.; Yang, Z.-L.; Zeng, Q.
1997-06-01
In the Project for Intercomparison of Land-Surface Parameterization Schemes phase 2a experiment, meteorological data for the year 1987 from Cabauw, the Netherlands, were used as inputs to 23 land-surface flux schemes designed for use in climate and weather models. Schemes were evaluated by comparing their outputs with long-term measurements of surface sensible heat fluxes into the atmosphere and the ground, and of upward longwave radiation and total net radiative fluxes, and also by comparing them with latent heat fluxes derived from a surface energy balance. Tuning of schemes by use of the observed flux data was not permitted. On an annual basis, the predicted surface radiative temperature exhibits a range of 2 K across schemes, consistent with the range of about 10 W m−2 in predicted surface net radiation. Most modeled values of monthly net radiation differ from the observations by less than the estimated maximum monthly observational error (±10 W m−2). However, modeled radiative surface temperature appears to have a systematic positive bias in most schemes; this might be explained by an error in assumed emissivity and by the models' neglect of canopy thermal heterogeneity. Annual means of sensible and latent heat fluxes, into which net radiation is partitioned, have ranges across schemes of 30 W m−2 and 25 W m−2, respectively. Annual totals of evapotranspiration and runoff, into which the precipitation is partitioned, both have ranges of 315 mm. These ranges in annual heat and water fluxes were approximately halved upon exclusion of the three schemes that have no stomatal resistance under non-water-stressed conditions. Many schemes tend to underestimate latent heat flux and overestimate sensible heat flux in summer, with a reverse tendency in winter. For six schemes, root-mean-square deviations of predictions from monthly observations are less than the estimated upper bounds on observation errors (5 W m−2 for sensible heat flux and 10 W m−2 for latent heat flux).
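The latent heat fluxes mentioned above are derived as the residual of the surface energy balance, which in its simplest form reads:

```python
def latent_heat_residual(net_radiation, ground_heat, sensible_heat):
    # surface energy balance: Rn = H + LE + G  =>  LE = Rn - G - H
    # all fluxes in W m^-2, net radiation positive toward the surface
    return net_radiation - ground_heat - sensible_heat
```

Any measurement error in Rn, G, or H therefore accumulates directly in the derived LE, which is why the observational error bounds quoted above matter for ranking the schemes.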
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and relates them to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
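The core of the two-source idea is Beer-Lambert attenuation through the flattening filter: the surviving fraction feeds the primary source, and its decrement feeds the secondary (scatter) source. A minimal sketch, with illustrative attenuation coefficient and path-length values:

```python
import math

def primary_and_scatter_weights(mu_cm, path_cm):
    """Split an incident bremsstrahlung photon between the primary source
    (attenuated by the flattening filter) and the secondary source (the
    attenuated fraction, assumed here to feed filter scatter), mirroring
    the paper's two-source design; mu (1/cm) and path (cm) are illustrative."""
    primary = math.exp(-mu_cm * path_cm)   # Beer-Lambert survival probability
    secondary = 1.0 - primary              # decrement of the primary fluence
    return primary, secondary
```

Because the path length through the filter varies with off-axis angle, both weights inherit the filter-shape parameters, which is what couples the two sources during optimization.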
Parameterization models for pesticide exposure via crop consumption.
Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier
2012-12-04
An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations with total deviations between a factor 4 (potato) and a factor 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.
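The dominant role of the time between application and harvest together with the crop and surface half-lives suggests a simple first-order dissipation picture. The sketch below illustrates that mechanism only; it is not one of the paper's parametric models, and the surface/interior split is an assumed parameter.

```python
import math

def residue_fraction(days_to_harvest, crop_half_life_d, surface_half_life_d,
                     surface_share=0.5):
    """First-order dissipation sketch: applied mass splits between plant
    interior and plant surface (surface_share is assumed), and each pool
    decays with its own half-life until harvest. Returns the remaining
    fraction of the applied dose."""
    k_crop = math.log(2.0) / crop_half_life_d
    k_surf = math.log(2.0) / surface_half_life_d
    inside = (1.0 - surface_share) * math.exp(-k_crop * days_to_harvest)
    on_surface = surface_share * math.exp(-k_surf * days_to_harvest)
    return inside + on_surface
```

Doubling the pre-harvest interval shrinks residues multiplicatively, consistent with that interval being the single most influential input.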
Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas
2018-04-01
The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included into the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.
International Nuclear Information System (INIS)
Hammond, D S; Chapman, L; Thornes, J E
2011-01-01
A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry. (technical design note)
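An inflexion-point pick on a single trace can be sketched as a zero crossing of the second derivative of amplitude with depth. This is a toy stand-in for the paper's algorithm and assumes a depth-converted, noise-free trace.

```python
import numpy as np

def inflexion_depths(trace, dz=0.01):
    """Return depths (m) of candidate layer interfaces, taken as zero
    crossings of the second depth-derivative of a GPR amplitude trace."""
    trace = np.asarray(trace, dtype=float)
    d2 = np.gradient(np.gradient(trace, dz), dz)       # second derivative
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0] # sign-change indices
    return crossings * dz
```

Real traces would need denoising and amplitude gain correction before such a pick is meaningful, which is part of why the note flags significant future research.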
Underestimation of Severity of Previous Whiplash Injuries
Naqui, SZH; Lovell, SJ; Lovell, ME
2008-01-01
INTRODUCTION We noted a report suggesting that more significant symptoms may be expressed after a second whiplash injury through a proposed cumulative effect, including degeneration. We wondered whether patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS We studied recent medicolegal reports to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same or lesser in severity. RESULTS From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 who felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same as or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66-84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501
[Electronic cigarettes - effects on health. Previous reports].
Napierała, Marta; Kulza, Maksymilian; Wachowiak, Anna; Jabłecka, Katarzyna; Florek, Ewa
2014-01-01
Electronic cigarettes (e-cigarettes) have recently become very popular on the tobacco product market. These products are considered potentially less harmful compared with traditional tobacco products. However, current reports indicate that manufacturers' statements regarding the composition of e-liquids are not always sufficient, and consumers often lack reliable information on the quality of the product they use. This paper reviews previous reports on the composition of e-cigarettes and their impact on health. Most of the observed health effects related to symptoms of the respiratory tract, mouth and throat, neurological complications, and the sensory organs. Particularly hazardous effects of e-cigarettes included pneumonia, congestive heart failure, confusion, convulsions, hypotension, aspiration pneumonia, second-degree facial burns, blindness, chest pain and rapid heartbeat. The literature contains no information on passive exposure to the aerosols released during e-cigarette smoking. Furthermore, information on the long-term use of these products is also unavailable.
Impact of model structure and parameterization on Penman-Monteith type evaporation models
Ershadi, A.
2015-04-12
The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top-ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower-ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf forests.
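The single-source Penman-Monteith model at the center of this comparison computes latent heat flux from available energy, vapour pressure deficit, and the two resistances that the scenarios vary. A sketch with standard constants, using the Tetens saturation-vapour-pressure form (an assumed choice here):

```python
import math

def penman_monteith(rn, g, vpd_pa, ta_c, ra_s_m, rs_s_m, pressure_pa=101325.0):
    """Single-source Penman-Monteith latent heat flux (W m^-2).
    rn, g: net radiation and ground heat flux (W m^-2); vpd_pa: vapour
    pressure deficit (Pa); ra_s_m, rs_s_m: aerodynamic and surface
    resistances (s m^-1), the inputs varied across the paper's scenarios."""
    rho_a = 1.2     # air density, kg m^-3 (approximate)
    cp = 1004.0     # specific heat of air, J kg^-1 K^-1
    es = 610.8 * math.exp(17.27 * ta_c / (ta_c + 237.3))   # Tetens, Pa
    delta = 4098.0 * es / (ta_c + 237.3) ** 2              # slope, Pa K^-1
    gamma = cp * pressure_pa / (0.622 * 2.45e6)            # psychrometric, Pa K^-1
    return (delta * (rn - g) + rho_a * cp * vpd_pa / ra_s_m) / (
        delta + gamma * (1.0 + rs_s_m / ra_s_m))
```

Because rs enters only through the denominator term gamma*(1 + rs/ra), the choice of surface resistance parameterization directly scales the retrieved flux, which is consistent with its decisive role in the rankings.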
Directory of Open Access Journals (Sweden)
Xin Tian
2015-11-01
We propose a long-term parameterization scheme for two critical parameters, the zero-plane displacement height (d) and the aerodynamic roughness length (z0m), which we further use in the Surface Energy Balance System (SEBS). A sensitivity analysis of SEBS indicated that these two parameters largely impact the estimated sensible and latent heat fluxes. First, we calibrated regression relationships between measured forest vertical parameters (Lorey's height and the frontal area index, FAI) and forest aboveground biomass (AGB). Next, we derived interannual Lorey's height and FAI values from our calibrated regression models and the corresponding forest AGB dynamics, which were converted from interannual carbon fluxes simulated by two incorporated ecological models and a 2009 forest basis map. These dynamic forest vertical parameters, combined with refined eight-day Global LAnd Surface Satellite (GLASS) LAI products, were applied to estimate the eight-day d, z0m and, thus, the heat roughness length (z0h). The obtained d, z0m and z0h were then used as forcing for the SEBS model in order to simulate long-term forest evapotranspiration (ET) from 2000 to 2012 within the Qilian Mountains (QMs). Compared with the MODIS MOD16 products at the eddy covariance (EC) site, ET estimates from SEBS agreed much better with EC measurements (R2 = 0.80 and RMSE = 0.21 mm·day−1).
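As a simple stand-in for the calibrated, biomass-driven scheme, classical rules of thumb relate d and z0m to canopy (Lorey's) height, with z0h obtained from z0m through an excess resistance factor kB⁻¹. All constants below are textbook values, not the paper's calibrated ones.

```python
import math

def roughness_from_canopy(lorey_height_m, kb_inverse=2.3):
    """Rule-of-thumb roughness parameters from canopy height:
    d ~ 2/3 h, z0m ~ 1/8 h, and z0h = z0m / exp(kB^-1).
    The kB^-1 value is an assumption; it varies strongly by canopy type."""
    d = 0.67 * lorey_height_m        # zero-plane displacement height (m)
    z0m = 0.123 * lorey_height_m     # momentum roughness length (m)
    z0h = z0m / math.exp(kb_inverse) # heat roughness length (m)
    return d, z0m, z0h
```

Making the canopy height interannual, as the paper does via AGB dynamics, makes all three parameters dynamic as well.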
Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
The advective transport of atmospheric water vapor and its role in global hydrology and the water balance of continental regions are discussed and explored. The data set consists of ten years of global wind and humidity observations interpolated onto a regular grid by objective analysis. Atmospheric water vapor fluxes across the boundaries of selected continental regions are displayed graphically. The water vapor flux data are used to investigate the sources of continental precipitation. The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from surrounding areas external to the region; and (2) evaporation and transpiration from the land surface recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. In a separate, but related, study estimates of ocean to land water vapor transport are used to parameterize an existing simple climate model, containing both land and ocean surfaces, that is intended to mimic the dynamics of continental climates.
Basarab, B. M.; Rutledge, S. A.; Fuchs, B. R.
2015-09-01
Accurate prediction of total lightning flash rate in thunderstorms is important to improve estimates of nitrogen oxides (NOx) produced by lightning (LNOx) from the storm scale to the global scale. In this study, flash rate parameterization schemes from the literature are evaluated against observed total flash rates for a sample of 11 Colorado thunderstorms, including nine storms from the Deep Convective Clouds and Chemistry (DC3) experiment in May-June 2012. Observed flash rates were determined using an automated algorithm that clusters very high frequency radiation sources emitted by electrical breakdown in clouds and detected by the northern Colorado lightning mapping array. Existing schemes were found to inadequately predict flash rates and were updated based on observed relationships between flash rate and simple storm parameters, yielding significant improvement. The most successful updated scheme predicts flash rate based on the radar-derived mixed-phase 35 dBZ echo volume. Parameterizations based on metrics for updraft intensity were also updated but were found to be less reliable predictors of flash rate for this sample of storms. The 35 dBZ volume scheme was tested on a data set containing radar reflectivity volume information for thousands of isolated convective cells in different regions of the U.S. This scheme predicted flash rates to within 5.8% of observed flash rates on average. These results encourage the application of this scheme to larger radar data sets and its possible implementation into cloud-resolving models.
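As a hedged sketch of the kind of scheme evaluated here, the snippet below fits a simple linear relation f = a·V + b between flash rate and the mixed-phase 35 dBZ echo volume. The coefficients and storm sample are synthetic illustrations, not the study's fitted values.

```python
import numpy as np

# Hypothetical flash-rate scheme of the form f = a*V + b, where V is the
# radar-derived mixed-phase 35 dBZ echo volume (km^3). The coefficients
# are fitted to synthetic data, not taken from the paper.
def fit_flash_rate_scheme(volume_km3, flash_rate):
    """Least-squares fit of flash rate against echo volume."""
    A = np.vstack([volume_km3, np.ones_like(volume_km3)]).T
    (a, b), *_ = np.linalg.lstsq(A, flash_rate, rcond=None)
    return a, b

# Synthetic storm sample: flash rate roughly proportional to echo volume.
rng = np.random.default_rng(0)
V = rng.uniform(50, 500, size=40)            # echo volumes, km^3
f = 0.04 * V + rng.normal(0, 1, size=40)     # flashes per minute, plus noise
a, b = fit_flash_rate_scheme(V, f)
```

Once fitted, such a scheme predicts flash rate for a new storm directly from its radar echo volume, which is what makes it attractive for implementation in cloud-resolving models.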
Energy Technology Data Exchange (ETDEWEB)
Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)
2016-03-11
Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.
de Lavenne, Alban; Andréassian, Vazken
2018-03-01
This paper examines the hydrological impact of the seasonality of precipitation and maximum evaporation: seasonality is, after aridity, a second-order determinant of catchment water yield. Based on a data set of 171 French catchments (where aridity ranged between 0.2 and 1.2), we present a parameterization of three commonly-used water balance formulas (namely, Turc-Mezentsev, Tixeront-Fu and Oldekop formulas) to account for seasonality effects. We quantify the improvement of seasonality-based parameterization in terms of the reconstitution of both catchment streamflow and water yield. The significant improvement obtained (reduction of RMSE between 9 and 14% depending on the formula) demonstrates the importance of climate seasonality in the determination of long-term catchment water balance.
International Nuclear Information System (INIS)
Huang, Dong; Liu, Yangang
2014-01-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models. (letter)
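A hedged sketch of the central statistic, the spatial autocorrelation function, computed for a toy one-dimensional cloud field. The field, lags, and estimator details are illustrative only and are not the authors' implementation.

```python
import numpy as np

# Sample autocorrelation of a 1-D field: the statistic used to capture the
# net effect of subgrid spatial structure. Field values are a toy example.
def autocorrelation(field, max_lag):
    f = field - field.mean()
    var = np.mean(f * f)
    return np.array([np.mean(f[: len(f) - k] * f[k:]) / var
                     for k in range(max_lag + 1)])

cloud = np.sin(np.linspace(0, 4 * np.pi, 400))   # toy periodic "cloud" field
acf = autocorrelation(cloud, 50)                 # acf[0] is 1 by construction
```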
Mesoscale model parameterizations for radiation and turbulent fluxes at the lower boundary
International Nuclear Information System (INIS)
Somieski, F.
1988-11-01
A radiation parameterization scheme for use in mesoscale models with orography and clouds has been developed. Broadband parameterizations are presented for the solar and the terrestrial spectral ranges. They account for clear, turbid or cloudy atmospheres. The scheme is one-dimensional in the atmosphere, but the effects of mountains (inclination, shading, elevated horizon) are taken into account at the surface. In the terrestrial band, grey and black clouds are considered. Furthermore, the calculation of turbulent fluxes of sensible and latent heat and momentum at an inclined lower model boundary is described. Surface-layer similarity and the surface energy budget are used to evaluate the ground surface temperature. The total scheme is part of the mesoscale model MESOSCOP. (orig.) With 3 figs., 25 refs.
International Nuclear Information System (INIS)
Owusu, Haile K; Yuzbashyan, Emil A
2011-01-01
We study general quantum integrable Hamiltonians linear in a coupling constant and represented by finite N x N real symmetric matrices. The restriction on the coupling dependence leads to a natural notion of nontrivial integrals of motion and classification of integrable families into types according to the number of such integrals. A type M family in our definition is formed by N-M nontrivial mutually commuting operators linear in the coupling. Working from this definition alone, we parameterize type M operators, i.e. resolve the commutation relations, and obtain an exact solution for their eigenvalues and eigenvectors. We show that our parameterization covers all type 1, 2 and 3 integrable models and discuss the extent to which it is complete for other types. We also present robust numerical observation on the number of energy-level crossings in type M integrable systems and analyze the taxonomy of types in the 1D Hubbard model. (paper)
Parameterized entropy analysis of EEG following hypoxic-ischemic brain injury
International Nuclear Information System (INIS)
Tong Shanbao; Bezerianos, Anastasios; Malhotra, Amit; Zhu Yisheng; Thakor, Nitish
2003-01-01
In the present study Tsallis and Renyi entropy methods were used to study the electric activity of brain following hypoxic-ischemic (HI) injury. We investigated the performances of these parameterized information measures in describing the electroencephalogram (EEG) signal of controlled experimental animal HI injury. The results show that (a): compared with Shannon and Renyi entropy, the parameterized Tsallis entropy acts like a spatial filter and the information rate can either tune to long range rhythms or to short abrupt changes, such as bursts or spikes during the beginning of recovery, by the entropic index q; (b): Renyi entropy is a compact and predictive indicator for monitoring the physiological changes during the recovery of brain injury. There is a reduction in the Renyi entropy after brain injury followed by a gradual recovery upon resuscitation
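The parameterized entropies used in this study have standard definitions that reduce to the Shannon entropy as the entropic index q approaches 1. The sketch below implements those textbook formulas on a toy amplitude distribution; the signal and binning are illustrative, not the study's EEG pipeline.

```python
import numpy as np

# Standard definitions of the parameterized entropies; the entropic index q
# tunes sensitivity between long-range rhythms and short abrupt changes.
def tsallis_entropy(p, q):
    p = p[p > 0]
    if q == 1.0:
        return -np.sum(p * np.log(p))        # reduces to Shannon entropy
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi_entropy(p, q):
    p = p[p > 0]
    if q == 1.0:
        return -np.sum(p * np.log(p))        # reduces to Shannon entropy
    return np.log(np.sum(p ** q)) / (1.0 - q)

# Toy EEG segment: amplitude histogram -> probability distribution.
signal = np.sin(np.linspace(0, 8 * np.pi, 1000))
hist, _ = np.histogram(signal, bins=32)
p = hist / hist.sum()
```

For a uniform distribution over n states, the Renyi entropy equals log(n) for every q, while the Tsallis entropy with q = 2 equals 1 - 1/n, which illustrates how the index q reweights the distribution.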
Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.
2008-08-01
During the ESCOMPTE campaign (Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement with the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.
A Symbolic Computation Approach to Parameterizing Controller for Polynomial Hamiltonian Systems
Directory of Open Access Journals (Sweden)
Zhong Cao
2014-01-01
Full Text Available This paper considers a controller parameterization method for H∞ control of polynomial Hamiltonian systems (PHSs), which involves internal stability and external disturbance attenuation. The aims of this paper are to design a controller with parameters that ensure the systems are H∞ stable, and to propose an algorithm for solving for the controller parameters with symbolic computation. The proposed parameterization method avoids solving Hamilton-Jacobi-Isaacs equations, and thus the obtained controllers with parameters are relatively simple in form and easy to apply. Simulation with a numerical example shows that the controller is effective, as it can optimize H∞ control by adjusting parameters. All these results are expected to be of use in the study of H∞ control for nonlinear systems with perturbations.
Solvation of monovalent anions in formamide and methanol: Parameterization of the IEF-PCM model
International Nuclear Information System (INIS)
Boees, Elvis S.; Bernardi, Edson; Stassen, Hubert; Goncalves, Paulo F.B.
2008-01-01
The thermodynamics of solvation for a series of monovalent anions in formamide and methanol has been studied using the polarizable continuum model (PCM). The parameterization of this continuum model was guided by molecular dynamics simulations. The parameterized PCM model predicts the Gibbs free energies of solvation for 13 anions in formamide and 16 anions in methanol in very good agreement with experimental data. Two sets of atomic radii were tested in the definition of the solute cavities in the PCM and their performances are evaluated and discussed. Mean absolute deviations of the calculated free energies of solvation from the experimental values are in the range of 1.3-2.1 kcal/mol
Energy Technology Data Exchange (ETDEWEB)
Wu, Tongwen [China Meteorological Administration (CMA), National Climate Center (Beijing Climate Center), Beijing (China)
2012-02-15
A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convection cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is determined separately according to the increase/decrease of updraft parcel mass with altitude, and the mass change for the adiabatically ascending cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed saturated and originates from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14)), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program's (ARM) summer 1995 and 1997 Intensive Observing Period (IOP) observations, and field observations from the Global Atmospheric Research
Harrington, J. Y.
2017-12-01
Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess smaller ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall
Nagihara, S.; Nakamura, Y.; Williams, D. R.; Taylor, P. T.; Kiefer, W. S.; Hager, M. A.; Hills, H. K.
2016-01-01
In 2010, 440 original data archival tapes for the Apollo Lunar Surface Experiments Package (ALSEP) experiments were found at the Washington National Records Center. These tapes hold raw instrument data received from the Moon for all the ALSEP instruments for the period of April through June 1975. We have recently completed extraction of binary files from these tapes, and we have delivered them to the NASA Space Science Data Coordinated Archive (NSSDCA). We are currently processing the raw data into higher-order data products in file formats more readily usable by contemporary researchers. These data products will fill a number of gaps in the current ALSEP data collection at NSSDCA. In addition, we have established a digital, searchable archive of ALSEP documents and metadata as part of the web portal of the Lunar and Planetary Institute. It currently holds approx. 700 documents totaling approx. 40,000 pages.
Stijve, T.; Kuyper, Th.W.
1988-01-01
Seven taxa of agarics reported in literature to contain psilocybin (viz. Psathyrella candolleana, Gymnopilus spectabilis, G. fulgens, Hygrocybe psittacina var. psittacina and var. californica, Rickenella fibula, R. swartzii) have been analysed for psilocybin and related tryptamines with negative results.
DEFF Research Database (Denmark)
Petrlova, Jitka; Hansen, Finja C; van der Plas, Mariena J A
2017-01-01
bind to and form amorphous amyloid-like aggregates with both bacterial lipopolysaccharide (LPS) and gram-negative bacteria. In silico molecular modeling using atomic resolution and coarse-grained simulations corroborates our experimental observations, altogether indicating increased aggregation through...
Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K
2009-03-01
The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-square method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg/m(3).
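The fitting strategy described, a log-log regression with bootstrap resampling to check the significance of estimated parameters, can be sketched as below. All data, the power-law exponent, and the noise level are synthetic illustrations, not the Crimea Peninsula measurements.

```python
import numpy as np

# Hedged sketch: log-log regression of particulate absorption against
# chlorophyll a, with bootstrap percentile intervals on the slope.
def loglog_fit(chl, a_p):
    slope, intercept = np.polyfit(np.log(chl), np.log(a_p), 1)
    return slope, intercept

def bootstrap_slope_ci(chl, a_p, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(chl)
    slopes = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample with replacement
        slopes.append(loglog_fit(chl[idx], a_p[idx])[0])
    return np.percentile(slopes, [2.5, 97.5])

# Synthetic data over the chlorophyll range reported (0.45-2 mg/m^3).
rng = np.random.default_rng(0)
chl = rng.uniform(0.45, 2.0, size=80)                        # mg m^-3
a_p = 0.05 * chl ** 0.7 * np.exp(rng.normal(0, 0.05, size=80))
slope, intercept = loglog_fit(chl, a_p)
lo, hi = bootstrap_slope_ci(chl, a_p)
```

A narrow bootstrap interval around the fitted slope is what justifies treating the power-law exponent as statistically significant.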
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
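The Pinheiro and Bates (1996) spherical parameterization referenced above guarantees, by construction, that any set of angles yields a mutually consistent (positive semidefinite, unit-diagonal) correlation matrix. The sketch below shows that construction; the angle values are arbitrary illustrations, not the paper's "cSigma" populating rule.

```python
import numpy as np

# Spherical parameterization of a correlation matrix: each row of the
# Cholesky factor L is a unit vector built from angles, so R = L L^T is
# always a valid correlation matrix.
def correlation_from_angles(theta):
    """theta: (n, n) array of angles in (0, pi); strict lower triangle used."""
    n = theta.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        prod_sin = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[i, j]) * prod_sin
            prod_sin *= np.sin(theta[i, j])
        L[i, i] = prod_sin                    # makes each row a unit vector
    return L @ L.T

# Three hydrometeor species, all angles set to pi/3 for illustration.
theta = np.full((3, 3), np.pi / 3)
R = correlation_from_angles(theta)
```

The practical appeal, as the abstract notes, is that consistency never has to be checked after the fact: any parameter values map to a legal correlation matrix.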
An explicit parameterization for casting constraints in gradient driven topology optimization
DEFF Research Database (Denmark)
Gersborg, Allan Roulund; Andreasen, Casper Schousboe
2011-01-01
From a practical point of view it is often desirable to limit the complexity of a topology optimization design such that casting/milling type manufacturing techniques can be applied. In the context of gradient driven topology optimization this work studies how castable designs can be obtained...... by use of a Heaviside design parameterization in a specified casting direction. This reduces the number of design variables considerably and the approach is simple to implement....
Framework of cloud parameterization including ice for 3-D mesoscale models
Energy Technology Data Exchange (ETDEWEB)
Levkov, L; Jacob, D; Eppel, D; Grassl, H
1989-01-01
A parameterization scheme for the simulation of ice in clouds incorporated into the hydrostatic version of the GKSS three-dimensional mesoscale model. Numerical simulations of precipitation are performed: over the Northe Sea, the Hawaiian trade wind area and in the region of the intertropical convergence zone. Not only some major features of convective structures in all three areas but also cloud-aerosol interactions have successfully been simulated. (orig.) With 19 figs., 2 tabs.
Reliability and parameterization of Romberg Test in people who have suffered a stroke
Perez Cruzado, David; Gonzalez Sanchez, Manuel; Cuesta-Vargas, Antonio
2014-01-01
AIM: To analyze the reliability and describe the parameterization with inertial sensors, of Romberg test in people who have had a stroke. METHODS: Romberg's Test was performed during 20 seconds in four different setting, depending from supporting leg and position of the eyes (opened eyes / dominant leg; closed eyes / dominant leg; opened eyes / non-dominant leg; closed eyes / non-dominant leg) in people who have suffered a stroke over a year ago. Two inertial sensors (sampli...
Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations
Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide
2017-01-01
Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the signal-to-noise ratio (SNR) for average river flow during 1971-2000, simulated in two experiments: with representation of human impacts (VARSOC) and without (NOSOC). This is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for the globe, with varied magnitude across basins) than those in the NOSOC, particularly in most areas of Asia and in the areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than the NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in a basin. The large additional uncertainties in VARSOC simulations introduced by the inclusion of parameterizations of human impacts raise an urgent need for GHM development aimed at a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development. We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibration of GHMs for not only
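The SNR metric described can be sketched as the ratio of ensemble-mean flow to between-model spread. The exact definition used in the study may differ in detail; the function and the synthetic "GHM" ensembles below are illustrative only.

```python
import numpy as np

# Hedged sketch of a between-model signal-to-noise ratio: ensemble-mean
# annual flow divided by the mean spread across models.
def ensemble_snr(flows):
    """flows: (n_models, n_years) array of simulated annual river flows."""
    signal = flows.mean(axis=0).mean()            # ensemble-mean flow
    noise = flows.std(axis=0, ddof=1).mean()      # mean between-model spread
    return signal / noise

# Adding human-impact parameterizations (VARSOC-like) tends to widen the
# spread between models, lowering the SNR relative to a NOSOC-like run.
rng = np.random.default_rng(0)
nosoc_like = 100.0 + rng.normal(0, 1, size=(4, 30))
varsoc_like = 100.0 + rng.normal(0, 5, size=(4, 30))
```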
Directory of Open Access Journals (Sweden)
P. Kumar
2009-04-01
Full Text Available Dust and black carbon aerosols have long been known to exert potentially important and diverse impacts on cloud droplet formation. Most studies to date focus on the soluble fraction of these particles, and overlook interactions of the insoluble fraction with water vapor (even if it is known to be hydrophilic). To address this gap, we developed a new parameterization that considers cloud droplet formation within an ascending air parcel containing insoluble (but wettable) particles externally mixed with aerosol containing an appreciable soluble fraction. Activation of particles with a soluble fraction is described through well-established Köhler theory, while the activation of hydrophilic insoluble particles is treated by "adsorption-activation" theory. In the latter, water vapor is adsorbed onto insoluble particles, the activity of which is described by a multilayer Frenkel-Halsey-Hill (FHH) adsorption isotherm modified to account for particle curvature. We further develop FHH activation theory to (i) find combinations of the adsorption parameters A_{FHH}, B_{FHH} which yield atmospherically relevant behavior, and (ii) express activation properties (critical supersaturation) that follow a simple power law with respect to dry particle diameter.
The new parameterization is tested by comparing the parameterized cloud droplet number concentration against predictions with a detailed numerical cloud model, considering a wide range of particle populations, cloud updraft conditions, water vapor condensation coefficient and FHH adsorption isotherm characteristics. The agreement between parameterization and parcel model is excellent, with an average error of 10% and R^{2}~0.98. A preliminary sensitivity study suggests that the sublinear response of droplet number to Köhler particle concentration is not as strong for FHH particles.
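The power-law form reported for FHH activation can be sketched as s_c = C·(D/D0)^(-x). The prefactor C, exponent x, and particle population below are placeholders for illustration, not the paper's fitted adsorption parameters.

```python
import numpy as np

# Illustrative power-law critical supersaturation for FHH-activated
# particles; C (%) and x are hypothetical, not fitted values.
def critical_supersaturation(d_nm, C=0.3, x=1.2):
    """s_c (%) for dry diameter d_nm (nm), referenced to 100 nm."""
    return C * (d_nm / 100.0) ** (-x)

# Given an updraft's maximum supersaturation, the activated fraction is
# simply the fraction of particles whose s_c falls below it.
d = np.logspace(1, 3, 200)                 # dry diameters, 10 nm to 1 um
s_c = critical_supersaturation(d)
activated = np.mean(s_c <= 0.3)            # fraction activated at 0.3%
```

This monotone-decreasing mapping from diameter to critical supersaturation is what lets the parameterization count activated droplets with a single threshold comparison.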
Robust H∞ Control for Singular Time-Delay Systems via Parameterized Lyapunov Functional Approach
Directory of Open Access Journals (Sweden)
Li-li Liu
2014-01-01
Full Text Available A new version of delay-dependent bounded real lemma for singular systems with state delay is established by parameterized Lyapunov-Krasovskii functional approach. In order to avoid generating nonconvex problem formulations in control design, a strategy that introduces slack matrices and decouples the system matrices from the Lyapunov-Krasovskii parameter matrices is used. Examples are provided to demonstrate that the results in this paper are less conservative than the existing corresponding ones in the literature.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
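The core of Tikhonov-regularized inversion described above is the penalized least-squares problem min ||Jp - d||² + λ||p - p0||², which keeps estimation stable even when parameters outnumber observations. The toy sketch below shows this mechanism with a synthetic Jacobian; it is not PEST itself or its input format.

```python
import numpy as np

# Tikhonov-regularized solve: the normal equations of
#   min ||J p - d||^2 + lam * ||p - p0||^2
# have the closed form (J^T J + lam I) p = J^T d + lam p0.
def tikhonov_solve(J, d, p0, lam):
    n = J.shape[1]
    lhs = J.T @ J + lam * np.eye(n)
    rhs = J.T @ d + lam * p0
    return np.linalg.solve(lhs, rhs)

# Under-determined toy problem: 3 observations, 5 parameters, the regime
# where unregularized calibration would be non-unique.
rng = np.random.default_rng(1)
J = rng.normal(size=(3, 5))
p_true = np.array([1.0, 0.5, -0.2, 0.0, 0.8])
d = J @ p_true
p0 = np.zeros(5)                    # preferred-value regularization target
p_hat = tikhonov_solve(J, d, p0, lam=1e-3)
```

Small λ favors fitting the data; large λ pulls the estimate toward the preferred values p0, which is the stabilizing trade-off the guide's Tikhonov scheme exposes to the modeler.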
Current state of aerosol nucleation parameterizations for air-quality and climate modeling
Semeniuk, Kirill; Dastoor, Ashu
2018-04-01
Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models, including classical nucleation theory based variants, empirical models and other formulations, have serious limitations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving binary (BHN) and ternary homogeneous nucleation (THN), including ion effects, these new models should be considered worthwhile replacements for the old ones. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation includes a distinct post-nucleation growth regime that is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds which help to overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models, and comprehensive parameterizations are lacking. This review considers the existing models and recent innovations.
A Heuristic Parameterization for the Integrated Vertical Overlap of Cumulus and Stratus
Park, Sungsu
2017-10-01
The author developed a heuristic parameterization to handle the contrasting vertical overlap structures of cumulus and stratus in an integrated way. The parameterization assumes that cumulus is maximum-randomly overlapped with adjacent cumulus; stratus is maximum-randomly overlapped with adjacent stratus; and radiation and precipitation areas at each model interface are grouped into four categories, that is, convective, stratiform, mixed, and clear areas. For simplicity, thermodynamic scalars within individual portions of cloud, radiation, and precipitation areas are assumed to be internally homogeneous. The parameterization was implemented into the Seoul National University Atmosphere Model version 0 (SAM0) in an offline mode and tested over the globe. The offline control simulation reasonably reproduces the online surface precipitation flux and longwave cloud radiative forcing (LWCF). Although the cumulus fraction is much smaller than the stratus fraction, cumulus dominantly contributes to precipitation production in the tropics. For radiation, however, stratus is dominant. Compared with the maximum overlap, the random overlap of stratus produces stronger LWCF and, surprisingly, more precipitation flux due to less evaporation of convective precipitation. Compared with the maximum overlap, the random overlap of cumulus simulates stronger LWCF and weaker precipitation flux. Compared with the control simulation with separate cumulus and stratus, the simulation with a single-merged cloud substantially enhances the LWCF in the tropical deep convection and midlatitude storm track regions. The process-splitting treatment of convective and stratiform precipitation with an independent precipitation approximation (IPA) simulates weaker surface precipitation flux than the control simulation in the tropical region.
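The maximum-random overlap rule assumed above has a standard closed form for total cloud cover. A minimal sketch of the generic formula (not SAM0's implementation): adjacent cloudy layers overlap maximally, while layers separated by clear air overlap randomly.

```python
def total_cloud_fraction_max_random(c):
    """Total cloud cover from layer cloud fractions c (ordered top to bottom)
    under maximum-random overlap: each pair of adjacent layers overlaps
    maximally; cloud decks separated by a clear layer overlap randomly."""
    clear = 1.0   # clear-sky fraction seen from above
    prev = 0.0    # cloud fraction of the layer immediately above
    for ck in c:
        if prev < 1.0:
            clear *= (1.0 - max(ck, prev)) / (1.0 - prev)
        prev = ck
    return 1.0 - clear

# Adjacent layers: maximum overlap, so total cover = max(0.3, 0.5) = 0.5
assert abs(total_cloud_fraction_max_random([0.3, 0.5]) - 0.5) < 1e-12
# Separated by a clear layer: random overlap, 1 - (1-0.3)(1-0.5) = 0.65
assert abs(total_cloud_fraction_max_random([0.3, 0.0, 0.5]) - 0.65) < 1e-12
```

The parameterization described above applies this kind of rule separately to cumulus and stratus and then combines the resulting convective, stratiform, mixed, and clear areas at each interface.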
De Meij, A.; Vinuesa, J.-F.; Maupas, V.
2018-05-01
The sensitivity of calculated global horizontal irradiance (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed, varying the microphysics, cumulus parameterization schemes and land surface models. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps improve the calculation of the density, size and lifetime of cloud droplets, whereas the single-moment schemes predict only the mass, and for fewer hydrometeor classes. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.
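The relative bias metric quoted above (45% vs. 10%) can be computed straightforwardly. A generic sketch with made-up numbers, not the study's data:

```python
def relative_bias(model, obs):
    """Relative bias (%) of modeled values against observations,
    computed on the totals of the two series."""
    m, o = sum(model), sum(obs)
    return 100.0 * (m - o) / o

# Hypothetical example: a run that overestimates irradiance by 10% everywhere
obs = [500.0, 700.0, 300.0]   # observed GHI samples (W m^-2)
mod = [v * 1.1 for v in obs]  # modeled GHI samples
assert abs(relative_bias(mod, obs) - 10.0) < 1e-9
```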
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift from the input domain to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately; the results of operating-trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and comparable to or better than results based on a parameterized data-driven artificial neural network model.
Exploring the potential of machine learning to break deadlock in convection parameterization
Pritchard, M. S.; Gentine, P.
2017-12-01
We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 TB) training dataset from time-step-level (30-min) output harvested from a one-year, zonally symmetric, uniform-SST aquaplanet integration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e., CRM-mean convective heating and moistening profiles. The sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
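The regression task described above, state profiles in, heating profiles out, can be sketched with a toy one-hidden-layer network trained on synthetic data. This is an illustrative stand-in, not the authors' TensorFlow setup; the dimensions, learning rate, and synthetic targets are all invented:

```python
import numpy as np

# Toy emulator: map a coarse-grid thermodynamic state vector (n_in levels)
# to a convective heating profile (n_out levels) with one tanh hidden layer,
# trained by plain batch gradient descent on synthetic data.
rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 8, 16, 8
X = rng.normal(size=(256, n_in))                  # synthetic "training instances"
Y = np.tanh(X @ rng.normal(size=(n_in, n_out)))   # synthetic target profiles

W1 = rng.normal(size=(n_in, n_hidden)) * 0.1
W2 = rng.normal(size=(n_hidden, n_out)) * 0.1

losses, lr = [], 0.05
for _ in range(200):
    H = np.tanh(X @ W1)            # hidden activations
    err = H @ W2 - Y               # prediction error
    losses.append(float(np.mean(err ** 2)))
    gW2 = H.T @ err / len(X)                            # output-layer gradient
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)  # hidden-layer gradient
    W2 -= lr * gW2
    W1 -= lr * gW1

assert losses[-1] < losses[0]  # training reduces the mean-squared error
```

The real problem differs mainly in scale (143M instances, deeper networks) and in the inline coupling back into the climate model, but the input/output structure is the same.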
Energy Technology Data Exchange (ETDEWEB)
Charles, T.K. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia); Paganin, D.M. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Dowd, R.T. [Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia)
2016-08-21
Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources and, as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model, which employs various approximations to produce a more compact expression at the cost of reduced accuracy.
Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment
Directory of Open Access Journals (Sweden)
Gerald Krebs
2016-10-01
Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining a detailed surface discretization for direct parameter manipulation for LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.
Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.
2015-01-01
We use the mesoscale modeling capability of Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and it retains some of the PBL schemes available in the earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to lack of observations. It is of interest though to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations, a non-local closure scheme called Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called Mellor- Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near surface temperature profiles, boundary layer heights, and wind obtained from the different simulations. We plan to use available temperature observations from Mini TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.
DEFF Research Database (Denmark)
Andersen, Torben Juul
approaches to dealing in the global business environment." - Sharon Brown-Hruska, Commissioner, Commodity Futures Trading Commission, USA. "This comprehensive survey of modern risk management using derivative securities is a fine demonstration of the practical relevance of modern derivatives theory to risk......" provides comprehensive coverage of different types of derivatives, including exchange traded contracts and over-the-counter instruments as well as real options. There is an equal emphasis on the practical application of derivatives and their actual uses in business transactions and corporate risk...... management situations. Its key features include: derivatives are introduced in a global market perspective; describes major derivative pricing models for practical use, extending these principles to valuation of real options; practical applications of derivative instruments are richly illustrated...
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely in the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; the simulation is then divided into several independent integrations. Altogether, 20 simulations were performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding parameterization scheme performance, every set provides very
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the output of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results that display lower variability than observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and a fine vertical resolution suitable to capture the effects of wave-mean flow interaction. We compare the results obtained with two
Vorobyov, E. I.
2010-01-01
We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ . We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ≲ 0.2- 0.3 , thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al.'s α-parameterization has proven superior to that of Lin and Pringle's, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ≳ 0.3 , particularly in the Classes 0 and I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by a growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and the mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ≳ 0.3 . A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve
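The α-parameterizations compared above share the generic Shakura-Sunyaev form ν = α c_s² / Ω; the prescriptions of Lin & Pringle and Kratter et al. differ in how α depends on the local disk state. A trivial sketch of the generic form (units and numbers are illustrative only, not from the paper):

```python
def effective_viscosity(alpha, c_s, omega):
    """Shakura-Sunyaev-type effective viscosity nu = alpha * c_s^2 / Omega,
    with c_s the local sound speed and omega the orbital angular frequency.
    The alpha prescriptions that mimic gravitational instability make alpha
    a function of the local disk state (e.g. the Toomre Q parameter)."""
    return alpha * c_s ** 2 / omega

# Hypothetical disk conditions: c_s = 200 m/s, Omega = 2e-10 s^-1, alpha = 0.01
nu = effective_viscosity(alpha=0.01, c_s=2.0e2, omega=2.0e-10)
assert nu > 0.0  # nu = 2e12 m^2/s for these toy values
```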
Barone, Vincenzo; Cacelli, Ivo; De Mitri, Nicola; Licari, Daniele; Monti, Susanna; Prampolini, Giacomo
2013-03-21
The Joyce program is augmented with several new features, including the user friendly Ulysses GUI, the possibility of complete excited state parameterization and a more flexible treatment of the force field electrostatic terms. A first validation is achieved by successfully comparing results obtained with Joyce2.0 to literature ones, obtained for the same set of benchmark molecules. The parameterization protocol is also applied to two other larger molecules, namely nicotine and a coumarin based dye. In the former case, the parameterized force field is employed in molecular dynamics simulations of solvated nicotine, and the solute conformational distribution at room temperature is discussed. Force fields parameterized with Joyce2.0, for both the dye's ground and first excited electronic states, are validated through the calculation of absorption and emission vertical energies with molecular mechanics optimized structures. Finally, the newly implemented procedure to handle polarizable force fields is discussed and applied to the pyrimidine molecule as a test case.
The urban land use in the COSMO-CLM model: a comparison of three parameterizations for Berlin
Directory of Open Access Journals (Sweden)
Kristina Trusilova
2016-05-01
The regional non-hydrostatic climate model COSMO-CLM is increasingly being used on fine spatial scales of 1–5 km. Such applications require a detailed differentiation between the parameterizations for natural and urban land uses. Since 2010, three parameterizations for urban land use have been incorporated into COSMO-CLM. These parameterizations vary in complexity, required city parameters and computational cost. We perform model simulations with COSMO-CLM coupled to these three parameterizations for urban land in the same model domain of Berlin on a 1-km grid and compare the results with available temperature observations. While all models capture the urban heat island, they differ in spatial detail, magnitude and diurnal variation.
Sraj, Ihab; Zedler, Sarah E.; Knio, Omar; Jackson, Charles S.; Hoteit, Ibrahim
2016-01-01
The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are instead estimated and used as fixed values, the relative mean bias deviation (MBD) and the relative root mean squared deviation (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. Calibration of LDR parameterizations to local conditions
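As an illustration of the clear-sky LDR parameterization family calibrated in such studies, here is a sketch using the classic Brutsaert (1975) clear-sky emissivity, one well-known member of this family, though not necessarily among the twelve evaluated here:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def ldr_clear_sky_brutsaert(t_air, e_air):
    """Clear-sky longwave downward radiation (W m^-2) from screen-level air
    temperature t_air (K) and water vapor pressure e_air (hPa), using the
    Brutsaert (1975) clear-sky emissivity eps = 1.24 * (e/T)^(1/7).
    Calibration to local conditions typically adjusts the coefficients."""
    emissivity = 1.24 * (e_air / t_air) ** (1.0 / 7.0)
    return emissivity * SIGMA * t_air ** 4

# Typical mid-latitude conditions: 15 degC, 10 hPa vapor pressure
ldr = ldr_clear_sky_brutsaert(t_air=288.15, e_air=10.0)
assert 200.0 < ldr < 400.0  # plausible clear-sky LDR range
```

All-sky estimates then scale such a clear-sky value by a cloud correction derived from the cloud transmissivity mentioned above.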
Directory of Open Access Journals (Sweden)
A. Verhoef
1997-01-01
Values of the momentum roughness length, z0, and displacement height, d, derived from wind profiles and momentum flux measurements, are selected from the literature for a variety of sparse canopies. These include savannah, tiger-bush and several row crops. A quality assessment of these data, conducted using criteria such as available fetch, height of wind speed measurement and homogeneity of the experimental site, reduced the initial total of fourteen sites to eight. These data points, combined with values carried forward from earlier studies on the parameterization of z0 and d, led to a maximum of 16 and 24 data points available for d and z0, respectively. The data are compared with estimates of roughness length and displacement height as predicted by a detailed drag partition model, R92 (Raupach, 1992), and a simplified version of this model, R94 (Raupach, 1994). A key parameter in these models is the roughness density or frontal area index, λ. Both the comprehensive and the simplified model give accurate predictions of measured z0 and d values, but the optimal model coefficients are significantly different from the ones originally proposed in R92 and R94. The original model coefficients are based predominantly on measured aerodynamic parameters of relatively closed canopies, and they were fitted `by eye'. In this paper, best-fit coefficients are found from a least-squares minimization using the z0 and d values of selected good-quality data for sparse canopies and for the added, mainly closed, canopies. According to a statistical analysis based on the coefficient of determination (r2), the number of observations and the number of fitted model coefficients, the simplified model, R94, is deemed the most appropriate for future z0 and d predictions. A CR value of 0.35 and a cd1 value of about 20 are found to be appropriate for a large range of canopies varying in density from closed to very sparse. In this case, 99% of the total variance
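The simplified R94 model favored above can be sketched as follows, using the best-fit coefficients quoted in the text (CR = 0.35, cd1 ≈ 20). The substrate drag coefficient and the cap on u*/U are commonly cited Raupach (1994) values and should be treated as assumptions; the roughness-sublayer correction term ψ_h of the full model is omitted here for simplicity:

```python
import math

KAPPA = 0.4        # von Karman constant
C_S = 0.003        # substrate drag coefficient (assumed Raupach 1994 value)
C_R = 0.35         # roughness-element drag coefficient CR (best fit in the text)
C_D1 = 20.0        # free coefficient cd1 (best fit in the text)
USTAR_U_MAX = 0.3  # assumed cap on u*/U_h for dense canopies

def raupach_d_z0(lam, h):
    """Displacement height d and roughness length z0 (same units as canopy
    height h) from the frontal area index lam, in the spirit of the
    simplified R94 drag-partition model. Illustrative sketch only."""
    x = math.sqrt(C_D1 * lam)
    d = h * (1.0 - (1.0 - math.exp(-x)) / x)
    ustar_over_u = min(math.sqrt(C_S + C_R * lam), USTAR_U_MAX)
    z0 = (h - d) * math.exp(-KAPPA / ustar_over_u)
    return d, z0

d, z0 = raupach_d_z0(lam=0.2, h=1.0)   # a moderately sparse canopy
assert 0.0 < z0 < d < 1.0              # plausible fractions of canopy height
```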
Abdulsttar, Marwah M.; Al-Rubaiee, A. A.; Ali, Abdul Halim Kh.
2016-01-01
The Cherenkov light lateral distribution function (CLLDF) was simulated using the CORSIKA code for configurations of the Tunka EAS array at different zenith angles. The parameterization of the CLLDF was carried out as a function of the distance from the shower core in extensive air showers (EAS) and of the zenith angle, on the basis of CORSIKA simulations of primary protons around the knee region with energy 3×10^15 eV at different zenith angles. The parameterized CLLDF is verified in comparison...
DEFF Research Database (Denmark)
Wigan, Duncan
2013-01-01
Contemporary derivatives mark the development of capital and constitute a novel form of ownership. By reconfiguring the temporal, spatial and legal character of ownership derivatives present a substantive challenge to the tax collecting state. While fiscal systems are nationally bounded...... and inherently static, capital itself is unprecedentedly mobile, fluid and fungible. As such derivatives raise the specter of ‘financial weapons of mass destruction'....
Janečková, Alena
2011-01-01
1 Abstract/ Financial derivatives The purpose of this thesis is to provide an introduction to financial derivatives, which have so far been described from the legal perspective in an unsatisfactory manner, as rather little literature can be found on this topic. The main objectives of this thesis are to define the term "financial derivatives" and its particular types, and to analyse the legal nature of these financial instruments. The last objective is to attempt to draft future legal regulation of finan...
Directory of Open Access Journals (Sweden)
A. Zuend
2011-09-01
We present a new and considerably extended parameterization of the thermodynamic activity coefficient model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) at room temperature. AIOMFAC combines a Pitzer-like electrolyte solution model with a UNIFAC-based group-contribution approach and explicitly accounts for interactions between organic functional groups and inorganic ions. Such interactions constitute the salt effect, may cause liquid-liquid phase separation, and affect the gas-particle partitioning of aerosols. The previous AIOMFAC version was parameterized for alkyl and hydroxyl functional groups of alcohols and polyols. With the goal of describing a wide variety of organic compounds found in atmospheric aerosols, we extend here the parameterization of AIOMFAC to include the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkenyl, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon. Thermodynamic equilibrium data of organic-inorganic systems from the literature are critically assessed and complemented with new measurements to establish a comprehensive database. The database is used to determine simultaneously the AIOMFAC parameters describing interactions of organic functional groups with the ions H^{+}, Li^{+}, Na^{+}, K^{+}, NH_{4}^{+}, Mg^{2+}, Ca^{2+}, Cl^{−}, Br^{−}, NO_{3}^{−}, HSO_{4}^{−}, and SO_{4}^{2−}. Detailed descriptions of the different types of thermodynamic data, such as vapor-liquid, solid-liquid, and liquid-liquid equilibria, and their use for the model parameterization are provided. Issues regarding deficiencies of the database, types and uncertainties of experimental data, and limitations of the model are discussed. The challenging parameter optimization problem is solved with a novel combination of powerful global minimization
WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'
Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne
2015-10-01
Numerical weather modelling has gained considerable attention in the field of hydrology, especially for un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. The WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. The study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the York Flood 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
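The categorical indices used above (POD, FBI, FAR, CSI) follow from a standard 2×2 contingency table of forecast versus observed rain occurrence; a minimal sketch with made-up counts:

```python
def categorical_scores(hits, false_alarms, misses):
    """Standard categorical verification indices from a 2x2 contingency table:
    hits = event forecast and observed, false_alarms = forecast but not
    observed, misses = observed but not forecast."""
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    fbi = (hits + false_alarms) / (hits + misses)  # frequency bias index
    csi = hits / (hits + false_alarms + misses)    # critical success index
    return pod, far, fbi, csi

# Hypothetical grid-point counts for one forecast
pod, far, fbi, csi = categorical_scores(hits=80, false_alarms=20, misses=20)
assert abs(pod - 0.8) < 1e-12
assert abs(far - 0.2) < 1e-12
assert abs(fbi - 1.0) < 1e-12   # unbiased event frequency
assert abs(csi - 80 / 120) < 1e-12
```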
International Nuclear Information System (INIS)
Belo, Thiago F.; Fiel, Joao Claudio B.
2015-01-01
Nuclear reactor core analysis involves neutronic modeling, and the calculations require problem-dependent nuclear data generated with few neutron energy groups, for instance the neutron cross sections. The methods used to obtain these problem-dependent cross sections in reactor calculations generally use nuclear computer codes that require a large processing time and computational memory, making the process computationally very expensive. Analysis of the macroscopic cross section as a function of nuclear parameters has shown a very distinct behavior that cannot be represented by simple linear interpolation; a polynomial representation is more adequate for the data parameterization. To provide the cross sections rapidly and without dependence on complex system calculations, this work developed a set of parameterized cross sections, based on the Tchebychev polynomials, by fitting the cross sections as a function of nuclear parameters, which include fuel temperature, moderator temperature and density, soluble boron concentration, uranium enrichment, and burn-up. This study evaluates the problem-dependent fission, scattering, total, nu-fission, capture, transport and absorption cross sections for a typical PWR fuel element, considering the burn-up cycle. The analysis was carried out with the SCALE 6.1 code package. The results of comparison with direct calculations with the SCALE code system, and also tests using project parameters such as the temperature coefficient of reactivity and the fast fission factor, show excellent agreement. The differences between the cross-section parameterization methodology and the direct calculations based on the SCALE code system are less than 0.03 percent. (author)
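The Chebyshev-fit idea above can be sketched in a few lines with NumPy's polynomial module. The temperatures and cross-section values below are hypothetical illustrations, not data from the study, and a real parameterization would fit several nuclear parameters jointly rather than a single one:

```python
import numpy as np

# Hypothetical data: a macroscopic absorption cross section (1/cm) sampled
# at several fuel temperatures (K); the values are illustrative only.
temps = np.array([600.0, 700.0, 800.0, 900.0, 1000.0, 1100.0, 1200.0])
sigma = np.array([0.1210, 0.1198, 0.1187, 0.1178, 0.1170, 0.1163, 0.1157])

# Fit a low-order Chebyshev series; NumPy maps temps onto [-1, 1] internally.
cheb = np.polynomial.Chebyshev.fit(temps, sigma, deg=3)

# Evaluate the parameterized cross section at an off-grid temperature,
# avoiding a full transport calculation at that state point.
sigma_850 = cheb(850.0)

# The fit should reproduce the tabulated points closely.
max_err = max(abs(cheb(t) - s) for t, s in zip(temps, sigma))
```

The appeal of this approach, as in the abstract, is that evaluating a fitted polynomial is orders of magnitude cheaper than re-running the lattice code for every state point.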
Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.
2018-04-01
This study attempts to investigate the real-time prediction of a heavy rainfall event over the Chennai Metropolitan City, Tamil Nadu, India that occurred on 01 December 2015 using the Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) parameterization schemes of the model on prediction of the heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. Model forecasts were carried out using a nested domain, and the impact of model horizontal grid resolution was assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed that strong upper-level divergence and high moisture content at lower levels were favorable for the occurrence of the heavy rainfall event over the northeast coast of Tamil Nadu. The study signified that forecasted rainfall was more sensitive to the microphysics and PBL schemes than to the LSM schemes. The model provided a better forecast of the heavy rainfall event using the combination of Goddard microphysics, YSU PBL and Noah LSM schemes, and this was mostly attributed to timely initiation and development of the convective system. Forecasts with different horizontal resolutions using cumulus parameterization indicated that the rainfall prediction was not well represented at 9 km and 6 km. The forecast with 3 km horizontal resolution provided better prediction in terms of timely initiation and development of the event. The study highlights that forecasts of heavy rainfall events using a high-resolution mesoscale model with suitable representations of physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection onto the non-conservative variables is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
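One way to read the dry-updraft closure described above is as entrainment/detrainment rates proportional to the ratio of parcel buoyancy to squared updraft velocity. The sketch below uses assumed constants and an assumed w² normalization for illustration; it is not the scheme's published formulation:

```python
# Minimal sketch of a buoyancy/velocity-based entrainment-detrainment closure:
# the parcel entrains where it is buoyant and detrains where it is negatively
# buoyant. C_EPS, C_DELTA and the w^2 normalization are illustrative
# assumptions, not the values of the Pergaud et al. scheme.
C_EPS = 0.55   # assumed entrainment constant
C_DELTA = 1.0  # assumed detrainment constant

def entrainment(B, w):
    """Entrainment rate (1/m); active only for positive buoyancy B (m s^-2)."""
    return C_EPS * max(B, 0.0) / max(w * w, 1e-6)

def detrainment(B, w):
    """Detrainment rate (1/m); active only for negative buoyancy."""
    return C_DELTA * max(-B, 0.0) / max(w * w, 1e-6)

eps = entrainment(0.01, 1.5)    # buoyant parcel: entrains
delta = detrainment(0.01, 1.5)  # buoyant parcel: no detrainment
```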
Modelling and parameterizing the influence of tides on ice-shelf melt rates
Jourdain, N.; Molines, J. M.; Le Sommer, J.; Mathiot, P.; de Lavergne, C.; Gurvan, M.; Durand, G.
2017-12-01
Significant Antarctic ice sheet thinning is observed in several sectors of Antarctica, in particular in the Amundsen Sea sector, where warm circumpolar deep waters affect basal melting. The latter has the potential to trigger marine ice sheet instabilities, with an associated potential for rapid sea level rise. It is therefore crucial to simulate and understand the processes associated with ice-shelf melt rates. In particular, the absence of a representation of tides in ocean models remains a caveat of numerous ocean hindcasts and climate projections. In the Amundsen Sea, tides are relatively weak and the melt-induced circulation is stronger than the tidal circulation. Using a regional 1/12° ocean model of the Amundsen Sea, we nonetheless find that tides can increase melt rates by up to 36% in some ice-shelf cavities. Among the processes that can possibly affect melt rates, the most important is an increased exchange at the ice/ocean interface resulting from the presence of strong tidal currents along the ice drafts. Approximately a third of this effect is compensated by a decrease in thermal forcing along the ice draft, which is related to enhanced vertical mixing in the ocean interior in the presence of tides. Parameterizing the effect of tides is an alternative to the representation of explicit tides in an ocean model, and has the advantage of not requiring any filtering of ocean model outputs. We therefore explore different ways to parameterize the effects of tides on ice shelf melt. First, we compare several methods to impose tidal velocities along the ice draft. We show that obtaining a realistic spatial distribution of tidal velocities is important, and that it can be deduced from the barotropic velocities of a tide model. Then, we explore several aspects of parameterized tidal mixing to reproduce the tide-induced decrease in thermal forcing along the ice drafts.
Morphing methods to parameterize specimen-specific finite element model geometries.
Sigal, Ian A; Yang, Hongli; Roberts, Michael D; Downs, J Crawford
2010-01-19
Shape plays an important role in determining the biomechanical response of a structure. Specimen-specific finite element (FE) models have been developed to capture the details of the shape of biological structures and predict their biomechanics. Shape, however, can vary considerably across individuals or change due to aging or disease, and analysis of the sensitivity of specimen-specific models to these variations has proven challenging. An alternative to specimen-specific representation has been to develop generic models with simplified geometries whose shape is relatively easy to parameterize, and can therefore be readily used in sensitivity studies. Despite many successful applications, generic models are limited in that they cannot make predictions for individual specimens. We propose that it is possible to harness the detail available in specimen-specific models while leveraging the power of the parameterization techniques common in generic models. In this work we show that this can be accomplished by using morphing techniques to parameterize the geometry of specimen-specific FE models such that the model shape can be varied in a controlled and systematic way suitable for sensitivity analysis. We demonstrate three morphing techniques by using them on a model of the load-bearing tissues of the posterior pole of the eye. We show that using relatively straightforward procedures these morphing techniques can be combined, which allows the study of factor interactions. Finally, we illustrate that the techniques can be used in other systems by applying them to morph a femur. Morphing techniques provide an exciting new possibility for the analysis of the biomechanical role of shape, independently or in interaction with loading and material properties. Copyright 2009 Elsevier Ltd. All rights reserved.
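A minimal sketch of the parameterization idea above: node coordinates of a specimen-specific mesh are blended toward a target shape with a single morphing parameter, leaving element connectivity untouched. This linear blend is an illustrative stand-in for the paper's morphing techniques, not a reproduction of them:

```python
# Blend node coordinates of a specimen-specific mesh toward a target shape.
# Varying w in [0, 1] produces a controlled family of geometries suitable
# for sensitivity analysis; element connectivity is unchanged.
def morph(base_nodes, target_nodes, w):
    """Return node coordinates interpolated between two meshes (0 <= w <= 1)."""
    return [
        tuple((1.0 - w) * b + w * t for b, t in zip(bn, tn))
        for bn, tn in zip(base_nodes, target_nodes)
    ]

base = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]     # reference triangle
target = [(0.0, 0.0), (1.2, 0.1), (-0.1, 1.3)]  # hypothetical morph target

halfway = morph(base, target, 0.5)  # intermediate geometry
```

Because morphs of this kind compose, several shape factors (e.g. two independent targets) can be applied in sequence, which is what enables the factor-interaction studies mentioned in the abstract.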
Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.
Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the regression slope between simulated and measured annualized loads across all site-years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
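The Nash-Sutcliffe efficiency used above is straightforward to compute; a minimal sketch on hypothetical annual loads (1.0 is a perfect match, while values at or below 0 mean the model is no better than the observed mean):

```python
# NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency of simulated vs. observed values."""
    mean_obs = sum(observed) / len(observed)
    num = sum((s - o) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [120.0, 85.0, 140.0, 60.0, 95.0]   # hypothetical measured annual loads
sim = [110.0, 90.0, 150.0, 55.0, 100.0]  # hypothetical simulated loads

score = nse(obs, sim)  # here around 0.93, i.e. a good fit
```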
Energy Technology Data Exchange (ETDEWEB)
Haubrock, J.
2007-12-13
Fuel cells are a very promising technology for energy conversion. For optimization purposes, useful simulation tools are needed. Such tools should simulate the static and dynamic electrical behaviour, and their models should be parameterized from measurement results that can be obtained easily. In this dissertation, a model for simulating a PEM fuel cell is developed. The model is parameterized from V-I curve measurements and from the current step response. It is based on electrical equivalent circuits, and it is shown that it can simulate the dynamic behaviour of a PEM fuel cell stack. The simulation results are compared with measurement results. (orig.)
DEFF Research Database (Denmark)
Yoon, Gil Ho; Kim, Y.Y.; Langelaar, M.
2008-01-01
The internal element connectivity parameterization (I-ECP) method is an alternative approach to overcome numerical instabilities associated with low-stiffness element states in non-linear problems. In I-ECP, elements are connected by zero-length links while their link stiffness values are varied....... Therefore, it is important to interpolate link stiffness properly to obtain stably converging results. The main objective of this work is two-fold: (1) the investigation of the relationship between the link stiffness and the stiffness of a domain-discretizing patch by using a discrete model and a homogenized...
Magic neutrino mass matrix and the Bjorken-Harrison-Scott parameterization
International Nuclear Information System (INIS)
Lam, C.S.
2006-01-01
Observed neutrino mixing can be described by a tribimaximal MNS matrix. The resulting neutrino mass matrix in the basis of a diagonal charged lepton mass matrix is both 2-3 symmetric and magic. By a magic matrix, I mean one whose row sums and column sums are all identical. I study what happens if 2-3 symmetry is broken but the magic symmetry is kept intact. In that case, the mixing matrix is parameterized by a single complex parameter U_e3, in a form discussed recently by Bjorken, Harrison, and Scott
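The magic property described above (all row sums and all column sums identical) is easy to verify numerically; the matrix below is an illustrative 2-3 symmetric example, not a fitted neutrino mass matrix:

```python
# Check the "magic" property: every row sum and every column sum is equal.
def is_magic(m, tol=1e-12):
    """True if all row sums and column sums of matrix m agree within tol."""
    sums = [sum(row) for row in m] + [sum(col) for col in zip(*m)]
    return max(sums) - min(sums) < tol

# An illustrative 2-3 symmetric magic matrix: every row and column sums to 1.
m_magic = [
    [0.40, 0.30, 0.30],
    [0.30, 0.35, 0.35],
    [0.30, 0.35, 0.35],
]
```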
Physically sound parameterization of incomplete ionization in aluminum-doped silicon
Directory of Open Access Journals (Sweden)
Heiko Steinkemper
2016-12-01
Full Text Available Incomplete ionization is an important issue when modeling silicon devices featuring aluminum-doped p+ (Al-p+) regions. Aluminum has a rather deep state in the band gap compared to boron or phosphorus, causing strong incomplete ionization. In this paper, we considerably improve our recent parameterization [Steinkemper et al., J. Appl. Phys. 117, 074504 (2015)]. On the one hand, we found a fundamental criterion to further reduce the number of free parameters in our fitting procedure; on the other hand, we address a mistake in the original publication of the incomplete ionization formalism in Altermatt et al., J. Appl. Phys. 100, 113715 (2006).
Impact mitigation using kinematic constraints and the full space parameterization method
Energy Technology Data Exchange (ETDEWEB)
Morgansen, K.A.; Pin, F.G.
1996-02-01
A new method for mitigating unexpected impact of a redundant manipulator with an object in its environment is presented. Kinematic constraints are utilized with the recently developed method known as Full Space Parameterization (FSP). System performance criterion and constraints are changed at impact to return the end effector to the point of impact and halt the arm. Since large joint accelerations could occur as the manipulator is halted, joint acceleration bounds are imposed to simulate physical actuator limitations. Simulation results are presented for the case of a simple redundant planar manipulator.
Real-time image parameterization in high energy gamma-ray astronomy using transputers
International Nuclear Information System (INIS)
Punch, M.; Fegan, D.J.
1991-01-01
Recently, significant advances in Very-High-Energy gamma-ray astronomy have been made by parameterization of the Cherenkov images arising from gamma-ray initiated showers in the Earth's atmosphere. A prototype system to evaluate the use of Transputers as parallel-processing elements for real-time analysis of data from a Cherenkov imaging camera is described in this paper. The operation of, and benefits resulting from, such a system are described, and the viability of an application of the prototype system is discussed
On the sensitivity of mesoscale models to surface-layer parameterization constants
Garratt, J. R.; Pielke, R. A.
1989-09-01
The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, the Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure, and of 2D sea-breeze intensity, is generally less than that found in published comparisons of turbulence closure schemes.
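The kind of sensitivity at issue can be illustrated with the neutral log-law wind profile u(z) = (u*/κ) ln(z/z0): changing the von Karman constant between two published values rescales the whole profile. The friction velocity and roughness length below are assumed, illustrative values:

```python
import math

def log_wind(z, ustar, kappa, z0):
    """Neutral surface-layer wind speed (m/s) at height z (m) via the log law."""
    return (ustar / kappa) * math.log(z / z0)

# Same flow, two published values of the von Karman constant.
u_040 = log_wind(10.0, 0.3, 0.40, 0.01)  # kappa = 0.40
u_035 = log_wind(10.0, 0.3, 0.35, 0.01)  # kappa = 0.35

# The diagnosed wind changes by kappa_old/kappa_new - 1, about 14% here,
# purely from the choice of "constant".
rel_change = (u_035 - u_040) / u_040
```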
Parameterization of atmosphere-surface exchange of CO_{2} over sea ice
DEFF Research Database (Denmark)
Sørensen, L. L.; Jensen, B.; Glud, Ronnie N.
2014-01-01
are discussed. We found the flux to be small during the late winter, with fluxes in both directions. Not surprisingly, we find that the resistance across the surface controls the fluxes, and detailed knowledge of the brine volume and carbon chemistry within the brines, as well as knowledge of snow cover and carbon...... chemistry in the ice, are essential to estimate the partial pressure of CO2 (pCO2) and the CO2 flux. Further investigations of surface structure and snow cover and driving parameters such as heat flux, radiation, ice temperature and brine processes are required to adequately parameterize the surface resistance....
DEFF Research Database (Denmark)
Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias
2010-01-01
A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes...... the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables....
The cloud-phase feedback in the Super-parameterized Community Earth System Model
Burt, M. A.; Randall, D. A.
2016-12-01
Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used: this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
Directory of Open Access Journals (Sweden)
Rachel R. Sleeter
2015-06-01
Full Text Available Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
Vahmani, P.; Hogue, T. S.
2013-12-01
Regional meteorological models are increasingly being applied in urban areas. Accurate representation of urban surface physical characteristics in these models is critical for predictions of surface-atmosphere fluxes of sensible heat, latent heat, and momentum, which in turn affect weather and climate forecasting capabilities. Yet the specification of surface parameters largely relies on outdated land-use maps and lookup tables. In this contribution, we use the Noah LSM (Land Surface Model)-SLUCM (Single Layer Urban Canopy Model) modeling framework to investigate the usefulness of remotely sensed data in the model parameterization and validation processes, the sensitivity of the model to the defined parameters, and the model's performance improvement when the new parameter sets are implemented. Fused Landsat ETM and MODIS data are used to generate high-resolution (30 m) spatial maps of monthly GVF (Green Vegetation Fraction), ISA (Impervious Surface Area), LAI (Leaf Area Index), albedo, and emissivity over the Los Angeles metropolitan area, which are then directly implemented in the model simulations. Parameters derived from remote sensing platforms show significant temporal and spatial differences from traditional Noah LSM values. For example, GVF shows significantly less seasonal variability, reflecting the impact of heavy year-round irrigation in the study domain, which is not accounted for in the default parameters. Assimilating remotely sensed model parameters into Noah/SLUCM results in significant changes in the simulated energy and water fluxes over the study area. The results show a high sensitivity of model simulations to all investigated parameters except for emissivity. Finally, the model's performance is evaluated utilizing Landsat-based land surface temperature and evapotranspiration measurements from CIMIS (California Irrigation Management Information System) stations. Results reveal that the surface energy and water budget estimation accuracies are
What do parameterized Om(z) diagnostics tell us in light of recent observations?
Qi, Jing-Zhao; Cao, Shuo; Biesiada, Marek; Xu, Teng-Peng; Wu, Yan; Zhang, Si-Xuan; Zhu, Zong-Hong
2018-06-01
In this paper, we propose a new parametrization for Om(z) diagnostics and show how the most recent and significantly improved observations concerning the H(z) and SN Ia measurements can be used to probe the consistency or tension between the ΛCDM model and observations. Our results demonstrate that H_0 plays a very important role in the consistency test of ΛCDM with H(z) data. Adopting the Hubble constant priors from Planck 2013 and Riess, one finds considerable tension between the current H(z) data and the ΛCDM model, confirming conclusions obtained previously by others. However, with the Hubble constant prior taken from WMAP9, the discrepancy between H(z) data and ΛCDM disappears, i.e., the current H(z) observations still support the cosmological constant scenario. This conclusion is also supported by the results derived from the Joint Light-curve Analysis (JLA) SN Ia sample. The best-fit Hubble constant from the combination of H(z)+JLA (H_0 = 68.81 +1.50/−1.49 km s^−1 Mpc^−1) is very consistent with the results derived by both Planck 2013 and WMAP9, but is significantly different from the recent local measurement by Riess.
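For reference, the standard Om(z) diagnostic that such analyses build on takes the form Om(z) = (E²(z) − 1)/((1+z)³ − 1) with E = H(z)/H_0: in flat ΛCDM it is constant and equal to Ω_m, so any trend with redshift signals tension. A minimal sketch of this standard diagnostic (the paper's own re-parameterization is not reproduced here):

```python
# Standard Om(z) diagnostic: Om(z) = (E^2 - 1) / ((1+z)^3 - 1), E = H(z)/H0.
def om_diagnostic(z, hz, h0):
    e2 = (hz / h0) ** 2
    return (e2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

# Consistency check against a fiducial flat LambdaCDM model (Omega_m = 0.3):
# the diagnostic should return Omega_m at every redshift.
H0 = 70.0  # illustrative Hubble constant, km/s/Mpc

def h_lcdm(z, om=0.3):
    """H(z) for flat LambdaCDM with matter density om."""
    return H0 * (om * (1.0 + z) ** 3 + 1.0 - om) ** 0.5

om_at_05 = om_diagnostic(0.5, h_lcdm(0.5), H0)
om_at_15 = om_diagnostic(1.5, h_lcdm(1.5), H0)
```

Applied to real H(z) measurements, a redshift dependence of this quantity (or a mismatch with the CMB-inferred Ω_m) is exactly the kind of tension the abstract discusses, and the inferred value shifts with the assumed H_0 prior.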
Directory of Open Access Journals (Sweden)
Y. Chen
2018-01-01
Full Text Available The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO–MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10–25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3−]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3−] was significantly overestimated for this period by a factor of 5–19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. NewN2O5 significantly reduces the overestimation of [NO3−] by ∼ 35 %. In particular, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17–18 and 25 September 2013) when [NO3−] was dominated by local chemical formation. In our case, the suppression of organic coating
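For background, heterogeneous uptake of this kind is commonly expressed as a first-order loss rate k_het = γ · c̄ · S / 4, which makes the roles of T (through the mean molecular speed c̄) and S explicit; the dependences on RH and particle composition enter through the uptake coefficient γ. The γ and S values below are illustrative assumptions, not the NewN2O5 parameterization itself:

```python
import math

R = 8.314        # gas constant, J mol^-1 K^-1
M_N2O5 = 0.108   # molar mass of N2O5, kg mol^-1

def mean_speed(T):
    """Mean molecular speed (m/s) of N2O5 from kinetic theory: sqrt(8RT/(pi*M))."""
    return math.sqrt(8.0 * R * T / (math.pi * M_N2O5))

def k_het(gamma, T, S):
    """First-order heterogeneous loss rate (1/s): gamma * c_mean * S / 4.

    S is the aerosol surface area concentration in m^2 per m^3 of air.
    """
    return 0.25 * gamma * mean_speed(T) * S

# Assumed uptake coefficient and surface area, purely for illustration.
k = k_het(gamma=0.02, T=288.0, S=2e-4)
```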
Energy Technology Data Exchange (ETDEWEB)
Russell, Lynn M. [Univ. of California, San Diego, CA (United States); Somerville, Richard C.J. [Univ. of California, San Diego, CA (United States); Burrows, Susannah [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rasch, Phil [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-12-12
Description of the Project: This project has improved the aerosol formulation in a global climate model by using innovative new field and laboratory observations to develop and implement a novel wind-driven sea ice aerosol flux parameterization. This work fills a critical gap in the understanding of clouds, aerosol, and radiation in polar regions by addressing one of the largest missing particle sources in aerosol-climate modeling. Recent measurements of Arctic organic and inorganic aerosol indicate that the largest source of natural aerosol during the Arctic winter is emitted from crystal structures, known as frost flowers, formed on a newly frozen sea ice surface [Shaw et al., 2010]. We have implemented the new parameterization in an updated climate model, making it the first capable of investigating how polar natural aerosol-cloud indirect effects relate to this important and previously unrecognized sea ice source. The parameterization is constrained by Arctic ARM in situ cloud and radiation data. The modified climate model has been used to quantify the potential pan-Arctic radiative forcing and aerosol indirect effects due to this missing source. This research supported the work of one postdoc (Li Xu) for two years and contributed to the training and research of an undergraduate student. This research allowed us to establish a collaboration between SIO and PNNL in order to contribute the frost flower parameterization to the new ACME model. One peer-reviewed publication has already resulted from this work, and a manuscript for a second publication has been completed. Additional publications from the PNNL collaboration are expected to follow.
Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1
Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong
2017-05-01
Turbulent drag caused by subgrid orographic form drag has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography compared to the simulation without subgrid orographic effects. It is shown that the TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed from the BBW04 scheme agrees better with the ERA-Interim reanalysis and, as a direct method, is more sensitive to complex orography. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.
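A direct form-drag scheme of the BBW04 type enters the momentum equations as an explicit sink that is strongest near the surface and decays with height. The toy tendency below illustrates that structure only; the coefficient and vertical shape are placeholders, not the Beljaars et al. formulation:

```python
import numpy as np

def orographic_drag_tendency(u, z, alpha=0.01, z_scale=1500.0):
    """Simplified momentum sink du/dt = -alpha * exp(-z/z_scale) * u * |u|.

    A toy stand-in for a direct form-drag scheme: the drag opposes the
    wind, scales with u^2, and decays with height z (m). alpha and
    z_scale are illustrative values, not the BBW04 constants.
    """
    return -alpha * np.exp(-np.asarray(z) / z_scale) * u * np.abs(u)

# Same 10 m/s wind at the surface and at 3 km: drag is far weaker aloft.
tend = orographic_drag_tendency(np.array([10.0, 10.0]),
                                np.array([0.0, 3000.0]))
```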
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skills comparable to, and sometimes better than, the established propagation-based technique.
Michelson, Sara A.; Bao, Jian-Wen; Grell, Evelyn D.
2017-04-01
Bulk microphysical parameterization schemes are popularly used in numerical weather prediction (NWP) models to simulate clouds and precipitation. These schemes are based on assumed number distribution functions for individual hydrometeor species, which are integrable over diameters from zero to infinity. Typically, hydrometeor mass and number mixing ratios are predicted in these schemes. Some schemes also predict a third parameter of hydrometeor distribution characteristics. In this study, four commonly used microphysics schemes of various complexity that are available in the Weather Research and Forecasting Model (WRF) are investigated and compared using numerical model simulations of an idealized 2-D squall line and microphysics budget analysis. Diagnoses of the parameterized pathways for hydrometeor production reveal that the differences related to the assumptions of hydrometeor size distributions between the schemes lead to the differences in the simulations due to the net effect of various microphysical processes on the interaction between latent heating/evaporative cooling and flow dynamics as the squall line develops. Results from this study also highlight the possibility that the advantage of double-moment formulations can be overshadowed by the uncertainties in the spectral definition of individual hydrometeor categories and spectrum-dependent microphysical processes. It is concluded that the major differences between the schemes investigated here are in the assumed hydrometeor size distributions and pathways for their production.
Forbes, K. A.; Kienzle, S. W.; Coburn, C. A.; Byrne, J. M.
2006-12-01
Multiple threats, including intensification of agricultural production, non-renewable resource extraction and climate change, are threatening Southern Alberta's water supply. The objective of this research is to calibrate and evaluate the Agricultural Catchments Research Unit (ACRU) agrohydrological model, with the end goal of forecasting the impacts of a changing environment on water quantity. The strength of this model is the intensive multi-layered soil water budgeting routine that integrates water movement between the surface and atmosphere. The ACRU model was parameterized using data from Environment Canada's climate database for a twenty-year period (1984-2004) and was used to simulate streamflow for Beaver Creek. The simulated streamflow was compared to Environment Canada's historical streamflow database to validate the model output. The Beaver Creek Watershed, located in the Porcupine Hills of southwestern Alberta, Canada, contains a heterogeneous cover of deciduous and coniferous forest, native prairie grasslands, and forage crops. In a catchment with highly diversified land cover, canopy architecture cannot be overlooked in rainfall interception parameterization. Preliminary testing of ACRU suggests that streamflows were sensitive to varied levels of leaf area index (LAI), a representative fraction of canopy foliage. Further testing using remotely sensed LAIs will provide a more accurate representation of canopy foliage and ultimately best represent this important element of the hydrological cycle and the associated processes which govern the natural hydrology of the Beaver Creek watershed.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to the engineering optimization of industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, where the costate gradient formula is employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach performs well: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
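The core of CVP is discretizing the control into piecewise-constant segments and handing the resulting NLP to a gradient-based solver, with each objective evaluation requiring a simulation of the dynamics. A self-contained toy example (the scalar dynamics, target, and weights are invented for illustration; fast-CVP's costate gradient and approximate integrator are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def simulate(controls, x0=1.0, t_final=1.0):
    """Integrate dx/dt = -x + u with piecewise-constant u (forward Euler)."""
    dt_seg = t_final / len(controls)
    n_sub = 20                      # Euler substeps per control segment
    dt = dt_seg / n_sub
    x = x0
    for u in controls:
        for _ in range(n_sub):
            x += dt * (-x + u)
    return x

def objective(controls):
    # Drive x(t_final) toward 0.5 while lightly penalizing control effort.
    x_end = simulate(controls)
    return (x_end - 0.5) ** 2 + 1e-3 * np.sum(np.asarray(controls) ** 2)

# Five piecewise-constant control segments, initialized to zero; each
# objective call re-simulates the dynamics -- the cost CVP pays, and
# the cost fast-CVP aims to cut.
res = minimize(objective, np.zeros(5), method="BFGS")
```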
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves the SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM, while significantly reducing its uncertainty.
Stochastic Least-Squares Petrov–Galerkin Method for Parameterized Linear Systems
Energy Technology Data Exchange (ETDEWEB)
Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies
2018-03-29
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov–Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
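With a trial basis Φ for the chosen subspace, a least-squares Petrov–Galerkin solution minimizes ‖W(b − AΦy)‖₂ over y, and changing W targets different weighted norms. A minimal sketch for a single parameter realization (the matrices below are random stand-ins; the stochastic machinery of the paper is not reproduced):

```python
import numpy as np

def lspg_solve(A, b, Phi, W=None):
    """Return Phi @ y with y minimizing || W (b - A @ Phi @ y) ||_2.

    W defaults to the identity; supplying another weighting matrix
    changes which weighted norm of the residual is minimized.
    """
    if W is None:
        W = np.eye(len(b))
    y, *_ = np.linalg.lstsq(W @ A @ Phi, W @ b, rcond=None)
    return Phi @ y

rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 7.0))        # stand-in "system matrix"
b = rng.standard_normal(6)
Phi = rng.standard_normal((6, 3))       # basis of a 3-D trial subspace
x_approx = lspg_solve(A, b, Phi)
```

By construction the weighted residual is orthogonal to the range of W·A·Φ, which is the optimality property the abstract describes.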
Evaluation of surface layer flux parameterizations using in-situ observations
Katz, Jeremy; Zhu, Ping
2017-09-01
Appropriate calculation of surface turbulent fluxes between the atmosphere and the underlying ocean/land surface is one of the major challenges in geosciences. In practice, the surface turbulent fluxes are estimated from the mean surface meteorological variables based on the bulk transfer model combined with Monin-Obukhov Similarity (MOS) theory. Few studies have been done to examine the extent to which such a flux parameterization can be applied to different weather and surface conditions. A novel validation method is developed in this study to evaluate the surface flux parameterization using in-situ observations collected at a station off the coast of the Gulf of Mexico. The main findings are: (a) theoretical predictions using MOS theory do not match well with fluxes computed directly from the observations; (b) the largest spread in exchange coefficients is shown in strongly stable conditions with calm winds; (c) large turbulent eddies, which depend strongly on the mean flow pattern and surface conditions, tend to break the constant-flux assumption in the surface layer.
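The bulk transfer model referred to above estimates fluxes from mean variables as, e.g., H = ρ·cp·C_H·U·(θs − θa) for sensible heat. A minimal sketch with a constant, illustrative exchange coefficient (in MOS-based schemes C_H varies with stability z/L rather than being fixed):

```python
def bulk_sensible_heat_flux(wind_speed, theta_sfc, theta_air,
                            c_h=1.1e-3, rho=1.2, cp=1004.0):
    """Bulk-transfer estimate of surface sensible heat flux (W m-2).

    H = rho * cp * C_H * U * (theta_sfc - theta_air).  c_h here is an
    illustrative near-neutral exchange coefficient; rho (kg m-3) and
    cp (J kg-1 K-1) are typical near-surface air values.
    """
    return rho * cp * c_h * wind_speed * (theta_sfc - theta_air)

# 5 m/s wind, surface 2 K warmer than the air: upward flux of ~13 W m-2.
flux = bulk_sensible_heat_flux(5.0, 300.0, 298.0)
```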
Actual and Idealized Crystal Field Parameterizations for the Uranium Ions in UF4
Gajek, Z.; Mulak, J.; Krupa, J. C.
1993-12-01
The crystal field parameters for the actual coordination symmetries of the uranium ions in UF4, C2 and C1, and for their idealizations to D2, C2v, D4, D4d, and the Archimedean antiprism point symmetries are given. They have been calculated by means of both the perturbative ab initio model and the angular overlap model and are referenced to the recent results fitted by Carnall's group. The equivalency of some different sets of parameters has been verified with the standardization procedure. The adequacy of several idealized approaches has been tested by comparison of the corresponding splitting patterns of the 3H4 ground state. Our results support the parameterization given by Carnall. Furthermore, the parameterization of the crystal field potential and the splitting diagram for the symmetryless uranium ion U(C1) are given. Having at our disposal the crystal field splittings for the two kinds of uranium ions in UF4, U(C2) and U(C1), we calculate the model plots of the paramagnetic susceptibility χ(T) and the magnetic entropy associated with the Schottky anomaly ΔS(T) for UF4.
Freitas, S.; Grell, G. A.; Molod, A.
2017-12-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011), or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over land. Also, a beta PDF is now employed to represent the normalized mass flux profile. This opens up an additional avenue for introducing stochasticity into the scheme.
Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore
Directory of Open Access Journals (Sweden)
Hyun-Doug Yoon
2015-11-01
Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC) in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE) at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.
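Fitting a vertical SSC profile of this kind can be sketched as follows; the exponential-decay form and the synthetic data below are illustrative stand-ins, not the paper's fitted equation or coefficients:

```python
import numpy as np
from scipy.optimize import curve_fit

def ssc_profile(z, c0, length_scale):
    """Illustrative exponential-decay profile for time-averaged SSC
    above the bed: c0 * exp(-z / length_scale)."""
    return c0 * np.exp(-z / length_scale)

# Synthetic "observations": SSC decaying with elevation z (m) above the bed.
z_obs = np.linspace(0.05, 0.5, 10)
ssc_obs = 4.0 * np.exp(-z_obs / 0.15)            # made-up values, kg m-3
params, _ = curve_fit(ssc_profile, z_obs, ssc_obs, p0=(1.0, 0.1))
```

In the study above the decay would additionally be tied to the time-averaged TKE at each cross-shore location, rather than fitted independently per profile.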
Guitton, Antoine
2017-08-15
Choosing the right parameterization to describe a transversely isotropic medium with a vertical symmetry axis (VTI) allows us to match the scattering potential of these parameters to the available data in a way that avoids parameter tradeoffs and focuses on the parameters to which the data are sensitive. For 2-D elastic full waveform inversion in VTI media of pressure components and for data with a reasonable range of offsets (as with those found in conventional streamer data acquisition systems), assuming that we have a kinematically accurate NMO velocity (vnmo) and anellipticity parameter η (or horizontal velocity, vh) obtained from tomographic methods, a parameterization in terms of horizontal velocity vh, η and ε is preferred to the more conventional parameterization in terms of vv, δ and ε. In the vh, η, ε parameterization and for reasonable scattering angles (<60°), ε acts as a "garbage collector" and absorbs most of the amplitude discrepancies between modeled and observed data, more so when density ρ and shear-wave velocity vs are not inverted for (a standard practice with streamer data). On the contrary, in the vv, δ, ε parameterization, ε is mostly sensitive to large scattering angles, leaving vv exposed to strong leakages, mainly from ρ. These assertions will be demonstrated on the synthetic Marmousi II model as well as a North Sea OBC dataset, where inverting for the horizontal velocity rather than the vertical velocity yields more accurate models and migrated images.
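The quantities in the competing parameterizations are linked by the standard VTI relations v_nmo = v_v·sqrt(1 + 2δ) and v_h = v_nmo·sqrt(1 + 2η). A small numeric check (the velocity and parameter values are arbitrary examples):

```python
import math

def vti_velocities(v_v, delta, eta):
    """Standard VTI relations linking vertical, NMO, and horizontal
    P-wave velocities:
        v_nmo = v_v * sqrt(1 + 2*delta)
        v_h   = v_nmo * sqrt(1 + 2*eta)
    """
    v_nmo = v_v * math.sqrt(1.0 + 2.0 * delta)
    v_h = v_nmo * math.sqrt(1.0 + 2.0 * eta)
    return v_nmo, v_h

# Arbitrary example values: v_v = 3000 m/s, delta = 0.1, eta = 0.08.
v_nmo, v_h = vti_velocities(v_v=3000.0, delta=0.1, eta=0.08)
```

These relations are why a tomographically constrained (vnmo, η) pair is equivalent to knowing vh, making the vh-based parameterization a natural choice.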
Guitton, Antoine; Alkhalifah, Tariq Ali
2017-01-01
Choosing the right parameterization to describe a transversely isotropic medium with a vertical symmetry axis (VTI) allows us to match the scattering potential of these parameters to the available data in a way that avoids parameter tradeoffs and focuses on the parameters to which the data are sensitive. For 2-D elastic full waveform inversion in VTI media of pressure components and for data with a reasonable range of offsets (as with those found in conventional streamer data acquisition systems), assuming that we have a kinematically accurate NMO velocity (vnmo) and anellipticity parameter η (or horizontal velocity, vh) obtained from tomographic methods, a parameterization in terms of horizontal velocity vh, η and ε is preferred to the more conventional parameterization in terms of vv, δ and ε. In the vh, η, ε parameterization and for reasonable scattering angles (<60°), ε acts as a "garbage collector" and absorbs most of the amplitude discrepancies between modeled and observed data, more so when density ρ and shear-wave velocity vs are not inverted for (a standard practice with streamer data). On the contrary, in the vv, δ, ε parameterization, ε is mostly sensitive to large scattering angles, leaving vv exposed to strong leakages, mainly from ρ. These assertions will be demonstrated on the synthetic Marmousi II model as well as a North Sea OBC dataset, where inverting for the horizontal velocity rather than the vertical velocity yields more accurate models and migrated images.
Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia
2018-06-01
Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term to the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as feedbacks to the momentum tendencies at the first model level in the planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias is greatly alleviated by adopting the SSOP scheme, in addition to a reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.
Lee, Seoung Soo; Li, Zhanqing; Zhang, Yuwei; Yoo, Hyelim; Kim, Seungbum; Kim, Byung-Gon; Choi, Yong-Sang; Mok, Jungbin; Um, Junshik; Ock Choi, Kyoung; Dong, Danhong
2018-01-01
This study investigates the roles played by model resolution and microphysics parameterizations in the well-known uncertainties or errors in simulations of clouds, precipitation, and their interactions with aerosols by numerical weather prediction (NWP) models. For this investigation, we used cloud-system-resolving model (CSRM) simulations as benchmark simulations that adopt high resolution and full-fledged microphysical processes. These simulations were evaluated against observations, and this evaluation demonstrated that the CSRM simulations can function as benchmark simulations. Comparisons between the CSRM simulations and the simulations at the coarse resolutions that are generally adopted by current NWP models indicate that the use of coarse resolutions as in the NWP models can lower not only updrafts and other cloud variables (e.g., cloud mass, condensation, deposition, and evaporation) but also their sensitivity to increasing aerosol concentration. The parameterization of the saturation process plays an important role in the sensitivity of cloud variables to aerosol concentrations, while the parameterization of the sedimentation process has a substantial impact on how cloud variables are distributed vertically. The variation in cloud variables with resolution is much greater than that with varying microphysics parameterizations, which suggests that the uncertainties in the NWP simulations are associated with resolution much more than with microphysics parameterizations.
Directory of Open Access Journals (Sweden)
C. L. Tague
2013-01-01
Full Text Available Hydrologic models are one of the core tools used to project how water resources may change under a warming climate. These models are typically applied over a range of scales, from headwater streams to higher order rivers, and for a variety of purposes, such as evaluating changes to aquatic habitat or reservoir operation. Most hydrologic models require streamflow data to calibrate subsurface drainage parameters. In many cases, long-term gage records may not be available for calibration, particularly when assessments are focused on low-order stream reaches. Consequently, hydrologic modeling of climate change impacts is often performed in the absence of sufficient data to fully parameterize these hydrologic models. In this paper, we assess a geology-based strategy for assigning drainage parameters. We examine the performance of this modeling strategy for the McKenzie River watershed in the US Oregon Cascades, a region where previous work has demonstrated sharp contrasts in hydrology based primarily on geological differences between the High and Western Cascades. Based on calibration and verification using existing streamflow data, we demonstrate that: (1) a set of streams ranging from 1st to 3rd order within the Western Cascade geologic region can share the same drainage parameter set, while (2) streams from the High Cascade geologic region require a different parameter set. Further, we show that a watershed comprised of a mixture of High and Western Cascade geologies can be modeled without additional calibration by transferring parameters from these distinctive High and Western Cascade end-member parameter sets. More generally, we show that by defining a set of end-member parameters that reflect different geologic classes, we can more efficiently apply a hydrologic model over a geologically complex landscape and resolve geo-climatic differences in how different watersheds are likely to respond to simple warming scenarios.
Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.
Directory of Open Access Journals (Sweden)
Pingzhao Hu
2009-04-01
Full Text Available One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.
Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.
Hu, Pingzhao; Janga, Sarath Chandra; Babu, Mohan; Díaz-Mejía, J Javier; Butland, Gareth; Yang, Wenhong; Pogoutse, Oxana; Guo, Xinghua; Phanse, Sadhna; Wong, Peter; Chandran, Shamanta; Christopoulos, Constantine; Nazarians-Armavil, Anaies; Nasseri, Negin Karimi; Musso, Gabriel; Ali, Mehrab; Nazemof, Nazila; Eroukova, Veronika; Golshani, Ashkan; Paccanaro, Alberto; Greenblatt, Jack F; Moreno-Hagelsieb, Gabriel; Emili, Andrew
2009-04-28
One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.
International Nuclear Information System (INIS)
Noller, Johannes
2012-01-01
We consider generalized chameleon models where the conformal coupling between matter and gravitational geometries is not only a function of the chameleon field φ, but also of its derivatives via higher order co-ordinate invariants (such as ∂_μφ∂^μφ, □φ, ...). Specifically we consider the first such non-trivial conformal factor A(φ, ∂_μφ∂^μφ). The associated phenomenology is investigated and we show that such theories have a new generic mass-altering mechanism, potentially assisting the generation of a sufficiently large chameleon mass in dense environments. The most general effective potential is derived for such derivative chameleon setups and explicit examples are given. Interestingly this points us to the existence of a purely derivative chameleon protected by a shift symmetry for φ → φ+c. We also discuss potential ghost-like instabilities associated with mass-lifting mechanisms and find another, mass-lowering and instability-free, branch of solutions. This suggests that, barring fine-tuning, stable derivative models are in fact typically anti-chameleons that suppress the field's mass in dense environments. Furthermore we investigate modifications to the thin-shell regime and prove a no-go theorem for chameleon effects in non-conformal geometries of the disformal type
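For reference, in standard (non-derivative) chameleon models the mass-altering mechanism follows from a density-dependent effective potential; a common form is:

```latex
V_{\mathrm{eff}}(\phi) = V(\phi) + \rho_m\, A(\phi),
\qquad
m_{\mathrm{eff}}^2 = \left.\frac{\partial^2 V_{\mathrm{eff}}}{\partial\phi^2}\right|_{\phi = \phi_{\min}}
```

In the derivative models considered here, A depends additionally on ∂_μφ∂^μφ, which is what opens the mass-lifting and mass-lowering branches discussed in the abstract.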
Energy Technology Data Exchange (ETDEWEB)
Noller, Johannes, E-mail: johannes.noller08@imperial.ac.uk [Theoretical Physics, Blackett Laboratory, Imperial College London, Prince Consort Road, London, SW7 2BZ (United Kingdom)
2012-07-01
We consider generalized chameleon models where the conformal coupling between matter and gravitational geometries is not only a function of the chameleon field φ, but also of its derivatives via higher order co-ordinate invariants (such as ∂_μφ∂^μφ, □φ, ...). Specifically we consider the first such non-trivial conformal factor A(φ, ∂_μφ∂^μφ). The associated phenomenology is investigated and we show that such theories have a new generic mass-altering mechanism, potentially assisting the generation of a sufficiently large chameleon mass in dense environments. The most general effective potential is derived for such derivative chameleon setups and explicit examples are given. Interestingly this points us to the existence of a purely derivative chameleon protected by a shift symmetry for φ → φ+c. We also discuss potential ghost-like instabilities associated with mass-lifting mechanisms and find another, mass-lowering and instability-free, branch of solutions. This suggests that, barring fine-tuning, stable derivative models are in fact typically anti-chameleons that suppress the field's mass in dense environments. Furthermore we investigate modifications to the thin-shell regime and prove a no-go theorem for chameleon effects in non-conformal geometries of the disformal type.
Zampieri, Matteo
2012-02-01
Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect from losing streams to groundwater. Through the analysis of observed soil moisture data obtained from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS we provide evidence that the regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.
Aïd, René
2015-01-01
Offering a concise but complete survey of the common features of the microstructure of electricity markets, this book describes the state of the art in the different proposed electricity price models for pricing derivatives and in the numerical methods used to price and hedge the most prominent derivatives in electricity markets, namely power plants and swings. The mathematical content of the book has intentionally been made light in order to concentrate on the main subject matter, avoiding fastidious computations. Wherever possible, the models are illustrated by diagrams. The book should allow prospective researchers in the field of electricity derivatives to focus on the actual difficulties associated with the subject. It should also offer a brief but exhaustive overview of the latest techniques used by financial engineers in energy utilities and energy trading desks.
Paul, Nilanjan; Kumar, Suman; Chatterjee, Indranil; Mukherjee, Biswarup
2011-01-01
In-depth studies of laryngeal biomechanics and vocal fold vibratory patterns reveal that a single vibratory cycle can be divided into two major phases, the closed and the open phase, the latter subdivided into opening and closing phases. Studies reveal the relative time course of abduction and adduction, which in turn depends on the relative relaxing and tensing of the vocal fold cover and body, to be the determining factor in the production of a particular vocal register, such as the modal (or chest), falsetto, and glottal fry registers. Studies further point out electroglottography (EGG) to be particularly suitable for the study of vocal vibratory patterns during register changes. However, to date, there has been limited study on quantitative parameterization of the EGG waveform in the vocal fry register. Moreover, contradictory findings abound in the literature regarding the effects of gender and vowel type on vocal vibratory patterns, especially during phonation at different registers. The present study endeavors to find out the effects of vowel and gender differences on the vocal fold vibratory patterns in different registers and how these are reflected in the standard EGG parameters of Contact Quotient (CQ) and Contact Index (CI), taking into consideration the Indian sociolinguistic context. Electroglottographic recordings of 10 young adults (5 males and 5 females) were taken while the subjects phonated the three vowels /a/, /i/, /u/, each in two vocal registers, modal and vocal fry. The raw EGG signals were then normalized using the Derived EGG algorithm, and the CQ and CI values were derived. The obtained data were subjected to statistical analysis using a 3-way ANOVA with gender, vowel and vocal register as the three variables. Post-hoc Dunnett C multiple comparison analyses were also performed. Results reveal that CQ values are significantly higher in vocal fry than in modal phonation for both males and females, indicating a relatively hyperconstricted vocal system during vocal fry. The males
Directory of Open Access Journals (Sweden)
B. N. Murphy
2017-09-01
Full Text Available Mounting evidence from field and laboratory observations coupled with atmospheric model analyses shows that primary combustion emissions of organic compounds dynamically partition between the vapor and particulate phases, especially as near-source emissions dilute and cool to ambient conditions. The most recent version of the Community Multiscale Air Quality model, version 5.2 (CMAQv5.2), accounts for the semivolatile partitioning and gas-phase aging of these primary organic aerosol (POA) compounds consistent with experimentally derived parameterizations. We also include a new surrogate species, potential secondary organic aerosol from combustion emissions (pcSOA), which provides a representation of the secondary organic aerosol (SOA) from anthropogenic combustion sources that could be missing from current chemical transport model predictions. The reasons for this missing mass likely include the following: (1) unspeciated semivolatile and intermediate-volatility organic compound (SVOC and IVOC, respectively) emissions missing from current inventories, (2) multigenerational aging of organic vapor products from known SOA precursors (e.g., toluene, alkanes), (3) underestimation of SOA yields due to vapor wall losses in smog chamber experiments, and (4) reversible organic compound–water interactions and/or aqueous-phase processing of known organic vapor emissions. CMAQ predicts the spatially averaged contribution of pcSOA to OA surface concentrations in the continental United States to be 38.6 and 23.6 % in the 2011 winter and summer, respectively. Whereas many past modeling studies focused on a particular measurement campaign, season, location, or model configuration, we endeavor to evaluate the model and important uncertain parameters with a comprehensive set of United States-based model runs using multiple horizontal scales (4 and 12 km), gas-phase chemical mechanisms, and seasons and years. The model with representation of semivolatile POA
Directory of Open Access Journals (Sweden)
Y. F. Cheng
2012-05-01
intensities, actual turnover rates of soot (k_{ex → in}) of up to 20% h^{−1} were derived, which showed a pronounced diurnal cycle peaking around noon time. This result confirms that soot particles are undergoing fast aging/coating with the existing high levels of condensable vapors in the megacity Beijing. (5) Diurnal cycles of F_{in} were different between Aitken and accumulation mode particles, which could be explained by the faster growth of smaller Aitken mode particles into larger size bins.
To improve the F_{in} prediction in regional/global models, we suggest parameterizing F_{in} by an air mass aging indicator, i.e., F_{in} = a + bx, where a and b are empirical coefficients determined from observations and x is the value of an air mass age indicator. At the Yufa site in the North China Plain, the fitted coefficients (a, b) were determined as (0.57, 0.21), (0.47, 0.21), and (0.52, 0.0088) for the indicators x = [NO_{z}]/[NO_{y}], [E]/[X] ([ethylbenzene]/[m,p-xylene]), and ([IM] + [OM])/[EC] ([inorganic + organic matter]/[elemental carbon]), respectively. Such a parameterization consumes little additional computing time, but yields a more realistic description of F_{in} compared with the simple treatment of soot mixing state in regional/global models.
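The linear parameterization above is simple enough to state in a few lines; a minimal sketch, defaulting to the [NOz]/[NOy] coefficients fitted at the Yufa site as quoted in the abstract (the clamping to [0, 1] is our own addition, since a mixing fraction cannot leave that range):

```python
def f_in(x, a=0.57, b=0.21):
    """Fraction of internally mixed soot, F_in = a + b*x, where x is an
    air-mass age indicator (defaults: [NOz]/[NOy] fit at the Yufa site).
    Clamped to the physically meaningful range [0, 1]."""
    return min(1.0, max(0.0, a + b * x))
```

Swapping in (0.47, 0.21) or (0.52, 0.0088) for (a, b) gives the other two indicator fits listed above.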
Directory of Open Access Journals (Sweden)
Jun–Ichi Yano
2014-12-01
The research network “Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models” was organized with European funding (COST Action ES0905) for the period 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. Our main caution concerns the current emphasis on massive observational data analyses and process studies. The closure and entrainment–detrainment problems are identified as the two highest priorities for convection parameterization under the mass-flux formulation. The need for a drastic change in the current European research culture, as concerns policies and funding, is emphasized, so as not to further deplete the visions of the European researchers focusing on those basic issues.
Directory of Open Access Journals (Sweden)
S. L. Bradley
2018-05-01
Observational evidence, including offshore moraines and sediment cores, confirms that at the Last Glacial Maximum (LGM) the Greenland ice sheet (GrIS) expanded to a significantly larger spatial extent than seen at present, grounding into Baffin Bay and out onto the continental shelf break. Given this larger spatial extent and its close proximity to the neighbouring Laurentide Ice Sheet (LIS) and Innuitian Ice Sheet (IIS), it is likely these ice sheets had a strong non-local influence on the spatial and temporal behaviour of the GrIS. Most previous paleo ice-sheet modelling simulations recreated an ice sheet that either did not extend out onto the continental shelf, utilized a simplified marine ice parameterization which did not fully include the effect of ice shelves, or neglected the sensitivity of the GrIS to this non-local bedrock signal from the surrounding ice sheets. In this paper, we investigated the evolution of the GrIS over the two most recent glacial–interglacial cycles (240 ka BP to the present day) using the ice-sheet–ice-shelf model IMAU-ICE. We investigated the solid-earth influence of the LIS and IIS via an offline relative sea level (RSL) forcing generated by a glacial isostatic adjustment (GIA) model. The RSL forcing governed the spatial and temporal pattern of sub-ice-shelf melting via changes in the water depth below the ice shelves. In the ensemble of simulations, at the glacial maxima, the GrIS coalesced with the IIS to the north and expanded to the continental shelf break to the southwest but remained too restricted to the northeast. In terms of the global mean sea level contribution, at the Last Interglacial (LIG) and the LGM the ice sheet added 1.46 and −2.59 m, respectively. This LGM contribution by the GrIS is considerably higher (∼ 1.26 m) than in most previous studies, whereas the contribution to the LIG highstand is lower (∼ 0.7 m). The spatial and temporal behaviour of the northern margin was
Directory of Open Access Journals (Sweden)
J. Ray
2014-09-01
The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find (i) that the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) that images of lights at night can be used as a proxy for sub-selecting wavelets for such an analysis, and (iii) that implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
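The core idea — representing an emission field by a proxy-selected subset of wavelet coefficients — can be illustrated with a minimal 1-D Haar transform. This is a toy sketch (the paper uses a 2-D multiresolution basis and a StOMP inversion, neither reproduced here); the boolean mask stands in for the night-lights proxy that flags which coefficients to retain:

```python
import math

def haar_forward(x):
    """Full 1-D orthonormal Haar transform (length must be a power of two)."""
    out = list(x)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            s, d = tmp[2 * i], tmp[2 * i + 1]
            out[i] = (s + d) / math.sqrt(2)        # smooth (scaling) coefficient
            out[half + i] = (s - d) / math.sqrt(2)  # detail (wavelet) coefficient
        n = half
    return out

def haar_inverse(c):
    """Inverse of haar_forward."""
    out = list(c)
    n = 1
    while n < len(out):
        tmp = out[:2 * n]
        for i in range(n):
            a, d = tmp[i], tmp[n + i]
            out[2 * i] = (a + d) / math.sqrt(2)
            out[2 * i + 1] = (a - d) / math.sqrt(2)
        n *= 2
    return out

def reconstruct_with_mask(emissions, keep_mask):
    """Zero out the wavelet coefficients not flagged by the activity proxy,
    then reconstruct the (now low-dimensional) emission field."""
    coeffs = haar_forward(emissions)
    kept = [c if keep else 0.0 for c, keep in zip(coeffs, keep_mask)]
    return haar_inverse(kept)
```

Keeping only the coarsest coefficient, for example, reconstructs the field's mean — the extreme of the dimensionality/fidelity trade-off the abstract describes.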
Parameterizing Urban Canopy Layer transport in a Lagrangian Particle Dispersion Model
Stöckl, Stefan; Rotach, Mathias W.
2016-04-01
The percentage of people living in urban areas is rising worldwide, crossed 50% in 2007, and is even higher in developed countries. High population density and numerous sources of air pollution in close proximity can lead to health issues. Therefore it is important to understand the nature of urban pollutant dispersion. In the last decades this field has experienced considerable progress; however, the influence of large roughness elements is complex and has not yet been completely described. Hence, this work studied urban particle dispersion close to source and ground. It used an existing, steady-state, three-dimensional Lagrangian particle dispersion model, which includes Roughness Sublayer parameterizations of turbulence and flow. The model is valid for convective and neutral to stable conditions and uses the kernel method for concentration calculation. As in most Lagrangian models, its lower boundary is the zero-plane displacement, which means that roughly the lower two-thirds of the mean building height are not included in the model. This missing layer roughly coincides with the Urban Canopy Layer. An earlier work "traps" particles hitting the lower model boundary for a recirculation period, which is calculated under the assumption of a vortex in skimming flow, before "releasing" them again. The authors hypothesize that improving the lower boundary condition by including Urban Canopy Layer transport could improve model predictions. This was tested herein by not only trapping the particles, but also advecting them with a mean, parameterized flow in the Urban Canopy Layer. Now the model calculates the trapping period based on either recirculation due to vortex motion in skimming flow regimes or vertical velocity if no vortex forms, depending on the incidence angle of the wind on a randomly chosen street canyon. The influence of this modification, as well as the model's sensitivity to parameterization constants, was investigated. To reach this goal, the model was
Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements
Energy Technology Data Exchange (ETDEWEB)
Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)
2016-12-13
Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in the overall cloud feedback. Thus, improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Because of the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions differ significantly, owing to the much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations
Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.
2017-12-01
Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a⁻¹. The different schemes also impact the
Directory of Open Access Journals (Sweden)
Ana Clara Mourão Moura
2013-05-01
The text discusses the state of the art of GIS technologies in the planning and management of urban and architectural spaces. It presents the latest evolutions in GIS methodology and applications, discussing how these resources have changed our way of representing and projecting territory. It discusses contemporary values in interventions in urban spaces. The paper also presents legislation's role in data registers and infrastructure, favoring the wide employment of geoprocessing. It announces the arrival of new territorial representation logics, among which is azimuth visualization, considering mental maps, the employment of BIM (Building Information Modeling) and the process of parameterization. It points out tendencies and values, such as being inter-operational, creating interpretative portraits of reality, producing simulated scenarios, investing in visualization and involvement with communities, and fully employing geo-technologies as aids for decision making. It defends that we are living a new paradigm in territorial planning: the Parametric Modeling of Territorial Occupation.
Parameterized Post-Newtonian Expansion of Scalar-Vector-Tensor Theory of Gravity
International Nuclear Information System (INIS)
Arianto; Zen, Freddy P.; Gunara, Bobby E.; Hartanto, Andreas
2010-01-01
We investigate the weak-field, post-Newtonian expansion of the solution of the field equations in scalar-vector-tensor theory of gravity. In the calculation we restrict ourselves to the first post-Newtonian order. The parameterized post-Newtonian (PPN) parameters are determined by expanding the modified field equations in the metric perturbation. Then, we compare the solution to the PPN formalism in the first PN approximation proposed by Will and Nordtvedt and read off the coefficients (the PPN parameters) of the post-Newtonian potentials of the theory. We find that the values of γ_PPN and β_PPN are the same as in General Relativity, while the coupling functions β₁, β₂, and β₃ encode the preferred-frame effects.
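For context, the two PPN parameters named above enter the metric at first post-Newtonian order in the standard textbook form (static terms only; this is the generic PPN expansion, not the full scalar-vector-tensor solution of the paper):

```latex
g_{00} = -1 + 2U - 2\beta U^{2}, \qquad
g_{ij} = \left(1 + 2\gamma U\right)\delta_{ij},
```

where \(U\) is the Newtonian potential; General Relativity corresponds to \(\gamma = \beta = 1\), which is the value the paper recovers for this theory.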
The guide to PAMIR theory and use of Parameterized Adaptive Multidimensional Integration Routines
Adler, Stephen L
2013-01-01
PAMIR (Parameterized Adaptive Multidimensional Integration Routines) is a suite of Fortran programs for multidimensional numerical integration over hypercubes, simplexes, and hyper-rectangles in general dimension p, intended for use by physicists, applied mathematicians, computer scientists, and engineers. The programs, which are available on the internet at website and are free for non-profit research use, are capable of following localized peaks and valleys of the integrand. Each program comes with a Message-Passing Interface (MPI) parallel version for cluster use as well as serial versions. The first chapter presents introductory material, similar to that on the PAMIR website, and the next is a "manual" giving much more detail on the use of the programs than is on the website. They are followed by many examples of performance benchmarks and comparisons with other programs, and a discussion of the computational integration aspects of PAMIR, in comparison with other methods in the literature. The final chapt...
Rural postman parameterized by the number of components of required edges
DEFF Research Database (Denmark)
Gutin, Gregory; Wahlström, Magnus; Yeo, Anders
2017-01-01
In the Directed Rural Postman Problem (DRPP), given a strongly connected directed multigraph D=(V,A) with nonnegative integral weights on the arcs, a subset R of required arcs and a nonnegative integer ℓ, decide whether D has a closed directed walk containing every arc of R and of weight at most ℓ. Let k be the number of weakly connected components in the subgraph of D induced by R. Sorge et al. [30] asked whether the DRPP is fixed-parameter tractable (FPT) when parameterized by k, i.e., whether there is an algorithm of running time O⁎(f(k)) where f is a function of k only and the O⁎ notation suppresses polynomial factors. Using an algebraic approach, we prove that DRPP has a randomized algorithm of running time O⁎(2^k) when ℓ is bounded by a polynomial in the number of vertices in D. The same result holds for the undirected version of DRPP.
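The parameter k above is cheap to compute: it is the number of weakly connected components of the subgraph spanned by the required arcs, so arc direction is ignored. A small union-find sketch (our own illustration, not code from the paper):

```python
def num_required_components(required_arcs):
    """k = number of weakly connected components of the subgraph induced by
    the required arcs R. Direction is ignored (weak connectivity); vertices
    touching no required arc contribute nothing."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in required_arcs:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # union the two components
    return len({find(v) for v in parent})
```

The O⁎(2^k) algorithm of the paper then pays an exponential cost only in this k, not in |R| or |V|.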
The role of aerosols in cloud drop parameterizations and its applications in global climate models
Energy Technology Data Exchange (ETDEWEB)
Chuang, C.C.; Penner, J.E. [Lawrence Livermore National Lab., CA (United States)
1996-04-01
The characteristics of the cloud drop size distribution near cloud base are initially determined by the aerosols that serve as cloud condensation nuclei and by the updraft velocity. We have developed parameterizations relating cloud drop number concentration to aerosol number and sulfate mass concentrations and used them in a coupled global aerosol/general circulation model (GCM) to estimate the indirect aerosol forcing. The global aerosol model made use of our detailed emissions inventories for the amount of particulate matter from biomass burning sources and from fossil fuel sources, as well as emissions inventories of gas-phase anthropogenic SO₂. This work is aimed at validating the coupled model with the Atmospheric Radiation Measurement (ARM) Program measurements and assessing the possible magnitude of the aerosol-induced cloud effects on climate.
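As a stand-in for the kind of sulfate-mass relation described, here is the widely used empirical fit of Boucher and Lohmann (1995); it is illustrative only and is not the Chuang-Penner parameterization developed in this work:

```python
import math

def drop_number_cm3(sulfate_ug_m3):
    """Cloud drop number concentration (cm^-3) from sulfate mass (ug m^-3),
    using the Boucher & Lohmann (1995) empirical relation:
        log10(N_d) = 2.21 + 0.41 * log10(m_SO4)."""
    return 10.0 ** (2.21 + 0.41 * math.log10(sulfate_ug_m3))
```

A mechanistic scheme of the kind described in the abstract would additionally take aerosol number and updraft velocity as inputs rather than sulfate mass alone.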
Parameterization and evaluation of sulfate adsorption in a dynamic soil chemistry model
International Nuclear Information System (INIS)
Martinson, Liisa; Alveteg, Mattias; Warfvinge, Per
2003-01-01
Including sulfate adsorption improves the dynamic behavior of the SAFE model. - Sulfate adsorption was implemented in the dynamic, multi-layer soil chemistry model SAFE. The process is modeled by an isotherm in which sulfate adsorption is considered to be fully reversible and dependent on the sulfate concentration as well as the pH of the soil solution. The isotherm was parameterized by a site-specific series of simple batch experiments at different pH (3.8-5.0) and sulfate concentration (10-260 μmol l⁻¹) levels. Application of the model to the Lake Gaardsjoen roof-covered site shows that including sulfate adsorption improves the dynamic behavior of the model, and that sulfate adsorption and desorption delay acidification and recovery of the soil. The modeled adsorbed pool of sulfate at the site reached a maximum level of 700 mmol m⁻² in the late 1980s, well in line with experimental data
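A concentration- and pH-dependent reversible isotherm of the kind described can be sketched as a Langmuir form whose half-saturation constant grows with pH (adsorption weakens as the soil recovers). All constants below are hypothetical placeholders, not the SAFE calibration from the batch experiments:

```python
def adsorbed_sulfate(conc_umol_l, ph, s_max=700.0, k0=50.0, ph_ref=4.4, m=1.0):
    """Illustrative reversible sulfate-adsorption isotherm.

    Langmuir in sulfate concentration, with a half-saturation constant that
    increases with pH so that adsorption is strongest in acidified soil.
    s_max, k0, ph_ref and m are made-up illustrative values.
    """
    k = k0 * 10.0 ** (m * (ph - ph_ref))  # pH-dependent half-saturation
    return s_max * conc_umol_l / (conc_umol_l + k)
```

Because the form is fully reversible, the same expression governs desorption as concentrations fall, which is what delays both acidification and recovery in the model.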
A Novel Structure and Design Optimization of Compact Spline-Parameterized UWB Slot Antenna
Directory of Open Access Journals (Sweden)
Koziel Slawomir
2016-12-01
In this paper, a novel structure of a compact UWB slot antenna and its design optimization procedure are presented. In order to achieve a sufficient number of degrees of freedom necessary to obtain a considerable size reduction rate, the slot is parameterized using spline curves. All antenna dimensions are simultaneously adjusted using numerical optimization procedures. The fundamental bottleneck here is the high cost of the electromagnetic (EM) simulation model of the structure, which includes (for reliability) an SMA connector. Another problem is the large number of geometry parameters (nineteen). For the sake of computational efficiency, the optimization process is therefore performed using variable-fidelity EM simulations and surrogate-assisted algorithms. The optimization process is oriented towards explicit reduction of the antenna size and leads to a compact footprint of 199 mm² as well as acceptable matching within the entire UWB band. The simulation results are validated using physical measurements of the fabricated antenna prototype.
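Parameterizing a slot outline by a spline means the optimizer adjusts a handful of control points rather than a fixed shape. A minimal sketch of sampling a closed spline through control points (a Catmull-Rom curve as a generic stand-in; the paper does not specify this particular spline type):

```python
def catmull_rom_closed(points, samples_per_seg=16):
    """Sample a closed Catmull-Rom spline through 2-D control points.

    A generic illustration of a spline-parameterized outline: moving a
    control point smoothly deforms the whole closed curve, which is what
    gives the optimizer its geometric degrees of freedom.
    """
    n = len(points)
    out = []
    for i in range(n):
        # four consecutive control points, wrapping around (closed curve)
        p0, p1, p2, p3 = (points[(i + j - 1) % n] for j in range(4))
        for k in range(samples_per_seg):
            t = k / samples_per_seg
            out.append(tuple(
                0.5 * ((2 * p1[d])
                       + (-p0[d] + p2[d]) * t
                       + (2 * p0[d] - 5 * p1[d] + 4 * p2[d] - p3[d]) * t * t
                       + (-p0[d] + 3 * p1[d] - 3 * p2[d] + p3[d]) * t * t * t)
                for d in range(2)))
    return out
```

In a design loop like the one described, the control-point coordinates would be the (nineteen or so) optimization variables handed to the surrogate-assisted algorithm.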
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru; Sommer, Stefan Horst; Sørensen, Lauge
by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximative because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant - the Wendland kernel, which has the same computational simplicity as B-Splines. An application on the Alzheimer's disease neuroimaging initiative showed that Wendland SVF based measures separate (Alzheimer's disease v/s normal controls) better than both B-Spline SVFs (p
Modeling of clouds and radiation for development of parameterizations for general circulation models
International Nuclear Information System (INIS)
Westphal, D.; Toon, B.; Jensen, E.; Kinne, S.; Ackerman, A.; Bergstrom, R.; Walker, A.
1994-01-01
Atmospheric Radiation Measurement (ARM) Program research at NASA Ames Research Center (ARC) includes radiative transfer modeling, cirrus cloud microphysics, and stratus cloud modeling. These efforts are designed to provide the basis for improving cloud and radiation parameterizations in our main effort: mesoscale cloud modeling. The range of non-convective cloud models used by the ARM modeling community can be crudely categorized based on the number of predicted hydrometeors, such as cloud water, ice water, rain, snow, graupel, etc. The simplest model has no predicted hydrometeors and diagnoses the presence of clouds based on the predicted relative humidity. The vast majority of cloud models have two or more predictive bulk hydrometeors and are termed either bulk water (BW) or size-resolving (SR) schemes. This study compares the various cloud models within the same dynamical framework, and compares results with observations rather than climate statistics
A universal parameterization of chaos in various beam-wave interactions
International Nuclear Information System (INIS)
Lee, Jae Koo; Lee, Hae June; Hur, Min Sup; Bae, InDeog; Yang, Yi
1998-01-01
The comprehensive parameter space of self-oscillation and its period-doubling route to chaos are shown for a bounded collisionless beam-plasma system. In this parameterization, it is helpful to use a potentially universal parameter in close analogy with free-electron-laser chaos. A common parameter, which is related to the velocity slippage and the ratio of bounce to oscillation frequencies, is shown to have similar significance for different physical systems. This single parameter replaces the dependences on many input parameters, making it suitable as a simplifying and diagnostic measure of nonlinear dynamical and chaotic phenomena for various systems of particle-wave interactions. The results of independent kinetic simulations verify those of nonlinear fluid simulations. Other standard routes to chaos, via intermittent or quasiperiodic oscillations, are also shown for the undriven plasma systems. Some correlation of linear characteristics to nonlinear phenomena was noted. (author)
Trajectory parameterization-A new approach to the study of the cosmic ray penumbra
International Nuclear Information System (INIS)
Cooke, D.J.
1982-01-01
A new approach to the examination of the structure of the cosmic ray penumbra has been developed, which, while utilizing the speed, efficiency and ''real'' geomagnetic field modeling capabilities of the digital computer, yields an analytical insight equivalent to that of the earlier and elegant approaches of Stormer, and of Lemaitre and Vallarta. The method involves an assigning of parameters to trajectory features in order to allow the structure and properties of the penumbra to be systematically related to trajectory configuration by means of an automated set of computer procedures. Initial use of this ''trajectory parameterization'' technique to explore the form and stability of the penumbra has yielded new and important knowledge of the penumbra and its phenomenology. Among the new findings is the discovery of isolated forbidden penumbral ''islands.''
Towards product design automation based on parameterized standard model with diversiform knowledge
Liu, Wei; Zhang, Xiaobing
2017-04-01
Product standardization based on CAD software is an effective way to improve design efficiency. In the past, research and development on standardization mainly focused on the level of the component, and the standardization of the entire product as a whole was rarely taken into consideration. In this paper, the size and structure of 3D product models are both driven by Excel datasheets, based on which a parameterized model library is established. Diversiform knowledge, including associated parameters and default properties, is embedded into the templates in advance to simplify their reuse. Through simple operations, we can obtain the correct product with the finished 3D models, including single parts or complex assemblies. Two examples are presented to validate the idea, which will greatly improve design efficiency.
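The datasheet-driven template idea can be sketched in a few lines: a table of named parameter sets drives model instances, with an embedded design rule standing in for the "associated parameters" knowledge. The sheet contents, column names, and rule below are all hypothetical illustrations, not data from the paper:

```python
import csv
import io

# Hypothetical parameter datasheet (a stand-in for the paper's Excel sheets)
DATASHEET = """name,length_mm,width_mm,hole_diameter_mm
bracket_small,40,20,4
bracket_large,80,35,6
"""

def load_templates(sheet_text):
    """Parse the datasheet into a template library keyed by model name."""
    return {row["name"]: {k: float(v) for k, v in row.items() if k != "name"}
            for row in csv.DictReader(io.StringIO(sheet_text))}

def instantiate(templates, name, **overrides):
    """Drive a model's dimensions from the datasheet, apply user overrides,
    and enforce an embedded design rule (an 'associated parameter')."""
    params = dict(templates[name])
    params.update(overrides)
    # illustrative associated-parameter rule: hole must fit inside the width
    if params["hole_diameter_mm"] >= params["width_mm"] / 2:
        raise ValueError("design rule violated: hole too large for width")
    return params
```

In a real CAD workflow the returned parameter dictionary would drive the regeneration of the 3D part or assembly template.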
Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs
Directory of Open Access Journals (Sweden)
P. S. Bhattacharjee
2010-02-01
A common deficiency of many cloud-physics parameterizations, including NASA's microphysics of clouds with aerosol-cloud interactions (hereafter called McRAS-AC), is that they simulate smaller (larger) than the observed ice cloud particle number (size). A single column model (SCM) of the McRAS-AC physics of the GEOS4 Global Circulation Model (GCM), together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols, was used to systematically examine the influence of introducing ammonium sulfate ((NH4)2SO4) aerosols into McRAS-AC and its influence on the optical properties of both liquid and ice clouds. First, an (NH4)2SO4 parameterization was included in the APM to assess its effect on clouds vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted with the SCM over the ARM Southern Great Plains (SGP) site and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that the inclusion of (NH4)2SO4 in the McRAS-AC SCM made a remarkable improvement in the simulated effective radius of ice cloud particulates. However, the corresponding ice-cloud optical thickness increased even more than observed. This can be caused by the lack of horizontal cloud advection, which is not performed in the SCM. Adjusting other tunable parameters, such as the precipitation efficiency, can mitigate this deficiency. Inclusion of empirically invoked ice cloud particle splintering further reduced the simulation biases. Overall, these changes make a substantial improvement in the simulated cloud optical properties and cloud distribution in the GCM, particularly over the Intertropical Convergence Zone (ITCZ).
Timescale stretch parameterization of Type Ia supernova B-band light curves
International Nuclear Information System (INIS)
Goldhaber, G.; Groom, D.E.; Kim, A.; Aldering, G.; Astier, P.; Conley, A.; Deustua, S.E.; Ellis, R.; Fabbro, S.; Fruchter, A.S.; Goobar, A.; Hook, I.; Irwin, M.; Kim, M.; Knop, R.A.; Lidman, C.; McMahon, R.; Nugent, P.E.; Pain, R.; Panagia, N.; Pennypacker, C.R.; Perlmutter, S.; Ruiz-Lapuente, P.; Schaefer, B.; Walton, N.A.; York, T.
2001-01-01
R-band intensity measurements along the light curve of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted in brightness to templates, allowing one free parameter: the time-axis width factor w ≡ s(1 + z). The data points are then individually aligned in the time axis, normalized and K-corrected back to the rest frame, after which the nearly 1300 normalized intensity measurements are found to lie on a well-determined common rest-frame B-band curve, which we call the "composite curve." The same procedure is applied to 18 low-redshift Calan/Tololo SNe with z < 0.11; these nearly 300 B-band photometry points are found to lie on the composite curve equally well. The SCP search technique produces several measurements before maximum light for each supernova. We demonstrate that the linear stretch factor s, which parameterizes the light-curve timescale, appears independent of z, and applies equally well to the declining and rising parts of the light curve. In fact, the B-band template that best fits this composite curve fits the individual supernova photometry data when stretched by a factor s with χ²/DoF ∼ 1, thus as well as any parameterization can, given the current data sets. The date of explosion, however, is model dependent and not tightly constrained by the current data. We also demonstrate the (1 + z) light-curve time-axis broadening expected from cosmological expansion. This argues strongly against alternative explanations, such as tired light, for the redshift of distant objects.
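The stretch-factor mapping described in this abstract reduces to a one-line transformation: observer-frame epochs are divided by w = s(1 + z) to land on the rest-frame template axis. The sketch below uses illustrative names and toy numbers, not the SCP pipeline.

```python
import numpy as np

def to_rest_frame(t_obs, t_max, s, z):
    """Map observer-frame epochs (days) onto the rest-frame template axis.

    The time-axis width factor w = s * (1 + z) combines the intrinsic
    stretch s with the cosmological (1 + z) broadening.
    """
    w = s * (1.0 + z)
    return (np.asarray(t_obs, dtype=float) - t_max) / w

# Hypothetical example: a supernova at z = 0.5 with fitted stretch s = 1.1
t_obs = np.array([0.0, 15.0, 30.0])  # days since maximum light, observer frame
t_rest = to_rest_frame(t_obs, t_max=0.0, s=1.1, z=0.5)
```

After this mapping, points from supernovae at different redshifts and stretches can be overplotted on the single composite curve described above.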
Directory of Open Access Journals (Sweden)
Jie Zhang
2017-04-01
Full Text Available Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantage that the affected links used by the ELM in the online phase can differ from those used for training in the offline phase, and it is more robust in dealing with uncertain combinations of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), and back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
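As a rough illustration of the learning machinery behind an ELM-based localizer, here is a minimal extreme learning machine: a fixed random hidden layer whose output weights are solved in closed form by least squares, which is what gives ELMs their fast training. This is a generic textbook sketch with invented names and toy data, not the paper's PGFE-ELM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """Fit an extreme learning machine: random fixed hidden layer,
    output weights obtained analytically by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression stand-in for the localization mapping: learn y = x0 + x1
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + X[:, 1]
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)
```

Because only `beta` is fitted, training cost is a single linear solve, which is the "learning speed" advantage the abstract reports over iteratively trained networks such as BPNN.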
A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.
Directory of Open Access Journals (Sweden)
Courtney L Davis
Full Text Available We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
Directory of Open Access Journals (Sweden)
Pierre-Paul Bitton
Full Text Available Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1
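The density dependence reported in this abstract can be made concrete with the receptor-noise model in its simplest, dichromatic form, where each channel's noise is a Weber fraction divided by the square root of that receptor's relative density. The avian analyses above use the tetrachromatic generalization; the sketch below is a hedged two-receptor illustration with invented parameter values.

```python
import numpy as np

def delta_s_dichromat(qA, qB, densities, weber=0.05):
    """Chromatic contrast (in JNDs) between stimuli A and B for a
    two-receptor (dichromatic) receptor-noise model.

    qA, qB    : quantum catches of the two receptor classes for each stimulus
    densities : relative receptor densities; noise scales as 1/sqrt(density)
    weber     : Weber fraction of the reference receptor (assumed value)
    """
    df = np.log(np.asarray(qA) / np.asarray(qB))    # receptor signals (Fechner log form)
    e = weber / np.sqrt(np.asarray(densities))      # per-channel noise
    return abs(df[0] - df[1]) / np.sqrt(e[0] ** 2 + e[1] ** 2)

# Raising one receptor's assumed density lowers its noise and inflates the
# computed contrast -- the sensitivity the study quantifies:
d_equal = delta_s_dichromat([1.0, 2.0], [1.0, 1.0], densities=[1.0, 1.0])
d_dense = delta_s_dichromat([1.0, 2.0], [1.0, 1.0], densities=[1.0, 4.0])
```

Since the assumed densities enter directly through the noise terms, two researchers using different density estimates for the same spectra will report different contrasts, which is why the abstract urges fuller characterization of avian retinas.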
Mirus, Benjamin B.
2015-01-01
Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.
A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.
Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B
2018-01-01
We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
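Latin hypercube sampling, used above for parameter estimation, can be sketched in a few lines of NumPy: each parameter axis on the unit cube is cut into n equal strata, one point is drawn per stratum, and the strata are shuffled independently per axis so every parameter is evenly covered. This is the generic textbook construction, not the authors' code; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_params):
    """Latin hypercube sample on the unit hypercube: along every parameter
    axis, each of the n_samples strata contains exactly one sample."""
    # One point per stratum per axis: (i + u) / n with u ~ U[0, 1)
    cut = (np.arange(n_samples) + rng.uniform(size=(n_params, n_samples))) / n_samples
    # Independently shuffle the strata along each parameter axis
    for row in cut:
        rng.shuffle(row)
    return cut.T  # shape (n_samples, n_params)

# e.g. 10 parameter vectors for a 4-parameter immune model
samples = latin_hypercube(10, 4)
```

Each row is then rescaled to the clinically plausible range of each model parameter before the Monte Carlo simulations are run.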
Comparison of Four Mixed Layer Mesoscale Parameterizations and the Equation for an Arbitrary Tracer
Canuto, V. M.; Dubovikov, M. S.
2011-01-01
In this paper we discuss two issues: the inter-comparison of four mixed layer mesoscale parameterizations and the search for the eddy-induced velocity for an arbitrary tracer. It must be stressed that our analysis is limited to mixed layer mesoscales, since we do not treat sub-mesoscales and small-scale turbulent mixing. As for the first item, since three of the four parameterizations are expressed in terms of a stream function and a residual flux of the RMT (residual mean theory) formalism, while the fourth is expressed in terms of vertical and horizontal fluxes, we needed a formalism to connect the two formulations. The standard RMT representation developed for the deep ocean cannot be extended to the mixed layer since its stream function does not vanish at the ocean's surface. We develop a new RMT representation that satisfies the surface boundary condition. As for the general form of the eddy-induced velocity for an arbitrary tracer, thus far it has been assumed that there is only the one that originates from the curl of the stream function, because the tracer residual flux was assumed to be purely diffusive. We show, however, that in the case of an arbitrary tracer the residual flux also has a skew component that gives rise to an additional bolus velocity. Therefore, instead of only one bolus velocity, there are now two: one coming from the curl of the stream function and the other from the skew part of the residual flux. In the buoyancy case, only one bolus velocity contributes to the mean buoyancy equation since the residual flux is indeed purely diffusive.
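The diffusive/skew split invoked above follows the standard decomposition of an eddy tracer flux (the generic skew-flux argument of residual mean theory, written here in textbook notation rather than the authors' own symbols):

```latex
% Eddy flux of a mean tracer \bar{c}, split into symmetric (diffusive)
% and antisymmetric (skew) tensor parts:
\overline{\mathbf{u}'c'} = -\mathsf{K}\,\nabla\bar{c},
\qquad \mathsf{K} = \mathsf{S} + \mathsf{A},
\quad \mathsf{S} = \mathsf{S}^{\mathrm{T}},\ \mathsf{A} = -\mathsf{A}^{\mathrm{T}}.
% Because A_{ij}\,\partial_i\partial_j \bar{c} = 0 for antisymmetric A,
% the skew part acts purely advectively:
\nabla\cdot\!\left(\mathsf{A}\,\nabla\bar{c}\right)
  = (\partial_i A_{ij})\,\partial_j \bar{c}
  \equiv \mathbf{u}^{*}\cdot\nabla\bar{c},
\qquad u^{*}_{j} = \partial_i A_{ij},
\qquad \nabla\cdot\mathbf{u}^{*} = 0.
```

Any antisymmetric part of the residual flux therefore contributes a non-divergent eddy-induced (bolus) velocity in addition to the one obtained from the curl of the stream function, which is the extra term the paper identifies for tracers other than buoyancy.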
Directory of Open Access Journals (Sweden)
Andrew G. Slater
2011-05-01
Full Text Available The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of the effects of historic and future LCLUC on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to ~50-m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather
Testing cloud microphysics parameterizations in NCAR CAM5 with ISDAC and M-PACE observations
Liu, Xiaohong; Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Shi, Xiangjun; Wang, Zhien; Lin, Wuyin; Ghan, Steven J.; Earle, Michael; Liu, Peter S. K.; Zelenyuk, Alla
2011-01-01
Arctic clouds simulated by the National Center for Atmospheric Research (NCAR) Community Atmospheric Model version 5 (CAM5) are evaluated with observations from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) and Mixed-Phase Arctic Cloud Experiment (M-PACE), which were conducted at its North Slope of Alaska site in April 2008 and October 2004, respectively. Model forecasts for the Arctic spring and fall seasons performed under the Cloud-Associated Parameterizations Testbed framework generally reproduce the spatial distributions of cloud fraction for single-layer boundary-layer mixed-phase stratocumulus and multilayer or deep frontal clouds. However, for low-level stratocumulus, the model significantly underestimates the observed cloud liquid water content in both seasons. As a result, CAM5 significantly underestimates the surface downward longwave radiative fluxes by 20-40 W m-2. Introducing a new ice nucleation parameterization slightly improves the model performance for low-level mixed-phase clouds by increasing cloud liquid water content through the reduction of the conversion rate from cloud liquid to ice by the Wegener-Bergeron-Findeisen process. The CAM5 single-column model testing shows that changing the instantaneous freezing temperature of rain to form snow from -5°C to -40°C causes a large increase in modeled cloud liquid water content through the slowing down of cloud liquid and rain-related processes (e.g., autoconversion of cloud liquid to rain). The underestimation of aerosol concentrations in CAM5 in the Arctic also plays an important role in the low bias of cloud liquid water in the single-layer mixed-phase clouds. In addition, numerical issues related to the coupling of model physics and time stepping in CAM5 are responsible for the model biases and will be explored in future studies.
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
Astitha, M.; Lelieveld, J.; Kader, M. Abdel; Pozzer, A.; de Meij, A.
2012-01-01
Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a global...
Shi, J.; Menenti, M.; Lindenbergh, R.
2013-01-01
Glaciers in the Tibetan mountains are expected to be sensitive to turbulent sensible and latent heat fluxes. One of the most significant factors of the energy exchange between the atmospheric boundary layer and the glacier is the roughness of the glacier surface. However, methods to parameterize
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-08-01
Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation and the in-cloud liquid water content is slightly increased after retuning the model.
Moore, Luke; Galand, Marina; Mueller-Wodarg, Ingo; Mendillo, Michael
2009-12-01
We evaluate the effectiveness of two parameterizations in Saturn's ionosphere over a range of solar fluxes, seasons, and latitudes. First, the parameterization of the thermal electron heating rate, Q*e, introduced in [Moore, L., Galand, M., Mueller-Wodarg, I., Yelle, R.V., Mendillo, M., 2008. Plasma temperatures in Saturn's ionosphere. J. Geophys. Res. 113, A10306. doi:10.1029/2008JA013373] for one specific set of conditions, is found to produce ion and electron temperatures that agree with self-consistent suprathermal electron calculations to within 2% on average under all conditions considered. Next, we develop a new parameterization of the secondary ion production rate at Saturn based on the calculations of [Galand, M., Moore, L., Mueller-Wodarg, I., Mendillo, M., 2009. Modeling the photoelectron secondary ionization process at Saturn. J. Geophys. Res., accepted]; it is found to be accurate to within 4% on average. The demonstrated effectiveness of these two parameterizations over a wide range of input conditions makes them good candidates for inclusion in 3D Saturn thermosphere-ionosphere general circulation models (TIGCMs).
Directory of Open Access Journals (Sweden)
Pacôme Delva
2017-03-01
Full Text Available An extensive review of past work on relativistic gravimetry, gradiometry and chronometric geodesy is given. Then, general theoretical tools are presented and applied for the case of a stationary parameterized post-Newtonian metric. The special case of a stationary clock on the surface of the Earth is studied.
Energy Technology Data Exchange (ETDEWEB)
Marjanovic, Nikola [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA; Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Mirocha, Jeffrey D. [Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Kosović, Branko [Research Applications Laboratory, Weather Systems and Assessment Program, University Corporation for Atmospheric Research, PO Box 3000, Boulder, Colorado 80307, USA; Lundquist, Julie K. [Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Campus Box 311, Boulder, Colorado 80309, USA; National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, USA; Chow, Fotini Katopodes [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA
2017-11-01
A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse-resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.
Helsen, Michiel M.; van de Wal, Roderik S. W.; Reerink, Thomas J.; Bintanja, Richard; Madsen, Marianne S.; Yang, Shuting; Li, Qiang; Zhang, Qiong
2017-01-01
The albedo of the surface of ice sheets changes as a function of time due to the effects of deposition of new snow, ageing of dry snow, bare ice exposure, melting and run-off. Currently, the calculation of the albedo of ice sheets is highly parameterized within the earth system model EC-Earth by
Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.
1993-01-01
Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.
Energy Technology Data Exchange (ETDEWEB)
Markus Meier, H.E. [Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden). Rossby Centre
2000-09-01
As mixing plays a dominant role for the physics of an estuary like the Baltic Sea (seasonal heat storage, mixing in channels, deep water mixing), different mixing parameterizations for use in 3D Baltic Sea models are discussed and compared. For this purpose two different OGCMs of the Baltic Sea are utilized. Within the Swedish regional climate modeling program, SWECLIM, a 3D coupled ice-ocean model for the Baltic Sea has been coupled with an improved version of the two-equation k-ε turbulence model with a corrected dissipation term, flux boundary conditions to include the effect of a turbulence-enhanced layer due to breaking surface gravity waves, and a parameterization for breaking internal waves. Results of multi-year simulations are compared with observations. The seasonal thermocline is simulated satisfactorily and erosion of the halocline is avoided. Unsolved problems are discussed. To replace the controversial equation for dissipation, the performance of a hierarchy of k-models has been tested and compared with the k-ε model. In addition, it is shown that the results of the mixing parameterization depend very much on the choice of the ocean model. Finally, the impact of two mixing parameterizations on Baltic Sea climate is investigated. In this case the sensitivity of mean SST, vertical temperature and salinity profiles, ice season and seasonal cycle of heat fluxes is quite large.
Impact of previously disadvantaged land-users on sustainable ...
African Journals Online (AJOL)
Impact of previously disadvantaged land-users on sustainable agricultural ... about previously disadvantaged land users involved in communal farming systems ... of input, capital, marketing, information and land use planning, with effect on ...
Energy Technology Data Exchange (ETDEWEB)
Vasconcellos, C. A. Zen, E-mail: cesarzen@cesarzen.com [Instituto de Física, Universidade Federal do Rio Grande do Sul (UFRGS), Av. Bento Gonçalves 9500, 91501-970, Porto Alegre (Brazil); International Center for Relativistic Astrophysics Network (ICRANet), Piazza della Repubblica 10, 65122 Pescara (Italy)
2015-12-17
Nuclear science has developed many excellent theoretical models for many-body systems in the domain of the baryon-meson strong interaction for the nucleus and nuclear matter at low, medium and high densities. However, a full microscopic understanding of nuclear systems in the extreme density domain of compact stars is still lacking. The aim of this contribution is to shed some light on open questions facing the nuclear many-body problem at the very high density domain. Here we focus our attention on the conceptual issue of naturalness and its role in shaping the baryon-meson phase space dynamics in the description of the equation of state (EoS) of nuclear matter and neutron stars. In particular, in order to stimulate possible new directions of research, we discuss relevant aspects of a recently developed relativistic effective theory for nuclear matter within Quantum Hadrodynamics (QHD) with genuine many-body forces and derivative natural parametric couplings. Among other topics we discuss in this work the connection of this theory with other known effective QHD models of the literature and its potentiality in describing a new physics for dense matter. The model with parameterized couplings exhausts the whole fundamental baryon octet (n, p, Σ⁻, Σ⁰, Σ⁺, Λ, Ξ⁻, Ξ⁰) and simulates n-order corrections to the minimal Yukawa baryon couplings by considering nonlinear self-couplings of meson fields and meson-meson interaction terms coupled to the baryon fields involving scalar-isoscalar (σ, σ*), vector-isoscalar (ω, Φ), vector-isovector (ϱ) and scalar-isovector (δ) virtual sectors. Following recent experimental results, we consider in our calculations the extreme case where the Σ⁻ experiences such a strong repulsion that its influence on the nuclear structure of a neutron star is entirely excluded. A few examples of calculations of properties of neutron stars are shown and prospects for the future are discussed.
22 CFR 40.91 - Certain aliens previously removed.
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certain aliens previously removed. 40.91... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...
Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.
2017-12-01
At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity, which is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated, and applying a one-dimensional PBL parameterization to such high-resolution simulations can result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982), and uses a purely algebraic (level 2) model to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2), a field study that took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases in which phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also draws on a nested large-eddy simulation (LES) with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area; both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. The presentation will highlight the advantages of the 3D PBL parameterization.
Determining root correspondence between previously and newly detected objects
Paglieroni, David W.; Beer, N Reginald
2014-06-17
A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can
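The core idea of a simulation-based parametric likelihood approximation can be illustrated with a toy stochastic model (not FORMIND): replicate simulations at a proposed parameter value are summarized by a fitted normal distribution, whose density at the observed summary statistic serves as the likelihood inside an ordinary Metropolis sampler. Every model detail below (the growth model, the summary statistic, all numbers) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_rep=200):
    """Toy stochastic model: replicate summary statistics (mean biomass
    after noisy growth) for a growth parameter theta."""
    growth = rng.normal(theta, 0.5, size=n_rep)  # process stochasticity
    return 10.0 + growth                          # one summary per replicate

def synthetic_loglik(theta, observed):
    """Parametric (normal) likelihood approximation fitted to simulation output."""
    sims = simulate(theta)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd)

# Virtual 'field data' generated at a known parameter value
true_theta = 2.0
observed = simulate(true_theta).mean()

# Plain Metropolis sampler over the approximate likelihood
chain, theta = [], 0.0
loglik = synthetic_loglik(theta, observed)
for _ in range(3000):
    prop = theta + rng.normal(0, 0.3)
    prop_ll = synthetic_loglik(prop, observed)
    if np.log(rng.uniform()) < prop_ll - loglik:
        theta, loglik = prop, prop_ll
    chain.append(theta)

posterior = np.array(chain[1000:])  # discard burn-in
print(posterior.mean())             # recovers a value near true_theta
```

Because the likelihood is re-estimated from fresh simulations at every proposal, this is a noisy (pseudo-marginal-style) Metropolis scheme; the paper's point is that such a parametric approximation can stand in for an analytically intractable likelihood.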
Guo, X.; Yang, K.; Yang, W.; Li, S.; Long, Z.
2011-12-01
We present a field investigation over a melting valley glacier on the Tibetan Plateau. One particular aspect is that three melt phases are distinguished during the glacier's ablation season, which enables us to compare results over snow, bare-ice, and hummocky surfaces [with aerodynamic roughness lengths (z0M) varying on the order of 10⁻⁴–10⁻² m]. We address two issues of common concern in the study of glacio-meteorology and micrometeorology. First, we study turbulent energy flux estimation through a critical evaluation of three parameterizations of the scalar roughness lengths (z0T for temperature and z0q for humidity), which are key factors for the accurate estimation of sensible and latent heat fluxes using the bulk aerodynamic method. The first approach (Andreas 1987, Boundary-Layer Meteorol 38:159-184) is based on surface-renewal models and has been very widely applied in glaciated areas; the second (Yang et al. 2002, Q J Roy Meteorol Soc 128:2073-2087) has never been applied over an ice/snow surface, despite its validity in arid regions; the third (Smeets and van den Broeke 2008, Boundary-Layer Meteorol 128:339-355) is proposed specifically for rough ice, defined as z0M > 10⁻³ m or so. This empirical z0M threshold, above which the first approach gives underestimated z0T and z0q, is deemed of general relevance to glaciated areas (e.g. ice sheets/caps and valley/outlet glaciers). The first and third approaches tend to underestimate and overestimate turbulent heat/moisture exchange, respectively (relative errors often > 30%). Overall, the second approach produces fairly low errors in energy flux estimates; it thus emerges as a practically useful choice to parameterize z0T and z0q over an ice/snow surface. Our evaluation of z0T and z0q parameterizations hopefully serves as a useful source of reference for physically based modeling of the land-ice surface energy budget and mass balance. Second, we explore how scalar turbulence
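As a rough sketch of the bulk aerodynamic method discussed above, the snippet below computes a neutral-limit sensible heat flux with z0T parameterized by the surface-renewal polynomial of Andreas (1987). The regime coefficients are the commonly quoted published values, stability corrections are deliberately omitted, and all input numbers (wind, temperatures, roughness) are illustrative rather than taken from the study.

```python
import numpy as np

KAPPA = 0.4            # von Karman constant
RHO_CP = 1.29 * 1004   # air density (kg m-3) x specific heat (J kg-1 K-1), illustrative

def z0t_andreas(z0m, ustar, nu=1.4e-5):
    """Scalar roughness length for temperature from the Andreas (1987)
    surface-renewal polynomial, ln(z0T/z0M) = b0 + b1*ln(R*) + b2*ln(R*)^2,
    with R* = u* z0M / nu the roughness Reynolds number."""
    r_star = ustar * z0m / nu
    ln_r = np.log(r_star)
    if r_star <= 0.135:        # aerodynamically smooth
        b = (1.250, 0.0, 0.0)
    elif r_star < 2.5:         # transitional
        b = (0.149, -0.550, 0.0)
    else:                      # fully rough
        b = (0.317, -0.565, -0.183)
    return z0m * np.exp(b[0] + b[1] * ln_r + b[2] * ln_r ** 2)

def sensible_heat_flux(u, t_surf, t_air, z, z0m, z0t):
    """Neutral-limit bulk aerodynamic method: H = rho*cp * CH * U * (Ts - Ta)."""
    ch = KAPPA ** 2 / (np.log(z / z0m) * np.log(z / z0t))
    return RHO_CP * ch * u * (t_surf - t_air)

z0m = 1e-3                              # bare-ice roughness length (m)
z0t = z0t_andreas(z0m, ustar=0.25)      # fully rough regime here: z0T << z0M
h = sensible_heat_flux(u=4.0, t_surf=0.0, t_air=3.0, z=2.0, z0m=z0m, z0t=z0t)
print(z0t, h)                           # melting surface colder than air: H < 0
```

The point the abstract makes is visible in this sketch: over rough ice the surface-renewal polynomial drives z0T well below z0M, so the choice of z0T parameterization directly changes the magnitude of H.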
Directory of Open Access Journals (Sweden)
N. Romano
2011-12-01
Full Text Available We investigate the potential impact of accounting for seasonal variations in the climatic forcing, and of using different methods to parameterize the soil water content at field capacity, on the water balance components computed by a bucket model (BM). The single-layer BM of Guswa et al. (2002) is employed, whereas the Richards equation (RE) based Soil Water Atmosphere Plant (SWAP) model is used as a benchmark. The results are analyzed for two differently-textured soils and for synthetic runs under realistic seasonal weather conditions, using stochastically-generated daily rainfall data for a period of 100 years. Since transient soil-moisture dynamics and climatic seasonality play a key role in certain zones of the world, such as Mediterranean land areas, a specific feature of this study is to test the prediction capability of the bucket model under a condition where seasonal variations in rainfall are not in phase with the variations in plant transpiration. Reference is made to a hydrologic year with a rainy period (starting 1 November and lasting 151 days) during which vegetation is basically assumed to be in a dormant stage, followed by a drier and rainless period with a vegetation regrowth phase. Better agreement between the BM and RE-SWAP intercomparison results is obtained when BM is parameterized by a field capacity value determined through the drainage method proposed by Romano and Santini (2002). Depending on the vegetation regrowth or dormant season, rainfall variability within a season results in transpiration regimes and soil moisture fluctuations with distinctive features. During the vegetation regrowth season, transpiration exerts a key control on the soil water budget with respect to rainfall. During the dormant season of vegetation, the precipitation regime becomes an important climate forcing. Simulations also highlight the occurrence of bimodality in the probability distribution of soil moisture during the season when plants are
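A minimal daily bucket model in the spirit of the single-layer scheme of Guswa et al. (2002) can be sketched as below. All parameter values are illustrative placeholders, and rainfall and transpiration are deliberately put out of phase, mimicking the setting of the study: a rainy dormant season (days 0-150) followed by a rainless season with active vegetation.

```python
import numpy as np

# Root-zone saturation s (0-1) in a bucket of porosity n and rooting depth zr.
n, zr = 0.4, 0.5        # porosity (-) and rooting depth (m), illustrative
s_fc = 0.6              # saturation at field capacity (-)
et_max = 0.003          # maximum daily transpiration (m/day)
k_drain = 0.1           # drainage rate above field capacity (1/day)

rng = np.random.default_rng(1)
days = np.arange(365)
wet_season = days < 151                     # rainy, vegetation dormant
rain = rng.exponential(0.006, 365) * (rng.uniform(size=365) < 0.4) * wet_season

s, series = 0.4, []
for p, dormant in zip(rain, wet_season):
    s = min(s + p / (n * zr), 1.0)          # infiltration; excess becomes runoff
    if not dormant:                         # transpiration only when plants active
        s -= et_max * min(s / s_fc, 1.0) / (n * zr)
    if s > s_fc:
        s -= k_drain * (s - s_fc)           # leakage above field capacity
    s = max(s, 0.0)
    series.append(s)

series = np.array(series)
print(series[150], series[-1])              # end of wet season vs. end of dry season
```

Even this crude sketch reproduces the qualitative behaviour described above: soil moisture builds toward field capacity during the out-of-phase rainy season and is drawn down by transpiration in the rainless growing season.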
Stepping towards new parameterizations for non-canonical atmospheric surface-layer conditions
Calaf, M.; Margairaz, F.; Pardyjak, E.
2017-12-01
Representing land-atmosphere exchange processes as a lower boundary condition remains a challenge. This is partially because land-surface heterogeneity exists at all spatial scales and its variability does not "average out" with decreasing scale. Such variability need not blend rapidly away from the boundary, and can therefore affect the near-surface region of the atmosphere. Traditionally, the momentum and energy fluxes linking the land surface to the flow in NWP models have been parameterized using atmospheric surface layer (ASL) similarity theory. There is ample evidence that such a representation is acceptable for stationary and planar-homogeneous flows in the absence of subsidence. However, heterogeneity remains a ubiquitous feature eliciting appreciable deviations from ASL similarity theory, especially in scalars such as moisture and air temperature, whose blending is less efficient than that of momentum. The focus of this project is to quantify the effect of surface thermal heterogeneity with scales O(1/10) of the height of the atmospheric boundary layer, characterized by uniform roughness. Such near-canonical cases describe inhomogeneous scalar transport in an otherwise planar homogeneous flow when thermal stratification is weak or absent. In this work we present a large-eddy simulation study that characterizes the effect of surface thermal heterogeneities on the atmospheric flow using the concept of dispersive fluxes. Results illustrate a regime in which the flow is mostly driven by the surface thermal heterogeneities and the contribution of the dispersive fluxes can account for up to 40% of the total sensible heat flux. Results also illustrate an alternative regime in which the effect of the surface thermal heterogeneities is quickly blended, and the dispersive fluxes instead quantify the flow spatial heterogeneities produced by coherent turbulent structures resulting from the surface shear stress. A threshold flow
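The dispersive-flux concept used above has a compact definition: it is the covariance of the *spatial* deviations of time-averaged fields, ⟨w″θ″⟩, capturing transport by stationary flow structures locked to the surface pattern. A hedged sketch on synthetic (invented) fields:

```python
import numpy as np

rng = np.random.default_rng(2)
nx = ny = 64
x = np.linspace(0, 2 * np.pi, nx)

# Time-mean vertical velocity and temperature over a heterogeneous surface:
# a correlated stationary pattern (e.g. plumes over warm patches) plus noise.
pattern = np.sin(4 * x)[None, :] * np.ones((ny, 1))
w_bar = 0.3 * pattern + 0.05 * rng.standard_normal((ny, nx))       # m/s
theta_bar = 300.0 + 1.5 * pattern + 0.1 * rng.standard_normal((ny, nx))  # K

w_dd = w_bar - w_bar.mean()               # spatial deviation w'' of the time mean
theta_dd = theta_bar - theta_bar.mean()   # spatial deviation theta''
dispersive_flux = (w_dd * theta_dd).mean()  # <w'' theta''>, K m/s
print(dispersive_flux)
```

Because the two patterns are correlated, the horizontally averaged product is nonzero even though neither deviation field has a mean: that is exactly the heat transport a one-dimensional, horizontally homogeneous parameterization cannot see.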
Rolinski, S.; Müller, C.; Lotze-Campen, H.; Bondeau, A.
2010-12-01
More than a quarter of the Earth's land surface is covered by grassland, which also makes up the major part (~70%) of the agricultural area. Most of this area is used for livestock production at different degrees of intensity. The dynamic global vegetation model LPJmL (Sitch et al., Global Change Biology, 2003; Bondeau et al., Global Change Biology, 2007) is one of the few process-based models that simulate biomass production on managed grasslands at the global scale. The implementation of managed grasslands and its evaluation have received little attention so far, as reference data on grassland productivity are scarce and the definitions of grassland extent and usage are highly uncertain. However, grassland productivity concerns large areas and strongly influences global estimates of carbon and water budgets, and should thus be improved. Plants are implemented in LPJmL in an aggregated form as plant functional types, assuming that processes concerning carbon and water fluxes are quite similar between species of the same type. Therefore, the parameterization of a functional type is possible with parameters in a physiologically meaningful range of values. The actual choice of parameter values from the possible and reasonable phase space should satisfy the condition of the best fit between model results and measured data. In order to improve the parameterization of managed grass, we follow a combined procedure using model output and measured data of carbon and water fluxes. By comparing carbon and water fluxes simultaneously, we expect well-balanced refinements and avoid over-tuning the model in only one direction. The comparison of annual grassland biomass to data from the Food and Agriculture Organization of the United Nations (FAO) per country provides an overview of the order of magnitude and identifies deviations. The comparison of daily net primary productivity, soil respiration and water fluxes at specific sites (FluxNet Data) provides
Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model
Romanou, A.; Romanski, J.; Gregg, W. W.
2014-01-01
eventually resurfaces with the global thermohaline circulation, especially in the Southern Ocean. Because of the reduced primary production and carbon export in GISSEH compared to GISSER, the biological pump efficiency, i.e., the ratio of primary production and carbon export at 75 m, in GISSEH is half of that in GISSER. The Southern Ocean emerges as a key region where the CO2 flux is as sensitive to biological parameterizations as it is to physical parameterizations. The fidelity of ocean mixing in the Southern Ocean compared to observations is shown to be a good indicator of the magnitude of the biological pump efficiency regardless of the physical model choice.
A new parameterization for integrated population models to document amphibian reintroductions.
Duarte, Adam; Pearl, Christopher A; Adams, Michael J; Peterson, James T
2017-09-01
Managers are increasingly implementing reintroduction programs as part of a global effort to alleviate amphibian declines. Given uncertainty in factors affecting populations and a need to make recurring decisions to achieve objectives, adaptive management is a useful component of these efforts. A major impediment to the estimation of demographic rates often used to parameterize and refine decision-support models is that life-stage-specific monitoring data are frequently sparse for amphibians. We developed a new parameterization for integrated population models to match the ecology of amphibians and capitalize on relatively inexpensive monitoring data to document amphibian reintroductions. We evaluate the capability of this model by fitting it to Oregon spotted frog (Rana pretiosa) monitoring data collected from 2007 to 2014 following their reintroduction within the Klamath Basin, Oregon, USA. The number of egg masses encountered and the estimated adult and metamorph abundances generally increased following reintroduction. We found that survival probability from egg to metamorph ranged from 0.01 in 2008 to 0.09 in 2009 and was not related to minimum spring temperatures, metamorph survival probability ranged from 0.13 in 2010-2011 to 0.86 in 2012-2013 and was positively related to mean monthly temperatures (logit-scale slope = 2.37), adult survival probability was lower for founders (0.40) than individuals recruited after reintroduction (0.56), and the mean number of egg masses per adult female was 0.74. Our study is the first to test hypotheses concerning Oregon spotted frog egg-to-metamorph and metamorph-to-adult transition probabilities in the wild and document their response at multiple life stages following reintroduction. Furthermore, we provide an example to illustrate how the structure of our integrated population model serves as a useful foundation for amphibian decision-support models within adaptive management programs. The integration of multiple, but
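The stage-specific rates reported above can be strung together into a deterministic life-cycle projection. This is only a back-of-the-envelope sketch: the paper fits a Bayesian integrated population model, not this recursion, and the clutch size per egg mass is an assumed placeholder, as are the founding abundances.

```python
# Stage-specific rates taken from the abstract (point values, illustrative use)
phi_egg_to_meta = 0.09        # egg-to-metamorph survival (best year, 2009)
phi_meta = 0.5                # metamorph survival, midrange of reported 0.13-0.86
phi_adult = 0.56              # adult survival, post-reintroduction recruits
egg_masses_per_female = 0.74  # mean egg masses per adult female

eggs_per_mass = 600           # assumed clutch size (placeholder, not from the paper)
adults, metamorphs = 50.0, 0.0  # assumed founding population

for year in range(5):
    eggs = (adults / 2) * egg_masses_per_female * eggs_per_mass  # half are female
    new_metamorphs = eggs * phi_egg_to_meta
    adults = adults * phi_adult + metamorphs * phi_meta          # survival + recruitment
    metamorphs = new_metamorphs
    print(year, round(adults), round(metamorphs))
```

With these rates the projected population grows after reintroduction, qualitatively matching the reported increase in egg masses and adult abundance; an integrated population model additionally propagates the sampling uncertainty that this sketch ignores.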
Directory of Open Access Journals (Sweden)
T. J. Anurose
2014-06-01
Full Text Available The performance of a surface-layer parameterization scheme in a high-resolution regional model (HRM) is evaluated by comparing the model-simulated sensible heat flux (H) with concurrent in situ measurements recorded at Thiruvananthapuram (8.5° N, 76.9° E), a coastal station in India. With a view to examining the role of atmospheric stability in conjunction with the roughness lengths in the determination of the heat exchange coefficient (CH) and H for varying meteorological conditions, the model simulations are repeated by assigning different values to the ratio of momentum and thermal roughness lengths (i.e. z0m/z0h) in three distinct configurations of the surface-layer scheme designed for the present study. These three configurations resulted in differential behaviour under the varying meteorological conditions, which is attributed to the sensitivity of CH to the bulk Richardson number (RiB) under extremely unstable, near-neutral and stable stratification of the atmosphere.
Time parameterizations and spin supplementary conditions of the Mathisson-Papapetrou-Dixon equations
Lukes-Gerakopoulos, Georgios
2017-11-01
The implications of two different time constraints on the Mathisson-Papapetrou-Dixon (MPD) equations are discussed under three spin supplementary conditions (SSCs). For this purpose the MPD equations are revisited without specifying the affine parameter, and several relations are reintroduced in their general form. The latter allows one to investigate the consequences of combining the Mathisson-Pirani (MP) SSC, the Tulczyjew-Dixon (TD) SSC and the Ohashi-Kyrian-Semerák (OKS) SSC with two affine parameter types: the proper time on the one hand and the parameterizations introduced in [Gen. Relativ. Gravit. 8, 197 (1977), 10.1007/BF00763547] on the other. For the MP SSC and the TD SSC it is shown that quantities that are constants of motion for one affine parameter are not for the other, while for the OKS SSC it is shown that the two affine parameters coincide. To clarify the relation between the two affine parameters in the case of the TD SSC, the MPD equations are evolved and discussed.
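For orientation, the MPD equations and the generic form of an SSC can be written, in a commonly quoted form with the affine parameter λ left unspecified (as in the abstract), as:

```latex
\frac{D p^{\mu}}{\mathrm{d}\lambda} = -\frac{1}{2}\, R^{\mu}{}_{\nu\kappa\lambda}\, u^{\nu} S^{\kappa\lambda},
\qquad
\frac{D S^{\mu\nu}}{\mathrm{d}\lambda} = p^{\mu} u^{\nu} - p^{\nu} u^{\mu},
\qquad
S^{\mu\nu} V_{\nu} = 0,
```

where p is the four-momentum, u = dx/dλ the tangent to the worldline, S the spin tensor, and the choice of the reference vector V distinguishes the SSCs: V ∝ u for MP, V ∝ p for TD, and the special vector of the OKS condition.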
Huang, Melin; Huang, Bormin; Huang, Allen H.-L.
2015-10-01
Cumulus parameterization schemes are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and the compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills these purposes in the Weather Research and Forecasting (WRF) model, and the National Centers for Environmental Prediction (NCEP) has worked to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, the scheme is very well suited to parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization essentials, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.
Lai, Changliang; Wang, Junbiao; Liu, Chuang
2014-10-01
Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original natural twisted geometry and orients the cross-sections of beam elements exactly. The approach is parameterized and coded in the Patran Command Language (PCL). The FE modeling demonstrations indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.
International Nuclear Information System (INIS)
Withers, R.L.; Thompson, J.G.
1993-01-01
The crystal structure of low carnegieite, NaAlSiO₄ [M_r = 142.05, orthorhombic, Pb2₁a, a = 10.261(1), b = 14.030(2), c = 5.1566(6) Å, D_x = 2.542 g cm⁻³, Z = 4, Cu Kα₁, λ = 1.5406 Å, μ = 77.52 cm⁻¹, F(000) = 559.85], is determined via Rietveld refinement from powder data, R_p = 0.057, R_wp = 0.076, R_Bragg = 0.050. Given that there are far too many parameters to be determined via unconstrained Rietveld refinement, a group theoretical or modulation wave approach is used in order to parameterize the structural deviation of low carnegieite from its underlying C9 aristotype. Appropriate crystal chemical constraints are applied in order to provide two distinct plausible starting models for the structure of the aluminosilicate framework. The correct starting model for the aluminosilicate framework as well as the ordering and positions of the non-framework Na atoms are then determined via Rietveld refinement. At all stages, chemical plausibility is checked via the use of the bond-length-bond-valence formalism. The JCPDS file number for low carnegieite is 44-1496. (orig.)
The feasibility of parameterizing four-state equilibria using relaxation dispersion measurements
International Nuclear Information System (INIS)
Li Pilong; Martins, Ilídio R. S.; Rosen, Michael K.
2011-01-01
Coupled equilibria play important roles in controlling information flow in biochemical systems, including allosteric molecules and multidomain proteins. In the simplest case, two equilibria are coupled to produce four interconverting states. In this study, we assessed the feasibility of determining the degree of coupling between two equilibria in a four-state system via relaxation dispersion measurements. A major bottleneck in this effort is the lack of efficient approaches to data analysis. To this end, we designed a strategy to efficiently evaluate the smoothness of the target function surface (TFS). Using this approach, we found that the TFS is very rough when fitting benchmark CPMG data to all adjustable variables of the four-state equilibria. After constraining a portion of the adjustable variables, which can often be achieved through independent biochemical manipulation of the system, the smoothness of the TFS improves dramatically, although it is still insufficient to pinpoint the solution. The four-state equilibria can be finally solved with further incorporation of independent chemical shift information that is readily available. We also used Monte Carlo simulations to evaluate how well each adjustable parameter can be determined in a large kinetic and thermodynamic parameter space and how much improvement can be achieved in defining the parameters through additional measurements. The results show that in favorable conditions the combination of relaxation dispersion and biochemical manipulation allows the four-state equilibrium to be parameterized, and thus the coupling strength between the two processes to be determined.
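The thermodynamics of the four-state scheme can be made concrete with a small sketch (not the paper's fitting code): two two-state equilibria A↔B (K1) and X↔Y (K2) are coupled through a factor alpha that multiplies K2 whenever the first equilibrium is in state B; alpha = 1 corresponds to uncoupled equilibria. All numbers are illustrative.

```python
import numpy as np

def four_state_populations(k1, k2, alpha):
    """Equilibrium populations of the states AX, AY, BX, BY.

    Unnormalized statistical weights relative to AX: moving A->B costs k1,
    X->Y costs k2, and doing both picks up the coupling factor alpha
    (a thermodynamic cycle, so one corner carries k1*alpha*k2)."""
    w = np.array([1.0, k2, k1, k1 * alpha * k2])
    return w / w.sum()

p = four_state_populations(k1=2.0, k2=0.5, alpha=10.0)
print(dict(zip(["AX", "AY", "BX", "BY"], p.round(3))))
```

With positive coupling (alpha > 1) the doubly converted state BY is overpopulated relative to the uncoupled expectation; relaxation dispersion aims to recover k1, k2 and alpha (plus the exchange rates) from the interconversion kinetics among these four states.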
Energy Technology Data Exchange (ETDEWEB)
Olsson, P.Q.; Meyers, M.P.; Kreidenweis, S.; Cotton, W.R. [Colorado State Univ., Fort Collins, CO (United States)
1996-04-01
The aim of this new project is to develop an aerosol/cloud microphysics parameterization of mixed-phase stratus and boundary layer clouds. Our approach is to create, test, and implement a bulk-microphysics/aerosol model using data from Atmospheric Radiation Measurement (ARM) Cloud and Radiation Testbed (CART) sites and large-eddy simulation (LES) explicit bin-resolving aerosol/microphysics models. The primary objectives of this work are twofold. First, we need to predict the number concentrations of activated aerosol transferred to the droplet spectrum, so that the aerosol population directly affects cloud formation and microphysics. Second, we plan to couple the aerosol model to the gas and aqueous-chemistry module that will drive aerosol formation and growth. We begin by exploring the feasibility of performing cloud-resolving simulations of Arctic stratus clouds over the North Slope CART site. These simulations, using Colorado State University's Regional Atmospheric Modeling System (RAMS), will be useful in designing the structure of the cloud-resolving model and in interpreting data acquired at the North Slope site.
Parameterization of the Age-Dependent Whole Brain Apparent Diffusion Coefficient Histogram
Batra, Marion; Nägele, Thomas
2015-01-01
Purpose. The distribution of apparent diffusion coefficient (ADC) values in the brain can be used to characterize age effects and pathological changes of the brain tissue. The aim of this study was the parameterization of the whole brain ADC histogram by an advanced model that takes the influence of age into account. Methods. Whole brain ADC histograms were calculated for all data and for seven age groups between 10 and 80 years. Modeling of the histograms was performed for two parts of the histogram separately: the brain tissue part was modeled by two Gaussian curves, while the remaining part was fitted by the sum of a Gaussian curve, a biexponential decay, and a straight line. Results. A consistent fitting of the histograms of all age groups was possible with the proposed model. Conclusions. This study confirms the strong dependence of the whole brain ADC histograms on the age of the examined subjects. The proposed model can be used to characterize changes of the whole brain ADC histogram in certain diseases under consideration of age effects. PMID:26609526
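The histogram model described in the Methods (two Gaussians for brain tissue; a Gaussian plus a biexponential decay plus a straight line for the remainder) can be written down directly. The sketch below only builds and evaluates the model function; all parameter values are invented placeholders, not the study's fitted values.

```python
import numpy as np

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def adc_histogram_model(x, tissue, background):
    """Whole-brain ADC histogram model from the abstract:
    tissue = (a1, m1, s1, a2, m2, s2): two Gaussians for brain tissue;
    background = (a3, m3, s3, b1, t1, b2, t2, c0, c1): one Gaussian,
    a biexponential decay, and a straight line for the rest."""
    a1, m1, s1, a2, m2, s2 = tissue
    a3, m3, s3, b1, t1, b2, t2, c0, c1 = background
    tissue_part = gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)
    rest = (gauss(x, a3, m3, s3)
            + b1 * np.exp(-x / t1) + b2 * np.exp(-x / t2)
            + c0 + c1 * x)
    return tissue_part + rest

# ADC axis in units of 10^-3 mm^2/s; normal brain tissue peaks near 0.7-0.8
x = np.linspace(0.2, 3.5, 300)
y = adc_histogram_model(
    x,
    tissue=(1.0, 0.72, 0.08, 0.4, 0.85, 0.15),
    background=(0.05, 3.0, 0.3, 0.2, 0.5, 0.1, 1.5, 0.01, -0.002),
)
print(y.max())
```

In the study these parameters are fitted separately per age group (e.g. with nonlinear least squares), which is how the age dependence of the histogram shape is then characterized.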
Parameterization of the Age-Dependent Whole Brain Apparent Diffusion Coefficient Histogram
Directory of Open Access Journals (Sweden)
Uwe Klose
2015-01-01
Patra, Abhilash; Jana, Subrata; Samal, Prasanjit
2018-04-01
The construction of meta generalized gradient approximations based on the density matrix expansion (DME) is considered one of the most accurate techniques for designing semilocal exchange energy functionals in the two-dimensional density functional formalism. The exchange holes modeled using the DME possess unique features that make them well suited to this task. Parameterized semilocal exchange energy functionals based on the DME are proposed. Different forms of the momentum and flexible parameters are used to subsume the non-uniform effects of the density in the newly constructed semilocal functionals. In addition to the exchange functionals, a suitable correlation functional is also constructed by building on the local correlation functional developed for the 2D homogeneous electron gas. Non-local effects are induced into the correlation functional by a parametric form of one of the newly constructed exchange energy functionals. The proposed functionals are applied to parabolic quantum dots with a varying number of confined electrons and varying confinement strength. The results obtained with the aforementioned functionals are quite satisfactory, which indicates their suitability for two-dimensional quantum systems.
The parameterized post-Newtonian limit of bimetric theories of gravity
International Nuclear Information System (INIS)
Clifton, Timothy; Banados, Maximo; Skordis, Constantinos
2010-01-01
We consider the post-Newtonian limit of a general class of bimetric theories of gravity, in which both metrics are dynamical. The established parameterized post-Newtonian approach is followed as closely as possible, although new potentials are found that do not exist within the standard framework. It is found that these theories can evade solar system tests of post-Newtonian gravity remarkably well. We show that perturbations about Minkowski space in these theories contain both massless and massive degrees of freedom, and that in general there are two different types of massive mode, each with a different mass parameter. If both of these masses are sufficiently large, then the predictions of the most general class of theories we consider are indistinguishable from those of general relativity, up to post-Newtonian order in a weak-field, low-velocity expansion. In the limit that the massive modes become massless, we find that these general theories do not exhibit a van Dam-Veltman-Zakharov-like discontinuity in their γ parameter, although there are discontinuities in other post-Newtonian parameters as the massless limit is approached. This smooth behaviour in γ is due to the discontinuities from each of the two different massive modes cancelling each other out. Such cancellations cannot occur in special cases with only one massive mode, such as the Isham-Salam-Strathdee theory.
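For orientation, γ is the parameter multiplying the spatial potential term in the standard parameterized post-Newtonian metric expansion, which to the order relevant here reads schematically:

```latex
g_{00} = -1 + 2U - 2\beta U^{2} + \cdots, \qquad
g_{ij} = \left(1 + 2\gamma U\right)\delta_{ij} + \cdots,
```

where U is the Newtonian potential and general relativity has γ = β = 1. Classic light-bending and Shapiro-delay observables scale as (1+γ)/2, which is why a smooth massless limit in γ is what lets these bimetric theories evade solar system tests.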
Haiduke, Roberto Luiz A; Bartlett, Rodney J
2018-05-14
Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide good excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.
Parameterization of cirrus microphysical and radiative properties in larger-scale models
International Nuclear Information System (INIS)
Heymsfield, A.J.; Coen, J.L.
1994-01-01
This study exploits measurements in clouds sampled during several field programs to develop and validate parameterizations that represent the physical and radiative properties of convectively generated cirrus clouds in intermediate and large-scale models. The focus is on cirrus anvils because they occur frequently, cover large areas, and play a large role in the radiation budget. Preliminary work focuses on understanding the microphysical, radiative, and dynamical processes that occur in these clouds. A detailed microphysical package has been constructed that considers the growth of the following hydrometeor types: water drops, needles, plates, dendrites, columns, bullet rosettes, aggregates, graupel, and hail. Particle growth processes include diffusional and accretional growth, aggregation, sedimentation, and melting. This package is being implemented in a simple dynamical model that tracks the evolution and dispersion of hydrometeors in a stratiform anvil cloud. Given the momentum, vapor, and ice fluxes into the stratiform region and the temperature and humidity structure in the anvil's environment, this model will suggest anvil properties and structure.
Using Remote Sensing Data to Parameterize Ice Jam Modeling for a Northern Inland Delta
Directory of Open Access Journals (Sweden)
Fan Zhang
2017-04-01
Full Text Available The Slave River is a northern river in Canada, with ice being an important component of its flow regime for at least half of the year. During the spring breakup period, ice jams and ice-jam flooding can occur in the Slave River Delta, which benefits the replenishment of the moisture and sediment required to maintain the ecological integrity of the delta. To better understand the ice jam processes that lead to flooding, as well as the replenishment of the delta, the one-dimensional hydraulic river ice model RIVICE was implemented to simulate and explore ice jam formation in the Slave River Delta. Incoming ice volume, a crucial input parameter for RIVICE, was determined by the novel approach of using MODIS space-borne remote sensing imagery. Space-borne and air-borne remote sensing data were used to parameterize the upstream ice volume available for ice jamming, and gauge data were used to complement model calibration and validation. HEC-RAS, another one-dimensional hydrodynamic model, was used to determine the ice volumes required for equilibrium jams and the upper limit of ice volume that a jam can sustain, which served as thresholds for the volumes estimated by the dynamic ice jam simulations using RIVICE. Parameter sensitivity analysis shows that morphological and hydraulic properties have great impact on ice jam length and water depth in the Slave River Delta.
Parameterized Linear Temporal Logics Meet Costs: Still not Costlier than LTL
Directory of Open Access Journals (Sweden)
Martin Zimmermann
2015-09-01
Full Text Available We continue the investigation of parameterized extensions of Linear Temporal Logic (LTL) that retain the attractive algorithmic properties of LTL: a polynomial-space model checking algorithm and a doubly-exponential-time algorithm for solving games. Alur et al. and Kupferman et al. showed that this is the case for Parametric LTL (PLTL) and PROMPT-LTL, respectively, which have temporal operators equipped with variables that bound their scope in time. Later, this was also shown to be true for Parametric LDL (PLDL), which extends PLTL to express all omega-regular properties. Here, we generalize PLTL to systems with costs, i.e., we do not bound the scope of operators in time, but in terms of the cost accumulated during time. Again, we show that model checking and solving games for specifications in PLTL with costs is not harder than the corresponding problems for LTL. Finally, we discuss PLDL with costs and extensions to multiple cost functions.
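The cost-bounded operators described above can be illustrated with a toy check of a cost-bounded "eventually" on a finite trace. This is a simplified sketch under assumed finite-trace semantics, not the authors' formal setting (which works with infinite words and games); the trace, costs, and predicate are invented for illustration.

```python
# Illustrative check of a cost-bounded "eventually" on a finite trace:
# F_{<=k} phi holds if phi becomes true before the cost accumulated
# along the trace exceeds the bound k (toy semantics sketch).

def bounded_eventually(trace, costs, phi, bound):
    """trace: list of states; costs: per-step costs; phi: predicate on a state."""
    accumulated = 0
    for state, cost in zip(trace, costs):
        if phi(state):
            return True
        accumulated += cost
        if accumulated > bound:
            return False
    return False

# Example: a request is granted within accumulated cost 5, but not 3.
trace = ["req", "wait", "wait", "grant"]
costs = [1, 2, 1, 1]
print(bounded_eventually(trace, costs, lambda s: s == "grant", 5))  # True
print(bounded_eventually(trace, costs, lambda s: s == "grant", 3))  # False
```

With all costs equal to one, the bound reduces to the time bound of PROMPT-LTL, which is the sense in which the cost setting generalizes the time setting.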
Directory of Open Access Journals (Sweden)
Bílek Petr
2016-01-01
Full Text Available This paper deals with the visualization and evaluation of flow during the filtration of water seeded with artificial microscopic particles. Planar laser-induced fluorescence (PLIF) is a widespread method for the visualization and non-invasive characterization of flow; however, the method normally uses fluorescent dyes or, in special cases, fluorescent particles. In this article the flow is seeded with non-fluorescent monodisperse polystyrene particles with a diameter smaller than one micrometer. Monodisperse sub-micron particles are very suitable for testing textile filtration materials; nevertheless, non-fluorescent particles are not directly usable with the PLIF method. A water filtration setup with optical access to the place where the tested filter is mounted was built and used for the experiments. The concentration of particles in front of and behind the tested filter is measured in a laser light sheet, and the local filtration efficiency is expressed. The article describes further progress in the measurement: sensitivity analysis, parameterization, and performance of the method were examined in several simulations and experiments.
Ferentinos, Konstantinos P
2005-09-01
Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. Except for the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
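The "weak specification" encoding described above can be sketched as a gene that stores topology and training choices rather than network weights. The field names, value ranges, activation list, and optimizer list below are illustrative assumptions, not the paper's actual search space.

```python
import random

# Sketch of an indirect GA encoding for NN design: each gene records
# the hidden-layer widths, activation functions, and the minimization
# algorithm used by backpropagation -- not the weights themselves.

ACTIVATIONS = ["logistic", "tanh"]
OPTIMIZERS = ["gradient_descent", "conjugate_gradient", "levenberg_marquardt"]

def random_gene(max_layers=2, max_nodes=20):
    n_layers = random.randint(1, max_layers)          # one or two hidden layers
    return {
        "hidden": [random.randint(2, max_nodes) for _ in range(n_layers)],
        "hidden_act": random.choice(ACTIVATIONS),     # hidden-node activation
        "output_act": random.choice(ACTIVATIONS),     # output-node activation
        "optimizer": random.choice(OPTIMIZERS),       # backprop minimization alg.
    }

def mutate(gene, rate=0.2):
    g = dict(gene, hidden=list(gene["hidden"]))
    if random.random() < rate:
        layer = random.randrange(len(g["hidden"]))
        g["hidden"][layer] = random.randint(2, 20)    # perturb one layer's width
    if random.random() < rate:
        g["optimizer"] = random.choice(OPTIMIZERS)
    return g

random.seed(0)
g = random_gene()
print(g)
print(mutate(g))
```

A fitness function (e.g., validation error of the trained network) and crossover would complete the GA loop; constraining `max_layers` and `max_nodes` is where the a priori knowledge mentioned in the abstract enters.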
Jie, M.; Zhang, J.; Guo, B. B.
2017-12-01
As a typical distributed hydrological model, the SWAT model also poses a challenge in calibrating parameters and analyzing their uncertainty. This paper chooses the Chaohe River Basin, China, as the study area. After establishing the SWAT model and loading the DEM data of the Chaohe River Basin, the watershed is automatically divided into several sub-basins. Land use, soil, and slope are analyzed on the basis of the sub-basins, and the hydrological response units (HRUs) of the study area are calculated; after running the SWAT model, simulated runoff values for the watershed are obtained. On this basis, using weather data and the known daily runoff of three hydrological stations, the SWAT-CUP automatic program combined with a manual adjustment method is used to perform multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. The sensitivity analysis, calibration, and uncertainty study of SWAT indicate that the parameterization of the hydrological characteristics of the Chaohe River is successful and feasible, and the model can be used to simulate the Chaohe River Basin.
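The GLUE analysis mentioned above can be sketched in a few lines: sample parameter sets from a prior range, score each set with a likelihood measure, keep the "behavioral" sets above a threshold, and report quantiles. The stand-in model, threshold, and data below are invented for illustration; SWAT itself is far more complex.

```python
import random

# Toy GLUE (Generalized Likelihood Uncertainty Estimation) sketch.
# The "model" is a placeholder for a runoff simulation, and the
# Nash-Sutcliffe efficiency serves as the likelihood measure.

def toy_model(param, t):
    return param * t                      # stand-in for a runoff simulation

def nash_sutcliffe(sim, obs):
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

random.seed(1)
times = [1, 2, 3, 4, 5]
obs = [2.1, 3.9, 6.2, 8.0, 9.8]          # synthetic "observed" runoff

behavioral = []
for _ in range(2000):
    p = random.uniform(0.5, 4.0)          # draw from the prior range
    sim = [toy_model(p, t) for t in times]
    if nash_sutcliffe(sim, obs) > 0.9:    # behavioral threshold
        behavioral.append(p)

behavioral.sort()
lo = behavioral[int(0.05 * len(behavioral))]
hi = behavioral[int(0.95 * len(behavioral))]
print(f"{len(behavioral)} behavioral sets, 90% band: [{lo:.2f}, {hi:.2f}]")
```

In practice the quantiles are taken over the simulated outputs of the behavioral sets rather than over the parameters, but the accept/reject structure is the same.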
Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.
Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen
2014-01-01
Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used, with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines (SVMs). The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
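As a minimal illustration of the weak classifiers used in the first algorithm, here is a batch AdaBoost with decision stumps; the paper's version is online, and the distributed PSO/SVM combination is not shown. The one-feature "connection records" are synthetic.

```python
import math

# Simplified (batch) AdaBoost with decision stumps as weak classifiers.
# Labels are +1 (attack) / -1 (normal).

def stump_predict(feature_idx, threshold, sign, x):
    return sign if x[feature_idx] > threshold else -sign

def train_adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                          # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(len(X[0])):             # exhaustively pick the best stump
            for thr in sorted({x[f] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict(f, thr, sign, x) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)    # stump weight
        ensemble.append((alpha, f, thr, sign))
        # re-weight samples: boost the misclassified ones
        w = [wi * math.exp(-alpha * y[i] * stump_predict(f, thr, sign, X[i]))
             for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(f, t, s, x) for a, f, t, s in ensemble)
    return 1 if score > 0 else -1

# Toy one-feature data: attacks have large feature values.
X = [[0.1], [0.2], [0.3], [2.0], [2.5], [3.0]]
y = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(X, y, rounds=5)
print([predict(model, x) for x in X])  # recovers the labels
```

The online variant replaces the exhaustive stump search and batch re-weighting with incremental updates per arriving sample, which is what gives the adaptability to changing network environments.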
Lightweight hull surface self-design vertical parameterization method based on NURBS
Directory of Open Access Journals (Sweden)
ZHANG Yanru
2017-10-01
Full Text Available [Objectives] At present, conventional design is limited to the parent ship design space and cannot drive ship hull design with as few parameters as possible. In order to solve the above problems, [Methods] by combining the draught function with NURBS, a ship hull surface self-design method based on vertical parameterization is proposed. In this method, the waterline is designated as the basic design unit; the bottom flat end line, designed waterline, stem and stern contours, side flat end line and maximum section line are designated as the characteristic constraints of the ship hull; and the draught function values corresponding to the characteristic parameters are designated as the design objectives. In this way, a waterline approximation model is built, and an evolutionary algorithm can be used to solve the approximation model. Finally, the ship hull surface is generated on the basis of the waterlines using the NURBS skinning technique. [Results] The design examples of the characteristic curves of the full-scale ship hull surface demonstrate that this method is practical and advanced. [Conclusions] The hull surface can be designed with as little data as possible using this method, making it much more suitable for the self-design of new ship forms.
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3
Belochitski, A.; Donner, L.
2014-12-01
A double-moment cloud microphysical scheme, originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011), has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates the processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
49 CFR 173.23 - Previously authorized packaging.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Previously authorized packaging. 173.23 Section... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.23 Previously authorized packaging. (a) When the regulations specify a packaging with a specification marking...
28 CFR 10.5 - Incorporation of papers previously filed.
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Incorporation of papers previously filed... CARRYING ON ACTIVITIES WITHIN THE UNITED STATES Registration Statement § 10.5 Incorporation of papers previously filed. Papers and documents already filed with the Attorney General pursuant to the said act and...
75 FR 76056 - FEDERAL REGISTER CITATION OF PREVIOUS ANNOUNCEMENT:
2010-12-07
... SECURITIES AND EXCHANGE COMMISSION Sunshine Act Meeting FEDERAL REGISTER CITATION OF PREVIOUS ANNOUNCEMENT: STATUS: Closed meeting. PLACE: 100 F Street, NE., Washington, DC. DATE AND TIME OF PREVIOUSLY ANNOUNCED MEETING: Thursday, December 9, 2010 at 2 p.m. CHANGE IN THE MEETING: Time change. The closed...
No discrimination against previous mates in a sexually cannibalistic spider
Fromhage, Lutz; Schneider, Jutta M.
2005-09-01
In several animal species, females discriminate against previous mates in subsequent mating decisions, increasing the potential for multiple paternity. In spiders, female choice may take the form of selective sexual cannibalism, which has been shown to bias paternity in favor of particular males. If cannibalistic attacks function to restrict a male's paternity, females may have little interest in remating with males that have survived such an attack. We therefore studied the possibility of female discrimination against previous mates in the sexually cannibalistic Argiope bruennichi, where females almost always attack their mate at the onset of copulation. We compared the mating latency and copulation duration of males that had experienced a previous copulation either with the same or with a different female, but found no evidence for discrimination against previous mates. However, males copulated for a significantly shorter time when inserting into a used, compared to a previously unused, genital pore of the female.
Implant breast reconstruction after salvage mastectomy in previously irradiated patients.
Persichetti, Paolo; Cagli, Barbara; Simone, Pierfranco; Cogliandro, Annalisa; Fortunato, Lucio; Altomare, Vittorio; Trodella, Lucio
2009-04-01
The most common surgical approach in case of local tumor recurrence after quadrantectomy and radiotherapy is salvage mastectomy. Breast reconstruction is the subsequent phase of the treatment and the plastic surgeon has to operate on previously irradiated and manipulated tissues. The medical literature highlights that breast reconstruction with tissue expanders is not a pursuable option, considering previous radiotherapy a contraindication. The purpose of this retrospective study is to evaluate the influence of previous radiotherapy on 2-stage breast reconstruction (tissue expander/implant). Only patients with analogous timing of radiation therapy and the same demolitive and reconstructive procedures were recruited. The results of this study prove that, after salvage mastectomy in previously irradiated patients, implant reconstruction is still possible. Further comparative studies are, of course, advisable to draw any conclusion on the possibility to perform implant reconstruction in previously irradiated patients.
Energy Technology Data Exchange (ETDEWEB)
Gharaei, R.; Hadikhani, A. [Hakim Sabzevari University, Department of Physics, Sciences Faculty, Sabzevar (Iran, Islamic Republic of)
2017-07-15
For the first time, the influence of the surface energy coefficient γ and temperature T on the parameterization of fusion barriers is systematically analyzed within the framework of the proximity formalism, namely the proximity 1977, proximity 1988 and proximity 2010 models. A total of 114 fusion reactions with the condition 39 ≤ Z{sub 1}Z{sub 2} ≤ 1520 for the charge product of their participant nuclei have been studied. We present γ-dependent and T-dependent pocket formulas which reproduce the theoretical and empirical data of the fusion barrier height and position for our considered reactions with good accuracy. It is shown that the quality of the γ-dependent formula improves with increasing strength of the surface energy coefficient. Moreover, the obtained results confirm that imposing thermal effects improves the agreement between the parameterized and empirical data of the barrier characteristics. (orig.)
Liu, Yuefeng; Duan, Zhuoyi; Chen, Song
2017-10-01
Aerodynamic shape optimization design aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used for parameterizing the wing shape and the RBF interpolation approach used for updating the wing-body junction. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration has been carried out as a study case, and the result shows that the proposed approach is effective.
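The role RBF interpolation plays in handling the wing-body junction can be sketched in one dimension: displacements prescribed at a few control points are smoothly propagated to nearby mesh nodes. The Gaussian kernel, shape parameter, and point sets below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of RBF-based mesh deformation: fit RBF coefficients to the
# control-point displacements, then evaluate the interpolant at the
# query (junction) nodes.

def rbf_deform(control_pts, control_disp, query_pts, eps=1.0):
    """Gaussian-RBF interpolation of displacements (1-D coordinates)."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    A = phi(np.abs(control_pts[:, None] - control_pts[None, :]))
    weights = np.linalg.solve(A, control_disp)   # fit RBF coefficients
    B = phi(np.abs(query_pts[:, None] - control_pts[None, :]))
    return B @ weights

ctrl = np.array([0.0, 1.0, 2.0])
disp = np.array([0.0, 0.5, 0.0])     # bump the middle control point
query = np.array([0.5, 1.0, 1.5])
print(rbf_deform(ctrl, disp, query)) # exact at the control point, smooth between
```

The same interpolant generalizes directly to 3-D node coordinates, which is what makes RBF deformation attractive for junction regions where an FFD lattice is awkward to fit.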
Energy Technology Data Exchange (ETDEWEB)
Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying
2015-01-01
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization, which has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.
2012-04-01
Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and to highlight the advantages and disadvantages of each technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Sensitivity simulations with and without nudging towards ECMWF reanalysis data have also shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations are given and ideas for future improvements are proposed.
Zepka, G. D.; Pinto, O.
2010-12-01
The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best matches lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the time of lightning occurrence, a comparison was made between the WRF grid-point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as different convective schemes did. The Betts-Miller-Janjic parameterization generally has the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This fact can be attributed to the similar main assumptions used by these schemes, which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Differently from traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the most reliable convective scheme was Kain-Fritsch, since it provided a distribution of the CAPE fields more consistent with satellite images and lightning data.
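The histogram diagnostic described above, the per-bin ratio of lightning-associated grid points to all grid points for a variable such as CAPE, can be sketched as follows; the values stand in for WRF output and lightning locations.

```python
# Per-bin ratio of grid points with nearby lightning to all grid points,
# for one diagnostic variable (here CAPE, in J/kg). Synthetic data.

def ratio_histogram(values, has_lightning, bin_edges):
    ratios = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = [(v, l) for v, l in zip(values, has_lightning) if lo <= v < hi]
        n_all = len(in_bin)
        n_flash = sum(1 for _, l in in_bin if l)
        ratios.append(n_flash / n_all if n_all else 0.0)
    return ratios

cape = [100, 300, 800, 1200, 1500, 2200, 2600, 3000]   # per grid point
flash = [False, False, False, True, False, True, True, True]
print(ratio_histogram(cape, flash, [0, 1000, 2000, 4000]))  # [0.0, 0.5, 1.0]
```

A scheme with good skill produces ratios that rise with the magnitude of the convective variable, which is the behavior the study uses to rank the parameterizations.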
Reyna, D.
2006-01-01
The designs of many neutrino experiments rely on calculations of the background rates arising from cosmic-ray muons at shallow depths. Understanding the angular dependence of low-momentum cosmic-ray muons at the surface is necessary for these calculations. Heuristically, from examination of the data, a simple parameterization based on a straightforward scaling variable is proposed. This, in turn, allows a universal calculation of the differential muon intensity at the surface for all zenith angles.
Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew
2014-01-01
The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of September simulations. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations pair WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection. Other parameterizations have less obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.
Savre, J.; Ekman, A. M. L.
2015-05-01
A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
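The Monte Carlo integration over a contact-angle PDF can be sketched as below. The freezing-rate expression is a toy stand-in (only the classical-nucleation-theory compatibility factor f(θ) = (2 + cos θ)(1 − cos θ)²/4 is standard); the PDF parameters and prefactor are invented, and the paper's quasi-Monte Carlo method and time-evolving PDF are not reproduced.

```python
import math
import random

# Sketch: average a contact-angle-dependent freezing probability over a
# theta PDF by plain Monte Carlo sampling.

def form_factor(theta):
    c = math.cos(theta)
    return (2 + c) * (1 - c) ** 2 / 4.0   # CNT barrier-reduction factor

def frozen_fraction(theta, j0=1e3, dt=1.0):
    # Toy rate: a prefactor damped by the (scaled) nucleation barrier.
    rate = j0 * math.exp(-40.0 * form_factor(theta))
    return 1.0 - math.exp(-rate * dt)

def mc_average(pdf_sampler, n=10000):
    random.seed(0)
    return sum(frozen_fraction(pdf_sampler()) for _ in range(n)) / n

# Assumed normal theta PDF (mean 70 deg, sd 10 deg), clipped to [0, pi]:
sample = lambda: min(max(random.gauss(math.radians(70),
                                      math.radians(10)), 0.0), math.pi)
print(f"ensemble frozen fraction: {mc_average(sample):.3f}")
```

Because low-θ particles freeze first, removing them after nucleation shifts the remaining PDF toward larger θ, which is why a time-independent PDF overestimates the rates, as the abstract stresses.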
Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.
2017-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty by representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve biases in the mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Niño-Southern Oscillation in CAM4, improving the power spectrum as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO variability.
Luong, Thang
2018-01-22
A commonly noted problem in the simulation of warm-season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain-Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations for the updraft velocity, the convective available potential energy (CAPE) closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative for improving the representation of organized convection, as compared to convection-permitting simulations at the kilometer scale or a super-parameterization approach.
International Nuclear Information System (INIS)
González Robaina, Felicita; López Seijas, Teresa
2008-01-01
Modeling the processes involved in the movement of water and solutes in soil generally requires the general water-flow equation for unsaturated conditions, i.e., the Darcy-Buckingham approach. In this approach the hydraulic conductivity as a function of soil moisture, K(θ), is a fundamental soil property that must be determined for each field condition. Several methods reported in the literature for determining the hydraulic conductivity rely on simplifying assumptions, such as a unit hydraulic gradient or a fixed K(θ) relationship. In recent years, many studies have sought simple, rapid, and inexpensive methods to measure this relationship in the field. One of these is the parameterized equation proposed by Reichardt, which uses the parameters of the equations describing the internal drainage process and exploits the exponential nature of the K(θ) relationship. The objective of this work is to estimate K(θ) with the parameterized-equation method, using the results of an internal drainage test on a Ferralsol in an area south of Havana. The results show that the parameterized equation provides estimates of K(θ) similar to those of methods that assume unit-gradient conditions
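The exponential K(θ) relationship at the core of this approach can be illustrated with a small fit. The drainage numbers below are invented for illustration, not the Ferralsol data: because K(θ) = K0·exp(γ(θ - θ0)) is linear in log space, ordinary least squares recovers the parameters.

```python
import numpy as np

# Hypothetical internal-drainage observations: water content theta (cm3/cm3)
# and hydraulic conductivity K (mm/day) at matching times.
theta = np.array([0.44, 0.42, 0.41, 0.40, 0.39, 0.38])
K_obs = np.array([120.0, 35.0, 18.0, 9.5, 5.0, 2.6])

# Exponential model K(theta) = K0 * exp(gamma * (theta - theta0));
# fit as a straight line in log space.
theta0 = theta[-1]
gamma, lnK0 = np.polyfit(theta - theta0, np.log(K_obs), 1)
K0 = np.exp(lnK0)

def K(th):
    return K0 * np.exp(gamma * (th - theta0))
```

With field data, θ(t) during internal drainage supplies the pairs above, and the fitted γ and K0 summarize the soil's unsaturated conductivity for later use in flow calculations.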
Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C
2017-01-01
Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.
Energy Technology Data Exchange (ETDEWEB)
Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick
2016-01-26
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
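The role of a parameterized fatigue spectrum can be illustrated with the standard damage-equivalent-load (DEL) calculation under Miner's rule; the bin amplitudes, cycle counts, S-N slope, and reference cycle count below are placeholders for illustration, not DriveSE's values.

```python
import numpy as np

# Hypothetical binned lifetime load spectrum for a main shaft (kNm amplitudes).
amplitudes = np.array([200.0, 400.0, 800.0, 1600.0])
counts = np.array([5e8, 1e8, 1e7, 1e5])   # cycles per bin over the design life

m = 4.0      # S-N curve slope (illustrative; material dependent)
n_eq = 1e7   # reference number of equivalent cycles

# Damage-equivalent load: the constant-amplitude load that, applied n_eq
# times, causes the same Miner's-rule damage as the full spectrum.
DEL = (np.sum(counts * amplitudes**m) / n_eq) ** (1.0/m)
```

Because the amplitude is raised to the S-N exponent, the rare high-amplitude bins dominate the damage sum, which is why a parameterized spectrum must capture the tail of the load distribution, not just its mean.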
Salimun, Ester; Tangang, Fredolin; Juneng, Liew
2010-06-01
A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth-Generation Mesoscale Model (MM5). The event is a commonly occurring westward-propagating tropical depression weather system during boreal winter, resulting from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in simulation performance can be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better at resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM simulation was of the convective type. The failure of the other schemes (AK, GR and KF) to simulate the event may be attributed to the trigger function, closure assumption, and precipitation scheme. On the other hand, the appropriateness of the BM scheme for this episode may not generalize to other episodes or convective environments.
Directory of Open Access Journals (Sweden)
Mikkel Brydegaard
In recent years, the field of remote sensing of birds and insects in the atmosphere (the aerial fauna) has advanced considerably, and modern electro-optic methods now allow the assessment of the abundance and fluxes of pests and beneficials on a landscape scale. These techniques have the potential to significantly increase our understanding of, and ability to quantify and manage, the ecological environment. This paper presents a concept whereby laser radar observations of atmospheric fauna can be parameterized and table values for absolute cross sections can be catalogued to allow for the study of focal species such as disease vectors and pests. Wing-beat oscillations are parameterized with a discrete set of harmonics and the spherical scatter function is parameterized by a reduced set of symmetrical spherical harmonics. A first-order spherical model for insect scatter is presented and supported experimentally, showing angular dependence of wing-beat harmonic content. The presented method promises to give insights into the flight heading directions of species in the atmosphere and has the potential to shed light onto the km-range spread of pests and disease vectors.
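Parameterizing a wing-beat oscillation with a discrete set of harmonics amounts to reading amplitudes off a one-sided Fourier spectrum. The sketch below does this for a synthetic backscatter signal; the sample rate, fundamental frequency, and harmonic amplitudes are hypothetical, chosen to fall exactly on FFT bins.

```python
import numpy as np

fs, T = 2000.0, 1.0   # sample rate (Hz) and record length (s); hypothetical
t = np.arange(0, T, 1/fs)
f0 = 120.0            # wing-beat fundamental (Hz), a typical insect range
# Synthetic lidar backscatter: DC body term plus three wing-beat harmonics.
signal = (1.0 + 0.50*np.cos(2*np.pi*f0*t)
              + 0.25*np.cos(2*np.pi*2*f0*t)
              + 0.10*np.cos(2*np.pi*3*f0*t))

# One-sided amplitude spectrum: |rfft|/N * 2 recovers cosine amplitudes.
spec = np.abs(np.fft.rfft(signal)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1/fs)
peak = freqs[1:][np.argmax(spec[1:])]   # fundamental; skip the DC bin
harmonics = [spec[np.argmin(np.abs(freqs - k*f0))] for k in (1, 2, 3)]
```

The recovered triple (here 0.50, 0.25, 0.10) is the kind of compact harmonic signature that could be catalogued per species; real records would additionally need windowing and peak interpolation, since wing beats rarely sit on exact bins.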
International Nuclear Information System (INIS)
Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.
1997-01-01
Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are, e.g., the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and parameterizations of sub-grid scale phenomena (e.g., parameterizations of the 2nd- and higher-order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)
Loughner, Christopher P.; Allen, Dale J.; Zhang, Da-Lin; Pickering, Kenneth E.; Dickerson, Russell R.; Landry, Laura
2012-01-01
Urban heat island (UHI) effects can strengthen heat waves and air pollution episodes. In this study, the dampening impact of urban trees on the UHI during an extreme heat wave in the Washington, D.C., and Baltimore, Maryland, metropolitan area is examined by incorporating trees, soil, and grass into the coupled Weather Research and Forecasting model and an urban canopy model (WRF-UCM). By parameterizing the effects of these natural surfaces alongside roadways and buildings, the modified WRF-UCM is used to investigate how urban trees, soil, and grass dampen the UHI. The modified model was run with 50% tree cover over urban roads and a 10% decrease in the width of urban streets to make space for soil and grass alongside the roads and buildings. Results show that, averaged over all urban areas, the added vegetation decreases surface air temperature in urban street canyons by 4.1 K and road-surface and building-wall temperatures by 15.4 and 8.9 K, respectively, as a result of tree shading and evapotranspiration. These temperature changes propagate downwind and alter the temperature gradient associated with the Chesapeake Bay breeze and, therefore, alter the strength of the bay breeze. The impact of building height on the UHI shows that decreasing commercial building heights by 8 m and residential building heights by 2.5 m results in up to 0.4-K higher daytime surface and near-surface air temperatures because of less building shading and up to 1.2-K lower nighttime temperatures because of less longwave radiative trapping in urban street canyons.
Energy Technology Data Exchange (ETDEWEB)
Koenigk, Torben; Caian, Mihaela; Doescher, Ralf; Wyser, Klaus [Swedish Meteorological and Hydrological Institute, Rossby Centre, Norrkoeping (Sweden); Koenig Beatty, Christof [Universite Catholique de Louvain, Louvain-la-Neuve (Belgium)
2012-06-15
Decadal prediction is one focus of the upcoming 5th IPCC Assessment report. To be able to interpret the results and to further improve the decadal predictions it is important to investigate the potential predictability in the participating climate models. This study analyzes the upper limit of climate predictability on decadal time scales and its dependency on sea ice albedo parameterization by performing two perfect ensemble experiments with the global coupled climate model EC-Earth. In the first experiment, the standard albedo formulation of EC-Earth is used, in the second experiment sea ice albedo is reduced. The potential prognostic predictability is analyzed for a set of oceanic and atmospheric parameters. The decadal predictability of the atmospheric circulation is small. The highest potential predictability was found in air temperature at 2 m height over the northern North Atlantic and the southern South Atlantic. Over land, only a few areas are significantly predictable. The predictability for continental size averages of air temperature is relatively good in all northern hemisphere regions. Sea ice thickness is highly predictable along the ice edges in the North Atlantic Arctic Sector. The meridional overturning circulation is highly predictable in both experiments and governs most of the decadal climate predictability in the northern hemisphere. The experiments using reduced sea ice albedo show some important differences like a generally higher predictability of atmospheric variables in the Arctic or higher predictability of air temperature in Europe. Furthermore, decadal variations are substantially smaller in the simulations with reduced ice albedo, which can be explained by reduced sea ice thickness in these simulations. (orig.)
Global direct radiative forcing by process-parameterized aerosol optical properties
Kirkevåg, Alf; Iversen, Trond
2002-10-01
A parameterization of aerosol optical parameters is developed and implemented in an extended version of the Community Climate Model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, an assumed distribution of cloud-droplet residuals from aqueous-phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m^-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated to -0.29 and 0.19 W m^-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.
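The table-interpolation step such a scheme relies on can be sketched with a small bilinear lookup. The axes and the tabulated single-scattering-albedo values below are toy numbers, not the CCM3 tables, which span more dimensions and 13 spectral bands.

```python
import numpy as np

# Hypothetical lookup axes: relative humidity and BC mass fraction.
rh_axis = np.array([0.0, 0.5, 0.8, 0.95])
bc_axis = np.array([0.0, 0.1, 0.3])
# Tabulated single-scattering albedo (toy values).
ssa_tab = np.array([[0.99, 0.90, 0.75],
                    [0.99, 0.92, 0.80],
                    [1.00, 0.95, 0.85],
                    [1.00, 0.97, 0.90]])

def ssa_lookup(rh, bc):
    """Bilinear interpolation between tabulated optical properties."""
    i = np.clip(np.searchsorted(rh_axis, rh) - 1, 0, len(rh_axis) - 2)
    j = np.clip(np.searchsorted(bc_axis, bc) - 1, 0, len(bc_axis) - 2)
    u = (rh - rh_axis[i]) / (rh_axis[i+1] - rh_axis[i])
    v = (bc - bc_axis[j]) / (bc_axis[j+1] - bc_axis[j])
    return ((1-u)*(1-v)*ssa_tab[i, j]   + u*(1-v)*ssa_tab[i+1, j]
          + (1-u)*v*ssa_tab[i, j+1]     + u*v*ssa_tab[i+1, j+1])
```

Precomputing full Mie calculations offline and interpolating online in this way is what makes the optical-property evaluation cheap enough to run inside a climate model's radiation loop.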
Directory of Open Access Journals (Sweden)
Y. Li
2016-03-01
The formation and aging of organic aerosols (OA) proceed through multiple steps of chemical reaction and mass transport in the gas and particle phases, which is challenging for the interpretation of field measurements and laboratory experiments as well as accurate representation of OA evolution in atmospheric aerosol models. Based on data from over 30 000 compounds, we show that organic compounds with a wide variety of functional groups fall into molecular corridors, characterized by a tight inverse correlation between molar mass and volatility. We developed parameterizations to predict the saturation mass concentration of organic compounds containing oxygen, nitrogen, and sulfur from the elemental composition that can be measured by soft-ionization high-resolution mass spectrometry. Field measurement data from new particle formation events, biomass burning, cloud/fog processing, and indoor environments were mapped into molecular corridors to characterize the chemical nature of the observed OA components. We found that less-oxidized indoor OA are constrained to a corridor of low molar mass and high volatility, whereas highly oxygenated compounds in atmospheric water extend to high molar mass and low volatility. Among the nitrogen- and sulfur-containing compounds identified in atmospheric aerosols, amines tend to exhibit low molar mass and high volatility, whereas organonitrates and organosulfates follow high O : C corridors extending to high molar mass and low volatility. We suggest that the consideration of molar mass and molecular corridors can help to constrain volatility and particle-phase state in the modeling of OA, particularly for nitrogen- and sulfur-containing compounds.
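A composition-to-volatility parameterization of the general kind described here can be sketched as a function of atom counts. The functional form below is only inspired by the corridor idea, and every coefficient is an invented placeholder, not the paper's fitted values; it serves to show how such a predictor is wired, not to reproduce it.

```python
def log10_c0(nC, nO, nN=0, nS=0,
             n0=25.0, bC=0.5, bO=1.7, bCO=-0.8, bN=1.1, bS=1.2):
    """Toy estimate of log10 saturation mass concentration (ug/m3) from
    elemental composition. All coefficients are placeholders for
    illustration; a real parameterization fits them to compound data."""
    # Carbon-oxygen cross term uses the harmonic-mean-style count 2*nC*nO/(nC+nO).
    co = 2.0*nC*nO/(nC + nO) if (nC + nO) else 0.0
    return (n0 - nC)*bC - nO*bO - co*bCO - nN*bN - nS*bS
```

Even with placeholder coefficients the qualitative corridor behavior holds: adding oxygen or carbon atoms lowers the predicted volatility, which is the inverse molar-mass/volatility correlation the abstract describes.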
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial
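The proposal to linearize each time step's combined water-energy allocation and solve it with fast linear programming can be sketched on a single toy step. The reservoir volume, demand, benefit weights, and energy rate below are hypothetical; a real HRES model would pose this as a network-flow problem over many nodes and steps.

```python
from scipy.optimize import linprog

# One time step: choose releases x = [to_demand, to_turbine] (hm3).
demand, storage = 8.0, 10.0    # hm3 available this step (hypothetical)
energy_rate = 0.5              # GWh per hm3 through the turbine (hypothetical)

# Maximize weighted benefits => minimize their negatives.
c = [-1.0, -0.3]               # water supply valued above energy (toy weights)
A_ub = [[1.0, 1.0]]            # total release cannot exceed available storage
b_ub = [storage]
bounds = [(0.0, demand), (0.0, None)]  # cannot serve more than the demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
to_demand, to_turbine = res.x
energy_gwh = energy_rate * to_turbine
```

The solver serves the full demand (8 hm3) and routes the remainder (2 hm3) through the turbine; solving one such small LP per simulation step is what keeps long synthetic-series runs computationally affordable.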
Awatey, M. T.; Irving, J.; Oware, E. K.
2016-12-01
Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iteration progresses starting from the coefficients corresponding to the highest ranked basis to those of the least informative basis. We found this gradual increment in the sampling window to be more stable compared to resampling all the coefficients right from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the
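The POD parameterization step described here, building a reduced basis from training images and projecting a model into it, can be sketched in 1-D. The training images below are synthetic Gaussian "plumes" standing in for Monte Carlo realizations of the target process; all shapes and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 training images (TIs): 1-D plumes with random center and width.
x = np.linspace(0, 1, 100)
tis = np.stack([np.exp(-0.5*((x - c)/w)**2)
                for c, w in zip(rng.uniform(0.3, 0.7, 200),
                                rng.uniform(0.05, 0.15, 200))])

# POD basis via SVD of the mean-removed TI matrix.
mean = tis.mean(axis=0)
U, s, Vt = np.linalg.svd(tis - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% of the variance
basis = Vt[:k]                               # POD basis vectors (k x 100)

# Project a new realization into the reduced space and reconstruct it.
m = np.exp(-0.5*((x - 0.5)/0.1)**2)
coeffs = basis @ (m - mean)
m_rec = mean + basis.T @ coeffs
```

The McMC chain then perturbs the k coefficients rather than the 100 grid values, which is the dimensionality reduction that makes the sampling tractable; the gradual widening of the sampling window corresponds to unlocking coefficients from the top-ranked basis vectors downward.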
Drag of Clean and Fouled Net Panels--Measurements and Parameterization of Fouling.
Directory of Open Access Journals (Sweden)
Lars Christian Gansel
Biofouling is a serious problem in marine aquaculture and it has a number of negative impacts, including increased forces on aquaculture structures and reduced water exchange across nets. This in turn affects the behavior of fish cages in waves and currents and has an impact on the water volume and quality inside net pens. Even though these negative effects are acknowledged by the research community and governmental institutions, there is limited knowledge about fouling-related effects on the flow past nets, and more detailed investigations distinguishing between different fouling types have been called for. This study evaluates the effect of hydroids, an important fouling organism in Norwegian aquaculture, on the forces acting on net panels. Drag forces on clean and fouled nets were measured in a flume tank, and net solidity including the effect of fouling was determined using image analysis. The relationship between net solidity and drag was assessed, and it was found that a solidity increase due to hydroids caused less additional drag than a similar increase caused by a change in clean-net parameters. For the solidities tested in this study, the difference in drag force increase could be as high as 43% between fouled and clean nets with the same solidity. The relationship between solidity and drag force is well described by exponential functions for clean as well as for fouled nets. A method is proposed to parameterize the effect of fouling in terms of an increase in net solidity. This allows existing numerical methods developed for clean nets to be used to model the effects of biofouling on nets. Measurements with other types of fouling can be added to build a database on the effects of the accumulation of different fouling organisms on aquaculture nets.
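Fitting separate exponential drag-solidity curves for clean and fouled nets, as the abstract describes, can be sketched as follows. The drag and solidity numbers below are illustrative stand-ins, not the flume-tank data.

```python
import numpy as np

# Illustrative drag coefficient vs. net solidity Sn (not the measured data).
sn_clean  = np.array([0.15, 0.20, 0.25, 0.32])
cd_clean  = np.array([0.18, 0.24, 0.33, 0.50])
sn_fouled = np.array([0.25, 0.32, 0.40, 0.50])
cd_fouled = np.array([0.28, 0.36, 0.46, 0.62])

def fit_exponential(sn, cd):
    """Fit Cd = a * exp(b * Sn) by linear least squares in log space."""
    b, ln_a = np.polyfit(sn, np.log(cd), 1)
    return np.exp(ln_a), b

a_c, b_c = fit_exponential(sn_clean, cd_clean)
a_f, b_f = fit_exponential(sn_fouled, cd_fouled)
```

Evaluating both fitted curves at the same solidity quantifies the paper's key observation: a solidity increase caused by hydroids adds less drag than the same solidity on a clean net, so the fouling effect can be parameterized as an equivalent clean-net solidity increase.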
International Nuclear Information System (INIS)
Ayala, Orlando; Rosa, Bogdan; Wang Lianping
2008-01-01
The effect of air turbulence on the geometric collision kernel of cloud droplets can be predicted if the effects of air turbulence on two kinematic pair statistics can be modeled. The first is the average radial relative velocity and the second is the radial distribution function (RDF). A survey of the literature shows that no theory is available for predicting the radial relative velocity of finite-inertia sedimenting droplets in a turbulent flow. In this paper, a theory for the radial relative velocity is developed, using a statistical approach assuming that gravitational sedimentation dominates the relative motion of droplets before collision. In the weak-inertia limit, the theory reveals a new term making a positive contribution to the radial relative velocity, resulting from a coupling between sedimentation and air turbulence acting on the motion of finite-inertia droplets. The theory is compared to the direct numerical simulation (DNS) results in Part 1, showing a reasonable agreement with the DNS data for bidisperse cloud droplets. For droplets larger than 30 μm in radius, nonlinear drag (NLD) can also be included in the theory in terms of an effective inertial response time and an effective terminal velocity. In addition, an empirical model is developed to quantify the RDF. This, together with the theory for the radial relative velocity, provides a parameterization for the turbulent geometric collision kernel. Using this integrated model, we find that turbulence could triple the geometric collision kernel, relative to the stagnant-air case, for a droplet pair of 10 and 20 μm sedimenting through a cumulus cloud at R_λ = 2×10^4 and ε = 600 cm^2 s^-3. For the self-collisions of 20 μm droplets, the collision kernel depends sensitively on the flow dissipation rate.
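The kernel structure described here, Γ = 2πR²⟨|w_r|⟩g(R), can be sketched for the 10/20 μm pair. The sketch uses Stokes settling speeds and the sphere-averaged stagnant-air radial velocity ⟨|w_r|⟩ = |Δv_t|/2; the turbulent enhancement factors are illustrative stand-ins chosen to reproduce the roughly threefold increase the abstract reports, not outputs of the paper's theory.

```python
import numpy as np

def terminal_velocity(a):
    """Stokes settling speed (m/s) for droplet radius a (m); small-droplet regime."""
    rho_w, rho_air, nu_air, g = 1000.0, 1.2, 1.5e-5, 9.81
    return 2.0*rho_w*g*a**2 / (9.0*rho_air*nu_air)

def collision_kernel(a1, a2, wr=None, rdf=1.0):
    """Geometric collision kernel Gamma = 2*pi*R^2 * <|w_r|> * g(R), R = a1 + a2.
    Stagnant air: sphere-averaged <|w_r|> = |v_t1 - v_t2|/2 and g(R) = 1."""
    R = a1 + a2
    if wr is None:
        wr = 0.5*abs(terminal_velocity(a1) - terminal_velocity(a2))
    return 2.0*np.pi*R**2 * wr * rdf

a1, a2 = 10e-6, 20e-6                  # the 10 and 20 micron droplet pair
gamma_still = collision_kernel(a1, a2)
# Illustrative turbulent inputs: 50% higher radial relative velocity and an
# RDF of 2 together triple the kernel (stand-in values, not the theory's).
wr_turb = 1.5 * 0.5*abs(terminal_velocity(a1) - terminal_velocity(a2))
gamma_turb = collision_kernel(a1, a2, wr=wr_turb, rdf=2.0)
```

The point of the factorization is that a parameterization only needs to supply the two pair statistics, ⟨|w_r|⟩ and g(R), for the turbulent kernel to follow.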
Directory of Open Access Journals (Sweden)
S. Chauhan
2017-07-01
The prime aim of this study was to assess the potential of the semi-empirical water cloud model (WCM) in simulating hybrid-polarized SAR backscatter signatures (RH and RV) retrieved from RISAT-1 data and to integrate the results into a graphical user interface (GUI) to facilitate easy comprehension and interpretation. A predominantly agricultural wheat-growing area was selected in the Mathura and Bharatpur districts, located in the Indian states of Uttar Pradesh and Rajasthan respectively, to carry out the study. Three datasets were acquired covering the crucial growth stages of the wheat crop. In synchrony, fieldwork was organized to measure crop/soil parameters. The RH and RV backscattering coefficient images were extracted from the SAR data for all three dates. The effect of four combinations of vegetation descriptors (V1 and V2), viz. LAI-LAI, LAI-plant water content (PWC), leaf water area index (LWAI)-LWAI, and LAI-interaction factor (IF), on the total RH and RV backscatter was analyzed. The results revealed that the WCM calibrated with LAI and IF as the two vegetation descriptors simulated the total RH and RV backscatter values with the highest R² of 0.90 and 0.85, while the RMSE was lowest among the tested models (1.18 and 1.25 dB, respectively). The theoretical considerations and interpretations are discussed and examined in the paper. The novelty of this work emanates from the fact that it is a first step towards the modeling of hybrid-polarized backscatter data using an accurately parameterized semi-empirical approach.
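The water cloud model's standard structure, a canopy term plus a soil term attenuated by two-way canopy transmissivity, can be sketched as follows. The A and B constants and all inputs below are placeholder values; in the study they are fitted per polarization to the field measurements.

```python
import numpy as np

def wcm_backscatter(v1, v2, sigma_soil, theta_deg, A=0.12, B=0.30):
    """Water cloud model in linear units: sigma0 = sigma_veg + tau2 * sigma_soil.
    v1, v2 are the vegetation descriptors (e.g. LAI and an interaction factor);
    A and B are empirical constants (placeholder values here)."""
    ct = np.cos(np.radians(theta_deg))
    tau2 = np.exp(-2.0*B*v2/ct)        # two-way canopy transmissivity
    sigma_veg = A*v1*ct*(1.0 - tau2)   # direct canopy scattering
    return sigma_veg + tau2*sigma_soil
```

Calibration then amounts to choosing the descriptor pair (V1, V2) and fitting A and B so that the modeled backscatter tracks the observed RH and RV values, which is how the LAI/IF combination was selected in the study.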
3D surface parameterization using manifold learning for medial shape representation
Ward, Aaron D.; Hamarneh, Ghassan
2007-03-01
The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
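The flattening step in this pipeline, nonlinear dimensionality reduction of 3-D surface points to an intrinsic 2-D coordinate system, can be sketched with a minimal Isomap (k-nearest-neighbor geodesics plus classical MDS). The synthetic curved patch below stands in for a medial sheet; the neighbor count and grid size are arbitrary choices.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def isomap_2d(points, n_neighbors=8):
    """Minimal Isomap: kNN graph geodesic distances, then classical MDS to 2-D."""
    d = cdist(points, points)
    n = len(points)
    graph = np.full((n, n), np.inf)        # inf entries denote non-edges
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors+1]
    for i in range(n):
        graph[i, idx[i]] = d[i, idx[i]]
    graph = np.minimum(graph, graph.T)     # symmetrize the kNN graph
    geo = shortest_path(graph, method="D", directed=False)
    # Classical MDS on squared geodesic distances.
    J = np.eye(n) - np.ones((n, n))/n
    B = -0.5 * J @ (geo**2) @ J
    w, v = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:2]        # top two eigenpairs
    return v[:, order] * np.sqrt(np.maximum(w[order], 0))

# Synthetic "medial sheet": a gently curved 15x15 surface patch in 3-D.
u = np.linspace(0, 1, 15)
uu, vv = np.meshgrid(u, u)
pts = np.c_[uu.ravel(), vv.ravel(), 0.3*np.sin(np.pi*uu.ravel())]
coords2d = isomap_2d(pts)
```

Once each point set (skeleton, upper surface, lower surface) has such intrinsic 2-D coordinates, a planar mesh laid over them can be mapped back to 3-D, which is how the correspondence for the thickness vectors is established.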
Personality disorders in previously detained adolescent females: a prospective study
Krabbendam, A.; Colins, O.F.; Doreleijers, T.A.H.; van der Molen, E.; Beekman, A.T.F.; Vermeiren, R.R.J.M.
2015-01-01
This longitudinal study investigated the predictive value of trauma and mental health problems for the development of antisocial personality disorder (ASPD) and borderline personality disorder (BPD) in previously detained women. The participants were 229 detained adolescent females who were assessed
Payload specialist Reinhard Furrer shows evidence of previous blood sampling
1985-01-01
Payload specialist Reinhard Furrer shows evidence of previous blood sampling while Wubbo J. Ockels, Dutch payload specialist (only partially visible), extends his right arm after a sample has been taken. Both men show bruises on their arms.
Choice of contraception after previous operative delivery at a family ...
African Journals Online (AJOL)
Choice of contraception after previous operative delivery at a family planning clinic in Northern Nigeria. Amina Mohammed‑Durosinlorun, Joel Adze, Stephen Bature, Caleb Mohammed, Matthew Taingson, Amina Abubakar, Austin Ojabo, Lydia Airede ...
Previous utilization of service does not improve timely booking in ...
African Journals Online (AJOL)
Previous utilization of service does not improve timely booking in antenatal care: Cross sectional study ... Results: Past experience on antenatal care service utilization did not come out as a predictor for ...
A previous hamstring injury affects kicking mechanics in soccer players.
Navandar, Archit; Veiga, Santiago; Torres, Gonzalo; Chorro, David; Navarro, Enrique
2018-01-10
Although the kicking skill is influenced by limb dominance and sex, how a previous hamstring injury affects kicking has not been studied in detail. Thus, the objective of this study was to evaluate the effect of sex and limb dominance on kicking in limbs with and without a previous hamstring injury. 45 professional players (males: n=19, previously injured players=4, age=21.16 ± 2.00 years; females: n=19, previously injured players=10, age=22.15 ± 4.50 years) performed 5 kicks each with their preferred and non-preferred limb at a target 7 m away, which were recorded with a three-dimensional motion capture system. Kinematic and kinetic variables were extracted for the backswing, leg cocking, leg acceleration and follow through phases. A shorter backswing (20.20 ± 3.49% vs 25.64 ± 4.57%), and differences in knee flexion angle (58 ± 10° vs 72 ± 14°) and hip flexion velocity (8 ± 0 rad/s vs 10 ± 2 rad/s) were observed in previously injured, non-preferred limb kicks for females. A lower peak hip linear velocity (3.50 ± 0.84 m/s vs 4.10 ± 0.45 m/s) was observed in previously injured, preferred limb kicks of females. These differences occurred in the backswing and leg-cocking phases where the hamstring muscles were the most active. A variation in the functioning of the hamstring muscles and that of the gluteus maximus and iliopsoas in the case of a previous injury could account for the differences observed in the kicking pattern. Therefore, the effects of a previous hamstring injury must be considered while designing rehabilitation programs to re-educate kicking movement.
Bateson, Thomas F; Kopylev, Leonid
2015-01-01
Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
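The exposure metrics compared in analyses like the one above can be illustrated with a small sketch. This is not the study's code: the `cumulative_exposure` helper, the half-life value, and the toy exposure history are assumptions for illustration only.

```python
import math

# Hedged sketch of two exposure metrics of the kind named in the abstract:
# plain cumulative exposure, and cumulative exposure allowing exponential
# decay of past exposures. All numbers below are made up.

def cumulative_exposure(history, eval_year, half_life=None):
    """history: list of (year, intensity) pairs; returns the sum of
    intensities, optionally decayed by elapsed time since each exposure."""
    total = 0.0
    for year, intensity in history:
        if year > eval_year:
            continue  # exposures after the evaluation year do not count
        if half_life is None:
            total += intensity
        else:
            lam = math.log(2) / half_life  # decay rate per year
            total += intensity * math.exp(-lam * (eval_year - year))
    return total

history = [(1960, 5.0), (1965, 3.0), (1970, 2.0)]  # illustrative units
ce_plain = cumulative_exposure(history, 1980)                  # 10.0
ce_decay = cumulative_exposure(history, 1980, half_life=10.0)  # < 10.0
```

With decay, older exposures contribute less, which is why the two metrics can fit mortality data differently.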
Directory of Open Access Journals (Sweden)
F. Hourdin
2015-06-01
boundary layer by a mass flux scheme leads to a realistic representation of the diurnal cycle of wind in spring, with a maximum near-surface wind in the morning. This maximum occurs when the thermal plumes reach the low-level jet that forms during the night at a few hundred meters above surface. The horizontal momentum in the jet is transported downward to the surface by compensating subsidence around thermal plumes in typically less than 1 h. This leads to a rapid increase of wind speed at the surface, and therefore of dust emissions, owing to the strong nonlinearity of emission laws. The numerical experiments are performed with a zoomed and nudged configuration of the LMDZ general circulation model coupled to the emission module of the CHIMERE chemistry transport model, in which winds are relaxed toward those of the ERA-Interim reanalyses. The new set of parameterizations leads to a strong improvement of the representation of the diurnal cycle of wind when compared to a previous version of LMDZ as well as to the reanalyses used for nudging themselves. It also generates dust emissions in better agreement with current estimates, but the aerosol optical thickness is still significantly underestimated.
Montesarchio, Myriam; Rianna, Guido; Mercogliano, Paola; Castellari, Sergio; Schiano, Pasquale
2015-04-01
In Europe, about 80% of people live in urban areas, many of which can be particularly vulnerable to climate impacts (e.g. high air temperatures along with heat waves, flooding due to intense precipitation events, water scarcity and droughts). In fact, the density of people and assets within relatively small geographic areas, such as urban settlements, means more risk exposure than in rural areas. Therefore, reliable numerical climate models are needed for elaborating climate risk assessments at the urban scale. These models must take into account the effects of the complex three-dimensional structure of urban settlements, combined with the mixture of surface types with contrasting radiative, thermal and moisture characteristics. In this respect, previous studies (e.g. Trusilova et al., 2013) have already assessed the importance of considering urban properties in very high resolution regional climate modeling to better reproduce the features of urban climate, especially in terms of the urban heat island effect. In this work, two different configurations of the regional climate model COSMO-CLM at a horizontal resolution of 0.02° (about 2.2 km), one including an urban parameterization scheme and one without it, have been applied in order to perform two different climate simulations covering the entire northern Italy. In particular, the present study is focused on large urban settlements such as Milan and Turin. Due to the high computational cost required to run very high resolution simulations, the results of the two simulations have been compared over a period of ten years, from 1980 to 1989. Preliminary results indicate that the modification of climate conditions due to the presence of urban areas occurs mainly in the areas covered by big cities and those surrounding them; in other words, the presence of urban areas induces modifications mainly in the local climate. Another finding is that the simulation including the urban parameterization scheme shows, in general
On Computation of Generalized Derivatives of the Normal-Cone Mapping and Their Applications
Czech Academy of Sciences Publication Activity Database
Gfrerer, H.; Outrata, Jiří
2016-01-01
Roč. 41, č. 4 (2016), s. 1535-1556 ISSN 0364-765X R&D Projects: GA ČR GAP402/12/1309 Institutional support: RVO:67985556 Keywords : parameterized generalized equation * graphical derivative * regular coderivative * mathematical program with equilibrium constraints Subject RIV: BA - General Mathematics Impact factor: 1.157, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/outrata-0463357.pdf
Directory of Open Access Journals (Sweden)
J.-I. Yano
2012-11-01
Full Text Available A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking a more extensive overhaul of parameterizations.
The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy allows a generality of including any subgrid-scale variability into the mass-flux parameterization under SCA. Those include stratiform clouds as well as cold pools in the boundary layer.
An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. This formulation, as a result, ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in the current parameterizations based on vertical one-dimensional models, leading to a reduction of the grid-size dependencies in performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation furthermore includes a self-contained description of subgrid-scale momentum transport.
The main purpose of the present paper
Secondary recurrent miscarriage is associated with previous male birth.
LENUS (Irish Health Repository)
Ooi, Poh Veh
2012-01-31
Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.
DEFF Research Database (Denmark)
Fuchs, Sven; Balling, Niels
2016-01-01
The subsurface temperature field and the geothermal conditions in sedimentary basins are frequently examined by using numerical thermal models. For those models, detailed knowledge of rock thermal properties are paramount for a reliable parameterization of layer properties and boundary conditions...
1989-01-01
scale separation, the small scales would provide a diffusivity given by the Green-Kubo formula D = ∫₀^∞ ⟨V(0)·V(t)⟩ dt, where V is the velocity followed by ... the Green-Kubo convergence time can be considerably larger than the eddy under consideration. It is the nature of diffusion that if there isn't any ... adopt an eddy viscosity parameterization based on SCM or even a lower-level parameterization, then one does find that, by integrating the mean
Erlotinib-induced rash spares previously irradiated skin
International Nuclear Information System (INIS)
Lips, Irene M.; Vonk, Ernest J.A.; Koster, Mariska E.Y.; Houwing, Ronald H.
2011-01-01
Erlotinib is an epidermal growth factor receptor inhibitor prescribed to patients with locally advanced or metastasized non-small cell lung carcinoma after failure of at least one earlier chemotherapy treatment. Approximately 75% of the patients treated with erlotinib develop acneiform skin rashes. A patient treated with erlotinib 3 months after finishing concomitant treatment with chemotherapy and radiotherapy for non-small cell lung cancer is presented. Unexpectedly, the part of the skin that had been included in his previous radiotherapy field was completely spared from the erlotinib-induced acneiform skin rash. The exact mechanism of erlotinib-induced rash sparing in previously irradiated skin is unclear. The underlying mechanism of this phenomenon needs to be explored further, because the number of patients being treated with a combination of both therapeutic modalities is increasing. The therapeutic effect of erlotinib in the area of the previously irradiated lesion should be assessed. (orig.)
Reasoning with Previous Decisions: Beyond the Doctrine of Precedent
DEFF Research Database (Denmark)
Komárek, Jan
2013-01-01
The reason for the rather conceited attitude of some comparatists is the dominance of the common law paradigm of precedent and the accompanying ‘case law method’. If we want to understand how courts and lawyers in different jurisdictions use previous judicial decisions in their argument, we need to move beyond the concept of precedent to a wider notion, which would embrace practices and theories in legal systems outside the Common law tradition. Such practices may not follow the ‘case law method’, but they are no less rational and intellectually sophisticated. This article presents the concept of ‘reasoning with previous decisions’ as such an alternative and develops its basic models. The article first points out several shortcomings inherent in limiting the inquiry into reasoning with previous decisions to the common law paradigm (1). On the basis of numerous examples provided in section (1), I will present two basic models of reasoning...
[Prevalence of previously diagnosed diabetes mellitus in Mexico].
Rojas-Martínez, Rosalba; Basto-Abreu, Ana; Aguilar-Salinas, Carlos A; Zárate-Rojas, Emiliano; Villalpando, Salvador; Barrientos-Gutiérrez, Tonatiuh
2018-01-01
To compare the prevalence of previously diagnosed diabetes in 2016 with previous national surveys and to describe treatment and its complications. Mexico's national surveys Ensa 2000, Ensanut 2006, 2012 and 2016 were used. For 2016, logistic regression models and measures of central tendency and dispersion were obtained. The prevalence of previously diagnosed diabetes in 2016 was 9.4%. The increase of 2.2% relative to 2012 was not significant and was only observed in patients older than 60 years. While preventive measures have increased, access to medical treatment and lifestyle have not changed. The treatment has been modified, with an increase in insulin use and a decrease in hypoglycaemic agents. Population aging, lack of screening actions and the increase in diabetes complications will lead to an increase in the burden of disease. Policy measures targeting primary and secondary prevention of diabetes are crucial.
Cardiovascular magnetic resonance in adults with previous cardiovascular surgery.
von Knobelsdorff-Brenkenhoff, Florian; Trauzeddel, Ralf Felix; Schulz-Menger, Jeanette
2014-03-01
Cardiovascular magnetic resonance (CMR) is a versatile non-invasive imaging modality that serves a broad spectrum of indications in clinical cardiology, with proven evidence. Most of the numerous applications are appropriate in patients with previous cardiovascular surgery in the same manner as in non-surgical subjects. However, some specifics have to be considered. This review article is intended to provide information about the application of CMR in adults with previous cardiovascular surgery. In particular, the two main scenarios, i.e. following coronary artery bypass surgery and following heart valve surgery, are highlighted. Furthermore, several pictorial descriptions of other potential indications for CMR after cardiovascular surgery are given.
International Nuclear Information System (INIS)
Jr, William I Gustafson; Berg, Larry K; Easter, Richard C; Ghan, Steven J
2008-01-01
All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization
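The idea of driving tracer transport with the CRM's cloud mass fluxes can be caricatured in a single-column sketch. This is not the ECPP code: the upwind `subsidence_step` update and all profile values are illustrative assumptions, and the real parameterization also treats entrainment, detrainment, cloud fraction and precipitation.

```python
# Hedged sketch of one ingredient of mass-flux-driven tracer transport:
# an updraft mass flux implies compensating subsidence in the environment,
# which advects the environmental tracer profile downward.

def subsidence_step(conc, mass_flux, rho, dz, dt):
    """One upwind step of compensating subsidence,
    dc/dt = (M / rho) * dc/dz, with M the cloud updraft mass flux
    (kg m-2 s-1). Levels are ordered bottom to top."""
    new = conc[:]
    for k in range(len(conc) - 1):
        w_sub = mass_flux[k] / rho[k]  # subsidence speed (m/s)
        new[k] = conc[k] + dt * w_sub * (conc[k + 1] - conc[k]) / dz
    return new

profile = [1.0, 2.0, 3.0, 4.0]  # tracer mixing ratio, bottom to top
updated = subsidence_step(profile, mass_flux=[0.1] * 4, rho=[1.0] * 4,
                          dz=100.0, dt=100.0)
```

With a tracer that increases upward, each subsidence step mixes higher concentrations downward, which is the benchmark behavior a single-column scheme is asked to reproduce against the CRM.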
Energy Technology Data Exchange (ETDEWEB)
Jr, William I Gustafson; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov
2008-04-15
All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.
An Analytical Finite-Strain Parameterization for Texture Evolution in Deformed Olivine Polycrystals
Ribe, N. M.; Castelnau, O.
2017-12-01
) that can be fit using simple analytical functions. Our new parameterization allows CPO to be calculated some 10^7 times faster than full self-consistent methods such as SOSC.
Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.
Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin
2017-12-01
Despite the rapid development of X-ray cone-beam CT (CBCT), image noise still remains a major issue for low dose CBCT. To suppress the noise effectively while retaining the structures well in low dose CBCT images, in this paper, a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate our 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real data experiments are conducted for performance validation. Based on these results, we found: 1) the 3-D dictionary-based sparse coefficients follow a Laplacian distribution roughly three orders of magnitude narrower than that of the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z-shape, and is hence referred to as the Z-curve in this paper; 3) the parameter associated with the maximum curvature point of the Z-curve provides a good parameter choice, which can be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) similar noise performance as the regular dose FDK reconstruction regarding the standard deviation metric could be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose level projections. The contrast-noise ratio is improved by ~2.5/3.5 times with respect to two different cases under the (1/8) dose level compared
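The maximum-curvature selection behind the ZIP idea can be sketched as follows. This is a generic finite-difference illustration under stated assumptions, not the paper's algorithm: the function name, the uniform parameter grid, and the toy Z-shaped curve are all invented for the example.

```python
# Hedged sketch: sample the sparsity level across regularization parameters,
# then pick the parameter at the point of maximum curvature of the
# resulting Z-shaped curve.

def max_curvature_index(y, h=1.0):
    """Index of the maximum-curvature point of a sampled curve y(x),
    with x uniformly spaced at step h (central finite differences)."""
    best_kappa, best_i = -1.0, 1
    for i in range(1, len(y) - 1):
        d1 = (y[i + 1] - y[i - 1]) / (2 * h)            # first derivative
        d2 = (y[i + 1] - 2 * y[i] + y[i - 1]) / h ** 2  # second derivative
        kappa = abs(d2) / (1 + d1 * d1) ** 1.5          # curvature
        if kappa > best_kappa:
            best_kappa, best_i = kappa, i
    return best_i

# Toy Z-shaped sparsity curve: flat, then a steep drop, then flat again.
curve = [1.0, 1.0, 1.0, 1.0, 0.6, 0.2, 0.0, 0.0, 0.0]
i = max_curvature_index(curve)  # index of the upper "corner" of the Z
```

The upper corner of the Z is where the curve bends hardest, so the curvature maximum lands there rather than on the flat shelves or the linear drop.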
Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange
Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.
2013-07-01
. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.
Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations
Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.
2009-01-01
Steady increases in computing power have allowed numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud resolving models, these high resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events. Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecasting (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate some assumptions about the size distribution of a small number of hydrometeor classes in order to predict their evolution, advection and precipitation within the forecast domain. Although many of these schemes produce similar forecasts of events on the synoptic scale, they often differ in significant details regarding precipitation and cloud cover, as well as the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" of polar orbiting satellites tasked with cataloguing global cloud
Directory of Open Access Journals (Sweden)
M. J. Alvarado
2016-07-01
Full Text Available Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosols and Clouds (OPAC) v3.1 package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP) v2.1 to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10–23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass
The Empirical Canadian High Arctic Ionospheric Model (E-CHAIM): Bottomside Parameterization
Themens, D. R.; Jayachandran, P. T.
2017-12-01
It is well known that the International Reference Ionosphere (IRI) suffers reduced accuracy in its representation of monthly median ionospheric electron density at high latitudes. These inaccuracies are believed to stem, at least in part, from a historical lack of data from these regions. Now, roughly thirty and forty years after the development of the original URSI and CCIR foF2 maps, respectively, there exists a much larger dataset of high latitude observations of ionospheric electron density. These new measurements come in the form of new ionosonde deployments, such as those of the Canadian High Arctic Ionospheric Network, the CHAMP, GRACE, and COSMIC radio occultation missions, and the construction of the Poker Flat, Resolute, and EISCAT Incoherent Scatter Radar systems. These new datasets afford an opportunity to revise the IRI's representation of the high latitude ionosphere. Using a spherical cap harmonic expansion to represent horizontal and diurnal variability and a Fourier expansion in day of year to represent seasonal variations, we have developed a new model of the bottomside ionosphere's electron density for the high latitude ionosphere, above 50° N geomagnetic latitude. For the peak heights of the E and F1 layers (hmE and hmF1, respectively), current standards use a constant value for hmE and either use a single-parameter model for hmF1 (IRI) or scale hmF1 with the F peak (NeQuick). For E-CHAIM, we have diverged from this convention to account for the greater variability seen in these characteristics at high latitudes, opting to use a full spherical harmonic model description for each of these characteristics. For the description of the bottomside vertical electron density profile, we present a single-layer model with an altitude-varying scale height. The scale height function is taken as the sum of three scale height layer functions anchored to the F2 peak, hmF1, and hmE. This parameterization successfully reproduces the structure of the various bottomside
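The seasonal part of such a parameterization, a Fourier expansion in day of year, can be sketched as follows. This is a generic illustration, not E-CHAIM itself: the function name, the coefficient values, and the choice of two harmonics are assumptions, and in the real model the coefficients would themselves come from the spherical cap harmonic fit in the horizontal.

```python
import math

# Hedged sketch of a Fourier expansion in day of year for a seasonally
# varying ionospheric characteristic. All coefficients are placeholders.

def seasonal(doy, a0, coeffs, period=365.25):
    """Evaluate a0 + sum_k [a_k cos(2*pi*k*doy/T) + b_k sin(2*pi*k*doy/T)],
    with coeffs = [(a_1, b_1), (a_2, b_2), ...]."""
    val = a0
    for k, (a, b) in enumerate(coeffs, start=1):
        arg = 2.0 * math.pi * k * doy / period
        val += a * math.cos(arg) + b * math.sin(arg)
    return val

# Annual + semi-annual terms with made-up coefficients:
f_solstice = seasonal(172, a0=8.0, coeffs=[(2.0, 0.5), (0.4, -0.1)])
```

Truncating the expansion at a few harmonics keeps the seasonal description smooth and compact, which is the usual motivation for this representation.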
Colangelo, Antonio C.
2010-05-01
The central purpose of this work is to perform a reverse procedure with respect to the conventional parameterization approach to mass movements. The idea is to generate a number of synthetic mass movements by means of the "slope stability simulator" (Colangelo, 2007) and compare their morphological and physical properties with the effective conditions of "real" mass movements. This device is an integrated part of the "relief unity emulator" (rue), which makes it possible to generate synthetic mass movements in a synthetic slope environment. The "rue" was built upon fundamental geomorphological concepts. These devices operate with an integrated set of mechanical, geomorphic and hydrological models. The "slope stability simulator" (sss) device makes it possible to perform a detailed slope stability analysis in a theoretical three-dimensional space, by evaluating the spatial behavior of critical depths, gradients and saturation levels in the "potential rupture surfaces" inferred along a set of slope profiles that compound a synthetic slope unity. This is a meta-stable 4-dimensional object generated by means of "rue" that represents a sequential evolution of a generator profile. Here, the infinite slope model was adapted for each slope profile, and the profiles were sliced by means of a finite element solution, as in the Bishop method. For the synthetic slope systems generated, we assume that the potential rupture surface occurs at the soil-regolith or soil-rock boundary in the slope material. Sixteen variables were included in the "rue-sss" device, which operates in an integrated manner. For each cell, the factor of safety was calculated considering the value of the shear strength (cohesion and friction) of the material, the soil-regolith boundary depth, the soil moisture content, the potential rupture surface gradient, the slope surface gradient, the top of the subsurface flow gradient, the apparent soil bulk density and the vegetation surcharge. The slope soil was considered as a cohesive material. The 16 variables incorporated in the models were analyzed for
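The per-cell stability test described above can be sketched with the classic infinite-slope factor of safety, the model the abstract says was adapted. This is a minimal sketch only: the actual "rue-sss" device integrates sixteen variables, while the function below uses the textbook form with illustrative parameter values.

```python
import math

# Hedged sketch of an infinite-slope factor-of-safety check for one cell.
# FS = [c' + (gamma - m*gamma_w) * z * cos^2(beta) * tan(phi')]
#      / [gamma * z * sin(beta) * cos(beta)]

def factor_of_safety(c, phi_deg, gamma, gamma_w, z, m, beta_deg):
    """c: effective cohesion (Pa); phi_deg: friction angle (deg);
    gamma, gamma_w: soil and water unit weights (N/m3);
    z: depth to the rupture surface (m); m: saturated fraction of z (0..1);
    beta_deg: rupture-surface gradient (deg)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative values: same cell, dry vs fully saturated.
fs_dry = factor_of_safety(c=10e3, phi_deg=30, gamma=18e3, gamma_w=9.81e3,
                          z=2.0, m=0.0, beta_deg=35)
fs_wet = factor_of_safety(c=10e3, phi_deg=30, gamma=18e3, gamma_w=9.81e3,
                          z=2.0, m=1.0, beta_deg=35)
# Rising saturation lowers FS; FS < 1 flags the cell as unstable.
```

Evaluating this expression over every cell of a synthetic slope, with depths, gradients and saturation levels varying in space, is the kind of sweep the "sss" device performs.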
Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange
Directory of Open Access Journals (Sweden)
C. R. Flechard
2013-07-01
-chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.