WorldWideScience

Sample records for cloud-resolving modeling study

  1. A Coupled GCM-Cloud Resolving Modeling System, and a Regional Scale Model to Study Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo

    2007-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. A coupled global circulation model (GCM) and cloud-scale model (termed a superparameterization or multi-scale modeling framework, MMF) is required to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs, with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE versions), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional-scale model, WRF. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) The Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) A discussion of the Goddard WRF version (its developments and applications).

  2. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud-resolving climate model is needed to reduce major systematic errors in climate simulations that arise from structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the effort to port the SAM (System for Atmospheric Modeling) cloud-resolving model to heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted for integration into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to the GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with a cloud-resolving "superparameterization" within each grid cell of a global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy for achieving good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  3. Numerical simulations of altocumulus with a cloud resolving model

    Energy Technology Data Exchange (ETDEWEB)

    Liu, S.; Krueger, S.K. [Univ. of Utah, Salt Lake City, UT (United States)]

    1996-04-01

    Altocumulus and altostratus clouds together cover approximately 22% of the earth's surface. They play an important role in the earth's energy budget through their effect on solar and infrared radiation. However, there has been little investigation of altocumulus clouds by either modelers or observational programs. Starr and Cox (SC) (1985a,b) simulated an altostratus case as part of the same study in which they modeled a thin layer of cirrus. Although this calculation was originally described as representing altostratus, it probably better represents altocumulus stratiformis. In this paper, we simulate altocumulus clouds with a cloud resolving model (CRM). We first briefly describe the CRM. We calculate the same middle-level cloud case as SC to compare our results with theirs. We will look at the role of cloud-scale processes in response to large-scale forcing. We will also discuss radiative effects by simulating diurnal and nocturnal cases. Finally, we discuss the utility of a 1D model by comparing 1D simulations and 2D simulations.

  4. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from cloud-resolving model simulations, that is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of atmospheric circulation: it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified entrainment-rate parameterization, which suppresses an excessive increase of entrainment and thus an excessive increase of low-level clouds.
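    The entrainment-rate formulation itself is not given in the abstract. As a hedged illustration of how an entrainment rate dilutes a convective updraft, the sketch below uses the common textbook inverse-height form eps(z) = c/z (the constant c, the parcel start, and all values are hypothetical, not Baba's formulation) in an entraining-plume equation for moist static energy, dh_u/dz = -eps(z) (h_u - h_env).

```python
import numpy as np

def dilute_plume(h_env, z, c=0.4, h0=None):
    """Integrate updraft moist static energy upward with entrainment.

    h_env : environmental moist static energy profile [J/kg]
    z     : heights of the levels [m], increasing
    c     : hypothetical entrainment constant in eps(z) = c / z
    h0    : optional initial updraft value; defaults to a buoyant excess
    """
    h_u = np.empty_like(h_env)
    h_u[0] = h0 if h0 is not None else h_env[0] + 2.0e3  # start with excess energy
    for k in range(1, len(z)):
        dz = z[k] - z[k - 1]
        eps = c / z[k]  # entrainment rate [1/m], larger near the surface
        # forward-Euler step of dh_u/dz = -eps * (h_u - h_env)
        h_u[k] = h_u[k - 1] - eps * (h_u[k - 1] - h_env[k - 1]) * dz
    return h_u
```

A stronger entrainment constant c dilutes the updraft toward the environmental profile faster, which is the mechanism the abstract invokes when it says the new scheme suppresses an excessive increase of entrainment.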

  5. Cloud-Resolving Modeling Intercomparison Study of a Squall Line Case from MC3E - Properties of Convective Core

    Science.gov (United States)

    Fan, J.; Han, B.; Varble, A.; Morrison, H.; North, K.; Kollias, P.; Chen, B.; Dong, X.; Giangrande, S. E.; Khain, A.; Lin, Y.; Mansell, E.; Milbrandt, J.; Stenz, R.; Thompson, G.; Wang, Y.

    2016-12-01

    The large spread in CRM simulations of deep convection and aerosol effects on deep convective clouds (DCCs) makes it difficult to (1) further our understanding of deep convection and (2) define "benchmarks", which in turn limits their use in parameterization development. A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research and Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes to understand the specific processes that lead to the large spread of simulated convection and precipitation. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area but a much narrower stratiform area. The magnitudes of the virtual potential temperature drop, pressure rise, and peak wind speed associated with the passage of the gust front are significantly smaller than observed, suggesting that the simulated cold pools are weaker. Simulations generally overestimate the vertical velocity and radar reflectivity in convective cores compared with the retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes. The spread of updraft velocity results from a combination of the low-level pressure perturbation gradient (PPG) and buoyancy. Both the PPG and thermal buoyancy are small in simulations of weak convection but large in those of strong convection. Ice-related parameterizations contribute most to the spread of updraft velocity, but they are not the reason for the large spread of precipitation. The understanding gained in this study can help focus future observations and parameterization development.

  6. Forecasting Lightning Threat using Cloud-Resolving Model Simulations

    Science.gov (United States)

    McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.

    2008-01-01

    Two new approaches are proposed and developed for making time- and space-dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed-phase region at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single
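    The two proxies and the blend described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the peak-ratio calibration form, and the 50/50 blend weight are assumptions. The abstract specifies only that method 1 uses the upward flux of precipitating ice at the -15 °C level, method 2 uses column-integrated ice, and each is calibrated so its domain-wide peak matches the observed peak flash rate density.

```python
import numpy as np

def flux_proxy(w, q_precip_ice, k_minus15):
    """Method 1: upward ice flux at the -15 °C model level.

    w, q_precip_ice : (nz, ny, nx) vertical velocity [m/s] and ice mixing ratio [kg/kg]
    k_minus15       : index of the level nearest -15 °C
    Returns a 2D (ny, nx) proxy field.
    """
    return w[k_minus15] * q_precip_ice[k_minus15]

def vii_proxy(q_ice, rho, dz):
    """Method 2: vertically integrated ice in each grid column [kg/m^2]."""
    return np.sum(rho * q_ice * dz, axis=0)

def calibrate(proxy_peak, observed_peak_flash_rate):
    """Single multiplicative factor matching domain-wide peaks (assumed form)."""
    return observed_peak_flash_rate / proxy_peak

def blend(f1, f2, r1, r2, w1=0.5):
    """Blended threat: weighted sum of the two calibrated proxy fields."""
    return w1 * r1 * f1 + (1 - w1) * r2 * f2
```

The blend weight w1 would in practice be tuned so the combined field keeps the flux proxy's temporal sensitivity while inheriting the areal coverage of the integrated-ice proxy.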

  7. Representation of Arctic mixed-phase clouds and the Wegener-Bergeron-Findeisen process in climate models: Perspectives from a cloud-resolving study

    Science.gov (United States)

    Fan, Jiwen; Ghan, Steven; Ovchinnikov, Mikhail; Liu, Xiaohong; Rasch, Philip J.; Korolev, Alexei

    2011-01-01

    Two types of Arctic mixed-phase clouds observed during the ISDAC and M-PACE field campaigns are simulated using a three-dimensional cloud-resolving model (CRM) with size-resolved cloud microphysics. The modeled cloud properties agree reasonably well with aircraft measurements and surface-based retrievals. Cloud properties such as the probability density function (PDF) of vertical velocity (w), cloud liquid and ice, regimes of cloud particle growth, including the Wegener-Bergeron-Findeisen (WBF) process, and the relationships among properties and processes in mixed-phase clouds are examined to gain insights for improving their representation in General Circulation Models (GCMs). The PDF of the simulated w is well represented by a Gaussian function, validating, at least for Arctic clouds, the subgrid treatment used in GCMs. The PDFs of liquid and ice water contents can be approximated by Gamma functions, and a Gaussian function can describe the total water distribution, but a fixed variance assumption should be avoided in both cases. The CRM results support the assumption frequently used in GCMs that mixed-phase clouds maintain water vapor near liquid saturation. Thus, ice continues to grow throughout the stratiform cloud, but the WBF process occurs in about 50% of the cloud volume where liquid and ice co-exist, predominantly in downdrafts. In updrafts, liquid and ice particles grow simultaneously. The relationship between the ice depositional growth rate and cloud ice strongly depends on the capacitance of the ice particles. The simplified size-independent capacitance of ice particles used in GCMs could lead to large deviations in ice depositional growth.
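    The distribution fits reported above (Gaussian for w, Gamma for water contents) can be illustrated with a simple moment-based fit on synthetic data. This is a hedged sketch: the sample values are hypothetical stand-ins for CRM output, and moment matching stands in for whatever fitting procedure the study actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for CRM output (values chosen for illustration only)
w_samples = rng.normal(loc=0.1, scale=0.5, size=20000)      # vertical velocity [m/s]
lwc_samples = rng.gamma(shape=2.0, scale=0.05, size=20000)  # liquid water content [g/m^3]

# Gaussian fit for w: mean and standard deviation are the two parameters
w_mu = w_samples.mean()
w_sigma = w_samples.std()

# Gamma fit for liquid water content by moment matching:
# mean = a * scale, variance = a * scale^2  =>  a = mean^2 / var, scale = var / mean
m, v = lwc_samples.mean(), lwc_samples.var()
a_hat = m * m / v
scale_hat = v / m
```

The "fixed variance assumption should be avoided" caveat in the abstract corresponds here to letting w_sigma and scale_hat vary from cloud to cloud rather than pinning them to climatological constants.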

  8. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    Science.gov (United States)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

    As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed-phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed-phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating

  9. Coupled fvGCM-GCE Modeling System, 3D Cloud-Resolving Model and Cloud Library

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. A coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) is required to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM). A prototype MMF is being developed, and production runs will be conducted at the beginning of 2005. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes, (2) The Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), (3) A cloud library generated by the Goddard MMF and the 3D GCE model, and (4) A brief discussion of using the GCE model to develop a global cloud simulator.

  10. Tropical Oceanic Precipitation Processes Over Warm Pool: 2D and 3D Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, W.-K.; Johnson, D.; Simpson, J.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate these processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can resolve reasonably well the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture and momentum budgets associated with several convective systems that developed during the TOGA COARE IFA westerly wind burst event (late December 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysics scheme.

  11. Role of atmospheric aerosol concentration on deep convective precipitation: Cloud-resolving model simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2007-12-01

    A two-dimensional cloud-resolving model with detailed spectral bin microphysics is used to examine the effect of aerosols on three different deep convective cloud systems that developed in different geographic locations: south Florida, Oklahoma, and the central Pacific. A pair of model simulations, one with idealized low cloud condensation nuclei (CCN) concentrations (clean environment) and one with idealized high CCN concentrations (dirty environment), is conducted for each case. In all three cases, rain reaches the ground earlier in the low-CCN case. Rain suppression is also evident in all three cases with high CCN. However, this suppression occurs only during the early stages of the simulations. During the mature stages, the effects of increasing aerosol concentration range from rain suppression in the Oklahoma case, to almost no effect in the Florida case, to rain enhancement in the Pacific case. The model results suggest that evaporative cooling in the lower troposphere is a key process in determining whether high CCN reduces or enhances precipitation. Stronger evaporative cooling can produce a stronger cold pool and thus stronger low-level convergence through interactions with the low-level wind shear. Consequently, precipitation processes can be more vigorous. For example, the evaporative cooling in the lower troposphere is more than two times stronger with high CCN in the Pacific case. Sensitivity tests also suggest that ice processes are crucial for suppressing precipitation in the Oklahoma case with high CCN. A comparison with and review of other modeling studies are also presented.

  12. Mechanisms of diurnal precipitation over the US Great Plains: a cloud resolving model perspective

    Science.gov (United States)

    Lee, Myong-In; Choi, Ildae; Tao, Wei-Kuo; Schubert, Siegfried D.; Kang, In-Sik

    2010-02-01

    The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.

  13. Mechanisms of Diurnal Precipitation over the United States Great Plains: A Cloud-Resolving Model Simulation

    Science.gov (United States)

    Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.

    2010-01-01

    The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating.
    This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.

  14. IO strategies and data services for petascale data sets from a global cloud resolving model

    International Nuclear Information System (INIS)

    Schuchardt, K L; Palmer, B J; Daily, J A; Elsethagen, T O; Koontz, A S

    2007-01-01

    Global cloud resolving models at resolutions of 4 km or less create significant challenges for simulation output, data storage, data management, and post-simulation analysis and visualization. To support efficient model output as well as data analysis, new methods for IO and data organization must be evaluated. The model we are supporting, the Global Cloud Resolving Model being developed at Colorado State University, uses a geodesic grid. The non-monotonic nature of the grid's coordinate variables requires enhancements to existing data processing tools and community standards for describing and manipulating grids. The resolution, size and extent of the data suggest the need for parallel analysis tools and allow for the possibility of new techniques in data mining, filtering and comparison to observations. We describe the challenges posed by various aspects of data generation, management, and analysis; our work exploring IO strategies for the model; and a preliminary architecture, web portal, and tool enhancements which, when complete, will enable broad community access to the data sets in ways familiar to the community.

  15. A study of cloud microphysics and precipitation over the Tibetan Plateau by radar observations and cloud-resolving model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wenhua [State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing, China; Pacific Northwest National Laboratory, Richland, Washington, USA]; Sui, Chung-Hsiung [Department of Atmospheric Sciences, National Taiwan University, Taipei, Taiwan]; Fan, Jiwen [Pacific Northwest National Laboratory, Richland, Washington, USA]; Hu, Zhiqun [State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing, China]; Zhong, Lingzhi [State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing, China]

    2016-11-27

    Cloud microphysical properties and precipitation over the Tibetan Plateau (TP) are unique because of the high terrain, clean atmosphere, and sufficient water vapor. With dual-polarization precipitation radar and cloud radar measurements during the Third Tibetan Plateau Atmospheric Scientific Experiment (TIPEX-III), the microphysics and precipitation simulated by the Weather Research and Forecasting (WRF) model with the Chinese Academy of Meteorological Sciences (CAMS) microphysics and other microphysical schemes are investigated for a typical plateau rainfall event on 22 July 2014. Results show that the WRF-CAMS simulation reasonably reproduces the spatial distribution of 24-h accumulated precipitation, but has limitations in simulating the time evolution of precipitation rates. The model-calculated polarimetric radar variables have biases as well, suggesting bias in the modeled hydrometeor types. The raindrop sizes in the convective region are larger than those in the stratiform region, as indicated by the smaller intercept of the raindrop size distribution in the former. The sensitivity experiments show that precipitation processes are sensitive to changes in warm-rain condensation and in the nucleated droplet size (but less sensitive to the evaporation process). Increasing droplet condensation produces the best area-averaged rain rate during the weak-convection period compared with the observations, suggesting a considerable bias in thermodynamics in the baseline simulation. Increasing the initial cloud droplet size reduces the rain rate by half, an effect opposite to that of increasing droplet condensation.

  16. A Madden-Julian oscillation event realistically simulated by a global cloud-resolving model.

    Science.gov (United States)

    Miura, Hiroaki; Satoh, Masaki; Nasuno, Tomoe; Noda, Akira T; Oouchi, Kazuyoshi

    2007-12-14

    A Madden-Julian Oscillation (MJO) is a massive weather event consisting of deep convection coupled with atmospheric circulation, moving slowly eastward over the Indian and Pacific Oceans. Despite its enormous influence on many weather and climate systems worldwide, it has proven very difficult to simulate an MJO because of assumptions about cumulus clouds in global meteorological models. Using a model that allows direct coupling of the atmospheric circulation and clouds, we successfully simulated the slow eastward migration of an MJO event. Topography, the zonal sea surface temperature gradient, and interplay between eastward- and westward-propagating signals controlled the timing of the eastward transition of the convective center. Our results demonstrate the potential for making month-long MJO predictions when global cloud-resolving models with realistic initial conditions are used.

  17. Sensitivity of tropical convection in cloud-resolving WRF simulations to model physics and forcing procedures

    Science.gov (United States)

    Endo, S.; Lin, W.; Jackson, R. C.; Collis, S. M.; Vogelmann, A. M.; Wang, D.; Oue, M.; Kollias, P.

    2017-12-01

    Tropical convection is one of the main drivers of the climate system and is recognized as a major source of uncertainty in climate models. High-resolution modeling is performed with a focus on the deep convection cases during the active monsoon period of the TWP-ICE field campaign to explore ways to improve the fidelity of convection-permitting tropical simulations. Cloud-resolving model (CRM) simulations are performed with WRF, modified to apply flexible configurations for LES/CRM simulations. We have enhanced the capability of the forcing module to test different implementations of large-scale vertical advective forcing, including a function for optional use of large-scale thermodynamic profiles and a function for condensate advection. The baseline 3D CRM configurations are, following Fridlind et al. (2012), driven by observationally constrained ARM forcing and tested with diagnosed surface fluxes, fixed sea surface temperature, and prescribed aerosol size distributions. After the spin-up period, the simulations follow the observed precipitation peaks associated with the passages of precipitation systems. Preliminary analysis shows that the simulation is generally not sensitive to the treatment of the large-scale vertical advection of heat and moisture, while more noticeable changes in the peak precipitation rate are produced when thermodynamic profiles above the boundary layer are nudged to the reference profiles from the forcing dataset. The presentation will explore comparisons with observationally based metrics associated with convective characteristics and examine model performance with a focus on model physics, doubly periodic vs. nested configurations, and different forcing procedures/sources. A radar simulator will be used to understand possible uncertainties in radar-based retrievals of convection properties. Fridlind, A. M., et al. (2012), A comparison of TWP-ICE observational data with cloud-resolving model results, J. Geophys. Res., 117, D05204.
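
    The nudging described above amounts to a Newtonian relaxation of the model profile toward the forcing-dataset reference profile above the boundary layer. The sketch below illustrates the idea only; the function name, relaxation timescale, and 2 km boundary-layer cutoff are assumptions, not the study's actual implementation.

```python
import numpy as np

# Newtonian relaxation ("nudging") of a model thermodynamic profile toward
# a reference profile, applied above the boundary layer only. The timescale
# tau and the 2 km cutoff are illustrative assumptions.
def nudge_profile(theta, theta_ref, z, dt, tau=3600.0, z_bl=2000.0):
    tendency = (theta_ref - theta) / tau   # K/s relaxation tendency
    tendency[z < z_bl] = 0.0               # leave the boundary layer free
    return theta + dt * tendency

z = np.linspace(0.0, 15000.0, 31)          # height (m)
theta = np.full_like(z, 300.0)             # K, model profile
theta_ref = np.full_like(z, 302.0)         # K, forcing reference profile
theta_new = nudge_profile(theta, theta_ref, z, dt=600.0)
```

    With dt much smaller than tau, each step moves the free-tropospheric profile only a fraction of the way toward the reference, which is what keeps the simulation close to, but not locked onto, the forcing.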

  18. Convective Systems Over the Japan Sea: Cloud-Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Yoshizaki, Masanori; Shie, Chung-Lin; Kato, Teruyuki

    2002-01-01

    Wintertime observations of MCSs (Mesoscale Convective Systems) over the Sea of Japan - 2001 (WMO-01) were collected from January 12 to February 1, 2001. One of the major objectives was to better understand and forecast snow systems and accompanying disturbances and the key physical processes involved in the formation and development of these disturbances. Multiple observation platforms (e.g., upper-air soundings, Doppler radar, wind profilers, radiometers, etc.) during WMO-01 provided a first opportunity to investigate the detailed characteristics of convective storms and air pattern changes associated with winter storms over the Sea of Japan region. WMO-01 also provided estimates of the apparent heat source (Q1) and apparent moisture sink (Q2). The vertical integrals of Q1 and Q2 are equal to the surface precipitation rates. The horizontal and vertical advective components of Q1 and Q2 can be used as large-scale forcing for cloud-resolving models (CRMs). The Goddard Cumulus Ensemble (GCE) model is a CRM (typically run with a 1-km grid size). The GCE model has sophisticated microphysics and allows explicit interactions between clouds, radiation, and surface processes. It will be used to understand and quantify the precipitation processes associated with wintertime convective systems over the Sea of Japan (using data collected during WMO-01). This is the first cloud-resolving model used to simulate precipitation processes in this particular region. The GCE model-simulated WMO-01 results will also be compared to other GCE model-simulated weather systems that developed during other field campaigns (i.e., South China Sea, west Pacific warm pool region, eastern Atlantic region and central USA).
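
    The budget relation stated above (the vertical integral of Q2 balancing the latent heating of surface precipitation, neglecting surface evaporation) can be illustrated with a back-of-envelope column integral. The Q2 profile below is invented purely for illustration.

```python
import numpy as np

cp = 1004.0   # J/(kg K), specific heat of dry air
Lv = 2.5e6    # J/kg, latent heat of vaporization
g = 9.81      # m/s^2

# Invented apparent moisture sink Q2 (K/day) on pressure levels (Pa),
# ordered from model top to surface.
p = np.array([200e2, 300e2, 500e2, 700e2, 850e2, 1000e2])
q2 = np.array([0.2, 1.0, 3.0, 5.0, 4.0, 2.0])

# Column integral <Q2> = (cp/g) * integral of Q2 dp (trapezoid rule),
# converted from K/day to W/m^2.
q2_int = np.sum(0.5 * (q2[1:] + q2[:-1]) * np.diff(p))   # K/day * Pa
q2_col = cp / g * q2_int / 86400.0                       # W/m^2

# Neglecting evaporation, <Q2> ~ Lv * P, so the implied rain rate is:
precip = q2_col / Lv * 86400.0                           # kg/m^2/day ~ mm/day
```

    For this made-up profile the column-integrated Q2 is roughly 280 W/m^2, implying a rain rate near 10 mm/day, a plausible value for an active convective regime.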

  19. Development of Spaceborne Radar Simulator by NICT and JAXA using JMA Cloud-resolving Model

    Science.gov (United States)

    Kubota, T.; Eito, H.; Aonashi, K.; Hashimoto, A.; Iguchi, T.; Hanado, H.; Shimizu, S.; Yoshida, N.; Oki, R.

    2009-12-01

    We are developing synthetic spaceborne radar data toward a simulation of the Dual-frequency Precipitation Radar (DPR) aboard the Global Precipitation Measurement (GPM) core satellite. Our purposes are the production of test-bed data for higher-level DPR algorithm developers, in addition to the diagnosis of a cloud-resolving model (CRM). To make the synthetic data, we utilize the CRM of the Japan Meteorological Agency (JMA-NHM) (Ikawa and Saito 1991, Saito et al. 2006, 2007) and the spaceborne radar simulation algorithm of the National Institute of Information and Communications Technology (NICT) and the Japan Aerospace Exploration Agency (JAXA), named the Integrated Satellite Observation Simulator for Radar (ISOSIM-Radar). The ISOSIM-Radar simulates received power data in a field of view of the spaceborne radar, taking the scan angle of the radar into account (Oouchi et al. 2002, Kubota et al. 2009). The received power data are computed with gaseous and hydrometeor attenuation taken into account. The backscattering and extinction coefficients are calculated assuming the Mie approximation for all species. The dielectric constants for solid particles are computed by the Maxwell-Garnett model (Bohren and Battan 1982). Drop size distributions are treated in accordance with those of the JMA-NHM. We assume a spherical sea surface, a Gaussian antenna pattern, and 49 antenna beam directions for scan angles from -17 to 17 deg. in the PR. In this study, we report the diagnosis of the JMA-NHM with reference to the TRMM Precipitation Radar (PR) and CloudSat Cloud Profiling Radar (CPR) using the ISOSIM-Radar, from the viewpoint of comparing the cloud microphysics schemes of the JMA-NHM. We tested three kinds of explicit bulk microphysics schemes based on Lin et al. (1983): a three-ice 1-moment scheme, a three-ice 2-moment scheme (Eito and Aonashi 2009), and a newly developed four-ice full 2-moment scheme (Hashimoto 2008).
The hydrometeor species considered here are rain, graupel
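
    The Maxwell-Garnett mixing rule mentioned above gives the effective permittivity of a two-phase particle (e.g., ice inclusions in an air matrix for dry snow). The sketch below states the standard formula; the permittivity values and volume fraction are illustrative assumptions, not those used in ISOSIM-Radar.

```python
def maxwell_garnett(eps_matrix, eps_incl, f):
    """Effective permittivity of spherical inclusions (volume fraction f)
    embedded in a host matrix, per the Maxwell-Garnett mixing rule."""
    y = (eps_incl - eps_matrix) / (eps_incl + 2.0 * eps_matrix)
    return eps_matrix * (1.0 + 3.0 * f * y) / (1.0 - f * y)

# Dry snow modeled as ice inclusions in air; the ice permittivity here is
# an assumed microwave value for illustration only.
eps_ice = 3.15 + 0.002j
eps_snow = maxwell_garnett(1.0, eps_ice, f=0.2)
```

    The effective permittivity always falls between that of the matrix and that of the inclusions, approaching the matrix value as the volume fraction goes to zero.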

  20. A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and Goddard Land Information (LIS, that includes the CLM and NOAH land surface models) into a next generation regional scale model, WRF. In this talk, I will present: (1) A brief review on GCE model and its applications on precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major difference between two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) A discussion on the Goddard WRF version (its developments and applications), and (4) The characteristics of the four-dimensional cloud data

  1. Evaluation of cloud resolving model simulations of midlatitude cirrus with ARM and A-Train observations

    Science.gov (United States)

    Muehlbauer, A. D.; Ackerman, T. P.; Lawson, P.; Xie, S.; Zhang, Y.

    2015-12-01

    This paper evaluates cloud-resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration (NASA) A-Train satellites. Vertical profiles of temperature, relative humidity and wind speed are reasonably well simulated by the CSRM and CRM, but there are remaining biases in temperature, wind speed and relative humidity, which can be mitigated by nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in GCMs and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. 
However, there still is considerable

  2. Estimation of convective entrainment properties from a cloud-resolving model simulation during TWP-ICE

    Science.gov (United States)

    Zhang, Guang J.; Wu, Xiaoqing; Zeng, Xiping; Mitovski, Toni

    2016-10-01

    The fractional entrainment rate in convective clouds is an important parameter in current convective parameterization schemes of climate models. In this paper, it is estimated using a 1-km-resolution cloud-resolving model (CRM) simulation of convective clouds from TWP-ICE (the Tropical Warm Pool-International Cloud Experiment). The clouds are divided into different types, characterized by cloud-top heights. The entrainment rates and the moist static energy that is entrained or detrained are determined by analyzing the budget of moist static energy for each cloud type. Results show that the entrained air is a mixture of approximately equal amounts of cloud air and environmental air, and the detrained air is a mixture of ~80 % cloud air and 20 % air with saturation moist static energy at the environmental temperature. After taking into account the difference in moist static energy between the entrained air and the mean environment, the estimated fractional entrainment rate is much larger than those used in current convective parameterization schemes. A high-resolution (100 m) large-eddy simulation of TWP-ICE convection was also analyzed to support the CRM results. It is shown that the characteristics of entrainment rates estimated using both the high-resolution data and CRM-resolution coarse-grained data are similar. For each cloud category, the entrainment rate is high near cloud base and top, but low in the middle of clouds. The entrainment rates are best fitted to the inverse of in-cloud vertical velocity by a second-order polynomial.
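
    The reported fit can be sketched as a least-squares second-order polynomial in 1/w. The velocity and entrainment-rate values below are invented to mimic the reported shape (high entrainment near cloud base and top where w is small, low in mid-cloud), and are not the paper's data.

```python
import numpy as np

# Invented in-cloud vertical velocity w (m/s) and fractional entrainment
# rate eps (1/m) at several levels, for illustration only.
w = np.array([1.5, 3.0, 5.0, 6.0, 4.0, 2.0])
eps = np.array([1.2e-3, 5.0e-4, 2.5e-4, 2.0e-4, 3.5e-4, 8.0e-4])

x = 1.0 / w                      # predictor: inverse vertical velocity
coeffs = np.polyfit(x, eps, 2)   # second-order polynomial fit
eps_fit = np.polyval(coeffs, x)  # fitted entrainment rates
```

    Such a fit encodes the idea that slower updrafts have more time to mix with their environment, so entrainment scales with the inverse of the in-cloud vertical velocity.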

  3. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25–5 km horizontal grid spacings. The main advantage of a CRM is that it allows explicit interactions between microphysics, radiation, turbulence, the surface, and aerosols without subgrid cloud fraction, overlap, or convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of CRM physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capabilities include: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; and (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high-performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
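
    A NetCDF-to-CSV step like item (2) essentially flattens gridded model fields into long-format rows (one row per grid cell) that SQL-on-Hadoop engines such as HIVE or Impala can query. The sketch below uses an in-memory array in place of a real NetCDF read, and the column layout is an assumption, not the SCL's actual schema.

```python
import csv
import io

import numpy as np

# Stand-in for a CRM output variable with dims (time, y, x); a real
# converter would read this from a NetCDF file (e.g., NU-WRF or GCE output).
rain = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["time", "y", "x", "rain"])
# np.ndenumerate yields each index tuple with its value, flattening the grid.
for (t, j, i), value in np.ndenumerate(rain):
    writer.writerow([t, j, i, value])

rows = buf.getvalue().splitlines()
```

    The long format trades file size for queryability: every cell carries its own coordinates, so a SQL engine can filter and aggregate without understanding the original grid layout.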

  4. Precipitation processes developed during TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): 3D Cloud Resolving Model Simulation

    Science.gov (United States)

    Tao, W.-K.

    2006-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR), NOAA GFDL, the U.K. Met Office, Colorado State University and NASA Goddard Space Flight Center. An improved 3D Goddard Cumulus Ensemble (GCE) model was recently used to simulate periods during TOGA COARE (December 19-27, 1992), GATE (September 1-7, 1974), SCSMEX (May 18-26 and June 2-11, 1998) and KWAJEX (August 7-13, August 18-21, and August 29-September 12, 1999) using a 512 by 512 km domain and 41 vertical layers. The major objectives of this paper are: (1) to identify the differences and similarities in the simulated precipitation processes and their associated surface and water energy budgets in TOGA COARE, GATE, KWAJEX, and SCSMEX, and (2) to assess the impact of microphysics, the radiation budget and surface fluxes on the organization of convection in the tropics.

  5. Analysis of the environments of seven Mediterranean tropical-like storms using an axisymmetric, nonhydrostatic, cloud resolving model

    Directory of Open Access Journals (Sweden)

    L. Fita

    2007-01-01

    Tropical-like storms over the Mediterranean Sea are occasionally observed on satellite images, often with a clear eye surrounded by an axisymmetric cloud structure. These storms sometimes attain hurricane intensity and can severely affect coastal lands. A deep, cut-off, cold-core low is usually observed at mid-to-upper tropospheric levels in association with the development of these tropical-like systems. In this study we apply some tools previously used in studies of tropical hurricanes to characterise the environments in which seven known Mediterranean events developed. In particular, an axisymmetric, nonhydrostatic, cloud-resolving model is applied to simulate tropical-like storm genesis and evolution. Results are compared to surface observations when landfall occurred and with satellite microwave-derived wind speed measurements over the sea. Finally, sensitivities of the numerical simulations to different factors (e.g. sea surface temperature, vertical humidity profile and size of the initial precursor of the storm) are examined.

  6. Microphysical variability of vigorous Amazonian deep convection observed by CloudSat, and relevance for cloud-resolving model

    Science.gov (United States)

    Dodson, J. B.; Taylor, P. C.

    2017-12-01

    The number and variety of both satellite cloud observations and cloud simulations are increasing rapidly. This creates a challenge in identifying the best methods for quantifying the physical processes associated with deep convection, and then comparing convective observations with simulations. The use of satellite simulators in conjunction with model output is an increasingly popular method for comparison studies. However, the complexity of deep convective systems renders simplistic comparison metrics hazardous, possibly resulting in misleading or even contradictory conclusions. To investigate this, CloudSat observations of Amazonian deep convective cores (DCCs) and associated anvils are compared and contrasted with output from cloud-resolving models in a manner that both highlights the microphysical properties of observed convection and displays the effects of microphysical parameterizations on allowing robust comparisons. First, contoured frequency by altitude diagrams (CFADs) are calculated from the reflectivity fields of DCCs observed by CloudSat. This reveals two distinct modes of hydrometeor variability in the high-level cloud region, one dominated by snow and aggregates, and the other by large graupel and hail. Second, output from the superparameterized Community Atmospheric Model (SP-CAM) is processed with the Quickbeam radar simulator to produce CFADs which can be compared with the observed CFADs. Two versions of SP-CAM are used, one (version 4) having single-moment microphysics which excludes graupel/hail, and the other (version 5) a double-moment scheme with graupel. The change from version 4 to 5 improves the reflectivity CFAD, even without corresponding changes to non-hydrometeor fields such as vertical velocity. However, it does not produce a realistic double hydrometeor mode. 
Finally, the influences of microphysics are further tested in the System for Atmospheric Modeling (SAM), which allows for higher control over model parameters than
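
    A CFAD is simply a reflectivity histogram computed independently at each altitude and then normalized, so that each altitude row shows the frequency distribution of reflectivity there. A minimal sketch with synthetic reflectivities (the CloudSat/Quickbeam specifics are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
z_km = np.arange(0.0, 15.0, 0.5)              # 30 altitude levels (km)
# Synthetic reflectivity (dBZ): 500 profiles x altitude levels.
refl = rng.normal(20.0, 10.0, size=(500, z_km.size))

dbz_edges = np.arange(-20.0, 61.0, 2.0)       # 2 dBZ-wide bins, 41 edges
# One histogram per altitude level, stacked into a (level, bin) array.
cfad = np.array([np.histogram(refl[:, k], bins=dbz_edges)[0]
                 for k in range(z_km.size)], dtype=float)
cfad /= cfad.sum(axis=1, keepdims=True)       # normalize each altitude row
```

    Normalizing row by row makes the diagram a frequency "by altitude," so levels with few echoes are not visually swamped by levels with many.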

  7. Cloud-Resolving Model Simulations of Aerosol-Cloud Interactions Triggered by Strong Aerosol Emissions in the Arctic

    Science.gov (United States)

    Wang, H.; Kravitz, B.; Rasch, P. J.; Morrison, H.; Solomon, A.

    2014-12-01

    Previous process-oriented modeling studies have highlighted the dependence of the effectiveness of cloud brightening by aerosols on cloud regime in the warm marine boundary layer. Cloud microphysical processes in clouds that contain ice, and hence the mechanisms that drive aerosol-cloud interactions, are more complicated than in warm clouds. Interactions between ice particles and liquid drops add additional levels of complexity to aerosol effects. A cloud-resolving model is used to study aerosol-cloud interactions in the Arctic triggered by strong aerosol emissions, through either geoengineering injection or concentrated sources such as shipping and fires. An updated cloud microphysical scheme with prognostic aerosol and cloud particle numbers is employed. Model simulations are performed in pure supercooled liquid and mixed-phase clouds, separately, with or without an injection of aerosols into either a clean or a more polluted Arctic boundary layer. Vertical mixing and cloud scavenging of particles injected from the surface are still quite efficient in the less turbulent cold environment. Overall, the injection of aerosols into the Arctic boundary layer can delay the collapse of the boundary layer and increase low-cloud albedo. The pure liquid clouds are more susceptible to the increase in aerosol number concentration than the mixed-phase clouds. Rain production processes are more effectively suppressed by aerosol injection, whereas ice precipitation (snow) is affected less; thus the effectiveness of brightening mixed-phase clouds is lower than for liquid-only clouds. Aerosol injection into a clean boundary layer results in a greater cloud albedo increase than injection into a polluted one, consistent with current knowledge about aerosol-cloud interactions. Unlike previous studies investigating warm clouds, the impact of dynamical feedback due to precipitation changes is small. According to these results, which are dependent upon the representation of ice nucleation

  8. Comparison of convective clouds observed by spaceborne W-band radar and simulated by cloud-resolving atmospheric models

    Science.gov (United States)

    Dodson, Jason B.

    Deep convective clouds (DCCs) play an important role in regulating global climate through vertical mass flux, vertical water transport, and radiation. For general circulation models (GCMs) to simulate the global climate realistically, they must simulate DCCs realistically. GCMs have traditionally used cumulus parameterizations (CPs). Much recent research has shown that multiple persistent unrealistic behaviors in GCMs are related to limitations of CPs. Two alternatives to CPs exist: the global cloud-resolving model (GCRM), and the multiscale modeling framework (MMF). Both can directly simulate the coarser features of DCCs because of their multi-kilometer horizontal resolutions, and can simulate large-scale meteorological processes more realistically than GCMs. However, the question of realistic behavior of simulated DCCs remains. How closely do simulated DCCs resemble observed DCCs? In this study I examine the behavior of DCCs in the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) and the Superparameterized Community Atmospheric Model (SP-CAM), the latter with both single-moment and double-moment microphysics. I place particular emphasis on the relationship between cloud vertical structure and convective environment. I also emphasize the transition between shallow clouds and mature DCCs. The spatial domains used are the tropical oceans and the contiguous United States (CONUS), the latter of which produces frequent vigorous convection during the summer. CloudSat is used to observe DCCs, and A-Train and reanalysis data are used to represent the large-scale environment in which the clouds form. The CloudSat cloud mask and radar reflectivity profiles for CONUS cumuliform clouds (defined as clouds with a base within the planetary boundary layer) during boreal summer are first averaged and compared. Both NICAM and SP-CAM greatly underestimate the vertical growth of cumuliform clouds. Then they are sorted by three large-scale environmental variables: total precipitable

  9. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2012-01-01

    Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and summertime convection over a mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. The impact of atmospheric aerosol concentration on cloud and precipitation will be investigated.

  10. Improving representation of convective transport for scale-aware parameterization: 2. Analysis of cloud-resolving model simulations

    Science.gov (United States)

    Liu, Yi-Chin; Fan, Jiwen; Zhang, Guang J.; Xu, Kuan-Man; Ghan, Steven J.

    2015-04-01

    Following Part I, in which 3-D cloud-resolving model (CRM) simulations of a squall line and a mesoscale convective complex in midlatitude continental and tropical regions are conducted and evaluated, we examine the scale dependence of the eddy transport of water vapor, evaluate different eddy transport formulations, and improve the representation of convective transport across all scales by proposing a new formulation that more accurately represents the CRM-calculated eddy flux. CRM results show that there are strong grid-spacing dependencies of updraft and downdraft fractions regardless of altitude, cloud life stage, and geographical location. As for the eddy transport of water vapor, the updraft eddy flux is a major contributor to the total eddy flux in the lower and middle troposphere. However, downdraft eddy transport can be as large as updraft eddy transport in the lower atmosphere, especially at the mature stage of midlatitude continental convection. We show that the single-updraft approach significantly underestimates the updraft eddy transport of water vapor because it fails to account for the large internal variability of updrafts, while a single downdraft represents the downdraft eddy transport of water vapor well. We find that using as few as three updrafts can account for the internal variability of updrafts well. Based on the evaluation with the CRM-simulated data, we recommend a simplified eddy transport formulation that considers three updrafts and one downdraft. Such a formulation is similar to the conventional one but much more accurately represents the CRM-simulated eddy flux across all grid scales.
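
    The updraft/downdraft decomposition of the eddy flux can be illustrated with a toy partition of CRM columns inside one coarse grid box. The data are synthetic, and only a two-class (one updraft, one downdraft) split is shown, not the paper's multi-updraft formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
w = rng.normal(0.0, 1.0, n)                    # vertical velocity (m/s)
q = 10.0 + 0.5 * w + rng.normal(0.0, 0.2, n)   # water vapor (g/kg)

# Eddy (perturbation) quantities relative to the grid-box mean.
wp, qp = w - w.mean(), q - q.mean()
total_flux = np.mean(wp * qp)                  # total eddy flux w'q'

# Partition columns into updrafts and downdrafts; the summed class
# contributions recover the total eddy flux exactly.
flux_up = np.sum(wp[w > 0] * qp[w > 0]) / n
flux_dn = np.sum(wp[w <= 0] * qp[w <= 0]) / n
```

    Because the partition is exhaustive, the decomposition is exact; the modeling question is how few draft classes still capture the internal variability within each class.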

  11. The Neighboring Column Approximation (NCA) – A fast approach for the calculation of 3D thermal heating rates in cloud resolving models

    International Nuclear Information System (INIS)

    Klinger, Carolin; Mayer, Bernhard

    2016-01-01

    Due to computational costs, radiation is usually neglected or solved in a plane-parallel 1D approximation in today's numerical weather forecast and cloud-resolving models. We present a fast and accurate method to calculate 3D heating and cooling rates in the thermal spectral range that can be used in cloud-resolving models. The parameterization considers net fluxes across horizontal box boundaries in addition to the top and bottom boundaries. Since the largest heating and cooling rates occur inside the cloud, close to the cloud edge, the method needs to first approximation only the information whether a grid box is at the edge of a cloud or not. Therefore, in order to calculate the heating or cooling rates of a specific grid box, only the directly neighboring columns are used. Our so-called Neighboring Column Approximation (NCA) is an analytical consideration of cloud side effects which can be considered a convolution of a 1D radiative transfer result with a kernel radius of 1 grid box (5-point stencil) and which does usually not break the parallelization of a cloud-resolving model. The NCA can easily be applied to any cloud-resolving model that includes a 1D radiation scheme. Due to the neglect of horizontal transport of radiation farther away than one model column, the NCA works best for model resolutions of about 100 m or larger. In this paper we describe the method and show a set of applications to snapshots of LES cloud fields. Correction terms, gains and restrictions of the NCA are described. Comprehensive comparisons to the 3D Monte Carlo model MYSTIC and a 1D solution are shown. In realistic cloud fields, the full 3D simulation with MYSTIC shows cooling rates of up to −150 K/d (100 m resolution) while the 1D solution shows maximum cooling of only −100 K/d. The NCA is capable of reproducing the larger 3D cooling rates. The spatial distribution of the heating and cooling is improved considerably. Computational costs are only a factor of 1.5–2 higher compared to a 1D
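
    The cloud-edge test at the heart of the NCA needs only the four horizontal neighbours of each grid box (the 5-point stencil). A minimal sketch on a toy 2D cloud mask, assuming periodic boundaries for simplicity; the domain and cloud shape are illustrative only.

```python
import numpy as np

# Toy cloud mask (1 = cloudy) with a 4x4 cloud in an 8x8 periodic domain.
cloud = np.zeros((8, 8), dtype=int)
cloud[2:6, 2:6] = 1

# Count clear-sky neighbours via the four shifted copies of the mask.
clear_neighbours = np.zeros_like(cloud)
for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    clear_neighbours += (np.roll(np.roll(cloud, dy, axis=0), dx, axis=1) == 0)

# A cloudy box with at least one clear neighbour is a cloud-edge box:
# these are the boxes where a 3D correction to the 1D heating rates matters.
edge = (cloud == 1) & (clear_neighbours > 0)
```

    Because each box consults only its immediate neighbours, the stencil keeps the halo exchange of a domain-decomposed cloud-resolving model unchanged, which is why the scheme does not break parallelization.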

  12. The Role of Aerosols on Precipitation Processes: Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Li, X.; Matsui, T.

    2012-01-01

    Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops), and several types of ice particles [i.e. pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, the sub-tropics (Florida) and midlatitudes using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for these cases. It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collection growth, leading to a broader size spectrum in comparison to the high CCN case. Sensitivity tests were performed to
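
    A common layout for the 33 bins in spectral-bin schemes of this type is a mass-doubling grid, where each bin holds twice the particle mass of the previous one. The sketch below assumes that layout and an illustrative smallest-bin mass (~2 um droplet radius); neither is taken from the GCE scheme itself.

```python
import numpy as np

rho_w = 1000.0                      # kg/m^3, liquid water density
m0 = 3.35e-14                       # kg, assumed smallest-bin droplet mass
mass = m0 * 2.0 ** np.arange(33)    # mass-doubling grid: 33 bins

# Equivalent droplet radius per bin, from m = (4/3) * pi * rho_w * r^3.
radius = (3.0 * mass / (4.0 * np.pi * rho_w)) ** (1.0 / 3.0)
```

    Mass doubling means the radius doubles every three bins, so 33 bins span roughly from a few microns to several millimeters, covering cloud droplets through raindrops on one grid.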

  13. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2008-01-01

    Please see Tao et al. (2007) for a more detailed description of aerosol impacts on precipitation. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and summertime convection over a mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. The impact of atmospheric aerosol concentration on cloud and precipitation will be investigated.

  14. Toward Seamless Weather-Climate Prediction with a Global Cloud Resolving Model

    Science.gov (United States)

    2016-01-14

    The coupled model initial condition was derived based on a nudging scheme in which model prognostic variables such as U, V, SLP, geopotential height, air temperature and SST were nudged toward NCEP final analysis (FNL) fields. There were 24 ensemble forecast members each day. TCs in the model

  15. Evaluation of Cloud-Resolving and Limited Area Model Intercomparison Simulations Using TWP-ICE Observations. Part 2; Precipitation Microphysics

    Science.gov (United States)

    Varble, Adam; Zipser, Edward J.; Fridlind, Ann M.; Zhu, Ping; Ackerman, Andrew S.; Chaboureau, Jean-Pierre; Fan, Jiwen; Hill, Adrian; Shipway, Ben; Williams, Christopher

    2014-01-01

    Ten 3-D cloud-resolving model (CRM) simulations and four 3-D limited area model (LAM) simulations of an intense mesoscale convective system observed on 23-24 January 2006 during the Tropical Warm Pool-International Cloud Experiment (TWP-ICE) are compared with each other and with observations and retrievals from a scanning polarimetric radar, colocated UHF and VHF vertical profilers, and a Joss-Waldvogel disdrometer in an attempt to explain a low bias in simulated stratiform rainfall. Despite different forcing methodologies, similar precipitation microphysics errors appear in CRMs and LAMs with differences that depend on the details of the bulk microphysics scheme used. One-moment schemes produce too many small raindrops, which biases Doppler velocities low, but produces rainwater contents (RWCs) that are similar to observed. Two-moment rain schemes with a gamma shape parameter (mu) of 0 produce excessive size sorting, which leads to larger Doppler velocities than those produced in one-moment schemes but lower RWCs. Two-moment schemes also produce a convective median volume diameter distribution that is too broad relative to observations and, thus, may have issues balancing raindrop formation, collision-coalescence, and raindrop breakup. Assuming a mu of 2.5 rather than 0 for the raindrop size distribution improves one-moment scheme biases, and allowing mu to have values greater than 0 may improve excessive size sorting in two-moment schemes. Underpredicted stratiform rain rates are associated with underpredicted ice water contents at the melting level rather than excessive rain evaporation, in turn likely associated with convective detrainment that is too high in the troposphere and mesoscale circulations that are too weak. A limited domain size also prevents a large, well-developed stratiform region like the one observed from developing in CRMs, although LAMs also fail to produce such a region.
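
    The role of the shape parameter can be seen in a normalized gamma raindrop size distribution, N(D) proportional to D^mu * exp(-lambda*D) with lambda = (4 + mu)/Dm: for a fixed mass-weighted mean diameter Dm, a larger mu narrows the spectrum relative to its mean, which reduces the excessive size sorting noted above. A sketch with illustrative values (Dm and the diameter grid are assumptions):

```python
import numpy as np

D = np.linspace(0.05, 8.0, 1000)   # drop diameter grid (mm)

def trap(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def rel_dispersion(mu, Dm=1.5):
    """Std/mean of D under N(D) ~ D**mu * exp(-lam*D), lam = (4+mu)/Dm."""
    lam = (4.0 + mu) / Dm
    n = D**mu * np.exp(-lam * D)
    n = n / trap(n, D)                    # normalize to unit integral
    mean = trap(D * n, D)
    var = trap((D - mean) ** 2 * n, D)
    return np.sqrt(var) / mean
```

    Comparing mu = 0 with mu = 2.5 shows the relative dispersion shrinking, i.e., fewer very small and very large drops for the same mean size, consistent with the improvement the abstract reports for one-moment schemes.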

  16. Advances towards the development of a cloud-resolving model in South Africa

    CSIR Research Space (South Africa)

    Bopape, Mary-Jane M

    2014-09-01

    Full Text Available

  17. Evaluating Microphysics in Cloud-Resolving Models using TRMM and Ground-based Precipitation Radar Observations

    Science.gov (United States)

    Krueger, S. K.; Zulauf, M. A.; Li, Y.; Zipser, E. J.

    2005-05-01

    Global satellite datasets such as those produced by ISCCP, ERBE, and CERES provide strong observational constraints on cloud radiative properties. Such observations have been widely used for model evaluation, tuning, and improvement. Cloud radiative properties depend primarily on small, non-precipitating cloud droplets and ice crystals, yet the dynamical, microphysical, and radiative processes which produce these small particles often involve large, precipitating hydrometeors. There now exists a global dataset of tropical cloud system precipitation feature (PF) properties, collected by TRMM and produced by Steve Nesbitt, that provides additional observational constraints on cloud system properties. We are using the TRMM PF dataset to evaluate the precipitation microphysics of two simulations of deep, precipitating, convective cloud systems: one is a 29-day summertime, continental case (ARM Summer 1997 SCM IOP, at the Southern Great Plains site); the second is a tropical maritime case, the Kwajalein MCS of 11-12 August 1999 (part of a 52-day simulation). Both simulations employed the same bulk, three-ice-category microphysical parameterization (Krueger et al. 1995). The ARM simulation was executed using the UCLA/Utah 2D CRM, while the KWAJEX simulation was produced using the 3D CSU CRM (SAM). The KWAJEX simulation described above is compared with both the actual radar data and the TRMM statistics. For the Kwajalein MCS of 11 to 12 August 1999, there are research radar data available for the lifetime of the system. This particular MCS was large in size and rained heavily, but it was weak to average in measures of convective intensity compared with the 5-year TRMM sample of 108 MCSs. For the Kwajalein MCS simulation, the 20 dBZ contour is at 15.7 km and the 40 dBZ contour at 14.5 km! Of all 108 MCSs observed by TRMM, the highest value for the 40 dBZ contour is 8 km. Clearly, the high reflectivity cores are off scale compared with observed cloud systems in this area. A similar

  18. Evaluation of cloud-resolving model simulations of midlatitude cirrus with ARM and A-train observations

    Science.gov (United States)

    Muhlbauer, A.; Ackerman, T. P.; Lawson, R. P.; Xie, S.; Zhang, Y.

    2015-07-01

    Cirrus clouds are ubiquitous in the upper troposphere and still constitute one of the largest uncertainties in climate predictions. This paper evaluates cloud-resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration A-train satellites. The CRM simulations are driven with periodic boundary conditions and ARM forcing data, whereas the CSRM simulations are driven by the ERA-Interim product. Vertical profiles of temperature, relative humidity, and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds, and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in general circulation models and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward the cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. 
Despite considerable progress in observations and microphysical parameterizations, simulating

  19. A new single-moment microphysics scheme for cloud-resolving models using observed dependence of ice concentration on temperature.

    Science.gov (United States)

    Khairoutdinov, M.

    2015-12-01

    The representation of microphysics, especially ice microphysics, remains one of the major uncertainties in cloud-resolving models (CRMs). Most cloud schemes use the so-called bulk microphysics approach, in which a few moments of the hydrometeor size distributions are used as the prognostic variables. The System for Atmospheric Modeling (SAM) is a CRM that employs two such schemes: a single-moment scheme, which uses only the mass of each water phase, and a two-moment scheme, which adds the particle concentration for each hydrometeor category. Of the two, the single-moment scheme is much more computationally efficient, as it uses only two prognostic microphysics variables compared to the ten used by the two-moment scheme. The efficiency comes from a rather considerable oversimplification of the microphysical processes. For instance, only the sum of the liquid and ice cloud water is predicted, with the temperature used to diagnose the mixing ratios of the different hydrometeors. The main motivation for using such simplified microphysics has been computational efficiency, especially in the applications of SAM as the super-parameterization in global climate models. Recently, we have extended the single-moment microphysics by adding only one additional prognostic variable, which has nevertheless allowed us to separate the cloud ice from the liquid water. We made use of recent observations of ice microphysics collected in various parts of the world to parameterize several aspects of ice microphysics that have not been explicitly represented before in our single-moment scheme. For example, we use the observed broad dependence of ice concentration on temperature to diagnose the ice concentration in addition to the prognostic mass. Also, there is no artificial separation between pristine ice and snow, as often used by bulk models. Instead, we prescribe the ice size spectrum as a gamma distribution, with the distribution shape parameter controlled by the
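    A minimal sketch of this kind of closure: diagnose the ice number concentration from temperature, then solve a gamma size spectrum for its slope using the prognostic ice mass. The Cooper (1986)-style N_i(T) curve, the bulk ice density, and the shape parameter below are illustrative assumptions, not the scheme's actual choices.

    ```python
    import math

    RHO_ICE = 500.0  # assumed bulk ice density (kg m^-3)

    def ice_number_cooper(T):
        """Diagnosed ice number concentration (m^-3) from temperature T (K),
        using a Cooper (1986)-style curve as a stand-in for the observed
        N_i(T) relation: N_i [L^-1] = 5 * exp(0.304 * (273.15 - T))."""
        return 5.0 * math.exp(0.304 * (273.15 - T)) * 1.0e3  # L^-1 -> m^-3

    def gamma_slope(q_ice, n_ice, mu=0.0):
        """Slope lambda (1/m) of a gamma ice spectrum
        N(D) = N0 * D**mu * exp(-lambda*D) with spherical particle mass
        m(D) = (pi/6) * RHO_ICE * D**3, given ice mass content q_ice
        (kg m^-3) and number concentration n_ice (m^-3)."""
        c = (math.pi / 6.0) * RHO_ICE
        return (c * n_ice * math.gamma(mu + 4) /
                (q_ice * math.gamma(mu + 1))) ** (1.0 / 3.0)

    n = ice_number_cooper(253.15)            # diagnosed number at -20 C
    lam = gamma_slope(q_ice=1e-4, n_ice=n)   # 0.1 g m^-3 of cloud ice
    print(f"N_i = {n:.3g} m^-3, lambda = {lam:.3g} m^-1")
    ```

    Given mass and a temperature-diagnosed number, the spectrum is fully determined, which is how a single added prognostic variable can carry an entire ice size distribution.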

  20. An examination of two pathways to tropical cyclogenesis occurring in idealized simulations with a cloud-resolving numerical model

    Directory of Open Access Journals (Sweden)

    M. E. Nicholls

    2013-06-01

    Full Text Available Simulations are conducted with a cloud-resolving numerical model to examine the transformation of a weak incipient mid-level cyclonic vortex into a tropical cyclone. Results demonstrate that two distinct pathways are possible and that development along a particular pathway is sensitive to model physics and initial conditions. One pathway involves a steady increase of the surface winds to tropical cyclone strength as the radius of maximum winds gradually decreases. A notable feature of this evolution is the creation of small-scale lower tropospheric cyclonic vorticity anomalies by deep convective towers and subsequent merger and convergence by the low-level secondary circulation. The second pathway also begins with a strengthening low-level circulation, but eventually a significantly stronger mid-level circulation develops. Cyclogenesis occurs subsequently when a small-scale surface concentrated vortex forms abruptly near the center of the larger-scale circulation. The small-scale vortex is warm core throughout the troposphere and results in a fall in local surface pressure of a few millibars. It usually develops rapidly, undergoing a modest growth to form a small tropical cyclone. Many of the simulated systems approach or reach tropical cyclone strength prior to development of a prominent mid-level vortex so that the subsequent formation of a strong small-scale surface concentrated vortex in these cases could be considered intensification rather than genesis. Experiments are performed to investigate the dependence on the inclusion of the ice phase, radiation, the size and strength of the incipient mid-level vortex, the amount of moisture present in the initial vortex, and the sea surface temperature. Notably, as the sea surface temperature is raised, the likelihood of development along the second pathway is increased. This appears to be related to an increased production of ice. 
The sensitivity of the pathway taken to model physics and initial

  1. An Optical Lightning Simulator in an Electrified Cloud-Resolving Model to Prepare the Future Space Lightning Missions

    Science.gov (United States)

    Bovalo, Christophe; Defer, Eric; Pinty, Jean-Pierre

    2016-04-01

    The future decade will see the launch of several space missions designed to monitor the total lightning activity. Among these missions, the American (Geostationary Lightning Mapper - GLM) and European (Lightning Imager - LI) optical detectors will be onboard geostationary satellites (GOES-R and MTG, respectively). For the first time, the total lightning activity will be monitored over the full Earth disk and at a very high temporal resolution (2 and 1 ms, respectively). Missions like the French Tool for the Analysis of Radiation from lightNIng and Sprites (TARANIS) and ISS-LIS will bring complementary information in order to better understand the lightning physics and to improve weather prediction (nowcasting and forecasting). Such missions will generate a huge volume of new and original observations for which the scientific community and weather prediction centers have to be prepared. Moreover, before the launch of these missions, fundamental questions regarding the interpretation of the optical signal properties and their relation to cloud optical thickness and lightning discharge processes need to be further investigated. An innovative approach proposed here is to use the synergy existing in the French MesoNH Cloud-Resolving Model (CRM). Indeed, MesoNH is one of the only CRMs able to simulate the lifecycle of electrical charges generated within clouds through a non-inductive charging process (dependent on the 1-moment microphysical scheme). The lightning flash geometry is based on a fractal law, while the electric field is diagnosed using Gauss's law. The lightning optical simulator is linked to the electrical scheme, as the lightning radiance at 777.4 nm is a function of the lightning current, approximated by the charges neutralized along the lightning path. The scattering of this signal by hydrometeors (mainly ice particles) is also taken into account. Simulations at 1-km resolution are done over the Langmuir Laboratory (New

  2. Cloud-resolving model intercomparison of an MC3E squall line case: Part I-Convective updrafts: CRM Intercomparison of a Squall Line

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Jiwen [Pacific Northwest National Laboratory, Richland Washington USA; Han, Bin [Pacific Northwest National Laboratory, Richland Washington USA; School of Atmospheric Sciences, Nanjing University, Nanjing China; Varble, Adam [Department of Atmospheric Sciences, University of Utah, Salt Lake City Utah USA; Morrison, Hugh [National Center for Atmospheric Research, Boulder Colorado USA; North, Kirk [Department of Atmospheric and Oceanic Sciences, McGill University, Montreal Quebec USA; Kollias, Pavlos [Department of Atmospheric and Oceanic Sciences, McGill University, Montreal Quebec USA; School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook New York USA; Chen, Baojun [School of Atmospheric Sciences, Nanjing University, Nanjing China; Dong, Xiquan [Department of Hydrology and Atmospheric Sciences, University of Arizona, Tucson Arizona USA; Giangrande, Scott E. [Environmental and Climate Sciences Department, Brookhaven National Laboratory, Upton New York USA; Khain, Alexander [The Institute of the Earth Science, The Hebrew University of Jerusalem, Jerusalem Israel; Lin, Yun [Department of Atmospheric Sciences, Texas A&M University, College Station Texas USA; Mansell, Edward [NOAA/OAR/National Severe Storms Laboratory, Norman Oklahoma USA; Milbrandt, Jason A. [Meteorological Research Division, Environment and Climate Change Canada, Dorval Canada; Stenz, Ronald [Department of Atmospheric Sciences, University of North Dakota, Grand Forks North Dakota USA; Thompson, Gregory [National Center for Atmospheric Research, Boulder Colorado USA; Wang, Yuan [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena California USA

    2017-09-06

    A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research & Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes, to understand specific processes that lead to the large spread of simulated cloud and precipitation at cloud-resolving scales, with a focus of this paper on convective cores. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area than observed, but a much narrower stratiform area, with most bulk schemes overpredicting radar reflectivity. The magnitudes of the virtual potential temperature drop, pressure rise, and the peak wind speed associated with the passage of the gust front are significantly smaller compared with the observations, suggesting simulated cool pools are weaker. Simulations also overestimate the vertical velocity and Ze in convective cores as compared with observational retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes even in this strongly dynamically-driven system. The spread of updraft velocity is attributed to the combined effects of the low-level perturbation pressure gradient determined by cold pool intensity and buoyancy that is not necessarily well correlated to differences in latent heating among the simulations. Variability of updraft velocity between schemes is also related to differences in ice-related parameterizations, whereas precipitation variability increases in no-ice simulations because of scheme differences in collision-coalescence parameterizations.
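    The cold pool intensity invoked above is commonly summarized by the theoretical diagnostic C = sqrt(2 * integral of (-B) dz), the integral of the buoyancy deficit over the cold pool depth. The sketch below uses hypothetical profile values; none of the simulations' numbers are reproduced here.

    ```python
    import math

    G = 9.81  # gravitational acceleration (m s^-2)

    def cold_pool_intensity(dtheta_v, theta_v_env, z):
        """Cold pool strength C = sqrt(2 * integral of (-B) dz), with
        buoyancy B = G * dtheta_v / theta_v_env. dtheta_v is the virtual
        potential temperature deficit profile (K, negative inside the cold
        pool) on heights z (m); trapezoidal integration over the layers."""
        integral = 0.0
        for k in range(len(z) - 1):
            b_lo = G * dtheta_v[k] / theta_v_env[k]
            b_hi = G * dtheta_v[k + 1] / theta_v_env[k + 1]
            integral += -0.5 * (b_lo + b_hi) * (z[k + 1] - z[k])
        return math.sqrt(max(2.0 * integral, 0.0))

    # A hypothetical 1.5-km-deep cold pool, 3 K deficit at the surface
    # tapering to zero at its top:
    z = [0.0, 500.0, 1000.0, 1500.0]
    dth = [-3.0, -2.0, -1.0, 0.0]
    env = [300.0] * 4
    print(f"C = {cold_pool_intensity(dth, env, z):.1f} m/s")
    ```

    A weaker simulated deficit or a shallower cold pool directly lowers C, which is one way the smaller gust-front temperature drops noted above translate into weaker low-level perturbation pressure gradients.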

  3. Cloud Properties Simulated by a Single-Column Model. Part II: Evaluation of Cumulus Detrainment and Ice-phase Microphysics Using a Cloud Resolving Model

    Science.gov (United States)

    Luo, Yali; Krueger, Steven K.; Xu, Kuan-Man

    2005-01-01

    This paper is the second in a series in which kilometer-scale-resolving observations from the Atmospheric Radiation Measurement program and a cloud-resolving model (CRM) are used to evaluate the single-column model (SCM) version of the National Centers for Environmental Prediction Global Forecast System model. Part I demonstrated that kilometer-scale cirrus properties simulated by the SCM differ significantly from the cloud radar observations, while the CRM simulation reproduced most of the cirrus properties revealed by the observations. The present study evaluates, through a comparison with the CRM, the SCM's representation of detrainment from deep cumulus and of ice-phase microphysics, in an effort to better understand the findings of Part I. It is found that detrainment in the SCM occurs too infrequently and at a single level at a time, although the detrainment rate averaged over the entire simulation period is somewhat comparable to that of the CRM simulation. Relatively too much ice is sublimated when first detrained. Snow falls over too deep a layer due to the assumption that snow source and sink terms exactly balance within one time step in the SCM. These characteristics of the SCM parameterizations may explain many of the differences in cirrus properties between the SCM and the observations (or between the SCM and the CRM). A possible improvement for the SCM is the inclusion of multiple cumulus cloud types, as in the original Arakawa-Schubert scheme, together with prognostic determination of the stratiform cloud fraction and snow mixing ratio. This would allow better representation of the detrainment from deep convection, better coupling of the volume of detrained air with cloud fraction, and better representation of the snow field.

  4. An improved lightning flash rate parameterization developed from Colorado DC3 thunderstorm data for use in cloud-resolving chemical transport models

    Science.gov (United States)

    Basarab, B. M.; Rutledge, S. A.; Fuchs, B. R.

    2015-09-01

    Accurate prediction of the total lightning flash rate in thunderstorms is important for improving estimates of nitrogen oxides (NOx) produced by lightning (LNOx) from the storm scale to the global scale. In this study, flash rate parameterization schemes from the literature are evaluated against observed total flash rates for a sample of 11 Colorado thunderstorms, including nine storms from the Deep Convective Clouds and Chemistry (DC3) experiment in May-June 2012. Observed flash rates were determined using an automated algorithm that clusters very high frequency radiation sources emitted by electrical breakdown in clouds and detected by the northern Colorado lightning mapping array. Existing schemes were found to inadequately predict flash rates and were updated based on observed relationships between flash rate and simple storm parameters, yielding significant improvement. The most successful updated scheme predicts flash rate from the radar-derived mixed-phase 35 dBZ echo volume. Parameterizations based on metrics of updraft intensity were also updated but were found to be less reliable predictors of flash rate for this sample of storms. The 35 dBZ volume scheme was tested on a data set containing radar reflectivity volume information for thousands of isolated convective cells in different regions of the U.S. This scheme predicted flash rates to within 5.8% of observed flash rates on average. These results encourage the application of this scheme to larger radar data sets and its possible implementation into cloud-resolving models.
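    A scheme of the form described above, flash rate diagnosed from the mixed-phase 35 dBZ echo volume, can be sketched as an ordinary least-squares linear fit. The (volume, flash rate) pairs and fitted coefficients below are invented for illustration; the paper's actual coefficients, fit to the 11-storm Colorado sample, are not reproduced here.

    ```python
    def fit_linear(x, y):
        """Closed-form ordinary least-squares fit y ~ m*x + c."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        m = sxy / sxx
        return m, my - m * mx

    # Hypothetical training pairs: mixed-phase 35 dBZ echo volume (km^3)
    # versus observed total flash rate (flashes/min).
    vols = [200.0, 500.0, 900.0, 1400.0, 2000.0]
    rates = [5.0, 14.0, 27.0, 41.0, 60.0]

    slope, intercept = fit_linear(vols, rates)
    flash_rate = lambda v: slope * v + intercept  # diagnosed flash rate
    print(f"f(V) = {slope:.4f} * V + {intercept:.2f}")
    print(f"predicted rate at V = 1000 km^3: {flash_rate(1000.0):.1f} /min")
    ```

    In a cloud-resolving chemical transport model, the diagnosed flash rate would then feed an LNOx production term per flash.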

  5. Evolution of Precipitation Structure During the November DYNAMO MJO Event: Cloud-Resolving Model Intercomparison and Cross Validation Using Radar Observations

    Science.gov (United States)

    Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong

    2018-04-01

    The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validation. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also reveal common deficiencies in the CRM simulations: they underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validation with multiple radars and models also enables quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than the radar/model comparisons, indicating robust model performance in this respect. It is further shown that well-validated model simulations can be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.

  6. Projection of the change in future extremes over Japan using a cloud-resolving model: (2) Precipitation Extremes and the results of the NHM-1km experiments

    Science.gov (United States)

    Kanada, S.; Nakano, M.; Nakamura, M.; Hayashi, S.; Kato, T.; Kurihara, K.; Sasaki, H.; Uchiyama, T.; Aranami, K.; Honda, Y.; Kitoh, A.

    2008-12-01

    In order to study changes in the regional climate in the vicinity of Japan during the summer rainy season due to global warming, experiments with a semi-cloud-resolving non-hydrostatic model with a horizontal resolution of 5 km (NHM-5km) have been conducted from June to October by nesting within the results of 10-year time-integrated experiments using a hydrostatic atmospheric general circulation model with a horizontal grid of 20 km (AGCM-20km: TL959L60) for the present and future climates up to the year 2100. A non-hydrostatic model developed by the Japan Meteorological Agency (JMA) (JMA-NHM; Saito et al. 2001, 2006) was adopted. Detailed descriptions of the NHM-5km are given in the poster of Nakano et al. Our results show that rainy days over most of the Japanese Islands will decrease in June and July and increase in August and September in the future climate. In particular, remarkable increases in intense precipitation (above 150-300 mm/day) are projected from the present to the future climate. The 90th percentiles of the regional largest values among maximum daily precipitations (R-MDPs) grow from 156 mm/day in the present climate to 207 mm/day in the future climate. It is well known that the horizontal distribution of precipitation, especially heavy rainfall in the vicinity of Japan, depends strongly on the topography. Therefore, higher-resolution experiments with a cloud-resolving model with a horizontal resolution of 1 km (NHM-1km) are one-way nested within the results of the NHM-5km. The basic frame and design of the NHM-1km are the same as those of the NHM-5km, but the topography is finer and no cumulus parameterization is used in the NHM-1km experiments. The NHM-1km, which treats convection and cloud microphysics explicitly, can represent not only horizontal distributions of rainfall in detail but also the 3-dimensional structures of meso-beta-scale convective systems (MCSs). 
Because of the limitation of computation resources, only heavy rainfall events that rank in top

  7. Ensemble cloud-resolving modelling of a historic back-building mesoscale convective system over Liguria: the San Fruttuoso case of 1915

    Science.gov (United States)

    Parodi, Antonio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco; Boni, Giorgio

    2017-05-01

    Highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood-producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in frequency or intensity of these types of events as increased atmospheric temperatures generally support increases in water vapour content. However, analyses of the historical record do not provide a univocal answer, but these are likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations with 1 km horizontal grid spacing of a historic extreme event that occurred over Liguria: the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs that show strong convergence over the Ligurian Sea (17 out of 56 members) as these runs are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields that are consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the reanalysis members may not be adequately representing the amount of cool air over the Po Plain outflowing into the Ligurian Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to reanalysis products, unconventional data, such as historical meteorological bulletins, newspapers, and even photographs, can be very valuable sources of knowledge in the reconstruction of past extreme events.

  8. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    Science.gov (United States)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation in ocean mixed-layer heat and salt budgets. The model couples a cloud-resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing, in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4 C, respectively. Without freshwater effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The freshwater flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, the freshwater flux exhibits larger spatial fluctuations than the surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield a high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.
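    The relative weight of salinity and temperature in mixed-layer density differences can be checked with a linearized equation of state. The expansion and contraction coefficients below are typical textbook values for warm tropical surface water, not the coupled model's actual coefficients.

    ```python
    ALPHA = 2.5e-4   # thermal expansion coefficient (1/K), typical warm-pool value
    BETA = 7.6e-4    # haline contraction coefficient (1/PSU)
    RHO0 = 1022.0    # reference seawater density (kg m^-3)

    def density_anomaly(dT, dS):
        """Linearized equation of state: d(rho) = RHO0 * (BETA*dS - ALPHA*dT)."""
        return RHO0 * (BETA * dS - ALPHA * dT)

    # Mixed-layer contrasts of the size quoted in the abstract:
    from_salt = density_anomaly(0.0, 0.3)    # 0.3 PSU salinity difference
    from_temp = density_anomaly(-0.4, 0.0)   # 0.4 C cooling
    print(f"salinity term: {from_salt:.3f} kg/m^3")
    print(f"temperature term: {from_temp:.3f} kg/m^3")
    ```

    With these coefficients the 0.3 PSU contrast outweighs the 0.4 C contrast, consistent with the abstract's conclusion that salinity dominates the mixed-layer density differences.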

  9. Precipitation Processes developed during ARM (1997), TOGA COARE(1992), GATE(1 974), SCSMEX(1998) and KWAJEX(1999): Consistent 2D and 3D Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, W.-K.; Shie, C.-H.; Simpson, J.; Starr, D.; Johnson, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research. The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D simulations of these same cases. The reason for the strong similarity between the 2D and 3D CRM simulations is that the observed large-scale advective tendencies of potential temperature, water vapor mixing ratio, and horizontal momentum were used as the main forcing in both the 2D and 3D models. Interestingly, the 2D and 3D versions of the CRMs used at CSU and the U.K. Met Office showed significant differences in the rainfall and cloud statistics for three ARM cases. The major objectives of this project are to calculate and examine: (1) the surface energy and water budgets; (2) the precipitation processes in the convective and stratiform regions; (3) the cloud upward and downward mass fluxes in the convective and stratiform regions; (4) cloud characteristics such as size, updraft intensity, and lifetime; and (5) the entrainment and detrainment rates associated with clouds and cloud systems that developed in TOGA COARE, GATE, SCSMEX, ARM, and KWAJEX. Of special note is that the analyzed (model-generated) data sets are all produced by the same current version of the GCE model, i.e., with consistent model physics and configurations. Trajectory analyses and inert tracer calculations will be conducted to identify the differences and similarities in the organization of convection between simulated 2D and 3D cloud systems.

  10. Toward Quantitative Estimation of the Effect of Aerosol Particles in the Global Climate Model and Cloud Resolving Model

    Science.gov (United States)

    Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.

    2005-05-01

    Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and the Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite, there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach, which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, a cloud fraction and cloud-top height retrieval, stratospheric column estimates derived from a data assimilation approach, and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: a short summary of the retrieval approach and results; comparisons with other retrievals; comparisons with global and regional-scale models; OMI-SCIAMACHY and SCIAMACHY-GOME comparisons; validation with independent measurements; and trend studies of NO2 for the past 11 years.
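    The second retrieval step described above amounts to subtracting the stratospheric slant column and dividing by a tropospheric air mass factor (AMF). A minimal sketch, with invented column values chosen only for illustration:

    ```python
    def tropospheric_vcd(slant_total, slant_strat, amf_trop):
        """Tropospheric vertical column density:
        V_trop = (S_total - S_strat) / AMF_trop,
        where the stratospheric slant column comes from data assimilation
        and the tropospheric AMF from collocated model profiles plus the
        cloud fraction and cloud-top height retrieval."""
        return (slant_total - slant_strat) / amf_trop

    # Illustrative numbers (molecules/cm^2), not actual retrieval values:
    v = tropospheric_vcd(slant_total=8.0e15, slant_strat=5.0e15, amf_trop=1.5)
    print(f"tropospheric NO2 column: {v:.2e} molecules/cm^2")
    ```

    Because the AMF depends on the assumed NO2 profile and cloud parameters, errors in those inputs propagate directly into the retrieved tropospheric column, which is why the same second step can be reused across instruments once those inputs are harmonized.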

  11. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 2D, Semi-3D and 3D Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique (i.e., is a 2D or semi-3D CRM appropriate for the super-parameterization?); (2) to calculate and examine the surface energy (especially radiation) and water budgets; and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  12. Comparison of mean properties of simulated convection in a cloud-resolving model with those produced by cumulus parameterization

    Energy Technology Data Exchange (ETDEWEB)

    Dudhia, J.; Parsons, D.B. [National Center for Atmospheric Research, Boulder, CO (United States)

    1996-04-01

    An Intensive Observation Period (IOP) of the Atmospheric Radiation Measurement (ARM) Program took place at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site from June 16-26, 1993. The National Center for Atmospheric Research (NCAR)/Penn State Mesoscale Model (MM5) has been used to simulate this period on a 60-km domain with 20- and 6.67-km nests centered on Lamont, Oklahoma. Simulations are being run with data assimilation by the nudging technique to incorporate upper-air and surface data from a variety of platforms. The model maintains dynamical consistency between the fields, while the data correct for model biases that may occur during long-term simulations and provide boundary conditions. For the work reported here the Mesoscale Atmospheric Prediction System (MAPS) of the National Ocean and Atmospheric Administration (NOAA) 3-hourly analyses were used to drive the 60-km domain while the inner domains were unforced. A continuous 10-day period was simulated.

  13. Effects of sea surface temperature, cloud radiative and microphysical processes, and diurnal variations on rainfall in equilibrium cloud-resolving model simulations

    International Nuclear Information System (INIS)

    Jiang Zhe; Li Xiao-Fan; Zhou Yu-Shu; Gao Shou-Ting

    2012-01-01

    The effects of sea surface temperature (SST), cloud radiative and microphysical processes, and diurnal variations on rainfall statistics are documented with grid data from two-dimensional equilibrium cloud-resolving model simulations. For rain rates higher than 3 mm·h⁻¹, water vapor convergence prevails. The rainfall amount decreases with the decrease of SST from 29 °C to 27 °C, with the inclusion of the diurnal variation of SST, or with the exclusion of the microphysical effects of ice clouds and the radiative effects of water clouds; these decreases are primarily associated with decreases in water vapor convergence. However, the rainfall amount increases with the increase of SST from 29 °C to 31 °C, with the exclusion of the diurnal variation of the solar zenith angle, and with the exclusion of the radiative effects of ice clouds; these increases are primarily related to increases in water vapor convergence. For rain rates of less than 3 mm·h⁻¹, water vapor divergence prevails. Unlike the statistics for rain rates higher than 3 mm·h⁻¹, the decrease of SST from 29 °C to 27 °C and the exclusion of the radiative effects of water clouds in the presence of the radiative effects of ice clouds increase the rainfall amount, corresponding to the suppression of water vapor divergence. The exclusion of the microphysical effects of ice clouds decreases the rainfall amount, corresponding to the enhancement of water vapor divergence. The rainfall amount is less sensitive to the increase of SST from 29 °C to 31 °C and to the radiative effects of water clouds in the absence of the radiative effects of ice clouds.
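The 3 mm·h⁻¹ partitioning of grid data described above reduces to a simple conditional composite. A minimal sketch, with illustrative array names:

```python
import numpy as np

def composite_by_rain_rate(rain_rate, vapor_conv, threshold=3.0):
    """Composite water-vapor convergence over grid points above and
    below a rain-rate threshold (3 mm/h in the study). A positive mean
    in a class indicates convergence prevails there; a negative mean,
    divergence. Array names and the plain mean are illustrative."""
    heavy = np.asarray(rain_rate) > threshold
    conv = np.asarray(vapor_conv)
    return {"heavy": float(conv[heavy].mean()),
            "light": float(conv[~heavy].mean())}
```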

  14. A cloud chemistry module for the 3-D cloud-resolving mesoscale model Meso-NH with application to idealized cases

    Directory of Open Access Journals (Sweden)

    M. Leriche

    2013-08-01

    A complete chemical module has been developed for use in the Meso-NH three-dimensional cloud-resolving mesoscale model. This module includes gaseous- and aqueous-phase chemical reactions, which are analysed by a pre-processor that generates the Fortran90 code automatically. The kinetic solver is based on a Rosenbrock algorithm, which is robust and accurate for integrating stiff systems, especially multiphase chemistry. The exchange of chemical species between the gas phase and cloud droplets and raindrops is computed kinetically by mass transfers, allowing non-equilibrium between the gas and condensed phases. Microphysical transfers of chemical species are considered for the various cloud microphysics schemes available, which are based on one-moment or two-moment schemes. The pH of the droplets and of the raindrops is diagnosed separately as the root of a high-order polynomial equation. The chemical concentrations in the ice phase are modelled in a single phase encompassing the two categories of precipitating ice particles (snow and graupel) of the microphysical scheme. The only process transferring chemical species into ice is retention during freezing or riming of liquid hydrometeors. Three idealized simulations are reported, which highlight the sensitivity of scavenging efficiency to the choice of the microphysical scheme and the retention coefficient in the ice phase. A two-dimensional warm, shallow convection case is used to compare the impact of the microphysical schemes on the temporal evolution and rates of acid precipitation. Acid wet deposition rates are shown to be overestimated when a one-moment microphysics scheme is used compared to a two-moment scheme. The difference is induced by a better prediction of raindrop radius and raindrop number concentration in the latter scheme.
A two-dimensional mixed-phase squall line and a three-dimensional mixed-phase supercell were simulated to test the sensitivity of cloud vertical transport to
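The pH diagnosis mentioned in the abstract, a root of a polynomial arising from the droplet charge balance, can be illustrated for a simple CO2-plus-strong-acid system. The equilibrium constants below are textbook values and the chemistry is far simpler than Meso-NH's actual mechanism; this is only a sketch of the numerical idea.

```python
import numpy as np

def droplet_ph(p_co2=4.0e-4, c_strong_acid=0.0,
               k_h=3.4e-2, k1=4.3e-7, k2=4.7e-11, kw=1.0e-14):
    """Diagnose droplet pH as the physical root of a charge-balance
    polynomial (illustrative CO2 + strong-acid system; constants are
    textbook values in M and atm, not Meso-NH's)."""
    c = k_h * p_co2  # dissolved CO2 from Henry's law [M]
    # Charge balance H = Kw/H + K1*c/H + 2*K1*K2*c/H**2 + Ca,
    # multiplied through by H**2, gives a cubic in H:
    coeffs = [1.0, -c_strong_acid, -(kw + k1 * c), -2.0 * k1 * k2 * c]
    h = max(r.real for r in np.roots(coeffs)
            if r.real > 0.0 and abs(r.imag) < 1e-9)
    return -np.log10(h)
```

Pure water in equilibrium with roughly 400 ppm CO2 comes out near pH 5.6, the familiar natural-rain value; adding a strong acid lowers it.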

  15. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 2D, Semi-3D and 3D Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, W.-K.; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR) and at NASA Goddard Space Flight Center. At Goddard, the 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE, GATE, SCSMEX, ARM, and KWAJEX using a 512 by 512 km domain (with 2-km resolution). The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D GCE model simulations. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique, (2) to calculate and examine the surface energy (especially radiation) and water budgets, and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  16. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 2D, Semi-3D and 3D Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, W-K.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR) and at NASA Goddard Space Flight Center. At Goddard, the 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE, SCSMEX and KWAJEX using a 512 by 512 km domain (with 2-km resolution). The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D GCE model simulations. The reason for the strong similarity between the 2D and 3D CRM simulations is that the same observed large-scale advective tendencies of potential temperature, water vapor mixing ratio, and horizontal momentum were used as the main forcing in both the 2D and 3D models. Interestingly, the 2D and 3D versions of the CRM used at CSU showed significant differences in the rainfall and cloud statistics for three ARM cases. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique, (2) to calculate and examine the surface energy (especially radiation) and water budgets, and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  17. Clausius-Clapeyron Scaling of Convective Available Potential Energy (CAPE) in Cloud-Resolving Simulations

    Science.gov (United States)

    Seeley, J.; Romps, D. M.

    2015-12-01

    Recent work by Singh and O'Gorman has produced a theory for convective available potential energy (CAPE) in radiative-convective equilibrium. In this model, the atmosphere deviates from a moist adiabat, and therefore has positive CAPE, because entrainment causes evaporative cooling in cloud updrafts, thereby steepening their lapse rate. This has led to the proposal that CAPE increases with global warming because the strength of evaporative cooling scales according to the Clausius-Clapeyron (CC) relation. However, CAPE could also change due to changes in cloud buoyancy and in the entrainment rate, both of which could vary with global warming. To test the relative importance of CC scaling of evaporative cooling, changes in cloud buoyancy, and changes in the entrainment rate, we subject a cloud-resolving model to a suite of natural (and unnatural) forcings. We find that CAPE changes are primarily driven by changes in the strength of evaporative cooling; the effects of changes in the entrainment rate and cloud buoyancy are comparatively small. This builds support for CC scaling of CAPE.
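CAPE itself is the vertically integrated positive parcel buoyancy. A minimal sketch, assuming the (virtual) temperature profiles of the parcel and environment have already been computed elsewhere (e.g. from an entraining-plume model like the one discussed above):

```python
import numpy as np

G = 9.81  # gravitational acceleration [m s^-2]

def cape(z, t_parcel, t_env):
    """CAPE [J kg^-1] as the vertical integral of positive parcel
    buoyancy g*(Tp - Te)/Te, evaluated with the trapezoidal rule.
    The profiles stand in for virtual temperatures; negative-buoyancy
    layers contribute nothing (they belong to CIN, not CAPE)."""
    buoy = G * (np.asarray(t_parcel) - np.asarray(t_env)) / np.asarray(t_env)
    layer = 0.5 * (buoy[:-1] + buoy[1:]) * np.diff(z)
    return float(np.sum(np.where(layer > 0.0, layer, 0.0)))
```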

  18. Final Report to the U.S. Department of Energy for studies of Evaluation of Turbulence Parameterizations for Cloud-Resolving Models

    Energy Technology Data Exchange (ETDEWEB)

    Randall, David A. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Atmospheric Science; Cheng, Anning [Science Systems and Applications, Inc. (SSAI), Lanham, MD (United States); NASA Langley Research Center, Hampton, VA (United States); Ghan, Steve [Science Systems and Applications, Inc. (SSAI), Lanham, MD (United States); NASA Langley Research Center, Hampton, VA (United States); Khairoutdinov, Marat [Science Systems and Applications, Inc. (SSAI), Lanham, MD (United States); NASA Langley Research Center, Hampton, VA (United States); Larson, Vince [Science Systems and Applications, Inc. (SSAI), Lanham, MD (United States); NASA Langley Research Center, Hampton, VA (United States); Moeng, Chin-Hoh [Science Systems and Applications, Inc. (SSAI), Lanham, MD (United States); NASA Langley Research Center, Hampton, VA (United States)

    2015-07-27

    The intermediately-prognostic higher-order turbulence closure (IPHOC) introduces a joint double-Gaussian distribution of liquid water potential temperature (θl), total water mixing ratio (qt), and vertical velocity (w) to represent skewed turbulence circulations. The distribution is inferred from the first-, second-, and third-order moments of the variables given above, and is used to diagnose cloud fraction and grid-mean liquid water mixing ratio, as well as the buoyancy and fourth-order terms in the equations describing the evolution of the second- and third-order moments. Only three third-order moments (those of θl, qt, and w) are predicted in the IPHOC.
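The idea of diagnosing cloud fraction and grid-mean liquid water from an assumed PDF can be illustrated with a single Gaussian in total water. IPHOC's actual closure uses a joint double Gaussian of θl, qt, and w, so this is only a simplified analogue with illustrative variable names.

```python
import math

def cloud_fraction(qt_mean, qs, sigma_s):
    """Gaussian PDF cloud-fraction diagnosis: the cloudy fraction is
    the part of the assumed total-water distribution exceeding
    saturation qs, cf = 0.5*(1 + erf(Q1/sqrt(2))) with
    Q1 = (qt_mean - qs)/sigma_s. A single-Gaussian sketch of the
    diagnosis IPHOC performs with a double Gaussian."""
    q1 = (qt_mean - qs) / sigma_s
    cf = 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))
    # grid-mean liquid water: sigma_s * (Q1*cf + standard-normal pdf at Q1)
    ql = sigma_s * (q1 * cf + math.exp(-0.5 * q1 * q1) / math.sqrt(2.0 * math.pi))
    return cf, ql
```

At exact grid-mean saturation the scheme returns a cloud fraction of one half; well above saturation the liquid water tends to the all-or-nothing value qt_mean - qs.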

  19. Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results

    Science.gov (United States)

    Lin, W.; Liu, Y.; Song, H.; Endo, S.

    2011-12-01

    Parametric representations of cloud/precipitation processes must still be adopted in climate simulations, even at increasingly high spatial resolution or within emerging adaptive-mesh frameworks, and it is becoming ever more critical that such parameterizations be scale-aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy for carrying out long-term 3D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics, and the growth of model errors. Our approach at least partly accounts for hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and for the limitations imposed by the domain-wide uniform forcing used in conventional cloud-system-resolving model simulations. Conventional and probabilistic verification approaches are first employed for selected cases to optimize the model's capability to faithfully reproduce the observed means and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.

  20. Feasibility of performing high-resolution cloud-resolving simulations of historic extreme events: The San Fruttuoso (Liguria, Italy) case of 1915.

    Science.gov (United States)

    Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco

    2017-04-01

    Recent studies show that highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood-producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in the frequency or intensity of these types of events, as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since these are likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations, with 1 km horizontal grid spacing, of a historic extreme event that occurred over Liguria (Italy): the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs, as these are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors in the heaviest rain and strongest convergence areas imply that the Reanalysis members may not be adequately representing the amount of cool air over the Po Plain outflowing into the Liguria Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to Reanalysis products, unconventional data, such as historical meteorological bulletins, newspapers and even photographs, can be very valuable sources of knowledge in the reconstruction of past extreme events.

  1. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems, where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-land surface interactions, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented, along with a discussion of how the multi-satellite simulator can be used to improve the representation of precipitation processes.

  2. From Global to Cloud Resolving Scale: Experiments with a Scale- and Aerosol-Aware Physics Package and Impact on Tracer Transport

    Science.gov (United States)

    Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.

    2017-12-01

    A summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL) will be presented on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, and shallow and congestus-type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud-resolving, to look at the impact on scalar transport and numerical weather prediction.

  3. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems, where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-land surface interactions, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes.
Also how to

  4. Diurnal variation of summer precipitation over the Tibetan Plateau. A cloud-resolving simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jianyu; Zhang, Bing; Wang, Minghuan [China Meteorological Administration, Wuhan (China). Wuhan Inst. of Heavy Rain; Wang, Huijuan [Weather Modification Office of Hubei Province, Wuhan (China)

    2012-07-01

    In this study, the Weather Research and Forecasting model was used to simulate the diurnal variation in summer precipitation over the Tibetan Plateau (TP) at a cloud-resolving scale. Comparison with TRMM precipitation data shows that the model simulates the diurnal rainfall cycle well, with an overall late-afternoon precipitation maximum in the central TP and a nighttime maximum along the southern edge. The simulated diurnal variations in regional circulation and thermodynamics correspond well with the precipitation diurnal cycles in the central TP and along its southern edge, respectively. A possible mechanism responsible for the nocturnal precipitation maximum along the southern edge is proposed, indicating the importance of the TP in regulating the regional circulation and precipitation. (orig.)

  5. Edouard's (2014) Intensification: An Investigation of Precipitation and Thermodynamic Symmetrization Using a Cloud-Resolving Ensemble

    Science.gov (United States)

    Alvey, G., III; Zipser, E. J.

    2017-12-01

    Literature over the past 10 years has provided conflicting views about the relative importance of precipitation symmetry and convective intensity for tropical cyclone intensification. While several modeling studies (Braun et al. 2006, Guimond et al. 2010, Molinari et al. 2013, Rogers et al. 2013, 2015) have favored intense deep convection, satellite-based composite studies have offered a differing pathway toward tropical cyclone intensification that emphasizes shallow to moderate precipitation (Zagrodnik and Jiang 2014, Tao and Jiang 2015, Alvey et al. 2015). This has left fundamental questions unanswered regarding the relationships between precipitation and TC intensity change: What are the dominant precipitation types, their spatial distributions, and the timing of these features with respect to intensification? And what causes precipitation to symmetrize and increase in the upshear quadrants? One potentially important process, the humidification of the upshear quadrants, has been identified as occurring nearly coincident with increased precipitation symmetry prior to and during Edouard's (2014) intensification (Zawislak et al. 2016). While observations from the Global Hawk and P-3 provided important snapshots throughout the life cycle of Edouard (2014), numerical simulations complement them and reveal in more detail the processes behind these relationships by filling a 48-hour airborne observational gap during a crucial period of intensification between 12-14 Sept. We use a high-resolution, full-physics ensemble of Edouard (2014) simulated by the Weather Research and Forecasting (WRF) model, Advanced Research WRF (ARW; Skamarock et al., 2008). We deem the quantification of azimuthal variations, with a focus on the shear-relative quadrants, particularly important, especially early in intensification when thermodynamic and precipitation distributions tend to be more asymmetric. Using a water vapor budget and trajectories we examine whether

  6. The Microphysical Properties of Convective Precipitation Over the Tibetan Plateau by a Subkilometer Resolution Cloud-Resolving Simulation

    Science.gov (United States)

    Gao, Wenhua; Liu, Liping; Li, Jian; Lu, Chunsong

    2018-03-01

    The microphysical properties of convective precipitation over the Tibetan Plateau are unique because of the extremely high topography and special atmospheric conditions. In this study, ground-based cloud radar and disdrometer observations, as well as high-resolution Weather Research and Forecasting (WRF) simulations with the Chinese Academy of Meteorological Sciences (CAMS) microphysics and four other microphysical schemes, are used to investigate the microphysics and precipitation mechanisms of a convection event on 24 July 2014. The WRF-CAMS simulation reasonably reproduces the spatial distribution of 24-hr accumulated rainfall, although the temporal evolution of the rain rate is delayed by 1-3 hr. The modeled reflectivity shares common features with the cloud radar observations. The simulated raindrop size distributions show that more small- and large-size raindrops are produced as the rain rate increases, suggesting that a variable shape parameter should be used in the size distribution. Results show that abundant supercooled water exists above the freezing level through condensation of water vapor. The prevailing ice-crystal microphysical processes are depositional growth and autoconversion of ice crystals to snow. The dominant source term for snow/graupel is riming of supercooled water. Sedimentation of graupel plays a vital role in the formation of precipitation, whereas melting of snow is rather small and quite different from that in other regions. Furthermore, water vapor budgets suggest that the surface moisture flux is the principal source of water vapor and that self-circulation of moisture occurs at the beginning of convection, while the total moisture flux convergence determines condensation and precipitation during the convective process over the Tibetan Plateau.
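Why a variable shape parameter matters can be seen from the moments of a gamma raindrop size distribution. A minimal sketch; the functional form below is the standard gamma DSD, not necessarily the exact formulation of the schemes compared in the study.

```python
import math

def gamma_dsd_moment(n0, lam, mu, k):
    """k-th moment of the gamma raindrop size distribution
    N(D) = n0 * D**mu * exp(-lam * D):
        M_k = n0 * Gamma(mu + k + 1) / lam**(mu + k + 1).
    M_0 is the number concentration and M_3 is proportional to rain
    water content, so even with both fixed, the tails (and hence e.g.
    radar reflectivity ~ M_6) still depend on the shape parameter mu."""
    return n0 * math.gamma(mu + k + 1.0) / lam ** (mu + k + 1.0)
```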

  7. The ARM-GCSS Intercomparison Study of Single-Column Models and Cloud System Models

    International Nuclear Information System (INIS)

    Cederwall, R.T.; Rodriques, D.J.; Krueger, S.K.; Randall, D.A.

    1999-01-01

    The Single-Column Model (SCM) Working Group (WG) and the Cloud Working Group (CWG) in the Atmospheric Radiation Measurement (ARM) Program have begun a collaboration with the GEWEX Cloud System Study (GCSS) WGs. The forcing data sets derived from the special ARM radiosonde measurements made during the SCM Intensive Observation Periods (IOPs), the wealth of cloud and related data sets collected by the ARM Program, and the ARM infrastructure support of the SCM WG are of great value to GCSS. In return, GCSS brings the efforts of an international group of cloud-system modelers to bear on ARM data sets and ARM-related scientific questions. The first major activity of the ARM-GCSS collaboration is a model intercomparison study involving SCMs and cloud system models (CSMs), also known as cloud-resolving or cloud-ensemble models. The SCM methodologies developed in the ARM Program have matured to the point where an intercomparison will help identify the strengths and weaknesses of the various approaches. CSM simulations will bring much additional information about clouds for evaluating the cloud parameterizations used in the SCMs. CSMs and SCMs have been compared successfully in previous GCSS intercomparison studies for tropical conditions. The ARM Southern Great Plains (SGP) site offers an opportunity for GCSS to test their models in continental, mid-latitude conditions. The Summer 1997 SCM IOP has been chosen since it provides a wide range of summertime weather events that will be a challenging test of these models.

  8. Simulating moist convection with a quasi-elastic sigma coordinate model

    CSIR Research Space (South Africa)

    Bopape, Mary-Jane M

    2012-09-01

    Cloud Resolving Models (CRMs) employ microphysics parameterisations which are grouped into bin and bulk approaches. Bulk Microphysics Parameterisation (BMP) schemes specify a functional form for the particle distribution and predict one or more...

  9. Use of ARM Data to address the Climate Change Further Development and Applications of A Multi-scale Modeling Framework

    Energy Technology Data Exchange (ETDEWEB)

    David A. Randall; Marat Khairoutdinov

    2007-12-14

    The Colorado State University (CSU) Multi-scale Modeling Framework (MMF) is a new type of general circulation model (GCM) that replaces the conventional parameterizations of convection, clouds and boundary layer with a cloud-resolving model (CRM) embedded into each grid column. The MMF that we have been working with is a “super-parameterized” version of the Community Atmosphere Model (CAM). As reported in the publications listed below, we have done extensive work with the model. We have explored the MMF’s performance in several studies, including an AMIP run and a CAPT test, and we have applied the MMF to an analysis of climate sensitivity.

  10. ARM/GCSS/SPARC TWP-ICE CRM Intercomparison Study

    Science.gov (United States)

    Fridlind, Ann; Ackerman, Andrew; Petch, Jon; Field, Paul; Hill, Adrian; McFarquhar, Greg; Xie, Shaocheng; Zhang, Minghua

    2010-01-01

    Specifications are provided for running a cloud-resolving model (CRM) and submitting results in a standardized format for inclusion in an intercomparison study and archiving for public access. The simulated case study is based on measurements obtained during the 2006 Tropical Warm Pool - International Cloud Experiment (TWP-ICE) led by the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program. The modeling intercomparison study is based on objectives developed in concert with the Stratospheric Processes And their Role in Climate (SPARC) program and the GEWEX Cloud System Study (GCSS) program. The Global Energy and Water Cycle Experiment (GEWEX) is a core project of the World Climate Research Programme (WCRP).

  11. A Numerical Study of Vortex and Precipitating Cloud Merging in Middle Latitudes

    Institute of Scientific and Technical Information of China (English)

    PING Fan; LUO Zhe-Xian; JU Jian-Hua

    2006-01-01

    We focus on the merging of precipitating clouds associated with vortex merging. The vortex and precipitating cloud merging are simulated with a cloud-resolving model from 0000 21 July to 1800 23 July 2003. The results show that the model simulates the vortex circulation associated with precipitating clouds well. They also show that vortex merging follows precipitating cloud merging, although the vortices exhibit spatial and temporal differences. The convection vorticity vector is introduced to describe the merging processes. Two merging cases are identified during the 42-h simulation and are studied.

  12. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  13. Why do general circulation models overestimate the aerosol cloud lifetime effect? A case study comparing CAM5 and a CRM

    Science.gov (United States)

    Zhou, Cheng; Penner, Joyce E.

    2017-01-01

    Observation-based studies have shown that the aerosol cloud lifetime effect or the increase of cloud liquid water path (LWP) with increased aerosol loading may have been overestimated in climate models. Here, we simulate shallow warm clouds on 27 May 2011 at the Southern Great Plains (SGP) measurement site established by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program using a single-column version of a global climate model (Community Atmosphere Model or CAM) and a cloud resolving model (CRM). The LWP simulated by CAM increases substantially with aerosol loading while that in the CRM does not. The increase of LWP in CAM is caused by a large decrease of the autoconversion rate when cloud droplet number increases. In the CRM, the autoconversion rate is also reduced, but this is offset or even outweighed by the increased evaporation of cloud droplets near the cloud top, resulting in an overall decrease in LWP. Our results suggest that climate models need to include the dependence of cloud top growth and the evaporation/condensation process on cloud droplet number concentrations.
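The strong sensitivity of autoconversion to droplet number that drives the CAM response can be illustrated with a widely used bulk scheme. The abstract does not state which parameterization CAM employs here; the Khairoutdinov and Kogan (2000) formula below is used purely as a representative example of the steep inverse dependence on droplet number.

```python
def kk2000_autoconversion(qc, nc):
    """Khairoutdinov & Kogan (2000) warm-rain autoconversion rate
    (kg kg^-1 s^-1): dqr/dt = 1350 * qc**2.47 * Nc**-1.79,
    with qc the cloud water mixing ratio (kg/kg) and Nc the cloud droplet
    number concentration (cm^-3)."""
    return 1350.0 * qc**2.47 * nc**-1.79

# same cloud water, tenfold increase in droplet number (polluted case)
clean = kk2000_autoconversion(5e-4, 50.0)
polluted = kk2000_autoconversion(5e-4, 500.0)
print(polluted / clean)  # 10**-1.79, i.e. ~0.016: rain formation suppressed
```

With rain formation suppressed by a factor of ~60, cloud water accumulates, which is exactly the LWP increase the abstract describes; the CRM result shows why this one term should not be considered in isolation.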

  14. Low Cloud Feedback to Surface Warming in the World's First Global Climate Model with Explicit Embedded Boundary Layer Turbulence

    Science.gov (United States)

    Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Wyant, M. C.; Khairoutdinov, M.; Singh, B.

    2017-12-01

    Biases and parameterization formulation uncertainties in the representation of boundary layer clouds remain a leading source of possible systematic error in climate projections. Here we show the first results of cloud feedback to +4K SST warming in a new experimental climate model, the "Ultra-Parameterized (UP)" Community Atmosphere Model, UPCAM. We have developed UPCAM as an unusually high-resolution implementation of cloud superparameterization (SP) in which a global set of cloud resolving arrays is embedded in a host global climate model. In UP, the cloud-resolving scale includes sufficient internal resolution to explicitly generate the turbulent eddies that form marine stratocumulus and trade cumulus clouds. This is computationally costly but complements other available approaches for studying low clouds and their climate interaction, by avoiding parameterization of the relevant scales. In a recent publication we have shown that UP, while not without its own complexity trade-offs, can produce encouraging improvements in low cloud climatology in multi-month simulations of the present climate and is a promising target for exascale computing (Parishani et al. 2017). Here we show results of its low cloud feedback to warming in multi-year simulations for the first time. References: Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov (2017), Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence, J. Adv. Model. Earth Syst., 9, doi:10.1002/2017MS000968.

  15. Simulating moist convection with a quasi-elastic sigma coordinate model

    CSIR Research Space (South Africa)

    Bopape, Mary-Jane M

    2012-10-01

    Full Text Available : Corrected TOGA COARE Sounding Humidity Data: Impact on Diagnosed Properties of Convection and Climate over the Warm Pool. Journal of Climate, 12, 2370-2384. WW, X Wu and MW Moncrieff, 1996: Cloud-Resolving Modeling of Tropical Cloud Systems during Phase... during the suppressed phase of a Madden-Julian Oscillation: Comparing single-column models with cloud resolving models. Quarterly Journal of the Royal Meteorological Society, 1-22. Sun S and W Sun, 2002: A One-dimensional Time Dependent Cloud Model...

  16. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradient), the radiation through cloud coverage (the vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements to the ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E, 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  17. Skills of different mesoscale models over Indian region during ...

    Indian Academy of Sciences (India)

    tion and prediction of high impact severe weather systems. Such models ... mesoscale models can be run at cloud resolving resolutions (∼1km) ... J. Earth Syst. Sci. 117, No. ..... similar to climate drift, indicating that those error components are ...

  18. A 2-d modeling approach for studying the formation, maintenance, and decay of Tropical Tropopause Layer Cirrus associated with Deep Convection

    Science.gov (United States)

    Henz, D. R.; Hashino, T.; Tripoli, G. J.; Smith, E. A.

    2009-12-01

    This study is being conducted to examine the distribution, variability, and formation-decay processes of TTL cirrus associated with tropical deep convection using the University of Wisconsin Non-Hydrostatic Modeling System (NMS). The experimental design is based on Tripoli, Hack and Kiehl (1992), which explicitly simulates the radiative-convective equilibrium of the tropical atmosphere over extended periods of weeks or months using a 2D periodic cloud resolving model. The experiment design includes a radiation parameterization to explicitly simulate radiative transfer through simulated crystals. The Advanced Microphysics Prediction System (AMPS) will be used to simulate microphysics by employing SHIPS (Spectral Habit Ice Prediction System) for ice, SLiPS (Spectral Liquid Prediction System) for droplets, and SAPS (Spectral Aerosol Prediction System) for aerosols. The ice scheme, SHIPS, is unique in that ice particle properties (such as size, particle density, and crystal habit) are explicitly predicted in a CRM (Hashino and Tripoli, 2007, 2008). The AMPS technology provides a particularly strong tool that enables the explicit modeling of TTL cloud microphysical and dynamical processes, which has yet to be accomplished by more traditional bulk microphysics approaches.

  19. A study of reduced numerical precision to make superparameterization more competitive using a hardware emulator in the OpenIFS model

    Science.gov (United States)

    Düben, Peter D.; Subramanian, Aneesh; Dawson, Andrew; Palmer, T. N.

    2017-03-01

    The use of reduced numerical precision to reduce computing costs for the cloud resolving model of superparameterized simulations of the atmosphere is investigated. An approach to identify the optimal level of precision for many different model components is presented, and a detailed analysis of precision is performed. This is nontrivial for a complex model that shows chaotic behavior such as the cloud resolving model in this paper. It is shown not only that numerical precision can be reduced significantly but also that the results of the reduced precision analysis provide valuable information for the quantification of model uncertainty for individual model components. The precision analysis is also used to identify model parts that are of less importance thus enabling a reduction of model complexity. It is shown that the precision analysis can be used to improve model efficiency for both simulations in double precision and in reduced precision. Model simulations are performed with a superparameterized single-column model version of the OpenIFS model that is forced by observational data sets. A software emulator was used to mimic the use of reduced precision floating point arithmetic in simulations.
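The core idea of such a software emulator can be sketched simply: round each float64 value's significand to a reduced number of bits after every operation and observe how results degrade. This is an illustrative sketch only, not the emulator used in the study (which also restricts the exponent range and is applied operation-by-operation inside the model).

```python
import numpy as np

def reduce_precision(x, significand_bits):
    """Round the significand of a float64 value to `significand_bits`
    explicit bits, emulating low-precision floating-point storage in
    software."""
    x = np.asarray(x, dtype=np.float64)
    mantissa, exponent = np.frexp(x)            # x = mantissa * 2**exponent
    scale = 2.0 ** significand_bits
    mantissa = np.round(mantissa * scale) / scale
    return np.ldexp(mantissa, exponent)

print(reduce_precision(np.pi, 10))  # 3.140625: pi kept to ~10 significand bits
```

Running a model component with progressively fewer bits and checking when its output diverges beyond the model's internal variability is, in essence, the precision analysis the abstract describes.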

  20. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, S A; McCoy, R B; Morrison, H; Ackerman, A; Avramov, A; deBoer, G; Chen, M; Cole, J; DelGenio, A; Golaz, J; Hashino, T; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; Luo, Y; McFarquhar, G; Menon, S; Neggers, R; Park, S; Poellot, M; von Salzen, K; Schmidt, J; Sednev, I; Shipway, B; Shupe, M; Spangenberg, D; Sud, Y; Turner, D; Veron, D; Falk, M; Foster, M; Fridlind, A; Walker, G; Wang, Z; Wolf, A; Xie, S; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 C. The observed liquid water path of around 160 g m{sup -2} was about two-thirds of the adiabatic value and much greater than the mass of ice crystal precipitation which when integrated from the surface to cloud top was around 15 g m{sup -2}. The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics indicate that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is some evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics. This case study, which has been well observed from both aircraft and ground-based remote sensors, could be a benchmark for model simulations of mixed-phase clouds.

  1. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Stephen A.; McCoy, Renata B.; Morrison, Hugh; Ackerman, Andrew S.; Avramov, Alexander; de Boer, Gijs; Chen, Mingxuan; Cole, Jason N.S.; Del Genio, Anthony D.; Falk, Michael; Foster, Michael J.; Fridlind, Ann; Golaz, Jean-Christophe; Hashino, Tempei; Harrington, Jerry Y.; Hoose, Corinna; Khairoutdinov, Marat F.; Larson, Vincent E.; Liu, Xiaohong; Luo, Yali; McFarquhar, Greg M.; Menon, Surabi; Neggers, Roel A. J.; Park, Sungsu; Poellot, Michael R.; Schmidt, Jerome M.; Sednev, Igor; Shipway, Ben J.; Shupe, Matthew D.; Spangenberg, Douglas A.; Sud, Yogesh C.; Turner, David D.; Veron, Dana E.; von Salzen, Knut; Walker, Gregory K.; Wang, Zhien; Wolf, Audrey B.; Xie, Shaocheng; Xu, Kuan-Man; Yang, Fanglin; Zhang, Gong

    2009-02-02

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 C. The observed average liquid water path of around 160 g m{sup -2} was about two-thirds of the adiabatic value and much greater than the average mass of ice crystal precipitation which when integrated from the surface to cloud top was around 15 g m{sup -2}. The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics suggest that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics.
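The "adiabatic value" used as a reference in both versions of this abstract can be estimated with a standard back-of-envelope formula: if liquid water content increases linearly from zero at cloud base at the adiabatic rate, the liquid water path grows with the square of cloud depth. The lapse rate value below (~2 g m^-3 km^-1) is an assumed typical magnitude for illustration, not a number taken from the study.

```python
def adiabatic_lwp(cloud_depth_m, gamma_ad=2.0e-3):
    """Adiabatic liquid water path (g m^-2) for a cloud of the given depth,
    assuming liquid water content rises linearly from zero at cloud base at
    the adiabatic rate gamma_ad (g m^-3 per metre of depth; 2e-3 here is an
    assumed typical value):
        LWP_ad = 0.5 * gamma_ad * H**2
    """
    return 0.5 * gamma_ad * cloud_depth_m**2

# a ~490 m deep cloud gives LWP_ad ~ 240 g m^-2; two-thirds of that is
# ~160 g m^-2, the order of the observed value quoted in the abstract
print(adiabatic_lwp(490.0))  # 240.1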

  2. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer, and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  3. Modeling and cellular studies

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    Testing the applicability of mathematical models with carefully designed experiments is a powerful tool in investigations of the effects of ionizing radiation on cells. The modeling and cellular studies complement each other: modeling provides guidance for designing critical experiments, which must provide definitive results, while the experiments themselves provide new input to the model. Based on previous experimental results, the model for the accumulation of damage in Chlamydomonas reinhardi has been extended to include various multiple two-event combinations. Split-dose survival experiments have shown that the models tested to date predict most but not all of the observed behavior. Stationary-phase mammalian cells, required for tests of other aspects of the model, have been shown to be at different points in the cell cycle depending on how they were forced to stop proliferating. These cultures also demonstrate different capacities for repair of sublethal radiation damage.

  4. A Multiscale Modeling System: Developments, Applications, and Critical Issues

    Science.gov (United States)

    Tao, Wei-Kuo; Lau, William; Simpson, Joanne; Chern, Jiun-Dar; Atlas, Robert; Khairoutdinov, David Randall Marat; Li, Jui-Lin; Waliser, Duane E.; Jiang, Jonathan; Hou, Arthur; hide

    2009-01-01

    The foremost challenge in parameterizing convective clouds and cloud systems in large-scale models lies in the many coupled dynamical and physical processes that interact over a wide range of scales, from microphysical scales to the synoptic and planetary scales. This makes the comprehension and representation of convective clouds and cloud systems one of the most complex scientific problems in Earth science. During the past decade, the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS) has pioneered the use of single-column models (SCMs) and cloud-resolving models (CRMs) for the evaluation of the cloud and radiation parameterizations in general circulation models (GCMs; e.g., GEWEX Cloud System Science Team 1993). These activities have uncovered many systematic biases in the radiation, cloud and convection parameterizations of GCMs and have led to the development of new schemes (e.g., Zhang 2002; Pincus et al., 2003; Zhang and Wu 2003; Wu et al. 2003; Liang and Wu 2005; Wu and Liang 2005, and others). Comparisons between SCMs and CRMs using the same large-scale forcing derived from field campaigns have demonstrated that CRMs are superior to SCMs in the prediction of temperature and moisture tendencies (e.g., Das et al. 1999; Randall et al. 2003b; Xie et al. 2005).

  5. ICRF edge modeling studies

    Energy Technology Data Exchange (ETDEWEB)

    Lehrman, I.S. (Grumman Corp. Research Center, Princeton, NJ (USA)); Colestock, P.L. (Princeton Univ., NJ (USA). Plasma Physics Lab.)

    1990-04-01

    Theoretical models have been developed, and are currently being refined, to explain the edge plasma-antenna interaction that occurs during ICRF heating. The periodic structure of a Faraday-shielded antenna is found to result in a strong ponderomotive force in the vicinity of the antenna. A fluid model, which incorporates the ponderomotive force, shows an increase in transport to the Faraday shield. A kinetic model shows that the strong antenna near-fields act to increase the energy of deuterons that strike the shield, thereby increasing the sputtering of shield material. Estimates of edge impurity harmonic heating show no significant heating for either in-phase or out-of-phase antenna operation. Additionally, a particle model for electrons near the shield shows that heating results from the parallel electric field associated with the fast wave. A quasilinear model for edge electron heating is presented and compared to the particle calculations. The models' predictions are shown to be consistent with measurements of enhanced transport. (orig.)

  6. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    The DANESS code modeling study has been performed. The DANESS code is widely used in dynamic fuel cycle analysis, and the Korea Atomic Energy Research Institute (KAERI) has used it for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-demand Scenario Model, New Reactor Capacity Decision Model, Reactor and Fuel Cycle Facility History Model, and Fuel Cycle Model, are investigated. Some models in the interface module are refined and inserted for the Korean nuclear fuel cycle model. Application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios.

  7. Comment on 'Modeling of Convective-Stratiform Precipitation Processes: Sensitivity to Partitioning Methods' by Matthias Steiner

    Science.gov (United States)

    Lang, Steve; Tao, W.-K.; Simpson, J.; Ferrier, B.

    2003-01-01

    Despite the obvious notion that the presence of hail or graupel is a good indication of convection, the model results show this does not provide an objective benchmark, partly due to the unrealistic presence of small amounts of hail or graupel throughout the anvil in the model, but mainly because of the significant amounts of hail or graupel in the transition zone, especially in the tropical TOGA COARE simulation. Without the use of a "transition" category, it is open to debate how this region should best be defined, as stratiform or as convective. So the presence of significant hail or graupel contents in this zone significantly degrades its use as an objective benchmark for convection. The separation algorithm comparison was done in the context of a cloud-resolving model. These models are widely used and serve a variety of purposes, especially with regard to retrieving information that cannot be directly measured, by providing synthetic data sets that are consistent and complete. Separation algorithms are regularly applied in these models. However, as with any modeling system, these types of models are constantly being improved to overcome any known deficiencies and make them more accurate representations of observed systems. The presence of hail and graupel in the anvil and the bias towards heavy rainfall rates are two such examples of areas that need improvement. Since both of these can affect the perceived performance of the separation algorithms, the Lang et al. (2003) study did not want to overstate the relative performance of any specific algorithm.

  8. Model simulations of aerosol effects on clouds and precipitation in comparison with ARM data

    Energy Technology Data Exchange (ETDEWEB)

    Penner, Joyce E. [Univ. of Michigan, Ann Arbor, MI (United States); Zhou, Cheng [Univ. of Michigan, Ann Arbor, MI (United States)

    2017-01-12

    Observation-based studies have shown that the aerosol cloud lifetime effect or the increase of cloud liquid water path (LWP) with increased aerosol loading may have been overestimated in climate models. Here, we simulate shallow warm clouds on 05/27/2011 at the Southern Great Plains (SGP) measurement site established by the Department of Energy's Atmospheric Radiation Measurement (ARM) Program using a single-column version of a global climate model (Community Atmosphere Model or CAM) and a cloud resolving model (CRM). The LWP simulated by CAM increases substantially with aerosol loading while that in the CRM does not. The increase of LWP in CAM is caused by a large decrease of the autoconversion rate when cloud droplet number increases. In the CRM, the autoconversion rate is also reduced, but this is offset or even outweighed by the increased evaporation of cloud droplets near cloud top, resulting in an overall decrease in LWP. Our results suggest that climate models need to include the dependence of cloud top growth and the evaporation/condensation process on cloud droplet number concentrations.

  9. Radiative-convective equilibrium model intercomparison project

    Science.gov (United States)

    Wing, Allison A.; Reed, Kevin A.; Satoh, Masaki; Stevens, Bjorn; Bony, Sandrine; Ohno, Tomoki

    2018-03-01

    RCEMIP, an intercomparison of multiple types of models configured in radiative-convective equilibrium (RCE), is proposed. RCE is an idealization of the climate system in which there is a balance between radiative cooling of the atmosphere and heating by convection. The scientific objectives of RCEMIP are three-fold. First, clouds and climate sensitivity will be investigated in the RCE setting. This includes determining how cloud fraction changes with warming and the role of self-aggregation of convection in climate sensitivity. Second, RCEMIP will quantify the dependence of the degree of convective aggregation and tropical circulation regimes on temperature. Finally, by providing a common baseline, RCEMIP will allow the robustness of the RCE state across the spectrum of models to be assessed, which is essential for interpreting the results found regarding clouds, climate sensitivity, and aggregation, and more generally, determining which features of tropical climate an RCE framework is useful for. A novel aspect and major advantage of RCEMIP is the accessibility of the RCE framework to a variety of models, including cloud-resolving models, general circulation models, global cloud-resolving models, single-column models, and large-eddy simulation models.
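The balance that RCE idealizes can be illustrated with the simplest possible column: march a surface temperature forward until absorbed solar radiation balances outgoing longwave radiation through a single-layer gray atmosphere. All numbers below (insolation, emissivity, heat capacity) are assumed illustrative values, and convection is left implicit; this is a sketch of the equilibration idea, not an RCEMIP configuration.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def toa_imbalance(ts, solar=240.0, eps=0.78):
    """Net top-of-atmosphere flux for a single-layer gray atmosphere:
    absorbed solar minus outgoing longwave sigma*Ts^4*(1 - eps/2)."""
    return solar - SIGMA * ts**4 * (1.0 - eps / 2.0)

def equilibrate(ts=250.0, dt=2.0e6, c=1.0e7, steps=200):
    """Relax the surface temperature toward equilibrium; c is an assumed
    heat capacity per unit area (J m^-2 K^-1)."""
    for _ in range(steps):
        ts += dt * toa_imbalance(ts) / c
    return ts

print(round(equilibrate(), 1))  # ~288.6 K with these assumed numbers
```

In a real RCE model the same equilibration happens per level, with explicit convection transporting heat upward against the radiative cooling; the fixed-point structure is the same.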

  10. Evaluating and Improving Cloud Processes in the Multi-Scale Modeling Framework

    Energy Technology Data Exchange (ETDEWEB)

    Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2015-03-01

    The research performed under this grant was intended to improve the embedded cloud model in the Multi-scale Modeling Framework (MMF) for convective clouds by using a two-moment microphysics scheme rather than the single-moment scheme used in all MMF runs to date. The technical report and associated documents describe the results of testing the cloud-resolving model with fixed boundary conditions and the evaluation of the model results against data. The overarching conclusion is that such model evaluations are problematic because errors in the forcing fields control the results so strongly that variations in parameterization values cannot be usefully constrained.

  11. Model quality and safety studies

    DEFF Research Database (Denmark)

    Petersen, K.E.

    1997-01-01

    The paper describes the EC initiative on model quality assessment and emphasizes some of the problems encountered in the selection of data from field tests used in the evaluation process. Further, it discusses the impact of model uncertainties in safety studies of industrial plants. The model... that most of these have never been through a procedure of evaluation, but nonetheless are used to assist in making decisions that may directly affect the safety of the public and the environment. As a major funder of European research on major industrial hazards, DGXII is conscious of the importance... a certain model is appropriate for use in solving a given problem. Further, the findings from the REDIPHEM project related to dense gas dispersion will be highlighted. Finally, the paper will discuss the need for model quality assessment in safety studies...

  12. Mathematical study of mixing models

    International Nuclear Information System (INIS)

    Lagoutiere, F.; Despres, B.

    1999-01-01

    This report presents the construction and study of a class of models that describe the behavior of compressible, non-reactive Eulerian fluid mixtures. Mixture models can have two different applications. Either they are used to describe physical mixtures, in the case of a true zone of extensive mixing (but then the modeling is incomplete and must be considered only as a point of departure for the elaboration of truly relevant mixture models), or they are used to solve the problem of numerical mixing. This problem appears during the discretization of an interface separating fluids with different laws of state: the zone of numerical mixing is the set of meshes that cover the interface. Attention is focused on numerical mixtures, for which the hypothesis of non-miscibility (physics) supplies two equations (the sixth and the eighth of the system). It is important to emphasize that even in the case of a purely numerical mixture, the presence of several fluids in one and the same place (the same mesh) has to be taken into account. This is formalized by allowing mass fractions to take all values between 0 and 1, which is not at odds with the equations that derive from the hypothesis of non-miscibility. One way of looking at things is to consider that there are two scales of observation: the physical scale, at which one observes the separation of fluids, and the numerical scale, given by the fineness of the mesh, at which a mixture appears. In this work, mixtures are considered from the mathematical angle (both in the elaboration phase and during their study). In particular, Chapter 5 shows a result of model degeneration for a non-extended mixing zone (the case of an interface): this justifies the use of the models in the case of numerical mixing. All these models are based on the classical model of non-viscous compressible fluids recalled in Chapter 2. In Chapter 3, the central point of the elaboration of the class of models is

  13. Urban Studies: A Learning Model.

    Science.gov (United States)

    Cooper, Terry L.; Sundeen, Richard

    1979-01-01

    The urban studies learning model described in this article was found to increase students' self-esteem, imbue a more flexible and open perspective, contribute to the capacity for self-direction, produce increases on the feeling reactivity, spontaneity, and acceptance of aggression scales, and expand interpersonal competence. (Author/WI)

  14. Campus network security model study

    Science.gov (United States)

    Zhang, Yong-ku; Song, Li-ren

    2011-12-01

    Campus network security is of growing importance. Designing an effective defense against hacker attacks, viruses, data theft, and internal threats is the focus of this paper. The paper compares firewall- and IDS-based approaches and integrates them, then presents the design of a campus network security model and details its implementation principles.

  15. Kuala Kemaman hydraulic model study

    International Nuclear Information System (INIS)

    Abdul Kadir Ishak

    2005-01-01

    The problems facing the Kuala Kemaman area are siltation and shoreline erosion. The objectives of the study are to assess the best groyne alignment, to ascertain the most stable shoreline regime, and to investigate structural measures to overcome the erosion. The scope of the study covers data collection, wave analysis, hydrodynamic simulation, and sediment transport simulation. The MIKE 21 numerical models are used: MIKE 21 NSW, a wind-wave model describing the growth, decay, and transformation of wind-generated waves and swell in nearshore areas, taking into account refraction and shoaling due to varying depth and energy dissipation due to bottom friction and wave breaking; MIKE 21 HD, a modelling system for 2D free-surface flow used to simulate hydraulic phenomena in estuaries, coastal areas, and seas (predicted tidal elevations and waves, via radiation stresses, are considered in the study, while wind is not); and MIKE 21 ST, which calculates rates of non-cohesive (sand) sediment transport for both pure-current and combined wave-and-current situations

  16. Process-model simulations of cloud albedo enhancement by aerosols in the Arctic

    Science.gov (United States)

    Kravitz, Ben; Wang, Hailong; Rasch, Philip J.; Morrison, Hugh; Solomon, Amy B.

    2014-01-01

    A cloud-resolving model is used to simulate the effectiveness of Arctic marine cloud brightening via injection of cloud condensation nuclei (CCN), either through geoengineering or other increased sources of Arctic aerosols. An updated cloud microphysical scheme is employed, with prognostic CCN and cloud particle numbers in both liquid and mixed-phase marine low clouds. Injection of CCN into the marine boundary layer can delay the collapse of the boundary layer and increase low-cloud albedo. Albedo increases are stronger for pure liquid clouds than mixed-phase clouds. Liquid precipitation can be suppressed by CCN injection, whereas ice precipitation (snow) is affected less; thus, the effectiveness of brightening mixed-phase clouds is lower than for liquid-only clouds. CCN injection into a clean regime results in a greater albedo increase than injection into a polluted regime, consistent with current knowledge about aerosol–cloud interactions. Unlike previous studies investigating warm clouds, dynamical changes in circulation owing to precipitation changes are small. According to these results, which are dependent upon the representation of ice nucleation processes in the employed microphysical scheme, Arctic geoengineering is unlikely to be effective as the sole means of altering the global radiation budget but could have substantial local radiative effects. PMID:25404677

  17. Studies of Catalytic Model Systems

    DEFF Research Database (Denmark)

    Holse, Christian

    The overall topic of this thesis is within the field of catalysis, where model systems of different complexity have been studied utilizing a multipurpose Ultra High Vacuum (UHV) chamber. The thesis falls into two parts. First, a simple model system in the form of a ruthenium single crystal...... of the Cu/ZnO nanoparticles is highly relevant to industrial methanol synthesis, for which the direct interaction of Cu and ZnO nanocrystals synergistically boosts the catalytic activity. The dynamical behavior of the nanoparticles under reducing and oxidizing environments was studied by means of ex situ X......-ray Photoelectron Spectroscopy (XPS) and in situ Transmission Electron Microscopy (TEM). The surface composition of the nanoparticles changes reversibly as the nanoparticles are exposed to cycles of high-pressure (200 mbar) oxidation and reduction. Furthermore, the presence of metallic Zn is observed by XPS...

  18. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    Full Text Available First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments with the cloud microphysical parcel model. The only CCN input these parameterizations require is a few values of the CCN spectrum (as given by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities derived from the CCN spectrum, such as C and k in the equation N = CS^k, or its breadth, total number, and median radius. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforementioned hybrid microphysical model. The newly developed parameterizations will save computing time and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. 
The parameterizations are useful not only for the bin method in a regional cloud-resolving model but also for a two-moment bulk microphysical model and
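The conventional power-law activation spectrum that the abstract contrasts against can be sketched in a few lines. This is an editorial illustration, not code from the paper; the constants C and k below are illustrative placeholders for the fitted values a conventional parameterization would require.

```python
# Conventional power-law CCN activation spectrum, N = C * S^k (Twomey form),
# whose fitted constants C and k the new parameterizations avoid needing.
def activated_ccn(supersaturation_pct: float, C: float = 100.0, k: float = 0.7) -> float:
    """Number of activated CCN (cm^-3) at supersaturation S (%); C, k illustrative."""
    return C * supersaturation_pct ** k

# At S = 1%, N equals C by construction.
print(activated_ccn(1.0))  # -> 100.0
```

By contrast, the new parameterizations take a few measured points of the CCN spectrum directly, sidestepping the fit for C and k.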

  19. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    Science.gov (United States)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.; hide

    2013-01-01

    Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations result in uncertainty in the simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations, and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations, enabling a more complete model investigation than the traditional single best-estimate simulation alone.

  20. NURE uranium deposit model studies

    International Nuclear Information System (INIS)

    Crew, M.E.

    1981-01-01

    The National Uranium Resource Evaluation (NURE) Program has sponsored uranium deposit model studies by Bendix Field Engineering Corporation (Bendix), the US Geological Survey (USGS), and numerous subcontractors. This paper deals only with models from the following six reports prepared by Samuel S. Adams and Associates: GJBX-1(81) - Geology and Recognition Criteria for Roll-Type Uranium Deposits in Continental Sandstones; GJBX-2(81) - Geology and Recognition Criteria for Uraniferous Humate Deposits, Grants Uranium Region, New Mexico; GJBX-3(81) - Geology and Recognition Criteria for Uranium Deposits of the Quartz-Pebble Conglomerate Type; GJBX-4(81) - Geology and Recognition Criteria for Sandstone Uranium Deposits in Mixed Fluvial-Shallow Marine Sedimentary Sequences, South Texas; GJBX-5(81) - Geology and Recognition Criteria for Veinlike Uranium Deposits of the Lower to Middle Proterozoic Unconformity and Strata-Related Types; GJBX-6(81) - Geology and Recognition Criteria for Sandstone Uranium Deposits of the Salt Wash Type, Colorado Plateau Province. A unique feature of these models is the development of recognition criteria in a systematic fashion, with a method for quantifying the various items. The recognition-criteria networks are used in this paper to illustrate the various types of deposits

  1. Saltstone SDU6 Modeling Study

    International Nuclear Information System (INIS)

    Lee, Si Y.; Hyun, Sinjae

    2013-01-01

    A new disposal unit, designated Saltstone Disposal Unit 6 (SDU6), is being designed to support site accelerated closure goals and the salt waste projections identified in the new Liquid Waste System Plan. The unit is a cylindrical disposal cell, 375 ft in diameter and 43 ft in height, with a capacity of at least 30 million gallons. SRNL was requested to evaluate the impact of an increased grout placement height on the flow patterns of grout spreading radially across the floor and to determine whether grout quality is affected by that height. The primary goals of the work are to develop a baseline Computational Fluid Dynamics (CFD) model and to evaluate the flow patterns of grout material in SDU6 as a function of the elevation of the grout discharge port and the grout rheology. Two transient grout models were developed using a three-dimensional multiphase CFD approach to estimate the extent of the grout material spreading radially across the facility floor and to perform sensitivity analyses with respect to the baseline design and operating conditions, such as the elevation of the discharge port and fresh grout properties. For the CFD modeling calculations, an air-grout Volume of Fluid (VOF) method combined with Bingham plastic and time-dependent grout models was used to examine fluid spreading performance for the initial baseline configurations and to evaluate the impact of grout pouring height on grout quality. Grout quality was estimated in terms of the air volume fraction of the grout layer formed on the SDU6 floor, which changes the grout density. The study results should be considered preliminary scoping analyses, since benchmarking analysis is not included in this task scope. Transient analyses with the Bingham plastic model were performed with the FLUENT™ code on the high-performance parallel computing platform at SRNL. 
The analysis coupled with a transient grout aging model was performed using the ANSYS-CFX code
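For orientation, the Bingham plastic constitutive law used in such grout CFD models relates shear stress to shear rate through a yield stress and a plastic viscosity. A minimal sketch follows; the yield stress and viscosity values are illustrative, not SDU6 grout properties.

```python
def bingham_stress(gamma_dot: float, tau_0: float = 20.0, mu_p: float = 0.1) -> float:
    """Shear stress (Pa) of a flowing Bingham plastic: tau = tau_0 + mu_p * gamma_dot.

    gamma_dot: shear rate (1/s); tau_0: yield stress (Pa); mu_p: plastic
    viscosity (Pa s). Below tau_0 the material behaves as a rigid body.
    """
    return tau_0 + mu_p * gamma_dot

def apparent_viscosity(gamma_dot: float, tau_0: float = 20.0, mu_p: float = 0.1) -> float:
    """Effective viscosity tau / gamma_dot, which grows without bound as gamma_dot -> 0."""
    return bingham_stress(gamma_dot, tau_0, mu_p) / gamma_dot

print(bingham_stress(100.0))      # -> 30.0 (Pa)
print(apparent_viscosity(100.0))  # -> 0.3 (Pa s)
```

The unbounded apparent viscosity at low shear rate is what lets a grout pour stop spreading and hold a floor profile.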

  2. Benthic boundary layer modelling studies

    International Nuclear Information System (INIS)

    Richards, K.J.

    1984-01-01

    A numerical model has been developed to study the factors which control the height of the benthic boundary layer in the deep ocean and the dispersion of a tracer within and directly above the layer. This report covers tracer clouds of horizontal scales of 10 to 100 km. The dispersion of a tracer has been studied in two ways. Firstly, a number of particles were introduced into the flow. The trajectories of these particles provide information on dispersion rates. For flow conditions similar to those observed in the abyssal N.E. Atlantic, the diffusivity of a tracer was found to be 5 × 10⁶ cm² s⁻¹ within the boundary layer and 8 × 10⁶ cm² s⁻¹ above the boundary layer. These results are in accord with estimates made from current meter measurements. The second method of studying dispersion was to calculate the evolution of individual tracer clouds. Clouds within and above the benthic boundary layer often show quite different behaviour from each other, although the general structures of the clouds in the two regions were found to have no significant differences. (author)
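The particle-trajectory approach described above can be sketched as follows: for a dispersing cloud of particles, the single-axis diffusivity follows from the growth of position variance, K = var(x) / (2t). This toy random walk (all numbers illustrative, not the report's model) recovers a prescribed K of 5 × 10⁶ cm² s⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
K_true = 5e6                  # prescribed diffusivity, cm^2/s (illustrative)
dt, n_steps, n_particles = 3600.0, 200, 5000

# Random-walk displacements with variance 2*K*dt per step reproduce
# Fickian diffusion along one horizontal axis.
steps = rng.normal(0.0, np.sqrt(2.0 * K_true * dt), size=(n_steps, n_particles))
x = np.cumsum(steps, axis=0)  # particle positions over time, cm

t_final = n_steps * dt
K_est = x[-1].var() / (2.0 * t_final)   # diffusivity estimate: K = var(x) / (2t)
print(f"{K_est:.3g}")  # close to 5e6
```

In the report, the same statistic is extracted from model-advected particle trajectories rather than a prescribed random walk.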

  3. Meso-scale modeling of air pollution transport/chemistry/deposition and its application

    International Nuclear Information System (INIS)

    Kitada, Toshihiro

    2007-01-01

    Transport/chemistry/deposition models for atmospheric trace chemical species are now regarded as important tools for understanding the effects of various human activities, such as fuel combustion and deforestation, on human health, ecosystems, and climate, and for planning appropriate control of emission sources. Several 'comprehensive' models have been proposed, such as RADM (Chang, et al., 1987), STEM-II (Carmichael, et al., 1986), and CMAQ (Community Multi-scale Air Quality model, e.g., EPA website, 2003); the 'comprehensive' models include not only gas/aerosol-phase chemistry but also aqueous-phase chemistry in cloud/rain water, in addition to the processes of advection, diffusion, wet deposition (mass transfer between aqueous and gas/aerosol phases), and dry deposition. The goal in developing a 'comprehensive' model is that it correctly reproduce the mass balance of various chemical species in the atmosphere while maintaining adequate accuracy in the calculated concentration distributions. To this end, one of the important problems is reliable wet deposition modeling, and here we introduce two methods, 'cloud-resolving' and 'non-cloud-resolving' modeling, for the wet deposition of pollutants. (author)

  4. Joint ARM/GCSS/SPARC TWP-ICE CRM Intercomparison Study: Description, Preliminary Results, and Invitation to Participate

    Science.gov (United States)

    Fridlind, A. M.; Ackerman, A. S.; Allen, G.; Beringer, J.; Comstock, J. M.; Field, P. R.; Gallagher, M.; Hacker, J. M.; Hume, T.; Jakob, C.; Liu, G.; Long, C. N.; Mather, J. H.; May, P. T.; McCoy, R. F.; McFarlane, S. A.; McFarquhar, G. M.; Minnis, P.; Petch, J. C.; Schumacher, C.; Turner, D. D.; Whiteway, J. A.; Williams, C. R.; Williams, P. I.; Xie, S.; Zhang, M.

    2008-12-01

    The 2006 Tropical Warm Pool - International Cloud Experiment (TWP-ICE) is 'the first field program in the tropics that attempted to describe the evolution of tropical convection, including the large-scale heat, moisture, and momentum budgets at 3-hourly time resolution, while at the same time obtaining detailed observations of cloud properties and the impact of the clouds on the environment' [May et al., 2008]. A cloud-resolving model (CRM) intercomparison based on TWP-ICE is now being undertaken by the Atmospheric Radiation Measurement (ARM), GEWEX Cloud Systems Study (GCSS), and Stratospheric Processes And their Role in Climate (SPARC) programs. We summarize the 16-day case study and the wealth of data being used to provide initial and boundary conditions, and evaluate some preliminary findings in the context of existing theories of moisture evolution in the tropical tropopause layer (TTL). Overall, simulated cloud fields evolve realistically by many measures. Budgets indicate that simulated convective flux convergence of water vapor is always positive or near zero at TTL elevations, except locally at lower levels during the driest suppressed monsoon conditions, while simulated water vapor deposition to hydrometeors always exceeds sublimation on average at all TTL elevations over 24-hour timescales. The next largest water vapor budget term is generally the nudging required to keep domain averages consistent with observations, which is at least partly attributable to large-scale forcing terms that cannot be derived from measurements. We discuss the primary uncertainties.

  5. Crystal study and econometric model

    Science.gov (United States)

    1975-01-01

    An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program - that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. Actually, two models are presented. The first is a theoretical model which follows rather strictly the standard theoretical economic concepts involved in supply and demand analysis, and a modified version of the model was developed which, though not quite as theoretically sound, was testable utilizing existing data sources.

  6. Fallout model for system studies

    International Nuclear Information System (INIS)

    Harvey, T.F.; Serduke, F.J.D.

    1979-01-01

    A versatile fallout model was developed to assess complex civil defense and military effect issues. Large technical and scenario uncertainties require a fast, adaptable, time-dependent model to obtain technically defensible fallout results in complex demographic scenarios. The KDFOC2 capability, coupled with other data bases, provides the essential tools to consider tradeoffs between various plans and features in different nuclear scenarios and estimate the technical uncertainties in the predictions. All available data were used to validate the model. In many ways, the capability is unmatched in its ability to predict fallout hazards to a society

  7. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  8. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  9. Imitation Modeling and Institutional Studies

    Directory of Open Access Journals (Sweden)

    Maksim Y. Barbashin

    2017-09-01

    Full Text Available This article discusses the use of imitation modeling in the conduct of institutional research. The institutional approach is based on the observation of social behavior. To understand a social process means to determine the key rules that individuals use, undertaking social actions associated with this process or phenomenon. This does not mean that institutions determine behavioral reactions, although there are a number of social situations where the majority of individuals follow the dominant rules. If the main laws of development of the institutional patterns are known, one can describe most of the social processes accurately. The author believes that the main difficulty with the analysis of institutional processes is their recursive nature: from the standards of behavior one may find the proposed actions of social agents who follow, obey or violate institutions, but the possibility of reconstructive analysis is not obvious. The author demonstrates how the institutional approach is applied to the analysis of social behavior. The article describes the basic principles and methodology of imitation modeling. Imitation modeling reveals the importance of institutions in structuring social transactions. The article concludes that in the long term institutional processes are not determined by initial conditions.

  10. Studying shocks in model astrophysical flows

    International Nuclear Information System (INIS)

    Chakrabarti, S.K.

    1989-01-01

    We briefly discuss some properties of the shocks in the existing models for quasi two-dimensional astrophysical flows. All of these models which allow the study of shocks analytically have some unphysical characteristics due to the inherent assumptions made. We propose a hybrid model for a thin flow which has fewer unpleasant features and is suitable for the study of shocks. (author). 5 refs

  11. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for identifying system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternative system procedures and processes, so that these alternatives may be compared on a common basis, permitting the selection of a mode or modes of operation with potential advantages to the system user and operator. These advantages are measured in terms of system efficiency: (1) the ability to meet specific schedules for operations, mission or mission-readiness requirements, or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  12. Simulation of Flash-Flood-Producing Storm Events in Saudi Arabia Using the Weather Research and Forecasting Model

    KAUST Repository

    Deng, Liping

    2015-05-01

    The challenges of monitoring and forecasting flash-flood-producing storm events in data-sparse and arid regions are explored using the Weather Research and Forecasting (WRF) Model (version 3.5) in conjunction with a range of available satellite, in situ, and reanalysis data. Here, we focus on characterizing the initial synoptic features and examining the impact of model parameterization and resolution on the reproduction of a number of flood-producing rainfall events that occurred over the western Saudi Arabian city of Jeddah. Analysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim) data suggests that mesoscale convective systems associated with strong moisture convergence ahead of a trough were the major initial features for the occurrence of these intense rain events. The WRF Model was able to simulate the heavy rainfall, with driving convective processes well characterized by a high-resolution cloud-resolving model. The use of higher (1 km vs 5 km) resolution along the Jeddah coastline favors the simulation of local convective systems and adds value to the simulation of heavy rainfall, especially for deep-convection-related extreme values. At the 5-km resolution, corresponding to an intermediate study domain, simulation without a cumulus scheme led to the formation of deeper convective systems and enhanced rainfall around Jeddah, illustrating the need for careful model scheme selection in this transition resolution. In analysis of multiple nested WRF simulations (25, 5, and 1 km), localized volume and intensity of heavy rainfall together with the duration of rainstorms within the Jeddah catchment area were captured reasonably well, although there was evidence of some displacements of rainstorm events.

  13. QCD and Standard Model Studies

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Carl A [Texas A & M Univ., College Station, TX (United States)

    2017-02-28

    Our group has focused on using jets in STAR to investigate the longitudinal and transverse spin structure of the proton. We performed measurements of the longitudinal double-spin asymmetry for inclusive jet production that provide the strongest evidence to date that the gluons in the proton with x>0.05 are polarized. We also made the first observation of the Collins effect in pp collisions, thereby providing an important test of the universality of the Collins fragmentation function and opening a new tool to probe quark transversity in the proton. Our studies of forward-rapidity electromagnetic jet-like events raise serious questions about whether the large transverse spin asymmetries seen for forward inclusive hadron production arise from conventional 2 → 2 parton scattering. This is the final technical report for DOE Grant DE-FG02-93ER40765. It covers activities during the period January 1, 2015 through November 30, 2016.

  14. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    Science.gov (United States)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water contents) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and its microphysics.

  15. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr.Vilas. M. Thakare; Ashwin B. Tomar

    2011-01-01

    This paper aims to provide a basis for software quality model research through a systematic study of papers. It identifies nearly seventy software quality research papers from journals and classifies each paper by research topic, estimation approach, study context, and data set. The results, combined with other knowledge, support recommendations for future software quality model research: to increase the area of search for relevant studies, carefully select the papers within a set ...

  16. Higher-fidelity yet efficient modeling of radiation energy transport through three-dimensional clouds

    International Nuclear Information System (INIS)

    Hall, M.L.; Davis, A.B.

    2005-01-01

    Accurate modeling of radiative energy transport through cloudy atmospheres is necessary for both climate modeling with GCMs (Global Climate Models) and remote sensing. Previous modeling efforts have taken advantage of extreme aspect ratios (cells that are very wide horizontally) by assuming a 1-D treatment vertically - the Independent Column Approximation (ICA). Recent attempts to resolve radiation transport through the clouds have drastically changed the aspect ratios of the cells, moving them closer to unity, such that the ICA model is no longer valid. We aim to provide a higher-fidelity atmospheric radiation transport model which increases accuracy while maintaining efficiency. To that end, this paper describes the development of an efficient 3-D-capable radiation code that can be easily integrated into cloud resolving models as an alternative to the resident 1-D model. Applications to test cases from the Intercomparison of 3-D Radiation Codes (I3RC) protocol are shown
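The ICA baseline mentioned above can be caricatured in a few lines: each column's radiative transfer is solved as an isolated 1-D problem and the domain-mean flux is the average over columns, with no horizontal photon transport. This is an editorial sketch, not the I3RC codes; the Beer-Lambert direct-beam transmittance stands in for a real 1-D column solver.

```python
import numpy as np

def ica_domain_mean(tau_2d: np.ndarray, solve_column) -> float:
    """Independent Column Approximation: solve every (x, y) column as an
    isolated 1-D radiative transfer problem and average the results;
    horizontal transport between columns is ignored."""
    return float(np.mean([solve_column(tau) for tau in tau_2d.ravel()]))

# Stand-in 1-D "solver": direct-beam transmittance through optical depth tau.
transmittance = lambda tau: float(np.exp(-tau))

tau_field = np.array([[0.0, 2.0], [2.0, 0.0]])  # half clear, half cloudy
print(ica_domain_mean(tau_field, transmittance))  # mean of 1 and exp(-2)
```

The ICA is accurate when cells are much wider than they are tall; once aspect ratios approach unity, the neglected column-to-column transport matters, which is the motivation for the 3-D-capable code described here.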

  17. Models to Study Colonisation and Colonisation Resistance

    OpenAIRE

    Boreau, H.; Hartmann, L.; Karjalainen, T.; Rowland, I.; Wilkinson, M. H. F.

    2011-01-01

    This review describes various in vivo animal models (humans; conventional animals administered antimicrobial agents and the animal species used; gnotobiotic and germ-free animals), in vitro models (luminal and mucosal), and in silico and mathematical models which have been developed to study colonisation and colonisation resistance and the effects of gut flora on hosts. Where applicable, the advantages and disadvantages of each model are discussed. Keywords: colonisation, colonisation resistance, anim...

  18. Subtropical Low Cloud Response to a Warmer Climate in a Superparameterized Climate Model: Part I. Regime Sorting and Physical Mechanisms

    Directory of Open Access Journals (Sweden)

    Peter N Blossey

    2009-07-01

    Full Text Available The subtropical low cloud response to a climate with SST uniformly warmed by 2 K is analyzed in the SP-CAM superparameterized climate model, in which each grid column is replaced by a two-dimensional cloud-resolving model (CRM). Intriguingly, SP-CAM shows substantial low cloud increases over the subtropical oceans in the warmer climate. The paper aims to understand the mechanism for these increases. The subtropical low cloud increase is analyzed by sorting grid-column months of the climate model into composite cloud regimes using percentile ranges of lower tropospheric stability (LTS). LTS is observed to be well correlated to subtropical low cloud amount and boundary layer vertical structure. The low cloud increase in SP-CAM is attributed to boundary-layer destabilization due to increased clear-sky radiative cooling in the warmer climate. This drives more shallow cumulus convection and a moister boundary layer, inducing cloud increases and further increasing the radiative cooling. The boundary layer depth does not change substantially, due to compensation between increased radiative cooling (which promotes more turbulent mixing and boundary-layer deepening and slight strengthening of the boundary-layer top inversion (which inhibits turbulent entrainment and promotes a shallower boundary layer. The widespread changes in low clouds do not appear to be driven by changes in mean subsidence.
    In a companion paper we use column-mode CRM simulations based on LTS-composite profiles to further study the low cloud response mechanisms and to explore the sensitivity of low cloud response to grid resolution in SP-CAM.
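
The regime-sorting diagnostic described above can be sketched in a few lines. Here LTS is computed as the usual potential-temperature difference between 700 hPa and the surface, samples are binned by LTS percentile, and the mean low-cloud amount is composited in each bin; the synthetic data, bin count, and function names are illustrative assumptions, not the paper's actual diagnostics:

```python
import numpy as np

def lts_composite(theta700, theta_sfc, low_cloud, n_bins=10):
    """Sort grid-column samples into LTS percentile bins and composite
    the mean low-cloud amount within each bin."""
    lts = theta700 - theta_sfc                       # lower tropospheric stability (K)
    edges = np.percentile(lts, np.linspace(0, 100, n_bins + 1))
    edges[-1] += 1e-9                                # keep the maximum sample in the last bin
    which = np.digitize(lts, edges) - 1              # bin index for every sample
    return np.array([low_cloud[which == b].mean() for b in range(n_bins)])

# synthetic samples: cloudier columns where LTS is high, as observed
rng = np.random.default_rng(0)
lts_proxy = rng.normal(15.0, 4.0, 5000)              # theta700 - theta_sfc (K)
cloud = np.clip(0.05 * (lts_proxy - 5.0) + rng.normal(0, 0.1, 5000), 0, 1)
composite = lts_composite(lts_proxy + 290.0, np.full(5000, 290.0), cloud)
print(composite)   # mean low-cloud fraction per LTS decile, increasing with stability
```

Because the bins are percentile-based, each regime contains roughly the same number of grid-column months, which is what makes warm-climate minus control composites comparable bin by bin.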

  19. A Study of Simple Diffraction Models

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    In this paper, two simple models of cabinet edge diffraction are examined. Calculations with both models are compared with more sophisticated theoretical models and with measured data. The parameters involved are studied, and their importance for normal loudspeaker box designs is examined...

  20. Nuclear clustering - a cluster core model study

    International Nuclear Information System (INIS)

    Paul Selvi, G.; Nandhini, N.; Balasubramaniam, M.

    2015-01-01

    Nuclear clustering, like other clustering phenomena in nature, is a well-warranted study, since it helps in understanding the nature of the binding of nucleons inside the nucleus, closed-shell behaviour when the system is highly deformed, and dynamics and structure at the extremes. Several models account for the clustering phenomenon of nuclei. We present in this work a cluster core model study of nuclear clustering in light mass nuclei.

  1. A mixed model framework for teratology studies.

    Science.gov (United States)

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  2. Mining Product Data Models: A Case Study

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2014-01-01

    Full Text Available This paper presents two case studies used to prove the validity of some data-flow mining algorithms. We proposed the data-flow mining algorithms because most mining algorithms focus on the control-flow perspective. The first case study uses event logs generated by an ERP system (Navision) after we set several trackers on the data elements needed in the analyzed process, while the second case study uses event logs generated by the YAWL system. We offer a general solution for extracting data-flow models from different data sources. In order to apply the data-flow mining algorithms, the event logs must comply with a certain format (using the InputOutput extension). To respect this format, a set of conversion tools is needed. We describe the conversion tools used and how we obtained the data-flow models. Moreover, the data-flow model is compared to the control-flow model.
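
The core idea of mining a data-flow model from an event log can be illustrated with a minimal sketch: each event records the data elements an activity reads and writes, and an edge is drawn from the producer of a data element to every later consumer. The log format and activity names below are hypothetical, not the paper's InputOutput extension:

```python
# hypothetical event log: (activity, input data elements, output data elements)
log = [("Receive order", [], ["order"]),
       ("Check stock", ["order"], ["availability"]),
       ("Ship", ["order", "availability"], ["shipment"])]

def mine_dataflow(trace):
    """Derive data-flow edges: connect the most recent producer of each
    data element to every later activity that consumes it (simplified)."""
    producer = {}
    edges = set()
    for activity, inputs, outputs in trace:
        for d in inputs:
            if d in producer:
                edges.add((producer[d], activity, d))
        for d in outputs:
            producer[d] = activity          # remember who last wrote this element
    return edges

print(sorted(mine_dataflow(log)))
```

A real algorithm must additionally merge evidence across many traces and handle elements rewritten by several activities; this sketch only shows the producer-to-consumer inference step.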

  3. Improving the Understanding and Model Representation of Processes that Couple Shallow Clouds, Aerosols, and Land-Ecosystems

    Science.gov (United States)

    Fast, J. D.; Berg, L. K.; Schmid, B.; Alexander, M. L. L.; Bell, D.; D'Ambro, E.; Hubbe, J. M.; Liu, J.; Mei, F.; Pekour, M. S.; Pinterich, T.; Schobesberger, S.; Shilling, J.; Springston, S. R.; Thornton, J. A.; Tomlinson, J. M.; Wang, J.; Zelenyuk, A.

    2016-12-01

    Cumulus convection is an important component in the atmospheric radiation budget and hydrologic cycle over the southern Great Plains and over many regions of the world, particularly during the summertime growing season when intense turbulence induced by surface radiation couples the land surface to clouds. Current convective cloud parameterizations, however, contain uncertainties resulting from insufficient coincident data that couples cloud macrophysical and microphysical properties to inhomogeneity in surface layer, boundary layer, and aerosol properties. We describe the measurement strategy and preliminary findings from the recent Holistic Interactions of Shallow Clouds, Aerosols, and Land-Ecosystems (HI-SCALE) campaign conducted in May and September of 2016 in the vicinity of the DOE's Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site located in Oklahoma. The goal of the HI-SCALE campaign is to provide a detailed set of aircraft and surface measurements needed to obtain a more complete understanding and improved parameterizations of the lifecycle of shallow clouds. The sampling is done in two periods, one in the spring and the other in the late summer, to take advantage of variations in the "greenness" for various types of vegetation, new particle formation, anthropogenic enhancement of biogenic secondary organic aerosol (SOA), and other aerosol properties. The aircraft measurements will be coupled with extensive routine ARM SGP measurements as well as Large Eddy Simulation (LES), cloud resolving, and cloud-system resolving models. Through these integrated analyses and modeling studies, the effects of inhomogeneity in land use, vegetation, soil moisture, convective eddies, and aerosol properties on the evolution of shallow clouds will be determined, including the feedbacks of cloud radiative effects.

  4. A model study of bridge hydraulics

    Science.gov (United States)

    2010-08-01

    Most flood studies in the United States use the Army Corps of Engineers HEC-RAS (Hydrologic Engineering Center's River Analysis System) computer program. This study was carried out to compare results of HEC-RAS bridge modeling with laboratory e...

  5. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
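
The best-performing method in the comparison, fuzzy k-NN, differs from plain k-NN by assigning class memberships weighted by inverse distance to the neighbours (Keller et al., 1985). A minimal sketch of the prediction step on toy, made-up "financial ratio" data (the features, parameters, and data are illustrative assumptions, not the paper's dataset):

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-NN with crisp training labels: class memberships are
    distance-weighted votes of the k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                                  # k nearest neighbours
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0)) # fuzzifier weights
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[nn] == c].sum() for c in classes]) / w.sum()
    return classes[np.argmax(memberships)], memberships

# toy data: class 0 = healthy firms, class 1 = bankrupt firms
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # healthy cluster
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])  # distressed cluster
y = np.array([0, 0, 0, 1, 1, 1])
label, u = fuzzy_knn_predict(X, y, np.array([0.82, 0.83]))
print(label, u)
```

Unlike a crisp k-NN vote, the membership vector `u` conveys how confidently a firm is assigned to the bankrupt class, which is useful when the prediction feeds a risk decision.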

  6. Neonatal Seizure Models to Study Epileptogenesis

    Directory of Open Access Journals (Sweden)

    Yuka Kasahara

    2018-04-01

    Full Text Available Current therapeutic strategies for epilepsy include anti-epileptic drugs and surgical treatments that are mainly focused on the suppression of existing seizures rather than the occurrence of the first spontaneous seizure. These symptomatic treatments help a certain proportion of patients, but these strategies are not intended to clarify the cellular and molecular mechanisms underlying the primary process of epilepsy development, i.e., epileptogenesis. Epileptogenic changes include reorganization of neural and glial circuits, resulting in the formation of an epileptogenic focus. To achieve the goal of developing “anti-epileptogenic” drugs, we need to clarify the step-by-step mechanisms underlying epileptogenesis for patients whose seizures are not controllable with existing “anti-epileptic” drugs. Epileptogenesis has been studied using animal models of neonatal seizures because such models are useful for studying the latent period before the occurrence of spontaneous seizures and the lowering of the seizure threshold. Further, neonatal seizure models are generally easy to handle and can be applied for in vitro studies because cells in the neonatal brain are suitable for culture. Here, we review two animal models of neonatal seizures for studying epileptogenesis and discuss their features, specifically focusing on hypoxia-ischemia (HI-induced seizures and febrile seizures (FSs. Studying these models will contribute to identifying the potential therapeutic targets and biomarkers of epileptogenesis.

  7. Differential Equations Models to Study Quorum Sensing.

    Science.gov (United States)

    Pérez-Velázquez, Judith; Hense, Burkhard A

    2018-01-01

    Mathematical models to study quorum sensing (QS) have become an important tool to explore all aspects of this type of bacterial communication. A wide spectrum of mathematical tools and methods such as dynamical systems, stochastics, and spatial models can be employed. In this chapter, we focus on giving an overview of models consisting of differential equations (DE), which can be used to describe changing quantities, for example, the dynamics of one or more signaling molecules in time and space, often in conjunction with bacterial growth dynamics. The chapter is divided into two sections: ordinary differential equations (ODE) and partial differential equations (PDE) models of QS. Rates of change are represented mathematically by derivatives, i.e., in terms of DE. ODE models allow describing changes in one independent variable, for example, time. PDE models can be used to follow changes in more than one independent variable, for example, time and space. Both types of models often consist of systems (i.e., more than one equation) of equations, such as equations for bacterial growth and autoinducer concentration dynamics. Almost from the onset, mathematical modeling of QS using differential equations has been an interdisciplinary endeavor, and many of the works we review here will be placed into their biological context.
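
A minimal instance of the coupled system described above (bacterial growth plus autoinducer dynamics) is logistic growth feeding autoinducer production with positive feedback and first-order degradation. All parameter values are illustrative assumptions chosen for a readable demonstration, not taken from any specific QS study:

```python
import numpy as np

def simulate_qs(t_end=50.0, dt=0.01):
    """Minimal ODE model of quorum sensing: logistic bacterial growth N(t)
    coupled to autoinducer concentration A(t) with basal + induced
    production and first-order decay. Forward Euler integration."""
    r, K = 1.0, 1e9            # growth rate (1/h), carrying capacity (cells/ml)
    a, b = 1e-9, 1e-8          # basal and induced production rates (assumed)
    A_th, gamma = 10.0, 0.5    # activation threshold (nM), decay rate (1/h)
    n = int(t_end / dt)
    N, A = np.empty(n), np.empty(n)
    N[0], A[0] = 1e6, 0.0
    for i in range(n - 1):
        dN = r * N[i] * (1.0 - N[i] / K)                                  # logistic growth
        dA = (a + b * A[i]**2 / (A_th**2 + A[i]**2)) * N[i] - gamma * A[i]  # Hill-type feedback
        N[i + 1] = N[i] + dt * dN
        A[i + 1] = A[i] + dt * dA
    return N, A

N, A = simulate_qs()
print(f"final density {N[-1]:.2e} cells/ml, autoinducer {A[-1]:.1f} nM")
```

As the population approaches carrying capacity, the autoinducer level rises past the activation threshold, the positive feedback engages, and the system settles at an induced steady state, the qualitative behaviour these ODE models are built to capture.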

  8. Mixed models in cerebral ischemia study

    Directory of Open Access Journals (Sweden)

    Matheus Henrique Dal Molin Ribeiro

    2016-06-01

    Full Text Available Data modeling for longitudinal studies stands out in the current scientific scenario, especially in the health and biological sciences, where repeated measurements induce correlation between observations on the same unit. Thus, modeling the intra-individual dependency is required through the choice of a covariance structure able to accommodate the sample variability. A lack of appropriate methodology for correlated data analysis may increase the occurrence of type I or type II errors and under- or overestimate the standard errors of the model estimates. In the present study, a Gaussian mixed model was adopted for the response variable, latency, in an experiment investigating memory deficits in animals subjected to cerebral ischemia when treated with fish oil (FO). The model parameters were estimated by maximum likelihood methods. Based on the restricted likelihood ratio test and information criteria, an autoregressive covariance matrix was adopted for the errors. The diagnostic analyses for the model were satisfactory, and the results corroborate the biological evidence; that is, the FO treatment was effective in alleviating the cognitive effects caused by cerebral ischemia.
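
The autoregressive error structure chosen in the study has a simple closed form: for equally spaced measurements, the AR(1) covariance is Sigma_ij = sigma^2 * rho^|i-j|. The sketch below builds that matrix and evaluates the Gaussian log-likelihood of one subject's repeated measures; the data and parameter values are illustrative assumptions, not the experiment's estimates:

```python
import numpy as np

def ar1_cov(n_times, sigma2, rho):
    """AR(1) within-subject error covariance: Sigma_ij = sigma^2 * rho^|i-j|."""
    idx = np.arange(n_times)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def gls_loglik(y, X, beta, Sigma):
    """Gaussian log-likelihood of one subject's repeated measures given
    fixed effects beta and error covariance Sigma."""
    r = y - X @ beta                       # residuals about the fixed-effects fit
    _, logdet = np.linalg.slogdet(Sigma)
    n = len(y)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(Sigma, r))

Sigma = ar1_cov(4, sigma2=1.0, rho=0.6)
print(Sigma[0])                            # correlations decay geometrically with lag
y = np.array([1.0, 1.2, 0.9, 1.1])         # toy repeated measures for one animal
X = np.ones((4, 1))                        # intercept-only design
print(gls_loglik(y, X, np.array([1.05]), Sigma))
```

In a full mixed-model fit, this per-subject likelihood is summed over animals and maximized over beta, sigma^2, rho, and the random-effect variances; ignoring the off-diagonal terms (i.e., assuming independence) is exactly what inflates or deflates the standard errors the abstract warns about.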

  9. A Case Study Application Of Time Study Model In Paint ...

    African Journals Online (AJOL)

    This paper presents a case study in the development and application of a time study model in a paint manufacturing company. The organization specializes in the production of different grades of paint and paint containers. The paint production activities include; weighing of raw materials, drying of raw materials, dissolving ...

  10. Overhead distribution line models for harmonics studies

    Energy Technology Data Exchange (ETDEWEB)

    Nagpal, M.; Xu, W.; Dommel, H.W.

    1994-01-01

    Carson's formulae and Maxwell's potential coefficients are used for calculating the per-unit-length series impedances and shunt capacitances of the overhead lines. The per-unit-length values are then used for building the nominal pi-circuit and equivalent pi-circuit models at the harmonic frequencies. This paper studies the accuracy of these models for representing overhead distribution lines in steady-state harmonic solutions at frequencies up to 5 kHz. The models are verified with a field test on a 25 kV distribution line, and the sensitivity of the models to ground resistivity, skin effect, and multiple grounding is reported.
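
The distinction between the two circuit models can be sketched directly from the per-unit-length parameters: the nominal pi lumps z and y over the line length, while the equivalent pi applies the exact long-line (hyperbolic) correction. The feeder parameters below are assumed round numbers for a 25 kV class line, not values from the paper:

```python
import numpy as np

def line_pi_models(R, L, C, length, h, f0=60.0):
    """Nominal vs. equivalent pi-circuit of a distribution line at harmonic
    order h, from per-unit-length R (ohm/km), L (H/km), C (F/km)."""
    w = 2 * np.pi * f0 * h
    z = R + 1j * w * L                    # series impedance per km
    y = 1j * w * C                        # shunt admittance per km
    Z_nom, Y_nom = z * length, y * length # nominal pi: simple lumping
    gamma = np.sqrt(z * y)                # propagation constant
    Zc = np.sqrt(z / y)                   # characteristic impedance
    Z_eq = Zc * np.sinh(gamma * length)   # equivalent pi: exact correction
    Y_eq = 2 * np.tanh(gamma * length / 2) / Zc
    return (Z_nom, Y_nom), (Z_eq, Y_eq)

# illustrative feeder: 30 km line evaluated near 5 kHz (h = 83 at 60 Hz)
nom, eq = line_pi_models(R=0.3, L=1.0e-3, C=10e-9, length=30.0, h=83)
print("nominal Z:", nom[0], " equivalent Z:", eq[0])
```

At the fundamental frequency the two models agree closely, but near 5 kHz the line becomes an appreciable fraction of a wavelength and the nominal pi misses the standing-wave (resonance) behaviour, which is precisely why model accuracy at harmonic frequencies needs checking.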

  11. Parametric study of a thorium model

    International Nuclear Information System (INIS)

    Lourenco, M.C.; Lipztein, J.L.; Szwarcwald, C.L.

    1997-01-01

    Full text: Models of radionuclide distribution in the human body and dosimetry involve assumptions about the biokinetic behaviour of the material among compartments representing organs and tissues in the body. The lack of knowledge about the metabolic behaviour of a radionuclide represents a factor of uncertainty in estimates of committed dose equivalent. An important problem in biokinetic modeling is the correct assignment of transfer coefficients and biological half-lives to body compartments. The purpose of this study is to analyze the variability in the activities of the body compartments in relation to variations in the transfer coefficients and compartment biological half-lives in a given model. A thorium-specific recycling model for continuous exposure was used. Multiple regression analysis methods were applied to analyze the results.
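
The parametric-variation idea can be illustrated with a toy linear compartment model: activities obey dq/dt = intake + transfers, and perturbing one transfer coefficient shifts the steady-state burdens. The two-compartment structure and all rate values below are illustrative assumptions, not the published thorium model:

```python
import numpy as np

def steady_activities(k_blood_bone=0.4, k_blood_exc=0.5, k_bone_blood=0.01,
                      intake=1.0, t_end=2000.0, dt=0.1):
    """Toy two-compartment recycling model (blood <-> bone) under continuous
    intake, integrated by forward Euler to (near) steady state.
    Transfer coefficients are illustrative per-day values."""
    q = np.zeros(2)                      # activities: [blood, bone]
    for _ in range(int(t_end / dt)):
        dq_blood = (intake                               # continuous intake
                    - (k_blood_bone + k_blood_exc) * q[0]  # outflow from blood
                    + k_bone_blood * q[1])                 # recycling from bone
        dq_bone = k_blood_bone * q[0] - k_bone_blood * q[1]
        q += dt * np.array([dq_blood, dq_bone])
    return q

base = steady_activities()
perturbed = steady_activities(k_bone_blood=0.02)   # halve the bone half-life
print(base, perturbed)   # the bone burden roughly halves when clearance doubles
```

Repeating such runs over ranges of each coefficient and regressing the compartment activities on the inputs is, in spirit, the multiple-regression sensitivity analysis the abstract describes.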

  12. Process modeling study of the CIF incinerator

    International Nuclear Information System (INIS)

    Hang, T.

    1995-01-01

    The Savannah River Site (SRS) plans to begin operating the Consolidated Incineration Facility (CIF) in 1996. The CIF will treat liquid and solid low-level radioactive, mixed and RCRA hazardous wastes generated at SRS. In addition to experimental test programs, process modeling was applied to provide guidance in the areas of safety, environmental regulation compliance, process improvement, and optimization. A steady-state flowsheet model was used to calculate material/energy balances and to track key chemical constituents throughout the process units. Dynamic models were developed to predict the CIF transient characteristics in normal and abnormal operation scenarios. Predictions include the rotary kiln heat transfer, dynamic responses of the CIF to fluctuations in the solid waste feed or upsets in system equipment, performance of the control system, air in-leakage in the kiln, etc. This paper reviews the modeling study performed to assist in the deflagration risk assessment.

  13. Leggett's noncontextual model studied with neutrons

    International Nuclear Information System (INIS)

    Durstberger-Rennhofer, K.; Sponar, S.; Badurek, G.; Hasegawa, Y.; Schmitzer, C.; Bartosik, H.; Klepp, J.

    2011-01-01

    Full text: It is a long-lasting debate whether nature can be described by deterministic hidden variable theories (HVT) underlying quantum mechanics (QM). Bell inequalities for local HVT as well as the Kochen-Specker theorem for non-contextual models stress the conflict between these alternative theories and QM. Leggett showed that even nonlocal hidden variable models are incompatible with quantum predictions. Neutron interferometry and polarimetry are well-suited tools for analysing the behaviour of single-neutron systems, where entanglement is created between different degrees of freedom (e.g., spin/path, spin/energy) and thus quantum contextuality can be studied. We report the first experimental test of a contextual model of quantum mechanics à la Leggett, which deals with the definiteness of measurement results before the measurements. The results show a discrepancy between our model and quantum mechanics of more than 7 standard deviations and confirm quantum indefiniteness under the contextual condition. (author)

  14. Parametric study of a thorium model

    International Nuclear Information System (INIS)

    Lourenco, M.C.; Lipsztein, J.L.; Szwarcwald, C.L.

    2002-01-01

    Models of radionuclide distribution in the human body and dosimetry involve assumptions about the biokinetic behavior of the material among compartments representing organs and tissues in the body. One of the most important problems in biokinetic modeling is the assignment of transfer coefficients and biological half-lives to body compartments. In Brazil there are many areas of high natural radioactivity, where the population is chronically exposed to radionuclides of the thorium series. The uncertainties of the thorium biokinetic model are a major cause of uncertainty in the estimates of the committed dose equivalent of the population living in high-background areas. The purpose of this study is to discuss the variability in the thorium activities accumulated in the body compartments in relation to variations in the transfer coefficients and compartment biological half-lives of a thorium-recycling model for continuous exposure. Multiple regression analysis methods were applied to analyze the results. (author)

  15. Parametric study for horizontal steam generator modelling

    Energy Technology Data Exchange (ETDEWEB)

    Ovtcharova, I. [Energoproekt, Sofia (Bulgaria)

    1995-12-31

    In the presentation, some of the calculated results of modelling the horizontal steam generator PGV-440 with RELAP5/Mod3 are described. Two nodalization schemes have been used, with different components in the steam dome. The effect of parameter variations on steam generator behaviour and on the calculated results is studied for cases with separator and branch components.

  17. Experimental and modelling studies of infiltration

    International Nuclear Information System (INIS)

    Giudici, M.

    2004-01-01

    This presentation describes a study of infiltration in unsaturated soil with the objective of estimating the recharge to a phreatic aquifer. The study area is at the border of the city of Milan (Northern Italy), which draws water for both domestic and industrial purposes from groundwater resources located beneath the urban area. The rate of water pumping from the aquifer system varied during the XX century, depending upon the number of inhabitants and the development of industrial activities. This caused variations with time in the depth of the water table below the ground surface and, in turn, some emergencies: the two most prominent episodes correspond to the mid-'70s, when the water table in the city centre was about 30 m below the undisturbed natural conditions, and to the last decade, when the water table rose at a rate of approximately 1 m/year and caused infiltration into deep constructions (garages and building foundations, the underground railways, etc.). We have developed four groundwater flow models at different scales which share some characteristics: they are based on a quasi-3D approximation (horizontal flow in the aquifers and vertical flow in the aquitards) and conservative finite-difference schemes on a regular grid with square cells in the horizontal plane, and they are implemented with proprietary computer codes. Among the problems studied during the development of these models were numerical issues related to the behaviour of the phreatic aquifer under conditions of strong exploitation. Model calibration and validation for ModMil was performed with a two-stage process, i.e., using some of the available data for model calibration and the remaining data for model validation. The application of geophysical exploration techniques, in particular seismic and geo-electrical prospecting, has been very useful in completing the data and information on the hydro-geological structure obtained from stratigraphic logs.

  18. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  19. Pitting corrosion of copper. Further model studies

    International Nuclear Information System (INIS)

    Taxen, C.

    2002-08-01

    The work presented in this report is a continuation and expansion of a previous study. The aim of the work is to provide background information about pitting corrosion of copper for a safety analysis of copper canisters for final deposition of radioactive waste. A mathematical model for the propagation of corrosion pits is used to estimate the conditions required for stationary propagation of a localised anodic corrosion process. The model uses equilibrium data for copper and its corrosion products and parameters for the aqueous mass transport of dissolved species. In the present work we have used, in the model, a more extensive set of aqueous and solid compounds and equilibrium data from a different source. The potential dependence of pitting in waters with different compositions is studied in greater detail. More waters have been studied, and single-parameter variations in the composition of the water have been studied over wider ranges of concentration. The conclusions drawn in the previous study are not contradicted by the present results. However, the combined effect of potential and water composition on the possibility of pitting corrosion is more complex than was realised. In the previous study we found what seemed to be a continuous aggravation of a pitting situation by increasing potentials. The present results indicate that pitting corrosion can take place only over a certain potential range and that there is an upper potential limit for pitting as well as a lower. A sensitivity analysis indicates that the model gives meaningful predictions of the minimum pitting potential also when relatively large errors in the input parameters are allowed for.

  20. Analytical study of anisotropic compact star models

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, B.V. [Bulgarian Academy of Science, Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria)

    2017-11-15

    A simple classification is given of the anisotropic relativistic star models, resembling the one of charged isotropic solutions. On the ground of this database, and taking into account the conditions for physically realistic star models, a method is proposed for generating all such solutions. It is based on the energy density and the radial pressure as seeding functions. Numerous relations between the realistic conditions are found and the need for a graphic proof is reduced just to one pair of inequalities. This general formalism is illustrated with an example of a class of solutions with linear equation of state and simple energy density. It is found that the solutions depend on three free constants and concrete examples are given. Some other popular models are studied with the same method. (orig.)
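
For orientation, the hydrostatic equilibrium underlying such anisotropic star models is usually the generalized Tolman-Oppenheimer-Volkoff equation; the standard form is reproduced below in geometric units (G = c = 1) as background, not as a quotation from the paper:

```latex
\frac{dp_r}{dr} = -\frac{(\rho + p_r)\left(m + 4\pi r^3 p_r\right)}{r\,(r - 2m)}
                  + \frac{2\,(p_t - p_r)}{r},
\qquad
m(r) = 4\pi \int_0^r \rho(r')\, r'^2 \, dr'
```

Here rho is the energy density and p_r, p_t the radial and tangential pressures; the anisotropy Delta = p_t - p_r vanishes for isotropic stars. Choosing rho and p_r as seeding functions, as the abstract describes, determines m(r) and then the required p_t from this equation.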

  1. Integrated core-edge-divertor modeling studies

    International Nuclear Information System (INIS)

    Stacey, W.M.

    2001-01-01

    An integrated calculation model for simulating the interaction of physics phenomena taking place in the plasma core, in the plasma edge, and in the SOL and divertor of tokamaks has been developed and applied to study such interactions. The model synthesises a combination of numerical calculations: (1) the power and particle balances for the core plasma, using empirical confinement scaling laws and taking into account radiation losses; (2) the particle, momentum and power balances in the SOL and divertor, taking into account the effects of radiation and recycling neutrals; (3) the transport of fuelling and recycling neutrals, explicitly representing divertor and pumping geometry; and (4) edge pedestal gradient scale lengths and widths; evaluation of theoretical predictions: (5) confinement degradation due to thermal instabilities in the edge pedestals, (6) detachment and divertor MARFE onset, (7) core MARFE onset leading to an H-L transition, and (8) radiative collapse leading to a disruption; and evaluation of empirical fits: (9) power thresholds for the L-H and H-L transitions and (10) the width of the edge pedestals. The various components of the calculation model are coupled and must be iterated to a self-consistent convergence. The model was developed over several years for the purpose of interpreting various edge phenomena observed in DIII-D experiments and has thereby, to some extent, been benchmarked against experiment. Because the model treats the interactions of various phenomena in the core, edge and divertor, yet is computationally efficient, it lends itself to the investigation of the effects of different choices of edge plasma operating conditions on overall divertor and core plasma performance. Studies of the effect of fuelling location and rate, divertor geometry, plasma shape, pumping and other edge parameters on core plasma properties (line average density, confinement, density limit, etc.) have been performed for DIII-D model problems.

  2. Mathematical modelling a case studies approach

    CERN Document Server

    Illner, Reinhard; McCollum, Samantha; Roode, Thea van

    2004-01-01

    Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...

  3. Visualization study of operators' plant knowledge model

    International Nuclear Information System (INIS)

    Kanno, Tarou; Furuta, Kazuo; Yoshikawa, Shinji

    1999-03-01

    Nuclear plants are typically very complicated systems, and extremely high levels of safety are required in their operation. Since it is never possible to include all possible anomaly scenarios in an education/training curriculum, plant knowledge formation is desired for operators, to enable them to act against unexpected anomalies based on knowledge-based decision making. The authors have conducted a study on operators' plant knowledge models for the purpose of supporting operators' efforts in forming this kind of plant knowledge. In this report, an integrated plant knowledge model consisting of a configuration space, causality space, goal space and status space is proposed. The authors examined the appropriateness of this model and developed a prototype system to support knowledge formation by visualizing, on a software system, the operators' knowledge model and the decision-making process in knowledge-based actions. Finally, the feasibility of this prototype as a supportive method in operator education/training to enhance operators' ability in knowledge-based performance has been evaluated. (author)

  4. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  6. Model study radioecology Biblis. Vol. 2

    International Nuclear Information System (INIS)

    1976-01-01

    The second part of the study deals with questions concerning the release of radioactive substances into the environment via the air pathway and their dispersion. Following an introductory lecture on basic principles, the interrelations within the whole complex under consideration are discussed. For the release itself, the fundamental issues concerning the type, quantity and time behaviour of the radioactive substances are established. With regard to dispersion, calculation models and their parameters are reviewed. Like the first colloquium, this one aims at compiling existing models and data, establishing what is known and which questions remain open, and formulating tasks for further work. Thirteen questions emerged. They are concerned, above all, with the conditions for release and dispersion at the site. These questions will be dealt with in the form of partial studies by individual participants or by task groups. Results will be reported at further colloquia. (orig.) [de

  7. Simulating the 2012 High Plains Drought Using Three Single Column Model Versions of the Community Earth System Model (SCM-CESM)

    Science.gov (United States)

    Medina, I. D.; Denning, S.

    2014-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited in that they use conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought, and will perform numerical simulations using three single column model versions of the Community Earth System Model (SCM-CESM) at multiple sites overlying the Ogallala Aquifer for the 2010-2012 period. In the first version of SCM-CESM, CESM will be used in standard mode (Community Atmospheric Model (CAM) coupled to a single instance of the Community Land Model (CLM)); in the second, CESM will be used in Super-Parameterized mode (SP-CESM), where a cloud-resolving model (CRM) consisting of 32 atmospheric columns replaces the standard CAM atmospheric parameterization and is coupled to a single instance of CLM; and in the third, CESM will be used in "Multi Instance" SP-CESM mode, where an instance of CLM is coupled to each CRM column of SP-CESM (32 CRM columns coupled to 32 instances of CLM). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of SCM-CESM, differences in simulated energy and moisture fluxes will be computed between years for the 2010-2012 period, and will be compared to differences calculated using

  8. Experimental study and modelling of transient boiling

    International Nuclear Information System (INIS)

    Baudin, Nicolas

    2015-01-01

    A failure in the control system of the power of a nuclear reactor can lead to a Reactivity Initiated Accident in a nuclear power plant. A power peak then occurs in some fuel rods, high enough to drive the coolant into film boiling, which leads to an important increase of the temperature of the rod. The possible risk of clad failure is a matter of interest for the Institut de Radioprotection et de Surete Nucleaire. Transient boiling heat transfer is not yet well understood or modelled. An experimental set-up has been built at the Institut de Mecanique des Fluides de Toulouse (IMFT). Subcooled HFE-7000 flows vertically upward in a semi-annular test section. The inner half cylinder simulates the clad and is made of a stainless steel foil heated by Joule effect. Its temperature is measured by an infrared camera, coupled with a high-speed camera for visualization of the flow topology. The whole boiling curve is studied in steady-state and transient regimes: convection, onset of boiling, nucleate boiling, critical heat flux, film boiling and rewetting. The steady-state heat transfers are well modelled by literature correlations. Models are suggested for the transient heat flux: the convection and nucleate boiling evolutions are self-similar during a power step. This observation makes it possible to model more complex evolutions, such as temperature ramps. The transient Hsu model represents the onset of nucleate boiling well. When the intensity of the power step increases, film boiling begins at the same temperature but with an increasing heat flux. For power ramps, the critical heat flux decreases while the corresponding temperature increases with the heating rate. When the wall is heated, the film boiling heat transfer is higher than in steady state, but the mechanism is not yet understood. A two-fluid model simulates film boiling cooling and rewetting well. (author)

  9. Simulation model for studying low frequency microinstabilities

    International Nuclear Information System (INIS)

    Lee, W.W.; Okuda, H.

    1976-03-01

    A 2½-dimensional, electrostatic particle code in a slab geometry has been developed to study low frequency oscillations such as drift wave and trapped particle instabilities in a nonuniform bounded plasma. A drift approximation for the electron transverse motion is made, which eliminates the high frequency oscillations at the electron gyrofrequency and its multiples. It is therefore possible to study nonlinear effects such as the anomalous transport of plasmas within a reasonable computing time using a real mass ratio. Several examples are given to check the validity and usefulness of the model.
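
    The drift approximation mentioned above replaces the electrons' fast gyromotion with their guiding-centre E x B drift, v = (E x B)/|B|^2. A minimal sketch of that drift velocity (the field values below are illustrative, not from the paper):

```python
def exb_drift(E, B):
    """Guiding-centre E x B drift velocity, v = (E x B) / |B|^2."""
    ex, ey, ez = E
    bx, by, bz = B
    # Cross product E x B, component by component
    cross = (ey * bz - ez * by, ez * bx - ex * bz, ex * by - ey * bx)
    b2 = bx * bx + by * by + bz * bz
    return tuple(c / b2 for c in cross)

# Uniform E along x, B along z: the drift is along -y with speed E/B
print(exb_drift((2.0, 0.0, 0.0), (0.0, 0.0, 4.0)))  # (0.0, -0.5, 0.0)
```

Because this drift is independent of the gyrophase, following it instead of the full orbit removes the gyrofrequency timescale, which is the time-step saving the abstract describes.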

  10. BIOMASS REBURNING - MODELING/ENGINEERING STUDIES

    International Nuclear Information System (INIS)

    Vladimir Zamansky; David Moyeda; Mark Sheldon

    2000-01-01

    This project is designed to develop engineering and modeling tools for a family of NOx control technologies utilizing biomass as a reburning fuel. During the tenth reporting period (January 1-March 31, 2000), EER and the NETL R and D group continued to work on Tasks 2, 3, 4, and 5. Information regarding these tasks will be included in the next Quarterly Report. This report includes (Appendix 1) a conceptual design study for the introduction of biomass reburning in a working coal-fired utility boiler. This study was conducted under the coordinated SBIR program funded by the U.S. Department of Agriculture.

  11. Technical data report : marine acoustics modelling study

    Energy Technology Data Exchange (ETDEWEB)

    Chorney, N.; Warner, G.; Austin, M. [Jasco Applied Sciences, Victoria, BC (Canada)

    2010-07-01

    This study was conducted to predict the ensonification produced by vessel traffic transiting to and from the Enbridge Northern Gateway Project's marine terminal located near Kitimat, British Columbia (BC). An underwater acoustic propagation model was used to model frequency bands from 20 Hz to 5 kHz at a standard depth of 20 metres. The model included bathymetric grids of the modelling area; underwater sound speed as a function of depth; and geo-acoustic profiles based on the stratified composition of the seafloor. The obtained 1/3 octave band levels were then used to determine broadband received sound levels for 4 scenarios along various transit routes: Langara and Triple Island in Dixon Entrance; Browning Entrance in Hecate Strait; and Cape St. James in the Queen Charlotte Basin. Each scenario consisted of a tanker transiting at 16 knots and an accompanying tug boat. Underwater sound level maps for each scenario were presented. 14 refs., 5 tabs., 16 figs.
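
    The broadband levels described in this record are obtained by power-summing the 1/3-octave band levels; decibel values cannot be added directly. A minimal sketch of that summation (the band levels below are illustrative, not values from the study):

```python
import math

def broadband_level(band_levels_db):
    """Power-sum fractional-octave band levels (dB) into one broadband level."""
    total_power = sum(10 ** (level / 10) for level in band_levels_db)
    return 10 * math.log10(total_power)

# Illustrative received levels (dB re 1 uPa) for a handful of bands
bands = [120.0, 118.5, 117.0, 115.0, 112.0]
print(round(broadband_level(bands), 1))  # 124.3
```

Note that two equal bands sum to only 3 dB above either one, which is why the loudest few bands dominate the broadband result.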

  12. Explicit prediction of ice clouds in general circulation models

    Science.gov (United States)

    Kohler, Martin

    1999-11-01

    Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in terms of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time-period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the life-time of layer-clouds. Optically thick ice clouds decay significantly slower than expected from pure microphysical crystal fall-out (τ_cld = 0.9-1.4 h as opposed to the no-motion τ_micro = 0.5-0.7 h). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destruction of the inversion above cloud-top and the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer life-times (>1 day) in the presence of radiational cooling. The resulting reduction in saturation mixing ratio provides a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993). The model predicts cloud liquid water and cloud ice separately, and is extended to split the ice phase into suspended cloud ice (predicted
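
    The cloud decay timescale and the pure fall-out timescale quoted above can be compared by assuming simple exponential decay of the ice water path. A minimal sketch; the timescale values are illustrative midpoints of the ranges in the abstract, not model output:

```python
import math

def remaining_fraction(t_hours, tau_hours):
    """Fraction of ice water path left after time t, assuming exponential decay."""
    return math.exp(-t_hours / tau_hours)

# Illustrative midpoints: radiatively maintained cloud (~1.2 h)
# vs. pure crystal fall-out with no motions (~0.6 h)
tau_cld, tau_micro = 1.2, 0.6
print(remaining_fraction(1.0, tau_cld) > remaining_fraction(1.0, tau_micro))  # True
```

After one hour the radiatively maintained cloud retains more than twice the ice of the fall-out-only case, which is the slower decay the experiments attribute to IR-driven turbulent fluxes.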

  13. Simulating the 2012 High Plains Drought Using Three Single Column Models (SCM)

    Science.gov (United States)

    Medina, I. D.; Baker, I. T.; Denning, S.; Dazlich, D. A.

    2015-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought and perform numerical simulations using three single column model (SCM) versions of BUGS5 (Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)). In the first version of BUGS5, the model is used in its standard bulk setting (a single atmospheric column coupled to a single instance of SiB3); in the second, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud-resolving model (CRM) consisting of 32 atmospheric columns, replaces the single CSU GCM atmospheric parameterization and is coupled to a single instance of SiB3; and in the third version of BUGS5, an instance of SiB3 is coupled to each CRM column of the SP-CAM (32 CRM columns coupled to 32 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated by all three versions of BUGS5, differences in simulated energy and moisture fluxes are computed between the 2011 and 2012 periods and are compared to those calculated using observational data from the AmeriFlux Tower Network for the same period at the ARM Site in Lamont, OK.
This research
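
    The configurations described in this and the companion SCM record differ mainly in how CRM column fluxes are mapped onto land-model instances: one land column seeing the CRM-mean flux versus one land column per CRM column. A toy sketch of that mapping (not BUGS5/CESM code; names and values are illustrative):

```python
import random

def couple_step(crm_fluxes, n_land_instances):
    """Toy coupling: map CRM column fluxes onto land-model instances.

    n_land_instances == 1 mimics single-instance coupling (the land model
    sees only the CRM-mean flux); n_land_instances == len(crm_fluxes)
    mimics multi-instance coupling (one land column per CRM column).
    """
    if n_land_instances == 1:
        return [sum(crm_fluxes) / len(crm_fluxes)]  # grid-mean forcing
    assert n_land_instances == len(crm_fluxes)
    return list(crm_fluxes)                          # column-by-column forcing

fluxes = [random.uniform(50, 400) for _ in range(32)]  # W m^-2, illustrative
print(len(couple_step(fluxes, 1)), len(couple_step(fluxes, 32)))  # 1 32
```

The multi-instance mapping preserves the sub-grid variability of the surface forcing that the grid-mean mapping averages away, which is the heterogeneity argument these abstracts make.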

  14. Variational study of the pair hopping model

    International Nuclear Information System (INIS)

    Fazekas, P.

    1990-01-01

    We study the ground state of a Hamiltonian introduced by Kolb and Penson for modelling situations in which small electron pairs are formed. The Hamiltonian consists of a tight binding band term, and a term describing the nearest neighbour hopping of electron pairs. We give a Gutzwiller-type variational treatment, first with a single-parameter Ansatz treated in the single site Gutzwiller approximation, and then with more complicated trial wave functions, and an improved Gutzwiller approximation. The calculation yields a transition from a partially paired normal state, in which the spin susceptibility has a diminished value, into a fully paired state. (author). 16 refs, 2 figs

  15. A Simple Model to Study Tau Pathology

    Directory of Open Access Journals (Sweden)

    Alexander L. Houck

    2016-01-01

    Tau proteins play a role in the stabilization of microtubules, but in pathological conditions, tauopathies, tau is modified by phosphorylation and can form aberrant aggregates. These aggregates could be toxic to cells, and different cell models have been used to test for compounds that might prevent these tau modifications. Here, we have used a cell model involving the overexpression of human tau in human embryonic kidney 293 cells. In human embryonic kidney 293 cells expressing tau in a stable manner, we have been able to replicate the phosphorylation of intracellular tau. This intracellular tau increases its own level of phosphorylation and aggregates, likely due to the regulatory effect of some growth factors on specific tau kinases such as GSK3. In these conditions, a change in secreted tau was observed. Reversal of phosphorylation and aggregation of tau was achieved by the use of lithium, a GSK3 inhibitor. Thus, we propose this as a simple cell model to study tau pathology in nonneuronal cells, given their viability and ease of handling.

  16. Risk modelling study for carotid endarterectomy.

    Science.gov (United States)

    Kuhan, G; Gardiner, E D; Abidia, A F; Chetter, I C; Renwick, P M; Johnson, B F; Wilkinson, A R; McCollum, P T

    2001-12-01

    The aims of this study were to identify factors that influence the risk of stroke or death following carotid endarterectomy (CEA) and to develop a model to aid in comparative audit of vascular surgeons and units. A series of 839 CEAs performed by four vascular surgeons between 1992 and 1999 was analysed. Multiple logistic regression analysis was used to model the effect of 15 possible risk factors on the 30-day risk of stroke or death. Outcome was compared for four surgeons and two units after adjustment for the significant risk factors. The overall 30-day stroke or death rate was 3.9 per cent (29 of 741). Heart disease, diabetes and stroke were significant risk factors. The 30-day predicted stroke or death rates increased with increasing risk scores. The observed 30-day stroke or death rate was 3.9 per cent for both vascular units and varied from 3.0 to 4.2 per cent for the four vascular surgeons. Differences in the outcomes between the surgeons and vascular units did not reach statistical significance after risk adjustment. Diabetes, heart disease and stroke are significant risk factors for stroke or death following CEA. The risk score model identified patients at higher risk and aided in comparative audit.
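
    A multiple logistic regression model of the kind described above turns a patient's risk factors into a predicted 30-day stroke or death probability via p = 1 / (1 + exp(-(b0 + Σ b_i x_i))). A minimal sketch with hypothetical coefficients (the study's fitted values are not reproduced here):

```python
import math

def predicted_risk(intercept, coefs, risk_factors):
    """Predicted event probability from a fitted logistic regression model."""
    logit = intercept + sum(b * x for b, x in zip(coefs, risk_factors))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical coefficients for the three significant factors identified
# (heart disease, diabetes, prior stroke); NOT the paper's fitted values.
b0, betas = -3.5, [0.8, 0.7, 0.9]
patient = [1, 0, 1]  # heart disease and prior stroke present, no diabetes
print(round(predicted_risk(b0, betas, patient), 3))  # 0.142
```

Comparative audit then proceeds by summing such predicted risks over a surgeon's case mix and comparing the expected event count with the observed one.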

  17. A study on an optimal movement model

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton BN1 9QH, UK (United Kingdom); Zhang, Kewei [SMS, Sussex University, Brighton BN1 9QH (United Kingdom); Luo Yousong [Department of Mathematics and Statistics, RMIT University, GOP Box 2476V, Melbourne, Vic 3001 (Australia)

    2003-07-11

    We present an analytical and rigorous study on a TOPS (task optimization in the presence of signal-dependent noise) model with a hold-on or an end-point control. Optimal control signals are rigorously obtained, which enables us to investigate various issues about the model including its trajectories, velocities, control signals, variances and the dependence of these quantities on various model parameters. With the hold-on control, we find that the optimal control can be implemented with an almost 'nil' hold-on period. The optimal control signal is a linear combination of two sub-control signals. One of the sub-control signals is positive and the other is negative. With the end-point control, the end-point variance is dramatically reduced, in comparison with the hold-on control. However, the velocity is not symmetric (bell shape). Finally, we point out that the velocity with a hold-on control takes the bell shape only within a limited parameter region.

  18. Evaluating Cloud and Precipitation Processes in Numerical Models using Current and Potential Future Satellite Missions

    Science.gov (United States)

    van den Heever, S. C.; Tao, W. K.; Skofronick Jackson, G.; Tanelli, S.; L'Ecuyer, T. S.; Petersen, W. A.; Kummerow, C. D.

    2015-12-01

    Cloud, aerosol and precipitation processes play a fundamental role in the water and energy cycle. It is critical to accurately represent these microphysical processes in numerical models if we are to better predict cloud and precipitation properties on weather through climate timescales. Much has been learned about cloud properties and precipitation characteristics from NASA satellite missions such as TRMM, CloudSat, and more recently GPM. Furthermore, data from these missions have been successfully utilized in evaluating the microphysical schemes in cloud-resolving models (CRMs) and global models. However, there are still many uncertainties associated with these microphysics schemes. These uncertainties can be attributed, at least in part, to the fact that microphysical processes cannot be directly observed or measured, but instead have to be inferred from those cloud properties that can be measured. Evaluation of microphysical parameterizations is becoming increasingly important as enhanced computational capabilities are facilitating the use of more sophisticated schemes in CRMs, and as future global models are being run on what has traditionally been regarded as cloud-resolving scales using CRM microphysical schemes. In this talk we will demonstrate how TRMM, CloudSat and GPM data have been used to evaluate different aspects of current CRM microphysical schemes, providing examples of where these approaches have been successful. We will also highlight CRM microphysical processes that have not been well evaluated and suggest approaches for addressing such issues. Finally, we will introduce a potential NASA satellite mission, the Cloud and Precipitation Processes Mission (CAPPM), which would facilitate the development and evaluation of different microphysical-dynamical feedbacks in numerical models.

  19. Model study radioecology Biblis. Vol. 1

    International Nuclear Information System (INIS)

    1976-01-01

    The first part of the study deals with questions concerning radiation exposure due to the release of radioactive substances into the environment via the aquatic pathway. The discussion is preceded by a lecture on the basis for the calculations and a lecture on the results of primary radiological model calculations. The colloquium aims at establishing existing knowledge, coordinating divergent opinions, and elaborating the questions still open and the possibilities for dealing with them. On the whole, ten questions have emerged. They are mainly concerned with site conditions. These questions will be dealt with in the form of partial studies by individual participants or by task groups. The results will be discussed at further colloquia. (orig.) [de

  20. Facilitating the Easy Use of Earth Observation Data in Earth System Models through CyberConnector

    Science.gov (United States)

    Di, L.; Sun, Z.; Zhang, C.

    2017-12-01

    Earth system models (ESMs) are an important tool used to understand the Earth system and predict its future states. Earth observations (EO), on the other hand, provide the current state of the system. EO data are very useful in ESM initialization, verification, validation, and inter-comparison. However, EO data often cannot be directly consumed by ESMs because of the syntactic and semantic mismatches between EO products and ESM requirements. In order to remove the mismatches, scientists normally spend a long time customizing EO data for ESM consumption. CyberConnector, an NSF EarthCube building block, is intended to automate the data customization so that scientists can be relieved from this laborious work. CyberConnector uses web-service-based geospatial processing models (GPMs) as the mechanism to automatically customize the EO data into the right products in the right form needed by ESMs. It can support many different ESMs through its standard interfaces. It consists of seven modules: GPM designer, GPM binder, GPM runner, GPM monitor, resource register, order manager, and result display. In CyberConnector, EO data instances and GPMs are independent and loosely coupled. A modeler only needs to create a GPM in the GPM designer for EO data customization. Once the modeler specifies a study area, the designed GPM will be activated and will take the temporal and spatial extents as constraints to search the data sources and customize the available EO data into an ESM-acceptable form. The execution of a GPM is completely automatic. Currently CyberConnector has been fully developed. In order to validate the feasibility, flexibility, and ESM independence of CyberConnector, three ESMs from different geoscience disciplines, including the Cloud-Resolving Model (CRM), the Finite Volume Coastal Ocean Model (FVCOM), and the Community Multiscale Air Quality Model (CMAQ), have been tested with CyberConnector in close collaboration with modelers.
In the experiment

  1. Cloud Feedbacks on Greenhouse Warming in a Multi-Scale Modeling Framework with a Higher-Order Turbulence Closure

    Science.gov (United States)

    Cheng, Anning; Xu, Kuan-Man

    2015-01-01

    Five-year simulation experiments with a multi-scale modeling framework (MMF) with an advanced intermediately prognostic higher-order turbulence closure (IPHOC) in its cloud-resolving model (CRM) component, also known as SPCAM-IPHOC (super-parameterized Community Atmospheric Model), are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous doubling of the CO2 concentration with SST held fixed at present-day values. SPCAM-IPHOC has substantially improved the representation of low-level clouds compared with SPCAM. It is therefore expected that the cloud responses to greenhouse warming in SPCAM-IPHOC are more realistic. The changes in rising motion, surface precipitation, cloud cover, and shortwave and longwave cloud radiative forcing in SPCAM-IPHOC under greenhouse warming will be presented.

  2. Animal models for HCV and HBV studies

    Directory of Open Access Journals (Sweden)

    Isabelle Chemin

    2007-02-01

    develop fulminant hepatitis, acute hepatitis, or chronic liver disease after adoptive transfer, and others spontaneously develop hepatocellular carcinoma (HCC). Among HCV transgenic mice, most develop no disease, but acute hepatitis has been observed in one model, and HCC in another. Although mice are not susceptible to HBV and HCV, their ability to replicate these viruses and to develop liver diseases characteristic of human infections provides opportunities to study pathogenesis and develop novel therapeutics. In the search for the mechanism of hepatocarcinogenesis in hepatitis viral infection, two viral proteins, the core protein of hepatitis C virus (HCV) and the HBx protein of hepatitis B virus (HBV), have been shown to possess oncogenic potential through transgenic mouse studies, indicating the direct involvement of the hepatitis viruses in hepatocarcinogenesis.

    This may explain the very high frequency of HCC in patients with HCV or HBV infection.

    Chimpanzees remain the only recognized animal model for the study of hepatitis C virus (HCV). Studies performed in chimpanzees played a critical role in the discovery of HCV and are continuing to play an essential role in defining the natural history of this important human pathogen. In the absence of a reproducible cell culture system, the infectivity titer of HCV challenge pools can be determined only in chimpanzees.

    Recent studies in chimpanzees have provided new insight into the nature of host immune responses, particularly the intrahepatic responses, following primary and secondary experimental HCV infections. The immunogenicity and efficacy of vaccine candidates against HCV can be tested only in chimpanzees. Finally, it would not have been possible to demonstrate

  3. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are directly measured, as in a field survey, and furthermore the structure and physical properties of the subsurface are known. The measured datasets from physical modeling are therefore very desirable as input data for testing the efficiency of various inversion algorithms. An underground structure formed by intrusion, which can often be seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. These physical modeling data are thus valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and estimating the capability of software such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  4. An animal model to study regenerative endodontics.

    Science.gov (United States)

    Torabinejad, Mahmoud; Corr, Robert; Buhrley, Matthew; Wright, Kenneth; Shabahang, Shahrokh

    2011-02-01

    A growing body of evidence is demonstrating the possibility for regeneration of tissues within the pulp space and continued root development in teeth with necrotic pulps and open apices. There are areas of research related to regenerative endodontics that need to be investigated in an animal model. The purpose of this study was to evaluate ferret cuspid teeth as a model for investigating factors involved in regenerative endodontics. Six young male ferrets between 36 and 133 days of age were used in this investigation. Each animal was anesthetized and perfused with 10% buffered formalin. Block sections including the mandibular and maxillary cuspid teeth and their surrounding periapical tissues were obtained, radiographed, decalcified, sectioned, and stained with hematoxylin-eosin to determine various stages of apical closure in these teeth. The permanent mandibular and maxillary cuspid teeth with open apices erupted approximately 50 days after birth. Initial signs of closure of the apical foramen in these teeth were observed between 90 and 110 days. Complete apical closure was observed in the cuspid teeth when the animals were 133 days old. Based on the experiment, ferret cuspid teeth can be used to investigate various factors involved in regenerative endodontics that cannot be tested in human subjects. The most appropriate time to conduct the experiments would be when the ferrets are between the ages of 50 and 90 days. Copyright © 2011. Published by Elsevier Inc.

  5. An Exploratory Study: Assessment of Modeled Dioxin ...

    Science.gov (United States)

    EPA has released an external review draft entitled, An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios (External Review Draft). The public comment period and the external peer-review workshop are separate processes that provide opportunities for all interested parties to comment on the document. In addition to consideration by EPA, all public comments submitted in accordance with this notice will also be forwarded to EPA’s contractor for the external peer-review panel prior to the workshop. EPA has released this draft document solely for the purpose of pre-dissemination peer review under applicable information quality guidelines. This document has not been formally disseminated by EPA. It does not represent and should not be construed to represent any Agency policy or determination. The purpose of this report is to describe an exploratory investigation of potential dioxin exposures to artists/hobbyists who use ball clay to make pottery and related products.

  6. Ovine model for studying pulmonary immune responses

    International Nuclear Information System (INIS)

    Joel, D.D.; Chanana, A.D.

    1984-01-01

    Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with 125I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables.

  7. Ovine model for studying pulmonary immune responses

    Energy Technology Data Exchange (ETDEWEB)

    Joel, D.D.; Chanana, A.D.

    1984-11-25

    Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with 125I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables.

  8. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-07-14

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, such as tropical cyclones. Six research themes have been chosen: (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling.
The results of this research project

  9. Study on Uncertainty and Contextual Modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Roč. 1, č. 1 (2007), s. 12-15 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords : Knowledge * contextual modelling * temporal modelling * uncertainty * knowledge management Subject RIV: BD - Theory of Information

  10. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    Science.gov (United States)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on Damköhler number (Da)---the ratio of turbulent to evaporative time-scales. 
(3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as
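The Damköhler number in point (2) can be made concrete with a short sketch (illustrative only; the eddy-turnover estimate tau_mix = (L^2/eps)^(1/3) is a standard inertial-range scaling, and the parameter values in the usage note are assumed rather than taken from this study):

```python
def eddy_turnover_time(L, eps):
    """Inertial-range eddy turnover time tau_mix = (L^2 / eps)^(1/3)
    for a mixing event of size L [m] and dissipation rate eps [m^2/s^3]."""
    return (L * L / eps) ** (1.0 / 3.0)

def damkohler(L, eps, tau_evap):
    """Da = tau_mix / tau_evap. Da >> 1 means droplets in entrained
    filaments evaporate before turbulence homogenizes the parcel,
    i.e. mixing is inhomogeneous."""
    return eddy_turnover_time(L, eps) / tau_evap
```

For example, L = 100 m and eps = 1e-3 m^2 s^-3 give tau_mix of roughly 215 s, so an evaporative time-scale of a few seconds places the event deep in the inhomogeneous (Da >> 1) regime.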

  11. Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones

    Science.gov (United States)

    Colle, Brian A.; Molthan, Andrew; Yu, Ruyi; Stark, David; Yuter, Sandra; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Measurement (GPM) Cold-season Precipitation Experiment (GCPEx), as well as several years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY, on the north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. The non-spherical snow assumptions (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than the spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is 0.25 meters per second too slow with its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were 0.25 meters per second too

  12. Biomass thermochemical gasification: Experimental studies and modeling

    Science.gov (United States)

    Kumar, Ajay

    The overall goals of this research were to study biomass thermochemical gasification using experimental and modeling techniques, and to evaluate the cost of industrial gas production and combined heat and power generation. This dissertation includes an extensive review of progress in biomass thermochemical gasification. Product gases from biomass gasification can be converted to biopower, biofuels and chemicals. The study also summarizes the technical challenges in gasification and in downstream processing of product gas that must be overcome for viable commercial application. Corn stover and dried distillers grains with solubles (DDGS), a non-fermentable byproduct of ethanol production, were used as the biomass feedstocks. One of the objectives was to determine selected physical and chemical properties of corn stover related to thermochemical conversion. The parameters of the reaction kinetics for weight loss were obtained. The next objective was to investigate the effects of temperature, steam to biomass ratio and equivalence ratio on gas composition and efficiencies. DDGS gasification was performed on a lab-scale fluidized-bed gasifier with steam and air as fluidizing and oxidizing agents. Increasing the temperature resulted in increases in hydrogen and methane contents and efficiencies. A model was developed to simulate the performance of a lab-scale gasifier using Aspen Plus(TM) software. Mass balance, energy balance and minimization of Gibbs free energy were applied for the gasification to determine the product gas composition. The final objective was to optimize the process by maximizing the net energy efficiency, and to estimate the cost of industrial gas, and combined heat and power (CHP) at a biomass feedrate of 2000 kg/h. The selling price of gas was estimated to be $11.49/GJ for corn stover, and $13.08/GJ for DDGS. For CHP generation, the electrical and net efficiencies were 37 and 86%, respectively for corn stover, and 34 and 78%, respectively for DDGS. 
For
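The Gibbs-minimization step described above fixes the equilibrium product-gas composition; for a single reaction it reduces to solving an equilibrium-constant equation. A minimal sketch for the water-gas shift reaction (CO + H2O <-> CO2 + H2), with the equilibrium constant K and the feed mole numbers chosen purely for illustration, not taken from the dissertation:

```python
def wgs_extent(n_co, n_h2o, n_co2, n_h2, K):
    """Equilibrium extent e of CO + H2O <-> CO2 + H2 (total moles are
    conserved, so pressure cancels out). Solves
        (n_co2 + e)(n_h2 + e) = K (n_co - e)(n_h2o - e)
    by bisection; the residual f(e) is monotone on the feasible interval."""
    lo = -min(n_co2, n_h2)   # extent cannot drive any species negative
    hi = min(n_co, n_h2o)

    def f(e):
        return (n_co2 + e) * (n_h2 + e) - K * (n_co - e) * (n_h2o - e)

    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With an equimolar CO/steam feed and K = 1, the extent is 0.5, i.e. a 25/25/25/25 mol% product gas; a full gasifier model minimizes Gibbs free energy over many such species simultaneously.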

  13. Mathematical modelling with case studies using Maple and Matlab

    CERN Document Server

    Barnes, B

    2014-01-01

    Introduction to Mathematical Modeling: Mathematical models; An overview of the book; Some modeling approaches; Modeling for decision making. Compartmental Models: Introduction; Exponential decay and radioactivity; Case study: detecting art forgeries; Case study: Pacific rats colonize New Zealand; Lake pollution models; Case study: Lake Burley Griffin; Drug assimilation into the blood; Case study: dull, dizzy, or dead?; Cascades of compartments; First-order linear DEs; Equilibrium points and stability; Case study: money, money, money makes the world go around. Models of Single Populations: Exponential growth; Density-
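Several of the chapters listed (lake pollution, drug assimilation, cascades of compartments) rest on the same first-order linear ODE; a minimal sketch of that common building block, with generic parameter names not taken from the book:

```python
import math

def compartment_conc(t, V, F, C_in, C0):
    """Well-mixed single compartment, V dC/dt = F * (C_in - C), so
    C(t) = C_in + (C0 - C_in) * exp(-F t / V).
    V: compartment volume, F: through-flow rate, C_in: inflow concentration,
    C0: initial concentration."""
    return C_in + (C0 - C_in) * math.exp(-F * t / V)
```

The e-folding time is V/F: after one flushing time the concentration has moved a factor e closer to the inflow value, which is the key quantity in lake-pollution case studies of this kind.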

  14. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the void fraction axial distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation present adequate results but with a tendency to overestimate the experimental results, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt
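The recommended drift-flux approach can be sketched with the Zuber-Findlay relation; setting the distribution parameter to 1 and the drift velocity to 0 recovers the homogeneous (no-slip) model, which then predicts the higher void fraction, consistent with the overestimation noted above. The parameter values here are typical illustrative numbers, not those of the study:

```python
def void_fraction_drift_flux(x, rho_l, rho_g, G, C0=1.13, v_gj=0.2):
    """Zuber-Findlay drift-flux void fraction:
        alpha = x / ( C0*(x + (1-x)*rho_g/rho_l) + rho_g*v_gj/G )
    x: flow quality [-], rho_l/rho_g: phase densities [kg/m^3],
    G: mass flux [kg/m^2/s], v_gj: drift velocity [m/s]."""
    return x / (C0 * (x + (1.0 - x) * rho_g / rho_l) + rho_g * v_gj / G)

def void_fraction_homogeneous(x, rho_l, rho_g):
    """Homogeneous model = drift-flux with C0 = 1 and no drift (no slip)."""
    return void_fraction_drift_flux(x, rho_l, rho_g, G=1.0, C0=1.0, v_gj=0.0)
```

At roughly 70 bar (rho_l about 740, rho_g about 36 kg/m^3), x = 0.1 and G = 1000 kg/m^2/s, the homogeneous model gives alpha of about 0.70 versus about 0.59 from the drift-flux form.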

  15. Study of dissolution process and its modelling

    Directory of Open Access Journals (Sweden)

    Juan Carlos Beltran-Prieto

    2017-01-01

    The use of mathematical concepts and language to describe and represent the interactions and dynamics of a system is known as a mathematical model. Mathematical modelling finds a huge number of successful applications across science, social science and engineering fields, including biology, chemistry, physics, computer science, artificial intelligence, bioengineering, finance and economics. In this research, we aim to propose a mathematical model that predicts the dissolution of a solid material immersed in a fluid. The developed model can be used to evaluate the rate of mass transfer and the mass transfer coefficient. Further research is expected to be carried out to use the model as a base to develop useful models for the pharmaceutical industry to gain information about the dissolution of medicaments in the bloodstream, which could play a key role in the formulation of medicaments.
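The mass-transfer evaluation described above can be sketched with the classical first-order (Noyes-Whitney type) dissolution law, an assumption consistent with, but not quoted from, the paper; all numbers in the test are illustrative:

```python
import math

def dissolved_conc(t, k, A, V, Cs, C0=0.0):
    """First-order dissolution, dC/dt = (k*A/V) * (Cs - C):
    k: mass-transfer coefficient [m/s], A: solid surface area [m^2],
    V: fluid volume [m^3], Cs: saturation concentration."""
    return Cs + (C0 - Cs) * math.exp(-k * A * t / V)

def mass_transfer_coeff(t1, C1, A, V, Cs, C0=0.0):
    """Invert the solution above to estimate k from one timed sample."""
    return -V / (A * t1) * math.log((Cs - C1) / (Cs - C0))
```

The inversion in `mass_transfer_coeff` is exactly the kind of calculation the abstract refers to: timed concentration samples yield the mass transfer coefficient directly.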

  16. Bariatric Outcomes and Obesity Modeling: Study Meeting

    Science.gov (United States)

    2010-09-17

    ...developed a cost-effectiveness model and a payer-based budget and fiscal impact tool to compare bariatric surgical procedures to non-operative ... SURVIVAL MODELED FROM NHIS-NDI • Statistical analysis adapts the methods from Schauer 2010. • Logistic regression model is used to predict the 5-year

  17. Model boiler studies on deposition and corrosion

    International Nuclear Information System (INIS)

    Balakrishnan, P.V.; McVey, E.G.

    1977-09-01

    Deposit formation was studied in a model boiler, with sea-water injections to simulate the in-leakage which could occur from sea-water cooled condensers. When All Volatile Treatment (AVT) was used for chemistry control the deposits consisted of the sea-water salts and corrosion products. With sodium phosphate added to the boiler water, the deposits also contained the phosphates derived from the sea-water salts. The deposits were formed in layers of differing compositions. There was no significant corrosion of the Fe-Ni-Cr alloy boiler tube under deposits, either on the open area of the tube or in crevices. However, carbon steel that formed a crevice around the tube was corroded severely when the boiler water did not contain phosphate. The observed corrosion of carbon steel was caused by the presence of acidic, highly concentrated chloride solution produced from the sea-water within the crevice. Results of theoretical calculations of the composition of the concentrated solution are presented. (author)

  18. Bayesian graphical models for genomewide association studies.

    Science.gov (United States)

    Verzilli, Claudio J; Stallard, Nigel; Whittaker, John C

    2006-07-01

    As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being collected in such studies. We use discrete graphical models as a data-mining tool, searching for single- or multilocus patterns of association around a causative site. The approach is fully Bayesian, allowing us to incorporate prior knowledge on the spatial dependencies around each marker due to linkage disequilibrium, which reduces considerably the number of possible graphical structures. A Markov chain Monte Carlo scheme is developed that yields samples from the posterior distribution of graphs conditional on the data, from which probabilistic statements about the strength of any genotype-phenotype association can be made. Using data simulated under scenarios that vary in marker density, genotype relative risk of a causative allele, and mode of inheritance, we show that the proposed approach has better localization properties and leads to lower false-positive rates than do single-locus analyses. Finally, we present an application of our method to a quasi-synthetic data set in which data from the CYP2D6 region are embedded within simulated data on 100K single-nucleotide polymorphisms. Analysis is quick (<5 min), and we are able to localize the causative site to a very short interval.
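The Markov chain Monte Carlo scheme the authors describe samples graph structures; its core accept/reject logic can be sketched with a generic Metropolis sampler over binary inclusion indicators. This is a toy stand-in: the log-posterior, marker count and thresholds below are invented for illustration and are not from the paper:

```python
import math
import random

def metropolis_binary(log_post, n, steps=2000, seed=1):
    """Metropolis sampler over binary indicator vectors (a stand-in for
    edge/marker-inclusion structures); the proposal flips one coordinate."""
    rng = random.Random(seed)
    state = [0] * n
    lp = log_post(state)
    samples = []
    for _ in range(steps):
        i = rng.randrange(n)
        prop = state.copy()
        prop[i] ^= 1  # flip one inclusion bit
        lp_prop = log_post(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if math.log(max(rng.random(), 1e-300)) < lp_prop - lp:
            state, lp = prop, lp_prop
        samples.append(tuple(state))
    return samples

# Toy log-posterior: markers 0 and 1 are favoured (a "causative region"),
# the rest penalized; posterior inclusion frequencies should reflect this.
toy_log_post = lambda s: 3.0 * (s[0] + s[1]) - 2.0 * sum(s[2:])
samples = metropolis_binary(toy_log_post, 5)
freq = [sum(s[i] for s in samples) / len(samples) for i in range(5)]
```

Averaging the indicator over the chain gives the posterior inclusion frequency of each marker, the same kind of probabilistic statement of association strength the paper extracts from its graph samples.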

  19. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    Science.gov (United States)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.;  Mlawer, E.;  Morcrette, J.-J.;  O'Hirok, W.;  Räisänen, P.;  Ramaswamy, V.;  Ritter, B.;  Rozanov, E.;  Schlesinger, M.;  Shibata, K.;  Sporyshev, P.;  Sun, Z.;  Wendisch, M.;  Wood, N.;  Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m^-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for
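Genres (iii) and (iv) differ only in the rule used to combine layer cloud fractions into a total cloud cover. A minimal sketch of the two rules; the maximum/random form follows the widely used Geleyn-Hollingsworth expression, assumed here rather than taken from this paper, and is valid for layer fractions strictly below 1:

```python
def total_cloud_random(fracs):
    """Random overlap: C = 1 - prod_k (1 - c_k)."""
    clear = 1.0
    for c in fracs:
        clear *= 1.0 - c
    return 1.0 - clear

def total_cloud_max_random(fracs):
    """Maximum/random overlap: adjacent cloudy layers overlap maximally,
    layers separated by clear air overlap randomly:
    clear = (1 - c_1) * prod_{k>=2} (1 - max(c_k, c_{k-1})) / (1 - c_{k-1})."""
    clear = 1.0 - fracs[0]
    for k in range(1, len(fracs)):
        clear *= (1.0 - max(fracs[k], fracs[k - 1])) / (1.0 - fracs[k - 1])
    return 1.0 - clear
```

Two adjacent 30% layers give 30% total cover under maximum/random but 51% under random overlap; inserting a clear layer between them makes both rules agree at 51%, which is why the two genres can produce quite different domain-averaged irradiances for the same cloud profile.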

  20. Simulation of the intraseasonal variability over the Eastern Pacific ITCZ in climate models

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Xianan [Univ. of California, Los Angeles, CA (United States); Waliser, Duane E. [California Inst. of Technology (CalTech), La Canada Flintridge, CA (United States). Jet Propulsion Lab.; Kim, Daehyun [Columbia Univ., New York, NY (United States); Zhao, Ming [Princeton Univ., NJ (United States); Sperber, Kenneth R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Stern, William F. [Princeton Univ., NJ (United States); Schubert, Siegfried D. [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Zhang, Guang J. [Scripps Institute of Oceanography. La Jolla, California (United States); Wang, Wanqiu [National Oceanic and Atmospheric Administration (NOAA), National Centers for Environmental Protection. Camp Springs, MD (United States); Khairoutdinov, Marat [Institute for Terrestrial and Planetary Atmospheres. Stony Brook Univ., NY (United States); Neale, Richard B. [National Center for Atmospheric Research. Boulder, CO (United States); Lee, Myong-In [Ulsan National Institute for Science and Technology. Seoul (Korea)

    2012-08-01

    During boreal summer, convective activity over the eastern Pacific (EPAC) inter-tropical convergence zone (ITCZ) exhibits vigorous intraseasonal variability (ISV). Previous observational studies identified two dominant ISV modes over the EPAC, i.e., a 40-day mode and a quasi-biweekly mode (QBM). The 40-day ISV mode is generally considered a local expression of the Madden-Julian Oscillation. However, in addition to the eastward propagation, northward propagation of the 40-day mode is also evident. The QBM mode bears a smaller spatial scale than the 40-day mode, and is largely characterized by northward propagation. While the ISV over the EPAC exerts significant influences on regional climate/weather systems, investigation of contemporary model capabilities in representing these ISV modes over the EPAC is limited. In this study, the model fidelity in representing these two dominant ISV modes over the EPAC is assessed by analyzing six atmospheric and three coupled general circulation models (GCMs), including one super-parameterized GCM (SPCAM) and one recently developed high-resolution GCM (GFDL HIRAM) with horizontal resolution of about 50 km. While it remains challenging for GCMs to faithfully represent these two ISV modes including their amplitude, evolution patterns, and periodicities, encouraging simulations are also noted. In general, SPCAM and HIRAM exhibit relatively superior skill in representing the two ISV modes over the EPAC. While the advantage of SPCAM is achieved through explicit representation of the cumulus process by the embedded 2-D cloud resolving models, the improved representation in HIRAM could be ascribed to the employment of a strongly entraining plume cumulus scheme, which inhibits the deep convection, and thus effectively enhances the stratiform rainfall. The sensitivity tests based on HIRAM also suggest that fine horizontal resolution could also be conducive to realistically capture the ISV over the EPAC, particularly for the QBM mode

  3. Theoretical studies of Anderson impurity models

    International Nuclear Information System (INIS)

    Glossop, M.T.

    2000-01-01

    A Local Moment Approach (LMA) is developed for single-particle excitations of a symmetric single impurity Anderson model (SIAM) with a soft-gap hybridization vanishing at the Fermi level, Δ_I ∝ |ω|^r with r > 0, and for the generic asymmetric case of the 'normal' (r = 0) SIAM. In all cases we work within a two-self-energy description with local moments introduced explicitly from the outset, and in which single-particle excitations are coupled dynamically to low-energy transverse spin fluctuations. For the soft-gap symmetric SIAM, the resultant theory is applicable on all energy scales, and captures both the spin-fluctuation regime of strong coupling (large U), as well as the weak coupling regime where it is perturbatively exact for those r-domains in which perturbation theory in U is non-singular. While the primary emphasis is on single-particle dynamics, the quantum phase transition between strong coupling (SC) and local moment (LM) phases can also be addressed directly; for the spin-fluctuation regime in particular a number of asymptotically exact results are thereby obtained, notably for the behaviour of the critical U_c(r) separating SC/LM states and the Kondo scale ω_m(r) characteristic of the SC phase. Results for both single-particle spectra and SC/LM phase boundaries are found to agree well with recent numerical renormalization group (NRG) studies, and a number of further testable predictions are made. Single-particle spectra are examined systematically for both SC and LM states, in particular for the r > 0 SC phase which, in agreement with conclusions drawn from recent NRG work, may be viewed as a non-trivial but natural generalization of Fermi liquid physics. We also reinvestigate the problem via the NRG in light of the predictions arising from the LMA: all are borne out and excellent agreement is found. For the asymmetric single impurity Anderson model (ASIAM) we establish general conditions which must be satisfied

  4. Theoretical study on optical model potential

    International Nuclear Information System (INIS)

    Lim Hung Gi.

    1984-08-01

    The optical model potential including the non-local effect at the rounded edge of the potential is derived. On the basis of this potential, the functional form of the optical model potential, the energy dependence and interrelation of its parameters, and the dependence of the parameter values on energy are shown in this paper. (author)

  5. Clinton River Sediment Transport Modeling Study

    Science.gov (United States)

    The U.S. Army Corps of Engineers develops sediment transport models for tributaries to the Great Lakes that discharge to Areas of Concern (AOCs). The models help state and local agencies evaluate better approaches to soil conservation and non-point source pollution prevention.

  6. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  7. Some lessons and thoughts from development of an old-fashioned high-resolution atmospheric general circulation model

    Science.gov (United States)

    Ohfuchi, Wataru; Enomoto, Takeshi; Yoshioka, Mayumi K.; Takaya, Koutarou

    2014-05-01

    spectral transform AGCMs, such as AFES, have no future. Developing globally homogeneous nonhydrostatic cloud-resolving grid AGCMs is obviously a straightforward direction for the future. However, these models will be very expensive for many users for a while, perhaps for the next few decades. On the other hand, old-fashioned AGCMs with a grid interval of 20-100 km will remain accurate and efficient tools for many users for many years to come. Also, by coupling with a fine-resolution regional nonhydrostatic model, a conventional AGCM may overcome its limitations for use in climate and weather studies in the future.

  8. Glistening-region model for multipath studies

    Science.gov (United States)

    Groves, Gordon W.; Chow, Winston C.

    1998-07-01

    The goal is to achieve a model of radar sea reflection with improved fidelity that is amenable to practical implementation. The geometry of reflection from a wavy surface is formulated. The sea surface is divided into two components: the smooth 'chop' consisting of the longer wavelengths, and the 'roughness' of the short wavelengths. Ordinary geometric reflection from the chop surface is broadened by the roughness. This same representation serves both for forward scatter and backscatter (sea clutter). The 'Road-to-Happiness' approximation, in which the mean sea surface is assumed cylindrical, simplifies the reflection geometry for low-elevation targets. The effect of surface roughness is assumed to make the sea reflection coefficient depend on the 'Deviation Angle' between the specular and the scattering directions. The 'specular' direction is that into which energy would be reflected by a perfectly smooth facet. Assuming that the ocean waves are linear and random allows use of Gaussian statistics, greatly simplifying the formulation by allowing representation of the sea chop by three parameters. An approximation of 'low waves' and retention of the sea-chop slope components only through second order provide further simplification. The simplifying assumptions make it possible to take the predicted 2D ocean wave spectrum into account in the calculation of sea-surface radar reflectivity, and to provide algorithms supporting an operational system for target tracking in the presence of multipath. The product will be of use in simulation studies to evaluate trade-offs among alternative tracking schemes, and will form the basis of a tactical system for ship defense against low flyers.

  9. Study on Standard Fatigue Vehicle Load Model

    Science.gov (United States)

    Huang, H. Y.; Zhang, J. P.; Li, Y. H.

    2018-02-01

    Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model applicable to areas in the middle and late stages of industrialization was obtained using the equivalent-damage principle, Miner's linear accumulation law, the water discharge method and damage-ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability. It is of reference value for the fatigue design of bridges in China.

  10. Amorphous track models: a numerical comparison study

    DEFF Research Database (Denmark)

    Greilich, Steffen; Grzanka, Leszek; Hahn, Ute

    Amorphous track models such as Katz' Ion-Gamma-Kill (IGK) approach [1, 2] or the Local Effect Model (LEM) [3, 4] have had reasonable success in predicting the response of solid state dosimeters and radiobiological systems. LEM is currently applied in radiotherapy for biological dose optimization in carbon ion treatment at the particle facility HIT in Heidelberg. Apparent differences between the LEM and the Katz model are the way interactions of individual particle tracks and extended targets are handled. Complex scenarios, however, can mask the actual effect of these differences. Here, we…

  11. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive the large-scale forcing that is used to drive single-column models (SCMs) and cloud-resolving models (CRMs)/large eddy simulations (LES) for evaluating fast-physics parameterizations in climate models. The forcing fields are derived using a newly developed multi-scale data assimilation (MS-DA) system. This DA system is built on top of the NCEP Gridpoint Statistical Interpolation (GSI) system and is implemented in the Weather Research and Forecasting (WRF) model at a cloud-resolving resolution of 2 km. The approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracy comparable to the existing continuous forcing product and, overall, better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which are not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependence on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  12. Pulse radiolysis studies of model membranes

    International Nuclear Information System (INIS)

    Heijman, M.G.J.

    1984-01-01

    In this thesis, the influence of membrane structure on the processes occurring in cell membranes was examined. Different models of membranes were evaluated. Pulse radiolysis was used as the technique to examine the membranes. (R.B.)

  13. Lower Monumental Spillway Hydraulic Model Study

    National Research Council Canada - National Science Library

    Wilhelms, Steven

    2003-01-01

    A 1:40 Froudian Scale model was used to investigate the hydraulic performance of the Lower Monumental Dam spillway, stilling basin, and tailrace for dissolved gas reduction and stilling basin apron scour...

  14. A mathematical model study of suspended monorail

    OpenAIRE

    Viktor GUTAREVYCH

    2012-01-01

    The mathematical model of suspended monorail track with allowance for elastic strain which occurs during movement of the monorail carriage was developed. Standard forms for single span and double span of suspended monorail sections were established.

  15. A mathematical model study of suspended monorail

    Directory of Open Access Journals (Sweden)

    Viktor GUTAREVYCH

    2012-01-01

    The mathematical model of suspended monorail track with allowance for elastic strain which occurs during movement of the monorail carriage was developed. Standard forms for single span and double span of suspended monorail sections were established.

  16. STRESS RESPONSE STUDIES USING ANIMAL MODELS

    Science.gov (United States)

    This presentation will provide evidence that ozone exposure in animal models induces a neuroendocrine stress response and that this stress response modulates lung injury and inflammation through adrenergic and glucocorticoid receptors.

  17. Neuronal Models for Studying Tau Pathology

    Directory of Open Access Journals (Sweden)

    Thorsten Koechling

    2010-01-01

    Alzheimer's disease (AD) is the most frequent neurodegenerative disorder leading to dementia in the aged human population. It is characterized by the presence of two main pathological hallmarks in the brain: senile plaques containing β-amyloid peptide, and neurofibrillary tangles (NFTs) consisting of fibrillar polymers of abnormally phosphorylated tau protein. Both of these histological characteristics of the disease have been simulated in genetically modified animals, which today include numerous mouse, fish, worm, and fly models of AD. The objective of this review is to present some of the main animal models that exist for reproducing symptoms of the disorder, and their advantages and shortcomings as models of the pathological processes. Moreover, we discuss the results and conclusions which have been drawn from the use of these models so far, and their contribution to the development of therapeutic applications for AD.

  18. A micromagnetic study of domain structure modeling

    International Nuclear Information System (INIS)

    Matsuo, Tetsuji; Mimuro, Naoki; Shimasaki, Masaaki

    2008-01-01

    To develop a mesoscopic model of magnetic-domain behavior, a domain structure model (DSM) was examined and compared with a micromagnetic simulation. The domain structure of this model is given by several domains with uniform magnetization vectors and domain walls. The directions of the magnetization vectors and the locations of the domain walls are determined so as to minimize the total magnetic energy of the magnetic material. The DSM was modified to improve its ability to represent domain behavior: the domain wall energy is multiplied by a vanishing factor to represent the disappearance of a magnetic domain, and the sequential quadratic programming procedure is divided into two steps to improve the energy minimization process. A comparison with micromagnetic simulation shows that the modified DSM improves the representation accuracy of the magnetization process.

  19. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land surface interaction processes, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the development of the microphysics and its performance within the multi-scale modeling system will be presented.

  20. Geomagnetic field models for satellite angular motion studies

    Science.gov (United States)

    Ovchinnikov, M. Yu.; Penkov, V. I.; Roldugin, D. S.; Pichuzhkina, A. V.

    2018-03-01

    Four geomagnetic field models are discussed: IGRF and the inclined, direct and simplified dipoles. Geomagnetic induction vector expressions are provided in different reference frames, and the behavior of the induction vector is compared across the models. The applicability of the models to the analysis of satellite motion is studied from theoretical and engineering perspectives. Relevant satellite dynamics analysis cases using analytical and numerical techniques are provided; these cases demonstrate the benefit of a particular model for a specific dynamics study. Recommendations for model usage are summarized at the end.
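
    For orientation, the simplest class of model mentioned above, a centered dipole, gives a field magnitude that is easy to sketch. The constants and function name below are illustrative, not taken from the paper.

```python
import math

B0 = 3.12e-5   # approximate equatorial surface field of the centered dipole, tesla
RE = 6371.2e3  # mean Earth radius, metres

def dipole_field_magnitude(r, mag_lat_rad):
    """|B| of a centered dipole at geocentric distance r and magnetic latitude:
    |B| = B0 * (RE/r)^3 * sqrt(1 + 3 sin^2(lat))."""
    return B0 * (RE / r) ** 3 * math.sqrt(1.0 + 3.0 * math.sin(mag_lat_rad) ** 2)

Beq = dipole_field_magnitude(RE, 0.0)           # equatorial surface field
Bpole = dipole_field_magnitude(RE, math.pi / 2)  # polar surface field, exactly 2 * Beq
```

    IGRF adds higher-order spherical harmonic terms on top of this leading dipole term; the inclined and simplified dipoles differ in how the dipole axis is oriented relative to the rotation axis.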

  1. Model Servqual Dengan Pendekatan Structural Equation Modeling (Studi Pada Mahasiswa Sistem Informasi)

    OpenAIRE

    Nurfaizal, Yusmedi

    2015-01-01

    This study is entitled "MODEL SERVQUAL DENGAN PENDEKATAN STRUCTURAL EQUATION MODELING (Studi Pada Mahasiswa Sistem Informasi)" (Servqual Model with a Structural Equation Modeling Approach: A Study of Information Systems Students). The aim of this study is to characterize the Servqual model for information systems students using a Structural Equation Modeling approach. The researcher decided to take a sample of 100 respondents. SEM analysis was used to test the model. The results show that tangibility, reliability, responsiveness, assurance and empathy have an influence...

  2. Study on geo-information modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana

    2006-01-01

    Vol. 5, No. 5 (2006), pp. 1108-1113 ISSN 1109-2777 Institutional research plan: CEZ:AV0Z10750506 Keywords : control GIS * geo-information modelling * uncertainty * spatial temporal approach * Web Services Subject RIV: BC - Control Systems Theory

  3. Studies on chemoviscosity modeling for thermosetting resins

    Science.gov (United States)

    Bai, J. M.; Hou, T. H.; Tiwari, S. N.

    1987-01-01

    A new analytical model for simulating the chemoviscosity of thermosetting resins has been formulated. The model is developed by modifying the well-established Williams-Landel-Ferry (WLF) theory in polymer rheology for thermoplastic materials. By introducing a relationship between the glass transition temperature Tg(t) and the degree of cure alpha(t) of the resin system under cure, the WLF theory can be modified to account for reaction time. Temperature-dependent functions of the modified WLF theory constants C1(t) and C2(t) were determined from the isothermal cure data. Theoretical predictions of the model for the resin under dynamic heating cure cycles were shown to compare favorably with the experimental data. This work represents progress toward establishing a chemoviscosity model which is capable not only of describing viscosity profiles accurately under various cure cycles, but also of correlating viscosity data to the changes in physical properties associated with the structural transformation of thermosetting resin systems during cure.
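
    The classic WLF relation underlying the model is log10(eta(T)/eta(Tg)) = -C1(T - Tg)/(C2 + (T - Tg)). A minimal sketch follows; the parameter values, the linear Tg-advance with degree of cure, and the function names are hypothetical, not the paper's fitted C1(t), C2(t) functions.

```python
def wlf_viscosity(T, Tg, eta_g, C1, C2):
    """Classic WLF shift: eta(T) = eta_g * 10**(-C1*(T-Tg)/(C2+(T-Tg))), valid for T > Tg."""
    dT = T - Tg
    return eta_g * 10.0 ** (-C1 * dT / (C2 + dT))

def modified_wlf_viscosity(T, alpha_cure, eta_g, C1, C2, Tg0=350.0, k=120.0):
    """Sketch of the cure-dependent idea: Tg advances with degree of cure alpha.
    The linear law Tg = Tg0 + k*alpha is a placeholder, not the paper's relation."""
    Tg = Tg0 + k * alpha_cure
    return wlf_viscosity(T, Tg, eta_g, C1, C2)

# At T == Tg the shift factor is 1, so the viscosity equals eta_g.
eta_at_Tg = wlf_viscosity(400.0, 400.0, 1.0e3, 17.44, 51.6)
```

    As cure proceeds, Tg rises toward the cure temperature and the predicted viscosity climbs, which is the qualitative behavior the modified model captures.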

  4. A Study of Simple Diffraction Models

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1997-01-01

    Three different models for calculating edge diffraction are examined. The methods of Vanderkooy, Terai and Biot & Tolstoy are compared with measurements. Although a good agreement is obtained, the measurements also show that none of the methods work completely satisfactorily. The desired properties...

  5. Preliminary report on electromagnetic model studies

    Science.gov (United States)

    Frischknecht, F.C.; Mangan, G.B.

    1960-01-01

    More than 70 response curves for various models have been obtained using the slingram and turam electromagnetic methods. Results show that for the slingram method, horizontal co-planar coils are usually more sensitive than vertical co-axial or vertical co-planar coils. The shape of the anomaly is usually simpler for the vertical coils.

  6. Animal models to study plaque vulnerability

    NARCIS (Netherlands)

    Schapira, K.; Heeneman, S.; Daemen, M. J. A. P.

    2007-01-01

    The need to identify and characterize vulnerable atherosclerotic lesions in humans has led to the development of various animal models of plaque vulnerability. In this review, current concepts of the vulnerable plaque as it leads to an acute coronary event are described, such as plaque rupture,

  7. Improved scheme for parametrization of convection in the Met Office's Numerical Atmospheric-dispersion Modelling Environment (NAME)

    Science.gov (United States)

    Meneguz, Elena; Thomson, David; Witham, Claire; Kusmierczyk-Michulec, Jolanta

    2015-04-01

    NAME is a Lagrangian atmospheric dispersion model used by the Met Office to predict the dispersion of both natural and man-made contaminants in the atmosphere, e.g. volcanic ash, radioactive particles and chemical species. Atmospheric convection is responsible for transport and mixing of air resulting in a large exchange of heat and energy above the boundary layer. Although convection can transport material through the whole troposphere, convective clouds have a small horizontal length scale (of the order of few kilometres). Therefore, for large-scale transport the horizontal scale on which the convection exists is below the global NWP resolution used as input to NAME and convection must be parametrized. Prior to the work presented here, the enhanced vertical mixing generated by non-resolved convection was reproduced by randomly redistributing Lagrangian particles between the cloud base and cloud top with probability equal to 1/25th of the NWP predicted convective cloud fraction. Such a scheme is essentially diffusive and it does not make optimal use of all the information provided by the driving meteorological model. To make up for these shortcomings and make the parametrization more physically based, the convection scheme has been recently revised. The resulting version, presented in this paper, is now based on the balance equation between upward, entrainment and detrainment fluxes. In particular, upward mass fluxes are calculated with empirical formulas derived from Cloud Resolving Models and using the NWP convective precipitation diagnostic as closure. The fluxes are used to estimate how many particles entrain, move upward and detrain. Lastly, the scheme is completed by applying a compensating subsidence flux. The performance of the updated convection scheme is benchmarked against available observational data of passive tracers. In particular, radioxenon is a noble gas that can undergo significant long range transport: this study makes use of observations of
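
    The pre-revision mixing scheme described above (random redistribution of particles between cloud base and cloud top with probability 1/25th of the convective cloud fraction) can be sketched roughly as follows. The function name, the per-time-step application, and the assumption that only particles already inside the convective column are moved are mine, not NAME's.

```python
import numpy as np

rng = np.random.default_rng(0)

def redistribute(z, cloud_base, cloud_top, conv_cloud_fraction):
    """Diffusive convective mixing: each particle inside the convective column is
    relocated to a uniformly random height in [cloud_base, cloud_top] with
    probability conv_cloud_fraction / 25 per time step (sketch of the old scheme)."""
    z = np.asarray(z, dtype=float).copy()
    p = conv_cloud_fraction / 25.0
    in_column = (z >= cloud_base) & (z <= cloud_top)
    move = in_column & (rng.random(z.size) < p)
    z[move] = rng.uniform(cloud_base, cloud_top, size=int(move.sum()))
    return z

z = rng.uniform(0.0, 12000.0, size=10000)       # particle heights in metres
z_new = redistribute(z, cloud_base=1000.0, cloud_top=9000.0, conv_cloud_fraction=0.5)
```

    The revised scheme replaces this uniform reshuffling with entrainment/detrainment driven by upward mass fluxes closed on the NWP convective precipitation diagnostic, plus a compensating subsidence flux.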

  8. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  9. Study on developing energy-macro model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Duk [Korea Energy Economics Institute, Euiwang (Korea, Republic of)]

    1999-12-01

    This study analyzed the effect of international oil prices on the domestic economy: the transmission path, the time lag, and the magnitude of the effect. First, it examined whether a long-term relationship exists between the international oil price and domestic prices, focusing on cointegration, and estimated the dynamic price fluctuation using an error correction model. Moreover, using a structural VAR model, it analyzed the shock responses of domestic macroeconomic variables to increases in the international oil price. The results indicate that prices increase in the long term rather than the short term as the international oil price rises: the short-term effect is estimated to be insignificant because of direct price control by the government, and the spreading effect on the economy then appears over the long term as the price control deepens. (author). 16 refs., 3 figs., 10 tabs.
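
    As a rough illustration of the error-correction setup mentioned above (two-step Engle-Granger style), the sketch below estimates the adjustment speed on synthetic data. The series, coefficients and variable names are invented for illustration; they are not the study's data or specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cointegrated pair: "oil" is a random walk; "price" error-corrects
# toward 0.5 * oil with adjustment speed ~0.2 (illustrative values).
n = 2000
oil = np.cumsum(rng.normal(size=n))
price = np.empty(n)
price[0] = 0.0
for t in range(1, n):
    price[t] = price[t - 1] - 0.2 * (price[t - 1] - 0.5 * oil[t - 1]) + rng.normal(scale=0.1)

# Step 1: long-run (cointegrating) regression gives the equilibrium slope beta.
beta = np.polyfit(oil, price, 1)[0]

# Step 2: ECM on differences; the coefficient on the lagged error-correction
# term (alpha) should be negative, i.e. reversion toward equilibrium.
ect = price[:-1] - beta * oil[:-1]
X = np.column_stack([ect, np.diff(oil), np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(X, np.diff(price), rcond=None)
alpha = coef[0]
```

    A significantly negative alpha is the ECM signature of a long-run relationship; the structural VAR analysis in the abstract addresses the short-run shock responses instead.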

  10. Study on modeling of operator's learning mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seichi; Hasegawa, Naoko

    1998-01-01

    One effective method for analyzing the causes of human errors is to model human behavior and simulate it. The Central Research Institute of Electric Power Industry (CRIEPI) has developed an operator team behavior simulation system called SYBORG (Simulation System for the Behavior of an Operating Group) to analyze human errors and to establish countermeasures for them. As the operator behavior model that composes SYBORG has no learning mechanism and its knowledge of the plant is fixed, it cannot take suitable actions when unknown situations occur, nor learn anything from experience. Considering actual operators, however, learning is an essential human factor in enhancing their ability to diagnose plant anomalies. In this paper, Q learning with 1/f fluctuation was proposed as a learning mechanism for an operator, and a simulation using the mechanism was conducted. The results showed the effectiveness of the learning mechanism. (author)
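
    Plain tabular Q learning, the basis of the proposed mechanism, can be sketched on a toy task as follows. The 1/f-fluctuation exploration that is the paper's novelty is not reproduced here; a standard epsilon-greedy policy is used instead, and the corridor task is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corridor: states 0..4, actions 0 = left, 1 = right, reward on reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
lr, gamma, eps = 0.1, 0.9, 0.2

def step(s, a):
    """Deterministic corridor dynamics with a terminal reward at the right end."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(5000):
    s = int(rng.integers(n_states - 1))  # random non-terminal start state
    for _ in range(30):
        # epsilon-greedy action selection (the paper substitutes 1/f fluctuation here)
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning update: Q(s,a) <- Q(s,a) + lr * (r + gamma*max_a' Q(s',a') - Q(s,a))
        Q[s, a] += lr * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if r > 0:
            break
```

    After training, the learned values prefer moving toward the reward in every non-terminal state, which is the kind of experience-driven adaptation the operator model lacks without a learning mechanism.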

  11. Contaminant transport modeling studies of Russian sites

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu

    1993-01-01

    Lawrence Berkeley Laboratory (LBL) established mechanisms that promoted cooperation between U.S. and Russian scientists in scientific research as well as environmental technology transfer. Using Russian experience and U.S. technology, LBL developed approaches for field investigations, site evaluation, waste disposal, and remediation at contaminated Russian sites. LBL assessed a comprehensive database as well as an actual, large-scale contaminated site in order to evaluate existing knowledge and to test mathematical models used for the assessment of U.S. contaminated sites.

  12. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent of contributing to the evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict the yield of forage maize

  13. Computerised modelling for developmental biology : an exploration with case studies

    NARCIS (Netherlands)

    Bertens, Laura M.F.

    2012-01-01

    Many studies in developmental biology rely on the construction and analysis of models. This research presents a broad view of modelling approaches for developmental biology, with a focus on computational methods. An overview of modelling techniques is given, followed by several case studies. Using

  14. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    This study compared the in-sample forecasting accuracy of three nonlinear forecasting models, namely the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the model selection criterion SBC to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures in evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the LSTR and TAR models in most cases.
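
    The three error measures used in the comparison are standard and can be written directly (the data below are illustrative only):

```python
import math
import numpy as np

def mse(y, yhat):
    """Mean Square Error: average of squared forecast errors."""
    return float(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

def mae(y, yhat):
    """Mean Absolute Error: average of absolute forecast errors."""
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(yhat))))

def rmse(y, yhat):
    """Root Mean Square Error: square root of the MSE."""
    return math.sqrt(mse(y, yhat))

y    = [1.0, 2.0, 3.0]   # observed
yhat = [1.0, 2.5, 2.5]   # forecast; errors 0, -0.5, 0.5
# MSE = 1/6, MAE = 1/3, RMSE = sqrt(1/6)
```

    A model "performing well with lower error measures" simply means these quantities, computed on the in-sample forecasts, are smallest for that model.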

  15. Comparison of conventional study model measurements and 3D digital study model measurements from laser scanned dental impressions

    Science.gov (United States)

    Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.

    2017-08-01

    The field of orthodontics is always evolving, and this includes the use of innovative technology. One such development is the three-dimensional (3D) digital study model, which replaces conventional study models made of stone. This study aims to compare mesio-distal tooth width, intercanine width, and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper-arch dental impressions were taken from subjects with non-crowded teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study models, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study models. Scanning was performed using a laser triangulation scanner device assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional models, the mesio-distal width, intercanine width, and intermolar width were measured using digital calipers; in the 3D digital study models they were measured using software. There were no significant differences in the mesio-distal width, intercanine width, or intermolar width measurements between the conventional and 3D digital study models (p>0.05). Thus, measurements using 3D digital study models are as accurate as those obtained from conventional study models.
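
    A paired comparison of the kind reported (p > 0.05 on duplicate measurements of the same arches) can be sketched with a paired t statistic. The tooth-width numbers below are made up for illustration; they are not the study's data.

```python
import math

def paired_t(x, y):
    """Paired t statistic: mean of the pairwise differences divided by its
    standard error, as used to test for a systematic measurement bias."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical mesio-distal widths (mm): caliper on stone model vs software on scan.
caliper = [8.6, 7.1, 7.9, 8.2, 6.9, 7.5]
digital = [8.5, 7.2, 7.9, 8.3, 6.8, 7.5]
t = paired_t(caliper, digital)
```

    With |t| below the two-sided 5% critical value (about 2.571 for 5 degrees of freedom), the difference is not significant, mirroring the study's p > 0.05 conclusion.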

  16. Digital Forensic Investigation Models, an Evolution study

    Directory of Open Access Journals (Sweden)

    Khuram Mushtaque

    2015-10-01

    In business today, one of the most important factors that enables a business to gain competitive advantage over others is the appropriate, effective adoption of information technology into the business, and then managing and governing it at will. To govern IT, organizations need to recognize the value of acquiring the services of forensic firms to counter cyber criminals. Digital forensic firms follow different mechanisms to perform investigations. Over time, forensic firms have been provided with different investigation models containing phases for the different purposes of the entire process. Along with forensic firms, enterprises also need to build a secure and supportive platform to make a successful investigation process possible. We have highlighted the different elements of organizations in Pakistan that need to be addressed to provide support to forensic firms.

  17. Studies and modeling of cold neutron sources

    International Nuclear Information System (INIS)

    Campioni, G.

    2004-11-01

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was organized along three axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following topics: cold neutrons, cross-sections for the different cold moderators, flux slowing-down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Second, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte-Carlo parametric studies with a CPU-time saving factor reaching 50. A coupling module, simulating neutron guides, has also been developed and implemented in the Monte-Carlo code McStas. Third, a complete study for the validation of the installed calculation chain. These studies focus on 3 cold sources currently in operation: SP1 of the Orphee reactor and 2 other sources (SFH and SFV) of the HFR at the Institut Laue-Langevin. These studies give examples of problems and methods for the design of future cold sources.

  18. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, which are associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  19. A model system for DNA repair studies

    International Nuclear Information System (INIS)

    Lange, C.S.; Perlmutter, E.

    1984-01-01

    The search for the ''lethal lesion'', which would yield a molecular explanation of biological survival curves, led to attempts to correlate unrepaired DNA lesions with loss of reproductive integrity. Such studies have shown the crucial importance of DNA repair systems. The unrepaired DSB has been sought for such correlation, but in such studies the DNA was too large, polydisperse, and/or structurally complex to permit precise measurement of break induction and repair. Therefore, an analog of higher order systems, but with a genome of readily measurable size, is needed. Bacteriophage T4 is such an analog. Both its biological (PFU) and molecular (DNA) survival curves are exponentials. Its aerobic PFU-D37/DNA-D37 ratio (410 ± 4.5 Gy / 540 ± 25 Gy) indicates that 76 ± 4% of lethality is attributable to DNA damage. At low multiplicity of infection (moi 1) the survival is greater than can be explained if the assumption of no parental DSB repair were valid. Both T4 and its host have DSB repair systems which can be studied by the infectious center method. Results of such studies are discussed
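
    The D37 arithmetic quoted above is simple to reproduce: on an exponential survival curve S(D) = exp(-D/D37), D37 is the dose that reduces survival to 1/e, and the ratio of the two D37 values gives the quoted ~76% (central values only; the uncertainties are dropped in this sketch).

```python
import math

def survival(dose, d37):
    """Exponential survival curve: S(D) = exp(-D / D37); S(D37) = 1/e ≈ 0.37."""
    return math.exp(-dose / d37)

# Central D37 values quoted in the abstract, in gray.
d37_pfu, d37_dna = 410.0, 540.0
fraction_dna = d37_pfu / d37_dna   # ≈ 0.76, i.e. ~76% of lethality
```

    A smaller D37 means a steeper survival curve; the PFU curve being steeper than the DNA-breakage curve is what bounds the DNA contribution to lethality at ~76%.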

  20. Alternative Middle School Models: An Exploratory Study

    Science.gov (United States)

    Duffield, Stacy Kay

    2018-01-01

    A Midwestern state allocated grant funding to encourage more accessible alternative programming at the middle level. Seventeen schools were approved for this grant and used the funds to supplement the operation of a new or existing program. This study provides policymakers and educators with an overview of the various types of alternative middle…

  1. Flow model study of 'Monju' reactor vessel

    International Nuclear Information System (INIS)

    Miyaguchi, Kimihide

    1980-01-01

    In the case of designing the structures in nuclear reactors, various problems to be considered regarding thermo-hydrodynamics exist, such as the distribution of flow quantity and the pressure loss in reactors and the thermal shock to inlet and outlet nozzles. In order to grasp the flow characteristics of coolant in reactors, the 1/2 scale model of the reactor structure of ''Monju'' was attached to the water flow testing facility in the Oarai Engineering Center, and the simulation experiment has been carried out. The flow characteristics in reactors clarified by experiment and analysis so far are the distribution of flow quantity between high and low pressure regions in reactors, the distribution of flow quantity among flow zones in respective regions of high and low pressure, the pressure loss in respective parts in reactors, the flow pattern and the mixing effect of coolant in upper and lower plenums, the effect of the twisting angle of inlet nozzles on the flow characteristics in lower plenums, the effect of internal cylinders on the flow characteristics in upper plenums and so on. On the basis of these test results, the improvement of the design of structures in reactors was made, and the confirmation test on the improved structures was carried out. The testing method, the calculation method, the test results and the reflection to the design of actual machines are described. (Kako, I.)

  2. Theory, modeling, and integrated studies in the Arase (ERG) project

    Science.gov (United States)

    Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa

    2018-02-01

    Understanding the underlying mechanisms of the drastic variations of near-Earth space (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand geospace variations with a focus on relativistic electron acceleration and loss processes. In order to achieve this goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project, together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, the GEMSIS-RB and RBW models, the CIMI model with the global MHD simulation REPPU, the GEMSIS-RC model, the plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.

  3. Mathematical Modelling Research in Turkey: A Content Analysis Study

    Science.gov (United States)

    Çelik, H. Coskun

    2017-01-01

    The aim of the present study was to examine the mathematical modelling studies done between 2004 and 2015 in Turkey and to reveal their tendencies. Forty-nine studies were selected using purposeful sampling based on the term, "mathematical modelling" with Higher Education Academic Search Engine. They were analyzed with content analysis.…

  4. Fostering Transfer of Study Strategies: A Spiral Model.

    Science.gov (United States)

    Davis, Denise M.; Clery, Carolsue

    1994-01-01

    Describes the design and implementation of a Spiral Model for the introduction and repeated practice of study strategies, based on Taba's model for social studies. In a college reading and studies strategies course, key strategies were introduced early and used through several sets of humanities and social and physical sciences readings. (Contains…

  5. Bayesian Graphical Models for Genomewide Association Studies

    OpenAIRE

    Verzilli, Claudio J.; Stallard, Nigel; Whittaker, John C.

    2006-01-01

    As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being ...

  6. [Statistical modeling studies of turbulent reacting flows

    International Nuclear Information System (INIS)

    Dwyer, H.A.

    1987-01-01

    This paper discusses the study of turbulent wall shear flows, and we feel that this problem is both more difficult and a better challenge for the new methods we are developing. Turbulent wall flows have a wide variety of length and time scales which interact with the transport processes to produce very large fluxes of mass, heat, and momentum. At the present time we have completed the first calculation of a wall diffusion flame, and we have begun a velocity PDF calculation for the flat plate boundary layer. A summary of the various activities is contained in this report

  7. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main stages. First, a study model is designed through a physical modeling process based on dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to meet than those for a failure model, but the results it provides are more limited. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, together with their instrumentation, offers great advantages, but it requires a large amount of financial, logistical and time resources.
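
    The similitude relations mentioned in this abstract come from dimensional analysis; for gravity-dominated dynamic problems such as dam response, Froude scaling is the standard choice. The sketch below is a minimal illustration of that idea, assuming Froude similitude with equal model and prototype density; it is not the paper's actual set of design relations.

```python
# Froude similitude scale factors for a 1:L geometric scale model.
# Assumes gravity-dominated dynamics and equal fluid/material density
# in model and prototype (illustrative only, not the study's relations).

def froude_scale_factors(length_ratio):
    """Return prototype/model ratios for common physical quantities."""
    L = length_ratio           # geometric scale, e.g. 35 for a 1:35 model
    return {
        "length": L,
        "time": L ** 0.5,      # Fr = v / sqrt(g l) fixed => t scales as sqrt(L)
        "velocity": L ** 0.5,  # v scales like sqrt(g l)
        "acceleration": 1.0,   # g is identical in model and prototype
        "force": L ** 3,       # rho g l^3 with equal density
        "frequency": L ** -0.5,
    }

factors = froude_scale_factors(35)
print(factors["time"])  # a 1 s model event corresponds to sqrt(35) s at full scale
```

In such a scheme a 1:35 model shaken for 10 s represents roughly one minute of prototype motion, which is why time compression must be accounted for when reproducing earthquake records on a shake table.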

  8. System Dynamic Modelling for a Balanced Scorecard: A Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen; Nielsen, Erland Hejn

    Purpose - The purpose of this research is to make an analytical model of the BSC foundation by using a dynamic simulation approach for a 'hypothetical case' model, based on only part of an actual case study of BSC. Design/methodology/approach - The model includes five perspectives and a number...

  9. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    been applied to the Cochin estuary in the present study to identify the most suitable model for predicting the salt intrusion length. Comparison of the obtained results indicate that the model of Van der Burgh (1972) is the most suitable empirical model...

  10. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivity to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite-observed clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analyses/reanalyses, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results and observations and find that models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing
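
    The core of a constrained variational analysis is a minimum-adjustment problem: perturb the background fields as little as possible, weighted by their error covariance, so that integral budget constraints (e.g., a column moisture budget closed by observed precipitation) are satisfied exactly. A minimal single-constraint sketch with a closed-form solution is shown below; the actual 1DCVA/3DCVA algorithms involve several coupled, possibly nonlinear constraints, so this is purely illustrative.

```python
import numpy as np

def variational_adjust(x_b, B, a, c):
    """Minimally adjust background x_b (error covariance B) so that the
    linear constraint a @ x = c holds exactly.  Closed form:
    x = x_b + B a (a^T B a)^-1 (c - a^T x_b)."""
    Ba = B @ a
    return x_b + Ba * (c - a @ x_b) / (a @ Ba)

# Toy example: three-layer moisture tendencies must sum to an observed
# column budget value (hypothetical numbers, not ARM data).
x_b = np.array([1.0, 2.0, 3.0])   # background tendencies per layer
B = np.diag([0.5, 1.0, 2.0])      # larger variance => layer adjusted more
a = np.array([1.0, 1.0, 1.0])     # column-integral operator
c = 5.0                           # constraint value from observations
x = variational_adjust(x_b, B, a, c)
print(x, a @ x)                   # constraint satisfied exactly
```

Note how the layer with the largest background error variance absorbs the largest share of the adjustment, which is exactly the mechanism by which the choice of error covariance matrix influences the derived forcing.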

  11. Adolescents' Family Models: A Cross-Cultural Study

    OpenAIRE

    Mayer, Boris

    2009-01-01

    This study explores and compares the family models of adolescents across ten cultures using a typological and multilevel approach. Thereby, it aims to empirically contribute to Kagitcibasi's (2007) theory of family change. This theory postulates the existence of three ideal-typical family models across cultures: a family model of independence prevailing in Western societies, a family model of (total) interdependence prevailing in non-industrialized agrarian cultures, and as a synthesis of the...

  12. Conducting field studies for testing pesticide leaching models

    Science.gov (United States)

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well-known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitate the development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
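
    Quantitative model testing of the kind recommended here typically reduces to goodness-of-fit statistics between paired model predictions and field measurements. The sketch below computes two statistics commonly used for environmental models, RMSE and Nash-Sutcliffe efficiency, on hypothetical data; the values are invented for illustration and are not from the study.

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observations and simulations."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical pesticide concentrations (ug/L) at five sampled depths
obs = [10.0, 7.5, 5.0, 2.5, 1.0]
sim = [9.0, 8.0, 4.5, 3.0, 0.5]
print(rmse(obs, sim), nash_sutcliffe(obs, sim))
```

Reporting both statistics is useful because RMSE carries the measurement units while the dimensionless efficiency allows comparison across sites and compounds.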

  13. Pulse radiolysis studies in model lipid systems

    International Nuclear Information System (INIS)

    Patterson, L.K.; Hasegawa, K.

    1978-01-01

    The kinetic and spectral behavior of radicals formed by hydroxyl radical attack on linoleate anions has been studied by pulse radiolysis. Reactivity of OH toward this surfactant is an order of magnitude greater in monomeric form (k(OH + linoleate) = 8.0 x 10^9 M^-1 sec^-1) than in micellar form (k(OH + linoleate(micelle)) = 1.0 x 10^9 M^-1 sec^-1). Abstraction of a hydrogen atom from the doubly allylic position gives rise to an intense absorption in the UV region (lambda_max = 282-286 nm, epsilon approximately 3 x 10^4 M^-1 cm^-1) which may be used as a probe of radical activity at that site. This abstraction may occur, to a small extent, directly via OH attack. However, greater than 90% of initial attack occurs at other sites. Subsequent secondary abstraction of doubly allylic H atoms appears to occur predominantly by: (1) intramolecular processes in monomers, (2) intermolecular processes in micelles. Disappearance of radicals by secondary processes is slower in the micellar pseudo-phase than in monomeric solution. (orig.)

  14. Mobile radio alternative systems study traffic model

    Science.gov (United States)

    Tucker, W. T.; Anderson, R. E.

    1983-06-01

    The markets for mobile radio services in non-urban areas of the United States are examined for the years 1985-2000. Three market categories are identified. New Services are defined as those for which there are expressed needs that are not now met by any application of available technology. The complete fulfillment of these needs requires nationwide radio access to vehicles without knowledge of vehicle location, wideband data transmission from remote sites, one- and two-way exchange of short data and control messages between vehicles and dispatch or control centers, and automatic vehicle location (surveillance). The commercial and public services market of interest to the study is drawn from existing users of mobile radio in non-urban areas who are dissatisfied with the geographical range or coverage of their systems. The mobile radio telephone market comprises potential users who require access to the public switched telephone network in areas that are not likely to be served by the traditional growth patterns of terrestrial mobile telephone services. Conservative, likely, and optimistic estimates of the markets are presented in terms of numbers of vehicles that will be served and the radio traffic they will generate.

  15. Process modeling for the Integrated Nonthermal Treatment System (INTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.W.

    1997-04-01

    This report describes the process modeling done in support of the Integrated Nonthermal Treatment System (INTS) study. This study was performed to supplement the Integrated Thermal Treatment System (ITTS) study and comprises five conceptual treatment systems that treat DOE contact-handled mixed low-level wastes (MLLW) at temperatures of less than 350 °F. ASPEN PLUS, a chemical process simulator, was used to model the systems. Nonthermal treatment systems were developed as part of the INTS study and include sufficient processing steps to treat the entire inventory of MLLW. The final result of the modeling is a process flowsheet with a detailed mass and energy balance. In contrast to the ITTS study, which modeled only the main treatment system, the INTS study modeled each of the various processing steps with ASPEN PLUS, release 9.1-1. Trace constituents, such as radionuclides and minor pollutant species, were not included in the calculations.

  16. An Integrated Approach to Mathematical Modeling: A Classroom Study.

    Science.gov (United States)

    Doerr, Helen M.

    Modeling, simulation, and discrete mathematics have all been identified by professional mathematics education organizations as important areas for secondary school study. This classroom study focused on the components and tools for modeling and how students use these tools to construct their understanding of contextual problems in the content area…

  17. Evaluating EML Modeling Tools for Insurance Purposes: A Case Study

    Directory of Open Access Journals (Sweden)

    Mikael Gustavsson

    2010-01-01

    As with any situation that involves economic risk, refineries may share their risk with insurers. The decision process generally includes modelling to determine to what extent the process area can be damaged. At the extreme end of modelling, the so-called Estimated Maximum Loss (EML) scenarios are found. These scenarios predict the maximum loss a particular installation can sustain. Unfortunately, no standard model for this exists, so insurers reach different results by applying different models and different assumptions. Therefore, a study has been conducted on a case in a Swedish refinery where several scenarios had previously been modelled by two different insurance brokers using two different software packages, ExTool and SLAM. This study reviews the concept of EML and analyses the models used to see which parameters are most uncertain. A third model, EFFECTS, was also employed in an attempt to reach a conclusion with higher reliability.

  18. A case study of consensus modelling for tracking oil spills

    International Nuclear Information System (INIS)

    King, Brian; Brushett, Ben; Lemckert, Charles

    2010-01-01

    Metocean forecast datasets are essential for timely response to marine incidents and pollutant spill mitigation at sea. To effectively model the likely drift pattern and the area of impact for a marine spill, both wind and ocean current forecast datasets are required. Two ocean current forecast models and two wind forecast models are currently used operationally in the Australian and Asia-Pacific region. The availability of several different forecast models provides a unique opportunity to compare the outcome of a particular modelling exercise with the outcome of another using a different model, and to determine whether there is consensus in the results. Two recent modelling exercises, the oil spill resulting from the damaged Pacific Adventurer (in Queensland) and the oil spill from the Montara well blowout (in Western Australia), are presented as case studies to examine consensus modelling.

  19. Complete wind farm electromagnetic transient modelling for grid integration studies

    International Nuclear Information System (INIS)

    Zubia, I.; Ostolaza, X.; Susperregui, A.; Tapia, G.

    2009-01-01

    This paper presents a modelling methodology to analyse the impact of wind farms on surrounding networks. Based on the transient modelling of the asynchronous generator, the multi-machine model of a wind farm composed of N generators is developed. The model incorporates step-up power transformers, distribution lines and surrounding loads up to their connection to the power network. This model allows the simulation of symmetric and asymmetric short-circuits located in the distribution network and the analysis of the transient stability of wind farms. It can also be used to study the islanding operation of wind farms.

  20. Isolated heart models: cardiovascular system studies and technological advances.

    Science.gov (United States)

    Olejnickova, Veronika; Novakova, Marie; Provaznik, Ivo

    2015-07-01

    The isolated heart model is a relevant tool for cardiovascular system studies. It represents a highly reproducible model for studying a broad spectrum of biochemical, physiological, morphological, and pharmaceutical parameters, including analysis of intrinsic heart mechanics, metabolism, and coronary vascular response. Results obtained in this model are free from the influence of other organ systems, plasma concentrations of hormones or ions, and the autonomic nervous system. The review describes various isolated heart models, the modes of heart perfusion, and the advantages and limitations of various experimental setups. It reports the authors' improvements to the Langendorff perfusion setup.

  1. Modelling and propagation of uncertainties in the German Risk Study

    International Nuclear Information System (INIS)

    Hofer, E.; Krzykacz, B.

    1982-01-01

    Risk assessments are generally subject to uncertainty considerations because of the various estimates involved. The paper points out those estimates in the so-called Phase A of the German Risk Study for which uncertainties were quantified. It explains the probabilistic models applied in the assessment and their impact on the findings of the study. Finally, the resulting subjective confidence intervals of the study results are presented and their sensitivity to these probabilistic models is investigated.
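
    Propagating estimate uncertainties into subjective confidence intervals, as described above, is most commonly implemented by Monte Carlo sampling: draw each uncertain parameter from its subjective distribution, evaluate the risk model, and read percentiles off the output sample. The sketch below uses a hypothetical two-train system with lognormal rates; the structure and numbers are invented for illustration and are not the German Risk Study's actual models.

```python
import random

random.seed(0)  # reproducible illustration

def sample_system_frequency():
    """Hypothetical risk model: an initiating-event frequency times the
    failure probabilities of two redundant trains, each drawn from a
    subjective lognormal distribution."""
    init = random.lognormvariate(-4.0, 0.5)   # initiating events / year
    p_a = random.lognormvariate(-6.0, 1.0)    # failure probability, train A
    p_b = random.lognormvariate(-6.5, 1.0)    # failure probability, train B
    return init * p_a * p_b

samples = sorted(sample_system_frequency() for _ in range(20000))
lo, med, hi = (samples[int(q * len(samples))] for q in (0.05, 0.5, 0.95))
print(f"90% subjective interval: [{lo:.3e}, {hi:.3e}], median {med:.3e}")
```

Because the parameter distributions are multiplied, the output interval spans orders of magnitude, which is why such studies report subjective confidence intervals rather than single point estimates.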

  2. Air Pollution Exposure Modeling for Health Studies | Science ...

    Science.gov (United States)

    Dr. Michael Breen is leading the development of air pollution exposure models, integrated with novel personal sensor technologies, to improve exposure and risk assessments for individuals in health studies. He is co-investigator for multiple health studies assessing the exposure and effects of air pollutants. These health studies include participants with asthma, diabetes, and coronary artery disease living in various U.S. cities. He has developed, evaluated, and applied novel exposure modeling and time-activity tools, which include the Exposure Model for Individuals (EMI), the GPS-based Microenvironment Tracker (MicroTrac) and the Exposure Tracker models. At this seminar, Dr. Breen will present the development and application of these models to predict individual-level personal exposures to particulate matter (PM) for two health studies in central North Carolina. These health studies examine the association between PM and adverse health outcomes for susceptible individuals. During Dr. Breen's visit, he will also have the opportunity to establish additional collaborations with researchers at Harvard University that may benefit from the use of exposure models for cohort health studies. These research projects that link air pollution exposure with adverse health outcomes benefit EPA by developing model-predicted exposure-dose metrics for individuals in health studies to improve the understanding of exposure-response behavior of air pollutants, and to reduce participant

  3. Injury Based on Its Study in Experimental Models

    Directory of Open Access Journals (Sweden)

    M. Mendes-Braz

    2012-01-01

    The present review focuses on the numerous experimental models used to study the complexity of hepatic ischemia/reperfusion (I/R) injury. Although experimental models of hepatic I/R injury represent a compromise between the clinical reality and experimental simplification, the clinical transfer of experimental results is problematic because of anatomical and physiological differences and the inevitable simplification of experimental work. In this review, the strengths and limitations of the various models of hepatic I/R are discussed. Several strategies to protect the liver from I/R injury have been developed in animal models, and some of these might find their way into clinical practice. We also attempt to highlight the fact that the mechanisms responsible for hepatic I/R injury depend on the experimental model used, and therefore the therapeutic strategies also differ according to the model used. The choice of model must therefore be adapted to the clinical question being answered.

  4. Bethe ansatz study for ground state of Fateev-Zamolodchikov model

    International Nuclear Information System (INIS)

    Ray, S.

    1997-01-01

    A Bethe ansatz study of a self-dual Z_N spin lattice model, originally proposed by V. A. Fateev and A. B. Zamolodchikov, is undertaken. The connection of this model to the chiral Potts model is established. Transcendental equations connecting the zeros of the Fateev-Zamolodchikov transfer matrix are derived. The free energies for the ferromagnetic and the anti-ferromagnetic ground states are found for both even and odd spins. Copyright 1997 American Institute of Physics

  5. A Dynamic Wind Generation Model for Power Systems Studies

    OpenAIRE

    Estanqueiro, Ana

    2007-01-01

    In this paper, a wind park dynamic model is presented together with a base methodology for its application to power system studies. This detailed wind generation model addresses the wind turbine components and phenomena more relevant to characterize the power quality of a grid connected wind park, as well as the wind park response to the grid fast perturbations, e.g., low voltage ride through fault. The developed model was applied to the operating conditions of the selected sets of wind turbi...

  6. MODELING, SIMULATION AND PERFORMANCE STUDY OF GRID-CONNECTED PHOTOVOLTAIC ENERGY SYSTEM

    OpenAIRE

    Nagendra K; Karthik J; Keerthi Rao C; Kumar Raja Pemmadi

    2017-01-01

    This paper presents modeling and simulation of a grid-connected photovoltaic energy system and a performance study using MATLAB/Simulink. The photovoltaic energy system is considered in three main parts: the PV model, the power conditioning system and the grid interface. The photovoltaic model is interconnected with the grid through full-scale power electronic devices. The simulation is conducted on the PV energy system at normal temperature and at constant load by using MATLAB.

  7. Modeling and Testing of EVs - Preliminary Study and Laboratory Development

    DEFF Research Database (Denmark)

    Yang, Guang-Ya; Marra, Francesco; Nielsen, Arne Hejde

    2010-01-01

    Electric vehicles (EVs) are expected to play a key role in the future energy management system to stabilize both supply and consumption with the presence of high penetration of renewable generation. A reasonably accurate model of the battery is a key element for the study of EV behavior and the grid impact at different geographical areas, as well as driving and charging patterns. An electric circuit model is deployed in this work to represent the electrical properties of a lithium-ion battery. This paper reports the preliminary modeling and validation work based on manufacturer data sheets and realistic tests, followed by suggestions towards a feasible battery model for further studies.
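
    The electric circuit model mentioned above is commonly a Thevenin equivalent: an open-circuit voltage source in series with an ohmic resistance and one RC branch capturing polarization dynamics. Below is a minimal discrete-time sketch of such a first-order model; all parameter values (capacity, resistances, OCV curve) are hypothetical placeholders, not the paper's fitted data-sheet values.

```python
# First-order Thevenin battery model: v_t = OCV(soc) - i*R0 - v_rc,
# with discharge current i > 0.  Parameters are hypothetical.

def simulate(i_amps, dt=1.0, steps=60, q_ah=50.0,
             r0=0.01, r1=0.015, c1=2000.0):
    """Simulate terminal voltage under constant-current discharge."""
    soc, v_rc = 1.0, 0.0
    trace = []
    for _ in range(steps):
        ocv = 3.0 + 1.2 * soc                           # crude linear OCV curve
        v_rc += dt * (i_amps / c1 - v_rc / (r1 * c1))   # RC polarization branch
        soc -= i_amps * dt / (q_ah * 3600.0)            # coulomb counting
        trace.append(ocv - i_amps * r0 - v_rc)
    return trace

v = simulate(25.0)  # 0.5C discharge for one minute
print(v[0], v[-1])  # terminal voltage sags as the RC branch charges up
```

Fitting R0, R1 and C1 against pulse-discharge test data is the usual way such a model is validated against manufacturer sheets and laboratory measurements.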

  8. Study and optimization of the partial discharges in capacitor model ...

    African Journals Online (AJOL)

    ... experiments methodology for the study of such processes, in view of their modeling and optimization. The obtained result is a mathematical model capable of identifying the parameters and the interactions between .... 5 mn; the next landing is situated 200 V above the voltage at which partial discharges appear and.

  9. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Sensitivity study of reduced models of the activated sludge process, for the purposes of parameter estimation and process optimisation: Benchmark process with ASM1 and UCT reduced biological models. S du Plessis and R Tzoneva. Department of Electrical Engineering, Cape Peninsula University of ...

  10. A Theoretical Study of Subsurface Drainage Model Simulation of ...

    African Journals Online (AJOL)

    A three-dimensional variable-density groundwater flow model, the SEAWAT model, was used to assess the influence of subsurface drain spacing, evapotranspiration and irrigation water quality on salt concentration at the base of the root zone, leaching and drainage in salt-affected irrigated land. The study was carried out ...

  11. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  12. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...

  13. A Descriptive Study of Differing School Health Delivery Models

    Science.gov (United States)

    Becker, Sherri I.; Maughan, Erin

    2017-01-01

    The purpose of this exploratory qualitative study was to identify and describe emerging models of school health services. Participants (N = 11) provided information regarding their models in semistructured phone interviews. Results identified a variety of funding sources as well as different staffing configurations and supervision. Strengths of…

  14. Use of travel cost models in planning: A case study

    Science.gov (United States)

    Allan Marsinko; William T. Zawacki; J. Michael Bowker

    2002-01-01

    This article examines the use of the travel cost method in tourism-related decision making in the area of nonconsumptive wildlife-associated recreation. A travel cost model of nonconsumptive wildlife-associated recreation, developed by Zawacki, Marsinko, and Bowker, is used as a case study for this analysis. The travel cost model estimates the demand for the activity...

  15. Model for the dynamic study of AC contactors

    Energy Technology Data Exchange (ETDEWEB)

    Corcoles, F.; Pedra, J.; Garrido, J.P.; Baza, R. [Dep. d' Eng. Electrica ETSEIB. UPC, Barcelona (Spain)

    2000-08-01

    This paper proposes a model for the dynamic analysis of AC contactors. The calculation algorithm and its implementation are discussed. The proposed model can be used to study the influence of the design parameters and the supply on their dynamic behaviour. The high calculation speed of the implemented algorithm allows extensive ranges of parameter variations to be analysed. (orig.)

  16. A test-bed modeling study for wave resource assessment

    Science.gov (United States)

    Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.

    2016-02-01

    Hindcasts from phase-averaged wave models are commonly used to estimate the standard statistics used in wave energy resource assessments. However, the research community and the wave energy converter (WEC) industry lack a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development and at different spatial scales, e.g., from a small-scale pilot study to a large-scale commercial deployment. It is therefore necessary to evaluate current wave model codes, as well as limitations and knowledge gaps for predicting sea states, in order to establish best wave modeling practices and to identify future research needs that would improve wave prediction for resource assessment. This paper presents the first phase of an on-going modeling study to address these concerns. The modeling study is being conducted at a test-bed site off the central Oregon coast using two of the most widely used third-generation wave models, WaveWatchIII and SWAN. A nested-grid modeling approach, with domain dimensions ranging from global to regional scales, was used to provide the wave spectral boundary condition to a local-scale model domain, which has a spatial dimension of around 60 km by 60 km and a grid resolution of 250 m - 300 m. Model results simulated by WaveWatchIII and SWAN in a structured-grid framework are compared to NOAA wave buoy data for six wave parameters: omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.
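
    Of the wave parameters listed above, the omnidirectional wave power is obtained by integrating the group velocity against the variance density spectrum, P = rho * g * integral of c_g(f) S(f) df, where in deep water c_g = g / (4 pi f). The sketch below evaluates this for a toy single-peaked spectrum; the spectrum shape and values are invented for illustration, and a full resource assessment would use the measured directional spectrum.

```python
import math

RHO, G = 1025.0, 9.81  # sea water density (kg/m^3), gravity (m/s^2)

def wave_power(freqs, spec):
    """Omnidirectional wave power (W per meter of crest) by trapezoidal
    integration, using the deep-water group velocity c_g = g / (4 pi f)."""
    total = 0.0
    for k in range(len(freqs) - 1):
        f0, f1 = freqs[k], freqs[k + 1]
        y0 = G / (4 * math.pi * f0) * spec[k]
        y1 = G / (4 * math.pi * f1) * spec[k + 1]
        total += 0.5 * (y0 + y1) * (f1 - f0)
    return RHO * G * total

# Toy spectrum: narrow Gaussian peak near 0.1 Hz (roughly 2 m seas)
freqs = [0.05 + 0.005 * k for k in range(60)]
spec = [5.0 * math.exp(-((f - 0.1) / 0.02) ** 2) for f in freqs]  # m^2/Hz
print(wave_power(freqs, spec) / 1000.0, "kW/m")
```

Because c_g weights low frequencies more heavily, two sea states with the same significant wave height but different energy periods can carry noticeably different power, which is why both parameters appear in the assessment list.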

  17. Data assimilation in modeling ocean processes: A bibliographic study

    Digital Repository Service at National Institute of Oceanography (India)

    Mahadevan, R.; Fernandes, A.A.; Saran, A.K.

    An annotated bibliography on studies related to data assimilation in modeling ocean processes has been prepared. The bibliography listed here is not comprehensive and is not prepared from the original references. Information obtainable from...

  18. Models for the study of Clostridium difficile infection

    Science.gov (United States)

    Best, Emma L.; Freeman, Jane; Wilcox, Mark H.

    2012-01-01

    Models of Clostridium difficile (C. difficile) infection have been used extensively for C. difficile research. The hamster model of C. difficile infection has been employed most extensively and has been used in many different areas of research, including the induction of C. difficile infection, the testing of new treatments, population dynamics and characterization of virulence. Investigations using in vitro models of C. difficile introduced the concept of colonization resistance, evaluated the role of antibiotics in C. difficile development, explored population dynamics and have been useful in the evaluation of C. difficile treatments. Experiments using models have major advantages over clinical studies and have been indispensable in furthering C. difficile research. It is important for future study programs to carefully consider the modeling approach to use and therefore be better placed to inform the design and interpretation of clinical studies. PMID:22555466

  19. Guadalupe River, California, Sedimentation Study. Numerical Model Investigation

    National Research Council Canada - National Science Library

    Copeland, Ronald

    2002-01-01

    A numerical model study was conducted to evaluate the potential impact that the Guadalupe River flood-control project would have on channel stability in terms of channel aggradation and degradation...

  20. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning and decision-making. Most shipbuilding processes are organized as job-shop production, and their modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulty adopting simulation systems, regardless of the necessity of the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to simulation model generation for a panel block production line. By implementing the initial simulation model generation process, which was previously performed by a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by differing modeler-dependent generation methods, the proposed system makes standardization of simulation model quality possible.

  1. Study of the properties of general relativistic Kink model (GRK)

    International Nuclear Information System (INIS)

    Oliveira, L.C.S. de.

    1980-01-01

    The stability of the general relativistic Kink model (GRK) is studied. It is shown that the model is stable, at least against radial perturbations. Furthermore, the Dirac field in the background of the geometry generated by the GRK is studied. It is verified that the GRK localizes the Dirac field around the region of largest curvature. The physical interpretation of this system (the Dirac field in the GRK background) is discussed. (Author) [pt

  2. Model study on radioecology in Biblis. Pt. 2

    Energy Technology Data Exchange (ETDEWEB)

    1980-03-01

    The present volume 'Water Pathway II' of the model study on radioecology in Biblis contains the remaining six part-studies on the following subjects: 1. Concentration of radionuclides in river sediments. 2. Incorporation via terrestrial food (milk, fruit, vegetables). 3. Radioactive substances in the Rhine not arising from nuclear power stations. 4. Dynamic model for intermittent discharge during reactor operation. 5. Radiation exposure of Rhine fishes. 6. Influence of contaminated waste water on the industrial utilization of surface waters.

  3. Drosophila melanogaster as a model organism to study nanotoxicity.

    Science.gov (United States)

    Ong, Cynthia; Yung, Lin-Yue Lanry; Cai, Yu; Bay, Boon-Huat; Baeg, Gyeong-Hun

    2015-05-01

    Drosophila melanogaster has been used as an in vivo model organism for the study of genetics and development for over 100 years. Recently, the fruit fly Drosophila was also developed as an in vivo model organism for toxicology studies, in particular in the field of nanotoxicity. The incorporation of nanomaterials into consumer and biomedical products is a cause for concern, as nanomaterials are often associated with toxicity in many in vitro studies. In vivo animal studies of the toxicity of nanomaterials with rodents and other mammals are, however, limited due to high operational cost and ethical objections. Hence, Drosophila, a genetically tractable organism with distinct developmental stages and a short life cycle, serves as an ideal organism in which to study nanomaterial-mediated toxicity. This review discusses the basic biology of Drosophila, the toxicity of nanomaterials, as well as how the Drosophila model can be used to study the toxicity of various types of nanomaterials.

  4. Construction of a biodynamic model for Cry protein production studies.

    Science.gov (United States)

    Navarro-Mtz, Ana Karin; Pérez-Guevara, Fermín

    2014-12-01

    Mathematical models have been used for everything from growth kinetic simulation to gene regulatory network prediction for B. thuringiensis cultures. However, this culture is a time-dependent dynamic process in which cell physiology undergoes several changes in response to changes in the cell environment. Throughout its culture, B. thuringiensis presents three phases associated with the predominance of three major metabolic pathways: vegetative growth (Embden-Meyerhof-Parnas pathway), transition (γ-aminobutyric acid cycle) and sporulation (tricarboxylic acid cycle). No mathematical model has been available that relates the different stages of cultivation to the metabolic pathway active in each of them. Therefore, in the present study, and based on published data, a biodynamic model was generated to describe the dynamics of the three phases in terms of their major metabolic pathways. The biodynamic model is used to study the interrelation between the different culture phases and their relationship with Cry protein production. The model consists of three interconnected modules, where each module represents one culture phase and its principal metabolic pathway. For model validation, four new fermentations were performed, showing that the constructed model describes the dynamics of the three phases reasonably well. The main results of this model imply that poly-β-hydroxybutyrate is crucial for endospore and Cry protein production. According to the yields of dipicolinic acid and Cry from poly-β-hydroxybutyrate calculated with the model, endospore and Cry protein production are not just simultaneous, parallel processes; they are also competitive processes.

  5. Study on Tower Models for EHV Transmission Line

    Directory of Open Access Journals (Sweden)

    Xu Bao-Qing

    2016-01-01

    Full Text Available Lightning outage accidents are one of the main factors that seriously threaten the safe and reliable operation of power systems. It is therefore very important to establish a reasonable transmission tower model and to properly evaluate the impulse response characteristics of a lightning wave traveling along the transmission tower in order to determine lightning protection performance reliably. With the help of the Electromagnetic Transient Program (EMTP), six 500 kV tower models were built. For the one-line, one-transformer operating mode of a 500 kV substation, the intruding wave overvoltage under the different tower models was calculated, and the effect of the tower model on the intruding overvoltage was studied. The results show that different tower models can lead to large differences in the calculated results. Hence, reasonable selection of the tower model in the calculation of back-strike intruding waves is very important.

  6. Consequence model of the German reactor safety study

    International Nuclear Information System (INIS)

    Bayer, A.; Aldrich, D.; Burkart, K.; Horsch, F.; Hubschmann, W.; Schueckler, M.; Vogt, S.

    1979-01-01

    The consequence model developed for Phase A of the German Reactor Safety Study (RSS) is similar in many respects to its counterpart in WASH-1400. As in that previous study, the model describes the atmospheric dispersion and transport of radioactive material released from the containment during a postulated reactor accident, and predicts its interaction with and influence on man. Differences do exist between the two models, however, for the following reasons: (1) to reflect central European conditions more adequately, (2) to include improved submodels, and (3) to apply additional data and knowledge that have become available since the publication of WASH-1400. The consequence model as used in Phase A of the German RSS is described, highlighting differences between it and the U.S. model

  7. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and methods to image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. This study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers some comments on what can and cannot be done with each package. The study concludes that each package has some advantages and limitations; the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  8. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions that are incompletely formulated or rest on value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of the probably diminishing returns of large generic comparisons [fr

  9. Studying historical occupational careers with multilevel growth models

    Directory of Open Access Journals (Sweden)

    Wiebke Schulz

    2010-10-01

    Full Text Available In this article we propose to study occupational careers with historical data by using multilevel growth models. Historical career data are often characterized by a lack of information on the timing of occupational changes and by differing numbers of observed occupations per individual. Growth models can handle these specificities, whereas standard methods, such as event history analysis, cannot. We illustrate the use of growth models by studying the career success of men and women, using data from the Historical Sample of the Netherlands. The results show that the method is applicable to male careers, but proves problematic when analyzing female careers.
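    The random-intercept, random-slope logic behind such growth models can be illustrated on simulated career data. The sketch below is purely illustrative (all numbers are invented), and it substitutes a crude two-stage estimate — per-person OLS lines, then averaging the slopes — for the maximum-likelihood fitting a real multilevel analysis would use:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "historical career" data: person i's occupational status grows as
    #   y_ij = (b0 + u0_i) + (b1 + u1_i) * age_ij + noise,
    # observed at an irregular, person-specific number of ages -- exactly the
    # unbalanced data the abstract says growth models can handle.
    B0, B1 = 40.0, 0.5                      # population intercept and growth rate
    people = []
    for _ in range(300):
        n_obs = rng.integers(3, 9)          # unequal numbers of observations
        age = np.sort(rng.uniform(15, 65, n_obs))
        u0, u1 = rng.normal(0, 5.0), rng.normal(0, 0.1)
        y = (B0 + u0) + (B1 + u1) * age + rng.normal(0, 2.0, n_obs)
        people.append((age, y))

    # Crude two-stage estimate standing in for full multilevel ML estimation:
    # fit each person's career line by OLS, then average the individual slopes.
    slopes = []
    for age, y in people:
        X = np.column_stack([np.ones_like(age), age])
        slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

    mean_slope = float(np.mean(slopes))
    print(round(mean_slope, 2))             # close to the true growth rate 0.5
    ```

    Unlike event history analysis, nothing here requires knowing when occupational changes occurred, only the statuses observed at whatever ages happen to be recorded.
    
    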

  10. Contribution to the study of conformal theories and integrable models

    International Nuclear Information System (INIS)

    Sochen, N.

    1992-05-01

    The purpose of this thesis is the study of 2-D physics. The main tool is conformal field theory with Kac-Moody and W algebras. This theory describes the 2-D models that have translation, rotation and dilatation symmetries at their critical point. Extended conformal theories describe models that have a larger symmetry than conformal symmetry. After a review of conformal theory methods, the author carries out a detailed study of the form of singular vectors in the sl(2) affine algebra. With this important form, correlation functions can be calculated. The classical W algebra is studied, and the relations between the classical and quantum W algebras are specified. The bosonization method is presented and the sl(2)/sl(2) topological model is studied. The bosonization of the partition functions of different models is described. A program of rational theory classification is described, linking rational conformal theories and spin integrable models, and interesting relations between the Boltzmann weights of different models have been found. With these relations, the integrability of models is proved by a direct calculation of their Boltzmann weights

  11. Business Model Perusahaan Keluarga: Studi Kasus Pada Industri Batik

    Directory of Open Access Journals (Sweden)

    Achmad Sobirin

    2014-07-01

    Full Text Available Abstract: This paper reviews the existing business model of a family firm within the context of the batik industry and proposes a new one. A business model is conceived as the logic of doing business for value creation; it is therefore sometimes understood as a construct, a mental model or a business paradigm, to be used as a guide for conducting everyday business. A family firm, by definition, is a firm in which all or the majority of ownership is in the hands of a family unit, which is managed by family members and is to be transferred to the next generation. Using a single case study, Perusahaan Batik Bogavira — a family business enterprise producing and selling batik typical of Lampung — we identified that the existing business model of Perusahaan Batik Bogavira may potentially lead to cannibalization. We therefore propose a new business model configuration in the hope that loyal buyers remain with the firm while the firm can still maintain its growth. Keywords: business model, family firm, batik industry. Abstract [translated from Indonesian]: This paper discusses the application of a relatively new concept, the "business model", to a family firm in the batik industry, Perusahaan Batik Bogavira, which produces and sells batik typical of Lampung. The aim is to review the current business model in order to assess how well it fits the characteristics of the business and its environment and, where deemed necessary, to propose a new, more suitable business model. The discussion begins by reviewing the concepts of the business model and the family firm to capture the essence of both. In general, a business model is the logic of doing business for value creation, so a business model is often also called a construct, mental model or business paradigm that serves as a guide in running a business. Meanwhile, a family firm

  12. An optomechanical model eye for ophthalmological refractive studies.

    Science.gov (United States)

    Arianpour, Ashkan; Tremblay, Eric J; Stamenov, Igor; Ford, Joseph E; Schanzlin, David J; Lo, Yuhwa

    2013-02-01

    To create an accurate, low-cost optomechanical model eye for investigation of refractive errors in clinical and basic research studies. An optomechanical fluid-filled eye model with dimensions consistent with the human eye was designed and fabricated. Optical simulations were performed on the optomechanical eye model, and the quantified resolution and refractive errors were compared with the widely used Navarro eye model using the ray-tracing software ZEMAX (Radiant Zemax, Redmond, WA). The resolution of the physical optomechanical eye model was then quantified with a complementary metal-oxide semiconductor imager using the image resolution software SFR Plus (Imatest, Boulder, CO). Refractive, manufacturing, and assembling errors were also assessed. A refractive intraocular lens (IOL) and a diffractive IOL were added to the optomechanical eye model for tests and analyses of a 1951 U.S. Air Force target chart. Resolution and aberrations of the optomechanical eye model and the Navarro eye model were qualitatively similar in ZEMAX simulations. Experimental testing found that the optomechanical eye model reproduced properties pertinent to human eyes, including resolution better than 20/20 visual acuity and a decrease in resolution as the field of view increased in size. The IOLs were also integrated into the optomechanical eye model to image objects at distances of 15, 10, and 3 feet, and they indicated a resolution of 22.8 cycles per degree at 15 feet. A life-sized optomechanical eye model with the flexibility to be patient-specific was designed and constructed. The model had the resolution of a healthy human eye and recreated normal refractive errors. This model may be useful in the evaluation of IOLs for cataract surgery. Copyright 2013, SLACK Incorporated.

  13. Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling

    Directory of Open Access Journals (Sweden)

    Paul-Antoine Santoni

    2011-01-01

    Full Text Available This work presents the extension of a physical model for the spread of surface fire to the landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to account for the structure of actual vegetation and was included in the wildland fire calculation system Forefire, which converts the two-dimensional model of fire spread to three dimensions, taking spatial information into account. Two wildland fire behavior case studies were elaborated and used as a basis to test the simulator. Both fires were reconstructed, paying attention to vegetation mapping, fire history, and meteorological data. Local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis) for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate, when most wildfires occur.

  14. Best Practices in Academic Management. Study Programs Classification Model

    Directory of Open Access Journals (Sweden)

    Ofelia Ema Aleca

    2016-05-01

    Full Text Available This article proposes and tests a set of performance indicators for the assessment of Bachelor and Master studies from two perspectives: study programs and disciplines. Academic performance at the level of a study program is calculated on the basis of success and efficiency rates, and at the discipline level on the basis of efficiency, success and absenteeism rates. This research proposes a model for classifying the study programs within a Bachelor and Master cycle based on educational performance and efficiency. What recommends this model as a best practice in academic management is the possibility of grouping a study program or a discipline into a particular category of efficiency

  15. Model simulations with COSMO-SPECS: impact of heterogeneous freezing modes and ice nucleating particle types on ice formation and precipitation in a deep convective cloud

    Directory of Open Access Journals (Sweden)

    K. Diehl

    2018-03-01

    Full Text Available In deep convective clouds, heavy rain is often formed via the ice phase. Simulations were performed using the 3-D cloud-resolving model COSMO-SPECS with detailed spectral microphysics, including parameterizations of homogeneous and three heterogeneous freezing modes. The initial conditions were selected to produce a deep convective cloud reaching 14 km of altitude with strong updrafts of up to 40 m s⁻¹. At such altitudes, with corresponding temperatures below −40 °C, the major fraction of liquid drops freezes homogeneously. The goal of the present model simulations was to investigate how additional heterogeneous freezing affects ice formation and precipitation, even though its contribution to total ice formation may be rather low. In such a situation, small perturbations that do not show significant effects at first sight may trigger cloud microphysical responses. The effects of the following small perturbations were studied: (1) additional ice formation via the immersion, contact, and deposition modes in comparison to solely homogeneous freezing, (2) contact and deposition freezing in comparison to immersion freezing, and (3) small fractions of biological ice nucleating particles (INPs) in comparison to higher fractions of mineral dust INPs. The results indicate that the modification of precipitation proceeds via the formation of larger ice particles, which may be supported by direct freezing of larger drops, by the growth of pristine ice particles through riming, and by nucleation of larger drops through collisions with pristine ice particles. In comparison to the reference case with homogeneous freezing only, such small perturbations due to additional heterogeneous freezing hardly affect the total precipitation amount; it is more likely that the temporal development and the local distribution of precipitation are affected by such perturbations. This results in a gradual increase in precipitation at early cloud stages instead of a strong increase at

  16. Innovation and Business Model: a case study about integration of Innovation Funnel and Business Model Canvas

    Directory of Open Access Journals (Sweden)

    Fábio Luiz Zandoval Bonazzi

    2014-12-01

    Full Text Available Unlike in the past, thinking about innovation today involves reflecting on value co-creation through strategic alliances, customer approaches and the adoption of different business models. Thus, this study analyzed and described the innovation process of the company DSM, connecting it to concepts of organizational development strategies and business model theory. This is a basic interpretive qualitative study, developed by means of a single case study conducted through interviews and documentary analysis. The study enabled us to categorize the company's business model as an open, unbundled and innovative model, which makes innovation a dependent variable of this internal configuration of value creation and value capture. As a theoretical contribution, we highlight the convergence and complementarity of the "Business Model Canvas" tool and the "Innovation Funnel," used here to analyze the empirical case.

  17. Toward an in-situ analytics and diagnostics framework for earth system models

    Science.gov (United States)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    The development roadmaps of many earth system models (ESMs) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that must be processed and analyzed concurrently in order to derive valuable scientific results. We are already at this threshold with the current generation of ESMs at higher-resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel I/O (PIO) library is around 100 MB/s. The high-frequency I/O requirements would add roughly 1 GB per simulated hour, translating to roughly 4 minutes of wallclock time per simulated day, i.e., 24.33 wallclock hours per simulated model year and 1,752,000 core-hours of charge per simulated model year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for 3X more simulation output volume. Meanwhile, many ESMs use instrument simulators to run forward models that compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as in the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost; hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones
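    The cost arithmetic in the abstract can be checked directly. In the sketch below, the 72,000-core allocation is a hypothetical figure back-solved from the quoted charge of 1,752,000 core-hours per simulated year; it is not stated in the text:

    ```python
    # Reproduces the I/O cost arithmetic quoted in the abstract.
    IO_RATE_MB_S = 100                 # nominal CESM throughput via the PIO library
    HIGH_FREQ_MB_PER_SIM_HOUR = 1000   # "an additional 1 GB / simulated hour"
    CORES_ASSUMED = 72_000             # assumption, back-solved; not from the source

    mb_per_sim_day = HIGH_FREQ_MB_PER_SIM_HOUR * 24                    # 24 GB/day
    io_minutes_per_sim_day = mb_per_sim_day / IO_RATE_MB_S / 60        # 4 min
    io_hours_per_sim_year = io_minutes_per_sim_day * 365 / 60          # ~24.33 h
    core_hours_per_sim_year = io_hours_per_sim_year * CORES_ASSUMED    # 1,752,000

    print(io_minutes_per_sim_day,
          round(io_hours_per_sim_year, 2),
          round(core_hours_per_sim_year))
    ```

    Note that this counts only the time the whole allocation sits idle waiting on high-frequency I/O, which is why even 4 minutes per simulated day compounds into a seven-figure core-hour charge.
    
    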

  18. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models, and its development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters whose values can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves representing the probability of control room crew non-response as a function of time under different conditions affecting crew performance. The non-response probability is then a contributor to the overall non-success of operating crews in achieving a functional objective identified in the PRA study. Because the data were sparse, simulator data and some small-scale tests were used to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing. The model can potentially help PRA analysts make human reliability assessments more explicit. It incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources, and crew response time data from simulator training exercises

  19. Study on modeling technology in digital reactor system

    International Nuclear Information System (INIS)

    Liu Xiaoping; Luo Yuetong; Tong Lili

    2004-01-01

    Modeling is the kernel part of a digital reactor system. For an extensible platform for reactor conceptual design, it is very important to study modeling technology and to develop tools that speed up the preparation of all classical computing models. This paper introduces the background of the project and the basic concept of a digital reactor. MCAM is taken as an example of such modeling, and its related technologies are described. MCAM is an interface program for MCNP geometry models, developed by the FDS team (ASIPP and HUT) and designed to run on Windows systems. It aims at utilizing CAD technology to facilitate the creation of MCNP geometry models, in two ways: (1) using user-interface technology to aid the generation of MCNP geometry models; (2) using existing 3D CAD models to accelerate the creation of MCNP geometry models. This paper gives an overview of MCAM's major functions. Finally, several examples are given to demonstrate MCAM's various capabilities. (authors)

  20. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that are based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherent in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used in a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from the double BOOT estimates having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
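    The underlying BOOT idea — resample the data, re-run model selection on each resample, and average the PM effect across the selected models — can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' double BOOT procedure: it uses naive case resampling (ignoring the serial dependence a real time-series analysis must respect), synthetic data, and an invented candidate set of polynomial trend models:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic daily data: mortality depends on PM plus a smooth time trend.
    n = 500
    t = np.arange(n, dtype=float)
    pm = rng.gamma(4.0, 5.0, n)
    y = 50 + 0.05 * pm + 0.01 * t + rng.normal(0, 2.0, n)

    def design(degree):
        """Candidate model: intercept, PM, and a time trend of the given degree."""
        cols = [np.ones(n), pm] + [t ** d for d in range(1, degree + 1)]
        return np.column_stack(cols)

    def fit_aic(X, yb):
        """Least-squares fit plus the Gaussian AIC used for model selection."""
        beta, rss, *_ = np.linalg.lstsq(X, yb, rcond=None)
        rss = float(rss[0]) if rss.size else float(np.sum((yb - X @ beta) ** 2))
        aic = len(yb) * np.log(rss / len(yb)) + 2 * X.shape[1]
        return beta, aic

    def boot_estimate(yb, n_boot=200):
        """Average the PM coefficient of the AIC-best model over resamples."""
        estimates = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)   # naive case resampling
            fits = [fit_aic(design(d)[idx], yb[idx]) for d in (1, 2, 3)]
            beta, _ = min(fits, key=lambda f: f[1])
            estimates.append(beta[1])     # PM coefficient is column 1
        return float(np.mean(estimates))

    print(round(boot_estimate(y), 3))     # near the true PM effect of 0.05
    ```

    The double BOOT extension of the paper adds a second bootstrap layer around this scheme; the sketch stops at the single-level version to keep the mechanics visible.
    
    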

  1. Example of emergency response model evaluation of studies using the Mathew/Adpic models

    International Nuclear Information System (INIS)

    Dickerson, M.H.; Lange, R.

    1986-04-01

    This report summarizes model evaluation studies conducted on the MATHEW/ADPIC transport and diffusion models during the past ten years. These models support the US Department of Energy's Atmospheric Release Advisory Capability, an emergency response service for atmospheric releases of nuclear material. The field studies involving tracer releases used in these evaluations cover a broad range of meteorology, terrain and tracer release heights, the three most important aspects of estimating air concentration values resulting from airborne releases of toxic material. The results show that these models can estimate air concentration values to within a factor of 2 some 20% to 50% of the time, and to within a factor of 5 some 40% to 80% of the time. As the meteorology and terrain become more complex and the release height of the tracer increases, the accuracy of the model calculations degrades. This band of uncertainty appears to correctly represent the capability of these models at this time. A method for estimating angular uncertainty in the model calculations is described and used to suggest alternative methods for evaluating emergency response models
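    The factor-of-2 and factor-of-5 scores quoted above are simple ratio tests on paired predictions and observations. A minimal sketch (the function name and sample numbers are illustrative, not taken from the study):

    ```python
    import numpy as np

    def fraction_within_factor(pred, obs, factor):
        """Fraction of paired (prediction, observation) values whose ratio
        lies within [1/factor, factor] -- the 'factor of N' score commonly
        used to evaluate dispersion model concentrations."""
        pred = np.asarray(pred, dtype=float)
        obs = np.asarray(obs, dtype=float)
        ratio = pred / obs
        return float(np.mean((ratio >= 1.0 / factor) & (ratio <= factor)))

    # Invented concentration pairs, purely to exercise the metric.
    obs  = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
    pred = np.array([1.5, 1.2, 30.0, 12.0, 60.0])
    print(fraction_within_factor(pred, obs, 2))   # 3 of 5 pairs -> 0.6
    print(fraction_within_factor(pred, obs, 5))   # 4 of 5 pairs -> 0.8
    ```

    By construction the factor-of-5 score can never be lower than the factor-of-2 score, which matches the nesting of the 20-50% and 40-80% ranges in the abstract.
    
    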

  2. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxies. In practice, however, a model organism is often selected ad hoc and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of the strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and the other organisms to which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model from which to extrapolate. We find that animals in general, and Homo sapiens in particular, are among the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  3. Phase field model for the study of boiling

    International Nuclear Information System (INIS)

    Ruyer, P.

    2006-07-01

    This study concerns both the modelling and the numerical simulation of boiling flows. First we propose a review concerning nucleate boiling at high wall heat flux and focus more particularly on the current understanding of the boiling crisis. From this analysis we deduce a motivation for the numerical simulation of bubble growth dynamics. The main and remaining part of this study is then devoted to the development and analysis of a phase field model for liquid-vapor flows with phase change. We propose a thermodynamic quasi-compressible formulation whose properties match the ones required for the numerical study envisaged. The system of governing equations is a thermodynamically consistent regularization of the sharp interface model, which is the advantage of diffuse interface models. We show that the thickness of the interface transition layer can be defined independently from the thermodynamic description of the bulk phases, a property that is numerically attractive. We derive the kinetic relation that allows us to analyze the consequences of the phase field formulation on the model of the dissipative mechanisms. Finally we study the numerical resolution of the model with the help of simulations of phase transition in simple configurations as well as of isothermal bubble dynamics. (author)
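
    The abstract gives no equations; as a generic illustration of the diffuse-interface idea (a sketch of a standard 1D Allen-Cahn relaxation, not the author's quasi-compressible model), the interface thickness is set by a parameter eps independently of the bulk states:

```python
import numpy as np

# Generic 1D Allen-Cahn relaxation toward a diffuse liquid-vapor interface.
# phi = -1 on one side, +1 on the other; eps sets the interface thickness
# independently of the bulk states. Illustrative only, not the thesis model.
n, eps, dt, steps = 200, 0.05, 1e-4, 5000
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
phi = np.sign(x)                      # sharp initial interface
for _ in range(steps):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    phi = phi + dt * (eps**2 * lap + phi - phi**3)
    phi[0], phi[-1] = -1.0, 1.0       # pin the bulk values at the ends
# phi now approximates the equilibrium profile tanh(x / (sqrt(2) * eps))
```

    The relaxed profile approaches tanh(x / (sqrt(2) * eps)), so halving eps halves the interface thickness without touching the bulk values of -1 and +1, which is the numerically attractive property mentioned in the abstract.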

  4. Modelling study of sea breezes in a complex coastal environment

    Science.gov (United States)

    Cai, X.-M.; Steyn, D. G.

    This study presents mesoscale modelling of sea breezes blowing from a narrow strait into the lower Fraser valley (LFV), British Columbia, Canada, during the period 17-20 July 1985. Without a nudging scheme in the inner grid, the CSU-RAMS model produces satisfactory wind and temperature fields during the daytime. In comparison with observations, the agreement indices for surface wind and temperature during the daytime reach about 0.6 and 0.95, respectively, while at night they drop to 0.4. In the vertical, profiles of modelled wind and temperature generally agree with tethersonde data collected on 17 and 19 July. The study demonstrates that in late afternoon the model does not capture the advection of an elevated warm layer which originated from land surfaces outside of the inner grid. Mixed layer depth (MLD) is calculated from the model output of the turbulent kinetic energy field. Comparison of MLD results with observations shows that the method generates a reliable MLD during the daytime, and that accurate estimates of MLD near the coast require the correct simulation of wind conditions over the sea. The study has shown that for a complex coastal environment like the LFV, a reliable modelling study depends not only on local surface fluxes but also on elevated layers transported from remote land surfaces. This dependence is especially important when local forcings are weak, for example during late afternoon and at night.
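
    The abstract does not say which agreement index is used; a common choice in mesoscale model evaluation is Willmott's index of agreement, sketched here with hypothetical temperature values:

```python
import numpy as np

def willmott_d(pred, obs):
    """Willmott's index of agreement: 1 = perfect, 0 = no skill."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    obar = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - num / den

# Hypothetical modelled vs observed surface temperatures (deg C)
obs  = [18.2, 20.1, 23.4, 25.0, 24.1, 21.3]
pred = [17.8, 20.9, 22.7, 25.6, 23.5, 20.8]
print(round(willmott_d(pred, obs), 3))  # -> 0.984
```

    Like the indices quoted above, the value is bounded by 1 (perfect agreement), with low values indicating that model errors are large relative to the variability of the observations.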

  5. Model hosts for the study of oral candidiasis.

    Science.gov (United States)

    Junqueira, Juliana Campos

    2012-01-01

    Oral candidiasis is an opportunistic infection caused by yeasts of the Candida genus, primarily Candida albicans. It is generally associated with predisposing factors such as the use of immunosuppressive agents, antibiotics, prostheses, and xerostomia. The development of research in animal models is extremely important for understanding the nature of fungal pathogenicity, host interactions, and treatment of oral mucosal Candida infections. Many oral candidiasis models in rats and mice have been developed with antibiotic administration, induction of xerostomia, treatment with immunosuppressive agents, or the use of germ-free animals, and all of these models have both benefits and limitations. Over the past decade, invertebrate model hosts, including Galleria mellonella, Caenorhabditis elegans, and Drosophila melanogaster, have been used for the study of Candida pathogenesis. These invertebrate systems offer a number of advantages over mammalian vertebrate models, predominantly because they allow the study of strain collections without the ethical considerations associated with studies in mammals. Thus, the invertebrate models may be useful for understanding the pathogenicity of Candida isolates from the oral cavity, the interactions of oral microorganisms, and the study of new antifungal compounds for oral candidiasis.

  6. Synthetic Study on the Geological and Hydrogeological Model around KURT

    International Nuclear Information System (INIS)

    Park, Kyung Woo; Kim, Kyung Su; Koh, Yong Kwon; Choi, Jong Won

    2011-01-01

    To characterize the site-specific properties of a study area for high-level radioactive waste disposal research at KAERI, several geological investigations, such as surface geological surveys and borehole drillings, have been carried out since 1997. In particular, KURT (KAERI Underground Research Tunnel) was constructed in 2006 to further the study of the geological environment. As a result, the first geological model of the study area was constructed using the results of these geological investigations. The objective of this research is to construct a hydrogeological model around the KURT area on the basis of the geological model. To accomplish this, hydrogeological data obtained from in-situ hydraulic tests in the 9 boreholes were evaluated, and the hydrogeological properties of the 4 elements of the geological model, namely the subsurface weathering zone, the low angle fracture zone, the fracture zones and the bedrock, were suggested. The hydrogeological model suggested in this study will provide input parameters for the groundwater flow modelling to be carried out as the next step of the site characterization around the KURT area.

  7. Study on geological environment model using geostatistics method

    International Nuclear Information System (INIS)

    Honda, Makoto; Suzuki, Makoto; Sakurai, Hideyuki; Iwasa, Kengo; Matsui, Hiroya

    2005-03-01

    The purpose of this study is to develop a geostatistical procedure for modelling geological environments and to evaluate the quantitative relationship between the amount of information and the reliability of the model, using the data sets obtained in the surface-based investigation phase (Phase 1) of the Horonobe Underground Research Laboratory Project. The study runs for three years, from FY2004 to FY2006, and this report covers the research in FY2005, the second year of the three-year study. In the FY2005 research, the hydrogeological model was built, as in the FY2004 research, using the data obtained from the deep boreholes (HDB-6, 7 and 8) and the ground magnetotelluric (AMT) survey executed in FY2004, in addition to the data sets used in the first year of the study. Above all, the relationship between the amount of information and the reliability of the model was demonstrated through a comparison of the models at each step, each step corresponding to the investigation stage in the respective fiscal year. Furthermore, a statistical test was applied to detect differences in the basic statistics of the various data due to geological features, with a view to incorporating the geological information into the modelling procedures. (author)

  8. Service-oriented enterprise modelling and analysis: a case study

    NARCIS (Netherlands)

    Iacob, Maria Eugenia; Jonkers, H.; Lankhorst, M.M.; Steen, M.W.A.

    2007-01-01

    In order to validate the concepts and techniques for service-oriented enterprise architecture modelling, developed in the ArchiMate project (Lankhorst, et al., 2005), we have conducted a number of case studies. This paper describes one of these case studies, conducted at the Dutch Tax and Customs

  9. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  10. MODEL STUDY FOR THE ENHANCEMENT OF CONVECTIVE CLOUD PRECIPITATION WITH SALT POWDER

    OpenAIRE

    M. V, Belyaeva; A.S, Drofa; V.N, Ivanov; Kudsy, Mahally; Haryanto, Untung; Goenawan, R Djoko; Harsanti, Dini; Ridwan, Ridwan

    2011-01-01

    A study on the use of polydisperse salt powder as a cloud-seeding agent has been carried out using a 1-dimensional model. In this study the effect of adding the salt powder on the cloud droplet distribution and on the amount of additional precipitation was examined, and the results were analysed and compared with those obtained using hygroscopic particles produced by pyrotechnic flares. The cloud conditions studied comprised various cloud heights, updrafts and...

  11. Case Studies in Modelling, Control in Food Processes.

    Science.gov (United States)

    Glassey, J; Barone, A; Montague, G A; Sabou, V

    This chapter discusses the importance of modelling and control in increasing food process efficiency and ensuring product quality. Various approaches to both modelling and control in food processing are set in the context of the specific challenges in this industrial sector and latest developments in each area are discussed. Three industrial case studies are used to demonstrate the benefits of advanced measurement, modelling and control in food processes. The first case study illustrates the use of knowledge elicitation from expert operators in the process for the manufacture of potato chips (French fries) and the consequent improvements in process control to increase the consistency of the resulting product. The second case study highlights the economic benefits of tighter control of an important process parameter, moisture content, in potato crisp (chips) manufacture. The final case study describes the use of NIR spectroscopy in ensuring effective mixing of dry multicomponent mixtures and pastes. Practical implementation tips and infrastructure requirements are also discussed.

  12. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    Full Text Available We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
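
    For intuition, the idea reduces nicely to the univariate AR(1) case: the classic first-order analytical bias is E[phi_hat] - phi ~ -(1 + 3*phi)/T, and the bootstrap alternative re-estimates the bias by simulation. A sketch (an AR(1) reduction for illustration, not the authors' VAR code):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, T, rng):
    """Zero-mean AR(1): y_t = phi * y_{t-1} + e_t."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

def ols_ar1(y):
    """OLS slope of y_t on y_{t-1}."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

phi_true, T = 0.9, 100
phi_hat = ols_ar1(simulate_ar1(phi_true, T, rng))

# Analytical first-order correction (Kendall): bias ~ -(1 + 3*phi)/T
phi_analytic = phi_hat + (1 + 3 * phi_hat) / T

# Bootstrap correction: re-simulate at phi_hat and subtract the mean bias
boot = [ols_ar1(simulate_ar1(phi_hat, T, rng)) for _ in range(500)]
phi_boot = 2 * phi_hat - np.mean(boot)
```

    Both corrections push the estimate upward, since the OLS bias is negative for positive autoregressive parameters; the paper's point is that the cheap analytical formula competes well with the bootstrap in the stationary case.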

  13. Evaluation of Multiclass Model Observers in PET LROC Studies

    Science.gov (United States)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise
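
    The NPW observer is, in essence, a matched filter: its test statistic is the cross-correlation of the expected signal profile with the image, and localization takes the best-scoring position. A toy sketch (a Gaussian blob in white noise standing in for the reconstructed PET slices; all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy NPW observer: cross-correlate the known signal template with the
# image and localize at the best score. A Gaussian blob in white noise
# stands in for the reconstructed PET slice; all sizes are hypothetical.
n, sigma, amp = 64, 2.0, 5.0
yy, xx = np.mgrid[:n, :n]

def blob(cy, cx):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

true_loc = (40, 22)
image = amp * blob(*true_loc) + rng.standard_normal((n, n))

# NPW statistic lambda(r) = sum_x template(x - r) * image(x), via FFT
template = np.fft.ifftshift(blob(n // 2, n // 2))   # template centered at (0, 0)
scores = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))))
found = np.unravel_index(np.argmax(scores), scores.shape)
```

    The channelized (CNPW) variant applies the same statistic after projecting the image onto a small set of frequency channels, which is what brings the model observer's performance closer to that of humans.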

  14. Molecular modeling of protein materials: case study of elastin

    International Nuclear Information System (INIS)

    Tarakanova, Anna; Buehler, Markus J

    2013-01-01

    Molecular modeling of protein materials is a quickly growing area of research that has produced numerous contributions in fields ranging from structural engineering to medicine and biology. We review here the history and methods commonly employed in molecular modeling of protein materials, emphasizing the advantages for using modeling as a complement to experimental work. We then consider a case study of the protein elastin, a critically important ‘mechanical protein’ to exemplify the approach in an area where molecular modeling has made a significant impact. We outline the progression of computational modeling studies that have considerably enhanced our understanding of this important protein which endows elasticity and recoil to the tissues it is found in, including the skin, lungs, arteries and the heart. A vast collection of literature has been directed at studying the structure and function of this protein for over half a century, the first molecular dynamics study of elastin being reported in the 1980s. We review the pivotal computational works that have considerably enhanced our fundamental understanding of elastin's atomistic structure and its extraordinary qualities—focusing on two in particular: elastin's superb elasticity and the inverse temperature transition—the remarkable ability of elastin to take on a more structured conformation at higher temperatures, suggesting its effectiveness as a biomolecular switch. Our hope is to showcase these methods as both complementary and enriching to experimental approaches that have thus far dominated the study of most protein-based materials. (topical review)

  15. A model for voltage collapse study considering load characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, L B [Companhia de Energia Eletrica da Bahia (COELBA), Salvador, BA (Brazil)

    1994-12-31

    This paper presents a model for the analysis of the voltage collapse and instability problem considering load characteristics. The model represents the transmission lines in exact form through the generalized constants A, B, C, D, and the loads as functions of the voltage, emphasizing the cases of constant power, constant current and constant impedance. The study treats the system behavior in steady state and presents illustrative graphics of the problem. (author) 12 refs., 4 figs.
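
    The generalized constants relate the two line ends through V_s = A*V_r + B*I_r and I_s = C*V_r + D*I_r. For a constant-power load, I_r = conj(S/V_r), so the receiving-end voltage can be found by fixed-point iteration; a sketch with illustrative per-unit values (not values from the paper):

```python
import numpy as np

# Illustrative per-unit line constants and load, not values from the paper.
A = 0.98 + 0.01j            # generalized constants of the line
B = 0.05 + 0.20j
S = 0.8 + 0.3j              # constant-power load P + jQ (p.u.)
Vs = 1.0 + 0.0j             # sending-end voltage (p.u.)

# Solve Vs = A*Vr + B*conj(S/Vr) for Vr by fixed-point iteration.
Vr = 1.0 + 0.0j
for _ in range(100):
    Vr = (Vs - B * np.conj(S / Vr)) / A

residual = Vs - (A * Vr + B * np.conj(S / Vr))
```

    As the constant-power load approaches the line's maximum transferable power the iteration stops converging, which is one numerical symptom of the voltage collapse point such studies examine.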

  16. Singularity analysis in nonlinear biomathematical models: Two case studies

    International Nuclear Information System (INIS)

    Meletlidou, E.; Leach, P.G.L.

    2007-01-01

    We investigate the possession of the Painleve Property for certain values of the parameters in two biological models. The first is a metapopulation model for two species (prey and predator) and the second one is a study of a sexually transmitted disease, into which 'education' is introduced. We determine the cases for which the systems possess the Painleve Property, in particular some of the cases for which the equations can be directly integrated. We draw conclusions for these cases

  17. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  18. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    Data warehouse systems are used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art overview of multidimensional modeling design.

  19. A study of critical two-phase flow models

    International Nuclear Information System (INIS)

    Siikonen, T.

    1982-01-01

    The existing computer codes use different boundary conditions in the calculation of critical two-phase flow. In the present study these boundary conditions are compared. It is shown that the boundary condition should be determined from the hydraulic model used in the computer code. The use of a correlation that is not based on the hydraulic model used often leads to poor results. Usually good agreement with data is obtained for the critical mass flux, but the agreement is not as good for the pressure profiles. The reason is suggested to be mainly inadequate modeling of non-equilibrium effects. (orig.)

  20. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models have become more sophisticated. At the same time simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) for runoff but not for soil moisture. Furthermore, the most sophisticated PREVAH model shows an added value compared to the HBV model only for soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  1. Recent validation studies for two NRPB environmental transfer models

    International Nuclear Information System (INIS)

    Brown, J.; Simmonds, J.R.

    1991-01-01

    The National Radiological Protection Board (NRPB) developed a dynamic model for the transfer of radionuclides through terrestrial food chains some years ago. This model, now called FARMLAND, predicts both instantaneous and time integrals of concentration of radionuclides in a variety of foods. The model can be used to assess the consequences of both accidental and routine releases of radioactivity to the environment; and results can be obtained as a function of time. A number of validation studies have been carried out on FARMLAND. In these the model predictions have been compared with a variety of sets of environmental measurement data. Some of these studies will be outlined in the paper. A model to predict external radiation exposure from radioactivity deposited on different surfaces in the environment has also been developed at NRPB. This model, called EXPURT (EXPosure from Urban Radionuclide Transfer), can be used to predict radiation doses as a function of time following deposition in a variety of environments, ranging from rural to inner-city areas. This paper outlines validation studies and future extensions to be carried out on EXPURT. (12 refs., 4 figs.)

  2. Geology - Background complementary studies. Forsmark modelling stage 2.2

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, Michael B. [Geological Survey of Sweden, Uppsala (Sweden); Skagius, Kristina [Kemakta Konsult AB, Stockholm (Sweden)] (eds.)

    2007-09-15

    During Forsmark model stage 2.2, seven complementary geophysical and geological studies were initiated by the geological modelling team, in direct connection with and as a background support to the deterministic modelling of deformation zones. One of these studies involved a field control on the character of two low magnetic lineaments with NNE and NE trends inside the target volume. The interpretation of these lineaments formed one of the late deliveries to SKB that took place after the data freeze for model stage 2.2 and during the initial stage of the modelling work. Six studies involved a revised processing and analysis of reflection seismic, refraction seismic and selected oriented borehole radar data, all of which had been presented earlier in connection with the site investigation programme. A prime aim of all these studies was to provide a better understanding of the geological significance of indirect geophysical data to the geological modelling team. Such essential interpretative work was lacking in the material acquired in connection with the site investigation programme. The results of these background complementary studies are published together in this report. The titles and authors of the seven background complementary studies are presented below. Summaries of the results of each study, with a focus on the implications for the geological modelling of deformation zones, are presented in the master geological report, SKB-R--07-45. The sections in the master report, where reference is made to each background complementary study and where the summaries are placed, are also provided. The individual reports are listed in the order that they are referred to in the master geological report and as they appear in this report. 1. Scan line fracture mapping and magnetic susceptibility measurements across two low magnetic lineaments with NNE and NE trend, Forsmark. Jesper Petersson, Ulf B. Andersson and Johan Berglund. 2. 
Integrated interpretation of surface and

  4. Modelling and Analysis of Smart Grid: A Stochastic Model Checking Case Study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Zhu, Huibiao; Nielson, Hanne Riis

    2012-01-01

    Cyber-physical systems integrate information and communication technology functions into the physical elements of a system for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. In this context, an important issue is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese Smart Grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.

  5. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.

  6. Looking beyond general metrics for model comparison - lessons from an international model intercomparison study

    Science.gov (United States)

    de Boer-Euser, Tanja; Bouaziz, Laurène; De Niel, Jan; Brauer, Claudia; Dewals, Benjamin; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2017-01-01

    International collaboration between research institutes and universities is a promising way to reach consensus on hydrological model development. Although model comparison studies are very valuable for international cooperation, they often do not lead to very clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and by the comparison methods used, which focus too much on good overall performance instead of on a variety of specific events. In this study, we use an approach that focuses on the evaluation of specific events and characteristics. Eight international research groups calibrated their hourly model on the Ourthe catchment in Belgium and carried out a validation in time for the Ourthe catchment and a validation in space for nested and neighbouring catchments. The same protocol was followed for each model and an ensemble of best-performing parameter sets was selected. Although the models showed similar performances based on general metrics (i.e. the Nash-Sutcliffe efficiency), clear differences could be observed for specific events. We analysed the hydrographs of these specific events and conducted three types of statistical analyses on the entire time series: cumulative discharges, empirical extreme value distributions of the peak flows and flow duration curves for low flows. The results illustrate the relevance, for the studied catchments, of including a very quick flow reservoir preceding the root zone storage to model peaks during low flows, and of including a slow reservoir in parallel with the fast reservoir to model the recession. This intercomparison enhanced the understanding of the hydrological functioning of the catchment, in particular for low flows, and enabled us to identify present knowledge gaps for other parts of the hydrograph. Above all, it helped to evaluate each model against a set of alternative models.
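
    The Nash-Sutcliffe efficiency used as the general metric above compares model errors against the variance of the observations, which is one reason it is dominated by peak flows rather than the low-flow behaviour examined in the study; a minimal sketch with hypothetical discharges:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than mean(obs)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical hourly discharges (m3/s)
obs = [5.0, 7.2, 15.4, 30.1, 22.5, 12.0, 8.3]
sim = [4.6, 8.0, 14.1, 27.9, 24.0, 13.2, 7.7]
print(round(nse(sim, obs), 3))  # -> 0.977
```

    Because the squared errors are largest at the peaks, two models can share a high NSE yet behave very differently during recessions and low flows, which motivates the event-based evaluation proposed above.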

  7. Open Innovation and Business Model: A Brazilian Company Case Study

    Directory of Open Access Journals (Sweden)

    Elzo Alves Aranha

    2015-12-01

    Full Text Available Open Innovation is increasingly being introduced in international and national organizations for the creation of value. Open innovation is a practical tool, requiring new strategies and decisions from managers for the exploitation of innovative activities. The basic question that this study seeks to answer concerns the practice of open innovation in connection with an open business model geared towards the creation of value in a Brazilian company. This paper aims to present a case study that illustrates how open innovation offers resources to change the open business model in order to create value for the Brazilian company. The case study method was applied to a company in the pharma-chemical products sector. The results indicate that internal sources of knowledge, external sources of knowledge and strengthened working partnerships were the strategies adopted by the company to provide the resources to change its open business model in order to create value.

  8. Teaching Mathematical Modelling for Earth Sciences via Case Studies

    Science.gov (United States)

    Yang, Xin-She

    2010-05-01

    Mathematical modelling is becoming crucially important for the earth sciences because the modelling of complex systems such as geological, geophysical and environmental processes requires mathematical analysis, numerical methods and computer programming. However, a substantial fraction of earth science undergraduates and graduates may not have sufficient skills in mathematical modelling, due either to limited mathematical training or to the lack of appropriate mathematical textbooks for self-study. In this paper, we describe a detailed case-study-based approach for teaching mathematical modelling. We illustrate how essential mathematical skills can be developed for students with limited training in secondary mathematics so that they are confident in dealing with real-world mathematical modelling at university level. We have chosen various topics such as Airy isostasy, the greenhouse effect, sedimentation and Stokes' flow, free-air and Bouguer gravity, Brownian motion, rain-drop dynamics, impact cratering, heat conduction and cooling of the lithosphere as case studies; and we use these step-by-step case studies to teach exponentials, logarithms, spherical geometry, basic calculus, complex numbers, Fourier transforms, ordinary differential equations, vectors and matrix algebra, partial differential equations, geostatistics and basic numerical methods. Implications for teaching university mathematics to earth scientists in tomorrow's classroom are also discussed. References: 1) D. L. Turcotte and G. Schubert, Geodynamics, 2nd Edition, Cambridge University Press (2002). 2) X. S. Yang, Introductory Mathematics for Earth Scientists, Dunedin Academic Press (2009).
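
    Of the case studies listed, Airy isostasy is the quickest to make concrete: a topographic load of height h floats on the mantle with a compensating crustal root r = h * rho_c / (rho_m - rho_c). A sketch with typical textbook densities (not values taken from the cited books):

```python
# Airy isostasy: a mountain of height h (km) is compensated by a crustal
# root r = h * rho_c / (rho_m - rho_c). Typical textbook densities,
# illustrative only.
rho_c = 2800.0   # crustal density, kg/m^3
rho_m = 3300.0   # mantle density, kg/m^3

def airy_root(h_km):
    return h_km * rho_c / (rho_m - rho_c)

print(airy_root(5.0))  # -> 28.0 km of root under a 5 km mountain
```

    The factor rho_c / (rho_m - rho_c) is large because the density contrast at the Moho is small, which is why mountain roots are several times deeper than the mountains are high.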

  9. Animal Models for the Study of Female Sexual Dysfunction

    Science.gov (United States)

    Marson, Lesley; Giamberardino, Maria Adele; Costantini, Raffaele; Czakanski, Peter; Wesselmann, Ursula

    2017-01-01

    Introduction Significant progress has been made in elucidating the physiological and pharmacological mechanisms of female sexual function through preclinical animal research. The continued development of animal models is vital for the understanding and treatment of the many diverse disorders that occur in women. Aim To provide an updated review of the experimental models evaluating female sexual function that may be useful for clinical translation. Methods Review of English-written, peer-reviewed literature, primarily from 2000 to 2012, that described studies on female sexual behavior related to motivation, arousal, physiological monitoring of genital function and urogenital pain. Main Outcome Measures Analysis of supporting evidence for the suitability of the animal model to provide measurable indices related to desire, arousal, reward, orgasm, and pelvic pain. Results The development of female animal models has provided important insights into the peripheral and central processes regulating sexual function. Behavioral models of sexual desire, motivation, and reward are well developed. Central arousal and orgasmic responses are less well understood, compared with the physiological changes associated with genital arousal. Models of nociception are useful for replicating symptoms and identifying the neurobiological pathways involved. While in some cases translation to women correlates with the findings in animals, the requirement of circulating hormones for sexual receptivity in rodents and the multifactorial nature of women’s sexual function require better-designed studies and careful analysis. The current models have studied sexual dysfunction or pelvic pain in isolation; combining these aspects would help to elucidate interactions of the pathophysiology of pain and sexual dysfunction. Conclusions Basic research in animals has been vital for understanding the anatomy, neurobiology, and physiological mechanisms underlying sexual function and urogenital pain.

  10. Study and discretization of kinetic models and fluid models at low Mach number

    International Nuclear Information System (INIS)

    Dellacherie, Stephane

    2011-01-01

    This thesis summarizes our work between 1995 and 2010. It concerns the analysis and the discretization of Fokker-Planck or semi-classical Boltzmann kinetic models and of Euler or Navier-Stokes fluid models at low Mach number. The studied Fokker-Planck equation models the collisions between ions and electrons in a hot plasma, and is here applied to inertial confinement fusion. The studied semi-classical Boltzmann equations are of two types. The first models the thermonuclear reaction between a deuterium ion and a tritium ion producing an α particle and a neutron, and is in our case also used to describe inertial confinement fusion. The second (known as the Wang-Chang and Uhlenbeck equations) models the transitions between quantified electronic energy levels of uranium and iron atoms in the AVLIS isotopic separation process. The basic properties of these two Boltzmann equations are studied, and, for the Wang-Chang and Uhlenbeck equations, a kinetic-fluid coupling algorithm is proposed. This kinetic-fluid coupling algorithm prompted us to study the relaxation concept for mixtures of gases and of immiscible fluids, and to underline connections with classical kinetic theory. Then, a diphasic low Mach number model without acoustic waves is proposed to model the deformation of the interface between two immiscible fluids induced by high heat transfers at low Mach number. In order to increase the accuracy of the results without increasing the computational cost, an AMR algorithm is studied on a simplified interface deformation model. These low Mach number studies also prompted us to analyse, on cartesian meshes, the inaccuracy of Godunov schemes at low Mach number. Finally, the LBM algorithm applied to the heat equation is justified.

  11. A comparative study of the constitutive models for silicon carbide

    Science.gov (United States)

    Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra

    2001-06-01

    Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, the effect of pressure confinement on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The developed model is able to capture most of the material features observed experimentally, but more work is needed to better match the experimental data quantitatively.

  12. Modeling of environmentally significant interfaces: Two case studies

    International Nuclear Information System (INIS)

    Williford, R.E.

    2006-01-01

    When some parameters cannot be easily measured experimentally, mathematical models can often be used to deconvolute or interpret data collected on complex systems, such as those characteristic of many environmental problems. These models can help quantify the contributions of various physical or chemical phenomena that contribute to the overall behavior, thereby enabling the scientist to control and manipulate these phenomena, and thus to optimize the performance of the material or device. In the first case study presented here, a model is used to test the hypothesis that oxygen interactions with hydrogen on the catalyst particles of solid oxide fuel cell anodes can sometimes occur a finite distance away from the triple phase boundary (TPB), so that such reactions are not restricted to the TPB as normally assumed. The model may help explain a discrepancy between the observed structure of SOFCs and their performance. The second case study develops a simple physical model that allows engineers to design and control the sizes and shapes of mesopores in silica thin films. Such pore design can be useful for enhancing the selectivity and reactivity of environmental sensors and catalysts. This paper demonstrates the mutually beneficial interactions between experiment and modeling in the solution of a wide range of problems

  13. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance level introduced by a change of business model. The selected case is a small family business undergoing substantial changes in response to structural changes in its markets. The authors used the concept of the business model to describe value creation processes within the selected family business, and by contrasting value creation processes before and after the change they demonstrate the role of the business model as a performance differentiator. This is illustrated with business model canvases constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by financial analysis of the business conducted over the period 2006–2012: ROA, ROE and ROS reached their lowest levels before the change of business model was introduced and grew after its introduction, with the activity indicators of the family business showing similar developments. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  14. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

    This paper reviews the concept of metadata and related research in China and abroad, and then explains the motivation for studying a metadata model for the high-level nuclear waste deep geological disposal project. With reference to GML, the authors first set up DML under the framework of digital underground space engineering. Based on DML, a standardized metadata scheme for the high-level nuclear waste deep geological disposal project is presented. Then, a metadata model making use of the internet is put forward. With standardized data and CSW services, this model may solve the problem of sharing and exchanging data of different forms. A metadata editor was built to search and maintain metadata based on this model. (authors)

  15. A model for fine mapping in family based association studies.

    Science.gov (United States)

    Boehringer, Stefan; Pfeiffer, Ruth M

    2009-01-01

    Genome wide association studies for complex diseases are typically followed by more focused characterization of the identified genetic region. We propose a latent class model to evaluate a candidate region with several measured markers using observations on families. The main goal is to estimate linkage disequilibrium (LD) between the observed markers and the putative true but unobserved disease locus in the region. Based on this model, we estimate the joint distribution of alleles at the observed markers and the unobserved true disease locus, and a penetrance parameter measuring the impact of the disease allele on disease risk. A family specific random effect allows for varying baseline disease prevalences for different families. We present a likelihood framework for our model and assess its properties in simulations. We apply the model to an Alzheimer data set and confirm previous findings in the ApoE region.
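    For readers unfamiliar with the LD quantities such a model estimates, the standard pairwise measures D, D′ and r² can be computed directly from a haplotype frequency and the two marginal allele frequencies. The sketch below uses made-up frequencies purely for illustration; it is not code or data from the study:

```python
def ld_measures(p_ab, p_a, p_b):
    """Pairwise LD between marker allele A and disease allele B.

    p_ab: frequency of the A-B haplotype; p_a, p_b: marginal allele frequencies.
    Returns (D, D', r^2)."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

# Hypothetical marker with allele frequency 0.3, disease allele frequency 0.2,
# and A-B haplotype frequency 0.2 (complete LD):
print(ld_measures(0.2, 0.3, 0.2))  # approximately D = 0.14, D' = 1.0, r^2 = 0.58
```

    D′ = 1 whenever one haplotype is absent, while r² = 1 additionally requires matched allele frequencies — which is why fine-mapping studies usually report both.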

  16. NATO Advanced Study Institute on Advanced Physical Oceanographic Numerical Modelling

    CERN Document Server

    1986-01-01

    This book is a direct result of the NATO Advanced Study Institute held in Banyuls-sur-mer, France, June 1985. The Institute had the same title as this book. It was held at Laboratoire Arago. Eighty lecturers and students from almost all NATO countries attended. The purpose was to review the state of the art of physical oceanographic numerical modelling including the parameterization of physical processes. This book represents a cross-section of the lectures presented at the ASI. It covers elementary mathematical aspects through large scale practical aspects of ocean circulation calculations. It does not encompass every facet of the science of oceanographic modelling. We have, however, captured most of the essence of mesoscale and large-scale ocean modelling for blue water and shallow seas. There have been considerable advances in modelling coastal circulation which are not included. The methods section does not include important material on phase and group velocity errors, selection of grid structures, advanc...

  17. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  18. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
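    The comparison of an ML estimator's variance against the Cramer-Rao lower bound can be illustrated with a toy problem far simpler than the ECT observation model: estimating a Poisson rate λ from n counts, where the ML estimate is the sample mean and the bound is λ/n. Everything below is a hypothetical illustration, not the authors' simulation:

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Draw one Poisson variate via Knuth's multiplication method (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ml_variance_vs_crlb(lam=5.0, n=200, reps=1000, seed=1):
    """Empirical variance of the ML estimate (the sample mean) vs the bound lam/n."""
    rng = random.Random(seed)
    estimates = [statistics.fmean(poisson(lam, rng) for _ in range(n))
                 for _ in range(reps)]
    return statistics.variance(estimates), lam / n

var_ml, crlb = ml_variance_vs_crlb()
print(var_ml, crlb)  # the empirical variance should sit near (at or above) lam/n
```

    In this toy case the sample mean is efficient, so the two numbers nearly coincide; for the joint boundary-plus-perfusion problem of the paper, the gap between estimator variance and the bound is exactly what the simulations quantify.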

  19. A study of composite models at LEP with ALEPH

    International Nuclear Information System (INIS)

    Badaud, F.

    1992-04-01

    Tests of composite models are performed in e⁺e⁻ collisions in the vicinity of the Z⁰ pole using the ALEPH detector. Two kinds of substructure effects are searched for: deviations of the differential cross sections for the reactions e⁺e⁻ → l⁺l⁻ and e⁺e⁻ → γγ from Standard Model predictions, and a direct search for an excited neutrino. A new interaction, parametrized by a four-fermion contact term, is studied in lepton pair production, assuming different chiralities of the currents. Lower limits on the compositeness scale Λ are obtained by fitting model predictions to the data; they range from 1 to a few TeV depending on the model and lepton flavour. Searches for the lightest excited particle, which could be the excited neutrino, are presented.

  20. How do humans inspect BPMN models: an exploratory study.

    Science.gov (United States)

    Haisjackl, Cornelia; Soffer, Pnina; Lim, Shao Yi; Weber, Barbara

    2018-01-01

    Even though considerable progress regarding the technical perspective on modeling and supporting business processes has been achieved, it appears that the human perspective is still often left aside. In particular, we do not have an in-depth understanding of how process models are inspected by humans, what strategies are taken, what challenges arise, and what cognitive processes are involved. This paper contributes toward such an understanding and reports an exploratory study investigating how humans identify and classify quality issues in BPMN process models. Providing preliminary answers to initial research questions, we also indicate other research questions that can be investigated using this approach. Our qualitative analysis shows that humans adopt different strategies for identifying quality issues. In addition, we observed several challenges that appear when humans inspect process models. Finally, we present the different manners in which classification of quality issues was addressed.

  1. Experimental study and modeling of a novel magnetorheological elastomer isolator

    International Nuclear Information System (INIS)

    Yang, Jian; Li, Weihua; Sun, Shuaishuai; Du, Haiping; Li, Yancheng; Li, Jianchun; Deng, H X

    2013-01-01

    This paper reports an experimental setup aimed at evaluating the performance of a newly designed magnetorheological elastomer (MRE) seismic isolator. As a further effort to explore the field-dependent stiffness/damping properties of the MRE isolator, a series of experimental tests was conducted. Based on the analysis of the experimental responses and the characteristics of the MRE isolator, a new model capable of reproducing the isolator’s unique dynamic behavior is proposed. The validation results verify the model’s effectiveness in portraying the MRE isolator. A study on the field-dependent parameters is then provided to keep the model valid under fluctuating magnetic fields. To fully explore the mechanism of the proposed model, an investigation of the model’s dependence on each parameter is carried out. (technical note)

  2. Comparison Study on Low Energy Physics Model of GEANT4

    International Nuclear Information System (INIS)

    Park, So Hyun; Jung, Won Gyun; Suh, Tae Suk

    2010-01-01

    The Geant4 simulation toolkit provides improved or updated physics models with each version. The latest version, Geant4.9.3, incorporates the Livermore data libraries and an updated physics model into its low-energy electromagnetic physics, and also improves several physics factors through modified code. In this study, stopping power and CSDA (continuous slowing down approximation) range data for electrons and other particles were acquired in various materials and compared with NIST (National Institute of Standards and Technology) data. Through this comparison between Geant4 simulation results and NIST data, the improvement of the low-energy electromagnetic physics model in Geant4.9.3 was evaluated relative to Geant4.9.2.
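    The CSDA range compared in the study is, by definition, the integral of the reciprocal stopping power over energy, R = ∫ dE / S(E). A minimal numerical sketch of that definition (the power-law stopping function is a made-up stand-in, not Geant4 or NIST data):

```python
def csda_range(stopping_power, e_min, e_max, n=10000):
    """CSDA range: R = integral from e_min to e_max of dE / S(E), trapezoid rule.

    stopping_power: callable S(E) in MeV cm^2/g; energies in MeV; R in g/cm^2.
    """
    h = (e_max - e_min) / n
    total = 0.5 * (1.0 / stopping_power(e_min) + 1.0 / stopping_power(e_max))
    total += sum(1.0 / stopping_power(e_min + i * h) for i in range(1, n))
    return total * h

# Made-up power-law stopping function, for demonstration only (not NIST data)
toy_stopping = lambda e: 2.0 * e ** -0.3
print(csda_range(toy_stopping, 0.01, 10.0))
```

    A comparison like the one in the paper would tabulate S(E) from each Geant4 version and from NIST ESTAR, then compare the resulting ranges material by material.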

  3. Studying autism in rodent models: reconciling endophenotypes with comorbidities.

    Directory of Open Access Journals (Sweden)

    Andrew Argyropoulos

    2013-07-01

    Full Text Available Autism spectrum disorder (ASD) patients commonly exhibit a variety of comorbid traits including seizures, anxiety, aggressive behavior, gastrointestinal problems, motor deficits, abnormal sensory processing and sleep disturbances, for which the cause is unknown. These features impact negatively on daily life and can exaggerate the effects of the core diagnostic traits (social communication deficits and repetitive behaviors). Studying endophenotypes relevant to both core and comorbid features of ASD in rodent models can provide insight into the biological mechanisms underlying these disorders. Here we review the characterization of endophenotypes in a selection of environmental, genetic and behavioural rodent models of ASD. In addition to exhibiting core ASD-like behaviours, each of these animal models displays one or more endophenotypes relevant to comorbid features, including altered sensory processing, seizure susceptibility, anxiety-like behaviour and disturbed motor functions, suggesting that these traits are indicators of altered biological pathways in ASD. However, the study of behaviours paralleling comorbid traits in animal models of ASD is an emerging field, and further research is needed to assess altered gastrointestinal function, aggression and disorders of sleep onset across models. Future studies should include investigation of these endophenotypes in order to advance our understanding of the etiology of this complex disorder.

  4. QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2016-01-01

    Full Text Available The quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators (experts, agencies and organisations). Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) do not fit well with non-profit activities, policies and programmes, because these are much more complex than the environments from which the quality models derive (for example, the assembly line). The quality of an academic study programme is complex and understood differently by various stakeholders, so we present dimensional evaluation in this article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by the evaluation criteria. Both are then joined into an evaluation model revealing which evaluation criteria can and should be evaluated by which stakeholder. The main research questions are posed and a research method for each dimension is listed.

  5. Geochemical modeling of uranium mill tailings: a case study

    International Nuclear Information System (INIS)

    Peterson, S.R.; Felmy, A.R.; Serne, R.J.; Gee, G.W.

    1983-08-01

    Liner failure was not found to be a problem when various acidic tailings solutions leached through liner materials for periods of up to 3 years. On the contrary, materials that contained over 30% clay showed a decrease in permeability with time in the laboratory columns. The decreases in permeability noted above are attributed to pore plugging resulting from the precipitation of minerals and solids. This precipitation takes place due to the increase in pH of the tailings solution brought about by the buffering capacity of the soil. Geochemical modeling predicts, and x-ray characterization confirms, that precipitation of solids from solution is occurring in the acidic tailings solution/liner interactions studied. X-ray diffraction identified gypsum and alunite-group minerals, such as jarosite, as having precipitated after acidic tailings solutions reacted with clay liners. The geochemical modeling and experimental work described above were used to construct an equilibrium conceptual model consisting of minerals and solid phases, developed to represent a soil column. A computer program was used as a tool to solve the system of mathematical equations imposed by the conceptual chemical model. The combined conceptual model and computer program were used to predict aqueous-phase compositions of effluent solutions from permeability cells packed with geologic materials and percolated with uranium mill tailings solutions. An initial conclusion drawn from these studies is that the laboratory experiments and geochemical modeling predictions were capable of simulating field observations. The same mineralogical changes and contaminant reductions observed in the laboratory studies were found at a drained evaporation pond (Lucky Mc in Wyoming) with a 10-year history of acid attack. 24 references, 5 figures, 5 tables
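    The precipitation prediction described here rests on comparing an ion activity product against a solubility product: the saturation index SI = log10(IAP/Ksp), with SI > 0 indicating supersaturation. A minimal sketch for gypsum — the activities and the Ksp value are illustrative assumptions, not the study's data:

```python
import math

def saturation_index(ion_activity_product, k_sp):
    """SI = log10(IAP / Ksp); SI > 0 means supersaturated (mineral may precipitate)."""
    return math.log10(ion_activity_product / k_sp)

# Illustrative gypsum (CaSO4 . 2H2O) check with assumed ion activities and Ksp
a_ca, a_so4 = 2e-2, 3e-2  # hypothetical Ca and SO4 activities
si = saturation_index(a_ca * a_so4, 10 ** -4.58)
print(round(si, 2))  # positive -> gypsum predicted to precipitate
```

    Full geochemical codes repeat this check over many minerals simultaneously, after first computing activities from measured concentrations via activity-coefficient models.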

  6. Southeast Atmosphere Studies: learning from model-observation syntheses

    Directory of Open Access Journals (Sweden)

    J. Mao

    2018-02-01

    Full Text Available Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and

  7. Southeast Atmosphere Studies: learning from model-observation syntheses

    Science.gov (United States)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-02-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  8. The contribution of animal models to the study of obesity.

    Science.gov (United States)

    Speakman, John; Hambly, Catherine; Mitchell, Sharon; Król, Elzbieta

    2008-10-01

    Obesity results from prolonged imbalance of energy intake and energy expenditure. Animal models have provided a fundamental contribution to the historical development of understanding the basic parameters that regulate the components of our energy balance. Five different types of animal model have been employed in the study of the physiological and genetic basis of obesity. The first models reflect single-gene mutations that have arisen spontaneously in rodent colonies and have subsequently been characterized. The second approach is to speed up the random mutation rate artificially by treating rodents with mutagens or exposing them to radiation. The third type of model comprises mice and rats in which a specific gene has been deliberately disrupted or over-expressed. Such genetically engineered disruptions may be generated through the entire body for the entire life (global transgenic manipulations) or restricted in time and to certain tissue or cell types. In all these genetically engineered scenarios, there are two types of situation that lead to insights: where a specific gene hypothesized to play a role in the regulation of energy balance is targeted, and where a gene is disrupted for a different purpose, but the consequence is an unexpected obese or lean phenotype. A fourth group of animal models concerns experiments where selective breeding has been utilized to derive strains of rodents that differ in their degree of fatness. Finally, studies have been made of other species, including non-human primates and dogs. In addition to studies of the physiological and genetic basis of obesity, studies of animal models have also informed us about the environmental aspects of the condition. Studies in this context include exploring the responses of animals to high-fat or high-fat/high-sugar (cafeteria) diets, investigations of the effects of dietary restriction on body mass and fat loss, and studies of the impact of candidate pharmaceuticals on components of energy

  9. STUDY OF INSTRUCTIONAL MODELS AND SYNTAX AS AN EFFORT FOR DEVELOPING ‘OIDDE’ INSTRUCTIONAL MODEL

    Directory of Open Access Journals (Sweden)

    Atok Miftachul Hudha

    2016-07-01

    Full Text Available The 21st century requires the availability of human resources with seven skills or competences (Maftuh, 2016), namely: (1) critical thinking and problem-solving skills, (2) creativity and innovation, (3) ethical behavior, (4) flexibility and quick adaptation, (5) competence in ICT and literacy, (6) interpersonal and collaborative capabilities, and (7) social skills and cross-cultural interaction. One of these 21st-century competences, behaving ethically, should be established and developed through learning that includes the study of ethics, because ethical behavior cannot simply be possessed by humans as it is, but must develop through problem solving, especially the solving of ethical dilemmas within ethical problems. The fundamental problem, if ethical behavior competence is to be achieved through learning, is that teachers have not yet found the right learning model for implementing learning associated with ethical values as expected in character education (Hudha, et al, 2014a, 2014b, 2014c). Therefore, a sound learning model (valid, practical and effective) is needed so that ethics learning, forming human resources who behave ethically, can be realized. Thus, it is necessary to analyze and modify the learning steps (syntax) of existing learning models in order to develop a new learning syntax. One feasible, practical and effective route is the analysis and modification of the syntax of social learning models and of the behavioral systems family of learning models (Joyce and Weil, 1980; Joyce, et al, 2009), as well as the syntax of the Tri Prakoro learning model (Akbar, 2013). The modified syntax generates the 'OIDDE' learning model, an acronym of orientation, identify, discussion, decision, and engage in behavior.

  10. Modeling study on geological environment at Horonobe URL site

    International Nuclear Information System (INIS)

    Shimo, Michito; Yamamoto, Hajime; Kumamoto, Sou; Fujiwara, Yasushi; Ono, Makoto

    2005-02-01

    The Horonobe underground research project has been operated by the Japan Nuclear Cycle Development Institute to study the geological environment of sedimentary rocks deep underground. The objectives of this study are to develop a geological environment model, which incorporates the current findings and the data obtained through the geological, geophysical, and borehole investigations at the Horonobe site, and to predict the hydrological and geochemical impacts of the URL shaft excavation on the surrounding area. A three-dimensional geological structure model was constructed, integrating a large-scale model (25km x 15km) and a high-resolution site-scale model (4km x 4km) that have been developed by JNC. The constructed model includes surface topography, geologic formations (such as the Yuchi, Koetoi, Wakkanai, and Masuporo Formations), and two major faults (the Ohomagari fault and the N1 fault). In hydrogeological modeling, water-conductive fractures identified in the Wakkanai Formation are modeled stochastically using the EHCM (Equivalent Heterogeneous Continuum Model) approach, to represent hydraulic heterogeneity and anisotropy in the fractured rock mass. The numerical code EQUIV FLO (Shimo et al., 1996), a 3D unsaturated-saturated groundwater simulator capable of EHCM, was used to simulate the regional groundwater flow. The same model and code were used to predict the transient hydrological changes caused by the shaft excavations. Geochemical data from the Horonobe site, such as water chemistries and mineral compositions of rocks, were collected and summarized into digital datasets. The M3 (Multivariate, Mixing and Mass-balance) method developed by SKB (Laaksoharju et al., 1999) was used to identify waters of different origins, and to infer the mixing ratios of these end-members that reproduce each sample's chemistry.
    Thermodynamic codes such as PHREEQC, GWB, and EQ3/6 were used to model the chemical reactions that explain the minerals and aqueous concentrations currently observed at the site.
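
    The mass-balance idea behind inferring end-member mixing ratios, as in the M3 method mentioned above, can be illustrated in miniature. The sketch below is not from the report; it assumes a hypothetical binary mixture and one conservative tracer (chloride), with made-up concentrations:

```python
# Minimal sketch of binary end-member mixing (hypothetical values): with two
# end-member waters and one conservative tracer, the fraction of end-member 1
# in a sample follows from linear mixing:  c_sample = f*c1 + (1-f)*c2.

def mixing_fraction(c_sample, c_end1, c_end2):
    """Fraction of end-member 1 in a binary conservative mixture."""
    return (c_sample - c_end2) / (c_end1 - c_end2)

# Hypothetical chloride concentrations (mg/L): saline vs. meteoric end-members.
f = mixing_fraction(c_sample=5000.0, c_end1=19000.0, c_end2=10.0)
print(round(f, 3))  # fraction of the saline end-member
```

    The full M3 method handles several end-members and many chemical variables at once (via principal components), but each sample's chemistry is still explained as such a linear combination of end-member compositions.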

  11. Application of Model Animals in the Study of Drug Toxicology

    Science.gov (United States)

    Song, Yagang; Miao, Mingsan

    2018-01-01

    Drug safety is a key factor in drug research and development, Drug toxicology test is the main method to evaluate the safety of drugs, The body condition of an animal has important implications for the results of the study, Previous toxicological studies of drugs were carried out in normal animals in the past, There is a great deviation from the clinical practice.The purpose of this study is to investigate the necessity of model animals as a substitute for normal animals for toxicological studies, It is expected to provide exact guidance for future drug safety evaluation.

  12. An updated summary of MATHEW/ADPIC model evaluation studies

    International Nuclear Information System (INIS)

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released as both surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms, have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time, depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs
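
    The "within a factor of N" skill score used in such evaluations is straightforward to compute. A minimal sketch, with hypothetical concentration pairs:

```python
# Fraction of model/measurement pairs whose ratio lies in [1/N, N], the metric
# quoted in dispersion-model evaluations. Values below are hypothetical.

def fraction_within_factor(predicted, observed, factor):
    pairs = [(p, o) for p, o in zip(predicted, observed) if p > 0 and o > 0]
    hits = sum(1 for p, o in pairs if 1.0 / factor <= p / o <= factor)
    return hits / len(pairs)

pred = [1.2, 0.4, 9.0, 2.5, 0.9]   # modeled air concentrations
obs  = [1.0, 1.0, 2.0, 2.0, 1.0]   # measured air concentrations
print(fraction_within_factor(pred, obs, 2))  # prints 0.6
```

    Only positive pairs are kept, since the ratio test is undefined for zero or negative concentrations.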

  13. Deformed shell model studies of spectroscopic properties of Zn and ...

    Indian Academy of Sciences (India)

    2014-04-05

    Apr 5, 2014 ... April 2014, physics, pp. 757–767. Deformed shell model studies of ... experiments without isotopic enrichment, thereby reducing the cost considerably. By taking a large mass of the sample, possible because of its low cost, one can ...

  14. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  15. Capillary microreactors for lactic acid extraction: experimental and modelling study

    NARCIS (Netherlands)

    Susanti, Susanti; Winkelman, Jozef; Schuur, Boelo; Heeres, Hero; Yue, Jun

    2015-01-01

    Lactic acid is an important biobased chemical and, among others, is used for the production of poly-lactic acid. Down-stream processing using state of the art technology is energy intensive and leads to the formation of large amounts of salts. In this presentation, experimental and modeling studies

  16. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  17. Study of primitive universe in the Bianchi IX model

    International Nuclear Information System (INIS)

    Matsas, G.E.A.

    1988-03-01

    The theory of general relativity is used to study the homogeneous cosmological model Bianchi IX, with isometry group SO(3), near the cosmological singularity. The Bogoyavlenskii-Novikov formalism is introduced to explain the unusual behaviour of the Liapunov exponent associated with this chaotic system. (author) [pt

  18. Studying historical occupational careers with multilevel growth models

    NARCIS (Netherlands)

    Schulz, W.; Maas, I.

    2010-01-01

    In this article we propose to study occupational careers with historical data by using multilevel growth models. Historical career data are often characterized by a lack of information on the timing of occupational changes and by different numbers of observations of occupations per individual.

  19. Conflicts Management Model in School: A Mixed Design Study

    Science.gov (United States)

    Dogan, Soner

    2016-01-01

    The object of this study is to evaluate the reasons for conflicts occurring in school according to perceptions and views of teachers and resolution strategies used for conflicts and to build a model based on the results obtained. In the research, explanatory design including quantitative and qualitative methods has been used. The quantitative part…

  20. Modelling studies of horizontal steam generator PGV-1000 with Cathare

    Energy Technology Data Exchange (ETDEWEB)

    Karppinen, I. [VTT Energy, Espoo (Finland)

    1995-12-31

    To perform thermal-hydraulic studies applied to nuclear power plants equipped with VVER, a program of qualification and assessment of the CATHARE computer code is in progress at the Institute of Protection and Nuclear Safety (IPSN). In this paper studies of modelling horizontal steam generator of VVER-1000 with the CATHARE computer code are presented. Steady state results are compared with measured data from the fifth unit of Novovoronezh nuclear power plant. (orig.). 10 refs.

  2. Multisite Case Study of Florida's Millennium High School Reform Model

    Directory of Open Access Journals (Sweden)

    Carol A. Mullen

    2002-10-01

    Full Text Available This study should have immediate utility for the United States and beyond its borders. School-to-work approaches to comprehensive reform are increasingly expected of schools while legislative funding for this purpose gets pulled back. This multisite case study launches the first analysis of the New Millennium High School (NMHS) model in Florida. This improvement program relies upon exemplary leadership for preparing students for postsecondary education.

  3. Phenomenological study of Z in the minimal B−L model at LHC

    Indian Academy of Sciences (India)

    K M Balasubramaniam

    2017-10-05

    Oct 5, 2017 ... Phenomenological study of Z in the minimal B−L model at LHC ... The phenomenological study of the neutral heavy gauge boson (Z B−L) of the ...

  4. Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA

    Science.gov (United States)

    Gupta, Garima

    2011-01-01

    NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they are different. The initial survey included a study of the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capable of risk/sensitivity analysis), and for what types of missions and for what phases of a mission lifecycle each model is capable of estimating cost. The models taken into consideration consisted of both those that were developed by NASA and those that were commercially developed: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step in the project was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed in a visual manner with Venn diagrams. All of the materials produced in the process of this study were then posted on the Ground Segment Team (GST) Wiki.

  5. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  6. Cooling problems of thermal power plants. Physical model studies

    International Nuclear Information System (INIS)

    Neale, L.C.

    1975-01-01

    The Alden Research Laboratories of Worcester Polytechnic Institute has for many years conducted physical model studies, which are normally classified as river or structural hydraulic studies. Since 1952 one aspect of these studies has involved the heated discharge from steam power plants. The early studies on such problems concentrated on improving the thermal efficiency of the system. This was accomplished by minimizing recirculation and by assuring full use of available cold water supplies. With the growing awareness of the impact of thermal power generation on the environment, attention has been redirected to reducing the effect of heated discharges on the biology of the receiving body of water. More specifically, the efforts of designers and operators of power plants are aimed at meeting or complying with standards established by various governmental agencies. Thus the studies involve developing means of minimizing surface temperatures at an outfall or establishing a local area of higher temperature with limits specified in terms of areas or distances. The physical models used for these studies have varied widely in scope, size, and operating features. These models have covered large areas with both distorted geometric scales and uniform dimensions. Instrumentation has also varied from simple mercury thermometers to computer control and processing of hundreds of thermocouple indicators.

  7. Development of hydrological models and surface process modelling: a case study on high mountain slopes

    International Nuclear Information System (INIS)

    Loaiza, Juan Carlos; Pauwels, Valentijn R

    2011-01-01

    Hydrological models are useful because they allow fluxes in hydrological systems to be predicted, which helps in forecasting floods and other violent phenomena associated with water fluxes, especially in highly weathered materials. Combining these models with meteorological predictions, especially rainfall models, makes it possible to model the behavior of water in the soil. In most cases, this type of model is very sensitive to evapotranspiration. In climate studies, surface processes have to be represented adequately. Calibration and validation of these models are necessary to obtain reliable results. This paper is a practical exercise in applying complete hydrological information at a detailed scale to a high mountain catchment, considering the most representative soil uses and types. Soil moisture, infiltration, runoff, and rainfall data are used to calibrate and validate the TOPLATS hydrological model for simulating the behavior of soil moisture. The findings show that it is possible to implement a hydrological model using soil moisture information and a calibration scheme based on the Extended Kalman Filter (EKF).
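
    The assimilation idea behind such a calibration can be sketched in one dimension. The code below is not the TOPLATS/EKF setup of the study; it reduces the soil water balance to a hypothetical linear reservoir (so the EKF collapses to a plain scalar Kalman filter), with made-up coefficients and error variances:

```python
# Scalar Kalman-filter sketch of soil-moisture assimilation (all coefficients
# and variances hypothetical; the forecast model is a linear reservoir).

def kf_step(x, P, rain, obs, a=0.95, b=0.1, Q=0.01, R=0.04):
    # Forecast: linear reservoir, x_f = a*x + b*rain, with model-error variance Q
    x_f = a * x + b * rain
    P_f = a * P * a + Q
    # Update with a soil-moisture observation (identity observation operator)
    K = P_f / (P_f + R)          # Kalman gain
    x_a = x_f + K * (obs - x_f)  # analysis state
    P_a = (1 - K) * P_f          # analysis error variance
    return x_a, P_a

x, P = 0.30, 0.1
for rain, obs in [(0.0, 0.28), (5.0, 0.35), (1.0, 0.33)]:
    x, P = kf_step(x, P, rain, obs)
print(x, P)
```

    In a genuine EKF the forecast operator is the nonlinear land-surface model and `a` is replaced by its Jacobian evaluated at the current state; the forecast/update cycle is otherwise the same.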

  8. Basic models modeling resistance training: an update for basic scientists interested in studying skeletal muscle hypertrophy.

    Science.gov (United States)

    Cholewa, Jason; Guimarães-Ferreira, Lucas; da Silva Teixeira, Tamiris; Naimo, Marshall Alan; Zhi, Xia; de Sá, Rafaele Bis Dal Ponte; Lodetti, Alice; Cardozo, Mayara Quadros; Zanchi, Nelo Eidy

    2014-09-01

    Human muscle hypertrophy brought about by voluntary exercise under laboratory conditions is the most common way to study resistance exercise training, especially because of its reliability, stimulus control, and easy application to resistance training sessions at fitness centers. However, because of the complexity of the blood factors and organs involved, invasive data are difficult to obtain in human exercise training studies owing to the integration of several organs, including adipose tissue, liver, brain, and skeletal muscle. In contrast, studying skeletal muscle remodeling in animal models is easier, as the organs can be readily obtained after euthanasia; however, not all models of resistance training in animals display a robust capacity to hypertrophy the desired muscle. Moreover, some models of resistance training rely on voluntary effort, which complicates the interpretation of results, since voluntary capacity is theoretically impossible to measure in rodents. With this in mind, we review the modalities used to simulate resistance training in animals in order to present to investigators the benefits and risks of the different animal models capable of provoking skeletal muscle hypertrophy. Our second objective is to help investigators analyze and select the experimental resistance training model that best fits the research question and desired endpoints. © 2013 Wiley Periodicals, Inc.

  9. A Study On Traditional And Evolutionary Software Development Models

    Directory of Open Access Journals (Sweden)

    Kamran Rasheed

    2017-07-01

    Full Text Available Today, computing technologies are becoming the pioneers of organizations and helpful for individual functionality; beyond the computing device itself, we need software. A set of instructions, or a computer program, is known as software. Software is developed through traditional models or through newer, evolutionary models. Software development is becoming a key and successful business nowadays; without software, all hardware is useless. The collective steps performed in this development are known as the software development life cycle (SDLC). There are adaptive and predictive models for developing software. Predictive models are the already well-known ones, such as the Waterfall, Spiral, Prototype, and V-shaped models, while adaptive models include Agile and Scrum. The methodologies of both categories have their own procedures and steps. Predictive models are static and adaptive models are dynamic: changes cannot be made in a predictive model, while adaptive models have the capability of changing. The purpose of this study is to become familiar with all of these models and to discuss their uses and development steps. This discussion will be helpful in deciding which model to use in which circumstances and what development steps each model includes.

  10. Preimplantation mouse embryos as a model for uranium toxicology studies

    International Nuclear Information System (INIS)

    Kundt, Miriam S.

    2001-01-01

    Full text: The search for an 'in vitro' toxicology model that can predict toxicological effects 'in vivo' is a permanent challenge. An experimental toxicology model must fulfill certain requirements: it must have predictive character and an appropriate control to facilitate interpretation of the data among the experimental groups, and it must allow control of the independent variables that could interfere with or modify the results being analyzed. Preimplantation embryos possess many advantages in this respect: they are a simple model whose development begins from a single cell. The 'in vitro' model successfully reproduces the 'in vivo' situation, and owing to the similarity among mammalian embryos during this period, the model is practically valid for other species. The embryo is itself a stem cell, toxicological effects are observed early in its clonal development, and the physical-chemical parameters are easily controllable. The purpose of this presentation is to explain the properties of the preimplantation embryo model for uranium toxicology studies and to show our experimental results. 'In vitro' culture of mouse embryos with uranyl nitrate demonstrated that, from 13 μgU/ml, uranium causes developmental delay, a decrease in the number of cells per embryo, and hypoploidy in the embryonic blastomeres. (author)

  11. Bioresorbable polymer coated drug eluting stent: a model study.

    Science.gov (United States)

    Rossi, Filippo; Casalini, Tommaso; Raffa, Edoardo; Masi, Maurizio; Perale, Giuseppe

    2012-07-02

    In drug eluting stent technologies, an increased demand for better control, higher reliability, and enhanced performance of drug delivery systems has emerged in recent years, offering the opportunity to introduce model-based approaches that overcome the remarkable limits of trial-and-error methods. In this context a mathematical model was studied, based on detailed conservation equations and taking into account the main physical-chemical mechanisms involved in polymeric coating degradation, drug release, and restenosis inhibition. It highlights the interdependence between the factors affecting each of these phenomena and, in particular, the influence of stent design parameters on drug antirestenotic efficacy. The proposed model simulates diffusional release for both in vitro and in vivo conditions; results were verified against various literature data, confirming the reliability of the parameter estimation procedure. The hierarchical structure of this model also allows the set of equations describing restenosis evolution to be easily modified, enhancing model reliability and taking advantage of the deep understanding of the physiological mechanisms governing the different stages of smooth muscle cell growth and proliferation. In addition, thanks to its simplicity and to its very low system requirements and central processing unit (CPU) time, the model provides immediate views of system behavior.
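
    The diffusional-release part of such a model can be sketched with a 1-D Fickian diffusion equation for drug leaving a polymer coating, solved by an explicit finite-difference scheme. This is not the model of the abstract; geometry, diffusivity, and discretization below are hypothetical:

```python
# 1-D drug diffusion out of a stent coating: zero-flux boundary at the strut
# side, perfect sink at the lumen/tissue side. All parameters hypothetical.

def released_fraction(D=1e-13, L=5e-6, nx=50, dt=0.01, steps=20000):
    dx = L / nx
    c = [1.0] * nx             # uniform initial drug load (normalized)
    r = D * dt / dx**2         # explicit-scheme stability requires r <= 0.5
    assert r <= 0.5
    for _ in range(steps):
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[0] = new[1]        # impermeable strut side: zero flux
        new[-1] = 0.0          # sink at the release surface
        c = new
    return 1.0 - sum(c) / nx   # fraction of initial load released

print(released_fraction())
```

    A full coating model would add polymer degradation (time-dependent D), drug dissolution kinetics, and coupling to a tissue-uptake/restenosis submodel, which is where the hierarchical structure described above comes in.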

  12. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, the structural safety of a package against the free-drop impact accident should be verified. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although LS-INGRID seems very difficult to use relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID should be used with LS-DYNA. This report presents basic explanations of the structure and commands, basic modelling examples, and advanced modelling with LS-INGRID for the impact analysis of various packages. New users can easily build complex models by studying the basic examples presented in this report, from modelling through loading and constraint conditions.

  13. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Park, Yu Sun

    2007-01-01

    The buoyancy-driven convective flow fields are steady circulatory flows set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications. Application of natural convection can reduce costs and effort remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. The commercial CFD code FLUENT was used, and various turbulence models were applied to the turbulent flow. Results from the CFD models are compared with each other from the viewpoints of grid resolution and flow characteristics. It has been shown that: -) obtaining general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results from grid resolutions finer than a certain y+, where y+ is defined as y+ = ρ*u*y/μ, u being the wall friction velocity, y being the normal distance from the center of the cell to the wall, and ρ and μ being respectively the fluid density and the fluid viscosity; -) the K-ε models show flow characteristics different from those of the K-ω models and the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for the selection of the appropriate turbulence model to apply within the simulation.
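
    The y+ wall coordinate used above, y+ = ρ*u*y/μ, is a one-line computation. A sketch with illustrative (hypothetical) air-flow values:

```python
# Dimensionless wall distance y+ = rho * u_tau * y / mu, used to judge whether
# the first grid cell resolves the viscous sublayer. Numbers are illustrative.

def y_plus(rho, u_tau, y, mu):
    return rho * u_tau * y / mu

# Air near room temperature, friction velocity 0.2 m/s, cell center at 1e-4 m.
print(y_plus(rho=1.2, u_tau=0.2, y=1e-4, mu=1.8e-5))  # ~1.33
```

    Low-Reynolds-number and K-ω treatments generally want the first cell center at y+ of order 1, whereas wall-function approaches want it in the log layer; this is why the abstract calls y+ crucial for turbulence-model selection.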

  14. Application of Learning Curves for Didactic Model Evaluation: Case Studies

    Directory of Open Access Journals (Sweden)

    Felix Mödritscher

    2013-01-01

    Full Text Available The success of (online) courses depends, among other factors, on the underlying didactical models, which have always been evaluated with qualitative and quantitative research methods. Several new evaluation techniques have been developed and established in recent years. One of them is 'learning curves', which aim at measuring error rates of users as they interact with adaptive educational systems, thereby enabling the underlying models to be evaluated and improved. In this paper, we report how we applied this new method to two case studies to show that learning curves are useful for evaluating didactical models and their implementation in educational platforms. Results show that, if the didactical model of an instructional unit is valid, the error rates follow a power-law distribution with each additional attempt. Furthermore, the initial error rate, the slope of the curve, and the goodness of fit of the curve are valid indicators of the difficulty level of a course and the quality of its didactical model. In conclusion, applying learning curves to evaluate didactical models on the basis of usage data is considered valuable for supporting teachers and learning content providers in improving their online courses.
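
    The power-law fit described above can be sketched as an ordinary least-squares regression in log-log space. The per-attempt error rates below are hypothetical (chosen to follow error(n) = 0.4/n exactly):

```python
# Fit error(n) = a * n**(-b) to per-attempt error rates via linear regression
# on (log n, log error). A valid didactical model should give a good fit and
# a clearly negative slope. Error rates below are hypothetical.
import math

def fit_power_law(error_rates):
    xs = [math.log(n + 1) for n in range(len(error_rates))]  # attempts 1..N
    ys = [math.log(e) for e in error_rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope   # (a, b)

a, b = fit_power_law([0.4, 0.2, 0.4 / 3, 0.1, 0.08])
print(a, b)
```

    The goodness of fit of this regression (e.g. R² in log-log space) is the third indicator the paper mentions alongside the initial error rate `a` and the slope `b`.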

  15. Parametric study of the Incompletely Stirred Reactor modeling

    Energy Technology Data Exchange (ETDEWEB)

    Mobini, K. [Department of Mechanical Engineering, Shahid Rajaee University, Lavizan, Tehran (Iran); Bilger, R.W. [School of Aerospace, Mechanical and Mechatronic Engineering, University of Sydney, Sydney (Australia)

    2009-09-15

    The Incompletely Stirred Reactor (ISR) is a generalization of the widely-used Perfectly Stirred Reactor (PSR) model and allows for incomplete mixing within the reactor. Its formulation is based on the Conditional Moment Closure (CMC) method. This model is applicable to nonpremixed combustion with strong recirculation, such as in a gas turbine combustor primary zone. The model uses the simplifying assumption that the conditionally-averaged reactive-scalar concentrations are independent of position in the reactor: this results in ordinary differential equations in mixture fraction space. The simplicity of the model permits the use of very complex chemical mechanisms; the effects of detailed chemistry can be found while still including the effects of micromixing. A parametric study is performed here on an ISR for combustion of methane at overall stoichiometric conditions to investigate the sensitivity of the model to different parameters. The focus is on emissions of nitric oxide and carbon monoxide. It is shown that the most important parameters in the ISR model are the reactor residence time, the chemical mechanism, and the core-averaged Probability Density Function (PDF). Using several different shapes for the core-averaged PDF, it is shown that a bimodal PDF with a low minimum at stoichiometric mixture fraction and a large variance leads to lower nitric oxide formation. The 'rich-plus-lean' mixing, or staged combustion, strategy is thus supported. (author)

  16. General circulation model study of atmospheric carbon monoxide

    International Nuclear Information System (INIS)

    Pinto, J.P.; Yung, Y.L.; Rind, D.; Russell, G.L.; Lerner, J.A.; Hansen, J.E.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low latitude plant source of about 1.3 x 10^15 g yr^-1, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7 x 10^5 cm^-3. Models that calculate globally averaged OH concentrations much lower than our nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.
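
    The link between the OH abundance and CO variability rests on the photochemical lifetime of CO against reaction with OH. A back-of-the-envelope sketch (the rate constant is an approximate literature value, and the calculation is illustrative, not from the paper):

```python
# Photochemical lifetime of CO against loss by OH: tau = 1 / (k * [OH]).
# k is an approximate rate constant for CO + OH; treat it as an assumption.

K_OH = 2.4e-13  # cm^3 molecule^-1 s^-1, approximate CO + OH rate constant

def co_lifetime_years(oh_conc):
    """CO lifetime (years) for a given mean OH concentration (cm^-3)."""
    return 1.0 / (K_OH * oh_conc) / (3600 * 24 * 365)

# For the model's globally averaged OH of 7e5 cm^-3:
print(round(co_lifetime_years(7e5), 2))  # roughly 0.19 yr, i.e. ~2 months
```

    A lifetime of a couple of months is short enough that CO stays spatially variable; much lower OH would lengthen the lifetime and smooth out the variability, which is why low-OH models conflict with the observed CO variability.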

  17. Organizational home care models across Europe: A cross sectional study.

    Science.gov (United States)

    Van Eenoo, Liza; van der Roest, Henriëtte; Onder, Graziano; Finne-Soveri, Harriet; Garms-Homolova, Vjenka; Jonsson, Palmi V; Draisma, Stasja; van Hout, Hein; Declercq, Anja

    2018-01-01

    Decision makers are searching for models to redesign home care and to organize health care in a more sustainable way. The aim of this study is to identify and characterize home care models within and across European countries by means of structural characteristics and care processes at the policy and the organization level. At the policy level, variables that reflected variation in health care policy were included based on a literature review on the home care policy for older persons in six European countries: Belgium, Finland, Germany, Iceland, Italy, and the Netherlands. At the organizational level, data on the structural characteristics and the care processes were collected from 36 home care organizations by means of a survey. Data were collected between 2013 and 2015 during the IBenC project. An observational, cross sectional, quantitative design was used. The analyses consisted of a principal component analysis followed by a hierarchical cluster analysis. Fifteen variables at the organizational level, spread across three components, explained 75.4% of the total variance. The three components made it possible to distribute home care organizations into six care models that differ on the level of patient-centered care delivery, the availability of specialized care professionals, and the level of monitoring care performance. Policy level variables did not contribute to distinguishing between home care models. Six home care models were identified and characterized. These models can be used to describe best practices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Study on competitive interaction models in Cayley tree

    International Nuclear Information System (INIS)

    Moreira, J.G.M.A.

    1987-12-01

    We propose two kinds of models on the Cayley tree to simulate Ising models with axial anisotropy on the cubic lattice. The interaction in the direction of the anisotropy is simulated by the interaction along the branches of the tree. In the first model, the interaction in the planes perpendicular to the anisotropy direction is simulated by interactions between spins in neighbouring branches of the same generation arising from the same site of the previous generation. In the second model, the in-plane interaction is produced by mean-field interactions among all spins at sites of the same generation arising from the same sites of the previous generations. We study these models in the limit of infinite coordination number. First, we analyse the situation with antiferromagnetic interactions along the branches between first neighbours only, and we find the analogue of a metamagnetic Ising model. Next, we introduce competing interactions between first and second neighbours along the branches to simulate the ANNNI model. We obtain a difference equation relating the magnetization of one generation to the magnetizations of the two previous generations, which permits a detailed study of the modulated phase region. We note that the wave number of the modulation, at fixed temperature, changes with the competition parameter to form a devil's staircase with a fractal dimension that increases with temperature. We discuss the existence of strange attractors, related to a possible chaotic phase. Finally, we show the results obtained when we consider interactions along the branches with three neighbours. (author)
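
    The kind of second-order recurrence described above, where each generation's magnetization depends on the two previous ones, can be sketched with a hypothetical mean-field map (not the paper's actual equation): m_{n+1} = tanh(beta*(j1*m_n + j2*m_{n-1})), with competing couplings j1 > 0, j2 < 0.

```python
# Iterate a hypothetical two-generation mean-field recurrence. With j2 = 0
# or cooperative couplings the iteration settles on a uniform fixed point;
# with strong competition (j2 < 0) the trajectory oscillates (modulated phase).
import math

def iterate(beta, j1, j2, m0=0.1, m1=0.2, n=2000):
    prev, cur = m0, m1
    for _ in range(n):
        prev, cur = cur, math.tanh(beta * (j1 * cur + j2 * prev))
    return prev, cur  # last two generations

print(iterate(beta=2.0, j1=1.0, j2=0.5))    # cooperative: converges
print(iterate(beta=2.0, j1=1.0, j2=-1.0))   # competing: does not settle
```

    Scanning the competition parameter j2/j1 and measuring the oscillation wave number is how staircase structures of the modulated phase are mapped out numerically.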

  19. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

    Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM therefore hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of ways to implement models, the models can normally be classified into two categories in terms of their origins: those using physical principles and those using historical observations. I have focused on the latter method (sometimes referred to as the empirical model, based on statistical learning) because of practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I have recognized the need for a novel strategy for grouping correlated variables such that an empirical model can incorporate not only statistical correlation but also some extent of physical knowledge of a system. Detailed examples of the problems are as follows: (1) omission of important signals from a group owing to a lack of observations, (2) signals with time delays, and (3) selection of the optimal kernel bandwidth. In this study an improved statistical learning framework including the proposed strategy, and case studies illustrating the performance of the method, are presented.
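
Problem (3) above, kernel bandwidth selection, can be illustrated with a simple Nadaraya-Watson kernel regression — a standard statistical-learning estimator, not necessarily the one used in the thesis. The data, kernel, and bandwidth values are hypothetical; the point is that both too small and too large a bandwidth degrade the fit:

```python
import numpy as np

def nw_predict(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel;
    h is the bandwidth whose choice the thesis flags as a problem."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 6.0, 200))
y = np.sin(x) + 0.1 * rng.normal(size=x.size)   # noisy sensor-like signal
xq = np.linspace(0.5, 5.5, 50)

# Mean squared error against the noise-free signal for a small,
# a moderate, and a very large (over-smoothing) bandwidth.
errs = {h: np.mean((nw_predict(x, y, xq, h) - np.sin(xq)) ** 2)
        for h in (0.05, 0.3, 2.0)}
```

The over-smoothed estimate (h = 2.0) averages away the structure of the signal and gives a much larger error than the moderate bandwidth.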

  20. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    Science.gov (United States)

    Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.

    2015-01-01

    This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.

  1. Results of the eruptive column model inter-comparison study

    Science.gov (United States)

    Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza

    2016-01-01

    This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.

  2. Ensembles modeling approach to study Climate Change impacts on Wheat

    Science.gov (United States)

    Ahmed, Mukhtar; Claudio, Stöckle O.; Nelson, Roger; Higgins, Stewart

    2017-04-01

    Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for two representative concentration pathways (RCPs; radiative forcing of 4.5 and 8.5 W m-2) in the Pacific Northwest (PNW), USA. The aim was to assess how accurately different process-based crop models can estimate winter wheat growth, development and yield. First, all models were calibrated for high-rainfall, medium-rainfall, low-rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC, with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2, but this effect was more prominent under rainfed conditions than under irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield across the 14 GCMs during the three prediction periods (2030, 2050 and 2070). We conclude that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.
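
The ensemble bookkeeping behind such a study (5 crop models × 14 GCMs × 2 RCPs) can be sketched as follows. The yield values are synthetic stand-ins, and the decomposition of the spread into a crop-model part and a GCM part is an illustrative choice, not the study's exact uncertainty analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical grain yields (t/ha): 5 crop models x 14 GCMs x 2 RCPs
yields_ = rng.normal(loc=4.0, scale=0.5, size=(5, 14, 2))

# Multimodel ensemble mean per RCP, pooling crop models and GCMs
ens_mean = yields_.mean(axis=(0, 1))

# Spread attributable to crop-model choice (std over models, then averaged
# over GCMs) and to GCM choice (std over GCMs, averaged over crop models)
crop_spread = yields_.std(axis=0).mean(axis=0)
gcm_spread = yields_.std(axis=1).mean(axis=0)
```

Reporting the ensemble mean together with both spread terms makes explicit which source of uncertainty (crop model structure vs. climate forcing) dominates for each RCP.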

  3. Phenomenological study of extended seesaw model for light sterile neutrino

    International Nuclear Information System (INIS)

    Nath, Newton; Ghosh, Monojit; Goswami, Srubabati; Gupta, Shivani

    2017-01-01

    We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model, which can give rise to ∼ eV scale sterile neutrinos. In this model, three right-handed neutrinos and one extra singlet S are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, m_ν, depends on the Dirac neutrino mass matrix (M_D), the Majorana neutrino mass matrix (M_R) and the mass matrix (M_S) coupling the right-handed neutrinos and the singlet. The model predicts that one of the light neutrino masses vanishes. We systematically investigate the zero textures in M_D and observe that at most five zeros in M_D can lead to viable zero textures in m_ν. For this study we consider four different forms of M_R (one diagonal and three off-diagonal) and two different forms of M_S containing one zero. Remarkably, we obtain only two allowed forms of m_ν (m_eτ = 0 and m_ττ = 0), both having an inverted hierarchical mass spectrum. We re-analyze the phenomenological implications of these two allowed textures of m_ν in the light of recent neutrino oscillation data. In the context of the MES model, we also express the low-energy mass matrix, the mass of the sterile neutrino and the active-sterile mixing in terms of the parameters of the allowed Yukawa matrices. The MES model leads to some extra correlations which disallow some of the Yukawa textures obtained earlier, even though they give allowed one-zero forms of m_ν. We show that the allowed textures in our study can be realized in a simple way in a model based on the MES mechanism with a discrete Abelian flavor symmetry group Z_8 × Z_2.
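
In the MES framework, the effective mass matrices entering the discussion above are commonly quoted in the seesaw limit as follows (for symmetric M_R; this is a standard textbook sketch of the block-diagonalization, not necessarily the paper's exact conventions):

```latex
\begin{aligned}
m_s    &\simeq -\, M_S M_R^{-1} M_S^{T}, \\[2pt]
m_\nu  &\simeq M_D M_R^{-1} M_S^{T}
          \left( M_S M_R^{-1} M_S^{T} \right)^{-1}
          M_S M_R^{-1} M_D^{T}
          \;-\; M_D M_R^{-1} M_D^{T}, \\[2pt]
R      &\simeq M_D M_R^{-1} M_S^{T}
          \left( M_S M_R^{-1} M_S^{T} \right)^{-1},
\end{aligned}
```

where m_s is the sterile neutrino mass, m_ν the active light neutrino mass matrix, and R the active-sterile mixing. The rank structure of the first term in m_ν is what forces one active neutrino mass to vanish, as stated in the abstract.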

  4. Shear viscosity from Kubo formalism: NJL model study

    International Nuclear Information System (INIS)

    Lang, Robert; Weise, Wolfram

    2014-01-01

    A large-N_c expansion is combined with the Kubo formalism to study the shear viscosity η of strongly interacting matter in the two-flavor NJL model. We discuss analytical and numerical approaches to η and systematically investigate its strong dependence on the spectral width and the momentum-space cutoff. Thermal effects on the constituent quark mass from spontaneous chiral symmetry breaking are included. The ratio η/s and its thermal dependence are derived for different parameterizations of the spectral width and for an explicit one-loop calculation including mesonic modes within the NJL model. (orig.)
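
The Kubo formula underlying such calculations relates η to the retarded correlator of the shear component of the energy-momentum tensor. In standard thermal field theory conventions (a textbook form, not necessarily the paper's normalization):

```latex
\eta = -\lim_{\omega \to 0} \frac{1}{\omega}\,
       \operatorname{Im} G^{R}(\omega, \vec p = 0),
\qquad
G^{R}(\omega, \vec p) = -\,i \int dt\, d^{3}x\;
       e^{\,i(\omega t - \vec p \cdot \vec x)}\,
       \theta(t)\,
       \big\langle \big[\, T^{xy}(t, \vec x),\; T^{xy}(0) \,\big] \big\rangle .
```

In spectral-function approaches, the correlator is evaluated at one loop with dressed quark propagators, which is where the strong dependence on the assumed spectral width enters.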

  5. Molecular level in silico studies for oncology. Direct models review

    Science.gov (United States)

    Psakhie, S. G.; Tsukanov, A. A.

    2017-09-01

    The combination of therapy and diagnostics in one process, "theranostics", is a trend in modern medicine, especially in oncology. Such an approach requires the development and use of multifunctional hybrid nanoparticles with a hierarchical structure. Numerical methods and mathematical models play a significant role in the design of hierarchical nanoparticles and allow looking inside the nanoscale mechanisms of agent-cell interactions. The current position of the in silico approach in biomedicine and oncology is discussed, and a review of molecular-level in silico studies in oncology that use direct models is presented.

  6. Volatile particles formation during PartEmis: a modelling study

    Directory of Open Access Journals (Sweden)

    X. Vancassel

    2004-01-01

    A modelling study of the formation of volatile particles in a combustor exhaust has been carried out in the framework of the PartEmis European project. A kinetic model has been used to investigate the nucleation efficiency of the H2O-H2SO4 binary mixture in the sampling system. A value for the fraction of the fuel sulphur S(IV) converted into S(VI) has been deduced indirectly from comparisons between model results and measurements. In the present study, this fraction ranges between roughly 2.5% and 6%, depending on the combustor settings and on the value assumed for the parameter describing sulphuric acid wall losses. Soot particle hygroscopicity has also been investigated, as particle activation is a key parameter for contrail formation. Growth factors of monodisperse particles exposed to high relative humidity (95%) have been calculated and compared with experimental results. The modelling study confirms that the growth factor increases as the soot particle size decreases.

  7. HOMOLOGY MODELING AND MOLECULAR DYNAMICS STUDY OF MYCOBACTERIUM TUBERCULOSIS UREASE

    Directory of Open Access Journals (Sweden)

    Lisnyak Yu. V.

    2017-10-01

    Introduction. M. tuberculosis urease (MTU) is an attractive target for chemotherapeutic intervention in tuberculosis by designing new safe and efficient enzyme inhibitors. A prerequisite for designing such inhibitors is an understanding of the urease's three-dimensional (3D) structural organization. The 3D structure of M. tuberculosis urease is unknown. When the experimental three-dimensional structure of a protein is not known, homology modeling, the most commonly used computational structure prediction method, is the technique of choice. This paper aimed to build a 3D structure of M. tuberculosis urease by homology modeling and to study its stability by molecular dynamics simulations. Materials and methods. To build the MTU model, five high-resolution X-ray structures of bacterial ureases with three-subunit composition (2KAU, 5G4H, 4UBP, 4CEU, and 4EPB) were selected as templates. For each template five stochastic alignments were created, and for each alignment a three-dimensional model was built. Each model was then energy-minimized, and the models were ranked by quality Z-score. The MTU model with the highest quality estimate among the 25 potential models was selected. To further improve structure quality, the model was refined by a short molecular dynamics simulation that produced 20 snapshots, which were rated according to their energy and quality Z-score. The best-scoring model with minimum energy was chosen as the final homology model of the 3D structure of M. tuberculosis urease. The final MTU model was also validated using the PDBsum and QMEAN servers. These checks confirmed the good quality of the MTU homology model. Results and discussion. The homology model of MTU is a nonamer (a homotrimer of heterotrimers, (αβγ)3) consisting of 2349 residues. In the MTU heterotrimer, subunits α, β, and γ tightly interact with each other over a surface of approximately 3000 Å². Subunit α contains the enzyme active site with two Ni atoms coordinated by amino acid residues His347, His

  8. Modeling studies of the Indo-Pacific warm pool

    International Nuclear Information System (INIS)

    Barnett, T.P.; Schneider N.; Tyree, M.; Ritchie, J.; Ramanathan, V.; Sherwood, S.; Zhang, G.; Flatau, M.

    1994-01-01

    A wide variety of modeling studies are being conducted, aimed at understanding the interactions of clouds, radiation, and the ocean in the region of the Indo-Pacific warm pool, the flywheel of the global climate system. These studies are designed to understand the important physical processes operating in the ocean and atmosphere in the region. A stand-alone atmospheric GCM (AGCM), forced by observed sea surface temperature, has been used for several purposes. One study with the AGCM shows the high sensitivity of the tropical circulation to variations in mid- to high-level clouds. A stand-alone ocean general circulation model (OGCM) is being used to study the relative role of shortwave radiation changes in the buoyancy flux forcing of the upper ocean. Complete studies of the warm pool can only be conducted with a fully coupled ocean/atmosphere model. The latest version of the Hamburg CGCM produces realistic simulations of the ocean/atmosphere system in the Indo-Pacific without the use of a flux correction scheme.

  9. Credible baseline analysis for multi-model public policy studies

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, S.C.; Gass, S.I.

    1981-01-01

    The nature of public decision-making and resource allocation is such that many complex interactions can best be examined and understood by quantitative analysis. Most organizations do not possess the totality of models and needed analytical skills to perform detailed and systematic quantitative analysis. Hence, the need for coordinated, multi-organization studies that support public decision-making has grown in recent years. This trend is expected not only to continue, but to increase. This paper describes the authors' views on the process of multi-model analysis based on their participation in an analytical exercise, the ORNL/MITRE Study. One of the authors was the exercise coordinator. During the study, the authors were concerned with the issue of measuring and conveying credibility of the analysis. This work led them to identify several key determinants, described in this paper, that could be used to develop a rating of credibility.

  10. Escompte Pre-modelling Studies In The Marseille Area.

    Science.gov (United States)

    Meleux, F.; Rosset, R.

    In June and July 2001, the ESCOMPTE campaign took place in the Marseille area in the south of France, with the aim of generating a detailed 3-D database for the study of the dynamics and chemistry of high-pollution events, so as to validate and improve air quality models. Prior to this field experiment, a pre-modelling exercise was performed to document the dynamic interactions between sea and land breezes and orographic flows over this complex topographical area. This study was carried out using a nesting procedure at local and regional scales with the MESO-NH model (jointly developed by Laboratoire d'Aérologie and Meteofrance at Toulouse). Tracers emitted at various locations in the Marseille and Etang de Berre areas were first followed; in a second step, full-chemistry simulations were run for two selected periods in June and July 1999, quite similar to the meteorological situations met during IOP2a and IOP4 of the 2001 campaign. The performance of the model was assessed by comparing measured and simulated meteorological parameters and ozone. The general ability of the model to correctly simulate these two situations allows ozone plume developments to be studied in more detail. In particular, these studies bear upon the relative roles of O3 transport versus O3 chemical production as a function of distance within the plume to anthropogenic and biogenic emissions, together with daily ozone variations and peak values observed at rural sites.

  11. How do humans inspect BPMN models: an exploratory study

    DEFF Research Database (Denmark)

    Haisjackl, Cornelia; Soffer, Pnina; Lim, Shao Yi

    2016-01-01

    Even though considerable progress regarding the technical perspective on modeling and supporting business processes has been achieved, it appears that the human perspective is still often left aside. In particular, we do not have an in-depth understanding of how process models are inspected by humans, what strategies are taken, what challenges arise, and what cognitive processes are involved. This paper contributes toward such an understanding and reports an exploratory study investigating how humans identify and classify quality issues in BPMN process models. Providing preliminary answers to initial research questions, we also indicate other research questions that can be investigated using this approach. Our qualitative analysis shows that humans adopt different strategies for identifying quality issues. In addition, we observed several challenges that appear when humans inspect process models.

  12. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  13. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei

    2016-01-01

    In the conditions of the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of their internal business processes and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement and to prioritize efforts to optimize overall company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less well known. This article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  14. Experimental Study and Dynamic Modeling of Metal Rubber Isolating Bearing

    International Nuclear Information System (INIS)

    Zhang, Ke; Zhou, Yanguo; Jiang, Jian

    2015-01-01

    In this paper, the dynamic shear mechanical properties of a new metal rubber (MR) isolating bearing are tested and studied. A mixed damping model is proposed for theoretical modeling of the MR isolating bearing; with it, the shear stiffness and damping characteristics of the bearing can be analyzed separately and easily discussed, and the mixed damping model proves to be a rather effective approach. The test results indicate that loading frequency has little impact on the shear properties of the metal rubber isolating bearing, while the total energy consumption of the bearing increases with loading amplitude. As the loading amplitude increases, the stiffness of the isolating bearing decreases, showing its "soft property", and the damping force gradually changes to become close to dry friction. The "soft property" and dry-friction energy-consumption features of the metal rubber isolating bearing are very useful in practical engineering applications. (paper)

  15. Light aircraft sound transmission studies - Noise reduction model

    Science.gov (United States)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
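
The room-equation idea described above can be sketched with the textbook reverberant-field form. The paper's model uses measured transmitted sound intensity; the formulation and numbers below are illustrative assumptions showing how added cabin absorption lowers interior level:

```python
import math

def interior_spl(Lw_transmitted, surface_area, alpha_bar):
    """Reverberant-field room equation (textbook sketch, not the paper's
    exact formulation): L_p = L_W + 10*log10(4/Rc), with room constant
    Rc = S*alpha/(1 - alpha), S in m^2, alpha the mean absorption."""
    Rc = surface_area * alpha_bar / (1.0 - alpha_bar)
    return Lw_transmitted + 10.0 * math.log10(4.0 / Rc)

# Hypothetical cabin: 12 m^2 of interior surface, 90 dB transmitted power level
loud  = interior_spl(90.0, 12.0, 0.1)   # low-absorption interior
quiet = interior_spl(90.0, 12.0, 0.2)   # absorption doubled
```

Doubling the mean absorption coefficient from 0.1 to 0.2 lowers the predicted cabin level by about 3.5 dB, consistent with the abstract's point that either higher transmission loss or higher interior absorption reduces cabin noise.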

  16. Modeling study of the Pauzhetsky geothermal field, Kamchatka, Russia

    Energy Technology Data Exchange (ETDEWEB)

    Kiryukhin, A.V. [Institute of Volcanology, Kamchatsky (Russian Federation); Yampolsky, V.A. [Kamchatskburgeotermia State Enterprise, Elizovo (Russian Federation)

    2004-08-01

    Exploitation of the Pauzhetsky geothermal field started in 1966 with a 5 MWe power plant. A hydrogeological model of the Pauzhetsky field has been developed based on an integrated analysis of data on lithological units, temperature, pressure, production zones and natural discharge distributions. A one-layer 'well by well' model with specified vertical heat and mass exchange conditions has been used to represent the main features of the production reservoir. Numerical model development was based on the TOUGH2 code [Pruess, 1991. TOUGH2 - A General Purpose Numerical Simulator for Multiphase Fluid and Heat Flow, Lawrence Berkeley National Laboratory Report, Berkeley, CA; Pruess et al., 1999. TOUGH2 User's Guide, Version 2.0, Report LBNL-43134, Lawrence Berkeley National Laboratory, Berkeley, CA] coupled with tables generated by the HOLA wellbore simulator [Aunzo et al., 1991. Wellbore Models GWELL, GWNACL, and HOLA, Users Guide, Draft, 81 pp.]. The Lahey Fortran-90 compiler and computer graphics packages (Didger-3, Surfer-8, Grapher-3) were also used in model development. The modeling study of natural-state conditions targeted a match to the temperature distribution in order to estimate the parameters of the natural high-temperature upflow: the mass flow rate was estimated at 220 kg/s with an enthalpy of 830-920 kJ/kg. The modeling study for the 1964-2000 exploitation period of the Pauzhetsky geothermal field targeted a match to the transient reservoir pressure and flowing enthalpies of the production wells. This study confirmed that 'double porosity' in the reservoir, with a 10-20% active volume of 'fractures', and a thermo-mechanical response to reinjection (including changes in porosity due to compressibility and expansivity), were the key parameters of the model. The calibrated model of the Pauzhetsky geothermal field was used to forecast reservoir behavior under different exploitation scenarios for

  17. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

    A simplified microbial model ecosystem and its computer simulation model are introduced as eco-toxicity tests for the assessment of environmental responses to environmental impacts. To take into account effects on the interactions between species and the environment, one option is to select a keystone species on the basis of ecological knowledge and to use it in a single-species toxicity test. Another proposed option is to frame the eco-toxicity tests as an experimental micro-ecosystem study together with a theoretical model-ecosystem analysis. With these tests, stressors that are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of ecosystems should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  18. An empirical and model study on automobile market in Taiwan

    Science.gov (United States)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rates (market shares) of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain a larger possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process, which agrees qualitatively and quantitatively with our empirical results.

  19. A new in situ model to study erosive enamel wear, a clinical pilot study.

    NARCIS (Netherlands)

    Ruben, J.L.; Truin, G.J.; Bronkhorst, E.M.; Huysmans, M.C.D.N.J.M.

    2017-01-01

    OBJECTIVES: To develop an in situ model for erosive wear research which allows for more clinically relevant exposure parameters than other in situ models and to show tooth site-specific erosive wear effect of an acid challenge of orange juice on enamel. METHODS: This pilot study included 6

  20. Study of the nonlinear imperfect software debugging model

    International Nuclear Information System (INIS)

    Wang, Jinyong; Wu, Zhibo

    2016-01-01

    In recent years there has been a dramatic proliferation of research on imperfect software debugging phenomena. Software debugging is a complex process and is affected by a variety of factors, including the environment, resources, personnel skills, and personnel psychology. The simple assumption that debugging is perfect is therefore inconsistent with the actual software debugging process, in which a new fault can be introduced while removing a fault. Furthermore, the fault introduction process is nonlinear, and the cumulative number of nonlinearly introduced faults increases over time. This paper therefore proposes a nonlinear NHPP imperfect software debugging model that accounts for the nonlinearity of fault introduction. The fitting and predictive power of the proposed NHPP-based model are validated through related experiments. Experimental results show that this model displays better fitting and predictive performance than traditional NHPP-based perfect and imperfect software debugging models. S-confidence bounds are set to analyze the performance of the proposed model. This study also examines and discusses optimal software release-time policy comprehensively. In addition, this research on the nonlinear process of fault introduction is significant given the recent surge of studies on software-intensive products, such as cloud computing and big data. - Highlights: • Fault introduction is a nonlinear changing process during the debugging phase. • The assumption that the process of fault introduction is nonlinear is credible. • Our proposed model can better fit and accurately predict software failure behavior. • Research on fault introduction is significant for software-intensive products.
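
The idea of a nonlinearly growing fault content can be sketched with a toy NHPP mean-value function m(t): faults are detected at a rate proportional to the remaining fault content, while the total fault content a(t) itself grows nonlinearly through imperfect debugging. The functional form of a(t) below is an assumption for illustration, not the paper's model:

```python
import numpy as np

def mean_value(a0=100.0, b=0.1, alpha=0.02, t_max=100.0, dt=0.01):
    """Euler integration of an illustrative imperfect-debugging NHPP:
        dm/dt = b * (a(t) - m(t)),
    with nonlinearly introduced fault content a(t) = a0*(1 + alpha*t)**0.5
    (an assumed form; a constant a(t) recovers the perfect-debugging case)."""
    t = np.arange(0.0, t_max, dt)
    m = np.zeros_like(t)
    for i in range(1, t.size):
        a_t = a0 * (1.0 + alpha * t[i - 1]) ** 0.5
        m[i] = m[i - 1] + dt * b * (a_t - m[i - 1])
    return t, m

t, m = mean_value()
```

Because a(t) keeps growing, m(t) never saturates at the initial fault count a0, which is the qualitative signature distinguishing imperfect from perfect debugging models.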

  1. Modeling and numerical study of two phase flow

    International Nuclear Information System (INIS)

    Champmartin, A.

    2011-01-01

This thesis describes the modeling and simulation of two-phase systems composed of droplets moving in a gas. The two phases interact with each other, and the type of model to consider depends directly on the type of simulations targeted. In the first part, the two phases are treated as fluids and described with a mixture model including a drift relation (so that the relative velocity between the two phases can be followed and two velocities taken into account); the two-phase flow is assumed to be in temperature and pressure equilibrium. This part of the manuscript covers the derivation of the equations, the construction of a numerical scheme associated with this set of equations, a study of this scheme, and simulations. A mathematical study of this model (hyperbolicity in a simplified framework, linear stability analysis of the system around a steady state) was conducted under the assumption of a barotropic gas. The second part is devoted to modeling the effect of inelastic collisions on the particles when the simulation time is shorter and the droplets can no longer be treated as a fluid. We introduce a model of inelastic collisions for droplets in a spray, leading to a specific Boltzmann kernel. We then build BGK-type caricatures of this kernel, which mimic the behavior of the first moments of the solution of the Boltzmann equation (mass, momentum, directional temperatures, and variance of the internal energy). The quality of these caricatures is tested numerically at the end. (author)

  2. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small ones, at once and compare them with each other. The standard model for source spectra is the omega-square model, which has a flat spectrum below a corner frequency and a falloff inversely proportional to the square of frequency above it. The corner frequency has often been converted to a stress drop under the assumption of a circular crack model. However, recent studies have claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the one studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectrum we now focus on is the falloff rate at high frequencies, which affects seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We record seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still at the stage of discovering and confirming deviations from the standard omega-square model, updating the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
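For concreteness, the omega-square spectrum and a possible double-corner generalization can be written down directly (the double-corner form below is a generic illustration, not necessarily the parameterization used by Denolle and Shearer [2016] or Uchide and Imanishi [2016]):

```python
import numpy as np

def omega_square(f, omega0, fc, n=2.0):
    """Displacement source spectrum of the omega-square model: flat at
    the long-period level omega0 below the corner frequency fc, decaying
    as f**-n above it (n = 2 for the standard model; leaving n free
    allows exploring steeper or gentler high-frequency falloffs)."""
    f = np.asarray(f, dtype=float)
    return omega0 / (1.0 + (f / fc) ** n)

def double_corner(f, omega0, fc1, fc2):
    """Illustrative double-corner spectrum: flat below fc1, decaying as
    f**-1 between fc1 and fc2, and as f**-2 above fc2 (fc1 < fc2)."""
    f = np.asarray(f, dtype=float)
    return omega0 / (np.sqrt(1.0 + (f / fc1) ** 2) *
                     np.sqrt(1.0 + (f / fc2) ** 2))
```

Fitting log-spectra of small earthquakes with both forms is one way to test which model the data prefer.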

  3. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

/24 degree, if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher spatial resolution for different time steps. To this end, four different VIC models were constructed for the Thur basin in north-eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model and three spatially distributed models with resolutions of 1x1 km, 5x5 km, and 10x10 km, respectively. All models are run at an hourly time step and aggregated and calibrated at different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube sampling technique (Vořechovský, 2014). For each time and space scale, several diagnostics, such as Nash-Sutcliffe efficiency, Kling-Gupta efficiency, and the quantiles of the discharge, are calculated in order to compare model performance across time and space scales for extreme events like floods and droughts. In addition, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to find optimal time and space scale combinations.
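The efficiency measures named above have standard definitions; a minimal sketch (using the common form of the Kling-Gupta efficiency, which may differ in detail from the variant used in the study):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation r, variability
    ratio alpha, and bias ratio beta; 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
```

Computing these on discharge aggregated to hourly, daily, monthly, and yearly series is what enables the performance comparison across time scales.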

  4. Mechanism study of pulsus paradoxus using mechanical models.

    Directory of Open Access Journals (Sweden)

    Chang-yang Xing

Full Text Available Pulsus paradoxus is an exaggeration of the normal inspiratory decrease in systolic blood pressure. Despite a century of attempts to explain this sign, consensus is still lacking. To resolve the controversy and reveal the exact mechanism, we reexamined the characteristic anatomic arrangement of the circulatory system in the chest and designed two mechanical models based on the relevant hydromechanical principles. Model 1 was designed to observe the primary influence of respiratory intrathoracic pressure change (RIPC) on the systemic and pulmonary venous return systems (SVR and PVR, respectively). Model 2, an equivalent mechanical model of septal swing, was designed to study the secondary influence of RIPC on the motion of the interventricular septum (IVS), which might be the direct cause of pulsus paradoxus. Model 1 demonstrated that the simulated RIPC had different influences on the simulated SVR and PVR: it increased the volume of the simulated right ventricle (SRV) when the internal pressure was kept constant (8.16 cmH2O), while it had the opposite effect on PVR. Model 2 revealed the three major factors determining the respiratory displacement of the IVS in normal and different pathophysiological conditions: the magnitude of RIPC, the pressure difference between the two ventricles, and the intrapericardial pressure. Our models demonstrate that the different anatomical arrangement of the two venous return systems leads to a different effect of RIPC on the right and left ventricles, and thus to a pressure gradient across the IVS that tends to shift it left- and rightwards. When the leftward displacement of the IVS reaches a considerable amplitude in some pathologic conditions, such as cardiac tamponade, pulsus paradoxus occurs.

  5. Flow regulation in coronary vascular tree: a model study.

    Directory of Open Access Journals (Sweden)

    Xinzhou Xie

Full Text Available Coronary blood flow can always be matched to the metabolic demand of the myocardium owing to the regulation of vasoactive segments. Myocardial compressive forces play an important role in determining coronary blood flow, but their impact on flow regulation is still unknown. The purpose of this study was to develop a coronary-specific flow regulation model, which can integrate myocardial compressive forces and other identified regulation factors, to further investigate coronary blood flow regulation behavior. A theoretical coronary flow regulation model including the myogenic, shear-dependent, and metabolic responses was developed. Myocardial compressive forces were included in the modified wall tension model. The shear-dependent response was estimated using experimental data from the coronary circulation. Capillary density and basal oxygen consumption were specified to correspond to those in the coronary circulation. Zero-flow pressure was also modeled using a simplified capillary model. Pressure-flow relations predicted by the proposed model are consistent with previous experimental data. The predicted diameter changes in small arteries are in good agreement with experimental observations under adenosine infusion and inhibition of NO synthesis. Results demonstrate that the myocardial compressive forces acting on the vessel wall extend the autoregulatory range by decreasing the myogenic tone at a given perfusion pressure. Myocardial compressive forces thus have a great impact on coronary autoregulation. The proposed model was shown to be consistent with experimental observations and can be employed to investigate coronary blood flow regulation in physiological and pathophysiological conditions.

  6. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

This document presents the differences between two HLW glass formulation models (GFMs): the 1996 GFM and the 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass-forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.

  7. Model Studies of the Dynamics of Bacterial Flagellar Motors

    Energy Technology Data Exchange (ETDEWEB)

    Bai, F; Lo, C; Berry, R; Xing, J

    2009-03-19

The bacterial flagellar motor is a rotary molecular machine that rotates the helical filaments which propel swimming bacteria. Extensive experimental and theoretical studies exist on the structure, assembly, energy input, power generation, and switching mechanism of the motor. In our previous paper, we explained the general physics underlying the observed torque-speed curves with a simple two-state Fokker-Planck model. Here we analyze this model further. In this paper we show that (1) the model predicts that the two components of the ion motive force can affect the motor dynamics differently, in agreement with the latest experiment by Lo et al.; (2) with explicit consideration of the stator spring, the model also explains the lack of dependence of the zero-load speed on stator number in the proton motor, recently observed by Yuan and Berg; and (3) the model reproduces the stepping behavior of the motor even in the presence of the stator springs and predicts the dwell-time distribution. The predicted stepping behavior of motors with two stators is discussed, and we suggest future experimental verification.

  8. Animal models as tools to study the pathophysiology of depression

    Directory of Open Access Journals (Sweden)

    Helena M. Abelaira

    2013-01-01

    Full Text Available The incidence of depressive illness is high worldwide, and the inadequacy of currently available drug treatments contributes to the significant health burden associated with depression. A basic understanding of the underlying disease processes in depression is lacking; therefore, recreating the disease in animal models is not possible. Popular current models of depression creatively merge ethologically valid behavioral assays with the latest technological advances in molecular biology. Within this context, this study aims to evaluate animal models of depression and determine which has the best face, construct, and predictive validity. These models differ in the degree to which they produce features that resemble a depressive-like state, and models that include stress exposure are widely used. Paradigms that employ acute or sub-chronic stress exposure include learned helplessness, the forced swimming test, the tail suspension test, maternal deprivation, chronic mild stress, and sleep deprivation, to name but a few, all of which employ relatively short-term exposure to inescapable or uncontrollable stress and can reliably detect antidepressant drug response.

  9. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

Full Text Available During the recent financial crisis, new aspects emerged in the insurance domain that must be taken into account in risk management and surveillance activity. Insurance companies may develop internal models to determine the minimum capital requirement imposed by the new regulations to be adopted on 1 January 2016. The purpose of this research paper is to present and compare the main solvency regulation systems used worldwide, with the accent on their common characteristics and current tendencies. Thereby, we aim to offer a better understanding of the similarities and differences between existing solvency regimes, in order to develop the best solvency regime for Romania within the Solvency II project. The study shows that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and also points out that even though the key principles supporting the new solvency regimes are convergent, there are many approaches to applying these principles. In this context, the questions we try to answer are "how could global solvency models be useful to the financial surveillance authority of Romania for the implementation of the general model and for the development of internal solvency models according to the requirements of Solvency II" and "what would be the requirements for the implementation of this type of approach?". This makes the analysis of solvency models an interesting exercise.

  10. A discrete model to study reaction-diffusion-mechanics systems.

    Science.gov (United States)

    Weise, Louis D; Nash, Martyn P; Panfilov, Alexander V

    2011-01-01

This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework couples a FitzHugh-Nagumo type RD model to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties follow a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite-difference discretization of the RD equations with a Verlet integration scheme for the equations of motion of the mass-lattice system. Using this framework, results on self-organized pacemaking activity previously obtained with a continuous RD-mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependence on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and to curvature effects.
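The RD half of such a framework is standard; as a rough sketch, one explicit finite-difference step of a FitzHugh-Nagumo system on an undeformed 1-D cable might look as follows (parameter values are illustrative, and the coupling to the deforming mass lattice via Verlet integration, the heart of the dRDM model, is omitted):

```python
import numpy as np

def fhn_step(u, v, dt=0.05, dx=1.0, D=1.0, a=0.1, eps=0.01, k=8.0):
    """One explicit finite-difference step of a FitzHugh-Nagumo
    reaction-diffusion system on a 1-D cable with no-flux boundaries.
    u is the (diffusing) excitation variable, v the recovery variable."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2       # no-flux boundary, left end
    lap[-1] = (u[-2] - u[-1]) / dx**2    # no-flux boundary, right end
    du = k * u * (u - a) * (1.0 - u) - v + D * lap
    dv = eps * (u - v)
    return u + dt * du, v + dt * dv
```

In the full dRDM scheme each lattice spacing would itself deform, feeding back into the diffusion term.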

  11. Model Studies of the Dynamics of Bacterial Flagellar Motors

    Science.gov (United States)

    Bai, Fan; Lo, Chien-Jung; Berry, Richard M.; Xing, Jianhua

    2009-01-01

The bacterial flagellar motor is a rotary molecular machine that rotates the helical filaments that propel swimming bacteria. Extensive experimental and theoretical studies exist on the structure, assembly, energy input, power generation, and switching mechanism of the motor. In a previous article, we explained the general physics underlying the observed torque-speed curves with a simple two-state Fokker-Planck model. Here, we further analyze that model, showing that 1), the model predicts that the two components of the ion motive force can affect the motor dynamics differently, in agreement with latest experiments; 2), with explicit consideration of the stator spring, the model also explains the lack of dependence of the zero-load speed on stator number in the proton motor, as recently observed; and 3), the model reproduces the stepping behavior of the motor even with the existence of the stator springs and predicts the dwell-time distribution. The predicted stepping behavior of motors with two stators is discussed, and we suggest future experimental procedures for verification. PMID:19383460

  12. A discrete model to study reaction-diffusion-mechanics systems.

    Directory of Open Access Journals (Sweden)

    Louis D Weise

Full Text Available This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework couples a FitzHugh-Nagumo type RD model to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties follow a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite-difference discretization of the RD equations with a Verlet integration scheme for the equations of motion of the mass-lattice system. Using this framework, results on self-organized pacemaking activity previously obtained with a continuous RD-mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependence on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and to curvature effects.

  13. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

Real-time monitoring and crisis management of oil slicks or drifting floating structures require a good knowledge of the local winds, waves, and currents used as input data for operational drift models. Fortunately, thanks to their worldwide and all-weather coverage, satellite measurements have recently enabled new methods for remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that combines satellite measurements with metocean models in order to provide marine operators' drift models with reliable wind, wave, and current analyses and short-term forecasts. In particular, a model now allows the calculation of the drift current under the joint action of wind and sea state, radically improving on the classical laws. This global procedure either uses satellite wind and wave measurements directly (if available in the study area) or uses them indirectly to calibrate metocean model results, which are brought to the location of the oil slick or floating structure. The operational use of this procedure is reported here with an example of floating structure drift offshore of the Brittany coast.

  14. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

Gustafson, William I Jr; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization
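As a toy illustration of mass-flux-driven tracer transport (not the ECPP code itself), consider a non-entraining updraft that lifts surface-layer air to the cloud-top layer with mass flux M, balanced by environmental subsidence; the names and simplifications here are ours:

```python
import numpy as np

def mass_flux_step(c, M, rho, dz, dt):
    """One explicit step of single-column tracer transport by a
    non-entraining convective updraft (mass flux M, kg m-2 s-1) that
    detrains surface-layer air at the top layer, with compensating
    subsidence moving environmental air down one layer. Level 0 is the
    surface; layers have uniform mass rho*dz, so the column-integrated
    tracer mass is conserved. A deliberately minimal, hypothetical sketch."""
    c = np.asarray(c, dtype=float)
    layer_mass = rho * dz                 # kg m-2 per layer
    tend = np.zeros_like(c)               # tracer-mass tendency per layer
    tend[:-1] += M * (c[1:] - c[:-1])     # subsidence: gain from above, lose downward
    tend[-1] += M * (c[0] - c[-1])        # updraft detrains surface air at the top
    return c + dt * tend / layer_mass
```

Driving such a step with the CRM's diagnosed cloud mass fluxes, rather than a conventional convective parameterization, is the essence of the approach tested in the paper.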

  15. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

Gustafson, William I Jr; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  16. Process modeling for the Integrated Thermal Treatment System (ITTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Liebelt, K.H.; Brown, B.W.; Quapp, W.J.

    1995-09-01

This report describes the process modeling done in support of the integrated thermal treatment system (ITTS) study, Phases 1 and 2. ITTS consists of an integrated systems engineering approach for uniform comparison of the widely varying thermal treatment technologies proposed for treating the contact-handled mixed low-level wastes (MLLW) currently stored in the U.S. Department of Energy complex. In the overall study, 19 systems were evaluated. Preconceptual designs were developed that included all of the subsystems necessary for a complete installation, from waste receiving through primary and secondary stabilization and disposal of the processed wastes. Each system included the auxiliary treatment subsystems needed so that all of the waste categories in the complex were fully processed. The objective of the modeling task was to perform mass and energy balances for the major material components in each system. Modeling of trace materials, such as pollutants and radioactive isotopes, was beyond the present scope. The modeling of the main and secondary thermal treatment, air pollution control, and metal melting subsystems was done using the ASPEN PLUS process simulation code, Version 9.1-3. These results were combined with calculations for the remainder of the subsystems to obtain the final results, which included offgas volumes and mass and volume waste reduction ratios.

  17. Dynamic modelling and experimental study of cantilever beam with clearance

    International Nuclear Information System (INIS)

    Li, B; Jin, W; Han, L; He, Z

    2012-01-01

Clearances occur in almost all mechanical systems, a typical example being the clearance between the slide plate of a gun barrel and its guide. Studying clearances in mechanisms is therefore important for increasing their working performance and lifetime. In this paper, rigid dynamic modelling of a cantilever with clearance was carried out. In the rigid dynamic model, the clearance is represented by an equivalent spring-dashpot model, and the impact between the beam and the boundary face is also taken into consideration. The dynamic simulation was carried out in the ADAMS software according to the model above, simulating the movement of the cantilever with clearance under external excitation. The research found that the larger the clearance, the larger the impact force. To study how the stiffness of the cantilever's supporting part influences the natural frequency of the system, an Euler beam restricted by a draught spring and a torsion spring at its end was considered. Through numerical calculation, the relationship between natural frequency and stiffness was found; when the stiffness approaches its limit value, the corresponding boundary condition is illustrated. An ADAMS experiment was carried out to check the theory and the simulation.
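The spring-dashpot contact law mentioned above has a simple generic form; a hedged sketch with made-up stiffness and damping values (the paper's parameters are not given in the abstract):

```python
def contact_force(delta, delta_dot, k=1.0e5, c=50.0):
    """Kelvin-Voigt (spring-dashpot) contact force across a clearance:
    active only while the beam penetrates the boundary (delta > 0).
    delta is the penetration depth, delta_dot its rate; k and c are
    illustrative stiffness (N/m) and damping (N s/m) values. A Hertzian
    variant would use k * delta**1.5 for the spring term."""
    if delta <= 0.0:
        return 0.0                 # no contact, no force
    f = k * delta + c * delta_dot
    return max(f, 0.0)             # the contact must not pull (no adhesion)
```

Larger clearance allows a larger approach velocity delta_dot at impact, which is why the impact force grows with clearance.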

  18. Holes in the t-Jz model: A diagrammatic study

    International Nuclear Information System (INIS)

    Chernyshev, A.L.; Leung, P.W.

    1999-01-01

The t-Jz model is the strongly anisotropic limit of the t-J model, which captures some general properties of doped antiferromagnets (AFs). The absence of spin fluctuations simplifies the analytical treatment of hole motion in an AF background and allows us to calculate single- and two-hole spectra with high accuracy using a regular diagram technique combined with a real-space approach. At the same time, numerical studies of this model via exact diagonalization on small clusters show negligible finite-size effects for a number of quantities, allowing a direct comparison between analytical and numerical results. Both approaches demonstrate that the holes have a tendency to pair in the p- and d-wave channels at realistic values of t/J. The interactions leading to pairing and the effects selecting p and d waves are thoroughly investigated. The role of transverse spin fluctuations is considered using perturbation theory. Based on the results of the present study, we discuss the pairing problem in the realistic t-J-like model. Possible implications for preformed pair formation and phase separation are drawn. copyright 1999 The American Physical Society

  19. Process modeling for the Integrated Thermal Treatment System (ITTS) study

    International Nuclear Information System (INIS)

    Liebelt, K.H.; Brown, B.W.; Quapp, W.J.

    1995-09-01

This report describes the process modeling done in support of the integrated thermal treatment system (ITTS) study, Phases 1 and 2. ITTS consists of an integrated systems engineering approach for uniform comparison of the widely varying thermal treatment technologies proposed for treating the contact-handled mixed low-level wastes (MLLW) currently stored in the U.S. Department of Energy complex. In the overall study, 19 systems were evaluated. Preconceptual designs were developed that included all of the subsystems necessary for a complete installation, from waste receiving through primary and secondary stabilization and disposal of the processed wastes. Each system included the auxiliary treatment subsystems needed so that all of the waste categories in the complex were fully processed. The objective of the modeling task was to perform mass and energy balances for the major material components in each system. Modeling of trace materials, such as pollutants and radioactive isotopes, was beyond the present scope. The modeling of the main and secondary thermal treatment, air pollution control, and metal melting subsystems was done using the ASPEN PLUS process simulation code, Version 9.1-3. These results were combined with calculations for the remainder of the subsystems to obtain the final results, which included offgas volumes and mass and volume waste reduction ratios.

  20. Dynamic modelling and experimental study of cantilever beam with clearance

    Science.gov (United States)

    Li, B.; Jin, W.; Han, L.; He, Z.

    2012-05-01

Clearances occur in almost all mechanical systems, a typical example being the clearance between the slide plate of a gun barrel and its guide. Studying clearances in mechanisms is therefore important for increasing their working performance and lifetime. In this paper, rigid dynamic modelling of a cantilever with clearance was carried out. In the rigid dynamic model, the clearance is represented by an equivalent spring-dashpot model, and the impact between the beam and the boundary face is also taken into consideration. The dynamic simulation was carried out in the ADAMS software according to the model above, simulating the movement of the cantilever with clearance under external excitation. The research found that the larger the clearance, the larger the impact force. To study how the stiffness of the cantilever's supporting part influences the natural frequency of the system, an Euler beam restricted by a draught spring and a torsion spring at its end was considered. Through numerical calculation, the relationship between natural frequency and stiffness was found; when the stiffness approaches its limit value, the corresponding boundary condition is illustrated. An ADAMS experiment was carried out to check the theory and the simulation.

  1. Study of gap conductance model for thermo mechanical fully coupled finite element model

    International Nuclear Information System (INIS)

    Kim, Hyo Cha; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun

    2012-01-01

accurately, a gap conductance model for thermo-mechanically fully coupled FE should be developed. However, gap conductance in FE can be a difficult issue in terms of convergence, because all elements positioned in the gap have a different gap conductance at each iteration step. It is clear that our code should have a gap conductance model for thermo-mechanically fully coupled FE in three dimensions. In this paper, a gap conductance model for thermo-mechanically coupled FE has been built using a commercial FE code in order to understand gap conductance modeling in FE. We extended the commercial FE code using APDL because it does not provide an iterative gap conductance model. Through this model, convergence parameters and characteristics were studied.

  2. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
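The re-initialization rule described in the abstract (reference state reset to the analysis; perturbed state reset to the analysis plus the preserved perturbation) is simple enough to write down directly:

```python
import numpy as np

def reinitialize(x_ref, x_pert, analysis):
    """Piecewise-modeling update at the end of a subinterval: the
    reference state becomes the analysis, and the perturbed state keeps
    its perturbation (x_pert - x_ref) on top of the analysis, so the
    climate-sensitivity signal survives the reset."""
    x_ref = np.asarray(x_ref, dtype=float)
    x_pert = np.asarray(x_pert, dtype=float)
    analysis = np.asarray(analysis, dtype=float)
    return analysis, analysis + (x_pert - x_ref)
```

In the paper this hard reset is softened with a nudging assimilation step, which avoids spin-up shocks from inconsistent mass and velocity errors in the analysis data.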

  3. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
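
    One standard example of the cloud-top-height proxies evaluated here is the Price and Rind (1992) parameterization (shown for illustration; the abstract does not state which specific scheme was retained):

```python
def flash_rate_per_min(cloud_top_km, continental=True):
    """Price & Rind (1992) cloud-top-height flash-rate parameterization:
    flashes per minute as a power law of cloud top height in km, with
    separate fits for continental and marine convection."""
    if continental:
        return 3.44e-5 * cloud_top_km ** 4.9
    return 6.40e-4 * cloud_top_km ** 1.73
```

    The strong height exponent over land (4.9) is why deep continental convection dominates simulated LNOx production.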

  4. Using an experimental model for the study of therapeutic touch.

    Science.gov (United States)

    dos Santos, Daniella Soares; Marta, Ilda Estéfani Ribeiro; Cárnio, Evelin Capellari; de Quadros, Andreza Urba; Cunha, Thiago Mattar; de Carvalho, Emilia Campos

    2013-02-01

    To verify whether the Paw Edema Model can be used in investigations of the effects of Therapeutic Touch on inflammation by measuring the variables pain, edema and neutrophil migration. This is a pilot, experimental study involving ten male mice of the same genetic strain, divided into an experimental and a control group and submitted to chemical induction of local inflammation in the right hind paw. The experimental group received a daily administration of Therapeutic Touch for 15 minutes over three days. The data showed statistically significant differences in the nociceptive threshold and in the paw circumference of the animals from the experimental group on the second day of the experiment. The experimental model involving animals can contribute to the study of the effects of Therapeutic Touch on inflammation; adjustments are suggested to the treatment duration, the number of sessions and the experiment duration.

  5. Penson-Kolb-Hubbard model: a renormalisation group study

    International Nuclear Information System (INIS)

    Bhattacharyya, Bibhas; Roy, G.K.

    1995-01-01

    The Penson-Kolb-Hubbard (PKH) model in one dimension (1d) has been studied by means of a real-space renormalisation group (RG) method for the half-filled band. Different phases are identified by studying the RG-flow pattern, the energy gap and different correlation functions. The phase diagram consists of four phases: a spin density wave (SDW), a strong coupling superconducting phase (SSC), a weak coupling superconducting phase (WSC) and a nearly metallic phase. For negative values of the pair hopping amplitude introduced in this model, it was found that the pair-pair correlation indicates a superconducting phase in which the centre-of-mass of the pairs moves with momentum π. (author). 7 refs., 4 figs

  6. Overview of the reactor safety study consequence model

    International Nuclear Information System (INIS)

    Wall, I.B.; Yaniv, S.S.; Blond, R.M.; McGrath, P.E.; Church, H.W.; Wayland, J.R.

    1977-01-01

    The Reactor Safety Study (WASH-1400) is a comprehensive assessment of the potential risk to the public from accidents in light water power reactors. The engineering analysis of the plants is described in detail in the Reactor Safety Study: it provides an estimate of the probability versus magnitude of the release of radioactive material. The consequence model, which is the subject of this paper, describes the progression of the postulated accident after the release of the radioactive material from the containment. A brief discussion of the manner in which the consequence calculations are performed is presented. The emphasis in the description is on the models and data that differ significantly from those previously used for these types of assessments. The results of the risk calculations for 100 light water power reactors are summarized

  7. Cold flow model study of an oxyfuel combustion pilot plant

    Energy Technology Data Exchange (ETDEWEB)

    Guio-Perez, D.C.; Tondl, G.; Hoeltl, W.; Proell, T.; Hofbauer, H. [Vienna University of Technology, Institute of Chemical Engineering, Vienna (Austria)

    2011-12-15

    The fluid-dynamic behavior of a circulating fluidized bed pilot plant for oxyfuel combustion was studied in a cold flow model, down-scaled using Glicksman's criteria. Pressures along the unit and the global circulation rate were used for characterization. The analysis of five operating parameters and their influence on the system was carried out; namely, total solids inventory and the air velocity of primary, secondary, loop seal and support fluidizations. The cold flow model study shows that the reactor design allows stable operation at a wide range of fluidization rates, with results that agree well with previous observations described in the literature. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
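
    The core of Glicksman's simplified scaling can be sketched as Froude-number matching (illustrative only; the full criteria also match density ratio, u/u_mf, particle sphericity and geometry ratios, which the abstract does not enumerate):

```python
import math

def cold_flow_scaling(length_ratio, u_full_scale):
    """Keep the Froude number u^2 / (g * L) equal between the hot unit and
    the cold flow model. length_ratio = L_model / L_full (e.g. 0.25 for a
    1:4 down-scaled cold flow model); returns the model gas velocity and
    the ratio of convective time scales."""
    u_model = u_full_scale * math.sqrt(length_ratio)  # Fr matched
    time_ratio = math.sqrt(length_ratio)              # t ~ L / u
    return u_model, time_ratio
```

    A 1:4 model of a unit fluidized at 4 m/s would thus run at 2 m/s, with events unfolding twice as fast as in the full-scale plant.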

  8. Modelling of protective actions in the German Risk Study (FRG)

    International Nuclear Information System (INIS)

    Burkart, A.K.

    1981-01-01

    An emergency response model for nuclear accidents has to allow for a great number of widely different emergency conditions. In addition, it should be compatible with the pertinent laws, regulations, ordinances, guidelines, criteria and reference levels. The German (FRG) guidelines are basic and flexible rather than precise, many decisions being left to the emergency management. In the Risk Study these decisions had to be anticipated. After a brief discussion of the basis of the emergency response model employed in the German Risk Study (FRG), the essential requirements to be met are listed. The main part of the paper deals with the rationale and specification of protective actions. As a result of the calculations the numbers of persons and sizes of areas involved in protective actions are presented. The last section deals with the variation of input data. (author)

  9. Paradigms of knowledge management with systems modelling case studies

    CERN Document Server

    Pandey, Krishna Nath

    2016-01-01

    This book has been written by studying the knowledge management implementation at POWERGRID India, one of the largest power distribution companies in the world. The patterns that emerged, and the models built from them, both hypothesized and data-enabled, are presented. The book suggests ways and means of knowledge management implementation, especially for organizations with multiple business verticals. It underlines that knowledge is both an entity and an organizational asset that can be managed, and it provides a holistic view of knowledge management implementation. It also emphasizes the phenomenological importance of human resource parameters as compared to that of technological parameters. Various hypotheses have been tested to validate the hypothesized models. This work will prove useful to corporations, researchers, and independent professionals working to study or implement knowledge management paradigms.

  10. Space engineering modeling and optimization with case studies

    CERN Document Server

    Pintér, János

    2016-01-01

    This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...

  11. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs)…

  12. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  13. Using Computational and Mechanical Models to Study Animal Locomotion

    OpenAIRE

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locom...

  14. Study of ATES thermal behavior using a steady flow model

    Science.gov (United States)

    Doughty, C.; Hellstroem, G.; Tsang, C. F.; Claesson, J.

    1981-01-01

    The thermal behavior of a single well aquifer thermal energy storage system in which buoyancy flow is neglected is studied. A dimensionless formulation of the energy transport equations for the aquifer system is presented, and the key dimensionless parameters are discussed. A simple numerical model is used to generate graphs showing the thermal behavior of the system as a function of these parameters. Some comparisons with field experiments are given to illustrate the use of the dimensionless groups and graphs.

  15. Vertical circulation and thermospheric composition: a modelling study

    OpenAIRE

    H. Rishbeth; I. C. F. Müller-Wodarg

    1999-01-01

    The coupled thermosphere-ionosphere-plasmasphere model CTIP is used to study the global three-dimensional circulation and its effect on neutral composition in the midlatitude F-layer. At equinox, the vertical air motion is basically up by day, down by night, and the atomic oxygen/molecular nitrogen [O/N2] concentration ratio is symmetrical about the equator. At solstice there is a summer-to-winter flow of air, with downwelling at subauroral latitudes in winter that produc...

  16. Physical Model Study of Cross Vanes and Ice

    Science.gov (United States)

    2009-08-01

    Currently little design guidance is available for constructing these structures on ice-affected rivers. This study used physical and numerical … spacing since, in the pre-scour state, experiments and the HEC-RAS hydraulic model (USACE 2002b) found that the water surface elevation merged with the … References: …docs/eng-manuals/em1110-2-1612/toc.htm; USACE (2002b) HEC-RAS, Hydraulic Reference Manual, US Army Corps of Engineers Hydrologic Engineering Center.

  17. The green seaweed Ulva: A model system to study morphogenesis

    OpenAIRE

    Thomas Wichard; Bénédicte Charrier; Frédéric Mineur; John Henry Bothwell; Olivier De Clerck; Juliet C. Coates

    2015-01-01

    Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva or sea lettuce species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless the genus also remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal growth, development and...

  18. The green seaweed Ulva: a model system to study morphogenesis

    OpenAIRE

    Wichard, Thomas; Charrier, Bénédicte; Mineur, Frédéric; Bothwell, John H; De Clerck, Olivier; Coates, Juliet C

    2015-01-01

    Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva or sea lettuce species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless the genus also remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal...

  19. Using animal models to study post-partum psychiatric disorders.

    Science.gov (United States)

    Perani, C V; Slattery, D A

    2014-10-01

    The post-partum period represents a time during which all maternal organisms undergo substantial plasticity in a wide variety of systems in order to ensure the well-being of the offspring. Although this time is generally associated with increased calmness and decreased stress responses, for a substantial subset of mothers, this period represents a time of particular risk for the onset of psychiatric disorders. Thus, post-partum anxiety, depression and, to a lesser extent, psychosis may develop, and not only affect the well-being of the mother but also place at risk the long-term health of the infant. Although the risk factors for these disorders, as well as normal peripartum-associated adaptations, are well known, the underlying aetiology of post-partum psychiatric disorders remains poorly understood. However, there have been a number of attempts to model these disorders in basic research, which aim to reveal their underlying mechanisms. In the following review, we first discuss known peripartum adaptations and then describe post-partum mood and anxiety disorders, including their risk factors, prevalence and symptoms. Thereafter, we discuss the animal models that have been designed in order to study them and what they have revealed about their aetiology to date. Overall, these studies show that it is feasible to study such complex disorders in animal models, but that more needs to be done in order to increase our knowledge of these severe and debilitating mood and anxiety disorders. © 2014 The British Pharmacological Society.

  20. Spatial Temporal Modelling of Particulate Matter for Health Effects Studies

    Science.gov (United States)

    Hamm, N. A. S.

    2016-10-01

    Epidemiological studies of the health effects of air pollution require estimation of individual exposure. It is not possible to obtain measurements at all relevant locations, so it is necessary to predict at these space-time locations, either on the basis of dispersion from emission sources or by interpolating observations. This study used data obtained from a low-cost sensor network of 32 air quality monitoring stations in the Dutch city of Eindhoven, which make up the ILM (innovative air (quality) measurement system). These stations currently provide PM10 and PM2.5 (particulate matter less than 10 and 2.5 μm in diameter), aggregated to hourly means. The data provide an unprecedented level of spatial and temporal detail for a city of this size. Despite these benefits, the time series of measurements is characterized by missing and noisy values. In this paper a space-time analysis is presented that is based on a dynamic model for the temporal component and a Gaussian process geostatistical model for the spatial component. Spatial-temporal variability was dominated by the temporal component, although the spatial variability was also substantial. The model delivered accurate predictions for both isolated missing values and 24-hour periods of missing values (RMSE = 1.4 μg m-3 and 1.8 μg m-3, respectively). Outliers could be detected by comparison to the 95% prediction interval. The model shows promise for predicting missing values, detecting outliers and mapping to support health impact studies.
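
    The spatial component and the 95%-interval outlier check can be sketched with a minimal Gaussian process (assumed squared-exponential kernel and hyper-parameters, 1-D station positions for brevity; not the paper's fitted model):

```python
import numpy as np

def gp_predict(x_obs, y_obs, x_new, length_scale=2.0, sigma_f=3.0, sigma_n=1.0):
    """Zero-mean GP regression: predict PM at unobserved locations and
    return a 95% prediction interval; observations outside their own
    interval would be flagged as outliers."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / length_scale) ** 2)
    K = k(x_obs, x_obs) + sigma_n ** 2 * np.eye(len(x_obs))
    Ks = k(x_new, x_obs)
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks @ alpha                                   # posterior mean
    cov = k(x_new, x_new) - Ks @ np.linalg.solve(K, Ks.T)
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None) + sigma_n ** 2)
    return mean, mean - 1.96 * sd, mean + 1.96 * sd     # 95% interval
```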

  1. Regional scale groundwater modelling study for Ganga River basin

    Science.gov (United States)

    Maheswaran, R.; Khosa, R.; Gosain, A. K.; Lahari, S.; Sinha, S. K.; Chahar, B. R.; Dhanya, C. T.

    2016-10-01

    Subsurface movement of water within the alluvial formations of the Ganga Basin System of North and East India, extending over an area of 1 million km2, was simulated using a Visual MODFLOW-based transient numerical model. The study incorporates historical groundwater developments as recorded by various concerned agencies and also accommodates the role of some of the major tributaries of the River Ganga as geo-hydrological boundaries. Geo-stratigraphic structures, along with the corresponding hydrological parameters, were obtained from the Central Groundwater Board, India, and used in the study, which was carried out over a time horizon of 4.5 years. The model parameters were fine-tuned for calibration using Parameter Estimation (PEST) simulations. Analysis of the stream-aquifer interaction using Zone Budget has allowed demarcation of the losing and gaining stretches along the main stem of the River Ganga as well as some of its principal tributaries. From a management perspective, and entirely consistent with general understanding, it is seen that unabated long-term groundwater extraction within the study basin has induced a sharp decrease in critical dry-weather base flow contributions. In view of a surge in demand for dry-season irrigation water for agriculture in the area, numerical models can be a useful tool not only to generate an understanding of the underlying groundwater system but also to facilitate the development of basin-wide detailed impact scenarios as inputs for management and policy action.

  2. Analytical, Experimental, and Modelling Studies of Lunar and Terrestrial Rocks

    Science.gov (United States)

    Haskin, Larry A.

    1997-01-01

    The goal of our research has been to understand the paths and the processes of planetary evolution that produced planetary surface materials as we find them. Most of our work has been on lunar materials and processes. We have done studies that obtain geological knowledge from detailed examination of regolith materials and we have reported implications for future sample-collecting and on-surface robotic sensing missions. Our approach has been to study a suite of materials that we have chosen in order to answer specific geologic questions. We continue this work under NAG5-4172. The foundation of our work has been the study of materials with precise chemical and petrographic analyses, emphasizing analysis for trace chemical elements. We have used quantitative models as tests to account for the chemical compositions and mineralogical properties of the materials in terms of regolith processes and igneous processes. We have done experiments as needed to provide values for geochemical parameters used in the models. Our models take explicitly into account the physical as well as the chemical processes that produced or modified the materials. Our approach to planetary geoscience owes much to our experience in terrestrial geoscience, where samples can be collected in field context and sampling sites revisited if necessary. Through studies of terrestrial analog materials, we have tested our ideas about the origins of lunar materials. We have been mainly concerned with the materials of the lunar highland regolith, their properties, their modes of origin, their provenance, and how to extrapolate from their characteristics to learn about the origin and evolution of the Moon's early igneous crust. From this work a modified model for the Moon's structure and evolution is emerging, one of globally asymmetric differentiation of the crust and mantle to produce a crust consisting mainly of ferroan and magnesian igneous rocks containing on average 70-80% plagioclase, with a large

  3. Simulation study of a rectifying bipolar ion channel: Detailed model versus reduced model

    Directory of Open Access Journals (Sweden)

    Z. Ható

    2016-02-01

    We study a rectifying mutant of the OmpF porin ion channel using both all-atom and reduced models. The mutant was created by Miedema et al. [Nano Lett., 2007, 7, 2886] on the basis of the NP semiconductor diode, in which an NP junction is formed. The mutant contains a pore region with positive amino acids on the left-hand side and negative amino acids on the right-hand side. Experiments show that this mutant rectifies. Although we do not know the structure of this mutant, we can build an all-atom model for it on the basis of the structure of the wild-type channel. Interestingly, molecular dynamics simulations for this all-atom model do not produce rectification. A reduced model that contains only the important degrees of freedom (the positive and negative amino acids and free ions in an implicit solvent), on the other hand, exhibits rectification. Our calculations for the reduced model (using the Nernst-Planck equation coupled to Local Equilibrium Monte Carlo simulations) reveal a rectification mechanism that is different from that seen for semiconductor diodes. The basic reason is that the ions are different in nature from electrons and holes (they do not recombine). We provide explanations for the failure of the all-atom model, including the effect of all the other atoms in the system as a noise that inhibits the response of the ions (which would be necessary for rectification) to the polarizing external field.
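
    The transport half of such a reduced model can be illustrated with a stand-alone 1-D steady-state Nernst-Planck solve under a fixed potential gradient (the paper couples NP to Local Equilibrium Monte Carlo; this sketch fixes the field instead and treats a single ion species):

```python
import numpy as np

def nernst_planck_1d(c_left, c_right, e_field, n=101, z=1.0, length=1.0):
    """Solve d/dx (dc/dx + z * E * c) = 0 with Dirichlet boundary
    concentrations, by a central-difference linear system; the constant
    diffusion coefficient cancels at steady state with no sources."""
    h = length / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0       # Dirichlet boundaries
    b[0], b[-1] = c_left, c_right
    drift = z * e_field * h / 2.0   # dimensionless drift (Peclet-like) term
    for i in range(1, n - 1):
        A[i, i - 1] = 1.0 - drift
        A[i, i] = -2.0
        A[i, i + 1] = 1.0 + drift
    return np.linalg.solve(A, b)    # concentration profile c(x)
```

    With zero field the profile is the pure-diffusion straight line; a nonzero field skews it, which is the single-species seed of the asymmetric response that produces rectification in the full bipolar model.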

  4. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and their vertical distribution are crucial to the simulated concentration and deposition fields, as is the choice of Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence was also evident over the Mediterranean, the North Sea and the Baltic Sea, and some influence is seen over continental Europe, while the difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3 / OH oxidation mechanism.
    The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  5. Study and mathematical model of ultra-low gas burner

    International Nuclear Information System (INIS)

    Gueorguieva, A.

    2001-01-01

    The main objective of this project is the prediction and reduction of NOx and CO2 emissions below the levels recommended by European standards for gas combustion processes. A mathematical model of the burner and combustion chamber is developed based on interacting fluid-dynamic processes: turbulent flow, gas-phase chemical reactions, and heat and radiation transfer. A NOx prediction model for prompt and thermal NOx is developed. The validation of the CFD (computational fluid dynamics) simulations corresponds to the 5 MWI burner type TEA, installed on the CASPER boiler. This burner is a three-stream air-distribution burner with swirl effect, designed by ENEL to meet future NOx emission standards. For the combustion computer modelling, the FLUENT CFD code is preferred because of its capability to describe accurately a large number of rapidly interacting processes (turbulent flow, phase chemical reactions and heat transfer) and for its wide range of calculation and graphical output reporting options. The computational tool used in this study is FLUENT version 5.4.1, installed on fs 8200 UNIX systems. The work includes a study of the effectiveness of low-NOx concepts and of the impact of combustion and swirl air distribution and flue gas recirculation on peak flame temperatures, flame structure and fuel/air mixing. A finite-rate combustion model, the Eddy-Dissipation (Magnussen-Hjertager) chemical model for 1- and 2-step chemical reactions on a two-dimensional (2D) grid, is developed along with NOx and CO2 predictions. The experimental part of the project consists of participation in combustion tests at experimental facilities located in Livorno. The results of the experiments are used to obtain a better picture of the combustion process at small scale and to collect the necessary input data for further FLUENT simulations.

  6. Foothills model forest grizzly bear study : project update

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-01-01

    This report updates a five year study launched in 1999 to ensure the continued healthy existence of grizzly bears in west-central Alberta by integrating their needs into land management decisions. The objective was to gather better information and to develop computer-based maps and models regarding grizzly bear migration, habitat use and response to human activities. The study area covers 9,700 square km in west-central Alberta where 66 to 147 grizzly bears exist. During the first 3 field seasons, researchers captured and radio collared 60 bears. Researchers at the University of Calgary used remote sensing tools and satellite images to develop grizzly bear habitat maps. Collaborators at the University of Washington used trained dogs to find bear scat which was analyzed for DNA, stress levels and reproductive hormones. Resource Selection Function models are being developed by researchers at the University of Alberta to identify bear locations and to see how habitat is influenced by vegetation cover and oil, gas, forestry and mining activities. The health of the bears is being studied by researchers at the University of Saskatchewan and the Canadian Cooperative Wildlife Health Centre. The study has already advanced the scientific knowledge of grizzly bear behaviour. Preliminary results indicate that grizzlies continue to find mates, reproduce and gain weight and establish dens. These are all good indicators of a healthy population. Most bear deaths have been related to poaching. The study will continue for another two years. 1 fig.

  7. Comprehensive School Reform Models: A Study Guide for Comparing CSR Models (and How Well They Meet Minnesota's Learning Standards).

    Science.gov (United States)

    St. John, Edward P.; Loescher, Siri; Jacob, Stacy; Cekic, Osman; Kupersmith, Leigh; Musoba, Glenda Droogsma

    A growing number of schools are exploring the prospect of applying for funding to implement a Comprehensive School Reform (CSR) model. But the process of selecting a CSR model can be complicated because it frequently involves self-study and a review of models to determine which models best meet the needs of the school. This study guide is intended…

  8. Combined observational and modeling efforts of aerosol-cloud-precipitation interactions over Southeast Asia

    Science.gov (United States)

    Loftus, Adrian; Tsay, Si-Chee; Nguyen, Xuan Anh

    2016-04-01

    Low-level stratocumulus (Sc) clouds cover more of the Earth's surface than any other cloud type, rendering them critical for Earth's energy balance, primarily via reflection of solar radiation, as well as for their role in the global hydrological cycle. Stratocumuli are particularly sensitive to changes in aerosol loading on both microphysical and macrophysical scales, yet the complex feedbacks involved in aerosol-cloud-precipitation interactions remain poorly understood. Moreover, research on these clouds has largely been confined to marine environments, with far fewer studies over land, where major sources of anthropogenic aerosols exist. The aerosol burden over Southeast Asia (SEA) in boreal spring, attributed to biomass burning (BB), exhibits highly consistent spatiotemporal distribution patterns, with major variability due to changes in aerosol loading mediated by processes ranging from large-scale climate factors to diurnal meteorological events. Downwind from source regions, the transported BB aerosols often overlap with low-level Sc cloud decks associated with the development of the region's pre-monsoon system, providing a unique, natural laboratory for further exploring their complex micro- and macro-scale relationships. Compared to other locations worldwide, studies of springtime biomass-burning aerosols and the predominantly Sc cloud systems over SEA and their ensuing interactions are underrepresented in the scientific literature. Measurements of aerosol and cloud properties, whether ground-based or from satellites, generally lack information on microphysical processes; thus cloud-resolving models are often employed to simulate the underlying physical processes in aerosol-cloud-precipitation interactions. The Goddard Cumulus Ensemble (GCE) cloud model has recently been enhanced with a triple-moment (3M) bulk microphysics scheme as well as the Regional Atmospheric Modeling System (RAMS) version 6 aerosol module. Because the aerosol burden not only affects cloud

  9. A study of spatial resolution in pollution exposure modelling

    Directory of Open Access Journals (Sweden)

    Gustafsson Susanna

    2007-06-01

    Full Text Available Abstract Background This study is part of several ongoing projects concerning epidemiological research into the effects on health of exposure to air pollutants in the region of Scania, southern Sweden. The aim is to investigate the optimal spatial resolution, with respect to temporal resolution, for a pollutant database of NOx values which will be used mainly for epidemiological studies with durations of days, weeks or longer periods. The fact that a pollutant database has a fixed spatial resolution makes the choice critical for the future use of the database. Results The results from the study showed that the agreement between the modelled concentrations of the reference grid with high spatial resolution (100 m, denoted the fine grid) and the coarser grids (200, 400, 800 and 1600 m) improved with increasing spatial resolution. When the pollutant values were aggregated in time (from hours to days and weeks), the disagreement between the fine grid and the coarser grids was significantly reduced. The results also illustrate a considerable difference in optimal spatial resolution depending on the character of the study area (rural or urban). To estimate the accuracy of the modelled values, comparisons were made with measured NOx values. The mean difference between the modelled and the measured values was 0.6 μg/m3, with a standard deviation of 5.9 μg/m3, for the daily differences. Conclusion The choice of spatial resolution should not considerably deteriorate the accuracy of the modelled NOx values. Considering the comparison between modelled and measured values, we estimate that an error due to coarse resolution greater than 1 μg/m3 is inadvisable if a time resolution of one day is used. Based on the study of different spatial resolutions we conclude that for urban areas a spatial resolution of 200–400 m is suitable, while for rural areas the spatial resolution could be coarser (about 1600 m). This implies that we should develop a pollutant
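
    The resolution comparison described above can be sketched as a block-averaging exercise: aggregate a fine grid into coarser cells and measure the disagreement the coarsening introduces. The field, grid sizes and helper functions below are purely illustrative stand-ins, not the Scania NOx database.

```python
import numpy as np

# Hypothetical fine-grid NOx field (ug/m3) at 100 m resolution.
rng = np.random.default_rng(0)
fine = rng.gamma(shape=4.0, scale=5.0, size=(64, 64))  # 64x64 cells of 100 m

def aggregate(grid, factor):
    """Block-average a fine grid into coarser cells (factor=2 -> 200 m, etc.)."""
    n = grid.shape[0] // factor * factor
    g = grid[:n, :n]
    return g.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

def disagreement(fine, factor):
    """RMS difference between fine cells and the coarse cell covering them."""
    coarse = aggregate(fine, factor)
    back = np.kron(coarse, np.ones((factor, factor)))  # expand coarse values back
    n = back.shape[0]
    return float(np.sqrt(np.mean((fine[:n, :n] - back) ** 2)))

for factor, res in [(2, 200), (4, 400), (8, 800), (16, 1600)]:
    print(f"{res:5d} m grid: RMS disagreement {disagreement(fine, factor):.2f} ug/m3")
```

    As in the study, the disagreement with the fine reference grid grows as the cells become coarser; aggregating in time would further smooth both fields and shrink the gap.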

  10. Xenopus: An Emerging Model for Studying Congenital Heart Disease

    Science.gov (United States)

    Kaltenbrun, Erin; Tandon, Panna; Amin, Nirav M.; Waldron, Lauren; Showell, Chris; Conlon, Frank L.

    2011-01-01

    Congenital heart defects affect nearly 1% of all newborns and are a significant cause of infant death. Clinical studies have identified a number of congenital heart syndromes associated with mutations in genes that are involved in the complex process of cardiogenesis. The African clawed frog, Xenopus, has been instrumental in studies of vertebrate heart development and provides a valuable tool to investigate the molecular mechanisms underlying human congenital heart diseases. In this review, we discuss the methodologies that make Xenopus an ideal model system to investigate heart development and disease. We also outline congenital heart conditions linked to cardiac genes that have been well-studied in Xenopus and describe some emerging technologies that will further aid in the study of these complex syndromes. PMID:21538812

  11. ANIMAL MODELS FOR THE STUDY OF LEISHMANIASIS IMMUNOLOGY

    Directory of Open Access Journals (Sweden)

    Elsy Nalleli Loria-Cervera

    2014-01-01

    Full Text Available Leishmaniasis remains a major public health problem worldwide and is classified as Category I by the TDR/WHO, mainly due to the absence of control. Many experimental models, such as rodents, dogs and monkeys, have been developed, each with specific features, in order to characterize the immune response to Leishmania species, but none reproduces the pathology observed in human disease. Conflicting data may arise in part because different parasite strains or species are being examined, different tissue targets (mouse footpad, ear, or base of tail) are being infected, and different numbers (“low”, 1×10², and “high”, 1×10⁶) of metacyclic promastigotes have been inoculated. Recently, new approaches have been proposed to provide more meaningful data regarding the host response and pathogenesis that parallel human disease. The use of sand fly saliva and low numbers of parasites in experimental infections has made it possible to mimic natural transmission and to find new molecules and immune mechanisms which should be considered when designing vaccines and control strategies. Moreover, the use of wild rodents as experimental models has been proposed as a good alternative for studying host-pathogen relationships and for testing candidate vaccines. To date, using natural reservoirs to study Leishmania infection has been challenging because immunologic reagents for use in wild rodents are lacking. This review discusses the principal immunological findings against Leishmania infection in different animal models, highlighting the importance of using experimental conditions similar to natural transmission, and reservoir species as experimental models, to study the immunopathology of the disease.

  12. A crowdsourcing model for creating preclinical medical education study tools.

    Science.gov (United States)

    Bow, Hansen C; Dattilo, Jonathan R; Jonas, Andrea M; Lehmann, Christoph U

    2013-06-01

    During their preclinical course work, medical students must memorize and recall substantial amounts of information. Recent trends in medical education emphasize collaboration through team-based learning. In the technology world, the trend toward collaboration has been characterized by the crowdsourcing movement. In 2011, the authors developed an innovative approach to team-based learning that combined students' use of flashcards to master large volumes of content with a crowdsourcing model, using a simple informatics system to enable those students to share in the effort of generating concise, high-yield study materials. The authors used Google Drive and developed a simple Java software program that enabled students to simultaneously access and edit sets of questions and answers in the form of flashcards. Through this crowdsourcing model, medical students in the class of 2014 at the Johns Hopkins University School of Medicine created a database of over 16,000 questions that corresponded to the Genes to Society basic science curriculum. An analysis of exam scores revealed that students in the class of 2014 outperformed those in the class of 2013, who did not have access to the flashcard system, and a survey of students demonstrated that users were generally satisfied with the system and found it a valuable study tool. In this article, the authors describe the development and implementation of their crowdsourcing model for creating study materials, emphasize its simplicity and user-friendliness, describe its impact on students' exam performance, and discuss how students in any educational discipline could implement a similar model of collaborative learning.

  13. Detailed kinetic modeling study of n-pentanol oxidation

    KAUST Repository

    Heufer, Karl Alexander; Sarathy, Mani; Curran, Henry J.; Davis, Alexander C.; Westbrook, Charles K.; Pitz, William J.

    2012-01-01

    To help overcome the world's dependence upon fossil fuels, suitable biofuels are promising alternatives that can be used in the transportation sector. Recent research on internal combustion engines shows that short alcoholic fuels (e.g., ethanol or n-butanol) have reduced pollutant emissions and increased knock resistance compared to fossil fuels. Although higher molecular weight alcohols (e.g., n-pentanol and n-hexanol) exhibit higher reactivity that lowers their knock resistance, they are suitable for diesel engines or advanced engine concepts, such as homogeneous charge compression ignition (HCCI), where higher reactivity at lower temperatures is necessary for engine operation. The present study presents a detailed kinetic model for n-pentanol based on modeling rules previously presented for n-butanol. This approach was initially validated using quantum chemistry calculations to verify the most stable n-pentanol conformation and to obtain C-H and C-C bond dissociation energies. The proposed model has been validated against ignition delay time data, speciation data from a jet-stirred reactor, and laminar flame velocity measurements. Overall, the model shows good agreement with the experiments and permits a detailed discussion of the differences between alcohols and alkanes. © 2012 American Chemical Society.
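
    Rate coefficients in detailed kinetic mechanisms of this kind are conventionally expressed in modified Arrhenius form. A minimal sketch of evaluating such a coefficient; the parameters are made up for a hypothetical H-abstraction reaction, not values from the n-pentanol mechanism.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K), matching Ea units below

def k_arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate coefficient: k(T) = A * T**n * exp(-Ea / (R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

# Illustrative parameters (NOT from the published mechanism):
# A in cm3/(mol*s), n dimensionless, Ea in kcal/mol.
A, n, Ea = 1.0e6, 2.0, 5.0
for T in (700.0, 1000.0, 1400.0):
    print(f"T = {T:6.0f} K  k = {k_arrhenius(A, n, Ea, T):.3e}")
```

    Ignition-delay and jet-stirred-reactor validation targets probe exactly how such k(T) values, summed over hundreds of reactions, reproduce the fuel's temperature-dependent reactivity.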

  14. Study on modeling of Energy-Economy-Environment system

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seung Jin [Korea Energy Economics Institute, Euiwang (Korea)

    1999-07-01

    This study analyzed the effects of policies to reduce carbon dioxide emissions from energy use by developing a new operational general equilibrium model. The model is a multi-sector, recursively dynamic model designed to forecast economic variables such as GDP, energy consumption, and carbon dioxide emissions at five-year intervals until 2030. Using this model, three greenhouse gas reduction policy scenarios were analyzed: the introduction of a single worldwide carbon tax, the imposition of limits on greenhouse gas emissions, and the introduction of an international emission permit trading system. The analysis shows that implementing greenhouse gas reduction policy through domestic policy instruments alone places a heavy burden on the Korean economy. It is therefore considered necessary to reduce greenhouse gases cost-effectively by actively using the Kyoto Protocol mechanisms, such as international permit trading, joint implementation, and the Clean Development Mechanism, when greenhouse gas reduction imposes a heavy burden. Moreover, a policy that depends only on price mechanisms, such as a carbon tax or permit trading, entails very high costs and has limitations. Relieving some of the burden on the economy therefore requires simultaneously implementing non-price measures such as energy technology development and the restructuring of industry and the transportation system. (author). 70 refs., 11 figs., 34 tabs.

  15. Experimental study and modelling of iron ore reduction by hydrogen

    International Nuclear Information System (INIS)

    Wagner, D.

    2008-01-01

    In an effort to find new ways to drastically reduce CO₂ emissions from the steel industry (ULCOS project), the reduction of iron ore by pure hydrogen in a shaft furnace was investigated. The work consisted of literature, experimental, and modelling studies. The chemical reaction and its kinetics were analysed on the basis of thermogravimetric experiments and physicochemical characterizations of partially reduced samples. A specific kinetic model was designed that simulates the successive reactions, the different steps of mass transport, and possible iron sintering, at the particle scale. Finally, a two-dimensional numerical model of a shaft furnace was developed. It depicts the variation of the solid and gas temperatures and compositions throughout the reactor. One original feature of the model is the use of the law of additive characteristic times for calculating the reaction rates, which allowed us to handle both the particle and the reactor scale while keeping the calculation time reasonable. From the simulation results, the influence of the process parameters was assessed, and optimal operating conditions were identified, which reveal the efficiency of the hydrogen process. (author)
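
    The law of additive characteristic times approximates the time needed to reach a given conversion as the sum of the times each resistance alone would require. A sketch using the classic shrinking-core conversion laws for a gas-solid reaction; the characteristic times τ are hypothetical inputs, not values from this thesis.

```python
def time_to_conversion(X, tau_chem, tau_pore, tau_film):
    """Approximate the time to reach conversion X (0..1) as the sum of the
    times each step alone would need (law of additive characteristic times),
    using the classic shrinking-core conversion laws for each regime."""
    # interface chemical reaction control
    t_chem = tau_chem * (1.0 - (1.0 - X) ** (1.0 / 3.0))
    # internal (product-layer) diffusion control
    t_pore = tau_pore * (1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X))
    # external gas-film mass transfer control
    t_film = tau_film * X
    return t_chem + t_pore + t_film

# Hypothetical characteristic times (s) for one pellet size and temperature.
for X in (0.25, 0.5, 0.75, 1.0):
    print(f"X = {X:4.2f}  t = {time_to_conversion(X, 600.0, 900.0, 120.0):7.1f} s")
```

    At full conversion (X = 1) each law reaches its own τ, so the total time is simply the sum of the characteristic times, which is what makes the approach cheap enough to embed in a reactor-scale model.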

  16. Experimental and modelling studies of radionuclide migration from contaminated groundwaters

    International Nuclear Information System (INIS)

    Tompkins, J. A.; Butler, A. P.; Wheater, H. S.; Shaw, G.; Wadey, P.; Bell, J. N. B.

    1994-01-01

    Lysimeter-based studies of radionuclide uptake by winter wheat are being undertaken to investigate soil-to-plant transfer processes. A five year multi-disciplinary research project has concentrated on the upward migration of contaminants from near surface water-tables and their subsequent uptake by a winter wheat crop. A weighted transfer factor approach and a physically based modelling methodology, for the simulation and prediction of radionuclide uptake, have been developed which offer alternatives to the traditional transfer factor approach. Integrated hydrological and solute transport models are used to simulate contaminant movement and subsequent root uptake. This approach enables prediction of radionuclide transport for a wide range of soil, plant and radionuclide types. This paper presents simulated results of ²²Na plant uptake and soil activity profiles, which are verified with respect to lysimeter data. The results demonstrate that a simple modelling approach can describe the variability in radioactivity in both the harvested crop and the soil profile, without recourse to a large number of empirical parameters. The proposed modelling technique should be readily applicable to a range of scales and conditions, since it embodies an understanding of the underlying physical processes of the system. This work constitutes part of an ongoing research programme being undertaken by UK Nirex Ltd., to assess the long term safety of a deep level repository for low and intermediate level nuclear waste. (author)

  17. A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    Full Text Available We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is determining a model for the background population density. The risk map shows a clear dependence on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
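
    The generative structure of a discretized log Gaussian Cox process can be sketched in a few lines: a latent Gaussian field plus a covariate effect defines a log intensity, and counts are Poisson given that intensity. The grid, kernel, covariate and coefficients below are all illustrative, not the TBE analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 50                                   # grid cells along a 1-D transect
x = np.linspace(0.0, 1.0, m)
# Exponential covariance kernel for the latent Gaussian field (range 0.1).
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
L = np.linalg.cholesky(cov + 1e-8 * np.eye(m))   # jitter for stability
z = L @ rng.standard_normal(m)           # latent Gaussian field sample
altitude = np.sin(2 * np.pi * x)         # hypothetical covariate
beta0, beta1 = 1.0, 0.5                  # hypothetical regression coefficients
lam = np.exp(beta0 + beta1 * altitude + z)       # cell-wise intensity
counts = rng.poisson(lam * (1.0 / m))    # Poisson counts, cell area = 1/m
print("total simulated cases:", counts.sum())
```

    MCMC inference then runs this construction in reverse: given observed counts, it samples the posterior of z and the betas, and a population-density offset would enter the log intensity alongside the covariates.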

  18. Online modelling of water distribution systems: a UK case study

    Directory of Open Access Journals (Sweden)

    J. Machell

    2010-03-01

    Full Text Available Hydraulic simulation models of water distribution networks are routinely used for operational investigations and network design purposes. However, their full potential is often never realised because, in the majority of cases, they have been calibrated with data collected manually from the field during a single historic time period and, as such, reflect the network operational conditions that were prevalent at that time; they are then applied as part of a reactive, desktop investigation. In order to use a hydraulic model to assist proactive distribution network management, its element asset information must be up to date and it should be able to access current network information to drive simulations. Historically this advance has been restricted by the high cost of collecting and transferring the necessary field measurements. However, recent innovation and cost reductions associated with data transfer are resulting in the collection of data from increasing numbers of sensors in water supply systems, and the automatic transfer of the data to the point of use. This means engineers potentially have access to a constant stream of current network data that enables a new era of "on-line" modelling that can be used to continually assess standards of service compliance for pressure and to reduce the impact of network events, such as mains bursts, on customers. A case study is presented here that shows how an online modelling system can give timely warning of changes from normal network operation, providing capacity to minimise customer impact.

  19. A multiple-compartment model for biokinetics studies in plants

    International Nuclear Information System (INIS)

    Garcia, Fermin; Pietrobron, Flavio; Fonseca, Agnes M.F.; Mol, Anderson W.; Rodriguez, Oscar; Guzman, Fernando

    2001-01-01

    The present work uses the system of linear equations underlying Assimakopoulos's general multi-compartment model (GMCM) to develop a new method for determining flow parameters and transfer coefficients in plants. The need for mathematical models to quantify the penetration of a trace substance in animals and plants has often been stressed in the literature. Usually, in radiological environmental studies, the mean value of contaminant concentrations over the whole plant body, or its edible part, is used without taking the regularities of plant physiology into account. In this work, the concepts and mathematical formulation of a Vegetable Multi-Compartment Model (VMCM) that takes the plant's physiological regularities into account are presented. The model, based on the general ideas of the GMCM and the statistical least-squares method STATFLUX, is proposed for use in the inverse sense: the experimental time dependence of the concentration in each compartment is the input, and the parameters are determined from these data by a statistical approach. The case of uranium metabolism is discussed. (author)
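
    A linear multi-compartment system of this type reduces to dC/dt = A·C, with the off-diagonal entries of A as transfer coefficients. A minimal forward sketch for a hypothetical three-compartment plant (root, stem, leaf); in the inverse use proposed above, the measured concentration curves would be the input and the rate constants the fitted unknowns.

```python
import numpy as np

# Hypothetical transfer coefficients (1/day): root->stem, stem->leaf, stem->root.
k_rs, k_sl, k_sr = 0.5, 0.3, 0.1
# dC/dt = A @ C; column j of A sums to zero, so total activity is conserved.
A = np.array([
    [-k_rs,            k_sr,  0.0],   # root
    [ k_rs, -(k_sl + k_sr),   0.0],   # stem
    [ 0.0,             k_sl,  0.0],   # leaf (acts as a sink)
])

def simulate(C0, A, t_end, dt=1e-3):
    """Forward-Euler integration of the linear compartment system dC/dt = A C."""
    C = np.array(C0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        C = C + dt * (A @ C)
    return C

C = simulate([1.0, 0.0, 0.0], A, t_end=10.0)   # all activity starts in the root
print("concentrations (root, stem, leaf):", np.round(C, 4))
```

    Fitting would minimize the squared mismatch between simulated and measured compartment concentrations over time, which is the least-squares idea behind STATFLUX.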

  20. Detailed kinetic modeling study of n-pentanol oxidation

    KAUST Repository

    Heufer, Karl Alexander

    2012-10-18

    To help overcome the world's dependence upon fossil fuels, suitable biofuels are promising alternatives that can be used in the transportation sector. Recent research on internal combustion engines shows that short alcoholic fuels (e.g., ethanol or n-butanol) have reduced pollutant emissions and increased knock resistance compared to fossil fuels. Although higher molecular weight alcohols (e.g., n-pentanol and n-hexanol) exhibit higher reactivity that lowers their knock resistance, they are suitable for diesel engines or advanced engine concepts, such as homogeneous charge compression ignition (HCCI), where higher reactivity at lower temperatures is necessary for engine operation. The present study presents a detailed kinetic model for n-pentanol based on modeling rules previously presented for n-butanol. This approach was initially validated using quantum chemistry calculations to verify the most stable n-pentanol conformation and to obtain C-H and C-C bond dissociation energies. The proposed model has been validated against ignition delay time data, speciation data from a jet-stirred reactor, and laminar flame velocity measurements. Overall, the model shows good agreement with the experiments and permits a detailed discussion of the differences between alcohols and alkanes. © 2012 American Chemical Society.

  1. Phenomenological study of extended seesaw model for light sterile neutrino

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Newton [Physical Research Laboratory,Navarangpura, Ahmedabad 380 009 (India); Indian Institute of Technology,Gandhinagar, Ahmedabad-382424 (India); Ghosh, Monojit [Department of Physics, Tokyo Metropolitan University,Hachioji, Tokyo 192-0397 (Japan); Goswami, Srubabati [Physical Research Laboratory,Navarangpura, Ahmedabad 380 009 (India); Gupta, Shivani [Center of Excellence for Particle Physics (CoEPP), University of Adelaide,Adelaide SA 5005 (Australia)

    2017-03-14

    We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model, which can give rise to ∼ eV scale sterile neutrinos. In this model, three right-handed neutrinos and one extra singlet S are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, m_ν, depends on the Dirac neutrino mass matrix (M_D), the Majorana neutrino mass matrix (M_R) and the mass matrix (M_S) coupling the right-handed neutrinos and the singlet. The model predicts that one of the light neutrino masses vanishes. We systematically investigate the zero textures in M_D and observe that a maximum of five zeros in M_D can lead to viable zero textures in m_ν. For this study we consider four different forms for M_R (one diagonal and three off-diagonal) and two different forms of M_S containing one zero. Remarkably, we obtain only two allowed forms of m_ν (m_eτ = 0 and m_ττ = 0), both having an inverted hierarchical mass spectrum. We re-analyze the phenomenological implications of these two allowed textures of m_ν in the light of recent neutrino oscillation data. In the context of the MES model, we also express the low energy mass matrix, the mass of the sterile neutrino and the active-sterile mixing in terms of the parameters of the allowed Yukawa matrices. The MES model leads to some extra correlations which disallow some of the Yukawa textures obtained earlier, even though they give allowed one-zero forms of m_ν. We show that the allowed textures in our study can be realized in a simple way in a model based on the MES mechanism with a discrete Abelian flavor symmetry group Z_8×Z_2.
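
    For reference, the leading-order MES expressions commonly quoted in the literature (stated here as the standard forms under the hierarchy M_R ≫ M_S ≫ M_D, not transcribed from this paper) give the active and sterile mass matrices in terms of M_D, M_R and M_S:

```latex
m_\nu \simeq M_D M_R^{-1} M_S^T \left( M_S M_R^{-1} M_S^T \right)^{-1} M_S \left( M_R^{-1} \right)^T M_D^T \;-\; M_D M_R^{-1} M_D^T ,
\qquad
m_s \simeq - M_S M_R^{-1} M_S^T .
```

    Because M_S M_R^{-1} M_S^T is a scalar for a single singlet S, a zero in M_S or M_D propagates directly into zeros of m_ν, which is what makes the texture analysis tractable.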

  2. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

    Schiau (Macavei Laura - Liana

    2011-07-01

    Full Text Available In the last two decades, flat tax systems have spread around the globe, from Eastern and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake, as long as the new systems are just a feint of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context, this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features of and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income tax system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on compare-and-contrast qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries around the globe with fiscal systems that are considered flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model built following an economic logic rather than a legal sense, being at the same time a simple and transparent system. Other countries, such as Slovakia, Albania and Macedonia in Central and Eastern Europe, fulfill the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  3. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  4. A magnetospheric specification model validation study: Geosynchronous electrons

    Science.gov (United States)

    Hilmer, R. V.; Ginet, G. P.

    2000-09-01

    The Rice University Magnetospheric Specification Model (MSM) is an operational space environment model of the inner and middle magnetosphere designed to specify charged particle fluxes up to 100 keV. Validation test data, taken between January 1996 and June 1998, consist of electron fluxes measured by a charge control system (CCS) on a Defense Satellite Communications System (DSCS) spacecraft. The CCS includes both electrostatic analyzers to measure the particle environment and surface potential monitors to track differential charging between various materials and vehicle ground. While typical RMS error analysis methods provide a sense of the model's overall abilities, they do not specifically address physical situations critical to operations, i.e., how well the model specifies when a high differential charging state is probable. In this validation study, differential charging states observed by DSCS are used to determine several threshold fluxes for the associated 20–50 keV electrons, and joint probability distributions are constructed to determine Hit, Miss, and False Alarm rates for the model. An MSM run covering the two-and-one-half-year interval is performed using the minimum required input parameter set, consisting of only the magnetic activity index Kp, in order to statistically examine the model's seasonal and yearly performance. In addition, the relative merits of the input parameters, i.e., Kp, Dst, the equatorward boundary of the diffuse aurora at midnight, the cross-polar cap potential, solar wind density and velocity, and interplanetary magnetic field values, are evaluated as drivers of shorter model runs of 100 days each. In an effort to develop operational tools that can address spacecraft charging issues, we also identify temporal features in the model output that can be directly linked to input parameter variations and model boundary conditions. All model output is interpreted using the full three-dimensional, dipole tilt-dependent algorithms currently in
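
    The Hit/Miss/False-Alarm bookkeeping reduces to a 2×2 contingency table once a threshold flux defines an "event". A minimal sketch with synthetic flux values; the threshold and data are illustrative, not DSCS measurements.

```python
def contingency_scores(observed, modeled, threshold):
    """Threshold observed and modeled fluxes, tally the 2x2 contingency
    table, and derive hit rate and false-alarm ratio."""
    hits = misses = false_alarms = correct_nulls = 0
    for obs, mod in zip(observed, modeled):
        obs_event, mod_event = obs >= threshold, mod >= threshold
        if obs_event and mod_event:
            hits += 1
        elif obs_event:
            misses += 1
        elif mod_event:
            false_alarms += 1
        else:
            correct_nulls += 1
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (false_alarms + hits) if false_alarms + hits else float("nan")
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms,
            "correct_nulls": correct_nulls,
            "hit_rate": hit_rate, "false_alarm_ratio": far}

# Synthetic observed vs. modeled fluxes (arbitrary units), threshold = 3.0.
obs = [5.0, 1.0, 8.0, 2.0, 9.0, 0.5]
mod = [6.0, 4.0, 7.0, 1.0, 2.0, 0.4]
print(contingency_scores(obs, mod, threshold=3.0))
```

    Repeating this for several charging-relevant thresholds gives exactly the operationally meaningful skill summary the abstract describes, complementing a plain RMS error.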

  5. Cellular Automata Models Applied to the Study of Landslide Dynamics

    Science.gov (United States)

    Liucci, Luisa; Melelli, Laura; Suteanu, Cristian

    2015-04-01

    Landslides are caused by complex processes controlled by the interaction of numerous factors. Increasing efforts are being made to understand the spatial and temporal evolution of this phenomenon, and the use of remote sensing data is making significant contributions to improving forecasts. This paper studies landslides seen as complex dynamic systems, in order to investigate their potential Self Organized Critical (SOC) behavior, and in particular, scale-invariant aspects of processes governing the spatial development of landslides and their temporal evolution, as well as the mechanisms involved in driving the system and keeping it in a critical state. For this purpose, we build Cellular Automata Models, which have been shown to be capable of reproducing the complexity of real world features using a small number of variables and simple rules, thus allowing for the reduction of the number of input parameters commonly used in the study of processes governing landslide evolution, such as those linked to the geomechanical properties of soils. This type of model has already been successfully applied in studying the dynamics of other natural hazards, such as earthquakes and forest fires. The basic structure of the model is composed of three modules: (i) An initialization module, which defines the topographic surface at time zero as a grid of square cells, each described by an altitude value; the surface is acquired from real Digital Elevation Models (DEMs). (ii) A transition function, which defines the rules used by the model to update the state of the system at each iteration. The rules use a stability criterion based on the slope angle and introduce a variable describing the weakening of the material over time, caused for example by rainfall. The weakening brings some sites of the system out of equilibrium, thus causing the triggering of landslides, which propagate within the system through local interactions between neighboring cells. By using different rates of
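
    The three modules described above can be caricatured in a few dozen lines: a synthetic surface stands in for a real DEM, and the transition function moves material downslope wherever the local slope exceeds a stability threshold that weakens over time. The grid size, critical slope and weakening rate are all hypothetical, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
# (i) Initialization: synthetic sloping DEM (cumulative noise along one axis).
dem = np.cumsum(rng.random((n, n)), axis=0)
total0 = dem.sum()                 # total material, for conservation checks
cell_size = 1.0
critical_slope = 0.8               # hypothetical stability threshold (tangent)
weakening_per_step = 0.01          # hypothetical weakening, e.g. from rainfall

def step(dem, threshold):
    """(ii) Transition: each cell steeper than `threshold` relative to its
    lowest 4-neighbor sheds half the excess height to that neighbor."""
    new = dem.copy()
    moved = 0
    for i in range(n):
        for j in range(n):
            nbrs = [(i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < n and 0 <= j + dj < n]
            li, lj = min(nbrs, key=lambda p: dem[p])
            slope = (dem[i, j] - dem[li, lj]) / cell_size
            if slope > threshold:
                excess = 0.5 * (dem[i, j] - dem[li, lj] - threshold * cell_size)
                new[i, j] -= excess
                new[li, lj] += excess
                moved += 1
    return new, moved

# (iii) Iterate: weakening lowers the threshold, triggering avalanche activity.
events = []
for t in range(50):
    dem, moved = step(dem, critical_slope - weakening_per_step * t)
    events.append(moved)
print("unstable cells per iteration (first 10):", events[:10])
```

    The event-size series produced this way is what one would examine for power-law statistics when probing SOC-like behavior; material is conserved because every transfer subtracts and adds the same amount.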

  6. A numerical study of back-building process in a quasistationary rainband with extreme rainfall over northern Taiwan during 11–12 June 2012

    Directory of Open Access Journals (Sweden)

    C.-C. Wang

    2016-09-01

    Full Text Available During 11–12 June 2012, quasistationary linear mesoscale convective systems (MCSs) developed near northern Taiwan and produced extreme rainfall up to 510 mm and severe flooding in Taipei. In the midst of background forcing of low-level convergence, the back-building (BB) process in these MCSs contributed to the extreme rainfall and thus is investigated using a cloud-resolving model in the case study here. Specifically, as the cold pool mechanism is not responsible for the triggering of new BB cells in this subtropical event during the meiyu season, we seek answers to the question of why the location about 15–30 km upstream from the old cell is still often more favorable for new cell initiation than other places in the MCS. With a horizontal grid size of 1.5 km, the linear MCS and the BB process in this case are successfully reproduced, and the latter is found to be influenced more by thermodynamic and less by dynamic effects, based on a detailed analysis of convective-scale pressure perturbations. During initiation in a background with convective instability and near-surface convergence, new cells are associated with positive (negative) buoyancy below (above) due to latent heating (adiabatic cooling), which represents a gradual destabilization. At the beginning, the new development is close to the old convection, which provides stronger warming below and additional cooling at mid-levels from evaporation of condensates in the downdraft at the rear flank, thus yielding a more rapid destabilization. This enhanced upward decrease in buoyancy at low levels eventually creates an upward perturbation pressure gradient force to drive further development, along with the positive buoyancy itself. After the new cell has gained sufficient strength, the old cell's rear-flank downdraft also acts to separate the new cell to about 20 km upstream. Therefore, the advantages of the location in the BB process can be explained even without the lifting at the

  7. A study of pilot modeling in multi-controller tasks

    Science.gov (United States)

    Whitbeck, R. F.; Knight, J. R.

    1972-01-01

    A modeling approach, which utilizes a matrix of transfer functions to describe the human pilot in multiple input, multiple output control situations, is studied. The approach used was to extend a well established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed to be present in the experimental data. These are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.
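The closing point, that finite record lengths restrict the frequency band over which power spectra can be estimated reliably, can be illustrated with a standard Welch estimator. This is a generic sketch, not the authors' matrix Wiener-Hopf algorithm; the AR(1) signal model, sampling rate and segment length are invented for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(42)
fs = 100.0  # sampling rate in Hz (illustrative)

def simulate_record(n_samples):
    """AR(1) process standing in for a tracking-loop signal."""
    white = rng.standard_normal(n_samples)
    return signal.lfilter([1.0], [1.0, -0.9], white)

def welch_psd(x, nperseg=256):
    return signal.welch(x, fs=fs, nperseg=nperseg)

# A short record yields only a handful of averaged segments, so the
# estimate is noisy; a long record averages many segments and settles
# toward the true (smooth) spectrum.
f, p_short = welch_psd(simulate_record(1_000))
_, p_long = welch_psd(simulate_record(100_000))

# Bin-to-bin jitter of the log-PSD is a crude proxy for estimate
# roughness; the long record should be far smoother.
rough_short = np.std(np.diff(np.log(p_short)))
rough_long = np.std(np.diff(np.log(p_long)))
```

The frequency resolution is fixed by the segment length, but the variance of each spectral estimate shrinks only as the number of averaged segments grows, which is why short experimental records leave a narrow band of trustworthy estimates.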

  8. Modeling eBook acceptance: A study on mathematics teachers

    Science.gov (United States)

    Jalal, Azlin Abd; Ayub, Ahmad Fauzi Mohd; Tarmizi, Rohani Ahmad

    2014-12-01

    The integration and effectiveness of eBook utilization in mathematics teaching and learning rely greatly upon the teachers, hence the need to understand their perceptions and beliefs. The eBook, an individual laptop equipped with digitized textbook software, was provided for each student in line with the concept of one student: one laptop. This study focuses on predicting a model of eBook acceptance among mathematics teachers. Data were collected from 304 mathematics teachers in selected schools using a survey questionnaire. The selection was based on proportionate stratified sampling. Structural Equation Modeling (SEM) was employed; the model was tested and evaluated and was found to have a good fit. The variance explained for the teachers' attitude towards the eBook is approximately 69.1%, with perceived usefulness appearing to be a stronger determinant than perceived ease of use. This study concluded that the attitude of mathematics teachers towards the eBook depends largely on their perception of how useful the eBook is in improving their teaching performance, implying that teachers should be kept updated with the latest mathematical applications and software to use with the eBook to ensure a positive attitude towards using it in class.

  9. Modelling and Simulation of TCPAR for Power System Flow Studies

    Directory of Open Access Journals (Sweden)

    Narimen Lahaçani AOUZELLAG

    2012-12-01

    Full Text Available In this paper, the modelling of a Thyristor Controlled Phase Angle Regulator ‘TCPAR’ for power flow studies and the role of that modelling in the study of Flexible Alternating Current Transmission Systems ‘FACTS’ for power flow control are discussed. In order to investigate the impact of the TCPAR on power systems effectively, it is essential to formulate a correct and appropriate model for it. The TCPAR makes it possible to increase or decrease considerably the power carried by the line in which it is inserted, which makes it an ideal tool for this kind of use. Since the TCPAR does not inject any active power, it offers a good solution with lower consumption. One adverse effect of the TCPAR is the voltage drop it causes in the network, although this is not significant. To overcome this disadvantage, it suffices to introduce a Static VAR Compensator ‘SVC’ into the electrical network, which will compensate for the voltage drop and bring the voltages back to an acceptable level.
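The flow-control role described above is commonly captured by the lossless two-bus transfer equation, in which the TCPAR contributes an additional phase shift to the transmission angle. A minimal sketch with illustrative per-unit values (not the paper's network data):

```python
import math

def line_active_power(v1, v2, x_line, delta, sigma=0.0):
    """Active power over a lossless line with a phase-shifting device.

    P = (V1 * V2 / X) * sin(delta + sigma), all quantities per unit;
    sigma is the phase-shift angle injected by the TCPAR.
    """
    return v1 * v2 / x_line * math.sin(delta + sigma)

# Illustrative values: 1.0 pu bus voltages, X = 0.2 pu, natural angle 10 deg.
delta = math.radians(10.0)
p_base = line_active_power(1.0, 1.0, 0.2, delta)                    # no TCPAR
p_boost = line_active_power(1.0, 1.0, 0.2, delta, math.radians(5.0))
p_relief = line_active_power(1.0, 1.0, 0.2, delta, math.radians(-5.0))
# A positive shift raises the scheduled flow on the line, a negative
# shift lowers it: the control lever discussed in the paper.
```

Because only the angle is shifted, the device redirects active power without injecting any of its own, consistent with the low-consumption point made above.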

  10. Information System Model as a Mobbing Prevention: A Case Study

    Directory of Open Access Journals (Sweden)

    Ersin Karaman

    2014-06-01

    Full Text Available In this study, the aim is to detect mobbing issues in the Faculty of Economics and Administrative Sciences at Atatürk University and to provide an information system model to prevent mobbing and reduce its risk. The study consists of two parts: (i) detecting the mobbing situation via a questionnaire and (ii) designing an information system based on the findings of the first part. The questionnaire was administered to research assistants in the faculty. Five factors were analyzed, and it is concluded that the research assistants have not been exposed to mobbing, except that they perceive mobbing in the task assignment process. Results show that task operational difficulty, task time and task period are the common mobbing issues. In order to develop an information system to cope with these issues, the assignment of exam proctors is addressed. Exam time, instructor location, classroom location and exam duration are considered as decision variables in the developed linear programming (LP) model. The coefficients of these variables and the constraints of the LP model are specified in accordance with the findings. It is recommended that the assignment process for research assistants be conducted using this method to prevent and reduce the risk of mobbing perception in the organization.
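A proctor-assignment LP of the kind described above can be sketched with a generic assignment formulation. The burden coefficients, the cap per assistant, and the problem size below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: burden[i][j] = perceived burden of assigning
# assistant i to exam j, folding exam time, location and duration
# into one score (placeholder numbers, not the survey findings).
burden = np.array([
    [2.0, 4.0, 3.0, 6.0],
    [5.0, 1.0, 4.0, 2.0],
    [3.0, 3.0, 1.0, 5.0],
])
n_assist, n_exam = burden.shape
max_per_assistant = 2  # fairness cap per assistant

c = burden.ravel()  # decision variables x[i, j], flattened row-major

# Equality constraints: every exam gets exactly one proctor.
A_eq = np.zeros((n_exam, n_assist * n_exam))
for j in range(n_exam):
    A_eq[j, j::n_exam] = 1.0
b_eq = np.ones(n_exam)

# Inequality constraints: no assistant exceeds the cap.
A_ub = np.zeros((n_assist, n_assist * n_exam))
for i in range(n_assist):
    A_ub[i, i * n_exam:(i + 1) * n_exam] = 1.0
b_ub = np.full(n_assist, max_per_assistant)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n_assist * n_exam))
assignment = res.x.reshape(n_assist, n_exam).round()
```

The assignment constraint matrix is totally unimodular, so the LP relaxation lands on 0/1 values without needing an integer solver.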

  11. Ports: Definition and study of types, sizes and business models

    Directory of Open Access Journals (Sweden)

    Ivan Roa

    2013-09-01

    Full Text Available Purpose: In the world today there are thousands of port facilities of different types and sizes competing to capture market share of freight carried mainly by sea. This article aims to determine the most common port type and size, in order to find out which business model is applied in that segment and what the legal status of the companies operating such infrastructure is. Design/methodology/approach: To achieve this goal, we conducted research on a representative sample of 800 ports worldwide, which handle 90% of containerized port cargo, and then determined the legal status of the companies that manage them. Findings: The results indicate a dominant port type and size, mostly managed by companies subject to a concession model. Research limitations/implications: In this research, we study only those ports that handle freight (basically containerized), ignoring other activities such as fishing, military, tourism or recreation. Originality/value: This investigation shows that the vast majority of port facilities in the studied segment are governed by a similar corporate model and subject to pressure from the markets, which increasingly demand efficiency and service. Consequently, terminals tend to be concessioned to private operators in a process that might be called privatization, although in the strictest sense of the term this is not entirely accurate, because ownership of the land never ceases to be public

  12. Continuous Evaluation of Fast Processes in Climate Models Using ARM Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhijin [Univ. of California, Los Angeles, CA (United States); Sha, Feng [Univ. of California, Los Angeles, CA (United States); Liu, Yangang [Brookhaven National Lab. (BNL), Upton, NY (United States); Lin, Wuyin [Brookhaven National Lab. (BNL), Upton, NY (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Vogelmann, Andrew [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-02-02

    This five-year award supports the project “Continuous Evaluation of Fast Processes in Climate Models Using ARM Measurements (FASTER)”. The goal of this project is to produce accurate, consistent and comprehensive data sets for initializing both single-column models (SCMs) and cloud-resolving models (CRMs) using data assimilation. A multi-scale three-dimensional variational data assimilation scheme (MS-3DVAR) has been implemented. This MS-3DVAR system is built on top of WRF/GSI. The Community Gridpoint Statistical Interpolation (GSI) system is an operational data assimilation system at the National Centers for Environmental Prediction (NCEP) and has been implemented in the Weather Research and Forecasting (WRF) model. The MS-3DVAR system is further enhanced by the incorporation of a land surface 3DVAR scheme and a comprehensive aerosol 3DVAR scheme. The data assimilation implementation focuses on the ARM SGP region. ARM measurements are assimilated along with other available satellite and radar data. Reanalyses are then generated for a few selected periods of time. This comprehensive data assimilation system has also been employed for other ARM-related applications.
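The analysis step at the core of any 3DVAR scheme minimizes a background-plus-observation misfit; for a linear observation operator it has a closed form. The toy sketch below is not the MS-3DVAR/GSI implementation, and the state, covariances and observation values are invented for illustration.

```python
import numpy as np

def threedvar_analysis(xb, B, y, H, R):
    """Closed-form minimizer of the 3DVAR cost function
    J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx),
    i.e. xa = xb + K (y - H xb) with K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy state: temperature at two grid points; only the first is observed.
xb = np.array([280.0, 285.0])           # background (K)
B = np.array([[1.0, 0.5], [0.5, 1.0]])  # background error covariance
H = np.array([[1.0, 0.0]])              # observation operator
R = np.array([[1.0]])                   # observation error variance
y = np.array([282.0])                   # the observation

xa = threedvar_analysis(xb, B, y, H, R)
# With equal B and R variances the observed point moves halfway toward
# the observation, and the background correlation spreads half of that
# increment to the unobserved point.
```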

  13. Foresight Model of Turkey's Defense Industries' Space Studies until 2040

    Science.gov (United States)

    Yuksel, Nurdan; Cifci, Hasan; Cakir, Serhat

    2016-07-01

    Being advanced in science and technology is an inevitable necessity for having a voice in the globalized world. Therefore, for countries, making policies consistent with their societies' intellectual, economic and political infrastructure, and attributing them to a vision embraced by all parties of society, is crucial for success. The generated policies are supposed to ensure the use of a country's resources in the most effective and fastest way, determine the priorities and needs of society, and set goals and related roadmaps. In this sense, technology foresight studies based on justified forecasting in science and technology play critical roles in the policy-development process. In this article, a foresight model up to 2040 for Turkey's defense industry space studies, a field that has become an important part of community life and the fundamental background of many technologies, is presented. Turkey was a latecomer to space technology studies. Hence, to use its national resources quickly, efficiently and cost-effectively, within national and international collaboration, it should be directed toward pre-set goals. Taking all these factors into consideration, the technology foresight model of Turkey's defense industry space studies is presented in this study. In the model, the present condition of space studies in the world and in Turkey was analyzed; a literature survey and a PEST analysis were made. The PEST analysis will provide the inputs to a SWOT analysis, and a Delphi questionnaire will be used in the study. A two-round Delphi survey will be applied to participants from universities and from public and private organizations conducting space studies in the defense industry. Critical space technologies will be distinguished according to critical technology measures determined by an expert survey; space technology fields and goals will be established according to their importance and feasibility indexes. Finally, for the

  14. Molecular dynamics study of thermal disorder in a bicrystal model

    International Nuclear Information System (INIS)

    Nguyen, T.; Ho, P.S.; Kwok, T.; Yip, S.

    1990-01-01

    This paper studies a (310) θ = 36.86° ⟨001⟩ symmetrical-tilt bicrystal model using an Embedded Atom Method aluminum potential. Based on explicit results obtained from the simulations regarding structural order, energy, and mobility, the authors find that their bicrystal model shows no evidence of pre-melting. Both the surface and the grain-boundary interface exhibit thermal disorder at temperatures below Tm, with complete melting occurring only at, or very near, Tm. Concerning the details of the onset of melting, the data show considerable disordering in the interfacial region starting at about 0.93 Tm. The interfaces exhibit metastable behavior in this temperature range, and the temperature variation of the interfacial thickness suggests that the disordering induced by the interface is a continuous transition, a behavior that has been predicted by a theoretical analysis

  15. An experimental and modeling study of n-octanol combustion

    KAUST Repository

    Cai, Liming

    2015-01-01

    This study presents the first investigation of the combustion chemistry of n-octanol, a long chain alcohol. Ignition delay times were determined experimentally in a high-pressure shock tube, and stable species concentration profiles were obtained in a jet stirred reactor for a range of initial conditions. A detailed kinetic model was developed to describe the oxidation of n-octanol at both low and high temperatures, and the model shows good agreement with the present dataset. The fuel's combustion characteristics are compared to those of n-alkanes and to short chain alcohols to illustrate the effects of the hydroxyl moiety and the carbon chain length on important combustion properties. Finally, the results are discussed in detail. © 2014 The Combustion Institute.
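Shock-tube ignition delays such as those measured here are routinely summarized by an Arrhenius-type correlation, tau = A exp(Ea/(R T)). The sketch below uses placeholder parameters for illustration, not the paper's fitted n-octanol values.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def ignition_delay(T, A=1.0e-4, Ea=1.5e5):
    """Arrhenius-type ignition-delay correlation tau = A * exp(Ea / (R T)).

    A (ms) and Ea (J/mol) are placeholder values, NOT fitted data."""
    return A * math.exp(Ea / (R_GAS * T))

taus = {T: ignition_delay(T) for T in (1000.0, 1100.0, 1250.0)}
# Delay shortens steeply as temperature rises; on a log(tau) versus 1/T
# plot this correlation is a straight line with slope Ea/R.
```

Real alcohol fuels can deviate from a single straight line at low temperatures, which is why the paper's model treats low- and high-temperature chemistry separately.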

  16. The green seaweed Ulva: a model system to study morphogenesis.

    Science.gov (United States)

    Wichard, Thomas; Charrier, Bénédicte; Mineur, Frédéric; Bothwell, John H; Clerck, Olivier De; Coates, Juliet C

    2015-01-01

    Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva, or sea lettuce, species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless, the genus remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal growth, development and morphogenesis as well as mutualistic interactions. The key reasons that Ulva is potentially such a good model system are: (i) patterns of Ulva development can drive ecologically important events, such as the increasing number of green tides observed worldwide as a result of eutrophication of coastal waters, (ii) Ulva growth is symbiotic, with proper development requiring close association with bacterial epiphytes, (iii) Ulva is extremely developmentally plastic, which can shed light on the transition from simple to complex multicellularity, and (iv) Ulva will provide additional information about the evolution of the green lineage.

  17. Functional renormalization group study of the Anderson–Holstein model

    International Nuclear Information System (INIS)

    Laakso, M A; Kennes, D M; Jakobs, S G; Meden, V

    2014-01-01

    We present a comprehensive study of the spectral and transport properties in the Anderson–Holstein model both in and out of equilibrium using the functional renormalization group (fRG). We show how the previously established machinery of Matsubara and Keldysh fRG can be extended to include the local phonon mode. Based on the analysis of spectral properties in equilibrium we identify different regimes depending on the strength of the electron–phonon interaction and the frequency of the phonon mode. We supplement these considerations with analytical results from the Kondo model. We also calculate the nonlinear differential conductance through the Anderson–Holstein quantum dot and find clear signatures of the presence of the phonon mode. (paper)
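For orientation, the Anderson–Holstein model couples a single interacting dot level to a local phonon mode and to lead electrons. In one standard notation (conventions and lead labels vary between papers) the Hamiltonian reads:

```latex
H = \sum_{\sigma} \epsilon_d \, n_{d\sigma}
  + U \, n_{d\uparrow} n_{d\downarrow}
  + \omega_0 \, b^{\dagger} b
  + \lambda \, (b^{\dagger} + b) \sum_{\sigma} n_{d\sigma}
  + \sum_{k\alpha\sigma} \epsilon_{k} \, c^{\dagger}_{k\alpha\sigma} c_{k\alpha\sigma}
  + \sum_{k\alpha\sigma} \bigl( t_{\alpha} \, c^{\dagger}_{k\alpha\sigma} d_{\sigma} + \mathrm{h.c.} \bigr)
```

Here λ is the electron–phonon coupling strength and ω0 the phonon frequency, the two parameters whose interplay defines the regimes identified in the abstract; α labels the leads for the nonequilibrium transport setup.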

  18. Study on the development of geological environmental model

    International Nuclear Information System (INIS)

    Tsujimoto, Keiichi; Shinohara, Yoshinori; Ueta, Shinzo; Saito, Shigeyuki; Kawamura, Yuji; Tomiyama, Shingo; Ohashi, Toyo

    2002-03-01

    The safety performance assessment has conventionally been carried out for a potential geological environment in research and development on geological disposal, but the importance of safety assessment based on repository design and on scenarios considering the concrete geological environment will increase in the future. Research linking the three major fields of geological disposal (investigation of the geological environment, repository design, and safety performance assessment) is a contemporary worldwide research theme. Hence it is important to organize an information flow that contains the series of information processes from data production to analysis in the three fields, and to systematize a knowledge base that unifies the information flow hierarchically. The purpose of this research is to support the development of a unified analysis system for geological disposal. The development technologies for geological environmental models studied for the second progress report by JNC are organized and examined for the purpose of developing a database system, considering its suitability for the deep underground research facility. The geological environment investigation technologies and the methodologies for building geological structure and hydrogeological structure models are organized and systematized. Furthermore, quality assurance methods for building geological environment models are examined. The information used and stored in the unified analysis system is examined in order to design the database structure of the system, based on the organized methodology for building geological environmental models. The graphic processing functions for data stored in the unified database are examined. Furthermore, future research subjects for the development of detailed models for geological disposal are surveyed to organize the safety performance system. (author)

  19. A transgenic Xenopus laevis reporter model to study lymphangiogenesis

    Directory of Open Access Journals (Sweden)

    Annelii Ny

    2013-07-01

    The importance of blood and lymph vessels in the transport of essential fluids, gases, macromolecules and cells in vertebrates warrants optimal insight into the regulatory mechanisms underlying their development. Mouse and zebrafish models of lymphatic development are instrumental for gene discovery and gene characterization but are challenging for certain aspects, e.g. no direct accessibility of embryonic stages, or non-straightforward visualization of early lymphatic sprouting, respectively. We previously demonstrated that the Xenopus tadpole is a valuable model to study the processes of lymphatic development. However, a fluorescent Xenopus reporter directly visualizing the lymph vessels was lacking. Here, we created transgenic Tg(Flk1:eGFP) Xenopus laevis reporter lines expressing green fluorescent protein (GFP) in blood and lymph vessels, driven by the Flk1 (VEGFR-2) promoter. We also established a high-resolution fluorescent dye labeling technique that selectively and persistently visualizes lymphatic endothelial cells, even in conditions of impaired lymph vessel formation or drainage function upon silencing of lymphangiogenic factors. Next, we applied the model to dynamically document blood and lymphatic sprouting and patterning of the initially avascular tadpole fin. Furthermore, quantifiable models of spontaneous or induced lymphatic sprouting into the tadpole fin were developed for dynamic analysis of loss-of-function and gain-of-function phenotypes using pharmacologic or genetic manipulation. Together with angiography and lymphangiography to assess functionality, Tg(Flk1:eGFP) reporter tadpoles readily allowed detailed lymphatic phenotyping of live tadpoles by fluorescence microscopy. The Tg(Flk1:eGFP) tadpoles represent a versatile model for functional lymph/angiogenomics and drug screening.

  20. Enhanced phytoremediation in the vadose zone: Modeling and column studies

    Science.gov (United States)

    Sung, K.; Chang, Y.; Corapcioglu, M.; Cho, C.

    2002-05-01

    Phytoremediation is a plant-based technique with potential for enhancing the remediation of vadose zone soils contaminated by pollutants. The use of deep-rooted plants is an alternative to conventional methodologies. However, when phytoremediation is applied to the vadose zone, it may face some restrictions, since it relies solely on naturally driven energy and mechanisms, in addition to the complexity of the vadose zone. As a more innovative technique than conventional phytoremediation methods, an air-injected phytoremediation technique is introduced to enhance remediation efficiency or to be applied at former soil vapor extraction or bioventing sites. The effects of air injection, vegetation treatment, and combined air injection and vegetation treatments on the removal of hydrocarbons were investigated in column studies simulating the field situation. Both the removal efficiency and the microbial activity were highest in air-injected and vegetated column soils. It is suggested that increased microbial activity, stimulated by plant root exudates, enhanced the biodegradation of hydrocarbon compounds. Air injection provided sufficient opportunity for promoting microbial activity at depths where conditions are anaerobic. Air injection can improve the physicochemical properties of the medium and the contaminant and increase bioavailability, i.e., the plant and microbial accessibility of the contaminant. A mathematical model that can be applied to phytoremediation, especially air-injected phytoremediation, to simulate the fate and transport of a diesel contaminant in the vadose zone is developed. The approach includes a two-phase model of water flow in vegetated and unplanted vadose zone soil. A time-specific root distribution model and a microbial growth model for the rhizosphere of vegetated soil were combined with an unsaturated soil water flow equation as well as with a contaminant transport equation.
    The proposed model showed a satisfactory representation of

  1. Experimental study of mass boiling in a porous medium model

    International Nuclear Information System (INIS)

    Sapin, Paul

    2014-01-01

    This manuscript presents a pore-scale experimental study of convective boiling heat transfer in a two-dimensional porous medium. The purpose is to deepen the understanding of the thermohydraulics of porous media saturated with multiple fluid phases, in order to improve the management of severe accidents in nuclear reactors. Indeed, following a long-lasting failure of the cooling system of a pressurized water reactor (PWR) or a boiling water reactor (BWR), and despite the lowering of the control rods that stops the fission reaction, residual power due to radioactive decay keeps heating the core. This induces water evaporation, which leads to the drying and degradation of the fuel rods. The resulting bed of hot debris, comparable to a porous heat-generating medium, can be cooled down by reflooding, provided a water source is available. This process involves intense boiling mechanisms that must be modelled properly. The experimental study of boiling in porous media presented in this thesis focuses on the influence of different pore-scale boiling regimes on local heat transfer. The experimental setup is a model porous medium made of a bundle of heating cylinders randomly placed between two ceramic plates, one of which is transparent. Each cylinder is a resistance temperature detector (RTD) used to provide temperature measurements as well as heat generation. Thermal measurements and high-speed image acquisition allow the effective heat exchanges to be characterized according to the observed local boiling regimes. This provides precious indications for the type of correlations used in the non-equilibrium macroscopic model employed to simulate the reflooding process. (author)

  2. Modeling CICR in rat ventricular myocytes: voltage clamp studies

    Directory of Open Access Journals (Sweden)

    Palade Philip T

    2010-11-01

    Full Text Available Abstract Background The past thirty-five years have seen an intense search for the molecular mechanisms underlying calcium-induced calcium release (CICR) in cardiac myocytes, with voltage clamp (VC) studies being the leading tool employed. Several VC protocols, including lowering of extracellular calcium to affect the Ca2+ loading of the sarcoplasmic reticulum (SR), and administration of the blockers caffeine and thapsigargin, have been utilized to probe the phenomena surrounding SR Ca2+ release. Here, we develop a deterministic mathematical model of a rat ventricular myocyte under VC conditions, to better understand the mechanisms underlying the response of an isolated cell to calcium perturbation. The motivation for the study was to pinpoint key control variables influencing CICR and to examine the role of CICR in the context of a physiological control system regulating cytosolic Ca2+ concentration ([Ca2+]myo). Methods The cell model consists of an electrical-equivalent model for the cell membrane and a fluid-compartment model describing the flux of ionic species between the extracellular space and several intracellular compartments (cell cytosol, SR, and the dyadic coupling unit (DCU), in which resides the mechanistic basis of CICR). The DCU is described as a controller-actuator mechanism, internally stabilized by negative feedback control of the unit's two diametrically-opposed Ca2+ channels (trigger-channel and release-channel). It releases Ca2+ flux into the cytoplasm and is in turn enclosed within a negative feedback loop involving the SERCA pump, regulating [Ca2+]myo. Results Our model reproduces measured VC data published by several laboratories, and generates graded Ca2+ release at high Ca2+ gain in a homeostatically-controlled environment where [Ca2+]myo is precisely regulated. We elucidate the importance of the DCU elements in this process, particularly the role of the ryanodine receptor in controlling SR Ca2+ release, its activation by trigger Ca2+, and its
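The outer negative-feedback loop described above, with SERCA uptake regulating cytosolic Ca2+, can be caricatured by a one-variable homeostat. This is far simpler than the paper's DCU/membrane model, and every parameter below is an invented placeholder chosen only to make the feedback behavior visible.

```python
from scipy.integrate import solve_ivp

# Minimal homeostat: a constant Ca2+ inflow balanced by cooperative
# SERCA-like uptake (Hill coefficient 2). Placeholder parameters.
J_IN = 0.2    # inflow into the cytosol (uM/s)
VMAX = 1.0    # maximum uptake rate (uM/s)
K_M = 0.3     # half-activation concentration (uM)

def dca_dt(t, ca):
    uptake = VMAX * ca**2 / (K_M**2 + ca**2)
    return J_IN - uptake

# Start below the set point; the feedback pulls [Ca2+] up to it.
sol = solve_ivp(dca_dt, (0.0, 200.0), [0.05], rtol=1e-8, atol=1e-10)
ca_ss = sol.y[0, -1]
# Analytic steady state: VMAX * ca^2 / (K_M^2 + ca^2) = J_IN
# -> ca_ss = K_M * sqrt(J_IN / (VMAX - J_IN)) = 0.15 uM here.
```

Because uptake rises monotonically with concentration, the fixed point is stable: any perturbation of [Ca2+] is counteracted, which is the homeostatic behavior the abstract attributes to the SERCA loop.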

  3. Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model

    International Nuclear Information System (INIS)

    Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S

    1996-01-01

    Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach, based on non-parametric indicator statistics. High-permeability fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel which is currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after tunnel emplacement, and assuming steady-state net infiltration as well as episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a good alternative model capable of describing heterogeneous flow processes in the unsaturated fractured tuff at Yucca Mountain
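The indicator idea, thresholding a spatially correlated random field so that high-permeability cells occupy a prescribed fraction and carry a long-range correlation in a chosen direction, can be sketched as follows. The grid size, fracture fraction and correlation lengths are illustrative placeholders, not the Yucca Mountain statistics or the paper's non-parametric indicator algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)

def indicator_field(shape=(200, 200), frac=0.2, corr_len=(1.0, 15.0)):
    """Binary fracture/matrix indicator with anisotropic correlation.

    Smooth white noise with a different correlation length per axis,
    then threshold at the (1 - frac) quantile so a fraction `frac`
    of cells becomes 'fracture' (True)."""
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, sigma=corr_len)
    cut = np.quantile(field, 1.0 - frac)
    return field > cut

frac_zones = indicator_field()
# The long correlation length along the second axis elongates the
# fracture zones in that direction, mimicking the prescribed-orientation
# correlation structure described in the abstract.
```

Mapping the True cells to a high permeability and the False cells to a low matrix permeability then yields a single-continuum conductivity field for the flow solver.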

  4. DEVELOPMENT OF A COMPETENCE COACHING MODEL FOR PROSPECTIVE MATHEMATICS TEACHERS THROUGH LESSON STUDY

    Directory of Open Access Journals (Sweden)

    Rahmad Bustanul Anwar

    2014-06-01

    Full Text Available Education has a very important role in improving the quality of human resources. Therefore, education is expected to be one of the ways to prepare generations of qualified human resources who have the ability to deal with the progress of time and the development of technology. In order to enhance students' mastery of the competencies required of prospective teachers, this study applies lesson study activities to the lecture process. Lesson study is a coaching model for educators, both teachers and lecturers, built through collaborative learning and assessment in sustainable learning communities. The purpose of this research is to improve the competence of prospective mathematics teachers through lesson study. More specifically, this study aims to describe the efforts made to improve the pedagogical, professional, social and personal competences of prospective mathematics teachers through lesson study. The subjects in this study were 15 students who took the micro teaching course, divided into 3 groups. This research is a qualitative descriptive study aimed at developing the competence of prospective mathematics teachers through lesson study. The lesson study was conducted in combination with action research activities. The result of this research activity is that the implementation of lesson study improved the teaching competences of prospective mathematics teachers in the micro teaching course, namely: pedagogical competence, 80% medium and 20% low; professional competence, 46.7% medium and 53.3% low; personal competence, 100% medium; and social competence, 86.7% medium and 13.3% low.

  5. Mathematical and computational modeling and simulation fundamentals and case studies

    CERN Document Server

    Moeller, Dietmar P F

    2004-01-01

    Mathematical and Computational Modeling and Simulation, a highly multi-disciplinary field with ubiquitous applications in science and engineering, is one of the key enabling technologies of the 21st century. This book introduces the use of mathematical and computational modeling and simulation to develop an understanding of the solution characteristics of a broad class of real-world problems. The relevant basic and advanced methodologies are explained in detail, with special emphasis on ill-defined problems. Some 15 simulation systems are presented at the language and the logical level. Moreover, the reader can accumulate experience by studying a wide variety of case studies. The latter are briefly described within the book, but their full versions as well as some simulation software demos are available on the Web. The book can be used for university courses of different levels as well as for self-study. Advanced sections are marked and can be skipped in a first reading or in undergraduate courses...

  6. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly embedded, the behavior of these structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin-layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced-vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes

  7. Applications of the FIV Model to Study HIV Pathogenesis

    Directory of Open Access Journals (Sweden)

    Craig Miller

    2018-04-01

    Full Text Available Feline immunodeficiency virus (FIV is a naturally-occurring retrovirus that infects domestic and non-domestic feline species, producing progressive immune depletion that results in an acquired immunodeficiency syndrome (AIDS. Much has been learned about FIV since it was first described in 1987, particularly in regard to its application as a model to study the closely related lentivirus, human immunodeficiency virus (HIV. In particular, FIV and HIV share remarkable structure and sequence organization, utilize parallel modes of receptor-mediated entry, and result in a similar spectrum of immunodeficiency-related diseases due to analogous modes of immune dysfunction. This review summarizes current knowledge of FIV infection kinetics and the mechanisms of immune dysfunction in relation to opportunistic disease, specifically in regard to studying HIV pathogenesis. Furthermore, we present data that highlight changes in the oral microbiota and oral immune system during FIV infection, and outline the potential for the feline model of oral AIDS manifestations to elucidate pathogenic mechanisms of HIV-induced oral disease. Finally, we discuss advances in molecular biology, vaccine development, neurologic dysfunction, and the ability to apply pharmacologic interventions and sophisticated imaging technologies to study experimental and naturally occurring FIV, which provide an excellent, but often overlooked, resource for advancing therapies and the management of HIV/AIDS.

  8. Parameter study on dynamic behavior of ITER tokamak scaled model

    International Nuclear Information System (INIS)

    Nakahira, Masataka; Takeda, Nobukazu

    2004-12-01

    This report summarizes a study on the dynamic behavior of the ITER tokamak scaled model through a parametric analysis of base-plate thickness, aimed at finding a reasonable design that gives sufficient rigidity without adversely affecting the dynamic behavior. For this purpose, modal analyses were performed with the base-plate thickness changed from the present design of 55 mm to 100 mm, 150 mm, and 190 mm. Using these results, a modification plan for the plate thickness was studied. It was found that a thickness of 150 mm brings the first natural frequency to about 90% of that of the ideal rigid case. A modification study was then performed to find an adequate plate thickness. Considering material availability, transportation, and weldability, a thickness of 300 mm was found to be the practical limit. Analysis of the 300 mm case showed the first natural frequency reaching 97% of that of the ideal rigid case; however, the required bolt length was too long and introduced an additional twisting mode. As a result, it was concluded that a base-plate thickness of 150 mm or 190 mm gives sufficient rigidity for the dynamic behavior of the scaled model. (author)
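
    The trend reported above, diminishing returns in the first natural frequency as the base plate thickens, can be reproduced with a toy two-degree-of-freedom modal analysis. All numbers below are hypothetical stand-ins, not values from the ITER report; the plate stiffness is simply assumed to scale with thickness cubed:

```python
import math

def two_dof_frequencies(m1, m2, k1, k2):
    """Natural frequencies [Hz] of a 2-DOF chain: mass m1 on base
    spring k1, mass m2 connected to m1 through spring k2."""
    # det(K - w^2 M) = 0 reduces to a quadratic in L = w^2:
    # m1*m2*L^2 - (m1*k2 + m2*(k1 + k2))*L + k1*k2 = 0
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    lams = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(lam) / (2 * math.pi) for lam in lams]

# Hypothetical values: k1 stands for the base plate (bending
# stiffness grows roughly as thickness cubed), k2 and m2 for the
# tokamak structure above it.
m1, m2 = 1.0e6, 4.0e6              # kg
k2 = 1.0e10                        # N/m
f_rigid = math.sqrt(k2 / m2) / (2 * math.pi)   # ideal rigid-base case

ratios = []
for t_mm in (55, 150, 300):
    k1 = 3.0e9 * (t_mm / 55.0) ** 3    # plate stiffness ~ t^3
    f1 = two_dof_frequencies(m1, m2, k1, k2)[0]
    ratios.append(f1 / f_rigid)
    print(f"t = {t_mm:3d} mm: f1/f_rigid = {ratios[-1]:.2f}")
```

    Because stiffness grows as the cube of thickness while the rigid-base frequency is an asymptote, each extra millimeter of plate buys less improvement, mirroring the 90% vs. 97% figures in the report.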

  9. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). An MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations, and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, about 400 times that of the conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL-height parameterization to represent the strong inversion above the PBL.
The goal of this study is to compare the simulation of the climatology from these three
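
    The double-Gaussian closure mentioned above can be sketched as follows: the SGS cloud fraction is the probability that the saturation excess is positive under a two-mode Gaussian PDF. This is a generic illustration, not the actual IPHOC code, and the mode parameters below are invented:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cloud_fraction(a, mu1, sig1, mu2, sig2):
    """Cloud fraction from a double-Gaussian PDF of the saturation
    excess s = qt - qs: the probability that s > 0, summed over the
    two Gaussian modes with weights a and (1 - a)."""
    return a * norm_cdf(mu1 / sig1) + (1.0 - a) * norm_cdf(mu2 / sig2)

# Illustrative numbers (g/kg), not from the paper: one nearly
# saturated mode and one dry mode, as in broken boundary-layer cloud.
cf = cloud_fraction(a=0.3, mu1=0.05, sig1=0.2, mu2=-0.4, sig2=0.15)
print(f"diagnosed cloud fraction: {cf:.3f}")
```

    The same PDF also yields SGS condensate and buoyancy-flux moments by analytic integration, which is what makes the double-Gaussian family attractive for higher-order closures.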

  10. Characterization of a Novel Murine Model to Study Zika Virus.

    Science.gov (United States)

    Rossi, Shannan L; Tesh, Robert B; Azar, Sasha R; Muruato, Antonio E; Hanley, Kathryn A; Auguste, Albert J; Langsjoen, Rose M; Paessler, Slobodan; Vasilakis, Nikos; Weaver, Scott C

    2016-06-01

    The mosquito-borne Zika virus (ZIKV) is responsible for an explosive ongoing outbreak of febrile illness across the Americas. ZIKV was previously thought to cause only a mild, flu-like illness, but during the current outbreak, an association with Guillain-Barré syndrome and microcephaly in neonates has been detected. A previous study showed that ZIKV requires murine adaptation to generate reproducible murine disease. In our study, a low-passage Cambodian isolate caused disease and mortality in mice lacking the interferon (IFN) alpha receptor (A129 mice) in an age-dependent manner, but not in similarly aged immunocompetent mice. In A129 mice, viremia peaked at ∼10⁷ plaque-forming units/mL by day 2 postinfection (PI) and reached high titers in the spleen by day 1. ZIKV was detected in the brain on day 3 PI and caused signs of neurologic disease, including tremors, by day 6. Robust replication was also noted in the testis. In this model, all mice infected at the youngest age (3 weeks) succumbed to illness by day 7 PI. Older mice (11 weeks) showed signs of illness, viremia, and weight loss but recovered starting on day 8. In addition, AG129 mice, which lack both type I and II IFN responses, supported similar infection kinetics to A129 mice, but with exaggerated disease signs. This characterization of an Asian lineage ZIKV strain in a murine model, and one of the few studies reporting a model of Zika disease and demonstrating age-dependent morbidity and mortality, could provide a platform for testing the efficacy of antivirals and vaccines. © The American Society of Tropical Medicine and Hygiene.

  11. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    Science.gov (United States)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for the model to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method to screen out insensitive parameters, followed by MARS-based Sobol' sensitivity indices quantifying each parameter's contribution to the response variance through its first-order and higher-order effects. Pareto-optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search for the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which limits the dimensionality of the calibration problem and enhances the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration provided satisfactory solutions for reproducing the observed streamflow of all watersheds. The final optimal solutions showed significant improvement over the default solutions, with about a 65-90% reduction in 1-NSE and a 60-95% reduction in |RB|. 
The validation exercise indicated a large improvement in model performance with about 40
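
    The skill metrics reported above, 1-NSE and |RB|, are straightforward to compute. The sketch below uses invented streamflow series, not CREST output, to show how calibration reduces both:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def relative_bias(obs, sim):
    """Relative bias RB: total simulated minus observed volume,
    normalized by the observed volume."""
    return (sum(sim) - sum(obs)) / sum(obs)

# Hypothetical daily streamflow (m^3/s) before and after calibration.
obs            = [10.0, 14.0, 30.0, 22.0, 12.0, 9.0]
sim_default    = [6.0, 10.0, 40.0, 30.0, 18.0, 14.0]
sim_calibrated = [9.5, 13.0, 31.5, 23.0, 12.5, 9.5]

for name, sim in [("default", sim_default), ("calibrated", sim_calibrated)]:
    print(f"{name}: 1-NSE = {1 - nse(obs, sim):.3f}, "
          f"|RB| = {abs(relative_bias(obs, sim)):.3f}")
```

    A reduction in 1-NSE means the simulated hydrograph shape tracks the observations more closely, while a reduction in |RB| means the total water balance is better preserved; a good calibration must improve both.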

  12. Pulse radiolysis in model studies toward radiation processing

    Energy Technology Data Exchange (ETDEWEB)

    Sonntag, C Von; Bothe, E; Ulanski, P; Deeble, D J [Max-Planck-Institut fuer Strahlenchemie, Muelheim an der Ruhr (Germany)

    1995-10-01

    Using the pulse radiolysis technique, the OH-radical-induced reactions of poly(vinyl alcohol) (PVAL), poly(acrylic acid) (PAA), poly(methacrylic acid) (PMA), and hyaluronic acid have been investigated in dilute aqueous solution. The reactions of the free-radical intermediates were followed by UV spectroscopy and low-angle laser light scattering; the scission of the charged polymers was also monitored by conductometry. For more detailed product studies, model systems such as 2,4-dihydroxypentane (for PVAL) and 2,4-dimethylglutaric acid (for PAA) were also investigated. (author).

  13. Studies on 14C labelled chlorpyrifos in model marine ecosystem

    International Nuclear Information System (INIS)

    Pandit, G.G.; Mohan Rao, A.M.; Kale, S.P.; Murthy, N.B.K.; Raghu, K.

    1997-01-01

    Chlorpyrifos is a widely used organophosphorus insecticide in tropical countries. Experiments were conducted with ¹⁴C-labelled chlorpyrifos to study the distribution of this compound in a model marine ecosystem. Less than 50 per cent of the applied activity remained in the water after 24 h. The major portion of the applied chlorpyrifos (about 4.2% residue per g) accumulated in the clams, with the sediment containing a maximum of 5 to 6 per cent of the applied compound. No degradation of chlorpyrifos was observed in water or sediment samples. However, metabolic products were formed in the clams. (author). 4 refs., 3 tabs

  14. Polarized Airway Epithelial Models for Immunological Co-Culture Studies

    DEFF Research Database (Denmark)

    Papazian, Dick; Würtzen, Peter A; Hansen, Søren Werner Karlskov

    2016-01-01

    Epithelial cells line all cavities and surfaces throughout the body and play a substantial role in maintaining tissue homeostasis. Asthma and other atopic diseases are increasing worldwide and allergic disorders are hypothesized to be a consequence of a combination of dysregulation...... of the epithelial response towards environmental antigens and genetic susceptibility, resulting in inflammation and T cell-derived immune responses. In vivo animal models have long been used to study immune homeostasis of the airways but are limited by species restriction and lack of exposure to a natural...

  15. Deschutes estuary feasibility study: hydrodynamics and sediment transport modeling

    Science.gov (United States)

    George, Douglas A.; Gelfenbaum, Guy; Lesser, Giles; Stevens, Andrew W.

    2006-01-01

    Continual sediment accumulation in Capitol Lake since the damming of the Deschutes River in 1951 has altered the initial morphology of the basin. As part of the Deschutes River Estuary Feasibility Study (DEFS), the United States Geological Survey (USGS) was tasked with modeling how tidal and storm processes would influence the river, lake, and lower Budd Inlet should estuary restoration occur. Understanding these mechanisms will assist in developing a scientifically sound assessment of the feasibility of restoring the estuary. The goals of the DEFS are as follows: increase understanding of the estuary alternative to the same level as that of managing the lake environment.

  16. Decerebrate mouse model for studies of the spinal cord circuits

    DEFF Research Database (Denmark)

    Meehan, Claire Francesca; Mayr, Kyle A; Manuel, Marin

    2017-01-01

    The adult decerebrate mouse model (a mouse with the cerebrum removed) enables the study of sensory-motor integration and motor output from the spinal cord for several hours without compromising these functions with anesthesia. For example, the decerebrate mouse is ideal for examining locomotor be......, which is ample time to perform most short-term procedures. These protocols can be modified for those interested in cardiovascular or respiratory function in addition to motor function and can be performed by trainees with some previous experience in animal surgery....

  17. Model-independent study of light cone current commutators

    International Nuclear Information System (INIS)

    Gautam, S.R.; Dicus, D.A.

    1974-01-01

    An attempt is made to extract information on the nature of light cone current commutators (L.C.C.) in a model-independent manner. Using simple assumptions on the validity of the DGS representation for the structure functions of deep inelastic scattering, and using the Bjorken-Johnston-Low theorem, it is shown that in principle the L.C.C. may be constructed from the experimental electron-proton scattering data. On the other hand, the scaling behavior of the structure functions is utilized to study the consistency of a vanishing value for various L.C.C. under mild assumptions on the behavior of the DGS spectral moments. (U.S.)

  18. Shell-model Monte Carlo studies of nuclei

    International Nuclear Information System (INIS)

    Dean, D.J.

    1997-01-01

    The pair content and structure of nuclei near N = Z are described in the framework of shell-model Monte Carlo (SMMC) calculations. Results include the enhancement of J=0, T=1 proton-neutron pairing in N=Z nuclei, and the marked difference in thermal properties between even-even and odd-odd N=Z nuclei. Additionally, a study of the rotational properties of the T=1 (ground state) and T=0 band mixing seen in ⁷⁴Rb is presented

  19. Modeled Urea Distribution Volume and Mortality in the HEMO Study

    Science.gov (United States)

    Greene, Tom; Depner, Thomas A.; Levin, Nathan W.; Chertow, Glenn M.

    2011-01-01

    Summary Background and objectives In the Hemodialysis (HEMO) Study, observed small decreases in achieved equilibrated Kt/Vurea were noncausally associated with markedly increased mortality. Here we examine the association of mortality with modeled volume (Vm), the denominator of equilibrated Kt/Vurea. Design, setting, participants, & measurements Parameters derived from modeled urea kinetics (including Vm) and blood pressure (BP) were obtained monthly in 1846 patients. Case mix–adjusted time-dependent Cox regressions were used to relate the relative mortality hazard at each time point to Vm and to the change in Vm over the preceding 6 months. Mixed effects models were used to relate Vm to changes in intradialytic systolic BP and to other factors at each follow-up visit. Results Mortality was associated with Vm and change in Vm over the preceding 6 months. The association between change in Vm and mortality was independent of vascular access complications. In contrast, mortality was inversely associated with V calculated from anthropometric measurements (Vant). In case mix–adjusted analysis using Vm as a time-dependent covariate, the association of mortality with Vm strengthened after statistical adjustment for Vant. After adjustment for Vant, higher Vm was associated with slightly smaller reductions in intradialytic systolic BP and with risk factors for mortality including recent hospitalization and reductions in serum albumin concentration and body weight. Conclusions An increase in Vm is a marker for illness and mortality risk in hemodialysis patients. PMID:21511841
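
    As background to the kinetic quantities above, a hedged sketch of single-pool urea kinetics: the widely used second-generation Daugirdas formula gives spKt/V from the pre/post urea ratio, and a modeled volume then follows as the denominator of Kt/V. The clearance and patient numbers below are hypothetical, and the HEMO Study's actual equilibrated (two-pool) modeling is more involved:

```python
import math

def sp_ktv(r_post_pre, t_hours, uf_liters, weight_kg):
    """Single-pool Kt/V via the second-generation Daugirdas formula
    (R = post/pre urea ratio, UF = ultrafiltration volume)."""
    return (-math.log(r_post_pre - 0.008 * t_hours)
            + (4.0 - 3.5 * r_post_pre) * uf_liters / weight_kg)

# Hypothetical session: pre-BUN 80, post-BUN 25 mg/dL, 4 h, 2 L UF, 70 kg.
R = 25.0 / 80.0
ktv = sp_ktv(R, t_hours=4.0, uf_liters=2.0, weight_kg=70.0)

# With an assumed effective dialyzer urea clearance K and the session
# length known, the modeled urea distribution volume V is Kt/(Kt/V):
K = 230.0                        # mL/min, assumed clearance
Vm = K * 4.0 * 60 / ktv / 1000   # liters
print(f"spKt/V = {ktv:.2f}, Vm = {Vm:.1f} L")
```

    Because Vm is inferred rather than measured, illness that alters urea generation or clearance shifts Vm even when true body water is unchanged, which is consistent with the abstract's reading of Vm as a marker of illness.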

  20. Modeling AEC—New Approaches to Study Rare Genetic Disorders

    Science.gov (United States)

    Koch, Peter J.; Dinella, Jason; Fete, Mary; Siegfried, Elaine C.; Koster, Maranke I.

    2015-01-01

    Ankyloblepharon-ectodermal defects-cleft lip/palate (AEC) syndrome is a rare monogenetic disorder that is characterized by severe abnormalities in ectoderm-derived tissues, such as skin and its appendages. A major cause of morbidity among affected infants is severe and chronic skin erosions. Currently, supportive care is the only available treatment option for AEC patients. Mutations in TP63, a gene that encodes key regulators of epidermal development, are the genetic cause of AEC. However, it is currently not clear how mutations in TP63 lead to the various defects seen in the patients’ skin. In this review, we will discuss current knowledge of the AEC disease mechanism obtained by studying patient tissue and genetically engineered mouse models designed to mimic aspects of the disorder. We will then focus on new approaches to model AEC, including the use of patient cells and stem cell technology to replicate the disease in a human tissue culture model. The latter approach will advance our understanding of the disease and will allow for the development of new in vitro systems to identify drugs for the treatment of skin erosions in AEC patients. Further, the use of stem cell technology, in particular induced pluripotent stem cells (iPSC), will enable researchers to develop new therapeutic approaches to treat the disease using the patient’s own cells (autologous keratinocyte transplantation) after correction of the disease-causing mutations. PMID:24665072

  1. Experimental study and modelization of a propane storage tank depressurization

    International Nuclear Information System (INIS)

    Veneau, Tania

    1995-01-01

    The risks associated with the fast depressurization of propane storage tanks reveal the importance of determining the 'source term'. This term is directly linked, among other factors, to the characteristics of the jet developed downstream of the breach. The first aim of this work was to provide an original data bank of drop velocity and diameter distributions in a propane jet. For this purpose, a phase Doppler anemometer has been implemented on an experimental set-up. Propane blowdowns have been performed with different breach sizes and several initial pressures in the storage tank. Drop diameter and velocity distributions have been investigated at different locations in the jet zone. These measurements exhibited the fragmentation and vaporisation trends in the jet. The second aim of this work concerned the 'source term' itself. It required studying the coupling between the fluid behaviour inside the tank and the flow through the breach. The model took into account the phase exchange when flashing occurred in the tank. The flow at the breach was described with a homogeneous relaxation model. This coupled modelization has been successfully and exhaustively validated. Its originality lies in its application to propane flows. (author) [fr

  2. Numerical study of similarity in prototype and model pumped turbines

    International Nuclear Information System (INIS)

    Li, Z J; Wang, Z W; Bi, H L

    2014-01-01

    A similarity study of prototype and model pumped turbines is performed by numerical simulation, and a partial-discharge case is analysed in detail. It is found that in the RSI (rotor-stator interaction) region, where the flow is convectively accelerated with minor flow separation, a high level of similarity in flow patterns and pressure fluctuation appears, with the relative pressure-fluctuation amplitude of the model turbine slightly higher than that of the prototype turbine. In the runner, where the flow is convectively accelerated with severe separation, similarity fades substantially due to the different topology of flow separation and vortex formation brought about by the distinct Reynolds numbers of the two turbines. In the draft tube, where the flow is diffusively decelerated, similarity weakens owing to the different vortex rope formation, again affected by the Reynolds number. It is noted that the pressure-fluctuation amplitude and characteristic frequency of the model turbine are larger than those of the prototype turbine. The differences in pressure-fluctuation characteristics are discussed theoretically through the dimensionless Navier-Stokes equation. The above conclusions are all based on simulations that do not account for penstock response and resonance.
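
    The Reynolds-number mismatch invoked above follows directly from model-test similarity: if a geometrically scaled model runs at the same head, matching the unit speed n·D/√H leaves the runner Reynolds number smaller by the geometric scale factor. A sketch with invented machine dimensions:

```python
import math

NU_WATER = 1.0e-6  # kinematic viscosity of water, m^2/s

def runner_reynolds(n_rpm, d_m):
    """Reynolds number based on runner peripheral speed u = pi*n*D."""
    u = math.pi * (n_rpm / 60.0) * d_m
    return u * d_m / NU_WATER

# Hypothetical prototype and a 1:10 geometric-scale model tested at
# the same head; the unit speed n*D/sqrt(H) is matched by running
# the model 10x faster, so the peripheral speed is the same.
d_proto, n_proto = 5.0, 300.0    # m, rpm
d_model, n_model = 0.5, 3000.0   # m, rpm

re_p = runner_reynolds(n_proto, d_proto)
re_m = runner_reynolds(n_model, d_model)
print(f"Re(prototype)/Re(model) = {re_p / re_m:.0f}")
```

    An order-of-magnitude lower Reynolds number in the model changes where and how the boundary layer separates, which is why similarity degrades in the separated runner and draft-tube flows while holding well in the attached RSI region.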

  3. a Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, interest in physical geography and human economics is growing in every field, and there is a surging worldwide demand for small-scale world maps in large formats. Further study of automated mapping technology, especially the production of small-scale world maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved map-generalization model in which the map and the data are separated, so that geographic data can be separated from mapping data; its main components are a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols of the geographic information are configured at all scale levels. The automatic map-making knowledge engine comprises 97 types, 1086 subtypes, 21845 basic algorithms, and over 2500 relevant functional modules. To evaluate the accuracy and visual effect of the model for topographic and thematic maps, we take small-scale world-map generalization as an example. After the generalization process, combining and simplifying the scattered islands makes the map clearer at 1:2.1 billion scale, and the map features become more complete and accurate. The model not only significantly enhances map generalization at various scales but also achieves integration among map-making at those scales, suggesting that it can serve as a reference for cartographic generalization across scales.

  4. The accident consequence model of the German safety study

    International Nuclear Information System (INIS)

    Huebschmann, W.

    1977-01-01

    The accident consequence model essentially describes a) the diffusion in the atmosphere, and the deposition on the soil, of radioactive material released from the reactor into the atmosphere, and b) the radiation exposure and health consequences for the persons affected. It is used to calculate c) the number of persons suffering acute or late damage, taking into account possible countermeasures such as relocation or evacuation, and d) the total risk to the population from the various types of accident. The model and its underlying parameters and assumptions are described. The bone marrow dose distribution is shown for the case of late overpressure containment failure, which is discussed in the paper of Heuser/Kotthoff, combined with four typical weather conditions. The probability distribution functions for acute mortality, late incidence of cancer, and genetic damage are evaluated, assuming a characteristic population distribution. The aim of these calculations is, first, to present some results of the consequence model as an example and, second, to identify problems that may need to be evaluated in more detail in a second phase of the study. (orig.) [de
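
    Atmospheric diffusion as in item a) is commonly treated with a Gaussian plume formulation; a minimal ground-reflecting version is sketched below. The release and dispersion parameters are invented for illustration and do not come from the German study:

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, h):
    """Ground-reflecting Gaussian plume: air concentration at
    crosswind offset y and height z, for release rate Q, wind speed
    u, effective release height h, and dispersion widths sigma_y/z
    (which grow with downwind distance and stability class)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground mirror term
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical numbers: 1 Bq/s release, 5 m/s wind, 50 m stack,
# sigma values for some downwind distance.
c_axis = plume_concentration(Q=1.0, u=5.0, sigma_y=80.0, sigma_z=40.0,
                             y=0.0, z=0.0, h=50.0)
c_off  = plume_concentration(Q=1.0, u=5.0, sigma_y=80.0, sigma_z=40.0,
                             y=100.0, z=0.0, h=50.0)
print(f"on-axis: {c_axis:.2e}, 100 m off-axis: {c_off:.2e} Bq/m^3")
```

    Ground-level concentration fields like this, combined with deposition, dose factors, and the population distribution, are what feed the dose and risk distributions described in the abstract.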

  5. A study of doppler waveform using pulsatile flow model

    International Nuclear Information System (INIS)

    Chung, Hye Won; Chung, Myung Jin; Park, Jae Hyung; Chung, Jin Wook; Lee, Dong Hyuk; Min, Byoung Goo

    1997-01-01

    A pulsatile flow model was constructed using an artificial heart pump and a stenosis to demonstrate the triphasic Doppler waveform under conditions simulating those in vivo, and to evaluate the relationship between the Doppler waveform and vascular compliance. The flow model was constructed using a flowmeter, rubber tube, glass tube with a stenosis, and an artificial heart pump. The Doppler study was carried out at the prestenotic, poststenotic, and distal segments; compliance was changed by changing the length of the rubber tube. With increasing proximal compliance, the Doppler waveforms showed decreasing peak velocity of the first phase and slightly delayed acceleration time, but the waveform itself did not change significantly. Distal compliance influenced the second phase and was important for the formation of pulsus tardus and parvus, which did not develop without poststenotic vascular compliance. The peak velocity of the first phase was inversely proportional to proximal compliance, and those of the second and third phases were directly proportional to distal compliance. With this pulsatile flow model we were able to explain the relationship between vascular compliance and the Doppler waveform, and to better understand the formation of pulsus tardus and parvus.
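
    The role of distal compliance can be mimicked by passing a sharp systolic pulse through a first-order low-pass filter whose time constant stands in for the compliance of the post-stenotic bed: the transmitted peak comes out both smaller (parvus) and later (tardus). This is an illustrative sketch, not the experimental model, and all parameters are invented:

```python
import math

def simulate(compliance_tau, dt=0.001, t_end=1.0):
    """Pass an idealized sharp systolic velocity pulse through a
    first-order low-pass filter; the time constant stands in for the
    compliance of the post-stenotic vascular bed."""
    n = int(t_end / dt)
    inp, out = [], []
    v = 0.0
    for i in range(n):
        t = i * dt
        # crude input: a sharp 80 ms systolic peak, then zero
        u = math.sin(math.pi * t / 0.08) if t < 0.08 else 0.0
        v += dt / compliance_tau * (u - v)   # dv/dt = (u - v)/tau
        inp.append(u)
        out.append(v)
    return inp, out

inp, out = simulate(compliance_tau=0.15)
peak_in, peak_out = max(inp), max(out)
t_peak_in = inp.index(peak_in) * 0.001
t_peak_out = out.index(peak_out) * 0.001
print(f"parvus: peak {peak_in:.2f} -> {peak_out:.2f}; "
      f"tardus: peak time {t_peak_in:.3f} s -> {t_peak_out:.3f} s")
```

    A larger time constant (more distal compliance) blunts and delays the peak further, consistent with the abstract's observation that pulsus tardus and parvus require poststenotic compliance to develop at all.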

  6. Study on the development of geological environmental model. 2

    International Nuclear Information System (INIS)

    Tsujimoto, Keiichi; Shinohara, Yoshinori; Saito, Shigeyuki; Ueta, Shinzo; Ohashi, Toyo; Sasaki, Ryouichi; Tomiyama, Shingo

    2003-02-01

    In the conventional research and development of geological disposal, safety performance assessment has been carried out for an imaginary geological environment, but the importance of safety assessment based on repository designs and scenarios that consider a concrete geological environment will increase in the future. Research linking the three major fields of geological disposal (investigation of the geological environment, repository design, and safety performance assessment) is a contemporary worldwide research theme. It is therefore important to organize an information flow that covers the series of processes from data production to analysis in the three fields, and to systematize a knowledge base that unifies this information flow hierarchically. The information flow for the geological environment model generation process is examined and modified based on the products of 'Study on the development of geological environment model' examined in 2002. The work flow diagrams for geological structure and hydrology are modified, and those for geochemistry and rock properties are examined from scratch. Furthermore, a database design was examined to build a geological environment database (knowledge base) based on the results of the systematization of the environment model generation technology. The geoclinal