WorldWideScience

Sample records for giss modele study

  1. Eocene climate and Arctic paleobathymetry: A tectonic sensitivity study using GISS ModelE-R

    Science.gov (United States)

    Roberts, C. D.; Legrande, A. N.; Tripati, A. K.

    2009-12-01

The early Paleogene (65-45 million years ago, Ma) was a ‘greenhouse’ interval with global temperatures warmer than at any other time in the last 65 Ma. This period was characterized by high levels of CO2, warm high latitudes, warm surface and deep oceans, and an intensified hydrological cycle. Sediments from the Arctic suggest that the Eocene surface Arctic Ocean was warm, brackish, and episodically enabled the freshwater fern Azolla to bloom. The precise mechanisms responsible for the development of these conditions remain uncertain. We present equilibrium climate conditions derived from a fully coupled, water-isotope-enabled general circulation model (GISS ModelE-R) configured for the early Eocene. We also present model-data comparison plots for key climatic variables (SST and δ18O) and analyses of the leading modes of variability in the tropical Pacific and North Atlantic regions. Our tectonic sensitivity study indicates that Northern Hemisphere climate would have been very sensitive to the degree of oceanic exchange through the seaways connecting the Arctic to the Atlantic and Tethys. By restricting these seaways, we simulate freshening of the surface Arctic Ocean to ~6 psu and warming of sea-surface temperatures by 2°C in the North Atlantic and 5-10°C in the Labrador Sea. Our results may help explain the occurrence of low-salinity-tolerant taxa in the Arctic Ocean during the Eocene and provide a mechanism for enhanced warmth in the northwestern Atlantic. We also suggest that the formation of a volcanic land bridge between Greenland and Europe could have caused increased ocean convection and warming of intermediate waters in the Atlantic. If true, this result is consistent with the theory that bathymetry changes may have caused thermal destabilisation of methane clathrates in the Atlantic.

  2. Climate implications of carbonaceous aerosols: An aerosol microphysical study using the GISS/MATRIX climate model

    International Nuclear Information System (INIS)

    Bauer, Susanne E.; Menon, Surabi; Koch, Dorothy; Bond, Tami; Tsigaridis, Kostas

    2010-01-01

Recently, attention has been drawn towards black carbon aerosols as a likely short-term climate warming mitigation candidate. However, the global and regional impacts of the direct, cloud-indirect and semi-direct forcing effects are highly uncertain, due to the complex nature of aerosol evolution and its climate interactions. Black carbon is released directly into the atmosphere as particles, but then interacts with other gases and particles through condensation and coagulation processes, leading to further aerosol growth, aging and internal mixing. A detailed aerosol microphysical scheme, MATRIX, embedded within the global GISS modelE includes the above processes that determine the lifecycle and climate impact of aerosols. This study presents a quantitative assessment of the impact of microphysical processes involving black carbon, such as emission size distributions and optical properties, on aerosol cloud activation and radiative forcing. Our best estimate for the net direct and indirect aerosol radiative forcing change between 1750 and 2000 is -0.56 W/m2. However, the direct and indirect aerosol effects are very sensitive to the black and organic carbon size distribution and the consequent mixing state. The net radiative forcing change can vary between -0.32 and -0.75 W/m2 depending on these carbonaceous particle properties. Assuming that sulfates, nitrates and secondary organics form a coating shell around a black carbon core, rather than forming uniformly mixed particles, changes the overall net radiative forcing from a negative to a positive number. Black carbon mitigation scenarios generally showed a benefit when mainly black carbon sources, such as diesel emissions, are reduced; reducing sources of both organic and black carbon, such as bio-fuels, does not lead to reduced warming.

  3. Dangerous human-made interference with climate: a GISS modelE study

    Directory of Open Access Journals (Sweden)

    J. Hansen

    2007-01-01

We investigate the issue of "dangerous human-made interference with climate" using simulations with GISS modelE driven by measured or estimated forcings for 1880–2003 and extended to 2100 for IPCC greenhouse gas scenarios as well as the "alternative" scenario of Hansen and Sato (2004). Identification of "dangerous" effects is partly subjective, but we find evidence that added global warming of more than 1°C above the level in 2000 has effects that may be highly disruptive. The alternative scenario, with peak added forcing ~1.5 W/m2 in 2100, keeps further global warming under 1°C if climate sensitivity is ~3°C or less for doubled CO2. The alternative scenario keeps mean regional seasonal warming within 2σ (standard deviations) of 20th century variability, but other scenarios yield regional changes of 5–10σ, i.e. mean conditions outside the range of local experience. We conclude that a CO2 level exceeding about 450 ppm is "dangerous", but reduction of non-CO2 forcings can provide modest relief on the CO2 constraint. We discuss three specific sub-global topics: Arctic climate change, tropical storm intensification, and ice sheet stability. We suggest that Arctic climate change has been driven as much by pollutants (O3, its precursor CH4, and soot) as by CO2, offering hope that dual efforts to reduce pollutants and slow CO2 growth could minimize Arctic change. Simulated recent ocean warming in the region of Atlantic hurricane formation is comparable to observations, suggesting that greenhouse gases (GHGs) may have contributed to a trend toward greater hurricane intensities. Increasing GHGs cause significant warming in our model in submarine regions of ice shelves and shallow methane hydrates, raising concern about the potential for accelerating sea level rise and future positive feedback from methane release. Growth of non-CO2 forcings has slowed in recent years, but CO2 emissions are now surging well above the alternative scenario. Prompt
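The 2σ criterion used in this abstract, regional warming measured in units of 20th century interannual variability, can be made concrete with a short calculation. The baseline series and numbers below are invented for illustration; they are not the paper's data.

```python
import numpy as np

def warming_in_sigma(baseline_series, projected_mean):
    """Express a projected seasonal-mean temperature change in units of the
    baseline period's interannual standard deviation (the sigma criterion)."""
    sigma = np.std(baseline_series, ddof=1)        # e.g. 20th-century variability
    delta = projected_mean - np.mean(baseline_series)
    return delta / sigma

# Hypothetical baseline seasonal means (deg C); a shift of exactly one
# baseline sigma yields 1.0 by construction.
baseline = np.array([14.0, 15.0, 16.0, 15.0, 14.0, 16.0])
shift = np.std(baseline, ddof=1)
print(warming_in_sigma(baseline, np.mean(baseline) + shift))  # 1.0
```

A scenario printing values above 2 here would, in the abstract's terms, push mean conditions outside the range of local experience.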

  4. Dangerous human-made interference with climate: a GISS modelE study

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Kiang, N.Y.; Rind, D.; Romanou, A.; Shindell, D.; Sun, S.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Koch, D.; Lerner, J.; Perlwitz, Ju.; Unger, N.; Zhang, S.; Ruedy, R.; Lo, K.; Cheng, Y.; Oinas, V.; Schmunk, R.; Tausnev, N.; Yao, M.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Rind, D.; Romanou, A.; Shindell, D.; Thresher, D.; Miller, R.; Cairns, B.; Hall, T.; Perlwitz, Ja.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Labow, G.; Friend, A.; Kelley, M.

    2007-01-01

We investigate the issue of 'dangerous human-made interference with climate' using simulations with GISS modelE driven by measured or estimated forcings for 1880-2003 and extended to 2100 for IPCC greenhouse gas scenarios as well as the 'alternative' scenario of Hansen and Sato (2004). Identification of 'dangerous' effects is partly subjective, but we find evidence that added global warming of more than 1°C above the level in 2000 has effects that may be highly disruptive. The alternative scenario, with peak added forcing ~1.5 W/m2 in 2100, keeps further global warming under 1°C if climate sensitivity is ~3°C or less for doubled CO2. The alternative scenario keeps mean regional seasonal warming within 2σ (standard deviations) of 20th century variability, but other scenarios yield regional changes of 5-10σ, i.e. mean conditions outside the range of local experience. We conclude that a CO2 level exceeding about 450 ppm is 'dangerous', but reduction of non-CO2 forcings can provide modest relief on the CO2 constraint. We discuss three specific sub-global topics: Arctic climate change, tropical storm intensification, and ice sheet stability. We suggest that Arctic climate change has been driven as much by pollutants (O3, its precursor CH4, and soot) as by CO2, offering hope that dual efforts to reduce pollutants and slow CO2 growth could minimize Arctic change. Simulated recent ocean warming in the region of Atlantic hurricane formation is comparable to observations, suggesting that greenhouse gases (GHGs) may have contributed to a trend toward greater hurricane intensities. Increasing GHGs cause significant warming in our model in submarine regions of ice shelves and shallow methane hydrates, raising concern about the potential for accelerating sea level rise and future positive feedback from methane release. Growth of non-CO2 forcings has slowed in recent years, but CO2 emissions are now surging well above the

  5. Downscaling GISS ModelE Boreal Summer Climate over Africa

    Science.gov (United States)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2 deg latitude by 2.5 deg longitude and the RM3 grid spacing is 0.44 deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE computed sea-surface temperatures (SST) in the eastern South Atlantic Ocean are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, eliminates the ModelE double ITCZ over the Atlantic, and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements in the meridional movement of the rain band over West Africa and in the configuration of orographic precipitation maxima are realized irrespective of the SST biases.
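Statements such as "some 4 K warmer than reanalysis" rest on area-weighted bias bookkeeping of the following kind. The grid, latitude weights and field values below are toy assumptions, not the study's data.

```python
import numpy as np

def area_mean_bias(model, reference, weights):
    """Area-weighted mean bias of a model field against a reference field
    (e.g., ModelE SST vs. reanalysis) on a common grid."""
    w = weights / weights.sum()                 # normalize the weights
    return float(np.sum((model - reference) * w))

# Toy 2x2 grid at low latitudes; the model is uniformly 4 K too warm,
# so the weighted bias must come out to exactly 4.0.
ref = np.array([[300.0, 301.0], [299.0, 300.0]])
mod = ref + 4.0
wts = np.cos(np.deg2rad(np.array([[10.0, 10.0], [12.0, 12.0]])))
print(round(area_mean_bias(mod, ref, wts), 6))  # 4.0
```

The same routine applied to an RCM field on the same grid gives a like-for-like measure of whether downscaling mitigated the bias.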

  6. Studies of African wave disturbances with the GISS GCM

    Science.gov (United States)

    Druyan, Leonard M.; Hall, Timothy M.

    1994-01-01

    Simulations made with the general circulation model of the NASA/Goddard Institute for Space Studies (GISS GCM) run at 4 deg latitude by 5 deg longitude horizontal resolution are analyzed to determine the model's representation of African wave disturbances. Waves detected in the model's lower troposphere over northern Africa during the summer monsoon season exhibit realistic wavelengths of about 2200 km. However, power spectra of the meridional wind show that the waves propagate westward too slowly, with periods of 5-10 days, about twice the observed values. This sluggishness is most pronounced during August, consistent with simulated 600-mb zonal winds that are only about half the observed speeds of the midtropospheric jet. The modeled wave amplitudes are strongest over West Africa during the first half of the summer but decrease dramatically by September, contrary to observational evidence. Maximum amplitudes occur at realistic latitudes, 12 deg - 20 deg N, but not as observed near the Atlantic coast. Spectral analyses suggest some wave modulation of precipitation in the 5-8 day band, and compositing shows that precipitation is slightly enhanced east of the wave trough, coincident with southerly winds. Extrema of low-level convergence west of the wave troughs, coinciding with northerly winds, were not preferred areas for simulated precipitation, probably because of the drying effect of this advection, as waves were generally north of the humid zone. The documentation of African wave disturbances in the GISS GCM is a first step toward considering wave influences in future GCM studies of Sahel drought.
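The wave periods quoted above come from power spectra of the meridional wind. A minimal version of that diagnostic, run here on a synthetic daily time series rather than GCM output, might look like this:

```python
import numpy as np

def dominant_period_days(v_wind, dt_days=1.0):
    """Return the period (days) of the peak in the power spectrum of a
    meridional-wind time series, the diagnostic used to characterize
    easterly-wave timescales."""
    v = v_wind - np.mean(v_wind)                # remove the mean
    power = np.abs(np.fft.rfft(v)) ** 2
    freqs = np.fft.rfftfreq(v.size, d=dt_days)
    k = np.argmax(power[1:]) + 1                # skip the zero-frequency bin
    return 1.0 / freqs[k]

# A synthetic wave with a 5-day period, sampled daily for 200 days
t = np.arange(200)
print(dominant_period_days(np.sin(2 * np.pi * t / 5.0)))  # 5.0
```

Applied to the GCM's 700-600 mb meridional wind, a peak near 5-10 days rather than the observed 3-5 days would reproduce the sluggishness described in the abstract.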

  7. New Gravity Wave Treatments for GISS Climate Models

    Science.gov (United States)

    Geller, Marvin A.; Zhou, Tiehan; Ruedy, Reto; Aleinov, Igor; Nazarenko, Larissa; Tausnev, Nikolai L.; Sun, Shan; Kelley, Maxwell; Cheng, Ye

    2011-01-01

Previous versions of GISS climate models have either used formulations of Rayleigh drag to represent unresolved gravity wave interactions with the model-resolved flow, or have included a rather complicated treatment of unresolved gravity waves that, while climate interactive, required the specification of a relatively large number of parameters that were not well constrained by observations and was also computationally very expensive. Here, the authors introduce a relatively simple and computationally efficient specification of unresolved orographic and nonorographic gravity waves and their interaction with the resolved flow. Comparisons of the GISS model winds and temperatures with no gravity wave parameterization; with only orographic gravity wave parameterization; and with both orographic and nonorographic gravity wave parameterizations are shown to illustrate how the zonal mean winds and temperatures converge toward observations. The authors also show that the specifications of orographic and nonorographic gravity waves must be different in the Northern and Southern Hemispheres. Results are then presented in which the nonorographic gravity wave sources are specified to represent sources from convection in the intertropical convergence zone and spontaneous emission from jet imbalances. Finally, a strategy to include these effects in a climate-dependent manner is suggested.
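Rayleigh drag, the simpler of the two earlier approaches mentioned, amounts to a linear relaxation of the resolved wind toward zero above some level. A one-line sketch, with the relaxation timescale chosen arbitrarily for illustration:

```python
def rayleigh_drag_tendency(u, tau_days):
    """Simplest stand-in for unresolved gravity-wave drag: relax the zonal
    wind u (m/s) toward zero with a prescribed timescale tau (days).
    Returns the wind tendency in m/s^2."""
    return -u / (tau_days * 86400.0)

# A 60 m/s upper-level jet damped with a 5-day timescale
print(rayleigh_drag_tendency(60.0, 5.0))
```

The parameterizations introduced in the paper replace this ad hoc damping with momentum deposition tied to specified orographic and nonorographic wave sources.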

  8. Direct stratospheric injection of biomass burning emissions: a case study of the 2009 Australian bushfires using the NASA GISS ModelE2 composition-climate model

    Science.gov (United States)

Field, Robert; Fromm, Mike; Voulgarakis, Apostolos; Shindell, Drew; Flannigan, Mike; Bernath, Peter

    2014-05-01

Direct stratospheric injection (DSI) of forest fire smoke represents a direct biogeochemical link between the land surface and stratosphere. DSI events occur regularly in the northern and southern extratropics, and have been observed across a wide range of measurements, but their fate and effects are not well understood. DSIs result from explosive, short-lived fires, and their plumes stand out from background concentrations immediately. This makes it easier to associate detected DSIs with individual fires and their estimated emissions. Because the emissions pulses are brief, chemical decay can be more clearly assessed, and because the emissions pulses are so large, a wide range of rare chemical species can be detected. Observational evidence suggests that they can persist in the stratosphere for several months, enhance ozone production, and be self-lofted to the middle stratosphere through shortwave absorption and diabatic heating. None of these phenomena has been evaluated, however, with a physical model. To that end, we are simulating the smoke plumes from the February 2009 Australian 'Black Saturday' bushfires using the NASA GISS ModelE2 composition-climate model, nudged toward horizontal winds from reanalysis. To date, this is the best-observed DSI in the southern hemisphere. Chemical and aerosol signatures of the plume were observed in a wide array of limb and nadir satellite retrievals. Detailed estimates of fuel consumption and injection height have been made because of the severity of the fires. Uncommon among DSI events was a large segment of the plume that entrained into the upper equatorial easterlies. Preliminary modeling results show that the relative strengths of the equatorial and extratropical plume segments are sensitive to the plume's initial injection height. This highlights the difficulty in reconciling uncertainty in the reanalysis over the Southern Hemisphere with fairly-well constrained estimates of fire location and injection height at the

  9. Tropical cyclones in the GISS ModelE2

    Directory of Open Access Journals (Sweden)

    Suzana J. Camargo

    2016-07-01

The authors describe the characteristics of tropical cyclone (TC) activity in the GISS general circulation ModelE2 with a horizontal resolution of 1°×1°. Four model simulations are analysed. In the first, the model is forced with sea surface temperature (SST) from the recent historical climatology. The other three are idealised climate change simulations, namely (1) a uniform increase of SST by 2 degrees, (2) doubling of the CO2 concentration and (3) a combination of the two. These simulations were performed as part of the US Climate Variability and Predictability Program Hurricane Working Group. Diagnostics of standard measures of TC activity are computed from the recent historical climatological SST simulation and compared with the same measures computed from observations. The changes in TC activity in the three idealised climate change simulations, by comparison with that in the historical climatological SST simulation, are also described. Similar to previous results in the literature, the change in TC frequency in the simulation with both doubled CO2 and increased SST is approximately the linear sum of the changes in TC frequency in the other two simulations. However, in contrast with previous results, in these simulations the effects of CO2 and SST on TC frequency oppose each other. Large-scale environmental variables associated with TC activity are then analysed for the present and future simulations. Model biases in the large-scale fields are identified through a comparison with ERA-Interim reanalysis. Changes in the environmental fields in the future climate simulations are shown and their association with changes in TC activity discussed.
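The linearity result can be checked with simple bookkeeping on simulated storm counts. The counts below are made up to illustrate opposing SST and CO2 effects whose changes nevertheless sum linearly; they are not the paper's values.

```python
def frequency_linearity(n_control, n_sst2k, n_2xco2, n_combined):
    """Compare the combined-forcing change in TC count against the sum of
    the individual-forcing changes, each relative to the control run."""
    d_sst = n_sst2k - n_control      # effect of +2 K SST alone
    d_co2 = n_2xco2 - n_control      # effect of doubled CO2 alone
    d_both = n_combined - n_control  # effect of both forcings together
    return d_both, d_sst + d_co2

# Hypothetical annual TC counts: SST raises frequency (+15), CO2 lowers
# it (-12), and the combined run changes by their sum (+3).
print(frequency_linearity(80, 95, 68, 83))  # (3, 3)
```

Equal elements in the returned pair correspond to the "approximately linear" behaviour described in the abstract.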

  10. Evaluation of WRF Performance Driven by GISS-E2-R Global Model for the 2014 Rainy Season in Mexico

    Science.gov (United States)

    Almanza, V.; Zavala, M. A.; Lei, W.; Shindell, D. T.; Molina, L. T.

    2017-12-01

Precipitation and cloud fields, as well as the spatial distribution of emissions, are important in estimating the radiative effects of atmospheric pollutants in future climate applications. In particular, landfalling hurricanes and tropical storms greatly affect the amount and distribution of annual precipitation, and thus have a direct impact on the wet deposition of pollutants and aerosol-cloud interactions. Therefore, long-term simulations in chemistry mode driven by the outputs of a global model need to consider the influence of these phenomena on the radiative effects, particularly for countries such as Mexico that have a high number of landfalling hurricanes and tropical storms. In this work, the NASA GISS-E2-R earth system model is downscaled with the WRF model over a domain encompassing Mexico. We use the North American Regional Reanalysis (NARR) and ERA-Interim reanalysis, along with available surface observations and data from the Tropical Rainfall Measuring Mission (TRMM) products, to evaluate the contribution of spectral nudging, domain size and resolution in resolving the precipitation and cloud fraction fields for the rainy season in 2014. We focus on this year since 10 tropical cyclones made landfall in central Mexico. The results of the evaluation are useful for assessing the performance of the model in representing the present conditions of precipitation and cloud fraction in Mexico. In addition, they provide guidelines for conducting the operational runs in chemistry mode for future years.

  11. PyrE, an interactive fire module within the NASA-GISS Earth System Model

    Science.gov (United States)

    Mezuman, K.; Bauer, S. E.; Tsigaridis, K.

    2017-12-01

Fires directly affect the composition of the atmosphere and Earth's radiation balance by emitting a suite of reactive gases and particles. Having an interactive fire module in an Earth System Model allows us to study the natural and anthropogenic drivers, feedbacks, and interactions of biomass burning in different time periods. To do so we have developed PyrE, the NASA-GISS interactive fire emissions model. PyrE uses the flammability, ignition, and suppression parameterization proposed by Pechony and Shindell (2009), and is coupled to a burned area and surface recovery parameterization. The burned area calculation follows CLM's approach (Li et al., 2012), paired with an offline recovery scheme based on the Ent Terrestrial Biosphere Model (Ent TBM) carbon pool turnover time. PyrE is driven by environmental variables calculated by climate simulations, population density data, MODIS fire counts and LAI retrievals, as well as GFED4s emissions. Since the model development required extensive use of reference datasets, in addition to comparing it to GFED4s burned area, we evaluate it by studying the effect of fires on atmospheric composition and climate. Our results show good agreement globally, with some regional differences. Finally, we quantify the present-day fire radiative forcing. The development of PyrE has allowed us for the first time to interactively simulate climate and fire activity with GISS-ModelE3.
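As a rough illustration of a flammability-style index in the spirit of Pechony and Shindell (2009): flammability rises with vapor pressure deficit and vegetation density and is suppressed by precipitation. The functional form and constants here are hypothetical stand-ins, not the published parameterization.

```python
import math

def flammability_proxy(vpd, precip_mm_day, veg_density):
    """Hypothetical flammability index: proportional to vegetation density
    (fuel) and vapor pressure deficit (dryness), damped exponentially by
    precipitation. Illustrative only; not Pechony & Shindell's formula."""
    return veg_density * vpd * math.exp(-2.0 * precip_mm_day)

# Dry, fully vegetated cell vs. the same cell after 5 mm/day of rain
dry = flammability_proxy(1.0, 0.0, 1.0)
wet = flammability_proxy(1.0, 5.0, 1.0)
print(dry > wet)  # True
```

In an interactive module like PyrE, an index of this kind would then be combined with ignition sources (lightning, population) and suppression to yield fire counts and burned area.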

  12. The Added Value to Global Model Projections of Climate Change by Dynamical Downscaling: A Case Study over the Continental U.S. using the GISS-ModelE2 and WRF Models

    Science.gov (United States)

    Racherla, P. N.; Shindell, D. T.; Faluvegi, G. S.

    2012-01-01

    Dynamical downscaling is being increasingly used for climate change studies, wherein the climates simulated by a coupled atmosphere-ocean general circulation model (AOGCM) for a historical and a future (projected) decade are used to drive a regional climate model (RCM) over a specific area. While previous studies have demonstrated that RCMs can add value to AOGCM-simulated climatologies over different world regions, it is unclear as to whether or not this translates to a better reproduction of the observed climate change therein. We address this issue over the continental U.S. using the GISS-ModelE2 and WRF models, a state-of-the-science AOGCM and RCM, respectively. As configured here, the RCM does not effect holistic improvement in the seasonally and regionally averaged surface air temperature or precipitation for the individual historical decades. Insofar as the climate change between the two decades is concerned, the RCM does improve upon the AOGCM when nudged in the domain proper, but only modestly so. Further, the analysis indicates that there is not a strong relationship between skill in capturing climatological means and skill in capturing climate change. Though additional research would be needed to demonstrate the robustness of this finding in AOGCM/RCM models generally, the evidence indicates that, for climate change studies, the most important factor is the skill of the driving global model itself, suggesting that highest priority should be given to improving the long-range climate skill of AOGCMs.
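The distinction drawn above, skill in climatological means versus skill in the change between decades, can be expressed as two different error metrics. A sketch of the change-skill metric, using hypothetical gridded fields:

```python
import numpy as np

def change_skill(model_hist, model_future, obs_hist, obs_future):
    """RMS error of the simulated climate *change* (future decade minus
    historical decade) against the observed change, as distinct from the
    error in either decade's climatology."""
    d_model = model_future - model_hist
    d_obs = obs_future - obs_hist
    return float(np.sqrt(np.mean((d_model - d_obs) ** 2)))

# Toy fields: the model has a constant 4 K cold bias in both decades, yet
# captures the +1 K change perfectly, so the change-skill error is 0.0.
obs_h = np.array([5.0, 6.0])
obs_f = obs_h + 1.0
mod_h = obs_h - 4.0
mod_f = mod_h + 1.0
print(change_skill(mod_h, mod_f, obs_h, obs_f))  # 0.0
```

The toy case makes the paper's point concrete: a model can be biased in each decade's climatology while still reproducing the observed change, so the two skills need not be related.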

  13. Dynamical Downscaling of NASA/GISS ModelE: Continuous, Multi-Year WRF Simulations

    Science.gov (United States)

    Otte, T.; Bowden, J. H.; Nolte, C. G.; Otte, M. J.; Herwehe, J. A.; Faluvegi, G.; Shindell, D. T.

    2010-12-01

The WRF Model is being used at the U.S. EPA for dynamical downscaling of the NASA/GISS ModelE fields to assess regional impacts of climate change in the United States. The WRF model has been successfully linked to the ModelE fields in their raw hybrid vertical coordinate, and continuous, multi-year WRF downscaling simulations have been performed. WRF will be used to downscale decadal time slices of ModelE for recent past, current, and future climate as the simulations being conducted for the IPCC Fifth Assessment Report become available. This presentation will focus on the sensitivity to interior nudging within the RCM. The use of interior nudging for downscaled regional climate simulations has been somewhat controversial over the past several years but has recently been attracting renewed attention. Several recent studies that have used reanalysis (i.e., verifiable) fields as a proxy for GCM input have shown that interior nudging can be beneficial toward achieving the desired downscaled fields. In this study, the value of nudging will be shown using fields from ModelE that are downscaled using WRF. Several different methods of nudging are explored, and it will be shown that the method of nudging and the choices made with respect to how nudging is used in WRF are critical to balancing the constraint of ModelE against the freedom of WRF to develop its own fields.
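Interior (grid) nudging of the kind discussed here is, at its core, a relaxation term added to the RCM's tendencies, pulling the state toward the driving-model field. A minimal explicit-step sketch, with the timescale tau as a free assumption (not WRF's actual configuration):

```python
def nudged_update(x, x_driver, dt_s, tau_s):
    """One explicit time step of interior nudging: relax the RCM state x
    toward the driving-model value x_driver with relaxation timescale tau.
    The nudging tendency is -(x - x_driver) / tau."""
    return x + dt_s * (-(x - x_driver) / tau_s)

# With dt equal to tau, a single explicit step removes the full difference
print(nudged_update(10.0, 0.0, 3600.0, 3600.0))  # 0.0
```

The trade-off the abstract describes lives entirely in tau (and in which variables and scales are nudged): small tau tightly constrains WRF to ModelE, while large tau leaves WRF free to develop its own fields.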

  14. Exploring diurnal and seasonal characteristics of global carbon cycle with GISS Model E2 GCM

    Science.gov (United States)

    Aleinov, I. D.; Kiang, N. Y.; Romanou, A.

    2017-12-01

The ability to properly model surface carbon fluxes on diurnal and seasonal time scales is a necessary requirement for understanding the global carbon cycle. It is also one of the most challenging tasks faced by modern General Circulation Models (GCMs), due to the complexity of the algorithms and the variety of relevant spatial and temporal scales. The observational data, though abundant, are difficult to interpret at the global scale, because flux tower observations are very sparse for large impact areas (such as the Amazon and African rainforests and most of Siberia) and satellite missions often struggle to produce sufficiently high confidence data over land and may miss CO2 amounts near the surface due to the nature of the method. In this work we use the GISS Model E2 GCM to perform a subset of experiments proposed by the Coupled Climate-Carbon Cycle Model Intercomparison Project (C4MIP) and relate the results to available observations. The GISS Model E2 GCM is currently equipped with a complete global carbon cycle algorithm. Its surface carbon fluxes are computed by the Ent Terrestrial Biosphere Model (Ent TBM) over the land, with observed leaf area index from the Moderate Resolution Imaging Spectroradiometer (MODIS), and by the NASA Ocean Biogeochemistry Model (NOBM) over the ocean. The propagation of atmospheric CO2 is performed by a generic Model E2 tracer algorithm, which is based on a quadratic upstream method (Prather 1986). We perform a series of spin-up experiments for preindustrial climate conditions and fixed preindustrial atmospheric CO2 concentration. First, we perform separate spin-up simulations for terrestrial and for ocean carbon. We then combine the spun-up states and perform a coupled spin-up simulation until the model reaches a sufficient equilibrium. We then release restrictions on CO2 concentration and allow it to evolve freely, driven only by simulated surface fluxes. We then study the results of the unforced run, comparing the amplitude and the phase
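Deciding when a spin-up has reached "a sufficient equilibrium" is typically a drift test on annual-mean carbon pools or fluxes. A sketch with an arbitrary tolerance expressed per century; this is an illustration of the idea, not the authors' actual criterion:

```python
import numpy as np

def is_equilibrated(annual_means, tol_per_century):
    """Declare a spin-up converged when the linear drift of an annual-mean
    series (e.g., a carbon pool in PgC), scaled to a century, falls below
    the given tolerance."""
    years = np.arange(annual_means.size)
    slope = np.polyfit(years, annual_means, 1)[0]  # drift in units per year
    return abs(slope) * 100.0 < tol_per_century

# A flat 50-year series passes; a series drifting 1 unit/yr fails
print(is_equilibrated(np.full(50, 600.0), 1.0))           # True
print(is_equilibrated(600.0 + np.arange(50.0), 1.0))      # False
```

Only once such a test passes would the CO2 concentration be released to evolve freely, as in the experiment described above.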

  15. Improved Upper Ocean/Sea Ice Modeling in the GISS GCM for Investigating Climate Change

    Science.gov (United States)

    1997-01-01

This project built on our previous results, in which we highlighted the importance of sea ice in overall climate sensitivity by determining that, for both warming and cooling climates, when sea ice was not allowed to change, climate sensitivity was reduced by 35-40%. We also modified the Goddard Institute for Space Studies (GISS) 8 deg x 10 deg atmospheric General Circulation Model (GCM) to include an upper-ocean/sea-ice model involving the Semtner three-layer ice/snow thermodynamic model, the Price et al. (1986) ocean mixed layer model and a general upper ocean vertical advection/diffusion scheme for maintaining and fluxing properties across the pycnocline. This effort, in addition to improving the sea ice representation in the AGCM, revealed a number of sensitive components of the sea ice/ocean system. For example, the ability to flux heat through the ice/snow correctly is critical in order to resolve the surface temperature properly, since small errors in this lead to unrestrained climate drift. The present project, summarized in this report, had as its objectives: (1) introducing a series of sea ice and ocean improvements aimed at overcoming remaining weaknesses in the GCM sea ice/ocean representation, and (2) performing a series of sensitivity experiments designed to evaluate the climate sensitivity of the revised model to both Antarctic and Arctic sea ice, determine the sensitivity of the climate response to initial ice distribution, and investigate the transient response to doubling CO2.
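The heat-flux-through-ice/snow issue mentioned above can be illustrated with the steady conduction balance a Semtner-type slab model must close. The conductivities below are typical textbook values for ice and snow, not the model's own parameters:

```python
def conductive_flux(t_bottom, t_surface, h_ice, h_snow,
                    k_ice=2.0, k_snow=0.3):
    """Steady conductive heat flux (W/m^2) through an ice slab and snow
    layer in series: F = (T_bottom - T_surface) / (h_ice/k_ice + h_snow/k_snow).
    Thicknesses in m, conductivities in W/(m K), temperatures in deg C."""
    resistance = h_ice / k_ice + h_snow / k_snow  # thermal resistance in series
    return (t_bottom - t_surface) / resistance

# 2 m of ice under 0.2 m of snow, ocean at -1.8 C, surface at -20 C
print(round(conductive_flux(-1.8, -20.0, 2.0, 0.2), 2))
```

Because the surface temperature adjusts until this flux balances the atmospheric fluxes, even small systematic errors in the conduction term bias the surface temperature and, as the report notes, can accumulate into climate drift.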

  16. NASA GISS Climate Change Research Initiative: A Multidisciplinary Vertical Team Model for Improving STEM Education by Using NASA's Unique Capabilities.

    Science.gov (United States)

    Pearce, M. D.

    2017-12-01

CCRI is a year-long STEM education program designed to bring together teams of NASA scientists, graduate, undergraduate and high school interns, and high school STEM educators to become immersed in NASA research focused on atmospheric and climate change in the 21st century. GISS climate research combines analysis of global datasets with global models of atmospheric, land surface, and oceanic processes to study climate change on Earth and other planetary atmospheres as a useful tool in assessing our general understanding of climate change. CCRI interns conduct research, gain knowledge in their assigned research discipline, and develop and present scientific presentations summarizing their research experience. Specifically, CCRI interns write a scientific research paper covering the basic ideas, research protocols, abstract, results, conclusions and experimental design; prepare and present a professional presentation of their research project at NASA GISS; and prepare and present a scientific poster of their research project at local and national research symposiums along with other federal agencies. CCRI educators lead research teams under the direction of a NASA GISS scientist, conduct research, develop research-based learning units and assist NASA scientists with the mentoring of interns. Educators create an Applied Research STEM Curriculum Unit Portfolio based on their research experience, integrating NASA-unique resources, tools and content into a teacher-developed unit plan aligned with state and NGSS standards. STEM educators also integrate and implement NASA-unique units and content into their STEM courses during the academic year, perform community STEM education engagement events, and mentor interns in writing a research paper, oral research reporting, PowerPoint design and scientific poster design for presentation to local and national audiences. The CCRI program contributes to the federal Co-STEM initiatives by providing opportunities, NASA education resources and

  17. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side the simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  18. The distribution of snow black carbon observed in the Arctic and compared to the GISS-PUCCINI model

    Directory of Open Access Journals (Sweden)

    T. Dou

    2012-09-01

    In this study, we evaluate the ability of the latest NASA GISS composition-climate model, GISS-E2-PUCCINI, to simulate the spatial distribution of snow BC (sBC) in the Arctic relative to present-day observations. Radiative forcing due to BC deposition onto Arctic snow and sea ice is also estimated. Two sets of model simulations are analyzed, in which meteorology is linearly relaxed towards the National Centers for Environmental Prediction (NCEP) and the NASA Modern Era Reanalysis for Research and Applications (MERRA) reanalyses. Results indicate that the modeled concentrations of sBC are comparable with present-day observations in and around the Arctic Ocean, except for apparent underestimation at a few sites in the Russian Arctic. That said, the model has some biases in its simulated spatial distribution of BC deposition to the Arctic. The simulations from the two model runs are roughly equal, indicating that discrepancies between model and observations come from other sources. Underestimation of biomass burning emissions in Northern Eurasia may be the main cause of the low biases in the Russian Arctic. Comparisons of modeled aerosol BC (aBC) with long-term surface observations at the Barrow, Alert, Zeppelin, and Nord stations show significant underestimation of winter and spring concentrations in the Arctic (most significant in Alaska), although the simulated seasonality of aBC is greatly improved relative to earlier model versions. This is consistent with simulated biases in the vertical profiles of aBC, with underestimation in the lower and middle troposphere but overestimation in the upper troposphere and lower stratosphere, suggesting that the wet removal processes in the current model may be too weak or that vertical transport is too rapid, although the simulated BC lifetime seems reasonable. The combination of observations and modeling provides a comprehensive distribution of sBC over the Arctic. On the basis of this distribution, we estimate the

  19. Improved Upper Ocean/Sea Ice Modeling in the GISS GCM for Investigating Climate Change

    Science.gov (United States)

    1998-01-01

    This project built on our previous results, in which we highlighted the importance of sea ice in overall climate sensitivity by determining that, for both warming and cooling climates, climate sensitivity was reduced by 35-40% when sea ice was not allowed to change. We also modified the GISS 8 deg x 10 deg atmospheric GCM to include an upper-ocean/sea-ice model involving the Semtner three-layer ice/snow thermodynamic model, the Price et al. (1986) ocean mixed layer model, and a general upper-ocean vertical advection/diffusion scheme for maintaining and fluxing properties across the pycnocline. This effort, in addition to improving the sea ice representation in the AGCM, revealed a number of sensitive components of the sea ice/ocean system. For example, the ability to flux heat through the ice/snow properly is critical in order to resolve the surface temperature correctly, since small errors here lead to unrestrained climate drift. The present project, summarized in this report, had as its objectives: (1) introducing a series of sea ice and ocean improvements aimed at overcoming remaining weaknesses in the GCM sea ice/ocean representation, and (2) performing a series of sensitivity experiments designed to evaluate the climate sensitivity of the revised model to both Antarctic and Arctic sea ice, determine the sensitivity of the climate response to the initial ice distribution, and investigate the transient response to doubling CO2.

  20. The northern annular mode in summer and its relation to solar activity variations in the GISS ModelE

    Science.gov (United States)

    Lee, Jae N.; Hameed, Sultan; Shindell, Drew T.

    2008-03-01

    The northern annular mode (NAM) has been successfully used in several studies to understand the variability of the winter atmosphere and its modulation by solar activity. The variability of the summer circulation can also be described by the leading empirical orthogonal function (EOF) of geopotential heights. We compare the annular modes of the summer geopotential heights in the northern hemisphere stratosphere and troposphere in the Goddard Institute for Space Studies (GISS) ModelE with those in the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. In the stratosphere, the summer NAM obtained from the NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of the geopotential height, temperature, and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both model and observations suggests that the summer stratosphere is more "summer-like" when solar activity is near a maximum: the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented for the model and the observations, both of which show a lower (higher) NAM index under solar maximum (minimum) conditions. The temperature fluctuations in simulated solar minimum conditions are greater than in solar maximum throughout the summer stratosphere. The summer NAM in the troposphere obtained from the NCEP/NCAR reanalysis has a dipolar zonal structure with maximum variability over the Asian monsoon region. The corresponding EOF in ModelE has
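The leading-EOF computation that underlies an annular-mode index like the NAM can be sketched in a few lines. This is a generic illustration on synthetic data, not the study's method or fields; in practice the heights would also be area-weighted (e.g. by the square root of cosine latitude) before the decomposition.

```python
import numpy as np

# Synthetic anomaly field: time x space (stand-in for monthly geopotential heights)
rng = np.random.default_rng(0)
ntime, nspace = 120, 50
field = rng.standard_normal((ntime, nspace))

# Form anomalies by removing the time mean, then take the leading EOF via SVD
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eof1 = vt[0]                           # leading spatial pattern (annular-mode analogue)
pc1 = u[:, 0] * s[0]                   # principal-component (index) time series
var_frac = s[0] ** 2 / np.sum(s ** 2)  # fraction of variance explained by EOF 1
```

Projecting the anomaly field onto `eof1` recovers `pc1`, which is how a NAM-like index would be evaluated for each month; note that the sign and scaling of an EOF are arbitrary conventions.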

  1. Assessing modeled Greenland surface mass balance in the GISS Model E2 and its sensitivity to surface albedo

    Science.gov (United States)

    Alexander, Patrick; LeGrande, Allegra N.; Koenig, Lora S.; Tedesco, Marco; Moustafa, Samiah E.; Ivanoff, Alvaro; Fischer, Robert P.; Fettweis, Xavier

    2016-04-01

    The surface mass balance (SMB) of the Greenland Ice Sheet (GrIS) plays an important role in global sea level change. Regional Climate Models (RCMs) such as the Modèle Atmosphérique Régional (MAR) have been employed at high spatial resolution with relatively complex physics to simulate ice sheet SMB. Global climate models (GCMs) incorporate less sophisticated physical schemes and provide outputs at a lower spatial resolution, but have the advantage of modeling the interaction between different components of the Earth's oceans, climate, and land surface at a global scale. Improving the ability of GCMs to represent ice sheet SMB is important for making predictions of future changes in global sea level. With the ultimate goal of improving the SMB simulated by the Goddard Institute for Space Studies (GISS) Model E2 GCM, we compare simulated GrIS SMB against the outputs of the MAR model and radar-derived estimates of snow accumulation. In order to reproduce present-day climate variability in the Model E2 simulation, winds are constrained to match the reanalysis datasets used to force MAR at the lateral boundaries. We conduct a preliminary assessment of the sensitivity of the simulated Model E2 SMB to surface albedo, a parameter that is known to strongly influence SMB. Model E2 albedo is set to a fixed value of 0.8 over the entire ice sheet in the initial configuration of the model (control case). We adjust this fixed value in an ensemble of simulations over a range of 0.4 to 0.8 (roughly the range of observed summer GrIS albedo values) to examine the sensitivity of ice-sheet-wide SMB to albedo. We also prescribe albedo from the Moderate Resolution Imaging Spectroradiometer (MODIS) MCD43A3 v6 product to examine the impact of more realistic spatial and temporal variations in albedo. Finally, an age-dependent snow albedo parameterization is applied, and its impact on SMB relative to observations and the RCM is assessed.
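The energetic leverage of the fixed-albedo sweep can be illustrated with the one relation involved: absorbed shortwave = (1 - albedo) x downwelling shortwave. The 300 W/m² insolation in this sketch is an assumed round number for illustration only, not a value from the study.

```python
import numpy as np

# Sweep a fixed ice-sheet albedo over the study's 0.4-0.8 range and
# compute absorbed shortwave for an assumed summer insolation value.
sw_down = 300.0  # W/m^2, illustrative downwelling shortwave
absorbed = {round(a, 1): (1.0 - a) * sw_down for a in np.arange(0.4, 0.81, 0.1)}
# Lowering albedo from 0.8 to 0.4 triples the energy absorbed at the surface,
# which is why ice-sheet-wide SMB is so sensitive to this single parameter.
```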

  2. CEOS SEO and GISS Meeting

    Science.gov (United States)

    Killough, Brian; Stover, Shelley

    2008-01-01

    The Committee on Earth Observation Satellites (CEOS) provides a brief to the Goddard Institute for Space Studies (GISS) regarding the CEOS Systems Engineering Office (SEO) and current work on climate requirements and analysis. A "system framework" is provided for the Global Earth Observation System of Systems (GEOSS). SEO climate-related tasks are outlined including the assessment of essential climate variable (ECV) parameters, use of the "systems framework" to determine relevant informational products and science models and the performance of assessments and gap analyses of measurements and missions for each ECV. Climate requirements, including instruments and missions, measurements, knowledge and models, and decision makers, are also outlined. These requirements would establish traceability from instruments to products and services allowing for benefit evaluation of instruments and measurements. Additionally, traceable climate requirements would provide a better understanding of global climate models.

  3. Evaluation of aerosol distributions in the GISS-TOMAS global aerosol microphysics model with remote sensing observations

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2010-03-01

    The Aerosol Optical Depth (AOD) and Angstrom Coefficient (AC) predictions of the GISS-TOMAS global aerosol microphysics model are evaluated against remote sensing data from MODIS, MISR, and AERONET. The model AOD agrees well (within a factor of two) over polluted continental (or high-sulfate), dusty, and moderate sea-salt regions, but less well over the equatorial, high sea-salt, and biomass burning regions. Underprediction of sea salt in the equatorial region is likely due to GCM meteorology (low wind speeds and high precipitation). For the Southern Ocean, overprediction of AOD is very likely due to high sea-salt emissions and perhaps aerosol water uptake in the model. However, uncertainties in cloud screening at high latitudes make it difficult to evaluate the model AOD there against the satellite-based AOD. AOD in biomass burning regions is underpredicted, a tendency found in other global models but more severe here. Using measurements from the LBA-SMOCC 2002 campaign, the surface-level OC concentration in the model is found to be severely underpredicted during the dry season, while EC concentration is underpredicted much less severely, suggesting the low AOD in the model is due to underpredictions in OM mass. The potential for errors in emissions and wet deposition to contribute to this bias is discussed.
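The two headline metrics of this evaluation, agreement "within a factor of two" and the Angstrom coefficient, are simple to compute. The sketch below uses made-up AOD values and assumed wavelengths (550 and 870 nm); it is a generic illustration, not the paper's code.

```python
import numpy as np

# Made-up paired model/satellite AOD samples
model_aod = np.array([0.12, 0.30, 0.05, 0.45])
obs_aod = np.array([0.10, 0.20, 0.15, 0.40])

# Fraction of points where the model is within a factor of two of observations
within_2x = np.mean((model_aod >= 0.5 * obs_aod) & (model_aod <= 2.0 * obs_aod))

def angstrom(aod1, aod2, lam1=550.0, lam2=870.0):
    """Angstrom coefficient from AOD at two (assumed) wavelengths in nm."""
    return -np.log(aod1 / aod2) / np.log(lam1 / lam2)
```

Higher Angstrom coefficients indicate smaller particles (e.g. pollution or smoke), lower values coarse dust or sea salt, which is why AC complements AOD in discriminating the aerosol regimes discussed above.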

  4. Impact of improved Greenland ice sheet surface representation in the NASA GISS ModelE2 GCM on simulated surface mass balance and regional climate

    Science.gov (United States)

    Alexander, P. M.; LeGrande, A. N.; Fischer, E.; Tedesco, M.; Kelley, M.; Schmidt, G. A.; Fettweis, X.

    2017-12-01

    Towards achieving coupled simulations between the NASA Goddard Institute for Space Studies (GISS) ModelE2 general circulation model (GCM) and ice sheet models (ISMs), improvements have been made to the representation of the ice sheet surface in ModelE2. These include a sub-grid-scale elevation class scheme, a multi-layer snow model, a time-variable surface albedo scheme, and adjustments to the parameterization of sublimation/evaporation. These changes improve the spatial resolution and physical representation of the ice sheet surface such that the surface is represented at a level of detail closer to that of regional climate models (RCMs). We assess the impact of these changes on simulated Greenland Ice Sheet (GrIS) surface mass balance (SMB). We also compare ModelE2 simulations in which winds have been nudged to match the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis with simulations from the Modèle Atmosphérique Régional (MAR) RCM forced by the same reanalysis. Adding surface elevation classes results in a much higher-resolution representation of the surface, necessary for coupling with ISMs, but has a negligible impact on overall SMB. Implementing a variable surface albedo scheme increases melt by 100%, bringing it closer to the melt simulated by MAR. Adjustments made to the representation of topography-influenced surface roughness length in ModelE2 reduce a positive bias in evaporation relative to MAR. We also examine the impact of changes to the GrIS surface on regional atmospheric and oceanic climate in coupled ocean-atmosphere simulations with ModelE2, finding a general warming of the Arctic due to a warmer GrIS, and a cooler North Atlantic in scenarios with doubled atmospheric CO2 relative to pre-industrial levels. The substantial influence of changes to the GrIS surface on the oceans and atmosphere highlights the importance of including these processes in the GCM, in view of potential feedbacks between the ice sheet

  5. Seasonal changes in the atmospheric heat balance simulated by the GISS general circulation model

    Science.gov (United States)

    Stone, P. H.; Chow, S.; Helfand, H. M.; Quirk, W. J.; Somerville, R. C. J.

    1975-01-01

    Tests of the ability of numerical general circulation models to simulate the atmosphere have so far focused on simulations of the January climatology. These models generally prescribe boundary conditions such as sea surface temperature, but this does not prevent testing their ability to simulate seasonal changes in atmospheric processes that accompany prescribed seasonal changes in boundary conditions. Experiments to simulate changes in the zonally averaged heat balance are discussed, since many simplified models of climatic processes are based solely on this balance.

  6. Using long-term ARM observations to evaluate Arctic mixed-phased cloud representation in the GISS ModelE GCM

    Science.gov (United States)

    Lamer, K.; Fridlind, A. M.; Luke, E. P.; Tselioudis, G.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.

    2016-12-01

    The presence of supercooled liquid in clouds affects surface radiative and hydrological budgets, especially at high latitudes. Capturing these effects is crucial to properly quantifying climate sensitivity. Currently, a number of GCMs disagree on the distribution of cloud phase. Adding to the challenge is a general lack of observations spanning the continuum of clouds, from high- to low-level and from warm to cold. In the current study, continuous observations from 2011 to 2014 are used to evaluate all clouds produced by the GISS ModelE GCM over the ARM North Slope of Alaska site. The International Satellite Cloud Climatology Project (ISCCP) Global Weather State (GWS) approach reveals that fair-weather (GWS 7, 32% occurrence rate), mid-level storm-related (GWS 5, 28%), and polar (GWS 4, 14%) clouds dominate the large-scale cloud patterns at this high-latitude site. At higher spatial and temporal resolutions, ground-based cloud radar observations reveal a majority of single-layer cloud vertical structures (CVS). While clear sky and low-level clouds dominate (each with 30% occurrence rate), a fair amount of shallow (~10%) to deep (~5%) convection is observed. Cloud radar Doppler spectra are used along with depolarization lidar observations in a neural network approach to detect the presence, layering, and inhomogeneity of supercooled liquid layers. Preliminary analyses indicate that most of the low-level clouds sampled contain one or more supercooled liquid layers. Furthermore, the relationship between CVS and the presence of supercooled liquid is established, as is the relationship between the presence of supercooled liquid and precipitation susceptibility. Two approaches are explored to bridge the gap between large-footprint GCM simulations and high-resolution ground-based observations. The first approach consists of comparing model output and ground-based observations that exhibit the same column CVS type (i.e. same cloud depth, height and layering

  7. Dependence of stratocumulus-topped boundary-layer entrainment on cloud-water sedimentation: Impact on global aerosol indirect effect in GISS ModelE3 single column model and global simulations

    Science.gov (United States)

    Ackerman, A. S.; Kelley, M.; Cheng, Y.; Fridlind, A. M.; Del Genio, A. D.; Bauer, S.

    2017-12-01

    Reduction in cloud-water sedimentation induced by increasing droplet concentrations has been shown in large-eddy simulations (LES) and direct numerical simulation (DNS) to enhance boundary-layer entrainment, thereby reducing cloud liquid water path and offsetting the Twomey effect when the overlying air is sufficiently dry, which is typical. Among recent upgrades to ModelE3, the latest version of the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM), are a two-moment stratiform cloud microphysics treatment with prognostic precipitation and a moist turbulence scheme that includes an option in its entrainment closure of a simple parameterization for the effect of cloud-water sedimentation. Single column model (SCM) simulations are compared to LES results for a stratocumulus case study and show that invoking the sedimentation-entrainment parameterization option indeed reduces the dependence of cloud liquid water path on increasing aerosol concentrations. Impacts of variations of the SCM configuration and the sedimentation-entrainment parameterization will be explored. Its impact on global aerosol indirect forcing in the framework of idealized atmospheric GCM simulations will also be assessed.

  8. Simulations of the Mid-Pliocene Warm Period Using Two Versions of the NASA-GISS ModelE2-R Coupled Model

    Science.gov (United States)

    Chandler, M. A.; Sohl, L. E.; Jonas, J. A.; Dowsett, H. J.; Kelley, M.

    2013-01-01

    The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer-climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams (GM) ocean mixing scheme, which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration that the consequences of code improvements as seen in modern climate control runs do not necessarily portend their impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasize features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model. Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterizations in the ocean model, have led

  9. Climate response to projected changes in short-lived species under an A1B scenario from 2000-2050 in the GISS climate model

    Energy Technology Data Exchange (ETDEWEB)

    Menon, Surabi; Shindell, Drew T.; Faluvegi, Greg; Bauer, Susanne E.; Koch, Dorothy M.; Unger, Nadine; Menon, Surabi; Miller, Ron L.; Schmidt, Gavin A.; Streets, David G.

    2007-03-26

    We investigate the climate forcing from and response to projected changes in short-lived species and methane under the A1B scenario from 2000-2050 in the GISS climate model. We present a meta-analysis of new simulations of the full evolution of gas and aerosol species and other existing experiments with variations of the same model. The comparison highlights the importance of several physical processes in determining radiative forcing, especially the effect of climate change on stratosphere-troposphere exchange, heterogeneous sulfate-nitrate-dust chemistry, and changes in methane oxidation and natural emissions. However, the impact of these fairly uncertain physical effects is substantially less than the difference between alternative emission scenarios for all short-lived species. The net global mean annual average direct radiative forcing from the short-lived species is 0.02 W/m² or less in our projections, as substantial positive ozone forcing is largely offset by negative aerosol direct forcing. Since aerosol reductions also lead to a reduced indirect effect, the global mean surface temperature warms by ~0.07 °C by 2030 and ~0.13 °C by 2050, adding 19% and 17%, respectively, to the warming induced by long-lived greenhouse gases. Regional direct forcings are large, up to 3.8 W/m². The ensemble-mean climate response shows little regional correlation with the spatial pattern of the forcing, however, suggesting that oceanic and atmospheric mixing generally overwhelms the effect of even large localized forcings. Exceptions are the polar regions, where ozone and aerosols may induce substantial seasonal climate changes.

  10. Evaluation of Aerosol Mixing State Classes in the GISS Modele-matrix Climate Model Using Single-particle Mass Spectrometry Measurements

    Science.gov (United States)

    Bauer, Susanne E.; Ault, Andrew; Prather, Kimberly A.

    2013-01-01

    Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state, which describes how chemical species are mixed at the single-particle level, provides critical information on microphysical characteristics that determine the interaction of aerosols with the climate system. The evaluation of mixing state has become the next challenge. This study uses aerosol time-of-flight mass spectrometry (ATOFMS) data and compares the results to those of the Goddard Institute for Space Studies modelE-MATRIX (Multiconfiguration Aerosol TRacker of mIXing state) model, a global climate model that includes a detailed aerosol microphysical scheme. We use data from field campaigns that examine a variety of air mass regimes (urban, rural, and maritime). At all locations, including polluted areas in California (Riverside, La Jolla, and Long Beach), a remote location in the Sierra Nevada Mountains (Sugar Pine), and observations from Jeju (South Korea), the majority of aerosol species are internally mixed. Coarse aerosol particles, those above 1 micron, are typically aged, such as coated dust or reacted sea-salt particles. Particles below 1 micron contain large fractions of organic material, internally mixed with sulfate and black carbon, and few external mixtures. We conclude that observations taken over multiple weeks characterize typical air mass types at a given location well; however, due to the instrumentation, we could not evaluate mass budgets. These results represent the first detailed comparison of single-particle mixing states in a global climate model with real-time single-particle mass spectrometry data, an important step in improving the representation of mixing state in global climate models.

  11. Interactive ozone and methane chemistry in GISS-E2 historical and future climate simulations

    Directory of Open Access Journals (Sweden)

    D. T. Shindell

    2013-03-01

    The new-generation GISS climate model includes fully interactive chemistry related to ozone in historical and future simulations, and interactive methane in future simulations. Evaluation of ozone, its tropospheric precursors, and methane shows that the model captures much of the large-scale spatial structure seen in recent observations. While the model is much improved compared with the previous chemistry-climate model, especially for ozone seasonality in the stratosphere, there is still slightly too rapid stratospheric circulation, too little stratosphere-to-troposphere ozone flux in the Southern Hemisphere, and an Antarctic ozone hole that is too large and persists too long. Quantitative metrics of spatial and temporal correlations with satellite datasets, as well as spatial autocorrelation to examine transport and mixing, are presented to document improvements in model skill and provide a benchmark for future evaluations. The difference in radiative forcing (RF) calculated using modeled tropospheric ozone versus tropospheric ozone observed by TES is only 0.016 W m−2. Historical 20th-century simulations show a steady increase in whole-atmosphere ozone RF through 1970, after which there is a decrease through 2000 due to stratospheric ozone depletion. Ozone forcing increases throughout the 21st century under RCP8.5, owing to a projected recovery of stratospheric ozone depletion and increases in methane, but decreases under RCP4.5 and RCP2.6 due to reductions in emissions of other ozone precursors. RF from methane is 0.05 to 0.18 W m−2 higher in our model calculations than in the RCP RF estimates. The surface temperature response to ozone through 1970 follows the increase in forcing due to tropospheric ozone. After that time, surface temperatures decrease as ozone RF declines due to stratospheric depletion. The stratospheric ozone depletion also induces substantial changes in surface winds and the Southern Ocean circulation, which may play a role in

  12. The MJO Transition from Shallow to Deep Convection in CloudSat/CALIPSO Data and GISS GCM Simulations

    Science.gov (United States)

    DelGenio, Anthony G.; Chen, Yonghua; Kim, Daehyun; Yao, Mao-Sung

    2013-01-01

    The relationship between convective penetration depth and tropospheric humidity is central to recent theories of the Madden-Julian oscillation (MJO). It has been suggested that general circulation models (GCMs) poorly simulate the MJO because they fail to gradually moisten the troposphere by shallow convection and simulate a slow transition to deep convection. CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data are analyzed to document the variability of convection depth and its relation to water vapor during the MJO transition from shallow to deep convection and to constrain GCM cumulus parameterizations. Composites of cloud occurrence for 10 MJO events show the anticipated MJO cloud structure: shallow and congestus clouds in advance of the peak, deep clouds near the peak, and upper-level anvils after the peak. Cirrus clouds are also frequent in advance of the peak. The Advanced Microwave Scanning Radiometer for EOS (AMSR-E) column water vapor (CWV) increases by ~5 mm during the shallow-deep transition phase, consistent with the idea of moisture preconditioning. Echo-top height of clouds rooted in the boundary layer increases sharply with CWV, with large variability in depth when CWV is between ~46 and ~68 mm. International Satellite Cloud Climatology Project cloud classifications reproduce these climatological relationships but correctly identify congestus-dominated scenes only about half the time. A version of the Goddard Institute for Space Studies Model E2 (GISS-E2) GCM with strengthened entrainment and rain evaporation that produces MJO-like variability also reproduces the shallow-deep convection transition, including the large variability of cloud-top height at intermediate CWV values. The variability is due to small grid-scale relative humidity and lapse rate anomalies for similar values of CWV.

  13. Multiple GISS AGCM Hindcasts and MSU Versions of 1979-1998

    Science.gov (United States)

    Shah, Kathryn Pierce; Rind, David; Druyan, Leonard; Lonergan, Patrick; Chandler, Mark

    1998-01-01

    Multiple realizations of the 1979-1998 time period have been simulated by the Goddard Institute for Space Studies Atmospheric General Circulation Model (GISS AGCM) to explore its responsiveness to accumulated forcings, particularly over sensitive agricultural regions. A microwave radiative transfer postprocessor has produced the AGCM's lower-tropospheric, tropospheric, and lower-stratospheric brightness temperature (Tb) time series for correlations with the various Microwave Sounding Unit (MSU) time series available. MSU maps of monthly means and anomalies were also used to assess the AGCM's mean annual cycle and regional variability. Seven realizations by the AGCM were forced by observed sea surface temperatures (SSTs) through 1992 to gather rough standard deviations associated with internal model variability. Subsequent runs hindcast January 1979 through April 1998 with an accumulation of forcings: observed SSTs, greenhouse gases, stratospheric volcanic aerosols, stratospheric and tropospheric ozone, and tropospheric sulfate and black carbon aerosols. The goal of narrowing gaps between the AGCM and MSU time series was complicated by the MSU time series, by Tb simulation concerns, and by unforced climatic variability in the AGCM and in the real world. Lower-stratospheric Tb correlations between the AGCM and MSU for 1979-1998 reached as high as 0.91 ± 0.16 globally with SST, greenhouse gas, volcanic aerosol, stratospheric ozone, and tropospheric aerosol forcings. Mid-tropospheric Tb correlations reached as high as 0.66 ± 0.04 globally and 0.84 ± 0.02 in the tropics. Oceanic lower-tropospheric Tb correlations similarly reached 0.61 ± 0.06 globally and 0.79 ± 0.02 in the tropics. Of the sensitive agricultural areas considered, Nordeste in northeastern Brazil was simulated best, with mid-tropospheric Tb correlations up to 0.75 ± 0.03. The two other agricultural regions, in Africa and in the northern mid-latitudes, suffered from higher levels of non-SST variability. Zimbabwe
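Mechanically, the anomaly correlations quoted in this record are correlations of deseasonalized monthly series. A minimal sketch with synthetic data (seeded random numbers standing in for the AGCM and MSU Tb records; nothing here comes from the actual datasets):

```python
import numpy as np

# Synthetic monthly Tb series sharing a seasonal cycle and an interannual signal
rng = np.random.default_rng(1)
years, months = 20, 240  # stand-in for 1979-1998
seasonal = np.tile(np.sin(2 * np.pi * np.arange(12) / 12), years)
shared = np.repeat(rng.standard_normal(years), 12)  # common interannual anomaly
msu_tb = seasonal + shared + 0.1 * rng.standard_normal(months)
agcm_tb = seasonal + shared + 0.1 * rng.standard_normal(months)

def monthly_anomalies(x, nmonths=12):
    """Subtract each calendar month's climatology from the series."""
    clim = x.reshape(-1, nmonths).mean(axis=0)
    return x - np.tile(clim, x.size // nmonths)

# Correlation of deseasonalized series, analogous to the Tb correlations above
r = np.corrcoef(monthly_anomalies(agcm_tb), monthly_anomalies(msu_tb))[0, 1]
```

Removing the monthly climatology first matters: otherwise the shared seasonal cycle inflates the correlation regardless of how well interannual variability is reproduced.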

  14. Animating climate model data

    Science.gov (United States)

    DaPonte, John S.; Sadowski, Thomas; Thomas, Paul

    2006-05-01

    This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Studies (GISS). Animations of output from a climate model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared to similar results obtained from satellite imagery. The results presented below will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions for a variety of time frames.

  15. Using Self Organizing Maps to evaluate the NASA GISS AR5 SCM at the ARM SGP Site

    Science.gov (United States)

    Dong, X.; Kennedy, A. D.; Xi, B.

    2010-12-01

    Cluster analyses have gained popularity in recent years as a way to establish cloud regimes using satellite and radar cloud data. These regimes can then be used to evaluate climate models or to determine which large-scale or subgrid processes are responsible for cloud formation. An alternative approach is to first classify the meteorological regimes (i.e., synoptic pattern and forcing) and then determine which cloud scenes occur. In this study, a competitive neural network known as the Self Organizing Map (SOM) is used to classify synoptic patterns over the Southern Great Plains (SGP) region in order to evaluate simulated clouds from the AR5 version of the NASA GISS Model E Single Column Model (SCM). Specifically, 54-class SOMs have been developed using North American Regional Reanalysis (NARR) variables averaged to 2 x 2.5 degree latitude-longitude grid boxes for a region of 7 x 7 grid boxes centered on the ARM SGP site. Variables input into the SOM include mean sea-level pressure and the horizontal wind components, relative humidity, and geopotential height at the 900, 500, and 300 hPa levels. These SOMs are produced for the winter (DJF), spring (MAM), summer (JJA), and fall (SON) seasons during 1999-2001. This synoptic typing will be associated with observed cloud fractions and forcing properties from the ARM SGP site and then used to evaluate simulated clouds from the SCM. SOMs provide a visually intuitive way to understand their classifications because classes are related to each other in a two-dimensional space. In Figure 1, for example, the reader can easily see that for a 54-class SOM during the winter season, classes with higher 300 hPa mean relative humidities are clustered near each other. This allows the user to identify an apparent relationship between mean 300 hPa RH and the high cloud fraction observed at the ARM SGP site.
Figure 1: Mean high cloud fraction (top panel) and 300 hPa relative humidity (bottom panel) for a 9x6 (54-class) SOM during the winter (DJF) season
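
    The competitive-learning step behind a SOM is compact enough to sketch. The toy example below uses synthetic random vectors standing in for standardized NARR fields, and a small 3x4 map rather than the study's 9x6; it trains the map with a decaying learning rate and Gaussian neighborhood, then assigns each sample to its best-matching class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "synoptic" samples: each row stands in for a flattened,
# standardized meteorological field (random vectors for illustration).
X = rng.normal(size=(500, 8))

rows, cols = 3, 4                  # small 3x4 SOM (the study uses 9x6 = 54)
n_nodes = rows * cols
W = rng.normal(size=(n_nodes, X.shape[1]))     # node weight vectors

# Node coordinates on the 2-D map, used by the neighborhood function.
coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                    # decaying learning rate
    sigma = 2.0 * (1 - t / n_iter) + 0.5           # decaying neighborhood radius
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian neighborhood weights
    W += lr * h[:, None] * (x - W)                 # pull nodes toward the sample

# Classify every sample by its nearest node; neighboring classes on the
# 2-D map end up with similar weight vectors, which is what makes SOM
# panels like Figure 1 visually interpretable.
classes = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```

    Real applications would standardize each input variable and typically use a dedicated SOM library rather than this minimal loop.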

  16. Evaluation of methane emissions from West Siberian wetlands based on inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H-S; Inoue, G [Research Institute for Humanity and Nature, 457-4 Motoyama, Kamigamo, Kita-ku, Kyoto 603-8047 (Japan); Maksyutov, S; Machida, T [National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506 (Japan); Glagolev, M V [Lomonosov Moscow State University, GSP-1, Leninskie Gory, Moscow 119991 (Russian Federation); Patra, P K [Research Institute for Global Change/JAMSTEC, 3173-25 Showa-cho, Kanazawa-ku, Yokohama, Kanagawa 236-0001 (Japan); Sudo, K, E-mail: heonsook.kim@gmail.com [Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan)

    2011-07-15

    West Siberia contains the largest extent of wetlands in the world, including large peat deposits; the wetland area is equivalent to 27% of the total area of West Siberia. This study used inverse modeling to refine emissions estimates for West Siberia using atmospheric CH₄ observations and two wetland CH₄ emissions inventories: (1) the global wetland emissions dataset of the NASA Goddard Institute for Space Studies (the GISS inventory), which includes emission seasons and emission rates based on the climatology of monthly surface air temperature and precipitation, and (2) the West Siberian wetland emissions data (the Bc7 inventory), based on in situ flux measurements and a detailed wetland classification. The two inversions using the GISS and Bc7 inventories estimated the annual mean flux from West Siberian wetlands to be 2.9 ± 1.7 and 3.0 ± 1.4 Tg yr⁻¹, respectively, which is lower than the 6.3 Tg yr⁻¹ predicted in the GISS inventory but similar to that of the Bc7 inventory (3.2 Tg yr⁻¹). The well-constrained monthly fluxes and a comparison between the predicted CH₄ concentrations in the two inversions suggest that the Bc7 inventory predicts the seasonal cycle of West Siberian wetland CH₄ emissions more reasonably, indicating that the GISS inventory predicts too much emission from wetlands in the northern and middle taiga.
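
    A linear-Gaussian (Bayesian synthesis) formulation is one common way to set up such a flux inversion. The sketch below is purely illustrative: all numbers are synthetic, and the matrix H stands in for the transport model's sensitivities of each observation to each regional flux.

```python
import numpy as np

rng = np.random.default_rng(1)

n_regions, n_obs = 4, 60
H = rng.uniform(0.0, 1.0, size=(n_obs, n_regions))   # transport operator (Jacobian)

x_true = np.array([2.0, 5.0, 1.0, 3.0])              # "true" regional fluxes, Tg/yr
y = H @ x_true + rng.normal(0.0, 0.2, size=n_obs)    # synthetic observations + noise

x_prior = np.full(n_regions, 3.0)                    # inventory-based prior fluxes
B = np.eye(n_regions) * 4.0                          # prior error covariance
R = np.eye(n_obs) * 0.04                             # observation error covariance

# Minimum-variance (Bayesian) update: x = x_a + K (y - H x_a)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
A_post = (np.eye(n_regions) - K @ H) @ B             # posterior error covariance
```

    With enough well-distributed observations the posterior fluxes pull toward the truth and the posterior variances shrink below the prior variances, which is the sense in which the monthly fluxes in the study are "well constrained".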

  17. MATRIX (Multiconfiguration Aerosol TRacker of mIXing state): an aerosol microphysical module for global atmospheric models

    OpenAIRE

    Bauer, S. E.; Wright, D.; Koch, D.; Lewis, E. R.; McGraw, R.; Chang, L.-S.; Schwartz, S. E.; Ruedy, R.

    2008-01-01

    A new aerosol microphysical module MATRIX, the Multiconfiguration Aerosol TRacker of mIXing state, and its application in the Goddard Institute for Space Studies (GISS) climate model (ModelE) are described. This module, which is based on the quadrature method of moments (QMOM), represents nucleation, condensation, coagulation, internal and external mixing, and cloud-drop activation and provides aerosol particle mass and number concentration and particle size information for up to 16 mixed-mod...

  18. Software Engineering Tools for Scientific Models

    Science.gov (United States)

    Abrams, Marc; Saboo, Pallabi; Sonsini, Mike

    2013-01-01

    Software tools were constructed to address issues the NASA Fortran development community faces, and they were tested on real models currently in use at NASA. These proof-of-concept tools address the High-End Computing Program and the Modeling, Analysis, and Prediction Program. Two examples are the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) atmospheric model in Cell Fortran on the Cell Broadband Engine, and the Goddard Institute for Space Studies (GISS) coupled atmosphere-ocean model called ModelE, written in fixed-format Fortran.

  19. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    Science.gov (United States)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts, not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM, run on the vertical grid used for CMIP-5 runs of the GCM, does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with its process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  20. Using beryllium-7 to assess cross-tropopause transport in global models

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hongyu [National Institute of Aerospace, Hampton, VA (United States); Considine, David B. [NASA Langley Research Center, Hampton, VA (United States); Horowitz, Larry W. [NOAA Geophysical Fluid Dynamics Laboratory, Princeton, NJ (United States); and others

    2016-07-01

    We use the Global Modeling Initiative (GMI) modeling framework to assess the utility of cosmogenic beryllium-7 (⁷Be), a natural aerosol tracer, for evaluating cross-tropopause transport in global models. The GMI chemical transport model (CTM) was used to simulate atmospheric ⁷Be distributions using four different meteorological data sets (GEOS1-STRAT DAS, GISS II′ GCM, fvGCM, and GEOS4-DAS), featuring significantly different stratosphere-troposphere exchange (STE) characteristics. The simulations were compared with the upper troposphere/lower stratosphere (UT/LS) ⁷Be climatology constructed from ~25 years of aircraft and balloon data, as well as climatological records of surface concentrations and deposition fluxes. Comparison of the fraction of surface air of stratospheric origin estimated from the ⁷Be simulations with observationally derived estimates indicates excessive cross-tropopause transport at mid-latitudes in simulations using GEOS1-STRAT and at high latitudes in those using GISS II′ meteorological data. These simulations also overestimate ⁷Be deposition fluxes at mid-latitudes (GEOS1-STRAT) and at high latitudes (GISS II′), respectively. We show that excessive cross-tropopause transport of ⁷Be corresponds to an overestimated stratospheric contribution to tropospheric ozone. Our perspectives on STE in these meteorological fields based on ⁷Be simulations are consistent with previous modeling studies of tropospheric ozone using the same meteorological fields. We conclude that the observational constraints for ⁷Be and observed ⁷Be total deposition fluxes can be used routinely as a first-order assessment of cross-tropopause transport in global models.

  1. Near-Surface Meteorology During the Arctic Summer Cloud Ocean Study (ASCOS): Evaluation of Reanalyses and Global Climate Models.

    Science.gov (United States)

    De Boer, G.; Shupe, M.D.; Caldwell, P.M.; Bauer, Susanne E.; Persson, O.; Boyle, J.S.; Kelley, M.; Klein, S.A.; Tjernstrom, M.

    2014-01-01

    Atmospheric measurements from the Arctic Summer Cloud Ocean Study (ASCOS) are used to evaluate the performance of three atmospheric reanalyses (the European Centre for Medium-Range Weather Forecasts (ECMWF) Interim reanalysis, the National Centers for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) reanalysis, and the NCEP-DOE (Department of Energy) reanalysis) and two global climate models (CAM5 (Community Atmosphere Model 5) and NASA GISS (Goddard Institute for Space Studies) ModelE2) in simulation of the high Arctic environment. Quantities analyzed include near-surface meteorological variables such as temperature, pressure, humidity and winds, surface-based estimates of cloud and precipitation properties, the surface energy budget, and lower-atmospheric temperature structure. In general, the models perform well in simulating large-scale dynamical quantities such as pressure and winds. Near-surface temperature and lower-atmospheric stability, along with surface energy budget terms, are not as well represented, due largely to errors in the simulation of cloud occurrence, phase and altitude. Additionally, a development version of CAM5, which features improved handling of cloud macrophysics, was shown to improve the simulation of cloud properties and liquid water amount. The ASCOS period additionally provides an excellent example of the benefits gained by evaluating individual budget terms rather than simply the net end product, with large compensating errors between individual surface energy budget terms producing the best net energy budget.

  2. Incorporating GISs (geographic information systems) into decision support systems: Where have we come from and where do we need to go

    Energy Technology Data Exchange (ETDEWEB)

    Honea, R.B.; Hake, K.A.; Durfee, R.C.

    1990-01-01

    This paper reviews some of the pitfalls in the design and use of GISs that were encountered with both large and small projects for a variety of sponsors. The stand-alone, self-sufficient GIS world prevalent today does not adequately meet the needs of decision support systems. Developers of these systems are left with the difficult task of software system integration, which generally produces less than adequate results. Modularization of GIS concepts is critical to the solution, which involves the establishment of a GIS toolkit. More functionality and flexibility are introduced through this approach, so that GIS may be truly "applied" in system development projects.

  3. Do responses to different anthropogenic forcings add linearly in climate models?

    International Nuclear Information System (INIS)

    Marvel, Kate; Schmidt, Gavin A; LeGrande, Allegra N; Nazarenko, Larissa; Shindell, Drew; Bonfils, Céline; Tsigaridis, Kostas

    2015-01-01

    Many detection and attribution and pattern scaling studies assume that the global climate response to multiple forcings is additive: that the response over the historical period is statistically indistinguishable from the sum of the responses to individual forcings. Here, we use the NASA Goddard Institute for Space Studies (GISS) and National Center for Atmospheric Research Community Climate System Model (CCSM4) simulations from the CMIP5 archive to test this assumption for multi-year trends in global-average, annual-average temperature and precipitation at multiple timescales. We find that responses in models forced by pre-computed aerosol and ozone concentrations are generally additive across forcings. However, we demonstrate that there are significant nonlinearities in precipitation responses to different forcings in a configuration of the GISS model that interactively computes these concentrations from precursor emissions. We attribute these to differences in ozone forcing arising from interactions between forcing agents. Our results suggest that attribution to specific forcings may be complicated in a model with fully interactive chemistry and may provide motivation for other modeling groups to conduct further single-forcing experiments. (letter)
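
    The additivity assumption being tested can be illustrated with a toy ensemble calculation: build single-forcing and all-forcing trend distributions (here additive by construction, with synthetic internal variability), then compare the sum of the single-forcing trends against the all-forcing trend relative to the ensemble spread. All numbers below are illustrative, not values from the CMIP5 runs.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_years = 10, 50
years = np.arange(n_years)

def ensemble(trend, noise=0.1):
    """Ensemble of annual-mean series: a forced linear trend plus
    white-noise 'internal variability' for each member."""
    return trend * years + rng.normal(0.0, noise, size=(n_ens, n_years))

ghg  = ensemble(0.020)    # greenhouse-gas-only response (deg/yr, illustrative)
aer  = ensemble(-0.008)   # aerosol-only response
hist = ensemble(0.012)    # all-forcings response (additive by construction)

def trends(runs):
    # Least-squares trend for each ensemble member (columns of runs.T).
    return np.polyfit(years, runs.T, 1)[0]

sum_single = trends(ghg) + trends(aer)
hist_trend = trends(hist)

# Additivity check: is the mean difference small compared with the
# combined standard error of the two trend distributions?
diff = hist_trend.mean() - sum_single.mean()
spread = np.sqrt(hist_trend.var(ddof=1) / n_ens + sum_single.var(ddof=1) / n_ens)
```

    In this linear toy the difference is consistent with zero; the study's point is that a model with interactive chemistry can produce a `diff` well outside the spread, because the forcing agents interact.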

  4. An observational and modeling study of the regional impacts of climate variability

    Science.gov (United States)

    Horton, Radley M.

    Climate variability has large impacts on humans and their agricultural systems. Farmers are at the center of this agricultural network, but it is often agricultural planners---regional planners, extension agents, commodity groups and cooperatives---that translate climate information for users. Global climate models (GCMs) are a leading tool for understanding and predicting climate and climate change. Armed with climate projections and forecasts, agricultural planners adapt their decision-making to optimize outcomes. This thesis explores what GCMs can, and cannot, tell us about climate variability and change at regional scales. The question is important, since high-quality regional climate projections could assist farmers and regional planners in key management decisions, contributing to better agricultural outcomes. To answer these questions, climate variability and its regional impacts are explored in observations and models for the current and future climate. The goals are to identify impacts of observed variability, assess model simulation of variability, and explore how climate variability and its impacts may change under enhanced greenhouse warming. Chapter One explores how well Goddard Institute for Space Studies (GISS) atmospheric models, forced by historical sea surface temperatures (SST), simulate climatology and large-scale features during the exceptionally strong 1997--1999 El Nino Southern Oscillation (ENSO) cycle. Reasonable performance in this 'proof of concept' test is considered a minimum requirement for further study of variability in models. All model versions produce appropriate local changes with ENSO, indicating that with correct ocean temperatures these versions are capable of simulating the large-scale effects of ENSO around the globe. A high vertical resolution model (VHR) provides the best simulation. Evidence is also presented that SST anomalies outside the tropical Pacific may play a key role in generating remote teleconnections even

  5. GISS Surface Temperature Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GISTEMP dataset is a global 2x2 gridded temperature anomaly dataset. Temperature data is updated around the middle of every month using current data files from...

  6. Modeling and cellular studies

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    Testing the applicability of mathematical models with carefully designed experiments is a powerful tool in the investigations of the effects of ionizing radiation on cells. The modeling and cellular studies complement each other, for modeling provides guidance for designing critical experiments which must provide definitive results, while the experiments themselves provide new input to the model. Based on previous experimental results the model for the accumulation of damage in Chlamydomonas reinhardi has been extended to include various multiple two-event combinations. Split dose survival experiments have shown that models tested to date predict most but not all the observed behavior. Stationary-phase mammalian cells, required for tests of other aspects of the model, have been shown to be at different points in the cell cycle depending on how they were forced to stop proliferating. These cultures also demonstrate different capacities for repair of sublethal radiation damage

  7. A Global Modeling Study on Carbonaceous Aerosol Microphysical Characteristics and Radiative Effects

    Science.gov (United States)

    Bauer, S. E.; Menon, S.; Koch, D.; Bond, T. C.; Tsigaridis, K.

    2010-01-01

    Recently, attention has been drawn towards black carbon aerosols as a short-term climate warming mitigation candidate. However, the global and regional impacts of the direct, indirect and semi-direct aerosol effects are highly uncertain, due to the complex nature of aerosol evolution and the way that mixed, aged aerosols interact with clouds and radiation. A detailed aerosol microphysical scheme, MATRIX, embedded within the GISS climate model is used in this study to present a quantitative assessment of the impact of microphysical processes involving black carbon, such as emission size distributions and optical properties, on aerosol cloud activation and radiative effects. Our best estimate for the net direct and indirect aerosol radiative flux change between 1750 and 2000 is -0.56 W/m2. However, the direct and indirect aerosol effects are quite sensitive to the black and organic carbon size distribution and consequent mixing state. The net radiative flux change can vary between -0.32 and -0.75 W/m2 depending on these carbonaceous particle properties at emission. Taking internally mixed black carbon particles into account allows us to simulate aerosol absorption correctly. Absorption of black carbon aerosols is amplified by sulfate and nitrate coatings and, even more strongly, by organic coatings. Black carbon mitigation scenarios generally showed reduced radiative fluxes when sources with a large proportion of black carbon, such as diesel, are reduced; however, reducing sources with a larger organic carbon component as well, such as bio-fuels, does not necessarily lead to a reduction in positive radiative flux.

  8. Chemistry-Climate Interactions in the Goddard Institute for Space Studies General Circulation Model. 2. New Insights into Modeling the Pre-Industrial Atmosphere

    Science.gov (United States)

    Grenfell, J. Lee; Shindell, D. T.; Koch, D.; Rind, D.; Hansen, James E. (Technical Monitor)

    2002-01-01

    We investigate the chemical (hydroxyl and ozone) and dynamical response to changing from present-day to pre-industrial conditions in the Goddard Institute for Space Studies General Circulation Model (GISS GCM). We identify three main improvements not included by many other works. Firstly, our model includes interactive cloud calculations. Secondly, we reduce sulfate aerosol, which impacts NOx partitioning and hence Ox distributions. Thirdly, we reduce sea surface temperatures and increase ocean ice coverage, which impact water vapor and ground albedo respectively. Changing the ocean data (hence water vapor and ozone) produces a potentially important feedback between the Hadley circulation and convective cloud cover. Our present-day run (run 1, the control run) had a global mean OH value of 9.8 × 10⁵ molecules/cc. For our best estimate of pre-industrial conditions (run 2), which featured modified chemical emissions, sulfate aerosol and sea surface temperatures/ocean ice, this value changed to 10.2 × 10⁵ molecules/cc. Reducing only the chemical emissions of run 1 to pre-industrial levels (run 3) resulted in this value increasing to 10.6 × 10⁵ molecules/cc. Reducing the sulfate in run 3 to pre-industrial levels (run 4) resulted in a small increase in global mean OH (10.7 × 10⁵ molecules/cc). Changing the ocean data in run 4 to pre-industrial levels (run 5) led to a reduction in this value to 10.3 × 10⁵ molecules/cc. Mean tropospheric ozone burdens were 262, 181, 180, 180, and 182 Tg for runs 1-5 respectively.

  9. The dependence of the ocean's MOC on mesoscale eddy diffusivities: A model study

    Science.gov (United States)

    Marshall, John; Scott, Jeffery R.; Romanou, Anastasia; Kelley, Maxwell; Leboissetier, Anthony

    2017-01-01

    The dependence of the depth and strength of the ocean's global meridional overturning cells (MOC) on the specification of mesoscale eddy diffusivity (K) is explored in two ocean models. The GISS and MIT ocean models are driven by the same prescribed forcing fields, configured in similar ways, spun up to equilibrium for a range of values of K, and the resulting MOCs mapped and documented. Scaling laws implicit in modern theories of the MOC are used to rationalize the results. In all calculations, the K used in the computation of the eddy-induced circulation and that used in the representation of eddy stirring along neutral surfaces are set to the same value, which is changed across experiments. We are able to connect changes in the strength and depth of the Atlantic MOC, the Southern Ocean upwelling MOC, and the deep cell emanating from Antarctica to changes in K.

  10. MATRIX (Multiconfiguration Aerosol TRacker of mIXing state): an aerosol microphysical module for global atmospheric models

    Directory of Open Access Journals (Sweden)

    S. E. Bauer

    2008-10-01

    A new aerosol microphysical module MATRIX, the Multiconfiguration Aerosol TRacker of mIXing state, and its application in the Goddard Institute for Space Studies (GISS) climate model (ModelE) are described. This module, which is based on the quadrature method of moments (QMOM), represents nucleation, condensation, coagulation, internal and external mixing, and cloud-drop activation, and provides aerosol particle mass and number concentration and particle size information for up to 16 mixed-mode aerosol populations. Internal and external mixing among the aerosol components sulfate, nitrate, ammonium, carbonaceous aerosols, dust and sea-salt particles is represented. The solubility of each aerosol population, which is explicitly calculated based on its soluble and insoluble components, enables calculation of the dependence of cloud-drop activation on the microphysical characterization of multiple soluble aerosol populations.

    A detailed model description and results of box-model simulations of various aerosol population configurations are presented. The box-model experiments demonstrate the dependence of the cloud-activating aerosol number concentration on the aerosol population configuration; comparisons to sectional models are quite favorable. MATRIX is incorporated into the GISS climate model and simulations are carried out primarily to assess its performance and efficiency for global-scale atmospheric model application. Simulation results were compared with aircraft and station measurements of aerosol mass and number concentration and particle size to assess the ability of the new method to yield data suitable for such comparison. The model accurately captures the observed size distributions in the Aitken and accumulation modes up to particle diameters of 1 μm, in which sulfate, nitrate, black and organic carbon are predominantly located; however, the model underestimates coarse-mode number concentration and size, especially in the marine environment.
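
    Moment-based schemes like MATRIX track low-order moments of each population rather than a full size distribution. For a lognormal mode the diameter moments, and hence number, surface and mass concentrations, have closed forms, which the sketch below evaluates for illustrative Aitken-mode values (these parameters are examples, not MATRIX's actual population settings).

```python
import numpy as np

# Lognormal mode: number concentration N, geometric mean diameter Dg,
# geometric standard deviation sigma_g (illustrative Aitken-mode values).
N = 1.0e3           # particles / cm^3
Dg = 0.05           # um
sigma_g = 1.7
ln2s = np.log(sigma_g) ** 2

def moment(k):
    """k-th diameter moment of a lognormal:
    M_k = N * Dg^k * exp(k^2/2 * ln^2(sigma_g))."""
    return N * Dg**k * np.exp(0.5 * k**2 * ln2s)

rho = 1.8e-12       # particle density in g/um^3 (i.e. 1.8 g/cm^3)

number  = moment(0)                        # total number, = N
surface = np.pi * moment(2)                # total surface area, um^2 / cm^3
mass    = rho * (np.pi / 6.0) * moment(3)  # total mass, g / cm^3
```

    A quadrature method of moments closes the microphysical equations using only such moments, choosing quadrature points and weights that reproduce them, so no fixed size bins are needed.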

  11. ICRF edge modeling studies

    Energy Technology Data Exchange (ETDEWEB)

    Lehrman, I.S. (Grumman Corp. Research Center, Princeton, NJ (USA)); Colestock, P.L. (Princeton Univ., NJ (USA). Plasma Physics Lab.)

    1990-04-01

    Theoretical models have been developed, and are currently being refined, to explain the edge plasma-antenna interaction that occurs during ICRF heating. The periodic structure of a Faraday-shielded antenna is found to result in a strong ponderomotive force in the vicinity of the antenna. A fluid model, which incorporates the ponderomotive force, shows an increase in transport to the Faraday shield. A kinetic model shows that the strong antenna near fields act to increase the energy of deuterons which strike the shield, thereby increasing the sputtering of shield material. Estimates of edge impurity harmonic heating show no significant heating for either in-phase or out-of-phase antenna operation. Additionally, a particle model for electrons near the shield shows that heating results from the parallel electric field associated with the fast wave. A quasilinear model for edge electron heating is presented and compared to the particle calculations. The models' predictions are shown to be consistent with measurements of enhanced transport. (orig.)

  12. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    A DANESS code modeling study has been performed. The DANESS code is widely used for dynamic fuel cycle analysis. The Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-demand Scenario Model, New Reactor Capacity Decision Model, Reactor and Fuel Cycle Facility History Model, and Fuel Cycle Model, are investigated. In addition, some models in the interface module are refined and inserted for the Korean nuclear fuel cycle model. Application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios.

  13. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included: cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  14. Assessing and Upgrading Ocean Mixing for the Study of Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Lindo, F.; Tulsee, V.; Canuto, V.; Cheng, Y.; Dubovikov, M. S.; Leboissetier, A.

    2016-12-01

    Climate is critical. Climate variability affects us all, and climate change is a burning issue. Droughts, floods and other extreme events, and global warming's effects on these and on problems such as sea-level rise and ecosystem disruption, threaten lives. Citizens must be informed to make decisions concerning climate, such as "business as usual" versus mitigating emissions to keep warming within bounds. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. To make useful predictions we must realistically model each component of the climate system, including the ocean, whose critical role includes transporting and storing heat and dissolved CO2. We need physically based parameterizations of key ocean processes that cannot be represented explicitly in a global climate model, e.g. vertical and lateral mixing. The NASA-GISS turbulence group uses theory to model mixing, including: (1) a comprehensive scheme for small-scale vertical mixing, covering convection and shear, internal waves and double-diffusion, and bottom tides; (2) a new parameterization for the lateral and vertical mixing by mesoscale eddies. For better understanding we write our own programs. To assess the modelling, MATLAB programs visualize and calculate statistics, including means, standard deviations and correlations, on NASA-GISS OGCM output with different mixing schemes, and help us study drift from observations. We also try to upgrade the schemes, e.g. the bottom tidal mixing parameterization's roughness, calculated from high-resolution topographic data using Gaussian weighting functions with cut-offs. We study the effects of their parameters to improve them. A FORTRAN program extracts topography data subsets of manageable size for a MATLAB program, tested on idealized cases, to visualize and calculate roughness on.
Students are introduced to modeling a complex system, gain a deeper appreciation of climate science, programming skills and familiarity with MATLAB, while furthering climate
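
    The Gaussian-weighted roughness calculation described above can be sketched directly (here in Python rather than MATLAB, on a synthetic topography tile; the weighting scale `sigma` and the cutoff are illustrative, not the project's actual values).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic high-resolution topography tile (meters), standing in for the
# real dataset the FORTRAN extractor would subset.
topo = rng.normal(0.0, 200.0, size=(200, 200))

def roughness(z, i, j, sigma=10.0, cutoff=3.0):
    """Standard deviation of height about a Gaussian-weighted local mean,
    with the weight truncated at `cutoff` * sigma grid points."""
    r = int(cutoff * sigma)
    i0, i1 = max(i - r, 0), min(i + r + 1, z.shape[0])
    j0, j1 = max(j - r, 0), min(j + r + 1, z.shape[1])
    ii, jj = np.meshgrid(np.arange(i0, i1), np.arange(j0, j1), indexing="ij")
    w = np.exp(-((ii - i) ** 2 + (jj - j) ** 2) / (2 * sigma**2))
    w /= w.sum()                                # normalize the weights
    patch = z[i0:i1, j0:j1]
    mean = (w * patch).sum()                    # Gaussian-weighted local mean
    return np.sqrt((w * (patch - mean) ** 2).sum())

r_center = roughness(topo, 100, 100)
```

    On real topography this yields large values over ridges and trenches and small values over abyssal plains, which is what the bottom tidal mixing scheme needs as input.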

  15. Evaluating Ice Nucleating Particle Concentrations From Prognostic Dust Minerals in an Earth System Model

    Science.gov (United States)

    Perlwitz, J. P.; Knopf, D. A.; Fridlind, A. M.; Miller, R. L.; Pérez García-Pando, C.; DeMott, P. J.

    2016-12-01

    The effect of aerosol particles on the radiative properties of clouds, the so-called indirect effect of aerosols, is recognized as one of the largest sources of uncertainty in climate prediction. The distributions of water vapor, precipitation, and ice cloud formation are influenced by atmospheric ice formation, thereby modulating cloud albedo and thus climate. It is well known that different particle types possess different ice formation propensities, with mineral dust being a superior ice nucleating particle (INP) compared to soot particles. Furthermore, some dust mineral types are more proficient INPs than others, depending on temperature and relative humidity. In recent work, we presented an improved dust aerosol module in the NASA GISS Earth System ModelE2 with prognostic mineral composition of the dust aerosols, so that there are regional variations in dust composition. We evaluated the predicted mineral fractions of dust aerosols by comparing them to measurements from a compilation of about 60 published literature references. Additionally, the capability of the model to reproduce the elemental composition of the simulated dust has been tested at the Izana Observatory on Tenerife, Canary Islands, which is located offshore of Africa and where frequent dust events are observed. We have been able to show that the new approach delivers a robust improvement in the predicted mineral fractions and elemental composition of dust. In the current study, we use three-dimensional dust mineral fields and thermodynamic conditions simulated with GISS ModelE to calculate offline the INP concentrations derived using different ice nucleation parameterizations currently under discussion. We evaluate the calculated INP concentrations from the different parameterizations by comparing them to INP concentrations from field measurements.

  16. Influence of daily versus monthly fire emissions on atmospheric model applications in the tropics

    Science.gov (United States)

    Marlier, M. E.; Voulgarakis, A.; Faluvegi, G.; Shindell, D. T.; DeFries, R. S.

    2012-12-01

Fires are widely used throughout the tropics to create and maintain areas for agriculture, but are also significant contributors to atmospheric trace gas and aerosol concentrations. However, the timing and magnitude of fire activity can vary strongly by year and ecosystem type. For example, frequent, low-intensity fires dominate in African savannas, whereas Southeast Asian peatland forests are susceptible to huge pulses of emissions during regional El Niño droughts. Despite the potential implications for modeling interactions with atmospheric chemistry and transport, fire emissions have commonly been input into global models at a monthly resolution. Recognizing the uncertainty that this can introduce, several datasets have parsed fire emissions to daily and sub-daily scales with satellite active fire detections. In this study, we explore differences between utilizing the monthly and daily Global Fire Emissions Database version 3 (GFED3) products as inputs into the NASA GISS-E2 composition climate model. We aim to understand how the choice of the temporal resolution of fire emissions affects uncertainty with respect to several common applications of global models: atmospheric chemistry, air quality, and climate. Focusing our analysis on tropical ozone, carbon monoxide, and aerosols, we compare modeled concentrations with available ground and satellite observations. We find that increasing the temporal frequency of fire emissions from monthly to daily can improve correlations with observations, predominantly in areas or during seasons more heavily affected by fires. Differences between the two datasets are more evident with public health applications: daily-resolution fire emissions increase the number of days exceeding World Health Organization air quality targets.
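The temporal disaggregation described above can be sketched very simply: a monthly emission total is redistributed across days in proportion to satellite active-fire counts, so that short burning episodes are preserved instead of being smeared over the month. This is an illustrative sketch of the idea, not the actual GFED3 daily algorithm; all numbers are hypothetical.

```python
# Redistribute a monthly emission total to daily values in proportion
# to daily active-fire detections; fall back to a uniform split when
# no fires were detected that month.

def daily_from_monthly(monthly_total, daily_fire_counts):
    """Split monthly_total across days weighted by active-fire counts."""
    total_counts = sum(daily_fire_counts)
    ndays = len(daily_fire_counts)
    if total_counts == 0:
        return [monthly_total / ndays] * ndays
    return [monthly_total * c / total_counts for c in daily_fire_counts]

# Hypothetical 30-day month with a mid-month burning episode.
counts = [0, 0, 1, 5, 20, 40, 20, 5, 1, 0] + [0] * 20
daily = daily_from_monthly(92.0, counts)   # e.g. 92 Gg CO for the month
assert abs(sum(daily) - 92.0) < 1e-9       # monthly mass is conserved
```

The monthly and daily inputs carry the same total emissions; only their timing differs, which is exactly the sensitivity the study isolates.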

  17. Model quality and safety studies

    DEFF Research Database (Denmark)

    Petersen, K.E.

    1997-01-01

    The paper describes the EC initiative on model quality assessment and emphasizes some of the problems encountered in the selection of data from field tests used in the evaluation process. Further, it discusses the impact of model uncertainties in safety studies of industrial plants. The model...... that most of these have never been through a procedure of evaluation, but nonetheless are used to assist in making decisions that may directly affect the safety of the public and the environment. As a major funder of European research on major industrial hazards, DGXII is conscious of the importance......-tain model is appropriate for use in solving a given problem. Further, the findings from the REDIPHEM project related to dense gas dispersion will be highlighted. Finally, the paper will discuss the need for model quality assessment in safety studies....

  18. Downscaling a global climate model to simulate climate change over the US and the implication on regional and urban air quality

    Directory of Open Access Journals (Sweden)

    M. Trail

    2013-09-01

    Full Text Available Climate change can exacerbate future regional air pollution events by making conditions more favorable to form high levels of ozone. In this study, we use spectral nudging with the Weather Research and Forecasting (WRF) model to downscale results from the NASA GISS ModelE2 Earth system model during the years 2006 to 2010 and 2048 to 2052 over the contiguous United States, in order to compare the resulting meteorological fields from the air quality perspective during the four seasons of five-year historic and future climatological periods. GISS results are used as initial and boundary conditions by the WRF regional climate model (RCM) to produce hourly meteorological fields. The downscaling technique and choice of physics parameterizations used are evaluated by comparing them with in situ observations. This study investigates changes of similar regional climate conditions down to a 12 km by 12 km resolution, as well as the effect of evolving climate conditions on the air quality at major US cities. The high-resolution simulations produce somewhat different results than the coarse-resolution simulations in some regions. Also, through the analysis of the meteorological variables that most strongly influence air quality, we find consistent changes in regional climate that would enhance ozone levels in four regions of the US during fall (western US, Texas, northeastern, and southeastern US), one region during summer (Texas), and one region where changes potentially would lead to better air quality during spring (Northeast). Changes in regional climate that would enhance ozone levels are increased temperatures and stagnation along with decreased precipitation and ventilation. We also find that daily peak temperatures tend to increase in most major cities in the US, which would increase the risk of health problems associated with heat stress. Future work will address a more comprehensive assessment of emissions and chemistry involved in the formation and removal of air pollutants.

  19. Downscaling a Global Climate Model to Simulate Climate Change Impacts on U.S. Regional and Urban Air Quality

    Science.gov (United States)

    Trail, M.; Tsimpidi, A. P.; Liu, P.; Tsigaridis, K.; Hu, Y.; Nenes, A.; Russell, A. G.

    2013-01-01

    Climate change can exacerbate future regional air pollution events by making conditions more favorable to form high levels of ozone. In this study, we use spectral nudging with WRF to downscale NASA GISS ModelE2 Earth system model results during the years 2006 to 2010 and 2048 to 2052 over the continental United States in order to compare the resulting meteorological fields from the air quality perspective during the four seasons of five-year historic and future climatological periods. GISS results are used as initial and boundary conditions by the WRF RCM to produce hourly meteorological fields. The downscaling technique and choice of physics parameterizations used are evaluated by comparing them with in situ observations. This study investigates changes of similar regional climate conditions down to a 12 km by 12 km resolution, as well as the effect of evolving climate conditions on the air quality at major U.S. cities. The high-resolution simulations produce somewhat different results than the coarse-resolution simulations in some regions. Also, through the analysis of the meteorological variables that most strongly influence air quality, we find consistent changes in regional climate that would enhance ozone levels in four regions of the U.S. during fall (Western U.S., Texas, Northeastern, and Southeastern U.S.), one region during summer (Texas), and one region where changes potentially would lead to better air quality during spring (Northeast). We also find that daily peak temperatures tend to increase in most major cities in the U.S., which would increase the risk of health problems associated with heat stress. Future work will address a more comprehensive assessment of emissions and chemistry involved in the formation and removal of air pollutants.

  20. Mathematical study of mixing models

    International Nuclear Information System (INIS)

    Lagoutiere, F.; Despres, B.

    1999-01-01

    This report presents the construction and study of a class of models that describe the behavior of compressible, non-reactive Eulerian fluid mixtures. Mixture models have two different applications. Either they are used to describe physical mixtures, in the case of a true zone of extensive mixing (but then this modelization is incomplete and must be considered only as a point of departure for the elaboration of truly relevant mixture models), or they are used to solve the problem of numerical mixing. This problem appears during the discretization of an interface separating fluids with different equations of state: the zone of numerical mixing is the set of meshes that cover the interface. Attention is focused on numerical mixtures, for which the (physical) hypothesis of non-miscibility yields two equations (the sixth and the eighth of the system). It is important to emphasize that even in the case of a purely numerical mixture, the presence of several fluids in one and the same place (the same mesh) has to be taken into account. This is formalized by allowing mass fractions to take all values between 0 and 1, which is not at odds with the equations that derive from the hypothesis of non-miscibility. One way of looking at things is to consider that there are two scales of observation: the physical scale, at which one observes the separation of fluids, and the numerical scale, given by the fineness of the mesh, at which a mixture appears. In this work, mixtures are considered from the mathematical angle (both in the elaboration phase and during their study). In particular, Chapter 5 shows a result of model degeneration for a non-extended mixing zone (the case of an interface): this justifies the use of the models in the case of numerical mixing. All these models are based on the classical model of non-viscous compressible fluids recalled in Chapter 2. In Chapter 3, the central point of the elaboration of the class of models is

  1. Urban Studies: A Learning Model.

    Science.gov (United States)

    Cooper, Terry L.; Sundeen, Richard

    1979-01-01

    The urban studies learning model described in this article was found to increase students' self-esteem, imbue a more flexible and open perspective, contribute to the capacity for self-direction, produce increases on the feeling reactivity, spontaneity, and acceptance of aggression scales, and expand interpersonal competence. (Author/WI)

  2. Campus network security model study

    Science.gov (United States)

    Zhang, Yong-ku; Song, Li-ren

    2011-12-01

    Campus network security is of growing importance. The focus of this paper is the design of an effective defense against hacker attacks, viruses, data theft, and internal threats. The paper compares firewall and IDS approaches, integrates them into the design of a campus network security model, and details the specific implementation principles.

  3. Kuala Kemaman hydraulic model study

    International Nuclear Information System (INIS)

    Abdul Kadir Ishak

    2005-01-01

    The problems facing the Kuala Kemaman area are siltation and shoreline erosion. The objectives of the study are to assess the best groyne alignment, to ascertain the most stable shoreline regime, and to investigate structural measures to overcome the erosion. The scope of the study covers data collection, wave analysis, hydrodynamic simulation, and sediment transport simulation. The MIKE 21 suite of numerical models is used: MIKE 21 NSW, a wind-wave model that describes the growth, decay, and transformation of wind-generated waves and swell in nearshore areas, taking into account the effects of refraction and shoaling due to varying depth and of energy dissipation due to bottom friction and wave breaking; MIKE 21 HD, a modelling system for 2D free-surface flow that simulates hydraulic phenomena in estuaries, coastal areas, and seas; and MIKE 21 ST, a system that calculates the rates of non-cohesive (sand) sediment transport for both pure-current and combined wave-current situations. Predicted tidal elevations and waves (radiation stresses) are included in the study, while wind is not considered.

  4. Studies of Catalytic Model Systems

    DEFF Research Database (Denmark)

    Holse, Christian

    The overall topic of this thesis is within the field of catalysis, where model systems of different complexity have been studied utilizing a multipurpose ultra-high-vacuum (UHV) chamber. The thesis falls in two different parts. First, a simple model system in the form of a ruthenium single crystal...... of the Cu/ZnO nanoparticles is highly relevant to industrial methanol synthesis, for which the direct interaction of Cu and ZnO nanocrystals synergistically boosts the catalytic activity. The dynamical behavior of the nanoparticles under reducing and oxidizing environments was studied by means of ex situ X......-ray Photoelectron Spectroscopy (XPS) and in situ Transmission Electron Microscopy (TEM). The surface composition of the nanoparticles changes reversibly as the nanoparticles are exposed to cycles of high-pressure oxidation and reduction (200 mbar). Furthermore, the presence of metallic Zn is observed by XPS...

  5. NURE uranium deposit model studies

    International Nuclear Information System (INIS)

    Crew, M.E.

    1981-01-01

    The National Uranium Resource Evaluation (NURE) Program has sponsored uranium deposit model studies by Bendix Field Engineering Corporation (Bendix), the US Geological Survey (USGS), and numerous subcontractors. This paper deals only with models from the following six reports prepared by Samuel S. Adams and Associates: GJBX-1(81) - Geology and Recognition Criteria for Roll-Type Uranium Deposits in Continental Sandstones; GJBX-2(81) - Geology and Recognition Criteria for Uraniferous Humate Deposits, Grants Uranium Region, New Mexico; GJBX-3(81) - Geology and Recognition Criteria for Uranium Deposits of the Quartz-Pebble Conglomerate Type; GJBX-4(81) - Geology and Recognition Criteria for Sandstone Uranium Deposits in Mixed Fluvial-Shallow Marine Sedimentary Sequences, South Texas; GJBX-5(81) - Geology and Recognition Criteria for Veinlike Uranium Deposits of the Lower to Middle Proterozoic Unconformity and Strata-Related Types; GJBX-6(81) - Geology and Recognition Criteria for Sandstone Uranium Deposits of the Salt Wash Type, Colorado Plateau Province. A unique feature of these models is the development of recognition criteria in a systematic fashion, with a method for quantifying the various items. The recognition-criteria networks are used in this paper to illustrate the various types of deposits

  6. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

    Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air-sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (the Russell ocean model and the Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models, such as ventilation, mixing, eddy stirring, and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55% (10%) in the Southern Ocean and to overestimate it by about 17% (underestimate it by 2%) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12-34%, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator, and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing, vertical advection, and mesoscale eddy transport. At depth, GISSER, which has a significant warm bias, remineralizes nutrients and carbon faster, thereby producing more nutrients and carbon at depth, which

  7. Saltstone SDU6 Modeling Study

    International Nuclear Information System (INIS)

    Lee, Si Y.; Hyun, Sinjae

    2013-01-01

    A new disposal unit, designated Saltstone Disposal Unit 6 (SDU6), is being designed in support of site accelerated closure goals and the salt waste projections identified in the new Liquid Waste System Plan. The unit is a cylindrical disposal cell, 375 ft in diameter and 43 ft in height, with a minimum capacity of 30 million gallons. SRNL was requested to evaluate the impact of an increased grout placement height on the flow patterns of grout spreading radially across the floor and to determine whether grout quality is affected by that height. The primary goals of the work are to develop a baseline Computational Fluid Dynamics (CFD) model and to evaluate the flow patterns of grout material in SDU6 as a function of the elevation of the grout discharge port and the grout rheology. Two transient grout models have been developed using a three-dimensional multiphase CFD approach to estimate the extent of the grout material spreading radially across the facility floor and to perform sensitivity analyses of the baseline design and operating conditions, such as the elevation of the discharge port and the fresh grout properties. For the CFD modeling calculations, an air-grout Volume of Fluid (VOF) method combined with Bingham plastic and time-dependent grout models was used to examine the spreading performance for the initial baseline configurations and to evaluate the impact of grout pouring height on grout quality. Grout quality was estimated in terms of the air volume fraction in the grout layer formed on the SDU6 floor, which changes the grout density. The study results should be considered preliminary scoping analyses, since benchmarking analysis is not included in this task scope. Transient analyses with the Bingham plastic model were performed with the FLUENT code on the high-performance parallel computing platform at SRNL. The analysis coupled with a transient grout aging model was performed using the ANSYS CFX code.
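The Bingham plastic rheology used in the CFD analyses above can be stated compactly. This is the standard textbook constitutive relation (yield stress $\tau_0$, plastic viscosity $\mu_p$), not the specific parameter set of the SRNL model:

```latex
\dot{\gamma} = 0 \quad \text{for } |\tau| \le \tau_0,
\qquad
\tau = \tau_0 + \mu_p \,\dot{\gamma} \quad \text{for } |\tau| > \tau_0
```

Below the yield stress the grout behaves as a rigid body; above it, it flows with constant plastic viscosity. This is why the spread radius on the floor depends on both the pour height and the fresh grout properties.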

  8. Benthic boundary layer modelling studies

    International Nuclear Information System (INIS)

    Richards, K.J.

    1984-01-01

    A numerical model has been developed to study the factors which control the height of the benthic boundary layer in the deep ocean and the dispersion of a tracer within and directly above the layer. This report covers tracer clouds of horizontal scales of 10 to 100 km. The dispersion of a tracer has been studied in two ways. First, a number of particles were introduced into the flow; the trajectories of these particles provide information on dispersion rates. For flow conditions similar to those observed in the abyssal N.E. Atlantic, the diffusivity of a tracer was found to be 5 x 10^6 cm^2 s^-1 within the boundary layer and 8 x 10^6 cm^2 s^-1 above it. The results are in accord with estimates made from current meter measurements. The second method of studying dispersion was to calculate the evolution of individual tracer clouds. Clouds within and above the benthic boundary layer often show quite different behaviour from each other, although the general structure of the clouds in the two regions was found to have no significant differences. (author)

  9. Crystal study and econometric model

    Science.gov (United States)

    1975-01-01

    An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program - that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. Actually, two models are presented. The first is a theoretical model which follows rather strictly the standard theoretical economic concepts involved in supply and demand analysis, and a modified version of the model was developed which, though not quite as theoretically sound, was testable utilizing existing data sources.

  10. Updating flood maps efficiently using existing hydraulic models, very-high-accuracy elevation data, and a geographic information system; a pilot study on the Nisqually River, Washington

    Science.gov (United States)

    Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.

    2001-01-01

    A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well-recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between
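The core step described above, looking up an updated flood stage from a stage-discharge (rating) relation built from archived hydraulic-model output, can be sketched as a simple interpolation. The function name and the rating-table values below are hypothetical, standing in for the archived step-backwater profiles at one cross section.

```python
# Build a rating relation from archived hydraulic-model output at a
# cross section, then interpolate the water-surface elevation for an
# updated (recalculated) flood discharge.

from bisect import bisect_left

def stage_for_discharge(rating, q):
    """Linearly interpolate stage (ft) for discharge q (cfs).

    rating -- list of (discharge, stage) pairs sorted by discharge,
              e.g. archived profiles of an existing hydraulic model.
    """
    qs = [p[0] for p in rating]
    if not qs[0] <= q <= qs[-1]:
        raise ValueError("discharge outside archived model range")
    i = bisect_left(qs, q)
    if qs[i] == q:
        return rating[i][1]
    (q0, s0), (q1, s1) = rating[i - 1], rating[i]
    return s0 + (s1 - s0) * (q - q0) / (q1 - q0)

# Hypothetical archived profiles for one cross section:
rating = [(5000, 101.2), (10000, 103.8), (20000, 106.9), (40000, 110.4)]
print(stage_for_discharge(rating, 15000))  # updated 100-year discharge
```

In the piloted method, the stage computed this way at each cross section is then intersected with the high-accuracy elevation surface in the GIS to delineate the inundated area automatically.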

  11. Mudanças na circulação atmosférica sobre a América do Sul para cenários futuros de clima projetados pelos modelos globais do IPCC AR4 Changes in the atmospheric circulation pattern over South America in future climate scenarios derived from the IPCC AR4 model climate simulations

    Directory of Open Access Journals (Sweden)

    María C Valverde

    2010-03-01

    Full Text Available In this work, changes in the circulation pattern that may occur in the climate of South America (SA) as a consequence of increasing greenhouse gas concentrations are analyzed, using five IPCC AR4 global models (CCCMA, GFDL, HadCM3, MIROC, and GISS) for the twentieth-century climate (1961-1990, 20C3M) and for the future scenario SRES_A2 (2011-2100). The features the models had in common (with the exception of MIROC) for the three future climatologies (2011-2040, 2041-2070, and 2071-2100), mainly in summer and spring, were a southwestward displacement of the continental low (associated with the Chaco low) relative to its climatological position (1961-1990) and a northwestward displacement of the Bolivian High. In addition, for the present climate, the five models simulated a South Pacific High (SPH) less intense than in the NCEP Reanalysis, suggesting weaker subsidence over its region of influence. For future scenarios, the GISS and HadCM3 models simulated a less intense SPH. For the South Atlantic High, on the other hand, there was no consensus among the models; in general it was simulated as more intense (with the exception of GISS), especially in autumn and winter. The HadCM3 model simulated the summer and spring circulation closest to the Reanalysis, with a better-defined South Atlantic Convergence Zone (SACZ) and a smaller area of negative rainfall anomalies over Amazonia than the other models. For the future scenario, this model changed its rainfall pattern: positive anomalies over the northern coast of Peru and Ecuador and negative anomalies over Northeast Brazil and eastern Amazonia were observed, associated with a weakened SPH displaced to the south, which reinforced the Pacific ITCZ over 5°S. A decrease in moisture convergence over Amazonia was also observed.

  12. Fallout model for system studies

    International Nuclear Information System (INIS)

    Harvey, T.F.; Serduke, F.J.D.

    1979-01-01

    A versatile fallout model was developed to assess complex civil defense and military effects issues. Large technical and scenario uncertainties require a fast, adaptable, time-dependent model to obtain technically defensible fallout results in complex demographic scenarios. The KDFOC2 capability, coupled with other databases, provides the essential tools to consider tradeoffs between various plans and features in different nuclear scenarios and to estimate the technical uncertainties in the predictions. All available data were used to validate the model. In many ways, the capability is unmatched in its ability to predict fallout hazards to a society.

  13. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  14. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  15. Imitation Modeling and Institutional Studies

    Directory of Open Access Journals (Sweden)

    Maksim Y. Barbashin

    2017-09-01

    Full Text Available This article discusses the use of imitation modeling in the conduct of institutional research. The institutional approach is based on the observation of social behavior. To understand a social process means to determine the key rules that individuals use, undertaking social actions associated with this process or phenomenon. This does not mean that institutions determine behavioral reactions, although there are a number of social situations where the majority of individuals follow the dominant rules. If the main laws of development of the institutional patterns are known, one can describe most of the social processes accurately. The author believes that the main difficulty with the analysis of institutional processes is their recursive nature: from the standards of behavior one may find the proposed actions of social agents who follow, obey or violate institutions, but the possibility of reconstructive analysis is not obvious. The author demonstrates how the institutional approach is applied to the analysis of social behavior. The article describes the basic principles and methodology of imitation modeling. Imitation modeling reveals the importance of institutions in structuring social transactions. The article concludes that in the long term institutional processes are not determined by initial conditions.

  16. Activities of NASA's Global Modeling Initiative (GMI) in the Assessment of Subsonic Aircraft Impact

    Science.gov (United States)

    Rodriquez, J. M.; Logan, J. A.; Rotman, D. A.; Bergmann, D. J.; Baughcum, S. L.; Friedl, R. R.; Anderson, D. E.

    2004-01-01

    The Intergovernmental Panel on Climate Change estimated a peak increase in ozone ranging from 7-12 ppbv (zonal and annual average, relative to a baseline with no aircraft) due to subsonic aircraft in the year 2015, corresponding to aircraft emissions of 1.3 TgN/year. This range of values presumably reflects differences in model input (e.g., chemical mechanism, ground emission fluxes, and meteorological fields) and algorithms. The model implemented by the Global Modeling Initiative allows testing the impact of individual model components on the assessment calculations. We present results of the impact of doubling the 1995 aircraft emissions of NOx, corresponding to an extra 0.56 TgN/year, utilizing meteorological data from NASA's Data Assimilation Office (DAO), the Goddard Institute for Space Studies (GISS), and the Middle Atmosphere Community Climate Model, version 3 (MACCM3). Comparison of results to observations can be used to assess model performance. Peak ozone perturbations ranging from 1.7 to 2.2 ppbv are calculated using the different fields, corresponding to increases in total tropospheric ozone ranging from 3.3 to 4.1 Tg O3. These perturbations are consistent with the IPCC results, given the difference in aircraft emissions; however, the range of values calculated is much smaller than in the IPCC assessment.

  17. Studying shocks in model astrophysical flows

    International Nuclear Information System (INIS)

    Chakrabarti, S.K.

    1989-01-01

    We briefly discuss some properties of the shocks in the existing models for quasi-two-dimensional astrophysical flows. All of these models that allow shocks to be studied analytically have some unphysical characteristics due to the inherent assumptions made. We propose a hybrid model for a thin flow which has fewer of these unphysical features and is suitable for the study of shocks. (author). 5 refs

  18. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation with potential advantages to the system user and the operator. These advantages are measured in terms of system efficiency: (1) the ability to meet specific schedules for operations, missions, or mission-readiness requirements or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  19. QCD and Standard Model Studies

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Carl A [Texas A & M Univ., College Station, TX (United States)

    2017-02-28

    Our group has focused on using jets in STAR to investigate the longitudinal and transverse spin structure of the proton. We performed measurements of the longitudinal double-spin asymmetry for inclusive jet production that provide the strongest evidence to date that the gluons in the proton with x>0.05 are polarized. We also made the first observation of the Collins effect in pp collisions, thereby providing an important test of the universality of the Collins fragmentation function and opening a new tool to probe quark transversity in the proton. Our studies of forward rapidity electromagnetic jet-like events raise serious questions about whether the large transverse spin asymmetries that have been seen for forward inclusive hadron production arise from conventional 2 → 2 parton scattering. This is the final technical report for DOE Grant DE-FG02-93ER40765. It covers activities during the period January 1, 2015 through November 30, 2016.

  20. A Web-Based Geovisual Analytical System for Climate Studies

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    2012-12-01

    Climate studies involve petabytes of spatiotemporal datasets that are produced and archived at distributed computing resources. Scientists need an intuitive and convenient tool to explore the distributed spatiotemporal data. Geovisual analytical tools have the potential to provide such an intuitive and convenient method for scientists to access climate data, discover the relationships between various climate parameters, and communicate the results across different research communities. However, implementing a geovisual analytical tool for complex climate data in a distributed environment poses several challenges. This paper reports our research and development of a web-based geovisual analytical system to support the analysis of climate data generated by climate models. Using the ModelE developed by the NASA Goddard Institute for Space Studies (GISS) as an example, we demonstrate that the system is able to (1) manage large-volume datasets over the Internet; (2) visualize 2D/3D/4D spatiotemporal data; (3) broker various spatiotemporal statistical analyses for climate research; and (4) support interactive data analysis and knowledge discovery. This research also provides an example for managing, disseminating, and analyzing Big Data in the 21st century.

  1. Combined simulation of carbon and water isotopes in a global ocean model

    Science.gov (United States)

    Paul, André; Krandick, Annegret; Gebbie, Jake; Marchal, Olivier; Dutkiewicz, Stephanie; Losch, Martin; Kurahashi-Nakamura, Takasumi; Tharammal, Thejna

    2013-04-01

    Carbon and water isotopes are included as passive tracers in the MIT general circulation model (MITgcm). The implementation of the carbon isotopes is based on the existing MITgcm carbon cycle component and involves the fractionation processes during photosynthesis and air-sea gas exchange. Special care is given to the use of a real freshwater flux boundary condition in conjunction with the nonlinear free surface of the ocean model. The isotopic content of precipitation and water vapor is obtained from an atmospheric GCM (the NCAR CAM3) and mapped onto the MITgcm grid system, but the kinetic fractionation during evaporation is treated explicitly in the ocean model. In a number of simulations, we test the sensitivity of the carbon isotope distributions to the formulation of fractionation during photosynthesis and compare the results to modern observations of δ13C and Δ14C from GEOSECS, WOCE and CLIVAR. Similarly, we compare the resulting distribution of oxygen isotopes to modern δ18O data from the NASA GISS Global Seawater Oxygen-18 Database. The overall agreement is good, but there are discrepancies in the carbon isotope composition of the surface water and the oxygen isotope composition of the intermediate and deep waters. The combined simulation of carbon and water isotopes in a global ocean model will provide a framework for studying present and past states of ocean circulation such as postulated from deep-sea sediment records.

  2. A Model for Undergraduate and High School Student Research in Earth and Space Sciences: The New York City Research Initiative

    Science.gov (United States)

    Scalzo, F.; Johnson, L.; Marchese, P.

    2006-05-01

    The New York City Research Initiative (NYCRI) is a research and academic program that involves high school students, undergraduate and graduate students, and high school teachers in research teams that are led by college/university principal investigators of NASA-funded projects and/or NASA scientists. The principal investigators are at 12 colleges/universities within a 50-mile radius of New York City (NYC and surrounding counties, Southern Connecticut and Northern New Jersey), as well as the NASA Goddard Institute for Space Studies (GISS). This program has a summer research institute component in Earth Science and Space Science, and an academic year component that includes the formulation and implementation of NASA research-based learning units in existing STEM courses by high school and college faculty. NYCRI is a revision and expansion of the Institute on Climate and Planets at GISS and is funded by NASA MURED and the Goddard Space Flight Center's Education Office.

  3. Modeling Climate Responses to Spectral Solar Forcing on Centennial and Decadal Time Scales

    Science.gov (United States)

    Wen, G.; Cahalan, R.; Rind, D.; Jonas, J.; Pilewskie, P.; Harder, J.

    2012-01-01

    We report a series of experiments to explore climate responses to two types of solar spectral forcing on decadal and centennial time scales - one based on prior reconstructions, and another implied by recent observations from the SORCE (Solar Radiation and Climate Experiment) SIM (Spectral Irradiance Monitor). We apply these forcings to the Goddard Institute for Space Studies (GISS) Global/Middle Atmosphere Model (GCMAM), which couples the atmosphere with the ocean and has a model top near the mesopause, allowing us to examine the full response to the two solar forcing scenarios. We show different climate responses to the two solar forcing scenarios on decadal time scales, and also trends on centennial time scales. Differences between solar maximum and solar minimum conditions are highlighted, including impacts of the time-lagged response of the lower atmosphere and ocean. This contrasts with studies that assume separate equilibrium conditions at solar maximum and minimum. We discuss model feedback mechanisms involved in the solar-forced climate variations.

  4. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr.Vilas. M. Thakare; Ashwin B. Tomar

    2011-01-01

    This paper aims to provide a basis for software quality model research through a systematic study of papers. It identifies nearly seventy software quality research papers from journals and classifies each paper by research topic, estimation approach, study context and data set. The paper's results, combined with other knowledge, provide support for recommendations in future software quality model research: to increase the area of search for relevant studies, carefully select the papers within a set ...

  5. A Comparison Between Gravity Wave Momentum Fluxes in Observations and Climate Models

    Science.gov (United States)

    Geller, Marvin A.; Alexander, M. Joan; Love, Peter T.; Bacmeister, Julio; Ern, Manfred; Hertzog, Albert; Manzini, Elisa; Preusse, Peter; Sato, Kaoru; Scaife, Adam A.

    2013-01-01

    For the first time, a formal comparison is made between gravity wave momentum fluxes in models and those derived from observations. Although gravity waves occur over a wide range of spatial and temporal scales, the focus of this paper is on scales that are being parameterized in present climate models, sub-1000-km scales. Only observational methods that permit derivation of gravity wave momentum fluxes over large geographical areas are discussed, and these are from satellite temperature measurements, constant-density long-duration balloons, and high-vertical-resolution radiosonde data. The models discussed include two high-resolution models in which gravity waves are explicitly modeled, Kanto and the Community Atmosphere Model, version 5 (CAM5), and three climate models containing gravity wave parameterizations, MAECHAM5, Hadley Centre Global Environmental Model 3 (HadGEM3), and the Goddard Institute for Space Studies (GISS) model. Measurements generally show flux magnitudes similar to those in models, except that the fluxes derived from satellite measurements fall off more rapidly with height. This is likely due to limitations on the observable range of wavelengths, although other factors may contribute. When one accounts for this more rapid falloff, the geographical distributions of the fluxes from observations and models compare reasonably well, except for certain features that depend on the specification of the nonorographic gravity wave source functions in the climate models. For instance, both the observed fluxes and those in the high-resolution models are very small at summer high latitudes, but this is not the case for some of the climate models. This comparison between gravity wave fluxes from climate models, high-resolution models, and fluxes derived from observations indicates that such efforts offer a promising path toward improving specifications of gravity wave sources in climate models.

  6. Models to Study Colonisation and Colonisation Resistance

    OpenAIRE

    Boreau, H.; Hartmann, L.; Karjalainen, T.; Rowland, I.; Wilkinson, M. H. F.

    2011-01-01

    This review describes various in vivo animal models (humans; conventional animals administered antimicrobial agents and the animal species used; gnotobiotic and germ-free animals), in vitro models (luminal and mucosal), and in silico and mathematical models which have been developed to study colonisation and colonisation resistance and the effects of gut flora on hosts. Where applicable, the advantages and disadvantages of each model are discussed. Keywords: colonisation, colonisation resistance, anim...

  7. A Study of Simple Diffraction Models

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    In this paper two simple methods for cabinet edge diffraction are examined. Calculations with both models are compared with more sophisticated theoretical models and with measured data. The parameters involved are studied and their importance for normal loudspeaker box designs is examined.

  8. Regional and Global Climate Response to Anthropogenic SO2 Emissions from China in Three Climate Models

    Science.gov (United States)

    Kasoar, M.; Voulgarakis, Apostolos; Lamarque, Jean-Francois; Shindell, Drew T.; Bellouin, Nicholas; Collins, William J.; Faluvegi, Greg; Tsigaridis, Kostas

    2016-01-01

    We use the HadGEM3-GA4, CESM1, and GISS ModelE2 climate models to investigate the global and regional aerosol burden, radiative flux, and surface temperature responses to removing anthropogenic sulfur dioxide (SO2) emissions from China. We find that the models differ by up to a factor of 6 in the simulated change in aerosol optical depth (AOD) and shortwave radiative flux over China that results from reduced sulfate aerosol, leading to a large range of magnitudes in the regional and global temperature responses. Two of the three models simulate a near-ubiquitous hemispheric warming due to the regional SO2 removal, with similarities in the local and remote pattern of response, but overall with a substantially different magnitude. The third model simulates almost no significant temperature response. We attribute the discrepancies in the response to a combination of substantial differences in the chemical conversion of SO2 to sulfate, translation of sulfate mass into AOD, cloud radiative interactions, and differences in the radiative forcing efficiency of sulfate aerosol in the models. The model with the strongest response (HadGEM3-GA4) compares best with observations of AOD regionally; however, the other two models compare similarly (albeit poorly) and still disagree substantially in their simulated climate response, indicating that total AOD observations are far from sufficient to determine which model response is more plausible. Our results highlight that there remains a large uncertainty in the representation of both aerosol chemistry and direct and indirect aerosol radiative effects in current climate models, and reinforce that caution must be applied when interpreting the results of modelling studies of aerosol influences on climate. Model studies that implicate aerosols in climate responses should ideally explore a range of radiative forcing strengths representative of this uncertainty, in addition to thoroughly evaluating the models used against

  9. Nuclear clustering - a cluster core model study

    International Nuclear Information System (INIS)

    Paul Selvi, G.; Nandhini, N.; Balasubramaniam, M.

    2015-01-01

    Nuclear clustering, like other clustering phenomena in nature, is a much-warranted study, since it would help us understand the nature of the binding of nucleons inside the nucleus, closed-shell behaviour when the system is highly deformed, and dynamics and structure at the extremes. Several models account for the clustering phenomenon of nuclei. We present in this work a cluster core model study of nuclear clustering in light mass nuclei.

  10. Towards a digital watershed, with a case study in the Heihe River Basin of northwest China

    Science.gov (United States)

    Li, X.; Cheng, G.-D.; Ma, M.-G.; Lu, L.; Ge, Y.-C.

    2003-04-01

    Integrated watershed study and river basin management need an integrated database and integrated hydrological and water resource models. We define a digital watershed as a web-based information system that integrates data from different sources and at different scales through both information technology and hydrological modeling. In the last two years, a “digital basin” of the Heihe River Basin, which is a well-studied inland catchment in China’s arid region, was established. More than 6 Gb of in situ observation data, GIS maps, and remotely sensed data have been uploaded to the Heihe web site. Various database and dynamic web techniques such as PHP, ASP, XML and VRML are being used for data service. In addition, the DIAL (Data and Information Access Link), IMS (Internet Map Server) and other Web-GISs are used to make GIS and remote sensing datasets of the Heihe River Basin available and accessible on the Internet. We have also developed models for estimating evapotranspiration, bio-physical parameters, and snow runoff. These methods can be considered as the elements to build up the integrated watershed model that can be used for integrated management of the Heihe River Basin. The official domain name of the digital Heihe River Basin is heihe.westgis.ac.cn

  11. A mixed model framework for teratology studies.

    Science.gov (United States)

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  12. New Statistical Model for Variability of Aerosol Optical Thickness: Theory and Application to MODIS Data over Ocean

    Science.gov (United States)

    Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian

    2016-01-01

    A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is, the exponent of an underlying Gaussian process with a specific autocorrelation function. In this approach, AOT fields have lognormal PDFs and structure functions with the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
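The construction described above, a lognormal AOT field obtained by exponentiating a Gaussian process with a prescribed autocorrelation, can be sketched in a few lines of Python. The grid size, correlation length, and log-AOT statistics below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): grid, correlation length, log-AOT stats
n, dx, L = 256, 10.0, 200.0          # 256 cells of 10 km; 200 km correlation length
mu, sigma = np.log(0.1), 0.5         # mean and std of the underlying Gaussian (log-AOT)

# Gaussian process g(x) with exponential autocorrelation, sampled via Cholesky
x = np.arange(n) * dx
cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)
g = mu + np.linalg.cholesky(cov + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

tau = np.exp(g)                      # AOT field: lognormal by construction

# Second-order structure function D(r) = <(tau(x+r) - tau(x))^2>
def structure_function(field, lag):
    d = field[lag:] - field[:-lag]
    return np.mean(d**2)

D = [structure_function(tau, k) for k in range(1, n // 2)]
# D grows with lag and saturates at lags much larger than L (stationary regime)
```

Because tau = exp(g) with g Gaussian, the field has a lognormal PDF, and the structure function saturates at lags much larger than the correlation length, which is the large-scale asymptotic behavior the model relies on.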

  13. Mining Product Data Models: A Case Study

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2014-01-01

    This paper presents two case studies used to prove the validity of some data-flow mining algorithms. We proposed the data-flow mining algorithms because most mining algorithms focus on the control-flow perspective. The first case study uses event logs generated by an ERP system (Navision) after we set several trackers on the data elements needed in the analyzed process, while the second case study uses event logs generated by the YAWL system. We offer a general solution for data-flow model extraction from different data sources. In order to apply the data-flow mining algorithms, the event logs must comply with a certain format (using the InputOutput extension). To respect this format, a set of conversion tools is needed. We describe the conversion tools used and how we obtained the data-flow models. Moreover, the data-flow model is compared to the control-flow model.

  14. A model study of bridge hydraulics

    Science.gov (United States)

    2010-08-01

    Most flood studies in the United States use the Army Corps of Engineers HEC-RAS (Hydrologic Engineering : Centers River Analysis System) computer program. This study was carried out to compare results of HEC-RAS : bridge modeling with laboratory e...

  15. Theoretical study of turbulent channel flow - Bulk properties, pressure fluctuations, and propagation of electromagnetic waves

    Science.gov (United States)

    Canuto, V. M.; Hartke, G. J.; Battaglia, A.; Chasnov, J.; Albrecht, G. F.

    1990-01-01

    In this paper, we apply two theoretical turbulence models, DIA and the recent GISS model, to study properties of a turbulent channel flow. Both models provide a turbulent kinetic energy spectral function E(k) as the solution of a non-linear equation; the two models employ the same source function but different closures. The source function is characterized by a rate n_s(k) which is derived from the complex eigenvalues of the Orr-Sommerfeld (O-S) equation in which the basic flow is taken to be of Poiseuille type. The O-S equation is solved for a variety of Reynolds numbers corresponding to available experimental data. A physical argument is presented whereby the centerline velocity characterizing the basic flow, U0^L, is not to be identified with the U0 appearing in the experimental Reynolds number. The theoretical results are compared with two types of experimental data: (1) turbulence bulk properties, and (2) properties that depend strongly on the structure of the turbulence spectrum at low wave numbers. The only existing analytical expression for Π(k) cannot be used in the present case because it applies to the case of a flat plate, not a finite channel.

  16. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), we show that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
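A comparative study of this kind can be sketched with scikit-learn. The synthetic dataset and the model selection below are stand-ins for illustration only; the paper's financial data and its fuzzy k-NN, bagging, and hybrid variants are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a financial-ratio dataset (class imbalance mimics
# the rarity of bankrupt firms); NOT the paper's data
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
}

# 5-fold cross-validated accuracy for each candidate method
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```

Scaling inside a pipeline (rather than before cross-validation) avoids leaking test-fold statistics into training, which matters especially for distance-based methods such as k-NN.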

  17. Neonatal Seizure Models to Study Epileptogenesis

    Directory of Open Access Journals (Sweden)

    Yuka Kasahara

    2018-04-01

    Current therapeutic strategies for epilepsy include anti-epileptic drugs and surgical treatments that are mainly focused on the suppression of existing seizures rather than the occurrence of the first spontaneous seizure. These symptomatic treatments help a certain proportion of patients, but these strategies are not intended to clarify the cellular and molecular mechanisms underlying the primary process of epilepsy development, i.e., epileptogenesis. Epileptogenic changes include reorganization of neural and glial circuits, resulting in the formation of an epileptogenic focus. To achieve the goal of developing “anti-epileptogenic” drugs, we need to clarify the step-by-step mechanisms underlying epileptogenesis for patients whose seizures are not controllable with existing “anti-epileptic” drugs. Epileptogenesis has been studied using animal models of neonatal seizures because such models are useful for studying the latent period before the occurrence of spontaneous seizures and the lowering of the seizure threshold. Further, neonatal seizure models are generally easy to handle and can be applied for in vitro studies because cells in the neonatal brain are suitable for culture. Here, we review two animal models of neonatal seizures for studying epileptogenesis and discuss their features, specifically focusing on hypoxia-ischemia (HI)-induced seizures and febrile seizures (FSs). Studying these models will contribute to identifying the potential therapeutic targets and biomarkers of epileptogenesis.

  18. Sensitivity analysis with the regional climate model COSMO-CLM over the CORDEX-MENA domain

    Science.gov (United States)

    Bucchignani, E.; Cattaneo, L.; Panitz, H.-J.; Mercogliano, P.

    2016-02-01

    The results of a sensitivity study based on ERA-Interim-driven COSMO-CLM simulations over the Middle East-North Africa (CORDEX-MENA) domain are presented. All simulations were performed at 0.44° spatial resolution. The purpose of this study was to assess model performance with respect to changes in physical and tuning parameters, which are mainly related to the surface, convection, radiation and cloud parameterizations. Evaluation was performed for the whole CORDEX-MENA region and six sub-regions, comparing a set of 26 COSMO-CLM runs against a combination of available ground observations, satellite products and reanalysis data to assess temperature, precipitation, cloud cover and mean sea level pressure. The model proved to be very sensitive to changes in physical parameters. The optimized configuration allows COSMO-CLM to improve the simulation of the main climate features of this area. Its main characteristics are a new parameterization of albedo, based on Moderate Resolution Imaging Spectroradiometer data, and a new parameterization of aerosol, based on NASA-GISS AOD distributions. When applying this configuration, Mean Absolute Error values for the considered variables are as follows: about 1.2 °C for temperature, about 15 mm/month for precipitation, about 9 % for total cloud cover, and about 0.6 hPa for mean sea level pressure.

  19. Differential Equations Models to Study Quorum Sensing.

    Science.gov (United States)

    Pérez-Velázquez, Judith; Hense, Burkhard A

    2018-01-01

    Mathematical models to study quorum sensing (QS) have become an important tool to explore all aspects of this type of bacterial communication. A wide spectrum of mathematical tools and methods, such as dynamical systems, stochastics, and spatial models, can be employed. In this chapter, we focus on giving an overview of models consisting of differential equations (DE), which can be used to describe changing quantities, for example, the dynamics of one or more signaling molecules in time and space, often in conjunction with bacterial growth dynamics. The chapter is divided into two sections: ordinary differential equation (ODE) and partial differential equation (PDE) models of QS. Rates of change are represented mathematically by derivatives, i.e., in terms of DE. ODE models allow describing changes with respect to one independent variable, for example, time. PDE models can be used to follow changes in more than one independent variable, for example, time and space. Both types of models often consist of systems of equations (i.e., more than one equation), such as equations for bacterial growth and autoinducer concentration dynamics. Almost from the onset, mathematical modeling of QS using differential equations has been an interdisciplinary endeavor, and many of the works we review here will be placed into their biological context.
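A minimal ODE sketch of the kind of model described: logistic bacterial growth coupled to autoinducer dynamics with Hill-type positive feedback. The functional forms and all parameter values are illustrative assumptions, not taken from any specific QS model in the chapter:

```python
from scipy.integrate import solve_ivp

# Illustrative (generic) QS model: logistic growth of the population N(t)
# and an autoinducer A(t) with basal + density-dependent (Hill-type
# positive feedback) production and linear degradation.
r, K = 1.0, 1.0            # growth rate, carrying capacity
a0, a1 = 0.05, 1.0         # basal and induced production rates
theta, nH = 0.5, 2         # activation threshold and Hill coefficient
gamma = 0.3                # autoinducer degradation rate

def rhs(t, y):
    N, A = y
    dN = r * N * (1 - N / K)
    dA = N * (a0 + a1 * A**nH / (theta**nH + A**nH)) - gamma * A
    return [dN, dA]

sol = solve_ivp(rhs, (0, 30), [0.01, 0.0])
N_end, A_end = sol.y[:, -1]
```

With these parameters the system is monostable: as N approaches the carrying capacity, the feedback term switches on and A settles at a high quasi-steady level.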

  20. Mixed models in cerebral ischemia study

    Directory of Open Access Journals (Sweden)

    Matheus Henrique Dal Molin Ribeiro

    2016-06-01

    The modeling of data from longitudinal studies stands out in the current scientific scenario, especially in the health and biological sciences; repeated measurements on the same observed unit induce correlation between them. Thus, modeling the intra-individual dependency is required, through the choice of a covariance structure that is able to accommodate the sample variability. However, the lack of an appropriate methodology for correlated-data analysis may result in an increased occurrence of type I or type II errors and in underestimated/overestimated standard errors of the model estimates. In the present study, a Gaussian mixed model was adopted for the response variable, latency, in an experiment investigating memory deficits in animals subjected to cerebral ischemia when treated with fish oil (FO). The model parameters were estimated by maximum likelihood methods. Based on the restricted likelihood ratio test and information criteria, an autoregressive covariance matrix was adopted for the errors. The diagnostic analyses for the model were satisfactory, since the basic assumptions held and the results obtained corroborate the biological evidence; that is, the FO treatment was found to be effective in alleviating the cognitive effects caused by cerebral ischemia.

  1. A Case Study Application Of Time Study Model In Paint ...

    African Journals Online (AJOL)

    This paper presents a case study in the development and application of a time study model in a paint manufacturing company. The organization specializes in the production of different grades of paint and paint containers. The paint production activities include; weighing of raw materials, drying of raw materials, dissolving ...

  2. Overhead distribution line models for harmonics studies

    Energy Technology Data Exchange (ETDEWEB)

    Nagpal, M.; Xu, W.; Dommel, H.W.

    1994-01-01

    Carson's formulae and Maxwell's potential coefficients are used for calculating the per-unit-length series impedances and shunt capacitances of the overhead lines. The per-unit-length values are then used for building the models: the nominal pi-circuit and the equivalent pi-circuit at the harmonic frequencies. This paper studies the accuracy of these models for representing overhead distribution lines in steady-state harmonic solutions at frequencies up to 5 kHz. The models are verified with a field test on a 25 kV distribution line, and the sensitivity of the models to ground resistivity, skin effect, and multiple grounding is reported.
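The difference between the nominal pi-circuit and the equivalent (long-line-corrected) pi-circuit grows with harmonic order, which is why the model choice matters in harmonic studies. A sketch under assumed per-unit-length parameters (illustrative placeholders, not the paper's Carson-derived values):

```python
import numpy as np

# Illustrative per-unit-length parameters for an overhead distribution line
R, Lu, Cu = 0.3e-3, 1.0e-6, 11e-12   # ohm/m, H/m, F/m (assumed values)
length = 20e3                         # 20 km feeder
h, f0 = 13, 60.0                      # 13th harmonic of 60 Hz
w = 2 * np.pi * h * f0

z = R + 1j * w * Lu                   # series impedance per metre
y = 1j * w * Cu                       # shunt admittance per metre

# Nominal pi: lumped totals, half the shunt admittance at each end
Z_nom = z * length
Y_nom = y * length / 2

# Equivalent pi: exact long-line correction via the propagation constant
gamma = np.sqrt(z * y)                # propagation constant (per metre)
Zc = np.sqrt(z / y)                   # characteristic impedance
Z_eq = Zc * np.sinh(gamma * length)
Y_eq = np.tanh(gamma * length / 2) / Zc

# Relative deviation of the nominal model from the equivalent model
err = abs(Z_eq - Z_nom) / abs(Z_eq)
```

At the fundamental frequency the two models nearly coincide; at higher harmonics the electrical length gamma*length is no longer small and the hyperbolic correction becomes significant.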

  3. Parametric study of a thorium model

    International Nuclear Information System (INIS)

    Lourenco, M.C.; Lipztein, J.L.; Szwarcwald, C.L.

    1997-01-01

    Models for radionuclide distribution in the human body and dosimetry involve assumptions on the biokinetic behaviour of the material among compartments representing organs and tissues in the body. The lack of knowledge about the metabolic behaviour of a radionuclide is a source of uncertainty in estimates of committed dose equivalent. An important problem in biokinetic modeling is the correct assignment of transfer coefficients and biological half-lives to body compartments. The purpose of this study is to analyze the variability in the activities of the body compartments in relation to variations in the transfer coefficients and compartment biological half-lives in a given model. A thorium-specific recycling model for continuous exposure was used. Multiple regression analysis methods were applied to analyze the results.

  4. Process modeling study of the CIF incinerator

    International Nuclear Information System (INIS)

    Hang, T.

    1995-01-01

    The Savannah River Site (SRS) plans to begin operating the Consolidated Incineration Facility (CIF) in 1996. The CIF will treat liquid and solid low-level radioactive, mixed and RCRA hazardous wastes generated at SRS. In addition to experimental test programs, process modeling was applied to provide guidance in areas of safety, environmental regulation compliance, process improvement and optimization. A steady-state flowsheet model was used to calculate material/energy balances and to track key chemical constituents throughout the process units. Dynamic models were developed to predict the CIF transient characteristics in normal and abnormal operation scenarios. Predictions include the rotary kiln heat transfer, dynamic responses of the CIF to fluctuations in the solid waste feed or upsets in system equipment, performance of the control system, air in-leakage in the kiln, etc. This paper reviews the modeling study performed to assist in the deflagration risk assessment.

  5. Leggett's noncontextual model studied with neutrons

    International Nuclear Information System (INIS)

    Durstberger-Rennhofer, K.; Sponar, S.; Badurek, G.; Hasegawa, Y.; Schmitzer, C.; Bartosik, H.; Klepp, J.

    2011-01-01

    It is a long-standing debate whether nature can be described by deterministic hidden variable theories (HVT) underlying quantum mechanics (QM). Bell inequalities for local HVT, as well as the Kochen-Specker theorem for non-contextual models, stress the conflict between these alternative theories and QM. Leggett showed that even nonlocal hidden variable models are incompatible with quantum predictions. Neutron interferometry and polarimetry are well-suited tools for analysing the behaviour of single-neutron systems, where entanglement is created between different degrees of freedom (e.g., spin/path, spin/energy) and thus quantum contextuality can be studied. We report the first experimental test of a contextual model of quantum mechanics à la Leggett, which deals with the definiteness of measurement results before the measurements. The results show a discrepancy between our model and quantum mechanics of more than 7 standard deviations and confirm quantum indefiniteness under the contextual condition. (author)

  6. Parametric study of a thorium model

    International Nuclear Information System (INIS)

    Lourenco, M.C.; Lipsztein, J.L.; Szwarcwald, C.L.

    2002-01-01

    Models for radionuclide distribution in the human body and dosimetry involve assumptions on the biokinetic behavior of the material among compartments representing organs and tissues in the body. One of the most important problems in biokinetic modeling is the assignment of transfer coefficients and biological half-lives to body compartments. In Brazil there are many areas of high natural radioactivity, where the population is chronically exposed to radionuclides of the thorium series. The uncertainties of the thorium biokinetic model are a major cause of uncertainty in the estimates of the committed dose equivalent of the population living in high-background areas. The purpose of this study is to discuss the variability in the thorium activities accumulated in the body compartments in relation to variations in the transfer coefficients and compartment biological half-lives of a thorium-recycling model for continuous exposure. Multiple regression analysis methods were applied to analyze the results. (author)
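The role of the transfer coefficients can be illustrated with a toy recycling compartment model under continuous intake. The compartments and rate values below are hypothetical placeholders, not the thorium model's actual parameters:

```python
from scipy.integrate import solve_ivp

# Toy recycling compartment model for chronic (continuous) intake, showing
# how transfer coefficients lambda_ij enter the biokinetics. All rates are
# illustrative, NOT the thorium model's actual values.
lam_bp = 0.1    # blood -> bone (per day)
lam_bl = 0.05   # blood -> liver
lam_ex = 0.2    # blood -> excretion
lam_pb = 0.01   # bone -> blood (recycling)
lam_lb = 0.02   # liver -> blood (recycling)
intake = 1.0    # continuous intake rate into blood (activity/day)

def rhs(t, q):
    blood, bone, liver = q
    d_blood = intake - (lam_bp + lam_bl + lam_ex) * blood \
              + lam_pb * bone + lam_lb * liver
    d_bone = lam_bp * blood - lam_pb * bone
    d_liver = lam_bl * blood - lam_lb * liver
    return [d_blood, d_bone, d_liver]

sol = solve_ivp(rhs, (0, 2000), [0.0, 0.0, 0.0], rtol=1e-8)
blood_eq, bone_eq, liver_eq = sol.y[:, -1]
# At equilibrium, excretion balances intake: lam_ex * blood_eq ≈ intake
```

Varying any of the lambdas shifts how the accumulated activity distributes among the compartments, which is exactly the kind of sensitivity the study's multiple regression analysis quantifies.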

  7. Parametric study for horizontal steam generator modelling

    Energy Technology Data Exchange (ETDEWEB)

    Ovtcharova, I. [Energoproekt, Sofia (Bulgaria)

    1995-12-31

    The presentation describes some of the calculated results of horizontal steam generator PGV-440 modelling with RELAP5/Mod3. Two nodalization schemes have been used, with different components in the steam dome. The effect of parameter variations on steam generator behaviour and on the calculated results is studied for cases with separator and branch components.

  8. Parametric study for horizontal steam generator modelling

    Energy Technology Data Exchange (ETDEWEB)

    Ovtcharova, I. [Energoproekt, Sofia (Bulgaria)

    1996-12-31

    The presentation describes some of the calculated results of horizontal steam generator PGV-440 modelling with RELAP5/Mod3. Two nodalization schemes have been used, with different components in the steam dome. The effect of parameter variations on steam generator behaviour and on the calculated results is studied for cases with separator and branch components.

  9. Experimental and modelling studies of infiltration

    International Nuclear Information System (INIS)

    Giudici, M.

    2004-01-01

    This presentation describes a study of infiltration in unsaturated soil with the objective of estimating the recharge to a phreatic aquifer. The study area is at the border of the city of Milan (Northern Italy), which draws water for both domestic and industrial purposes from groundwater resources located beneath the urban area. The rate of water pumping from the aquifer system varied during the 20th century, depending upon the number of inhabitants and the development of industrial activities. This caused variations with time of the depth of the water table below the ground surface and, in turn, some emergencies: the two most prominent episodes correspond to the mid-1970s, when the water table in the city centre was about 30 m below the undisturbed natural conditions, and to the last decade, when the water table has risen at a rate of approximately 1 m/year and caused infiltration into deep structures (garages and building foundations, the underground railways, etc.). We have developed four groundwater flow models at different scales, which share some characteristics: they are based on the quasi-3D approximation (horizontal flow in the aquifers and vertical flow in the aquitards) and conservative finite-difference schemes on a regular grid with square cells in the horizontal plane, and are implemented with proprietary computer codes. Among the problems that were studied for the development of these models, I recall some numerical problems related to the behaviour of the phreatic aquifer under conditions of strong exploitation. Model calibration and validation for ModMil has been performed as a two-stage process, i.e., using some of the available data for model calibration and the remaining data for model validation. The application of geophysical exploration techniques, in particular seismic and geo-electrical prospecting, has been very useful to complete the data and information on the hydro-geological structure obtained from stratigraphic logs
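A conservative finite-difference scheme on a regular square grid, as shared by the four models, can be sketched as a minimal steady-state head solver. Everything below (grid size, transmissivity, pumping rate) is illustrative and unrelated to the ModMil codes.

```python
import numpy as np

# Minimal sketch (not the ModMil code): steady 2D confined flow on a square
# grid, T * laplacian(h) = -q, solved by Jacobi iteration with fixed-head
# (h = 0) boundaries and a single pumping well in the interior.
n, dx = 21, 100.0                 # grid cells per side and spacing (m), illustrative
T = 1e-2                          # transmissivity (m^2/s), illustrative
h = np.zeros((n, n))              # hydraulic head (m) relative to the boundary
q = np.zeros((n, n))              # source/sink per unit area (m/s)
q[n // 2, n // 2] = -5e-6         # pumping well sink, illustrative

for _ in range(5000):             # Jacobi sweeps toward the steady state
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                h[1:-1, :-2] + h[1:-1, 2:] +
                                dx**2 * q[1:-1, 1:-1] / T)
    h = h_new

# Drawdown is deepest at the well and decays toward the fixed-head boundary.
```

In a phreatic aquifer the transmissivity itself depends on the head, which is one source of the numerical difficulties under strong exploitation that the abstract mentions.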

  10. Pitting corrosion of copper. Further model studies

    International Nuclear Information System (INIS)

    Taxen, C.

    2002-08-01

    The work presented in this report is a continuation and expansion of a previous study. The aim of the work is to provide background information about pitting corrosion of copper for a safety analysis of copper canisters for final deposition of radioactive waste. A mathematical model for the propagation of corrosion pits is used to estimate the conditions required for stationary propagation of a localised anodic corrosion process. The model uses equilibrium data for copper and its corrosion products and parameters for the aqueous mass transport of dissolved species. In the present work we have used, in the model, a more extensive set of aqueous and solid compounds and equilibrium data from a different source. The potential dependence of pitting in waters with different compositions is studied in greater detail. More waters have been studied, and single-parameter variations in the composition of the water have been studied over wider ranges of concentration. The conclusions drawn in the previous study are not contradicted by the present results. However, the combined effect of potential and water composition on the possibility of pitting corrosion is more complex than was realised. In the previous study we found what seemed to be a continuous aggravation of a pitting situation with increasing potentials. The present results indicate that pitting corrosion can take place only over a certain potential range, and that there is an upper potential limit for pitting as well as a lower one. A sensitivity analysis indicates that the model gives meaningful predictions of the minimum pitting potential even when relatively large errors in the input parameters are allowed for.

  11. Analytical study of anisotropic compact star models

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, B.V. [Bulgarian Academy of Science, Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria)

    2017-11-15

    A simple classification is given of the anisotropic relativistic star models, resembling the one of charged isotropic solutions. On the ground of this database, and taking into account the conditions for physically realistic star models, a method is proposed for generating all such solutions. It is based on the energy density and the radial pressure as seeding functions. Numerous relations between the realistic conditions are found and the need for a graphic proof is reduced just to one pair of inequalities. This general formalism is illustrated with an example of a class of solutions with linear equation of state and simple energy density. It is found that the solutions depend on three free constants and concrete examples are given. Some other popular models are studied with the same method. (orig.)
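The generating method takes the energy density and the radial pressure as seeding functions. As a hedged aid to the reader, the hydrostatic-equilibrium relation underlying such models (in standard notation, with units G = c = 1, rho the energy density, p_r the radial and p_t the tangential pressure) is the anisotropic generalization of the Tolman-Oppenheimer-Volkoff equation:

```latex
\frac{dp_r}{dr} = -\frac{\left(\rho + p_r\right)\left(m + 4\pi r^3 p_r\right)}{r\left(r - 2m\right)}
  + \frac{2\left(p_t - p_r\right)}{r},
\qquad
m(r) = 4\pi \int_0^r \rho(r')\, r'^2 \, dr'
```

The anisotropy term 2(p_t - p_r)/r vanishes in the isotropic limit, recovering the usual TOV equation for charged-free isotropic stars that the abstract's classification parallels.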

  12. Integrated core-edge-divertor modeling studies

    International Nuclear Information System (INIS)

    Stacey, W.M.

    2001-01-01

    An integrated calculation model for simulating the interaction of physics phenomena taking place in the plasma core, in the plasma edge, and in the SOL and divertor of tokamaks has been developed and applied to study such interactions. The model synthesises a combination of numerical calculations: (1) the power and particle balances for the core plasma, using empirical confinement scaling laws and taking into account radiation losses; (2) the particle, momentum and power balances in the SOL and divertor, taking into account the effects of radiation and recycling neutrals; (3) the transport of fueling and recycling neutrals, explicitly representing divertor and pumping geometry; and (4) edge pedestal gradient scale lengths and widths; evaluation of theoretical predictions: (5) confinement degradation due to thermal instabilities in the edge pedestals; (6) detachment and divertor MARFE onset; (7) core MARFE onset leading to an H-L transition; and (8) radiative collapse leading to a disruption; and evaluation of empirical fits: (9) power thresholds for the L-H and H-L transitions and (10) the width of the edge pedestals. The various components of the calculation model are coupled and must be iterated to a self-consistent convergence. The model was developed over several years for the purpose of interpreting various edge phenomena observed in DIII-D experiments and thereby, to some extent, has been benchmarked against experiment. Because the model treats the interactions of various phenomena in the core, edge and divertor, yet is computationally efficient, it lends itself to the investigation of the effects of different choices of various edge plasma operating conditions on overall divertor and core plasma performance. Studies of the effect of fueling location and rate, divertor geometry, plasma shape, pumping and other edge parameters on core plasma properties (line average density, confinement, density limit, etc.) have been performed for DIII-D model problems. A

  13. Mathematical modelling a case studies approach

    CERN Document Server

    Illner, Reinhard; McCollum, Samantha; Roode, Thea van

    2004-01-01

    Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...

  14. Visualization study of operators' plant knowledge model

    International Nuclear Information System (INIS)

    Kanno, Tarou; Furuta, Kazuo; Yoshikawa, Shinji

    1999-03-01

    Nuclear plants are typically very complicated systems and require an extremely high level of safety in their operation. Since it is never possible to include all possible anomaly scenarios in an education/training curriculum, plant knowledge formation is desired for operators to enable them to act against unexpected anomalies based on knowledge-based decision making. The authors have conducted a study on operators' plant knowledge models for the purpose of supporting operators' effort in forming this kind of plant knowledge. In this report, an integrated plant knowledge model consisting of a configuration space, causality space, goal space and status space is proposed. The authors examined the appropriateness of this model and developed a prototype system to support knowledge formation by visualizing the operators' knowledge model and the decision-making process in knowledge-based actions with this model in a software system. Finally, the feasibility of this prototype as a supportive method in operator education/training to enhance operators' ability in knowledge-based performance has been evaluated. (author)

  15. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  16. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  17. Modeling the Sulfate Deposition to the Greenland Ice Sheet From the Laki Eruption

    Science.gov (United States)

    Oman, L.; Robock, A.; Stenchikov, G.; Thordarson, T.; Gao, C.

    2005-12-01

    Using the state-of-the-art Goddard Institute for Space Studies (GISS) modelE general circulation model, simulations were conducted of the chemistry and transport of aerosols resulting from the 1783-84 Laki (64°N) flood lava eruption. A set of 3 ensemble simulations from different initial conditions was conducted by injecting our estimate of the SO2 gas into the atmosphere over the 10 episodes of the eruption and allowing the sulfur chemistry model to convert this gas into sulfate aerosol. The SO2 gas and sulfate aerosol are transported by the model, and wet and dry deposition is calculated over each grid box during the simulation. We compare the resulting sulfate deposition to the Greenland Ice Sheet in the model to 23 ice core measurements and find very good agreement. The model simulation deposits a range of 169 to over 300 kg/km2 over interior Greenland, with much higher values along the coastal areas. This compares to a range of 62 to 324 kg/km2 for the 23 ice core measurements, with an average value of 158 kg/km2. This comparison is one important model validation tool. Modeling and observations show fairly large spatial variations in the deposition of sulfate across the Greenland Ice Sheet for the Laki eruption, but the patterns are similar to those we modeled for the 1912 Katmai and 1991 Pinatubo eruptions. Estimates of sulfate loading based on single ice cores can show significant differences, so ideally several ice cores should be combined in reconstructing the sulfate loading of past volcanic eruptions, taking into account the characteristic spatial variations in the deposition pattern.
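The model-versus-ice-core comparison can be sketched with simple summary statistics. The values below are made-up stand-ins, not the 23 Laki core measurements or the GISS modelE output.

```python
import statistics

# Illustrative comparison of modelled vs. ice-core sulfate deposition
# (kg/km^2). Both lists are hypothetical stand-ins for co-located values.
cores = [62, 95, 120, 150, 158, 170, 210, 250, 324]
model = [80, 100, 130, 160, 150, 180, 220, 260, 300]

bias = statistics.mean(m - c for m, c in zip(model, cores))
rmse = statistics.mean((m - c) ** 2 for m, c in zip(model, cores)) ** 0.5

# Averaging several cores damps the single-site spatial variability that the
# abstract warns about when reconstructing eruption sulfate loadings.
multi_core_estimate = statistics.mean(cores)
```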

  18. Model study radioecology Biblis. Vol. 2

    International Nuclear Information System (INIS)

    1976-01-01

    The second part of the study deals with questions concerning the release of radioactive substances into the environment via the air pathway and their diffusion. On the basis of an introductory lecture on basic principles the interrelations within the whole complex to be considered are discussed. For the release itself, the fundamental issues concerning type and quantity of radioactive substances and their time behaviour are established. With regard to diffusion, calculation models and their parameters are reviewed. Much the same as the first colloquium, this one too aims at compiling existing models and data, determining the known facts and the questions still open and formulating tasks for the further activities. Thirteen questions emerged. They are concerned, above all, with the existing conditions for release and diffusion at the site. These questions will be dealt with in the forms of partial studies by individual participants or by task groups. Results will be reported on at further colloquia. (orig.) [de

  19. Digital terrain data base - new possibilities of 3D terrain modeling

    Directory of Open Access Journals (Sweden)

    Mateja Rihtaršič

    1992-12-01

    Full Text Available GISs have brought new dimensions to the field of digital terrain modelling, too. Modern DTMs must be real (relational) databases with a high degree of "intelligence". This paper presents some of the demands which have to be met by modern digital terrain databases, together with the main steps of their generation. Problems connected to the regional level, multi-purpose use, new possibilities and direct integration into GIS are presented. The practical model was created for a smaller test area, so a few lines of practical experience can be added, too.

  20. Experimental study and modelling of transient boiling

    International Nuclear Information System (INIS)

    Baudin, Nicolas

    2015-01-01

    A failure in the power control system of a nuclear reactor can lead to a Reactivity Initiated Accident in a nuclear power plant. A power peak then occurs in some fuel rods, high enough to lead to film boiling of the coolant, which causes an important increase of the temperature of the rod. The possible risk of clad failure is a matter of interest for the Institut de Radioprotection et de Surete Nucleaire. Transient boiling heat transfer is not yet understood and modelled. An experimental set-up has been built at the Institut de Mecanique des Fluides de Toulouse (IMFT). Subcooled HFE-7000 flows vertically upward in a semi-annular test section. The inner half cylinder simulates the clad and is made of a stainless steel foil, heated by the Joule effect. Its temperature is measured by an infrared camera, coupled with a high-speed camera for visualization of the flow topology. The whole boiling curve is studied in steady-state and transient regimes: convection, onset of boiling, nucleate boiling, critical heat flux, film boiling and rewetting. The steady-state heat transfers are well modelled by literature correlations. Models are suggested for the transient heat flux: the convection and nucleate boiling evolutions are self-similar during a power step. This observation allows more complex evolutions, such as temperature ramps, to be modelled. The transient Hsu model represents the onset of nucleate boiling well. When the intensity of the power step increases, film boiling begins at the same temperature but with an increasing heat flux. For power ramps, the critical heat flux decreases while the corresponding temperature increases with the heating rate. When the wall is heated, the film boiling heat transfer is higher than in steady state, but this is not yet understood. A two-fluid model simulates the cooling film boiling and the rewetting well. (author)

  1. Simulation model for studying low frequency microinstabilities

    International Nuclear Information System (INIS)

    Lee, W.W.; Okuda, H.

    1976-03-01

    A 2½-dimensional, electrostatic particle code in slab geometry has been developed to study low-frequency oscillations such as drift wave and trapped particle instabilities in a nonuniform bounded plasma. A drift approximation for the electron transverse motion is made, which eliminates the high-frequency oscillations at the electron gyrofrequency and its multiples. It is therefore possible to study nonlinear effects, such as the anomalous transport of plasmas, within a reasonable computing time using a real mass ratio. Several examples are given to check the validity and usefulness of the model.

  2. BIOMASS REBURNING - MODELING/ENGINEERING STUDIES

    International Nuclear Information System (INIS)

    Vladimir Zamansky; David Moyeda; Mark Sheldon

    2000-01-01

    This project is designed to develop engineering and modeling tools for a family of NO(sub x) control technologies utilizing biomass as a reburning fuel. During the tenth reporting period (January 1-March 31, 2000), EER and NETL R and D group continued to work on Tasks 2, 3, 4, and 5. Information regarding these tasks will be included in the next Quarterly Report. This report includes (Appendix 1) a conceptual design study for the introduction of biomass reburning in a working coal-fired utility boiler. This study was conducted under the coordinated SBIR program funded by the U. S. Department of Agriculture

  3. Building Materials, Ionizing Radiation and HBIM: A Case Study from Pompei (Italy

    Directory of Open Access Journals (Sweden)

    Pasquale Argenziano

    2018-01-01

    Full Text Available This paper presents a different point of view on the conservation of the built heritage, adding ionizing radiation to the most well-known digital documentation datasets. Igneous building materials characterize most of the built heritage in the Campania region, and a large part of southern Italy. The ionizing radiation emitted by these materials can produce stochastic biological effects in exposed living beings. The research team designed and tested a technical-scientific protocol to survey and analyse this natural phenomenon in association with the use of geological material for building purposes. Geographical Information Systems (GISs), City Information Modelling (CIM), and Building Information Modelling (BIM) are the digital tools used to manage the construction entities and their characteristics, and then to represent the thematic data as false-colour images. The emission spectrum of fair-faced or plastered materials, as a fingerprint of their nature, is proposed as a non-invasive method. Due to both the huge presence of historical buildings and an intense touristic flow, the main square of Pompei has been selected as the study area.

  4. Technical data report : marine acoustics modelling study

    Energy Technology Data Exchange (ETDEWEB)

    Chorney, N.; Warner, G.; Austin, M. [Jasco Applied Sciences, Victoria, BC (Canada)

    2010-07-01

    This study was conducted to predict the ensonification produced by vessel traffic transiting to and from the Enbridge Northern Gateway Project's marine terminal located near Kitimat, British Columbia (BC). An underwater acoustic propagation model was used to model frequency bands from 20 Hz to 5 kHz at a standard depth of 20 metres. The model included bathymetric grids of the modelling area; underwater sound speed as a function of depth; and geo-acoustic profiles based on the stratified composition of the seafloor. The obtained 1/3 octave band levels were then used to determine broadband received sound levels for 4 scenarios along various transit routes: the Langara and Triple Island in Dixon Entrance; the Browning Entrance in Hecate Strait, and Cape St. James in the Queen Charlotte Basin. The scenarios consisted of a tanker transiting at 16 knots, and an accompanying tug boat. Underwater sound level maps for each scenario were presented. 14 refs., 5 tabs., 16 figs.
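The step from 1/3-octave band levels to broadband received levels is the standard intensity summation of decibel band values; a minimal sketch follows, with illustrative band levels rather than the study's modelled ones.

```python
import math

# Combine 1/3-octave band received levels (dB) into a broadband level by
# summing intensities: L_bb = 10 log10( sum 10^(L_i/10) ).
# The band values below are illustrative, not modelled Kitimat-route levels.
band_levels_db = [112.0, 115.0, 118.0, 114.0, 110.0]

broadband_db = 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in band_levels_db))
# The broadband level always exceeds the loudest single band.
```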

  5. The Ozone Budget in the Upper Troposphere from Global Modeling Initiative (GMI) Simulations

    Science.gov (United States)

    Rodriquez, J.; Duncan, Bryan N.; Logan, Jennifer A.

    2006-01-01

    Ozone concentrations in the upper troposphere are influenced by in-situ production, long-range tropospheric transport, and the influx of stratospheric ozone, as well as by photochemical removal. Since ozone is an important greenhouse gas in this region, it is particularly important to understand how it will respond to changes in anthropogenic emissions and changes in stratospheric ozone fluxes. This response will be determined by the relative balance of the different production, loss and transport processes. Ozone concentrations calculated by models will differ depending on the adopted meteorological fields, their chemical scheme, anthropogenic emissions, and treatment of the stratospheric influx. We performed simulations using the chemical-transport model from the Global Modeling Initiative (GMI) with meteorological fields from (1) the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM), (2) the atmospheric GCM from NASA's Global Modeling and Assimilation Office (GMAO), and (3) assimilated winds from GMAO. These simulations adopt the same chemical mechanism and emissions, and adopt the Synthetic Ozone (SYNOZ) approach for treating the influx of stratospheric ozone. In addition, we also performed simulations for a coupled troposphere-stratosphere model with a subset of the same winds. Simulations were done at both 4° × 5° and 2° × 2.5° resolution. Model results are being tested through comparison with a suite of atmospheric observations. In this presentation, we diagnose the ozone budget in the upper troposphere utilizing the suite of GMI simulations, to address the sensitivity of this budget to: (a) the different meteorological fields used; (b) the adoption of the SYNOZ boundary condition versus inclusion of a full stratosphere; (c) model horizontal resolution. Model results are compared to observations to determine biases in particular simulations; by examining these comparisons in conjunction with the derived budgets, we may pinpoint

  6. Improved Ground Hydrology Calculations for Global Climate Models (GCMs): Soil Water Movement and Evapotranspiration.

    Science.gov (United States)

    Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.

    1988-09-01

    A physically based ground hydrology model is developed to improve the land-surface sensible and latent heat calculations in global climate models (GCMs). The processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff are explicitly included in the model. The amount of detail in the hydrologic calculations is restricted to a level appropriate for use in a GCM, but each of the aforementioned processes is modeled on the basis of the underlying physical principles. Data from the Goddard Institute for Space Studies (GISS) GCM are used as inputs for off-line tests of the ground hydrology model in four 8° × 10° regions (Brazil, Sahel, Sahara, and India). Soil and vegetation input parameters are calculated as area-weighted means over the 8° × 10° gridbox. This compositing procedure is tested by comparing resulting hydrological quantities to ground hydrology model calculations performed on the 1° × 1° cells which comprise the 8° × 10° gridbox. Results show that the compositing procedure works well except in the Sahel where lower soil water levels and a heterogeneous land surface produce more variability in hydrological quantities, indicating that a resolution better than 8° × 10° is needed for that region. Modeled annual and diurnal hydrological cycles compare well with observations for Brazil, where real world data are available. The sensitivity of the ground hydrology model to several of its input parameters was tested; it was found to be most sensitive to the fraction of land covered by vegetation and least sensitive to the soil hydraulic conductivity and matric potential.
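The compositing procedure lends itself to a short sketch: an area-weighted mean of per-cell parameters over a gridbox. Everything below (the parameter field, latitude range, and weights) is illustrative, not GISS GCM data.

```python
import numpy as np

# Sketch of area-weighted compositing: 1° x 1° cell parameters averaged over
# an 8° x 10° gridbox. Values and the latitude band are illustrative only.
rng = np.random.default_rng(0)
conductivity = rng.uniform(1e-6, 1e-4, size=(8, 10))   # per-cell parameter, illustrative

# Cell area shrinks with latitude roughly as cos(lat); weight each row by it.
lat_deg = np.linspace(20.0, 27.0, 8)
cell_area = np.cos(np.deg2rad(lat_deg))[:, None] * np.ones((8, 10))

composite = np.average(conductivity, weights=cell_area)
# One composited value per gridbox vs. 80 cell values: the test in the paper
# compares hydrology run on the composite against the cell-by-cell runs.
```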

  7. Variational study of the pair hopping model

    International Nuclear Information System (INIS)

    Fazekas, P.

    1990-01-01

    We study the ground state of a Hamiltonian introduced by Kolb and Penson for modelling situations in which small electron pairs are formed. The Hamiltonian consists of a tight binding band term, and a term describing the nearest neighbour hopping of electron pairs. We give a Gutzwiller-type variational treatment, first with a single-parameter Ansatz treated in the single site Gutzwiller approximation, and then with more complicated trial wave functions, and an improved Gutzwiller approximation. The calculation yields a transition from a partially paired normal state, in which the spin susceptibility has a diminished value, into a fully paired state. (author). 16 refs, 2 figs

  8. A Simple Model to Study Tau Pathology

    Directory of Open Access Journals (Sweden)

    Alexander L. Houck

    2016-01-01

    Full Text Available Tau proteins play a role in the stabilization of microtubules, but in pathological conditions (tauopathies), tau is modified by phosphorylation and can form aberrant aggregates. These aggregates could be toxic to cells, and different cell models have been used to test for compounds that might prevent these tau modifications. Here, we have used a cell model involving the overexpression of human tau in human embryonic kidney 293 cells. In human embryonic kidney 293 cells expressing tau in a stable manner, we have been able to replicate the phosphorylation of intracellular tau. This intracellular tau increases its own level of phosphorylation and aggregates, likely due to the regulatory effect of some growth factors on specific tau kinases such as GSK3. Under these conditions, a change in secreted tau was observed. Reversal of tau phosphorylation and aggregation was achieved by the use of lithium, a GSK3 inhibitor. Thus, we propose this as a simple cell model to study tau pathology in nonneuronal cells, given their viability and ease of use.

  9. Risk modelling study for carotid endarterectomy.

    Science.gov (United States)

    Kuhan, G; Gardiner, E D; Abidia, A F; Chetter, I C; Renwick, P M; Johnson, B F; Wilkinson, A R; McCollum, P T

    2001-12-01

    The aims of this study were to identify factors that influence the risk of stroke or death following carotid endarterectomy (CEA) and to develop a model to aid in comparative audit of vascular surgeons and units. A series of 839 CEAs performed by four vascular surgeons between 1992 and 1999 was analysed. Multiple logistic regression analysis was used to model the effect of 15 possible risk factors on the 30-day risk of stroke or death. Outcome was compared for four surgeons and two units after adjustment for the significant risk factors. The overall 30-day stroke or death rate was 3.9 per cent (29 of 741). Heart disease, diabetes and stroke were significant risk factors. The 30-day predicted stroke or death rates increased with increasing risk scores. The observed 30-day stroke or death rate was 3.9 per cent for both vascular units and varied from 3.0 to 4.2 per cent for the four vascular surgeons. Differences in the outcomes between the surgeons and vascular units did not reach statistical significance after risk adjustment. Diabetes, heart disease and stroke are significant risk factors for stroke or death following CEA. The risk score model identified patients at higher risk and aided in comparative audit.
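The risk-score idea can be sketched as a logistic model over the three significant factors; the coefficients below are invented for illustration, not those fitted to the 839-CEA series.

```python
import math

# Hedged sketch of a logistic risk score for 30-day stroke/death after CEA,
# using the three significant factors from the abstract. Coefficients are
# illustrative placeholders, not the study's fitted values.
coef = {"intercept": -3.8, "heart_disease": 0.9, "diabetes": 0.8, "stroke": 0.7}

def predicted_risk(heart_disease, diabetes, stroke):
    """30-day stroke/death probability from the logistic score (factors are 0/1)."""
    z = (coef["intercept"]
         + coef["heart_disease"] * heart_disease
         + coef["diabetes"] * diabetes
         + coef["stroke"] * stroke)
    return 1.0 / (1.0 + math.exp(-z))

low = predicted_risk(0, 0, 0)    # no risk factors
high = predicted_risk(1, 1, 1)   # all three factors present
```

Comparative audit then amounts to comparing each surgeon's observed event rate against the rate predicted by the score for that surgeon's case mix.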

  10. A study on an optimal movement model

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton BN1 9QH, UK (United Kingdom); Zhang, Kewei [SMS, Sussex University, Brighton BN1 9QH (United Kingdom); Luo Yousong [Department of Mathematics and Statistics, RMIT University, GPO Box 2476V, Melbourne, Vic 3001 (Australia)

    2003-07-11

    We present an analytical and rigorous study on a TOPS (task optimization in the presence of signal-dependent noise) model with a hold-on or an end-point control. Optimal control signals are rigorously obtained, which enables us to investigate various issues about the model including its trajectories, velocities, control signals, variances and the dependence of these quantities on various model parameters. With the hold-on control, we find that the optimal control can be implemented with an almost 'nil' hold-on period. The optimal control signal is a linear combination of two sub-control signals. One of the sub-control signals is positive and the other is negative. With the end-point control, the end-point variance is dramatically reduced, in comparison with the hold-on control. However, the velocity is not symmetric (bell shape). Finally, we point out that the velocity with a hold-on control takes the bell shape only within a limited parameter region.

  11. Regional Model Nesting Within GFS Daily Forecasts Over West Africa

    Science.gov (United States)

    Druyan, Leonard M.; Fulakeza, Matthew; Lonergan, Patrick; Worrell, Ruben

    2010-01-01

    The study uses the RM3, the regional climate model at the Center for Climate Systems Research of Columbia University and the NASA/Goddard Institute for Space Studies (CCSR/GISS). The paper evaluates 30 48-hour RM3 weather forecasts over West Africa during September 2006, made on a 0.5° grid nested within 1° Global Forecast System (GFS) global forecasts. September 2006 was Special Observing Period #3 of the African Monsoon Multidisciplinary Analysis (AMMA). Archived GFS initial conditions and lateral boundary conditions for the simulations, from the US National Weather Service, National Oceanic and Atmospheric Administration, were interpolated four times daily. Precipitation forecasts are validated against Tropical Rainfall Measuring Mission (TRMM) satellite estimates and data from the Famine Early Warning System (FEWS), which includes rain gauge measurements, and circulation forecasts are compared to Reanalysis 2. Performance statistics for the precipitation forecasts include bias, root-mean-square errors and spatial correlation coefficients. The nested regional model forecasts are compared to GFS forecasts to gauge whether nesting provides additional realistic information. They are also compared to RM3 simulations driven by Reanalysis 2, representing high-potential-skill forecasts, to gauge the sensitivity of results to lateral boundary conditions. Nested RM3/GFS forecasts generate excessive moisture advection toward West Africa, which in turn causes prodigious amounts of model precipitation. This problem is corrected by empirical adjustments in the preparation of lateral boundary conditions and initial conditions. The resulting modified simulations improve on the GFS precipitation forecasts, achieving time-space correlations with TRMM of 0.77 on the first day and 0.63 on the second day.
One realtime RM3/GFS precipitation forecast made at and posted by the African Centre of Meteorological Application for Development (ACMAD) in Niamey, Niger
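    The verification statistics named above (bias, root-mean-square error, spatial correlation) can be sketched in a few lines. This is a generic illustration of those three scores on flattened precipitation fields, not the paper's actual verification code; the toy example assumes a uniform 1 mm/day wet bias.

```python
import math

def verification_stats(forecast, observed):
    """Bias, RMSE and Pearson correlation between two gridded fields
    flattened to 1-D sequences, as used to score precipitation forecasts."""
    n = len(forecast)
    bias = sum(f - o for f, o in zip(forecast, observed)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    mf = sum(forecast) / n
    mo = sum(observed) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(forecast, observed))
    var_f = sum((f - mf) ** 2 for f in forecast)
    var_o = sum((o - mo) ** 2 for o in observed)
    corr = cov / math.sqrt(var_f * var_o)
    return bias, rmse, corr

# Toy fields (mm/day): forecast is observed plus a uniform +1 wet bias,
# so bias = 1, RMSE = 1, and the spatial pattern correlates perfectly.
bias, rmse, corr = verification_stats([2.0, 3.0, 4.0, 5.0], [1.0, 2.0, 3.0, 4.0])
```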

  12. Model study radioecology Biblis. Vol. 1

    International Nuclear Information System (INIS)

    1976-01-01

    The first part of the study deals with questions concerning radiation resulting from the release of radioactive substances into the environment via the aquatic pathway. The discussion is preceded by a lecture on the basis for the calculations and a lecture on the results of primary radiological model calculations. The colloquium aims at establishing existing knowledge, coordinating divergent opinions, and elaborating the questions still open and possible ways to deal with them. On the whole, ten questions have emerged, mainly concerned with site conditions. These questions will be dealt with in the form of partial studies by individual participants or by task groups. The results will be discussed at further colloquia. (orig.) [de

  13. Animal models for HCV and HBV studies

    Directory of Open Access Journals (Sweden)

    Isabelle Chemin

    2007-02-01

    develop fulminant hepatitis, acute hepatitis, or chronic liver disease after adoptive transfer, and others spontaneously develop hepatocellular carcinoma (HCC). Among HCV transgenic mice, most develop no disease, but acute hepatitis has been observed in one model, and HCC in another. Although mice are not susceptible to HBV and HCV, their ability to replicate these viruses and to develop liver diseases characteristic of human infections provides opportunities to study pathogenesis and develop novel therapeutics. In the search for the mechanism of hepatocarcinogenesis in hepatitis viral infection, two viral proteins, the core protein of hepatitis C virus (HCV) and the HBx protein of hepatitis B virus (HBV), have been shown to possess oncogenic potential through transgenic mouse studies, indicating the direct involvement of the hepatitis viruses in hepatocarcinogenesis.

    This may explain the very high frequency of HCC in patients with HCV or HBV infection.

    Chimpanzees remain the only recognized animal model for the study of hepatitis C virus (HCV). Studies performed in chimpanzees played a critical role in the discovery of HCV and are continuing to play an essential role in defining the natural history of this important human pathogen. In the absence of a reproducible cell culture system, the infectivity titer of HCV challenge pools can be determined only in chimpanzees.

    Recent studies in chimpanzees have provided new insight into the nature of host immune responses-particularly the intrahepatic responses-following primary and secondary experimental HCV infections. The immunogenicity and efficacy of vaccine candidates against HCV can be tested only in chimpanzees. Finally, it would not have been possible to demonstrate

  14. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are directly measured, as in a field survey, and furthermore the structure and physical properties of the subsurface are known. The measured datasets from physical modeling are therefore well suited as input data for testing the efficiency of various inversion algorithms. An underground structure formed by intrusion, which can often be seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. These physical modeling data are thus valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and estimating the capability of software such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  15. An animal model to study regenerative endodontics.

    Science.gov (United States)

    Torabinejad, Mahmoud; Corr, Robert; Buhrley, Matthew; Wright, Kenneth; Shabahang, Shahrokh

    2011-02-01

    A growing body of evidence is demonstrating the possibility for regeneration of tissues within the pulp space and continued root development in teeth with necrotic pulps and open apices. There are areas of research related to regenerative endodontics that need to be investigated in an animal model. The purpose of this study was to investigate ferret cuspid teeth as a model to investigate factors involved in regenerative endodontics. Six young male ferrets between the ages of 36-133 days were used in this investigation. Each animal was anesthetized and perfused with 10% buffered formalin. Block sections including the mandibular and maxillary cuspid teeth and their surrounding periapical tissues were obtained, radiographed, decalcified, sectioned, and stained with hematoxylin-eosin to determine various stages of apical closure in these teeth. The permanent mandibular and maxillary cuspid teeth with open apices erupted approximately 50 days after birth. Initial signs of closure of the apical foramen in these teeth were observed between 90-110 days. Complete apical closure was observed in the cuspid teeth when the animals were 133 days old. Based on the experiment, ferret cuspid teeth can be used to investigate various factors involved in regenerative endodontics that cannot be tested in human subjects. The most appropriate time to conduct the experiments would be when the ferrets are between the ages of 50 and 90 days. Copyright © 2011. Published by Elsevier Inc.

  16. Assessing ocean vertical mixing schemes for the study of climate change

    Science.gov (United States)

    Howard, A. M.; Lindo, F.; Fells, J.; Tulsee, V.; Cheng, Y.; Canuto, V.

    2014-12-01

    Climate change is a burning issue of our time. It is critical to know the consequences of choosing "business as usual" vs. mitigating our emissions for impacts such as ecosystem disruption, sea-level rise, floods and droughts. To make predictions we must model realistically each component of the climate system. The ocean must be modeled carefully as it plays a critical role, including transporting heat and storing heat and dissolved carbon dioxide. Modeling the ocean realistically in turn requires physically based parameterizations of key processes that cannot be explicitly represented in a global climate model. One such process is vertical mixing. The turbulence group at NASA-GISS has developed a comprehensive new vertical mixing scheme (GISSVM) based on turbulence theory, including surface convection and wind shear, interior waves and double-diffusion, and bottom tides. The GISSVM is tested in stand-alone ocean simulations before being used in coupled climate models. It is also being upgraded to more faithfully represent the physical processes. To help assess mixing schemes, students use data from NASA-GISS to create visualizations and calculate statistics including mean bias, RMS differences and correlations of fields, all programmed in MATLAB. Results with the commonly used KPP mixing scheme, the present GISSVM and candidate improved variants of GISSVM will be compared between stand-alone ocean models, coupled models and observations. This project introduces students to modeling of a complex system, an important theme in contemporary science, and helps them gain a better appreciation of climate science and a new perspective on it. They also gain familiarity with MATLAB, a widely used tool, and develop skills in writing and understanding programs.
Moreover they contribute to the advancement of science by providing information that will help guide the improvement of the GISSVM and hence of ocean and climate models and ultimately our
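    For global statistics like the mean bias mentioned above, grid rows are usually weighted by the cosine of latitude so that densely spaced high-latitude points do not dominate. A minimal sketch of that weighting (in Python rather than the MATLAB used in the project, and with invented illustrative SST values):

```python
import math

def area_weighted_bias(model, obs, lats):
    """Mean bias of a zonal-mean field, weighted by cos(latitude) so that
    high-latitude rows do not dominate the global statistic."""
    weights = [math.cos(math.radians(lat)) for lat in lats]
    total_w = sum(weights)
    return sum(w * (m - o) for w, m, o in zip(weights, model, obs)) / total_w

lats = [-60.0, -30.0, 0.0, 30.0, 60.0]
sst_model = [5.0, 18.0, 28.0, 19.0, 6.0]   # illustrative zonal-mean SSTs, deg C
sst_obs   = [4.0, 17.0, 27.0, 18.0, 5.0]   # model is uniformly 1 deg C too warm
bias = area_weighted_bias(sst_model, sst_obs, lats)
```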

  17. An Exploratory Study: Assessment of Modeled Dioxin ...

    Science.gov (United States)

    EPA has released an external review draft entitled, An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios (External Review Draft). The public comment period and the external peer-review workshop are separate processes that provide opportunities for all interested parties to comment on the document. In addition to consideration by EPA, all public comments submitted in accordance with this notice will also be forwarded to EPA’s contractor for the external peer-review panel prior to the workshop. EPA has released this draft document solely for the purpose of pre-dissemination peer review under applicable information quality guidelines. This document has not been formally disseminated by EPA. It does not represent and should not be construed to represent any Agency policy or determination. The purpose of this report is to describe an exploratory investigation of potential dioxin exposures to artists/hobbyists who use ball clay to make pottery and related products.

  18. Ovine model for studying pulmonary immune responses

    International Nuclear Information System (INIS)

    Joel, D.D.; Chanana, A.D.

    1984-01-01

    Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with 125I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables

  19. Ovine model for studying pulmonary immune responses

    Energy Technology Data Exchange (ETDEWEB)

    Joel, D.D.; Chanana, A.D.

    1984-11-25

    Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with 125I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables.

  20. Study on Uncertainty and Contextual Modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Vol. 1, No. 1 (2007), pp. 12-15. ISSN 1998-0140. Institutional research plan: CEZ:AV0Z10750506. Keywords: knowledge * contextual modelling * temporal modelling * uncertainty * knowledge management. Subject RIV: BD - Theory of Information

  1. Biomass thermochemical gasification: Experimental studies and modeling

    Science.gov (United States)

    Kumar, Ajay

    The overall goals of this research were to study biomass thermochemical gasification using experimental and modeling techniques, and to evaluate the cost of industrial gas production and combined heat and power generation. This dissertation includes an extensive review of progress in biomass thermochemical gasification. Product gases from biomass gasification can be converted to biopower, biofuels and chemicals. However, for viable commercial applications, the study summarizes the technical challenges in the gasification and downstream processing of product gas. Corn stover and dried distillers grains with solubles (DDGS), a non-fermentable byproduct of ethanol production, were used as the biomass feedstocks. One of the objectives was to determine selected physical and chemical properties of corn stover related to thermochemical conversion. The parameters of the reaction kinetics for weight loss were obtained. The next objective was to investigate the effects of temperature, steam to biomass ratio and equivalence ratio on gas composition and efficiencies. DDGS gasification was performed on a lab-scale fluidized-bed gasifier with steam and air as fluidizing and oxidizing agents. Increasing the temperature resulted in increases in hydrogen and methane contents and efficiencies. A model was developed to simulate the performance of a lab-scale gasifier using Aspen Plus(TM) software. Mass balance, energy balance and minimization of Gibbs free energy were applied for the gasification to determine the product gas composition. The final objective was to optimize the process by maximizing the net energy efficiency, and to estimate the cost of industrial gas, and combined heat and power (CHP) at a biomass feed rate of 2000 kg/h. The selling price of gas was estimated to be $11.49/GJ for corn stover, and $13.08/GJ for DDGS. For CHP generation, the electrical and net efficiencies were 37 and 86%, respectively for corn stover, and 34 and 78%, respectively for DDGS.
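    Gasification efficiencies of the kind reported above are commonly expressed as a cold-gas efficiency: the chemical energy of the product gas divided by the chemical energy of the biomass fed. The sketch below illustrates that ratio with assumed round values, not the corn stover or DDGS measurements from the study.

```python
# Cold-gas efficiency = (gas yield * gas LHV) / biomass LHV.
# All numbers are assumed illustrative values, not measured results.
gas_yield_m3_per_kg = 2.0      # product gas per kg dry biomass (assumed)
gas_lhv_mj_per_m3 = 5.0        # lower heating value of product gas (assumed)
biomass_lhv_mj_per_kg = 17.0   # lower heating value of dry biomass (assumed)

cold_gas_efficiency = (gas_yield_m3_per_kg * gas_lhv_mj_per_m3) / biomass_lhv_mj_per_kg
```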

  2. Mathematical modelling with case studies using Maple and Matlab

    CERN Document Server

    Barnes, B

    2014-01-01

    Introduction to Mathematical Modeling: Mathematical models; An overview of the book; Some modeling approaches; Modeling for decision making. Compartmental Models: Introduction; Exponential decay and radioactivity; Case study: detecting art forgeries; Case study: Pacific rats colonize New Zealand; Lake pollution models; Case study: Lake Burley Griffin; Drug assimilation into the blood; Case study: dull, dizzy, or dead?; Cascades of compartments; First-order linear DEs; Equilibrium points and stability; Case study: money, money, money makes the world go around. Models of Single Populations: Exponential growth; Density-

  3. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the axial void fraction distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation give adequate results but tend to overestimate the experimental values, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt
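    The contrast drawn above can be seen in a small numerical sketch: the homogeneous model assumes no slip between phases and so predicts a higher void fraction than the Zuber-Findlay drift-flux form. The densities and the C0, Vgj values below are assumed round numbers for steam-water near 10 bar, not figures from the study.

```python
def alpha_homogeneous(x, rho_g, rho_l):
    """Homogeneous-model void fraction (no slip between phases)."""
    return 1.0 / (1.0 + ((1.0 - x) / x) * (rho_g / rho_l))

def alpha_drift_flux(x, rho_g, rho_l, mass_flux, c0=1.13, v_gj=0.2):
    """Zuber-Findlay drift-flux void fraction: alpha = j_g / (C0*j + Vgj).
    C0 and Vgj here are assumed round values for illustration."""
    j_g = mass_flux * x / rho_g          # superficial gas velocity, m/s
    j_l = mass_flux * (1.0 - x) / rho_l  # superficial liquid velocity, m/s
    return j_g / (c0 * (j_g + j_l) + v_gj)

# Steam-water at roughly 10 bar (approximate densities), quality x = 0.1
a_hom = alpha_homogeneous(0.1, rho_g=5.15, rho_l=887.0)
a_df = alpha_drift_flux(0.1, rho_g=5.15, rho_l=887.0, mass_flux=1000.0)
```

    With these inputs the homogeneous model gives the larger void fraction, consistent with the overestimation tendency the abstract reports.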

  4. Study of dissolution process and its modelling

    Directory of Open Access Journals (Sweden)

    Juan Carlos Beltran-Prieto

    2017-01-01

    The use of mathematical concepts and language aiming to describe and represent the interactions and dynamics of a system is known as a mathematical model. Mathematical modelling finds a huge number of successful applications in a vast range of scientific, social and engineering fields, including biology, chemistry, physics, computer science, artificial intelligence, bioengineering, finance and economics. In this research, we propose a mathematical model that predicts the dissolution of a solid material immersed in a fluid. The developed model can be used to evaluate the rate of mass transfer and the mass transfer coefficient. Further research is expected to build on the model to develop useful models for the pharmaceutical industry, giving information about the dissolution of medicaments in the bloodstream, which could play a key role in the formulation of medicaments.

  5. Bariatric Outcomes and Obesity Modeling: Study Meeting

    Science.gov (United States)

    2010-09-17

    Developed a cost-effectiveness model and a payer-based budget and fiscal impact tool to compare bariatric surgical procedures to non-operative ...SURVIVAL MODELED FROM NHIS-NDI • Statistical analysis adapts the methods from Schauer 2010. • Logistic regression model is used to predict the 5-year

  6. Model boiler studies on deposition and corrosion

    International Nuclear Information System (INIS)

    Balakrishnan, P.V.; McVey, E.G.

    1977-09-01

    Deposit formation was studied in a model boiler, with sea-water injections to simulate the in-leakage which could occur from sea-water cooled condensers. When All Volatile Treatment (AVT) was used for chemistry control the deposits consisted of the sea-water salts and corrosion products. With sodium phosphate added to the boiler water, the deposits also contained the phosphates derived from the sea-water salts. The deposits were formed in layers of differing compositions. There was no significant corrosion of the Fe-Ni-Cr alloy boiler tube under deposits, either on the open area of the tube or in crevices. However, carbon steel that formed a crevice around the tube was corroded severely when the boiler water did not contain phosphate. The observed corrosion of carbon steel was caused by the presence of acidic, highly concentrated chloride solution produced from the sea-water within the crevice. Results of theoretical calculations of the composition of the concentrated solution are presented. (author)

  7. Bayesian graphical models for genomewide association studies.

    Science.gov (United States)

    Verzilli, Claudio J; Stallard, Nigel; Whittaker, John C

    2006-07-01

    As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being collected in such studies. We use discrete graphical models as a data-mining tool, searching for single- or multilocus patterns of association around a causative site. The approach is fully Bayesian, allowing us to incorporate prior knowledge on the spatial dependencies around each marker due to linkage disequilibrium, which reduces considerably the number of possible graphical structures. A Markov chain Monte Carlo scheme is developed that yields samples from the posterior distribution of graphs conditional on the data, from which probabilistic statements about the strength of any genotype-phenotype association can be made. Using data simulated under scenarios that vary in marker density, genotype relative risk of a causative allele, and mode of inheritance, we show that the proposed approach has better localization properties and leads to lower false-positive rates than do single-locus analyses. Finally, we present an application of our method to a quasi-synthetic data set in which data from the CYP2D6 region are embedded within simulated data on 100K single-nucleotide polymorphisms. Analysis is quick (<5 min), and we are able to localize the causative site to a very short interval.
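    The single-locus analyses that the graphical-model approach is compared against typically reduce to a chi-square test on a 2 x 3 case/control-by-genotype table. A minimal sketch of that baseline (with made-up genotype counts, not data from the paper):

```python
def chi_square_2x3(table):
    """Pearson chi-square statistic for a 2 (case/control) x 3 (genotype)
    contingency table -- the single-locus baseline test."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand  # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# Genotype counts (AA, Aa, aa) for cases and controls; made-up numbers
cases = [30, 50, 20]
controls = [50, 40, 10]
stat = chi_square_2x3([cases, controls])
# With 2 degrees of freedom, stat > 5.99 is significant at the 5% level
```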

  8. Theoretical studies of Anderson impurity models

    International Nuclear Information System (INIS)

    Glossop, M.T.

    2000-01-01

    A Local Moment Approach (LMA) is developed for single-particle excitations of a symmetric single impurity Anderson model (SIAM) with a soft-gap hybridization vanishing at the Fermi level, Δ(ω) ∝ |ω|^r with r > 0, and for the generic asymmetric case of the 'normal' (r = 0) SIAM. In all cases we work within a two-self-energy description with local moments introduced explicitly from the outset, and in which single-particle excitations are coupled dynamically to low-energy transverse spin fluctuations. For the soft-gap symmetric SIAM, the resultant theory is applicable on all energy scales, and captures both the spin-fluctuation regime of strong coupling (large U), as well as the weak coupling regime where it is perturbatively exact for those r-domains in which perturbation theory in U is non-singular. While the primary emphasis is on single-particle dynamics, the quantum phase transition between strong coupling (SC) and local moment (LM) phases can also be addressed directly; for the spin-fluctuation regime in particular a number of asymptotically exact results are thereby obtained, notably for the behaviour of the critical U_c(r) separating SC/LM states and the Kondo scale ω_m(r) characteristic of the SC phase. Results for both single-particle spectra and SC/LM phase boundaries are found to agree well with recent numerical renormalization group (NRG) studies; and a number of further testable predictions are made. Single-particle spectra are examined systematically for both SC and LM states; in particular for the r > 0 SC phase which, in agreement with conclusions drawn from recent NRG work, may be viewed as a non-trivial but natural generalization of Fermi liquid physics. We also reinvestigate the problem via the NRG in light of the predictions arising from the LMA: all are borne out and excellent agreement is found. For the asymmetric single impurity Anderson model (ASIAM) we establish general conditions which must be satisfied

  9. Theoretical study on optical model potential

    International Nuclear Information System (INIS)

    Lim Hung Gi.

    1984-08-01

    The optical model potential of non-local effect on the rounded edge of the potential is derived. On the basis of this potential the functional form of the optical model potential, the energy dependence and relationship of its parameters, and the dependency of the values of the parameters on energy change are shown in this paper. (author)

  10. Clinton River Sediment Transport Modeling Study

    Science.gov (United States)

    The U.S. Army Corps of Engineers develops sediment transport models for tributaries to the Great Lakes that discharge to Areas of Concern (AOCs). The models help State and local agencies evaluate better approaches to soil conservation and non-point source pollution prevention.

  11. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  12. Glistening-region model for multipath studies

    Science.gov (United States)

    Groves, Gordon W.; Chow, Winston C.

    1998-07-01

    The goal is to achieve a model of radar sea reflection with improved fidelity that is amenable to practical implementation. The geometry of reflection from a wavy surface is formulated. The sea surface is divided into two components: the smooth `chop' consisting of the longer wavelengths, and the `roughness' of the short wavelengths. Ordinary geometric reflection from the chop surface is broadened by the roughness. This same representation serves both for forward scatter and backscatter (sea clutter). The `Road-to-Happiness' approximation, in which the mean sea surface is assumed cylindrical, simplifies the reflection geometry for low-elevation targets. The effect of surface roughness is assumed to make the sea reflection coefficient depend on the `Deviation Angle' between the specular and the scattering directions. The `specular' direction is that into which energy would be reflected by a perfectly smooth facet. Assuming that the ocean waves are linear and random allows use of Gaussian statistics, greatly simplifying the formulation by allowing representation of the sea chop by three parameters. An approximation of `low waves' and retention of the sea-chop slope components only through second order provides further simplification. The simplifying assumptions make it possible to take the predicted 2D ocean wave spectrum into account in the calculation of sea-surface radar reflectivity, and to provide algorithms for support of an operational system for dealing with target tracking in the presence of multipath. The product will be of use in simulation studies to evaluate trade-offs between alternative tracking schemes, and will form the basis of a tactical system for ship defense against low flyers.

  13. Study on Standard Fatigue Vehicle Load Model

    Science.gov (United States)

    Huang, H. Y.; Zhang, J. P.; Li, Y. H.

    2018-02-01

    Based on the measured truck data from three artery expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model, applicable to regions in the middle and late stages of industrialization, was obtained using the equivalence damage principle, Miner's linear accumulation law, the water discharge method and damage ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability. It is of certain reference value for the fatigue design of bridges in China.
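    Under Miner's linear accumulation law, a measured truck-weight spectrum collapses to a single fatigue-equivalent weight via a power mean, W_eq = (Σ n_i W_i^m / Σ n_i)^(1/m). The sketch below illustrates this with an invented weight spectrum; the damage exponent m = 3 (a typical S-N slope for steel details) is an assumption, not the value calibrated in the study.

```python
def equivalent_vehicle_weight(weights, counts, m=3.0):
    """Fatigue-equivalent vehicle weight under Miner's linear damage rule:
    W_eq = (sum(n_i * W_i^m) / sum(n_i)) ** (1/m).
    The exponent m = 3 is an assumed typical S-N slope, not a fitted value."""
    total = sum(counts)
    return (sum(n * w ** m for w, n in zip(weights, counts)) / total) ** (1.0 / m)

# Made-up truck gross-weight spectrum (kN) and observed counts
w_eq = equivalent_vehicle_weight([100.0, 200.0, 300.0, 450.0], [500, 300, 150, 50])
```

    Because damage grows with the m-th power of load, W_eq always exceeds the plain arithmetic mean of the spectrum: a few heavy trucks dominate the fatigue damage.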

  14. Amorphous track models: a numerical comparison study

    DEFF Research Database (Denmark)

    Greilich, Steffen; Grzanka, Leszek; Hahn, Ute

    Amorphous track models such as Katz' Ion-Gamma-Kill (IGK) approach [1, 2] or the Local Effect Model (LEM) [3, 4] had reasonable success in predicting the response of solid state dosimeters and radiobiological systems. LEM is currently applied in radiotherapy for biological dose optimization in carbon ion treatment at the particle facility HIT in Heidelberg. Apparent differences between the LEM and the Katz model are the way interactions of individual particle tracks and extended targets are handled. Complex scenarios, however, can mask the actual effect of these differences. Here, we...

  15. Pulse radiolysis studies of model membranes

    International Nuclear Information System (INIS)

    Heijman, M.G.J.

    1984-01-01

    In this thesis the influence of membrane structure on the processes in cell membranes was examined. Different membrane models were evaluated. Pulse radiolysis was used as the technique to examine the membranes. (R.B.)

  16. Lower Monumental Spillway Hydraulic Model Study

    National Research Council Canada - National Science Library

    Wilhelms, Steven

    2003-01-01

    A 1:40 Froudian Scale model was used to investigate the hydraulic performance of the Lower Monumental Dam spillway, stilling basin, and tailrace for dissolved gas reduction and stilling basin apron scour...

  17. A mathematical model study of suspended monorail

    OpenAIRE

    Viktor GUTAREVYCH

    2012-01-01

    The mathematical model of suspended monorail track with allowance for elastic strain which occurs during movement of the monorail carriage was developed. Standard forms for single span and double span of suspended monorail sections were established.

  18. A mathematical model study of suspended monorail

    Directory of Open Access Journals (Sweden)

    Viktor GUTAREVYCH

    2012-01-01

    The mathematical model of suspended monorail track with allowance for elastic strain which occurs during movement of the monorail carriage was developed. Standard forms for single span and double span of suspended monorail sections were established.

  19. STRESS RESPONSE STUDIES USING ANIMAL MODELS

    Science.gov (United States)

    This presentation will provide the evidence that ozone exposure in animal models induces a neuroendocrine stress response and that this stress response modulates lung injury and inflammation through adrenergic and glucocorticoid receptors.

  20. Neuronal Models for Studying Tau Pathology

    Directory of Open Access Journals (Sweden)

    Thorsten Koechling

    2010-01-01

    Alzheimer's disease (AD) is the most frequent neurodegenerative disorder leading to dementia in the aged human population. It is characterized by the presence of two main pathological hallmarks in the brain: senile plaques containing β-amyloid peptide and neurofibrillary tangles (NFTs), consisting of fibrillar polymers of abnormally phosphorylated tau protein. Both of these histological characteristics of the disease have been simulated in genetically modified animals, which today include numerous mouse, fish, worm, and fly models of AD. The objective of this review is to present some of the main animal models that exist for reproducing symptoms of the disorder and their advantages and shortcomings as suitable models of the pathological processes. Moreover, we will discuss the results and conclusions which have been drawn from the use of these models so far and their contribution to the development of therapeutic applications for AD.

  1. A micromagnetic study of domain structure modeling

    International Nuclear Information System (INIS)

    Matsuo, Tetsuji; Mimuro, Naoki; Shimasaki, Masaaki

    2008-01-01

    To develop a mesoscopic model for magnetic-domain behavior, a domain structure model (DSM) was examined and compared with a micromagnetic simulation. The domain structure of this model is given by several domains with uniform magnetization vectors and domain walls. The directions of magnetization vectors and the locations of domain walls are determined so as to minimize the total magnetic energy of the magnetic material. The DSM was modified to improve its representation capability for domain behavior. The domain wall energy is multiplied by a vanishing factor to represent the disappearance of a magnetic domain. The sequential quadratic programming procedure is divided into two steps to improve the energy minimization process. A comparison with micromagnetic simulation shows that the modified DSM improves the representation accuracy of the magnetization process.

  2. Geomagnetic field models for satellite angular motion studies

    Science.gov (United States)

    Ovchinnikov, M. Yu.; Penkov, V. I.; Roldugin, D. S.; Pichuzhkina, A. V.

    2018-03-01

    Four geomagnetic field models are discussed: IGRF, inclined, direct and simplified dipoles. Geomagnetic induction vector expressions are provided in different reference frames. Induction vector behavior is compared for the different models. The models' applicability to the analysis of satellite motion is studied from theoretical and engineering perspectives. Relevant satellite dynamics analysis cases using analytical and numerical techniques are provided. These cases demonstrate the benefit of a certain model for a specific dynamics study. Recommendations for model usage are summarized at the end.
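For the dipole-class models discussed here, the induction vector follows the standard point-dipole formula B = (μ0/4π)(3(m·r̂)r̂ − m)/r³. A minimal sketch of a direct-dipole evaluation; the dipole moment value and the sample point are assumed for illustration and are not taken from the paper:

```python
import numpy as np

MU0_OVER_4PI = 1e-7   # T*m/A
M_EARTH = 7.94e22     # A*m^2, approximate Earth dipole moment (assumed value)

def dipole_field(r_vec, m_vec):
    """Magnetic induction B of a point dipole m evaluated at position r."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0_OVER_4PI * (3.0 * np.dot(m_vec, r_hat) * r_hat - m_vec) / r**3

# Direct-dipole style setup: moment anti-parallel to the rotation axis (+z).
m_vec = np.array([0.0, 0.0, -M_EARTH])
r_equator = np.array([6.371e6, 0.0, 0.0])  # point on the equator, one Earth radius
B = dipole_field(r_equator, m_vec)         # roughly 31 microtesla, pointing +z
```

On the magnetic equator m·r̂ vanishes, so the field reduces to −(μ0/4π) m/r³, which is the quick sanity check used here.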

  3. Model Servqual Dengan Pendekatan Structural Equation Modeling (Studi Pada Mahasiswa Sistem Informasi)

    OpenAIRE

    Nurfaizal, Yusmedi

    2015-01-01

    This study is entitled "Model Servqual Dengan Pendekatan Structural Equation Modeling (Studi Pada Mahasiswa Sistem Informasi)" [The Servqual Model with a Structural Equation Modeling Approach (A Study of Information Systems Students)]. Its aim is to examine the Servqual model with a Structural Equation Modeling approach among information systems students. The researcher decided to take a sample of 100 respondents. SEM analysis was used to test the model. The results show that tangibility, reliability, responsiveness, assurance and empathy have an influence...

  4. Study on geo-information modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana

    2006-01-01

    Vol. 5, No. 5 (2006), p. 1108-1113 ISSN 1109-2777 Institutional research plan: CEZ:AV0Z10750506 Keywords : control GIS * geo-information modelling * uncertainty * spatial temporal approach * Web Services Subject RIV: BC - Control Systems Theory

  5. Studies on chemoviscosity modeling for thermosetting resins

    Science.gov (United States)

    Bai, J. M.; Hou, T. H.; Tiwari, S. N.

    1987-01-01

    A new analytical model for simulating chemoviscosity of thermosetting resins has been formulated. The model is developed by modifying the well-established Williams-Landel-Ferry (WLF) theory in polymer rheology for thermoplastic materials. By introducing a relationship between the glass transition temperature Tg(t) and the degree of cure alpha(t) of the resin system under cure, the WLF theory can be modified to account for the factor of reaction time. Temperature-dependent functions of the modified WLF theory constants C1(t) and C2(t) were determined from the isothermal cure data. Theoretical predictions of the model for the resin under dynamic heating cure cycles were shown to compare favorably with the experimental data. This work represents progress toward establishing a chemoviscosity model which is capable of not only describing viscosity profiles accurately under various cure cycles, but also correlating viscosity data to the changes of physical properties associated with the structural transformation of the thermosetting resin systems during cure.
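The modification described above can be sketched directly from the WLF equation, log10(eta(T)/eta(Tg)) = -C1 (T - Tg) / (C2 + (T - Tg)), by letting Tg depend on the degree of cure. A hedged illustration only: the linear Tg(alpha) relation and all numerical constants below are invented for the example, not taken from the paper, which determines C1(t) and C2(t) from isothermal cure data.

```python
def wlf_viscosity(T, Tg, eta_g, C1, C2):
    """WLF viscosity at temperature T relative to the viscosity eta_g at Tg:
    log10(eta(T)/eta(Tg)) = -C1 (T - Tg) / (C2 + (T - Tg)).
    """
    return eta_g * 10.0 ** (-C1 * (T - Tg) / (C2 + (T - Tg)))

def chemoviscosity(T, alpha, eta_g, C1, C2, Tg0=280.0, k=50.0):
    """Chemoviscosity sketch: the glass transition temperature rises with the
    degree of cure alpha. Tg(alpha) = Tg0 + k * alpha is a hypothetical linear
    relation chosen only to make the qualitative behavior visible.
    """
    Tg = Tg0 + k * alpha          # cure advances -> Tg approaches the cure temperature
    return wlf_viscosity(T, Tg, eta_g, C1, C2)
```

Holding the cure temperature fixed while alpha increases moves Tg toward T, so the predicted viscosity rises as the resin cures, which is the qualitative behavior the modified WLF model captures.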

  6. A Study of Simple Diffraction Models

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1997-01-01

    Three different models for calculating edge diffraction are examined. The methods of Vanderkooy, Terai and Biot & Tolstoy are compared with measurements. Although a good agreement is obtained, the measurements also show that none of the methods work completely satisfactorily. The desired properties...

  7. Preliminary report on electromagnetic model studies

    Science.gov (United States)

    Frischknecht, F.C.; Mangan, G.B.

    1960-01-01

    More than 70 response curves for various models have been obtained using the slingram and turam electromagnetic methods. Results show that for the slingram method, horizontal co-planar coils are usually more sensitive than vertical co-axial or vertical co-planar coils. The shape of the anomaly usually is simpler for the vertical coils.

  8. Animal models to study plaque vulnerability

    NARCIS (Netherlands)

    Schapira, K.; Heeneman, S.; Daemen, M. J. A. P.

    2007-01-01

    The need to identify and characterize vulnerable atherosclerotic lesions in humans has lead to the development of various animal models of plaque vulnerability. In this review, current concepts of the vulnerable plaque as it leads to an acute coronary event are described, such as plaque rupture,

  9. Modelling the dynamics of total precipitation and aboveground net primary production of fescue-feather grass steppe at Askania Nova according to global climate change scenarios

    Directory of Open Access Journals (Sweden)

    S. O. Belyakov

    2017-01-01

    were found to be 350–400 mm during AWSP with ANPP at 350 g/m2. On the basis of the regression model and current forecasts of changes in precipitation rates we made a forecast of net primary production of plant communities for four climate change scenarios (RCP2.6, RCP4.5, RCP6, and RCP8.5) described in the Fifth Assessment of the Intergovernmental Panel on Climate Change (IPCC). For this purpose, bioclimate projections of 10 major climate models (the Community Climate System Model Version 4 (CCSM4), GISS-E2-R, HadGEM2-AO, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM, MIROC-ESM, MIROC5, MRI-CGCM3, NorESM1-M) used for preparation of the IPCC report were analyzed and imported into the geographical information system package QGIS. QGIS modelling software was used for geoanalysis and calculation of GIS-layers for Askania-Nova and adjacent arid grasslands. The results of modelling with the 10 climate models were compared and analyzed for each of the four IPCC scenarios, depending on predicted CO2 levels. The presented modelling results showed a trend to growth in AWSP precipitation and NPP for all scenarios up to 2040–2060. The scenarios RCP2.6, RCP4.5 and RCP6 predicted the optimum precipitation zone for current plant diversity for the period of 2040–2060, and scenario RCP8.5 predicted an optimum zone peak after 2080. The study confirmed the importance of monitoring the productivity of herbaceous communities in dry steppe ecosystems of Ukraine.

  10. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  11. Study on developing energy-macro model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Duk [Korea Energy Economics Institute, Euiwang (Korea, Republic of)]

    1999-12-01

    This study analyzed the effect of international oil prices on the domestic economy, including the path, time lag and magnitude of the effect. First, it examined whether a long-term relationship exists between international oil prices and domestic prices, focusing on cointegration, and estimated dynamic price fluctuations using an error correction model. In addition, a structural VAR model was used to analyze the shock responses of domestic macroeconomic variables to an increase in international oil prices. The estimates indicate that prices rise in the long term rather than the short term as international oil prices increase. When international oil prices rise, the short-term effect is estimated to be insignificant because of direct price controls by the government, and the spreading effect on the economy then appears over the long term as the price controls deepen. (author). 16 refs., 3 figs., 10 tabs.

  12. Study on modeling of operator's learning mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seichi; Hasegawa, Naoko

    1998-01-01

    One effective method of analyzing the causes of human errors is to model human behavior and simulate it. The Central Research Institute of Electric Power Industry (CRIEPI) has developed an operator team behavior simulation system called SYBORG (Simulation System for the Behavior of an Operating Group) to analyze human errors and to establish countermeasures for them. As the operator behavior model that composes SYBORG has no learning mechanism and its knowledge of the plant is fixed, it cannot take suitable actions when unknown situations occur, nor learn anything from the experience. Considering actual operators, however, learning is an essential human factor in enhancing their ability to diagnose plant anomalies. In this paper, Q learning with 1/f fluctuation was proposed as a learning mechanism for an operator, and a simulation using the mechanism was conducted. The results showed the effectiveness of the learning mechanism. (author)
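The proposed mechanism builds on the standard tabular Q-learning update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). A toy sketch of that update follows; the paper's 1/f-fluctuation-driven exploration is replaced here by plain epsilon-greedy exploration, and the toy "diagnosis" task is invented for illustration:

```python
import random

def q_learning(transitions, n_states, n_actions, episodes=2000,
               alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning sketch. `transitions[s][a]` -> (next_state, reward, done).

    Exploration here is plain epsilon-greedy, standing in for the paper's
    1/f-fluctuation-driven exploration.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(n_actions)                       # explore
            else:
                a = max(range(n_actions), key=lambda a_: Q[s][a_])  # exploit
            s2, r, done = transitions[s][a]
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])  # temporal-difference update
            s = s2
    return Q

# Toy one-step task: from state 0, action 1 is the correct "diagnosis" (reward 1).
T = {0: {0: (1, 0.0, True),    # wrong action: terminal, no reward
         1: (2, 1.0, True)}}   # right action: terminal, reward 1
Q = q_learning(T, n_states=3, n_actions=2)
```

After training, the learned value of the rewarded action dominates, which is the sense in which the operator model "learns" from experience.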

  13. Contaminant transport modeling studies of Russian sites

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu

    1993-01-01

    Lawrence Berkeley Laboratory (LBL) established mechanisms that promoted cooperation between U.S. and Russian scientists in scientific research as well as environmental technology transfer. Using Russian experience and U.S. technology, LBL developed approaches for field investigations, site evaluation, waste disposal, and remediation at Russian contaminated sites. LBL assessed a comprehensive database as well as an actual, large-scale contaminated site to evaluate existing knowledge of and test mathematical models used for the assessment of U.S. contaminated sites.

  14. A flexible climate model for use in integrated assessments

    Science.gov (United States)

    Sokolov, A. P.; Stone, P. H.

    Because of significant uncertainty in the behavior of the climate system, evaluations of the possible impact of an increase in greenhouse gas concentrations in the atmosphere require a large number of long-term climate simulations. Studies of this kind are impossible to carry out with coupled atmosphere ocean general circulation models (AOGCMs) because of their tremendous computer resource requirements. Here we describe a two dimensional (zonally averaged) atmospheric model coupled with a diffusive ocean model developed for use in the integrated framework of the Massachusetts Institute of Technology (MIT) Joint Program on the Science and Policy of Global Change. The 2-D model has been developed from the Goddard Institute for Space Studies (GISS) GCM and includes parametrizations of all the main physical processes. This allows it to reproduce many of the nonlinear interactions occurring in simulations with GCMs. Comparisons of the results of present-day climate simulations with observations show that the model reasonably reproduces the main features of the zonally averaged atmospheric structure and circulation. The model's sensitivity can be varied by changing the magnitude of an inserted additional cloud feedback. Equilibrium responses of different versions of the 2-D model to an instantaneous doubling of atmospheric CO2 are compared with results of similar simulations with different AGCMs. It is shown that the additional cloud feedback does not lead to any physically inconsistent results. On the contrary, changes in climate variables such as precipitation and evaporation, and their dependencies on surface warming produced by different versions of the MIT 2-D model are similar to those shown by GCMs. 
By choosing appropriate values of the deep ocean diffusion coefficients, the transient behavior of different AOGCMs can be matched in simulations with the 2-D model, with a unique choice of diffusion coefficients allowing one to match the performance of a given AOGCM.
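The role of the diffusive ocean can be illustrated with a one-dimensional column in which vertical diffusion of the temperature anomaly, plus a linear surface feedback, sets the transient response; tuning the diffusivity is the knob that mimics different AOGCMs' heat uptake. All parameter values below are illustrative assumptions, not those of the MIT 2-D model:

```python
import numpy as np

def run_diffusive_ocean(years=200, n=20, dz=50.0, kappa=1e-4,
                        forcing=4.0, lam_fb=1.2, rho_c=4.1e6):
    """Explicit 1-D diffusive ocean column under constant radiative forcing.

    Surface heat flux = forcing - lam_fb * T_surface (simple linear feedback);
    kappa (m^2/s) controls how fast heat diffuses downward. Returns the
    temperature-anomaly profile (K), index 0 = surface.
    """
    dt = 30 * 86400.0              # one-month step; satisfies the explicit CFL limit here
    c = kappa * dt / dz**2         # diffusion number (~0.1 with these values)
    T = np.zeros(n)
    for _ in range(years * 12):
        Tn = T.copy()
        Tn[1:-1] += c * (T[:-2] - 2.0 * T[1:-1] + T[2:])              # interior diffusion
        Tn[0]    += c * (T[1] - T[0]) \
                    + (forcing - lam_fb * T[0]) * dt / (rho_c * dz)   # surface forcing
        Tn[-1]   += c * (T[-2] - T[-1])                               # insulated bottom
        T = Tn
    return T

T = run_diffusive_ocean()
# Surface warms toward forcing/lam_fb while the deep layers lag behind.
```

Larger kappa drags more heat into the deep ocean and slows the surface warming, which is exactly the behavior matched against AOGCM transients.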

  15. Evaluation of Extratropical Cyclone Precipitation in the North Atlantic Basin: An analysis of ERA-Interim, WRF, and two CMIP5 models.

    Science.gov (United States)

    Booth, James F; Naud, Catherine M; Willison, Jeff

    2018-03-01

    The representation of extratropical cyclones (ETCs) precipitation in general circulation models (GCMs) and a weather research and forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength and tests if parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF with two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs based on surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, and this is true if the contribution from the parameterized convection scheme is larger or not. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and these differences do not cause unexpected behavior in ETC precipitation sensitivity to cyclone moisture or surface wind speed.

  16. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  17. Computerised modelling for developmental biology : an exploration with case studies

    NARCIS (Netherlands)

    Bertens, Laura M.F.

    2012-01-01

    Many studies in developmental biology rely on the construction and analysis of models. This research presents a broad view of modelling approaches for developmental biology, with a focus on computational methods. An overview of modelling techniques is given, followed by several case studies. Using

  18. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    This study compared the in-sample forecasting accuracy of three nonlinear forecasting models: the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the model selection criterion SBC to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures for evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the LSTR and TAR models in most cases.
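The three error measures used to rank the models are simple functions of the forecast errors. A short sketch; the price series and the two fitted series below are invented for illustration, and the model labels are hypothetical:

```python
import math

def forecast_errors(actual, predicted):
    """MSE, MAE and RMSE of a forecast against the observed series."""
    errs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errs) / len(errs)
    mae = sum(abs(e) for e in errs) / len(errs)
    return {"MSE": mse, "MAE": mae, "RMSE": math.sqrt(mse)}

# Hypothetical in-sample fits from two models of the same price series:
actual  = [10.0, 10.5, 10.2, 10.8]
model_a = [10.1, 10.4, 10.3, 10.6]   # e.g. an MS-AR fit
model_b = [10.4, 10.1, 10.6, 10.4]   # e.g. a TAR fit
ea = forecast_errors(actual, model_a)
eb = forecast_errors(actual, model_b)
# Lower values on all three measures favour model A here.
```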

  19. Comparison of conventional study model measurements and 3D digital study model measurements from laser scanned dental impressions

    Science.gov (United States)

    Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.

    2017-08-01

    The field of orthodontics is always evolving, and this includes the use of innovative technology. One such development is the three-dimensional (3D) digital study model, which replaces conventional study models made of dental stone. This study aims to compare mesio-distal tooth width, intercanine width, and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper arch dental impressions were taken from subjects with non-crowding teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study models, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study models. Scanning was performed using a laser triangulation scanner device assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional models, the mesio-distal width, intercanine width, and intermolar width were measured using digital calipers; in the 3D digital study models they were measured using software. There were no significant differences in the mesio-distal width, intercanine width, and intermolar width measurements between the conventional and 3D digital study models (p>0.05). Thus, measurements using 3D digital study models are as accurate as those obtained from conventional study models.

  20. Digital Forensic Investigation Models, an Evolution study

    Directory of Open Access Journals (Sweden)

    Khuram Mushtaque

    2015-10-01

    In business today, one of the most important factors that enables any business to gain competitive advantage is the appropriate, effective adoption of Information Technology and its subsequent management and governance. To govern IT, organizations need to recognize the value of engaging the services of forensic firms to counter cyber criminals. Digital forensic firms follow different mechanisms to perform investigations. Over time, forensic firms have been provided with different investigation models containing phases for the different purposes of the entire process. Along with forensic firms, enterprises also need to build a secure and supportive platform to make a successful investigation process possible. We have underlined different elements of organizations in Pakistan that need to be addressed to provide support to forensic firms.

  1. Studies and modeling of cold neutron sources

    International Nuclear Information System (INIS)

    Campioni, G.

    2004-11-01

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis has followed the 3 following axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, a decoupling module, integrated in the official version of Tripoli-4, can perform Monte-Carlo parametric studies with CPU time savings of up to a factor of 50. A coupling module simulating neutron guides has also been developed and implemented in the Monte-Carlo code McStas. Thirdly, a complete study was carried out to validate the installed calculation chain. These studies focus on 3 cold sources currently in operation: SP1 of the Orphee reactor and 2 other sources (SFH and SFV) of the HFR at the Laue Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  2. A model system for DNA repair studies

    International Nuclear Information System (INIS)

    Lange, C.S.; Perlmutter, E.

    1984-01-01

    The search for the "lethal lesion," which would yield a molecular explanation of biological survival curves, led to attempts to correlate unrepaired DNA lesions with loss of reproductive integrity. Such studies have shown the crucial importance of DNA repair systems. The unrepaired DSB has been sought for such correlation, but in such studies the DNA was too large, polydisperse, and/or structurally complex to permit precise measurement of break induction and repair. Therefore, an analog of higher order systems, but with a genome of readily measurable size, is needed. Bacteriophage T4 is such an analog. Both its biological (PFU) and molecular (DNA) survival curves are exponentials. Its aerobic PFU D37/DNA D37 ratio (410 ± 4.5 Gy / 540 ± 25 Gy) indicates that 76 ± 4% of lethality is attributable to the DSB. At low multiplicity of infection (moi < 1) the survival is greater than can be explained if the assumption of no parental DSB repair were valid. Both T4 and its host have DSB repair systems which can be studied by the infectious center method. Results of such studies are discussed.

  3. Alternative Middle School Models: An Exploratory Study

    Science.gov (United States)

    Duffield, Stacy Kay

    2018-01-01

    A Midwestern state allocated grant funding to encourage more accessible alternative programming at the middle level. Seventeen schools were approved for this grant and used the funds to supplement the operation of a new or existing program. This study provides policymakers and educators with an overview of the various types of alternative middle…

  4. Flow model study of 'Monju' reactor vessel

    International Nuclear Information System (INIS)

    Miyaguchi, Kimihide

    1980-01-01

    In the case of designing the structures in nuclear reactors, various problems to be considered regarding thermo-hydrodynamics exist, such as the distribution of flow quantity and the pressure loss in reactors and the thermal shock to inlet and outlet nozzles. In order to grasp the flow characteristics of coolant in reactors, the 1/2 scale model of the reactor structure of ''Monju'' was attached to the water flow testing facility in the Oarai Engineering Center, and the simulation experiment has been carried out. The flow characteristics in reactors clarified by experiment and analysis so far are the distribution of flow quantity between high and low pressure regions in reactors, the distribution of flow quantity among flow zones in respective regions of high and low pressure, the pressure loss in respective parts in reactors, the flow pattern and the mixing effect of coolant in upper and lower plenums, the effect of the twisting angle of inlet nozzles on the flow characteristics in lower plenums, the effect of internal cylinders on the flow characteristics in upper plenums and so on. On the basis of these test results, the improvement of the design of structures in reactors was made, and the confirmation test on the improved structures was carried out. The testing method, the calculation method, the test results and the reflection to the design of actual machines are described. (Kako, I.)

  5. Theory, modeling, and integrated studies in the Arase (ERG) project

    Science.gov (United States)

    Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa

    2018-02-01

    Understanding the underlying mechanisms of drastic variations of the near-Earth space (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand geospace variations with a focus on relativistic electron acceleration and loss processes. In order to achieve this goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project, together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, the GEMSIS-RB and RBW models, the CIMI model with the global MHD simulation REPPU, the GEMSIS-RC model, the plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.

  6. Mathematical Modelling Research in Turkey: A Content Analysis Study

    Science.gov (United States)

    Çelik, H. Coskun

    2017-01-01

    The aim of the present study was to examine the mathematical modelling studies done between 2004 and 2015 in Turkey and to reveal their tendencies. Forty-nine studies were selected using purposeful sampling based on the term, "mathematical modelling" with Higher Education Academic Search Engine. They were analyzed with content analysis.…

  7. Fostering Transfer of Study Strategies: A Spiral Model.

    Science.gov (United States)

    Davis, Denise M.; Clery, Carolsue

    1994-01-01

    Describes the design and implementation of a Spiral Model for the introduction and repeated practice of study strategies, based on Taba's model for social studies. In a college reading and studies strategies course, key strategies were introduced early and used through several sets of humanities and social and physical sciences readings. (Contains…

  8. Bayesian Graphical Models for Genomewide Association Studies

    OpenAIRE

    Verzilli, Claudio J.; Stallard, Nigel; Whittaker, John C.

    2006-01-01

    As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being ...

  9. [Statistical modeling studies of turbulent reacting flows]

    International Nuclear Information System (INIS)

    Dwyer, H.A.

    1987-01-01

    This paper discusses the study of turbulent wall shear flows, and we feel that this problem is both more difficult and a better challenge for the new methods we are developing. Turbulent wall flows have a wide variety of length and time scales which interact with the transport processes to produce very large fluxes of mass, heat, and momentum. At the present time we have completed the first calculation of a wall diffusion flame, and we have begun a velocity PDF calculation for the flat plate boundary layer. A summary of the various activities is contained in this report

  10. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed through a physical modeling process using dynamic modeling theory. The result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed. The experimental results are gathered and are available to be analysed. Depending on the aim of the research, an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to fulfil than those for a failure model, but the results obtained provide narrower information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of large- or medium-size physical models and their instrumentation offers great advantages, but this operation involves a large amount of financial, logistic and time resources.

  11. System Dynamic Modelling for a Balanced Scorecard: A Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen; Nielsen, Erland Hejn

    Purpose - The purpose of this research is to make an analytical model of the BSC foundation by using a dynamic simulation approach for a 'hypothetical case' model, based on only part of an actual case study of BSC. Design/methodology/approach - The model includes five perspectives and a number...

  12. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    been applied to the Cochin estuary in the present study to identify the most suitable model for predicting the salt intrusion length. Comparison of the obtained results indicate that the model of Van der Burgh (1972) is the most suitable empirical model...
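Although the abstract is truncated, a minimal sketch can illustrate the kind of calculation such empirical salt-intrusion models perform: solve an idealised salinity profile for the distance at which salinity drops to a chosen threshold. The exponential profile, the 2 psu limit and all parameter values below are illustrative assumptions, not Van der Burgh's (1972) actual formulation.

```python
import math

def intrusion_length(s_mouth, s_river, s_threshold, decay_km):
    """Distance upstream (km) at which salinity falls to s_threshold for an
    idealised exponential profile:
    S(x) = s_river + (s_mouth - s_river) * exp(-x / decay_km)."""
    return decay_km * math.log((s_mouth - s_river) / (s_threshold - s_river))

# Illustrative values: 32 psu seawater at the mouth, 0.1 psu river water,
# the 2 psu isohaline taken as the intrusion limit, 8 km e-folding length
L_intrusion = intrusion_length(s_mouth=32.0, s_river=0.1,
                               s_threshold=2.0, decay_km=8.0)
```

With these assumed numbers the intrusion length comes out near 22.6 km; real empirical models relate the decay scale to river discharge and tidal forcing rather than fixing it.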

  13. Adolescents' Family Models: A Cross-Cultural Study

    OpenAIRE

    Mayer, Boris

    2009-01-01

    This study explores and compares the family models of adolescents across ten cultures using a typological and multilevel approach. Thereby, it aims to empirically contribute to Kagitcibasi's (2007) theory of family change. This theory postulates the existence of three ideal-typical family models across cultures: a family model of independence prevailing in Western societies, a family model of (total) interdependence prevailing in non-industrialized agrarian cultures, and as a synthesis of the...

  14. Conducting field studies for testing pesticide leaching models

    Science.gov (United States)

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.

  15. Pulse radiolysis studies in model lipid systems

    International Nuclear Information System (INIS)

    Patterson, L.K.; Hasegawa, K.

    1978-01-01

    The kinetic and spectral behavior of radicals formed by hydroxyl radical attack on linoleate anions has been studied by pulse radiolysis. Reactivity of OH toward this surfactant is an order of magnitude greater in monomeric form (k(OH + linoleate) = 8.0 x 10^9 M^-1 sec^-1) than in micellar form (k(OH + linoleate(micelle)) = 1.0 x 10^9 M^-1 sec^-1). Abstraction of a hydrogen atom from the doubly allylic position gives rise to an intense absorption in the UV region (lambda max = 282-286 nm, epsilon approximately 3 x 10^4 M^-1 cm^-1) which may be used as a probe of radical activity at that site. This abstraction may occur, to a small extent, directly via OH attack. However, greater than 90% of initial attack occurs at other sites. Subsequent secondary abstraction of doubly allylic H atoms appears to occur predominantly by: (1) intramolecular processes in monomers, (2) intermolecular processes in micelles. Disappearance of radicals by secondary processes is slower in the micellar pseudo-phase than in monomeric solution. (orig.)
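The second-order rate constants quoted in the abstract translate directly into pseudo-first-order OH lifetimes once a surfactant concentration is fixed. The sketch below uses the abstract's two rate constants; the 1 mM linoleate concentration is an assumed illustration, not a value from the study.

```python
import math

# Second-order rate constants from the abstract (M^-1 s^-1)
K_MONOMER = 8.0e9   # OH + linoleate, monomeric form
K_MICELLE = 1.0e9   # OH + linoleate, micellar form

def oh_half_life(k_second_order, surfactant_conc_M):
    """Pseudo-first-order half-life of OH toward the surfactant:
    k' = k2 * [S],  t_1/2 = ln 2 / k'."""
    return math.log(2) / (k_second_order * surfactant_conc_M)

# At an assumed 1 mM linoleate, OH is consumed ~8x faster by monomers
t_mono = oh_half_life(K_MONOMER, 1e-3)
t_mic = oh_half_life(K_MICELLE, 1e-3)
```

The order-of-magnitude difference in rate constants appears directly as the ratio of the two half-lives.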

  16. Mobile radio alternative systems study traffic model

    Science.gov (United States)

    Tucker, W. T.; Anderson, R. E.

    1983-06-01

    The markets for mobile radio services in non-urban areas of the United States are examined for the years 1985-2000. Three market categories are identified. New Services are defined as those for which needs have been expressed but which are not now met by any application of available technology. The complete fulfillment of these needs requires nationwide radio access to vehicles without knowledge of vehicle location, wideband data transmission from remote sites, one- and two-way exchange of short data and control messages between vehicles and dispatch or control centers, and automatic vehicle location (surveillance). The commercial and public services market of interest to the study is drawn from existing users of mobile radio in non-urban areas who are dissatisfied with the geographical range or coverage of their systems. The mobile radio telephone market comprises potential users who require access to the public switched telephone network in areas that are not likely to be served by the traditional growth patterns of terrestrial mobile telephone services. Conservative, likely, and optimistic estimates of the markets are presented in terms of numbers of vehicles that will be served and the radio traffic they will generate.

  17. Process modeling for the Integrated Nonthermal Treatment System (INTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.W.

    1997-04-01

    This report describes the process modeling done in support of the Integrated Nonthermal Treatment System (INTS) study. This study was performed to supplement the Integrated Thermal Treatment System (ITTS) study and comprises five conceptual treatment systems that treat DOE contact-handled mixed low-level wastes (MLLW) at temperatures of less than 350°F. ASPEN PLUS, a chemical process simulator, was used to model the systems. Nonthermal treatment systems were developed as part of the INTS study and include sufficient processing steps to treat the entire inventory of MLLW. The final result of the modeling is a process flowsheet with a detailed mass and energy balance. In contrast to the ITTS study, which modeled only the main treatment system, the INTS study modeled each of the various processing steps with ASPEN PLUS, release 9.1-1. Trace constituents, such as radionuclides and minor pollutant species, were not included in the calculations.

  18. An Integrated Approach to Mathematical Modeling: A Classroom Study.

    Science.gov (United States)

    Doerr, Helen M.

    Modeling, simulation, and discrete mathematics have all been identified by professional mathematics education organizations as important areas for secondary school study. This classroom study focused on the components and tools for modeling and how students use these tools to construct their understanding of contextual problems in the content area…

  19. Evaluating EML Modeling Tools for Insurance Purposes: A Case Study

    Directory of Open Access Journals (Sweden)

    Mikael Gustavsson

    2010-01-01

    Full Text Available As with any situation that involves economic risk, refineries may share their risk with insurers. The decision process generally includes modelling to determine the extent to which the process area can be damaged. At the extreme end of modelling, the so-called Estimated Maximum Loss (EML) scenarios are found. These scenarios predict the maximum loss a particular installation can sustain. Unfortunately, no standard model for this exists, so insurers reach different results by applying different models and different assumptions. Therefore, a study has been conducted on a case in a Swedish refinery where several scenarios had previously been modelled by two different insurance brokers using two different software packages, ExTool and SLAM. This study reviews the concept of EML and analyses the models used to see which parameters are most uncertain. A third model, EFFECTS, was also employed in an attempt to reach a conclusion with higher reliability.

  20. A case study of consensus modelling for tracking oil spills

    International Nuclear Information System (INIS)

    King, Brian; Brushett, Ben; Lemckert, Charles

    2010-01-01

    Metocean forecast datasets are essential for the timely response to marine incidents and pollutant spill mitigation at sea. To effectively model the likely drift pattern and the area of impact for a marine spill, both wind and ocean current forecast datasets are required. Two ocean current forecast models and two wind forecast models are currently used operationally in Australia and the Asia-Pacific region. The availability of several different forecast models provides a unique opportunity to compare the outcome of a particular modelling exercise with the outcome of another using a different model, and to determine whether there is consensus in the results. Two recent modelling exercises, the oil spill resulting from the damaged Pacific Adventurer (in Queensland) and the oil spill from the Montara well blowout (in Western Australia), are presented as case studies to examine consensus modelling.
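The consensus idea can be sketched minimally: advect the same spill with two different current forecasts under the same wind, and measure how far apart the predicted endpoints land. The two current fields below are hypothetical, and the 3% wind-drift factor is a common rule of thumb in spill trajectory modelling, not a value taken from the study.

```python
# Minimal flat-earth Lagrangian drift sketch (positions and velocities in km
# and km/h). WIND_DRIFT is the conventional ~3% wind-drift factor.
WIND_DRIFT = 0.03

def advect(pos, current, wind, dt_h):
    """One forward-Euler step of the spill centroid."""
    x, y = pos
    (uc, vc), (uw, vw) = current, wind
    return (x + (uc + WIND_DRIFT * uw) * dt_h,
            y + (vc + WIND_DRIFT * vw) * dt_h)

def track(pos, current_series, wind_series, dt_h=1.0):
    for cur, wnd in zip(current_series, wind_series):
        pos = advect(pos, cur, wnd, dt_h)
    return pos

wind = [(20.0, 0.0)] * 24        # steady 20 km/h wind for 24 h (assumed)
model_a = [(0.5, 0.2)] * 24      # two hypothetical current forecasts
model_b = [(0.7, 0.1)] * 24

end_a = track((0.0, 0.0), model_a, wind)
end_b = track((0.0, 0.0), model_b, wind)
spread = ((end_a[0] - end_b[0])**2 + (end_a[1] - end_b[1])**2) ** 0.5
```

A small `spread` relative to the track length indicates consensus between the forecasts; a large one flags the need for caution in response planning.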

  1. Complete wind farm electromagnetic transient modelling for grid integration studies

    International Nuclear Information System (INIS)

    Zubia, I.; Ostolaza, X.; Susperregui, A.; Tapia, G.

    2009-01-01

    This paper presents a modelling methodology to analyse the impact of wind farms on surrounding networks. Based on the transient modelling of the asynchronous generator, the multi-machine model of a wind farm composed of N generators is developed. The model incorporates step-up power transformers, distribution lines and surrounding loads up to their connection to the power network. This model allows the simulation of symmetric and asymmetric short-circuits located in the distribution network and the analysis of transient stability of wind farms. It can also be used to study the islanding operation of wind farms.

  2. Projected future vegetation changes for the northwest United States and southwest Canada at a fine spatial resolution using a dynamic global vegetation model.

    Science.gov (United States)

    Shafer, Sarah; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.

    2015-01-01

    Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.

  3. Isolated heart models: cardiovascular system studies and technological advances.

    Science.gov (United States)

    Olejnickova, Veronika; Novakova, Marie; Provaznik, Ivo

    2015-07-01

    The isolated heart model is a relevant tool for cardiovascular system studies. It represents a highly reproducible model for studying a broad spectrum of biochemical, physiological, morphological, and pharmaceutical parameters, including analysis of intrinsic heart mechanics, metabolism, and coronary vascular response. Results obtained with this model are free from the influence of other organ systems, plasma concentrations of hormones or ions, and the autonomic nervous system. The review describes various isolated heart models, the modes of heart perfusion, and the advantages and limitations of various experimental setups. It reports the improvements to the Langendorff perfusion setup introduced by the authors.

  4. Modelling and propagation of uncertainties in the German Risk Study

    International Nuclear Information System (INIS)

    Hofer, E.; Krzykacz, B.

    1982-01-01

    Risk assessments are generally subject to uncertainty considerations because of the various estimates that are involved. The paper points out those estimates in so-called Phase A of the German Risk Study for which uncertainties were quantified. It explains the probabilistic models applied in the assessment and their impact on the findings of the study. Finally, the resulting subjective confidence intervals of the study results are presented and their sensitivity to these probabilistic models is investigated.
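Subjective confidence intervals of the kind described are commonly obtained by Monte Carlo propagation: sample the uncertain parameters, re-evaluate the risk measure, and read percentiles off the resulting distribution. The toy sketch below uses invented lognormal failure-probability distributions for a two-component series system, not the study's actual models.

```python
import random

random.seed(42)  # reproducible sampling

def simulate_top_event(n=10_000):
    """Sample the probability of a top event for a series system that fails
    if either of two components fails; each component failure probability
    carries (invented) lognormal parameter uncertainty."""
    results = []
    for _ in range(n):
        p1 = min(1.0, random.lognormvariate(-7.0, 0.8))
        p2 = min(1.0, random.lognormvariate(-8.0, 1.0))
        results.append(p1 + p2 - p1 * p2)  # P(A or B), independent components
    return sorted(results)

samples = simulate_top_event()
lo, med, hi = (samples[int(q * len(samples))] for q in (0.05, 0.50, 0.95))
```

The interval [lo, hi] is the 90% subjective confidence interval of the top-event probability; its width reflects how the input uncertainties propagate through the model.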

  5. Air Pollution Exposure Modeling for Health Studies | Science ...

    Science.gov (United States)

    Dr. Michael Breen is leading the development of air pollution exposure models, integrated with novel personal sensor technologies, to improve exposure and risk assessments for individuals in health studies. He is co-investigator for multiple health studies assessing the exposure and effects of air pollutants. These health studies include participants with asthma, diabetes, and coronary artery disease living in various U.S. cities. He has developed, evaluated, and applied novel exposure modeling and time-activity tools, which include the Exposure Model for Individuals (EMI), the GPS-based Microenvironment Tracker (MicroTrac), and Exposure Tracker models. At this seminar, Dr. Breen will present the development and application of these models to predict individual-level personal exposures to particulate matter (PM) for two health studies in central North Carolina. These health studies examine the association between PM and adverse health outcomes for susceptible individuals. During Dr. Breen's visit, he will also have the opportunity to establish additional collaborations with researchers at Harvard University that may benefit from the use of exposure models for cohort health studies. These research projects that link air pollution exposure with adverse health outcomes benefit EPA by developing model-predicted exposure-dose metrics for individuals in health studies to improve the understanding of exposure-response behavior of air pollutants, and to reduce participant

  6. Injury Based on Its Study in Experimental Models

    Directory of Open Access Journals (Sweden)

    M. Mendes-Braz

    2012-01-01

    Full Text Available The present review focuses on the numerous experimental models used to study the complexity of hepatic ischemia/reperfusion (I/R) injury. Although experimental models of hepatic I/R injury represent a compromise between the clinical reality and experimental simplification, the clinical transfer of experimental results is problematic because of anatomical and physiological differences and the inevitable simplification of experimental work. In this review, the strengths and limitations of the various models of hepatic I/R are discussed. Several strategies to protect the liver from I/R injury have been developed in animal models, and some of these might find their way into clinical practice. We also attempt to highlight the fact that the mechanisms responsible for hepatic I/R injury depend on the experimental model used, and therefore the therapeutic strategies also differ according to the model used. The choice of model must therefore be adapted to the clinical question being answered.

  7. Sensitivity of aerosol indirect forcing and autoconversion to cloud droplet parameterization: an assessment with the NASA Global Modeling Initiative.

    Science.gov (United States)

    Sotiropoulou, R. P.; Meshkhidze, N.; Nenes, A.

    2006-12-01

    The aerosol indirect forcing is one of the largest sources of uncertainty in assessments of anthropogenic climate change [IPCC, 2001]. Much of this uncertainty arises from the approach used for linking cloud droplet number concentration (CDNC) to precursor aerosol. Global Climate Models (GCM) use a wide range of cloud droplet activation mechanisms ranging from empirical [Boucher and Lohmann, 1995] to detailed physically based formulations [e.g., Abdul-Razzak and Ghan, 2000; Fountoukis and Nenes, 2005]. The objective of this study is to assess the uncertainties in indirect forcing and autoconversion of cloud water to rain caused by the application of different cloud droplet parameterization mechanisms; this is an important step towards constraining the aerosol indirect effects (AIE). Here we estimate the uncertainty in indirect forcing and autoconversion rate using the NASA Global Model Initiative (GMI). The GMI allows easy interchange of meteorological fields, chemical mechanisms and the aerosol microphysical packages. Therefore, it is an ideal tool for assessing the effect of different parameters on aerosol indirect forcing. The aerosol module includes primary emissions, chemical production of sulfate in clear air and in-cloud aqueous phase, gravitational sedimentation, dry deposition, wet scavenging in and below clouds, and hygroscopic growth. Model inputs include SO2 (fossil fuel and natural), black carbon (BC), organic carbon (OC), mineral dust and sea salt. The meteorological data used in this work were taken from the NASA Data Assimilation Office (DAO) and two different GCMs: the NASA GEOS4 finite volume GCM (FVGCM) and the Goddard Institute for Space Studies version II' (GISS II') GCM. Simulations were carried out for "present day" and "preindustrial" emissions using different meteorological fields (i.e. DAO, FVGCM, GISS II'); cloud droplet number concentration is computed from the correlations of Boucher and Lohmann [1995], Abdul-Razzak and Ghan [2000

  8. Bethe ansatz study for the ground state of the Fateev-Zamolodchikov model

    International Nuclear Information System (INIS)

    Ray, S.

    1997-01-01

    A Bethe ansatz study of a self-dual Z(N) spin lattice model, originally proposed by V. A. Fateev and A. B. Zamolodchikov, is undertaken. The connection of this model to the chiral Potts model is established. Transcendental equations connecting the zeros of the Fateev-Zamolodchikov transfer matrix are derived. The free energies for the ferromagnetic and the anti-ferromagnetic ground states are found for both even and odd spins. copyright 1997 American Institute of Physics

  9. A Dynamic Wind Generation Model for Power Systems Studies

    OpenAIRE

    Estanqueiro, Ana

    2007-01-01

    In this paper, a wind park dynamic model is presented together with a base methodology for its application to power system studies. This detailed wind generation model addresses the wind turbine components and phenomena more relevant to characterize the power quality of a grid connected wind park, as well as the wind park response to the grid fast perturbations, e.g., low voltage ride through fault. The developed model was applied to the operating conditions of the selected sets of wind turbi...

  10. MODELING, SIMULATION AND PERFORMANCE STUDY OF GRID-CONNECTED PHOTOVOLTAIC ENERGY SYSTEM

    OpenAIRE

    Nagendra K; Karthik J; Keerthi Rao C; Kumar Raja Pemmadi

    2017-01-01

    This paper presents the modeling and simulation of a grid-connected photovoltaic energy system and a performance study using MATLAB/Simulink. The photovoltaic energy system is considered in three main parts: PV model, power conditioning system and grid interface. The photovoltaic model is interconnected with the grid through full-scale power electronic devices. The simulation is conducted on the PV energy system at normal temperature and at constant load using MATLAB.
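The PV model in such studies is typically some variant of the single-diode equation. A hedged sketch of the simplified form (series and shunt resistances neglected) is shown below; all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

# Simplified single-diode PV model: I(V) = Iph - I0 * (exp(V / (n*Vt*Ns)) - 1)
# Parameters are invented for illustration (a ~60-cell module at 25 C).
I_PH = 8.0               # photocurrent, A
I_0 = 1e-9               # diode saturation current, A
N_VT = 0.026 * 1.3 * 60  # ideality factor * thermal voltage * cells in series, V

def cell_current(v):
    """Module current (A) at terminal voltage v (V)."""
    return I_PH - I_0 * (math.exp(v / N_VT) - 1.0)

# Open-circuit voltage from I(Voc) = 0
v_oc = N_VT * math.log(I_PH / I_0 + 1.0)

# Crude maximum-power search over a grid of the I-V curve
candidates = []
for k in range(1, 100):
    v = k / 100 * v_oc
    candidates.append((v, v * cell_current(v)))
v_mp, p_mp = max(candidates, key=lambda t: t[1])
```

The grid search stands in for the maximum power point tracking that the power conditioning stage performs in the full Simulink model.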

  11. Modeling and Testing of EVs - Preliminary Study and Laboratory Development

    DEFF Research Database (Denmark)

    Yang, Guang-Ya; Marra, Francesco; Nielsen, Arne Hejde

    2010-01-01

    Electric vehicles (EVs) are expected to play a key role in the future energy management system to stabilize both supply and consumption with the presence of high penetration of renewable generation. A reasonably accurate model of the battery is a key element for the study of EV behavior and grid impact at different geographical areas, as well as driving and charging patterns. An electric circuit model is deployed in this work to represent the electrical properties of a lithium-ion battery. This paper reports the preliminary modeling and validation work based on manufacturer data sheets and realistic tests, followed by suggestions towards a feasible battery model for further studies.
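A first-order Thevenin equivalent circuit is a common choice for the kind of electric circuit battery model described above: terminal voltage equals open-circuit voltage minus the ohmic drop and one RC polarization term. The sketch below uses invented parameters and a toy linear OCV curve, not the manufacturer data sheet values from the paper.

```python
import math

# Invented first-order Thevenin parameters for a lithium-ion cell/pack
R0, R1, C1 = 0.05, 0.02, 2000.0   # series resistance (ohm), RC branch (ohm, F)

def ocv(soc):
    """Toy open-circuit voltage curve (V) vs state of charge in [0, 1]."""
    return 3.0 + 1.2 * soc

def simulate_discharge(i_load=10.0, capacity_ah=20.0, dt=1.0, steps=3600):
    """Constant-current discharge; returns final SoC and terminal voltages."""
    soc, v_rc, voltages = 1.0, 0.0, []
    decay = math.exp(-dt / (R1 * C1))
    for _ in range(steps):
        soc -= i_load * dt / (capacity_ah * 3600.0)      # coulomb counting
        v_rc = v_rc * decay + i_load * R1 * (1.0 - decay)  # RC branch update
        voltages.append(ocv(soc) - i_load * R0 - v_rc)
    return soc, voltages

soc_end, voltages = simulate_discharge()
```

A 10 A discharge of a 20 Ah pack for one hour leaves the state of charge at 0.5, and the terminal voltage sags below the OCV by the ohmic and polarization drops.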

  12. Study and optimization of the partial discharges in capacitor model ...

    African Journals Online (AJOL)

    ... an experimental methodology for the study of such processes, in view of their modelling and optimization. The obtained result is a mathematical model capable of identifying the parameters and the interactions between .... 5 mn; the next landing is situated 200 V above the voltage at which partial discharges appear and.

  13. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Sensitivity study of reduced models of the activated sludge process, for the purposes of parameter estimation and process optimisation: Benchmark process with ASM1 and UCT reduced biological models. S du Plessis and R Tzoneva. Department of Electrical Engineering, Cape Peninsula University of ...

  14. A Theoretical Study of Subsurface Drainage Model Simulation of ...

    African Journals Online (AJOL)

    A three-dimensional variable-density groundwater flow model, the SEAWAT model, was used to assess the influence of subsurface drain spacing, evapotranspiration and irrigation water quality on salt concentration at the base of the root zone, leaching and drainage in salt affected irrigated land. The study was carried out ...

  15. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  16. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...

  17. A Descriptive Study of Differing School Health Delivery Models

    Science.gov (United States)

    Becker, Sherri I.; Maughan, Erin

    2017-01-01

    The purpose of this exploratory qualitative study was to identify and describe emerging models of school health services. Participants (N = 11) provided information regarding their models in semistructured phone interviews. Results identified a variety of funding sources as well as different staffing configurations and supervision. Strengths of…

  18. Use of travel cost models in planning: A case study

    Science.gov (United States)

    Allan Marsinko; William T. Zawacki; J. Michael Bowker

    2002-01-01

    This article examines the use of the travel cost method in tourism-related decision making in the area of nonconsumptive wildlife-associated recreation. A travel cost model of nonconsumptive wildlife-associated recreation, developed by Zawacki, Marsinko, and Bowker, is used as a case study for this analysis. The travel cost model estimates the demand for the activity...
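In the common count-data form of the travel cost method, expected trips decline exponentially with travel cost, and per-trip consumer surplus equals -1/beta for the cost coefficient beta. The coefficients below are invented for illustration, not the estimates of Zawacki, Marsinko, and Bowker.

```python
import math

# Invented semi-log travel cost demand: E[trips] = exp(alpha + beta * cost)
ALPHA, BETA = 2.0, -0.04   # beta in trips per dollar (hypothetical)

def expected_trips(cost):
    """Expected seasonal trips per visitor at a given round-trip travel cost."""
    return math.exp(ALPHA + BETA * cost)

cs_per_trip = -1.0 / BETA              # consumer surplus per trip: $25 here
trips_at_30 = expected_trips(30.0)     # predicted demand at $30 travel cost
seasonal_value = trips_at_30 * cs_per_trip  # per-visitor seasonal surplus
```

The same structure extends to Poisson or negative binomial regression on survey trip counts; the -1/beta surplus formula is what links the fitted demand curve to the welfare estimates used in planning.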

  19. Model for the dynamic study of AC contactors

    Energy Technology Data Exchange (ETDEWEB)

    Corcoles, F.; Pedra, J.; Garrido, J.P.; Baza, R. [Dep. d' Eng. Electrica ETSEIB. UPC, Barcelona (Spain)

    2000-08-01

    This paper proposes a model for the dynamic analysis of AC contactors. The calculation algorithm and implementation are discussed. The proposed model can be used to study the influence of the design parameters and the supply on their dynamic behaviour. The high calculation speed of the implemented algorithm allows extensive ranges of parameter variations to be analysed. (orig.)

  20. A test-bed modeling study for wave resource assessment

    Science.gov (United States)

    Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.

    2016-02-01

    Hindcasts from phase-averaged wave models are commonly used to estimate standard statistics used in wave energy resource assessments. However, the research community and wave energy converter industry are lacking a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development, and at different spatial scales, e.g., from small-scale pilot study to large-scale commercial deployment. Therefore, it is necessary to evaluate current wave model codes, as well as limitations and knowledge gaps for predicting sea states, in order to establish best wave modeling practices, and to identify future research needs to improve wave prediction for resource assessment. This paper presents the first phase of an on-going modeling study to address these concerns. The modeling study is being conducted at a test-bed site off the Central Oregon Coast using two of the most widely used third-generation wave models, WaveWatchIII and SWAN. A nested-grid modeling approach, with domain dimensions ranging from global to regional scales, was used to provide the wave spectral boundary condition to a local-scale model domain, which has a spatial dimension of around 60 km by 60 km and a grid resolution of 250-300 m. Model results simulated by WaveWatchIII and SWAN in a structured-grid framework are compared to NOAA wave buoy data for six wave parameters: omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and the best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.
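Of the six parameters listed, omnidirectional wave power has a well-known deep-water closed form, J = rho * g^2 * Hm0^2 * Te / (64 * pi), in watts per metre of wave crest. The sketch below evaluates it for an assumed sea state; the Hm0 and Te values are illustrative, not buoy data from the study.

```python
import math

RHO, G = 1025.0, 9.81  # seawater density (kg/m^3), gravity (m/s^2)

def wave_power_deep(hm0, te):
    """Deep-water omnidirectional wave power flux (kW per metre of crest)
    from significant wave height Hm0 (m) and energy period Te (s):
    J = rho * g^2 * Hm0^2 * Te / (64 * pi)."""
    return RHO * G**2 * hm0**2 * te / (64.0 * math.pi) / 1000.0

# Assumed energetic sea state, roughly typical of the Oregon coast in winter
p_kw_per_m = wave_power_deep(hm0=3.0, te=10.0)
```

In practice the assessments described above integrate power over the modeled directional spectrum rather than using this deep-water shortcut, but the closed form is the standard sanity check against buoy-derived Hm0 and Te.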

  1. Data assimilation in modeling ocean processes: A bibliographic study

    Digital Repository Service at National Institute of Oceanography (India)

    Mahadevan, R.; Fernandes, A.A.; Saran, A.K.

    An annotated bibliography on studies related to data assimilation in modeling ocean processes has been prepared. The bibliography listed here is not comprehensive and is not prepared from the original references. Information obtainable from...

  2. Models for the study of Clostridium difficile infection

    Science.gov (United States)

    Best, Emma L.; Freeman, Jane; Wilcox, Mark H.

    2012-01-01

    Models of Clostridium difficile (C. difficile) infection have been used extensively for C. difficile research. The hamster model of C. difficile infection has been most extensively employed for the study of C. difficile and has been used in many different areas of research, including the induction of C. difficile infection, the testing of new treatments, population dynamics and characterization of virulence. Investigations using in vitro models of C. difficile introduced the concept of colonization resistance, evaluated the role of antibiotics in C. difficile development, explored population dynamics and have been useful in the evaluation of C. difficile treatments. Experiments using models have major advantages over clinical studies and have been indispensable in furthering C. difficile research. It is important for future study programs to carefully consider the approach to use and therefore be better placed to inform the design and interpretation of clinical studies. PMID:22555466

  3. Guadalupe River, California, Sedimentation Study. Numerical Model Investigation

    National Research Council Canada - National Science Library

    Copeland, Ronald

    2002-01-01

    A numerical model study was conducted to evaluate the potential impact that the Guadalupe River flood-control project would have on channel stability in terms of channel aggradation and degradation...

  4. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning and decision-making. Most shipbuilding processes consist of job-shop production, and their modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulty adopting simulation systems, regardless of the necessity of the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of a panel block production line. By implementing the initial simulation model generation process, which in the past was performed by a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes the standardization of simulation model quality possible.

  5. Study of the properties of general relativistic Kink model (GRK)

    International Nuclear Information System (INIS)

    Oliveira, L.C.S. de.

    1980-01-01

    The stability of the general relativistic Kink model (GRK) is studied. It is shown that the model is stable at least against radial perturbations. Furthermore, the Dirac field in the background of the geometry generated by the GRK is studied. It is verified that the GRK localizes the Dirac field around the region of largest curvature. The physical interpretation of this system (the Dirac field in the GRK background) is discussed. (Author)

  6. Model study on radioecology in Biblis. Pt. 2

    Energy Technology Data Exchange (ETDEWEB)

    1980-03-01

    The present volume, 'Water Pathway II', of the model study on radioecology at Biblis contains the six remaining part-studies on the following subjects: 1. Concentration of radionuclides in river sediments. 2. Incorporation via terrestrial food (milk, fruit, vegetables). 3. Radioactive substances in the Rhine not arising from nuclear power stations. 4. Dynamic model for intermittent discharge during reactor operation. 5. Radiation exposure of Rhine fish. 6. Influence of contaminated waste water on the industrial utilization of surface waters.

  7. Drosophila melanogaster as a model organism to study nanotoxicity.

    Science.gov (United States)

    Ong, Cynthia; Yung, Lin-Yue Lanry; Cai, Yu; Bay, Boon-Huat; Baeg, Gyeong-Hun

    2015-05-01

    Drosophila melanogaster has been used as an in vivo model organism for the study of genetics and development for over 100 years. More recently, the fruit fly has also been developed as an in vivo model organism for toxicology studies, in particular in the field of nanotoxicity. The incorporation of nanomaterials into consumer and biomedical products is a cause for concern, as nanomaterials are often associated with toxicity in many in vitro studies. In vivo animal studies of the toxicity of nanomaterials in rodents and other mammals are, however, limited by high operational costs and ethical objections. Hence, Drosophila, a genetically tractable organism with distinct developmental stages and a short life cycle, serves as an ideal organism in which to study nanomaterial-mediated toxicity. This review discusses the basic biology of Drosophila, the toxicity of nanomaterials, and how the Drosophila model can be used to study the toxicity of various types of nanomaterials.

  8. Construction of a biodynamic model for Cry protein production studies.

    Science.gov (United States)

    Navarro-Mtz, Ana Karin; Pérez-Guevara, Fermín

    2014-12-01

    Mathematical models have been used for everything from growth kinetic simulation to gene regulatory network prediction in B. thuringiensis culture. However, this culture is a time-dependent dynamic process in which cell physiology undergoes several changes in response to the changing cell environment. Throughout its culture, B. thuringiensis passes through three phases, each associated with the predominance of a major metabolic pathway: vegetative growth (Embden-Meyerhof-Parnas pathway), transition (γ-aminobutyric acid cycle) and sporulation (tricarboxylic acid cycle). No mathematical model is available that relates the different stages of cultivation to the metabolic pathway active in each of them. Therefore, in the present study, and based on published data, a biodynamic model was generated to describe the dynamics of the three phases in terms of their major metabolic pathways. The biodynamic model is used to study the interrelation between the different culture phases and their relationship with Cry protein production. The model consists of three interconnected modules, where each module represents one culture phase and its principal metabolic pathway. For model validation, four new fermentations were performed, showing that the constructed model describes the dynamics of the three phases reasonably well. The main results of this model imply that poly-β-hydroxybutyrate is crucial for endospore and Cry protein production. According to the yields of dipicolinic acid and Cry from poly-β-hydroxybutyrate calculated with the model, endospore and Cry protein production are not just simultaneous, parallel processes; they are also competitive processes.

  9. Study on Tower Models for EHV Transmission Line

    Directory of Open Access Journals (Sweden)

    Xu Bao-Qing

    2016-01-01

    Full Text Available Lightning outages are among the main factors that seriously threaten the safe and reliable operation of power systems. It is therefore very important to establish a reasonable transmission tower model and to properly evaluate the impulse response characteristics of lightning waves traveling along the tower in order to determine lightning protection performance reliably. With the help of the Electromagnetic Transients Program (EMTP), six 500 kV tower models are built. For the one-line, one-transformer operating mode of a 500 kV substation, the intruding wave overvoltage under the different tower models is calculated, and the effect of the tower model on intruding overvoltage is studied. The results show that different tower models can lead to great differences in the calculated results. Hence, reasonable selection of the tower model in the calculation of back-strike intruding waves is very important.

  10. Consequence model of the German reactor safety study

    International Nuclear Information System (INIS)

    Bayer, A.; Aldrich, D.; Burkart, K.; Horsch, F.; Hubschmann, W.; Schueckler, M.; Vogt, S.

    1979-01-01

    The consequence model developed for Phase A of the German Reactor Safety Study (RSS) is similar in many respects to its counterpart in WASH-1400. As in that previous study, the model describes the atmospheric dispersion and transport of radioactive material released from the containment during a postulated reactor accident, and predicts its interaction with and influence on man. Differences do exist between the two models, however, for the following reasons: (1) to more adequately reflect central European conditions, (2) to include improved submodels, and (3) to incorporate additional data and knowledge that have become available since the publication of WASH-1400. The consequence model as used in Phase A of the German RSS is described, highlighting differences between it and the U.S. model

  11. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth’s surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages follow different approaches and methods suitable for image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study is available on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques, and offers comments on what can and cannot be done with each package. In conclusion, each package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  12. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions incompletely formulated or of value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in presence of uncertainty and multiple objectives. Purpose and prospect of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr

  13. Studying historical occupational careers with multilevel growth models

    Directory of Open Access Journals (Sweden)

    Wiebke Schulz

    2010-10-01

    Full Text Available In this article we propose to study occupational careers in historical data by using multilevel growth models. Historical career data are often characterized by a lack of information on the timing of occupational changes and by differing numbers of occupation observations per individual. Growth models can handle these specificities, whereas standard methods, such as event history analysis, cannot. We illustrate the use of growth models by studying the career success of men and women, using data from the Historical Sample of the Netherlands. The results show that the method is applicable to male careers, but runs into difficulties when analyzing female careers.
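
    The unbalanced-panel point above can be illustrated with a toy sketch (all data and names hypothetical; the article fits full multilevel growth models, whereas this stripped-down within-person estimator only shows why differing numbers of observations per individual are unproblematic): person-level intercepts are absorbed by demeaning, and each person contributes however many observations exist.

```python
# Toy within-person estimator of a common career slope on an unbalanced
# panel. Hypothetical data; not the article's multilevel specification.

def within_slope(careers):
    """careers maps person_id -> list of (career_age, status) pairs."""
    num = den = 0.0
    for obs in careers.values():
        if len(obs) < 2:
            continue  # one observation identifies no within-person trend
        mx = sum(a for a, _ in obs) / len(obs)
        my = sum(s for _, s in obs) / len(obs)
        for a, s in obs:
            num += (a - mx) * (s - my)
            den += (a - mx) ** 2
    return num / den

careers = {
    "p1": [(0, 10.0), (10, 15.0), (20, 20.0)],  # 3 observations
    "p2": [(5, 32.5), (15, 37.5)],              # 2 observations
    "p3": [(0, 50.0)],                          # 1 observation: dropped
}
print(within_slope(careers))  # common slope -> 0.5
```

    Persons enter with different intercepts and observation counts, yet the shared slope is recovered; a full growth model would additionally model random slopes and measurement timing.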

  14. Contribution to the study of conformal theories and integrable models

    International Nuclear Information System (INIS)

    Sochen, N.

    1992-05-01

    The purpose of this thesis is the study of 2-D physics. The main tool is conformal field theory with Kac-Moody and W algebras. This theory describes 2-D models that have translation, rotation and dilatation symmetries at their critical point. Extended conformal theories describe models that have a larger symmetry than the conformal symmetry. After a review of conformal theory methods, the author carries out a detailed study of the form of singular vectors in the sl(2) affine algebra. With this important form, correlation functions can be calculated. The classical W algebra is studied, and the relations between the classical and quantum W algebras are specified. The bosonization method is presented and the sl(2)/sl(2) topological model is studied. The bosonization of the partition functions of different models is described. A program of rational theory classification is described, linking rational conformal theories and integrable spin models, and interesting relations between the Boltzmann weights of different models have been found. With these relations, the integrability of models is proved by a direct calculation of their Boltzmann weights

  15. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models

    DEFF Research Database (Denmark)

    Kheir, Rania Bou; Greve, Mogens Humlekrog; Bøcher, Peder Klith

    2010-01-01

    the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (un-pruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature...... field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv), the remote sensing (RS) indices only, (v......) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in built pruned trees. The best constructed classification tree models (in the number of three) with the lowest misclassification error (ME...

  16. Business Model Perusahaan Keluarga: Studi Kasus Pada Industri Batik

    Directory of Open Access Journals (Sweden)

    Achmad Sobirin

    2014-07-01

    Full Text Available Abstract: This paper reviews the existing business model of a family firm in the context of the batik industry and proposes a new one. A business model is conceived as the logic of doing business for value creation; hence a business model is sometimes understood as a construct, a mental model or a business paradigm, used as a guide on how to conduct everyday business. Meanwhile, a family firm is, by definition, a firm in which all or the majority of ownership is in the hands of a family unit, which is managed by family members and is to be transferred to the next generation. Using a single case study, Perusahaan Batik Bogavira – a family business enterprise producing and selling batik typical of Lampung – we found that the existing business model of Perusahaan Batik Bogavira may potentially create cannibalization. We therefore proposed a new business model configuration, in the hope that loyal buyers remain with the firm while the firm maintains its growth. Keywords: business model, family firm, batik industry. Abstract (translated from Indonesian): This paper discusses the application of a relatively new concept, the “business model”, to a family firm in the batik industry – Perusahaan Batik Bogavira, which produces and sells batik typical of Lampung. The aim is to re-examine the current business model in order to determine how well it fits the characteristics of the business and its environment and, where deemed necessary, to propose a new, more suitable business model. The discussion begins with a review of the concepts of the business model and the family firm in order to capture the essence of both. In general, a business model is the logic of doing business for value creation, so a business model is often also referred to as a construct, mental model or business paradigm that guides the conduct of business activities. Meanwhile, what is meant by a family firm

  17. An optomechanical model eye for ophthalmological refractive studies.

    Science.gov (United States)

    Arianpour, Ashkan; Tremblay, Eric J; Stamenov, Igor; Ford, Joseph E; Schanzlin, David J; Lo, Yuhwa

    2013-02-01

    To create an accurate, low-cost optomechanical model eye for investigation of refractive errors in clinical and basic research studies. An optomechanical fluid-filled eye model with dimensions consistent with the human eye was designed and fabricated. Optical simulations were performed on the optomechanical eye model, and the quantified resolution and refractive errors were compared with the widely used Navarro eye model using the ray-tracing software ZEMAX (Radiant Zemax, Redmond, WA). The resolution of the physical optomechanical eye model was then quantified with a complementary metal-oxide semiconductor imager using the image resolution software SFR Plus (Imatest, Boulder, CO). Refractive, manufacturing, and assembling errors were also assessed. A refractive intraocular lens (IOL) and a diffractive IOL were added to the optomechanical eye model for tests and analyses of a 1951 U.S. Air Force target chart. Resolution and aberrations of the optomechanical eye model and the Navarro eye model were qualitatively similar in ZEMAX simulations. Experimental testing found that the optomechanical eye model reproduced properties pertinent to human eyes, including resolution better than 20/20 visual acuity and a decrease in resolution as the field of view increased in size. The IOLs were also integrated into the optomechanical eye model to image objects at distances of 15, 10, and 3 feet, and they indicated a resolution of 22.8 cycles per degree at 15 feet. A life-sized optomechanical eye model with the flexibility to be patient-specific was designed and constructed. The model had the resolution of a healthy human eye and recreated normal refractive errors. This model may be useful in the evaluation of IOLs for cataract surgery. Copyright 2013, SLACK Incorporated.

  18. Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling

    Directory of Open Access Journals (Sweden)

    Paul-Antoine Santoni

    2011-01-01

    Full Text Available This work presents the extension of a physical model for the spread of surface fire to the landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire, which converts the two-dimensional model of fire spread to three dimensions, taking spatial information into account. Two wildland fire behaviour case studies were developed and used as a basis to test the simulator. Both fires were reconstructed, paying attention to vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis) for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate, when most wildfires occur.

  19. Best Practices in Academic Management. Study Programs Classification Model

    Directory of Open Access Journals (Sweden)

    Ofelia Ema Aleca

    2016-05-01

    Full Text Available This article proposes and tests a set of performance indicators for the assessment of Bachelor and Master studies from two perspectives: the study programs and the disciplines. Academic performance at the level of a study program is calculated based on success and efficiency rates, and at the discipline level on the basis of rates of efficiency, success and absenteeism. This research proposes a model for classifying the study programs within a Bachelor and Master cycle based on educational performance and efficiency. What recommends this model as a best practice in academic management is the possibility of grouping a study program or a discipline into a particular category of efficiency
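
    The rate-based classification idea can be sketched as follows (the article's actual indicator definitions and category boundaries are not reproduced here; the thresholds and figures below are invented for illustration):

```python
# Hypothetical sketch: assign a study program to a performance category
# from a success rate and an efficiency rate. Thresholds are illustrative.

def classify_program(passed, enrolled, credits_earned, credits_attempted):
    success = passed / enrolled                      # success rate
    efficiency = credits_earned / credits_attempted  # efficiency rate
    if success >= 0.8 and efficiency >= 0.8:
        return "high performance"
    if success >= 0.5 and efficiency >= 0.5:
        return "medium performance"
    return "low performance"

print(classify_program(90, 100, 1700, 2000))  # -> high performance
print(classify_program(55, 100, 1100, 2000))  # -> medium performance
```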

  20. Innovation and Business Model: a case study about integration of Innovation Funnel and Business Model Canvas

    Directory of Open Access Journals (Sweden)

    Fábio Luiz Zandoval Bonazzi

    2014-12-01

    Full Text Available Unlike in the past, thinking about innovation today involves a reflection on value co-creation through strategic alliances, customer proximity and the adoption of different business models. Thus, this study analyzed and described the innovation process of the company DSM, connecting it to concepts of organizational development strategies and the theory of the business model. This is a basic interpretive qualitative study, developed by means of a single case study conducted through interviews and documentary analysis. The study enabled us to categorize the company's business model as an open, unbundled and innovative model, which makes innovation a dependent variable of this internal configuration of value creation and value capture. As a theoretical contribution, we highlight the convergence and complementarity of the “Business Model Canvas” tool and the “Innovation Funnel,” used here to analyze the empirical case.

  1. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters whose values can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves representing the probability of control room crew non-response as a function of time under different conditions affecting crew performance. The non-response probability is then a contributor to the overall non-success of operating crews in achieving a functional objective identified in the PRA study. Because the data were sparse, simulator data and some small-scale tests were used to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing. The model can potentially help PRA analysts make human reliability assessments more explicit. It incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources, and crew response time data from simulator training exercises
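
    A non-response curve of the kind described above can be sketched as follows. This is an illustrative assumption, not the calibrated EPRI coefficients: HCR-type curves are commonly expressed in time normalized by the median crew response time, here given a lognormal shape with a hypothetical spread parameter.

```python
import math

# Illustrative lognormal non-response curve in normalized time t / t_half,
# where t_half is the median crew response time (sigma is hypothetical).

def non_response_prob(t, t_half, sigma=0.4):
    """P(crew has NOT yet responded by time t)."""
    if t <= 0:
        return 1.0
    z = math.log(t / t_half) / sigma
    # standard normal survival function via the error function
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

print(non_response_prob(60.0, 60.0))  # at the median response time -> 0.5
```

    By construction the curve passes through 0.5 at the median time and decays toward zero as time grows, which is the qualitative behavior the HCR curves encode.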

  2. Study on modeling technology in digital reactor system

    International Nuclear Information System (INIS)

    Liu Xiaoping; Luo Yuetong; Tong Lili

    2004-01-01

    Modeling is the core of a digital reactor system. For an extensible platform for reactor conceptual design, it is very important to study modeling technology and to develop tools that speed up the preparation of all the classical computing models. This paper introduces the background of the project and the basic concept of a digital reactor. MCAM is taken as an example of modeling, and its related technologies are described. MCAM is an interface program for MCNP geometry models developed by the FDS team (ASIPP and HUT) and designed to run on Windows. MCAM aims to utilize CAD technology to facilitate the creation of MCNP geometry models, in two ways: (1) using user interface technology to aid the generation of MCNP geometry models; (2) using existing 3D CAD models to accelerate the creation of MCNP geometry models. This paper gives an overview of MCAM's major functions. Finally, several examples are given to demonstrate MCAM's various capabilities. (authors)

  3. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that are based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherent in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) of a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used in a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
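
    The underlying bootstrap model-averaging idea (BOOT) can be sketched roughly as follows. This is a generic toy version, not the authors' implementation: the candidate set (constant vs. linear model), data and seed are hypothetical. On each bootstrap resample, AIC selects a model, and the selected models' predictions are averaged so that no single selected model dominates.

```python
import math
import random

# Toy bootstrap model averaging: resample, let AIC pick a candidate model
# on each resample, average the selected models' predictions.

def fit_const(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def aic(xs, ys, model, k):
    n = len(xs)
    rss = sum((y - model(x)) ** 2 for x, y in zip(xs, ys))
    return n * math.log(rss / n + 1e-12) + 2 * k  # guard against rss == 0

def boot_average(xs, ys, x_new, n_boot=200, seed=1):
    rng = random.Random(seed)
    preds = []
    n = len(xs)
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        if len(set(bx)) < 2:
            continue  # degenerate resample: linear fit undefined
        models = [(fit_const(bx, by), 1), (fit_linear(bx, by), 2)]
        best = min(models, key=lambda mk: aic(bx, by, mk[0], mk[1]))[0]
        preds.append(best(x_new))
    return sum(preds) / len(preds)

xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]             # noise-free line y = 2x + 1
print(round(boot_average(xs, ys, 10.0), 6))  # -> 21.0
```

    Double BOOT, as described in the abstract, adds a further layer of resampling on top of this scheme to reduce the variance of the averaged estimate.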

  4. Example of emergency response model evaluation of studies using the Mathew/Adpic models

    International Nuclear Information System (INIS)

    Dickerson, M.H.; Lange, R.

    1986-04-01

    This report summarizes model evaluation studies conducted for the MATHEW/ADPIC transport and diffusion models during the past ten years. These models support the US Department of Energy Atmospheric Release Advisory Capability, an emergency response service for atmospheric releases of nuclear material. The field studies involving tracer releases cover a broad range of meteorology, terrain and tracer release heights, the three most important aspects of estimating air concentration values resulting from airborne releases of toxic material. Results of these studies show that these models can estimate air concentration values within a factor of 2 between 20% and 50% of the time, and within a factor of 5 between 40% and 80% of the time. As the meteorology and terrain become more complex and the release height of the tracer is increased, the accuracy of the model calculations degrades. This band of uncertainty appears to correctly represent the capability of these models at this time. A method for estimating angular uncertainty in the model calculations is described and used to suggest alternative methods for evaluating emergency response models
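
    The factor-of-2 and factor-of-5 statistics quoted above are simple fractions: the share of paired predicted and observed concentrations whose ratio lies within [1/N, N]. A minimal sketch with hypothetical sample values:

```python
# Fraction of (predicted, observed) pairs agreeing within a factor of n.
# Sample concentrations below are hypothetical.

def within_factor(pred, obs, n):
    pairs = [(p, o) for p, o in zip(pred, obs) if p > 0 and o > 0]
    hits = sum(1 for p, o in pairs if 1.0 / n <= p / o <= n)
    return hits / len(pairs)

predicted = [1.0, 3.0, 10.0, 0.5, 8.0]
observed  = [2.0, 1.0,  1.0, 1.0, 4.0]
print(within_factor(predicted, observed, 2))  # -> 0.6
print(within_factor(predicted, observed, 5))  # -> 0.8
```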

  5. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.
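
    The pathway-by-pathway proteome comparison described above can be caricatured with a simple overlap score. The pathway and protein names below are hypothetical placeholders, and a Jaccard index stands in for whatever similarity measure the authors actually used:

```python
# For each pathway, score how well the model organism's protein complement
# overlaps the target organism's (hypothetical annotations; Jaccard index).

def pathway_similarity(model_org, target_org):
    scores = {}
    for pathway in set(model_org) | set(target_org):
        a = model_org.get(pathway, set())
        b = target_org.get(pathway, set())
        union = a | b
        scores[pathway] = len(a & b) / len(union) if union else 0.0
    return scores

yeast = {"glycolysis": {"HXK", "PFK", "PYK"}, "mating": {"STE2", "STE3"}}
human = {"glycolysis": {"HXK", "PFK", "PYK", "PKM"}, "mating": set()}
scores = pathway_similarity(yeast, human)
print(scores["glycolysis"], scores["mating"])  # -> 0.75 0.0
```

    High per-pathway scores would mark processes for which the model organism is a reasonable proxy; low scores (like the mating pathway here) mark processes where extrapolation is unsafe.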

  6. Phase field model for the study of boiling

    International Nuclear Information System (INIS)

    Ruyer, P.

    2006-07-01

    This study concerns both the modeling and the numerical simulation of boiling flows. First we propose a review of nucleate boiling at high wall heat flux, focusing in particular on the current understanding of the boiling crisis. From this analysis we derive a motivation for the numerical simulation of bubble growth dynamics. The main and remaining part of this study is then devoted to the development and analysis of a phase field model for liquid-vapor flows with phase change. We propose a thermodynamic quasi-compressible formulation whose properties match those required for the envisaged numerical study. The system of governing equations is a thermodynamically consistent regularization of the sharp interface model, which is the advantage of diffuse interface models. We show that the thickness of the interface transition layer can be defined independently of the thermodynamic description of the bulk phases, a property that is numerically attractive. We derive the kinetic relation that allows us to analyze the consequences of the phase field formulation for the model of the dissipative mechanisms. Finally we study the numerical resolution of the model with the help of simulations of phase transition in simple configurations as well as of isothermal bubble dynamics. (author)

  7. Modelling study of sea breezes in a complex coastal environment

    Science.gov (United States)

    Cai, X.-M.; Steyn, D. G.

    This study investigates mesoscale modelling of sea breezes blowing from a narrow strait into the lower Fraser valley (LFV), British Columbia, Canada, during the period 17-20 July, 1985. Without a nudging scheme in the inner grid, the CSU-RAMS model produces satisfactory wind and temperature fields during the daytime. In comparison with observations, the agreement indices for surface wind and temperature during daytime reach about 0.6 and 0.95, respectively, while the agreement indices drop to 0.4 at night. In the vertical, profiles of modelled wind and temperature generally agree with tethersonde data collected on 17 and 19 July. The study demonstrates that in late afternoon the model does not capture the advection of an elevated warm layer which originated from land surfaces outside the inner grid. Mixed layer depth (MLD) is calculated from the model output of the turbulent kinetic energy field. Comparison of the MLD results with observations shows that the method generates a reliable MLD during the daytime, and that accurate estimates of MLD near the coast require the correct simulation of wind conditions over the sea. The study has shown that for a complex coastal environment like the LFV, a reliable modelling study depends not only on local surface fluxes but also on elevated layers transported from remote land surfaces. This dependence is especially important when local forcings are weak, for example during late afternoon and at night.
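
    The "agreement index" quoted above is taken here to be Willmott's index of agreement d (an assumption; the abstract does not give the formula), which ranges from 0 (no agreement) to 1 (perfect agreement) and can be computed as:

```python
# Willmott's index of agreement:
#   d = 1 - sum (P_i - O_i)^2 / sum (|P_i - O_mean| + |O_i - O_mean|)^2
# with O_mean the mean of the observations. Sample values are hypothetical.

def willmott_d(pred, obs):
    o_mean = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - o_mean) + abs(o - o_mean)) ** 2
              for p, o in zip(pred, obs))
    return 1.0 - num / den

obs = [1.0, 2.0, 3.0, 4.0]
print(willmott_d(obs, obs))  # perfect match -> 1.0
```

    A uniformly biased prediction such as [1.5, 2.5, 3.5, 4.5] still scores above 0.9 against these observations, which is why d near 0.95 (as for daytime temperature above) indicates close model-observation agreement.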

  8. Model hosts for the study of oral candidiasis.

    Science.gov (United States)

    Junqueira, Juliana Campos

    2012-01-01

    Oral candidiasis is an opportunistic infection caused by yeasts of the genus Candida, primarily Candida albicans. It is generally associated with predisposing factors such as the use of immunosuppressive agents, antibiotics, prostheses, and xerostomia. The development of research in animal models is extremely important for understanding the nature of fungal pathogenicity, host interactions, and the treatment of oral mucosal Candida infections. Many oral candidiasis models in rats and mice have been developed using antibiotic administration, induction of xerostomia, treatment with immunosuppressive agents, or germ-free animals, and all of these models have both benefits and limitations. Over the past decade, invertebrate model hosts, including Galleria mellonella, Caenorhabditis elegans, and Drosophila melanogaster, have been used for the study of Candida pathogenesis. These invertebrate systems offer a number of advantages over mammalian vertebrate models, predominantly because they allow the study of strain collections without the ethical considerations associated with studies in mammals. Thus, invertebrate models may be useful for understanding the pathogenicity of Candida isolates from the oral cavity, the interactions of oral microorganisms, and the study of new antifungal compounds for oral candidiasis.

  9. Synthetic Study on the Geological and Hydrogeological Model around KURT

    International Nuclear Information System (INIS)

    Park, Kyung Woo; Kim, Kyung Su; Koh, Yong Kwon; Choi, Jong Won

    2011-01-01

    To characterize the site-specific properties of the study area for high-level radioactive waste disposal research at KAERI, several geological investigations, such as surface geological surveys and borehole drillings, have been carried out since 1997. In particular, KURT (the KAERI Underground Research Tunnel) was constructed in 2006 to further the study of the geological environment. As a result, the first geological model of the study area was constructed using the results of the geological investigations. The objective of this research is to construct a hydrogeological model of the KURT area on the basis of the geological model. To accomplish this, hydrogeological data obtained from in-situ hydraulic tests in the 9 boreholes were evaluated, and the hydrogeological properties of the 4 geological elements in the geological model, namely the subsurface weathering zone, the low-angle fracture zone, the fracture zones and the bedrock, were proposed. The hydrogeological model suggested in this study will be used to provide input parameters for the groundwater flow modeling to be carried out as the next step of the site characterization around the KURT area

  10. Study on geological environment model using geostatistics method

    International Nuclear Information System (INIS)

    Honda, Makoto; Suzuki, Makoto; Sakurai, Hideyuki; Iwasa, Kengo; Matsui, Hiroya

    2005-03-01

    The purpose of this study is to develop a geostatistical procedure for modelling geological environments and to evaluate the quantitative relationship between the amount of information and the reliability of the model, using the data sets obtained in the surface-based investigation phase (Phase 1) of the Horonobe Underground Research Laboratory Project. The study runs for three years, from FY2004 to FY2006, and this report covers the research in FY2005, the second year. In the FY2005 research, the hydrogeological model was built, as in the FY2004 research, using the data obtained from the deep boreholes (HDB-6, 7 and 8) and the ground magnetotelluric (AMT) survey executed in FY2004, in addition to the data sets used in the first year of the study. Above all, the relationship between the amount of information and the reliability of the model was demonstrated through a comparison of the models at each step, where each step corresponds to the investigation stage in the respective fiscal year. Furthermore, a statistical test was applied to detect differences in the basic statistics of the various data due to geological features, with a view to incorporating the geological information into the modelling procedures. (author)
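As a concrete illustration of the kind of geostatistical machinery such a study relies on, the classical (Matheron) empirical semivariogram can be sketched as follows. This is a generic textbook estimator, not the procedure used in the Horonobe project; the function name, 1-D setting and lag-binning scheme are illustrative assumptions.

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """Classical (Matheron) semivariogram estimator for 1-D positions x and
    values z: gamma(h) is the mean of 0.5*(z_i - z_j)**2 over all point pairs
    whose separation lies within tol of the lag h.  Illustrative sketch only."""
    d = np.abs(x[:, None] - x[None, :])        # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2  # half squared increments
    out = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0)
        out.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(out)
```

For data with a pure linear drift z = x, the estimator returns gamma(h) = h**2 / 2, the expected signature of a non-stationary trend; fitting a variogram model to such estimates is what then drives kriging-based interpolation.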

  11. Service-oriented enterprise modelling and analysis: a case study

    NARCIS (Netherlands)

    Iacob, Maria Eugenia; Jonkers, H.; Lankhorst, M.M.; Steen, M.W.A.

    2007-01-01

    In order to validate the concepts and techniques for service-oriented enterprise architecture modelling, developed in the ArchiMate project (Lankhorst, et al., 2005), we have conducted a number of case studies. This paper describes one of these case studies, conducted at the Dutch Tax and Customs

  12. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  13. A MODEL STUDY ON ENHANCING PRECIPITATION FROM CONVECTIVE CLOUDS WITH SALT POWDER

    OpenAIRE

    M. V, Belyaeva; A.S, Drofa; V.N, Ivanov; Kudsy, Mahally; Haryanto, Untung; Goenawan, R Djoko; Harsanti, Dini; Ridwan, Ridwan

    2011-01-01

    A study on the use of polydisperse salt powder as a cloud-seeding agent has been carried out using a 1-dimensional model. The study examined the effect of adding the salt powder on the cloud droplet distribution and on the amount of additional precipitation; the results were analysed and compared with those obtained using hygroscopic particles from pyrotechnic flares. The cloud conditions studied comprised several different heights, updrafts and...

  14. Case Studies in Modelling and Control in Food Processes.

    Science.gov (United States)

    Glassey, J; Barone, A; Montague, G A; Sabou, V

    This chapter discusses the importance of modelling and control in increasing food process efficiency and ensuring product quality. Various approaches to both modelling and control in food processing are set in the context of the specific challenges in this industrial sector and latest developments in each area are discussed. Three industrial case studies are used to demonstrate the benefits of advanced measurement, modelling and control in food processes. The first case study illustrates the use of knowledge elicitation from expert operators in the process for the manufacture of potato chips (French fries) and the consequent improvements in process control to increase the consistency of the resulting product. The second case study highlights the economic benefits of tighter control of an important process parameter, moisture content, in potato crisp (chips) manufacture. The final case study describes the use of NIR spectroscopy in ensuring effective mixing of dry multicomponent mixtures and pastes. Practical implementation tips and infrastructure requirements are also discussed.

  15. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    Full Text Available We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
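The bootstrap bias-correction that the abstract benchmarks can be sketched in the scalar AR(1) special case of a VAR: fit the model, simulate many series from the fit, measure the average over- or under-shoot of the re-estimates, and subtract it. This is an illustrative sketch, not the authors' procedure; the no-intercept OLS estimator, Gaussian innovations and function names are assumptions.

```python
import numpy as np

def ols_ar1(y):
    # OLS slope of y_t on y_{t-1}, no intercept (illustrative simplification)
    return float(y[:-1] @ y[1:]) / float(y[:-1] @ y[:-1])

def bootstrap_bias_correct(y, n_boot=200, seed=0):
    """Parametric bootstrap bias-correction for an AR(1) coefficient:
    bias is estimated as mean(bootstrap re-estimates) - phi_hat and
    subtracted from the raw estimate."""
    rng = np.random.default_rng(seed)
    phi_hat = ols_ar1(y)
    T = len(y)
    reps = []
    for _ in range(n_boot):
        e = rng.standard_normal(T)
        yb = np.empty(T)
        yb[0] = e[0]
        for t in range(1, T):
            yb[t] = phi_hat * yb[t - 1] + e[t]   # simulate from the fitted model
        reps.append(ols_ar1(yb))
    bias = np.mean(reps) - phi_hat
    return phi_hat - bias
```

Because the OLS estimate of a positive autoregressive coefficient is biased downward in small samples, the corrected estimate typically lies above the raw one; the analytical formulas discussed in the abstract replace the simulation loop with a closed-form bias expression.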

  16. Evaluation of Multiclass Model Observers in PET LROC Studies

    Science.gov (United States)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise
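The NPW matched filter underlying these observers has a simple form: the test statistic at each candidate location is the inner product of the image with the shifted signal template, and the maximiser is the reported tumor location and rating. A minimal brute-force sketch follows; it is illustrative only (real LROC studies operate on reconstructed PET images, restrict the search to a localization-acceptance region, and the channelized variant first projects onto channel responses).

```python
import numpy as np

def npw_localize(image, template):
    """Scan the signal template over the image; the NPW statistic at each
    location is the inner product of the image patch with the template,
    and the maximiser is the observer's location report."""
    ih, iw = image.shape
    th, tw = template.shape
    best, loc = -np.inf, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            stat = float(np.sum(image[r:r + th, c:c + tw] * template))
            if stat > best:
                best, loc = stat, (r, c)
    return loc, best
```

In an LROC study the returned rating is thresholded for detection and the returned location is scored correct only if it falls within a set radius of the true tumor site.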

  17. Molecular modeling of protein materials: case study of elastin

    International Nuclear Information System (INIS)

    Tarakanova, Anna; Buehler, Markus J

    2013-01-01

    Molecular modeling of protein materials is a quickly growing area of research that has produced numerous contributions in fields ranging from structural engineering to medicine and biology. We review here the history and methods commonly employed in molecular modeling of protein materials, emphasizing the advantages for using modeling as a complement to experimental work. We then consider a case study of the protein elastin, a critically important ‘mechanical protein’ to exemplify the approach in an area where molecular modeling has made a significant impact. We outline the progression of computational modeling studies that have considerably enhanced our understanding of this important protein which endows elasticity and recoil to the tissues it is found in, including the skin, lungs, arteries and the heart. A vast collection of literature has been directed at studying the structure and function of this protein for over half a century, the first molecular dynamics study of elastin being reported in the 1980s. We review the pivotal computational works that have considerably enhanced our fundamental understanding of elastin's atomistic structure and its extraordinary qualities—focusing on two in particular: elastin's superb elasticity and the inverse temperature transition—the remarkable ability of elastin to take on a more structured conformation at higher temperatures, suggesting its effectiveness as a biomolecular switch. Our hope is to showcase these methods as both complementary and enriching to experimental approaches that have thus far dominated the study of most protein-based materials. (topical review)

  18. A model for voltage collapse study considering load characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, L B [Companhia de Energia Eletrica da Bahia (COELBA), Salvador, BA (Brazil)

    1994-12-31

    This paper presents a model for the analysis of the voltage collapse and instability problem that takes load characteristics into account. The model represents the transmission lines in exact form through the generalized constants A, B, C, D, and the loads as functions of the voltage, emphasizing the cases of constant power, constant current and constant impedance. The study treats the system behaviour in steady state and presents illustrative graphics on the problem. (author) 12 refs., 4 figs.
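The generalized constants mentioned here relate sending- and receiving-end quantities through the standard two-port equations Vs = A*Vr + B*Ir and Is = C*Vr + D*Ir. A minimal sketch using the textbook nominal-pi constants for a medium-length line is given below; the numerical values in the usage are illustrative, and this is generic circuit theory, not the paper's specific collapse model.

```python
def nominal_pi_constants(Z, Y):
    """ABCD constants of a nominal-pi medium-length line with total series
    impedance Z and total shunt admittance Y (standard textbook relations,
    satisfying the reciprocity condition A*D - B*C = 1)."""
    A = D = 1 + Z * Y / 2
    B = Z
    C = Y * (1 + Z * Y / 4)
    return A, B, C, D

def sending_end(A, B, C, D, Vr, Ir):
    # Vs = A*Vr + B*Ir ; Is = C*Vr + D*Ir
    return A * Vr + B * Ir, C * Vr + D * Ir
```

A voltage-collapse study then closes the loop by imposing the load characteristic at the receiving end, e.g. Ir = conj(S / Vr) for a constant-power load, and tracing how Vr behaves as the demand S grows.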

  19. Singularity analysis in nonlinear biomathematical models: Two case studies

    International Nuclear Information System (INIS)

    Meletlidou, E.; Leach, P.G.L.

    2007-01-01

    We investigate the possession of the Painleve Property for certain values of the parameters in two biological models. The first is a metapopulation model for two species (prey and predator) and the second one is a study of a sexually transmitted disease, into which 'education' is introduced. We determine the cases for which the systems possess the Painleve Property, in particular some of the cases for which the equations can be directly integrated. We draw conclusions for these cases

  20. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  1. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Various approaches have therefore been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art account of multidimensional modeling design.

  2. A study of critical two-phase flow models

    International Nuclear Information System (INIS)

    Siikonen, T.

    1982-01-01

    Existing computer codes use different boundary conditions in the calculation of critical two-phase flow. In the present study these boundary conditions are compared. It is shown that the boundary condition should be determined from the hydraulic model used in the computer code; the use of a correlation that is not based on that hydraulic model often leads to poor results. Usually good agreement with data is obtained for the critical mass flux, but the agreement is not as good for the pressure profiles. The reason is suggested to be mainly the inadequate modelling of non-equilibrium effects. (orig.)

  3. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. 
droughts).
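To make the notion of a simple conceptual model concrete, a single-bucket water balance can be sketched as below. This is a generic illustration, not the SWBM, HBV or PREVAH implementations of the abstract; the storage-scaled evaporation, linear outflow and parameter values are assumed forms chosen for clarity.

```python
def simple_bucket(precip, pet, smax=100.0, k=0.05):
    """One-bucket conceptual water balance: actual evaporation is limited by
    the storage fraction, runoff is linear in storage, and storage is kept
    within [0, smax].  Units are arbitrary; illustrative only."""
    s, runoff = 0.5 * smax, []
    for p, e in zip(precip, pet):
        et = e * s / smax      # storage-limited actual evaporation
        q = k * s              # linear-reservoir outflow
        s = min(max(s + p - et - q, 0.0), smax)
        runoff.append(q)
    return runoff
```

The more complex models in the study differ mainly in how many such stores they chain together and how the fluxes between them are parameterized, which is exactly where the over-fitting risk noted in the abstract arises.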

  4. Recent validation studies for two NRPB environmental transfer models

    International Nuclear Information System (INIS)

    Brown, J.; Simmonds, J.R.

    1991-01-01

    The National Radiological Protection Board (NRPB) developed a dynamic model for the transfer of radionuclides through terrestrial food chains some years ago. This model, now called FARMLAND, predicts both instantaneous and time integrals of concentration of radionuclides in a variety of foods. The model can be used to assess the consequences of both accidental and routine releases of radioactivity to the environment; and results can be obtained as a function of time. A number of validation studies have been carried out on FARMLAND. In these the model predictions have been compared with a variety of sets of environmental measurement data. Some of these studies will be outlined in the paper. A model to predict external radiation exposure from radioactivity deposited on different surfaces in the environment has also been developed at NRPB. This model, called EXPURT (EXPosure from Urban Radionuclide Transfer), can be used to predict radiation doses as a function of time following deposition in a variety of environments, ranging from rural to inner-city areas. This paper outlines validation studies and future extensions to be carried out on EXPURT. (12 refs., 4 figs.)

  5. Geology - Background complementary studies. Forsmark modelling stage 2.2

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, Michael B. [Geological Survey of Sweden, Uppsala (Sweden); Skagius, Kristina [Kemakta Konsult AB, Stockholm (Sweden)] (eds.)

    2007-09-15

    During Forsmark model stage 2.2, seven complementary geophysical and geological studies were initiated by the geological modelling team, in direct connection with and as a background support to the deterministic modelling of deformation zones. One of these studies involved a field control on the character of two low magnetic lineaments with NNE and NE trends inside the target volume. The interpretation of these lineaments formed one of the late deliveries to SKB that took place after the data freeze for model stage 2.2 and during the initial stage of the modelling work. Six studies involved a revised processing and analysis of reflection seismic, refraction seismic and selected oriented borehole radar data, all of which had been presented earlier in connection with the site investigation programme. A prime aim of all these studies was to provide a better understanding of the geological significance of indirect geophysical data to the geological modelling team. Such essential interpretative work was lacking in the material acquired in connection with the site investigation programme. The results of these background complementary studies are published together in this report. The titles and authors of the seven background complementary studies are presented below. Summaries of the results of each study, with a focus on the implications for the geological modelling of deformation zones, are presented in the master geological report, SKB-R--07-45. The sections in the master report, where reference is made to each background complementary study and where the summaries are placed, are also provided. The individual reports are listed in the order that they are referred to in the master geological report and as they appear in this report. 1. Scan line fracture mapping and magnetic susceptibility measurements across two low magnetic lineaments with NNE and NE trend, Forsmark. Jesper Petersson, Ulf B. Andersson and Johan Berglund. 2. 
Integrated interpretation of surface and

  6. Geology - Background complementary studies. Forsmark modelling stage 2.2

    International Nuclear Information System (INIS)

    Stephens, Michael B.; Skagius, Kristina

    2007-09-01

    During Forsmark model stage 2.2, seven complementary geophysical and geological studies were initiated by the geological modelling team, in direct connection with and as a background support to the deterministic modelling of deformation zones. One of these studies involved a field control on the character of two low magnetic lineaments with NNE and NE trends inside the target volume. The interpretation of these lineaments formed one of the late deliveries to SKB that took place after the data freeze for model stage 2.2 and during the initial stage of the modelling work. Six studies involved a revised processing and analysis of reflection seismic, refraction seismic and selected oriented borehole radar data, all of which had been presented earlier in connection with the site investigation programme. A prime aim of all these studies was to provide a better understanding of the geological significance of indirect geophysical data to the geological modelling team. Such essential interpretative work was lacking in the material acquired in connection with the site investigation programme. The results of these background complementary studies are published together in this report. The titles and authors of the seven background complementary studies are presented below. Summaries of the results of each study, with a focus on the implications for the geological modelling of deformation zones, are presented in the master geological report, SKB-R--07-45. The sections in the master report, where reference is made to each background complementary study and where the summaries are placed, are also provided. The individual reports are listed in the order that they are referred to in the master geological report and as they appear in this report. 1. Scan line fracture mapping and magnetic susceptibility measurements across two low magnetic lineaments with NNE and NE trend, Forsmark. Jesper Petersson, Ulf B. Andersson and Johan Berglund. 2. 
Integrated interpretation of surface and

  7. Modelling and Analysis of Smart Grid: A Stochastic Model Checking Case Study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Zhu, Huibiao; Nielson, Hanne Riis

    2012-01-01

    Cyber-physical systems integrate information and communication technology functions to the physical elements of a system for monitoring and controlling purposes. The conversion of traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. In this context, an important issue is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese Smart Grid implementation as a case study and address the verification problem for performance and energy consumption. We employ stochastic model checking approach and present our modelling and analysis study using PRISM model checker.

  8. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ stochastic model checking approach and present our modelling and analysis study using PRISM model checker.

  9. Looking beyond general metrics for model comparison - lessons from an international model intercomparison study

    Science.gov (United States)

    de Boer-Euser, Tanja; Bouaziz, Laurène; De Niel, Jan; Brauer, Claudia; Dewals, Benjamin; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2017-01-01

    International collaboration between research institutes and universities is a promising way to reach consensus on hydrological model development. Although model comparison studies are very valuable for international cooperation, they often do not lead to very clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and by the comparison methods used, which focus too much on good overall performance instead of on a variety of specific events. In this study, we use an approach that focuses on the evaluation of specific events and characteristics. Eight international research groups calibrated their hourly models on the Ourthe catchment in Belgium and carried out a validation in time for the Ourthe catchment and a validation in space for nested and neighbouring catchments. The same protocol was followed for each model and an ensemble of best-performing parameter sets was selected. Although the models showed similar performances based on general metrics (i.e. the Nash-Sutcliffe efficiency), clear differences could be observed for specific events. We analysed the hydrographs of these specific events and conducted three types of statistical analyses on the entire time series: cumulative discharges, the empirical extreme value distribution of the peak flows and flow duration curves for low flows. The results illustrate the relevance of including a very quick flow reservoir preceding the root zone storage to model peaks during low flows, and of including a slow reservoir in parallel with the fast reservoir to model the recession for the studied catchments. This intercomparison enhanced the understanding of the hydrological functioning of the catchment, in particular for low flows, and enabled us to identify present knowledge gaps for other parts of the hydrograph. Above all, it helped to evaluate each model against a set of alternative models.
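The general metric named here, the Nash-Sutcliffe efficiency, is one minus the ratio of the model error sum of squares to the variance of the observations about their mean; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than predicting the mean of the observations, and negative
    values mean it is worse."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the squared errors are dominated by the largest flows, two models can share nearly identical NSE values while behaving very differently during low-flow periods, which is the limitation that motivates the event-based evaluation in this study.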

  10. Open Innovation and Business Model: A Brazilian Company Case Study

    Directory of Open Access Journals (Sweden)

    Elzo Alves Aranha

    2015-12-01

    Full Text Available Open innovation is increasingly being introduced in international and national organizations for the creation of value. Open innovation is a practical tool, requiring new strategies and decisions from managers for the exploitation of innovative activities. The basic question that this study seeks to answer is linked to the practice of open innovation in connection with an open business model geared towards the creation of value in a Brazilian company. This paper aims to present a case study that illustrates how open innovation offers resources to change the open business model in order to create value for the Brazilian company. The case study method was applied to a company in the pharma-chemical products sector. The results indicate that internal sources of knowledge, external sources of knowledge and strengthened working partnerships were the strategies adopted by the company to provide resources for changing the open business model in order to create value.

  11. Teaching Mathematical Modelling for Earth Sciences via Case Studies

    Science.gov (United States)

    Yang, Xin-She

    2010-05-01

    Mathematical modelling is becoming crucially important for the earth sciences because the modelling of complex systems such as geological, geophysical and environmental processes requires mathematical analysis, numerical methods and computer programming. However, a substantial fraction of earth science undergraduates and graduates may not have sufficient skills in mathematical modelling, due either to limited mathematical training or to the lack of appropriate mathematical textbooks for self-study. In this paper, we describe a detailed case-study-based approach for teaching mathematical modelling. We illustrate how essential mathematical skills can be developed for students with limited training in secondary mathematics so that they are confident in dealing with real-world mathematical modelling at university level. We have chosen various topics as case studies, including Airy isostasy, the greenhouse effect, sedimentation and Stokes' flow, free-air and Bouguer gravity, Brownian motion, rain-drop dynamics, impact cratering, heat conduction and the cooling of the lithosphere; we use these step-by-step case studies to teach exponentials, logarithms, spherical geometry, basic calculus, complex numbers, Fourier transforms, ordinary differential equations, vectors and matrix algebra, partial differential equations, geostatistics and basic numerical methods. Implications for teaching university mathematics to earth scientists in tomorrow's classroom are also discussed. References: 1) D. L. Turcotte and G. Schubert, Geodynamics, 2nd Edition, Cambridge University Press, (2002). 2) X. S. Yang, Introductory Mathematics for Earth Scientists, Dunedin Academic Press, (2009).
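One of the listed case studies, the cooling of the lithosphere, reduces to the classic half-space cooling solution T(z, t) = Ts + (Tm - Ts) * erf(z / (2 * sqrt(kappa * t))), as derived in Turcotte and Schubert. A short sketch follows; the default parameter values (surface at 0 C, mantle at 1300 C, diffusivity 1e-6 m^2/s) are typical illustrative numbers, not values from the paper.

```python
from math import erf, sqrt

def halfspace_T(z, t, Ts=0.0, Tm=1300.0, kappa=1e-6):
    """Half-space cooling model: temperature at depth z (m) below seafloor
    of age t (s), with surface temperature Ts, mantle temperature Tm and
    thermal diffusivity kappa.  T rises from Ts at the surface toward Tm
    at depth, over a boundary layer of thickness ~ 2*sqrt(kappa*t)."""
    return Ts + (Tm - Ts) * erf(z / (2.0 * sqrt(kappa * t)))
```

This single formula exercises several of the skills the paper targets at once: the error function as an integral of an exponential, non-dimensionalization via the similarity variable z / (2 * sqrt(kappa * t)), and the diffusion equation it solves.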

  12. Animal Models for the Study of Female Sexual Dysfunction

    Science.gov (United States)

    Marson, Lesley; Giamberardino, Maria Adele; Costantini, Raffaele; Czakanski, Peter; Wesselmann, Ursula

    2017-01-01

    Introduction Significant progress has been made in elucidating the physiological and pharmacological mechanisms of female sexual function through preclinical animal research. The continued development of animal models is vital for the understanding and treatment of the many diverse disorders that occur in women. Aim To provide an updated review of the experimental models evaluating female sexual function that may be useful for clinical translation. Methods Review of English-language, peer-reviewed literature, primarily from 2000 to 2012, that described studies on female sexual behavior related to motivation, arousal, physiological monitoring of genital function and urogenital pain. Main Outcome Measures Analysis of supporting evidence for the suitability of the animal model to provide measurable indices related to desire, arousal, reward, orgasm, and pelvic pain. Results The development of female animal models has provided important insights into the peripheral and central processes regulating sexual function. Behavioral models of sexual desire, motivation, and reward are well developed. Central arousal and orgasmic responses are less well understood than the physiological changes associated with genital arousal. Models of nociception are useful for replicating symptoms and identifying the neurobiological pathways involved. While in some cases translation to women correlates with the findings in animals, the requirement of circulating hormones for sexual receptivity in rodents and the multifactorial nature of women’s sexual function require better designed studies and careful analysis. The current models have studied sexual dysfunction or pelvic pain in isolation; combining these aspects would help to elucidate interactions of the pathophysiology of pain and sexual dysfunction. Conclusions Basic research in animals has been vital for understanding the anatomy, neurobiology, and physiological mechanisms underlying sexual function and urogenital pain.

  13. Study and discretization of kinetic models and fluid models at low Mach number

    International Nuclear Information System (INIS)

    Dellacherie, Stephane

    2011-01-01

    This thesis summarizes our work between 1995 and 2010. It concerns the analysis and the discretization of Fokker-Planck or semi-classical Boltzmann kinetic models and of Euler or Navier-Stokes fluid models at low Mach number. The studied Fokker-Planck equation models the collisions between ions and electrons in a hot plasma, and is here applied to inertial confinement fusion. The studied semi-classical Boltzmann equations are of two types. The first models the thermonuclear reaction between a deuterium ion and a tritium ion producing an α particle and a neutron, and is in our case also used to describe inertial confinement fusion. The second (the Wang-Chang and Uhlenbeck equations) models the transitions between quantified electronic energy levels of uranium and iron atoms in the AVLIS isotopic separation process. The basic properties of these two Boltzmann equations are studied and, for the Wang-Chang and Uhlenbeck equations, a kinetic-fluid coupling algorithm is proposed. This kinetic-fluid coupling algorithm led us to study the relaxation concept for mixtures of gases and of immiscible fluids, and to underline connections with classical kinetic theory. Then, a diphasic low Mach number model without acoustic waves is proposed to model the deformation of the interface between two immiscible fluids induced by high heat transfers at low Mach number. In order to increase the accuracy of the results without increasing the computational cost, an AMR algorithm is studied on a simplified interface deformation model. These low Mach number studies also led us to analyse, on Cartesian meshes, the inaccuracy of Godunov schemes at low Mach number. Finally, the LBM algorithm applied to the heat equation is justified.

  14. Spatio-Temporal Modelling of Dust Transport over Surface Mining Areas and Neighbouring Residential Zones

    Directory of Open Access Journals (Sweden)

    Eva Gulikova

    2008-06-01

Full Text Available Projects focusing on spatio-temporal modelling of the living environment need to manage a wide range of terrain measurements, existing spatial data, time series, results of spatial analysis and inputs/outputs from numerical simulations. Thus, GISs are often used to manage data from remote sensors, to provide advanced spatial analysis and to integrate numerical models. In order to demonstrate the integration of spatial data, time series and methods in the framework of the GIS, we present a case study focused on the modelling of dust transport over a surface coal mining area, exploring spatial data from 3D laser scanners, GPS measurements, aerial images, time series of meteorological observations, inputs/outputs from numerical models and existing geographic resources. To achieve this, digital terrain models, layers including GPS thematic mapping, and scenes with simulation of wind flows are created to visualize and interpret coal dust transport over the mine area and a neighbouring residential zone. A temporary coal storage and sorting site, located near the residential zone, is one of the dominant sources of emissions. Using numerical simulations, the possible effects of wind flows are observed over the surface, modified by natural objects and man-made obstacles. The coal dust drifts with the wind in the direction of the residential zone and is partially deposited in this area. The simultaneous display of the digital map layers together with the location of the dominant emission source, wind flows and protected areas enables a risk assessment of the dust deposition in the area of interest to be performed. In order to obtain a more accurate simulation of wind flows over the temporary storage and sorting site, 3D laser scanning and GPS thematic mapping are used to create a more detailed digital terrain model.
Thus, visualization of wind flows over the area of interest combined with 3D map layers enables the exploration of the processes of coal dust

  15. Long-term changes in lower tropospheric baseline ozone concentrations: Comparing chemistry-climate models and observations at northern midlatitudes

    Science.gov (United States)

    Parrish, D. D.; Lamarque, J.-F.; Naik, V.; Horowitz, L.; Shindell, D. T.; Staehelin, J.; Derwent, R.; Cooper, O. R.; Tanimoto, H.; Volz-Thomas, A.; Gilge, S.; Scheel, H.-E.; Steinbacher, M.; Fröhlich, M.

    2014-05-01

Two recent papers have quantified long-term ozone (O3) changes observed at northern midlatitude sites that are believed to represent baseline (here understood as representative of continental to hemispheric scales) conditions. Three chemistry-climate models (NCAR CAM-chem, GFDL-CM3, and GISS-E2-R) have calculated retrospective tropospheric O3 concentrations as part of the Atmospheric Chemistry and Climate Model Intercomparison Project and Coupled Model Intercomparison Project Phase 5 model intercomparisons. We present an approach for quantitative comparisons of model results with measurements for seasonally averaged O3 concentrations. There is considerable qualitative agreement between the measurements and the models, but there are also substantial and consistent quantitative disagreements. Most notably, models (1) overestimate absolute O3 mixing ratios, on average by 5 to 17 ppbv in the year 2000, (2) capture only 50% of the O3 changes observed over the past five to six decades, and little of the observed seasonal differences, and (3) capture only 25 to 45% of the observed rate of long-term change. These disagreements are significant enough to indicate that only limited confidence can be placed in estimates of present-day radiative forcing of tropospheric O3 derived from modeled historic concentration changes, and in predicted future O3 concentrations. Evidently our understanding of tropospheric O3, or the incorporation of chemistry and transport processes into current chemical climate models, is incomplete. Modeled O3 trends approximately parallel estimated trends in anthropogenic emissions of NOx, an important O3 precursor, while measured O3 changes increase more rapidly than these emission estimates.
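
The paper's headline metrics (mean bias in the year 2000 and the fraction of the observed trend captured) can be illustrated with a toy calculation; the numbers below are invented, not ACCMIP/CMIP5 output:

```python
import numpy as np

# Hypothetical seasonally averaged O3 series (ppbv); slopes and offsets are
# invented to mimic the paper's findings (models high-biased, trend too flat).
years = np.arange(1950, 2001)
obs = 20.0 + 0.30 * (years - 1950)   # "observed" baseline O3
mod = 33.0 + 0.12 * (years - 1950)   # "modeled" O3

obs_slope = np.polyfit(years, obs, 1)[0]
mod_slope = np.polyfit(years, mod, 1)[0]

bias_2000 = mod[-1] - obs[-1]            # model-minus-observation bias in 2000 (ppbv)
trend_captured = mod_slope / obs_slope   # fraction of the observed trend captured
print(bias_2000, trend_captured)
```

With these invented numbers the model sits 4 ppbv high in 2000 and captures 40% of the observed trend, the kind of mismatch the study reports.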

  16. A comparative study of the constitutive models for silicon carbide

    Science.gov (United States)

    Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra

    2001-06-01

Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, pressure confinement effect on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The model developments are able to capture most of the material features observed experimentally, but more work is needed to better match the experimental data quantitatively.

  17. Modeling of environmentally significant interfaces: Two case studies

    International Nuclear Information System (INIS)

    Williford, R.E.

    2006-01-01

    When some parameters cannot be easily measured experimentally, mathematical models can often be used to deconvolute or interpret data collected on complex systems, such as those characteristic of many environmental problems. These models can help quantify the contributions of various physical or chemical phenomena that contribute to the overall behavior, thereby enabling the scientist to control and manipulate these phenomena, and thus to optimize the performance of the material or device. In the first case study presented here, a model is used to test the hypothesis that oxygen interactions with hydrogen on the catalyst particles of solid oxide fuel cell anodes can sometimes occur a finite distance away from the triple phase boundary (TPB), so that such reactions are not restricted to the TPB as normally assumed. The model may help explain a discrepancy between the observed structure of SOFCs and their performance. The second case study develops a simple physical model that allows engineers to design and control the sizes and shapes of mesopores in silica thin films. Such pore design can be useful for enhancing the selectivity and reactivity of environmental sensors and catalysts. This paper demonstrates the mutually beneficial interactions between experiment and modeling in the solution of a wide range of problems

  18. The ARM-GCSS Intercomparison Study of Single-Column Models and Cloud System Models

    International Nuclear Information System (INIS)

    Cederwall, R.T.; Rodriques, D.J.; Krueger, S.K.; Randall, D.A.

    1999-01-01

The Single-Column Model (SCM) Working Group (WG) and the Cloud Working Group (CWG) in the Atmospheric Radiation Measurement (ARM) Program have begun a collaboration with the GEWEX Cloud System Study (GCSS) WGs. The forcing data sets derived from the special ARM radiosonde measurements made during the SCM Intensive Observation Periods (IOPs), the wealth of cloud and related data sets collected by the ARM Program, and the ARM infrastructure support of the SCM WG are of great value to GCSS. In return, GCSS brings the efforts of an international group of cloud system modelers to bear on ARM data sets and ARM-related scientific questions. The first major activity of the ARM-GCSS collaboration is a model intercomparison study involving SCMs and cloud system models (CSMs), also known as cloud-resolving or cloud-ensemble models. The SCM methodologies developed in the ARM Program have matured to the point where an intercomparison will help identify the strengths and weaknesses of various approaches. CSM simulations will bring much additional information about clouds to evaluate cloud parameterizations used in the SCMs. CSMs and SCMs have been compared successfully in previous GCSS intercomparison studies for tropical conditions. The ARM Southern Great Plains (SGP) site offers an opportunity for GCSS to test their models in continental, mid-latitude conditions. The Summer 1997 SCM IOP has been chosen since it provides a wide range of summertime weather events that will be a challenging test of these models

  19. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

Full Text Available The paper deals with changes in performance level introduced by a change of business model. The selected case is a small family business undergoing substantial changes in response to structural changes in its markets. The authors use the concept of the business model to describe the value creation processes within the selected family business, and by contrasting the differences between value creation processes before and after the change they demonstrate the role of the business model as a performance differentiator. This is illustrated with business model canvases constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for an explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by a financial analysis of the business conducted over the period 2006–2012, which demonstrates changes in performance: ROA, ROE and ROS reached their lowest levels before the change of business model was introduced and grew after its introduction, with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.
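
The profitability ratios tracked in the case (ROA, ROE, ROS) are simple quotients; a minimal sketch with invented figures (not the studied business's accounts):

```python
# ROA = net income / total assets, ROE = net income / equity,
# ROS = net income / sales. All figures below are hypothetical.
def profitability_ratios(net_income, total_assets, equity, sales):
    return (net_income / total_assets,
            net_income / equity,
            net_income / sales)

roa, roe, ros = profitability_ratios(net_income=50_000,
                                     total_assets=1_000_000,
                                     equity=400_000,
                                     sales=750_000)
print(f"ROA={roa:.1%}  ROE={roe:.1%}  ROS={ros:.1%}")
```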

  20. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

This paper expounds the concept of metadata and research on it within China and abroad, and then explains why a study on the metadata model for high-level nuclear waste deep geological disposal projects was started. With reference to GML, the authors first set up DML under the framework of digital underground space engineering. Based on DML, a standardized metadata scheme for use in high-level nuclear waste deep geological disposal projects is presented. Then, a metadata model utilizing the internet is put forward. With standardized data and CSW services, this model may solve the problems of sharing and exchanging data in different formats. A metadata editor is built in order to search and maintain metadata based on this model. (authors)

  1. A model for fine mapping in family based association studies.

    Science.gov (United States)

    Boehringer, Stefan; Pfeiffer, Ruth M

    2009-01-01

    Genome wide association studies for complex diseases are typically followed by more focused characterization of the identified genetic region. We propose a latent class model to evaluate a candidate region with several measured markers using observations on families. The main goal is to estimate linkage disequilibrium (LD) between the observed markers and the putative true but unobserved disease locus in the region. Based on this model, we estimate the joint distribution of alleles at the observed markers and the unobserved true disease locus, and a penetrance parameter measuring the impact of the disease allele on disease risk. A family specific random effect allows for varying baseline disease prevalences for different families. We present a likelihood framework for our model and assess its properties in simulations. We apply the model to an Alzheimer data set and confirm previous findings in the ApoE region.
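
The model's target quantity is LD between observed markers and the unobserved disease locus; as a simpler, standard illustration of the quantities involved, pairwise LD statistics (D, D′, r²) can be computed from haplotype frequencies. The frequencies below are invented, not the paper's ApoE data:

```python
# Pairwise linkage disequilibrium from haplotype frequencies.
# Alleles A/a at marker 1 and B/b at marker 2; the four frequencies sum to 1.
def ld_stats(p_ab, p_aB, p_Ab, p_AB):
    pA = p_AB + p_Ab                  # frequency of allele A
    pB = p_AB + p_aB                  # frequency of allele B
    D = p_AB - pA * pB                # LD coefficient
    if D >= 0:
        d_max = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        d_max = min(pA * pB, (1 - pA) * (1 - pB))
    d_prime = D / d_max               # normalized LD, in [-1, 1]
    r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))
    return D, d_prime, r2

D, d_prime, r2 = ld_stats(p_ab=0.4, p_aB=0.1, p_Ab=0.1, p_AB=0.4)
print(D, d_prime, r2)
```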

  2. NATO Advanced Study Institute on Advanced Physical Oceanographic Numerical Modelling

    CERN Document Server

    1986-01-01

    This book is a direct result of the NATO Advanced Study Institute held in Banyuls-sur-mer, France, June 1985. The Institute had the same title as this book. It was held at Laboratoire Arago. Eighty lecturers and students from almost all NATO countries attended. The purpose was to review the state of the art of physical oceanographic numerical modelling including the parameterization of physical processes. This book represents a cross-section of the lectures presented at the ASI. It covers elementary mathematical aspects through large scale practical aspects of ocean circulation calculations. It does not encompass every facet of the science of oceanographic modelling. We have, however, captured most of the essence of mesoscale and large-scale ocean modelling for blue water and shallow seas. There have been considerable advances in modelling coastal circulation which are not included. The methods section does not include important material on phase and group velocity errors, selection of grid structures, advanc...

  3. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  4. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
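
The core idea, estimating parameters by maximum likelihood directly from noisy projection counts and judging the estimator against the Cramer-Rao lower bound, can be sketched in one dimension. This is a toy stand-in, not the paper's joint estimation of physiological parameters and myocardial boundaries; all numbers are invented:

```python
import numpy as np

# For i.i.d. Poisson(lam) counts the ML estimate of lam is the sample mean,
# and the Cramer-Rao lower bound on its variance is lam / n_bins.
rng = np.random.default_rng(0)
lam_true = 40.0          # hypothetical true emission rate (counts per bin)
n_bins = 500             # "projection" bins per simulated study
n_studies = 200          # repeated simulated studies

counts = rng.poisson(lam_true, size=(n_studies, n_bins))
estimates = counts.mean(axis=1)          # ML estimate of lam for each study
crlb = lam_true / n_bins                 # Cramer-Rao lower bound on Var(lam_hat)
print(estimates.var(), crlb)             # empirical variance sits near the bound
```

For the Poisson mean the ML estimator actually attains the bound, which is why the empirical variance lands close to `crlb`.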

  5. A study of composite models at LEP with ALEPH

    International Nuclear Information System (INIS)

    Badaud, F.

    1992-04-01

Tests of composite models are performed in e+e− collisions in the vicinity of the Z0 pole using the ALEPH detector. Two kinds of substructure effects are searched for: deviations of the differential cross sections for the reactions e+e− → l+l− and e+e− → γγ from Standard Model predictions, and a direct search for excited neutrinos. A new interaction, parametrized by a four-fermion contact term, is studied in lepton pair production reactions, assuming different chiralities of the currents. Lower limits on the compositeness scale Λ are obtained by fitting model predictions to the data; they range from 1 to a few TeV depending on the model and lepton flavour. Searches for the lightest excited particle, which could be the excited neutrino, are presented

  6. How do humans inspect BPMN models: an exploratory study.

    Science.gov (United States)

    Haisjackl, Cornelia; Soffer, Pnina; Lim, Shao Yi; Weber, Barbara

    2018-01-01

Even though considerable progress has been achieved on the technical perspective of modeling and supporting business processes, the human perspective is still often left aside. In particular, we do not have an in-depth understanding of how process models are inspected by humans, what strategies are taken, what challenges arise, and what cognitive processes are involved. This paper contributes toward such an understanding and reports an exploratory study investigating how humans identify and classify quality issues in BPMN process models. Providing preliminary answers to initial research questions, we also indicate other research questions that can be investigated using this approach. Our qualitative analysis shows that humans adopt different strategies to identify quality issues. In addition, we observed several challenges that appear when humans inspect process models. Finally, we present the different manners in which the classification of quality issues was addressed.

  7. Experimental study and modeling of a novel magnetorheological elastomer isolator

    International Nuclear Information System (INIS)

    Yang, Jian; Li, Weihua; Sun, Shuaishuai; Du, Haiping; Li, Yancheng; Li, Jianchun; Deng, H X

    2013-01-01

This paper reports an experimental setup aimed at evaluating the performance of a newly designed magnetorheological elastomer (MRE) seismic isolator. As a further effort to explore the field-dependent stiffness/damping properties of the MRE isolator, a series of experimental tests was conducted. Based upon analysis of the experimental responses and the characteristics of the MRE isolator, a new model that is capable of reproducing the isolator's unique dynamic behaviors is proposed. The validation results verify the model's effectiveness in portraying the MRE isolator. A study on the field-dependent parameters is then provided to make the model valid under fluctuating magnetic fields. To fully explore the mechanism of the proposed model, an investigation of the model's dependence on every parameter is carried out. (technical note)

  8. Comparison Study on Low Energy Physics Model of GEANT4

    International Nuclear Information System (INIS)

    Park, So Hyun; Jung, Won Gyun; Suh, Tae Suk

    2010-01-01

The Geant4 simulation toolkit provides improved or renewed physics models with each version. The latest version, Geant4.9.3, applies the Livermore data libraries and renewed physics models to the low energy electromagnetic physics, and improves the physics factors through modified code. In this study, stopping power and CSDA (Continuous Slowing Down Approximation) range data for electrons and other particles were acquired in various materials, and these data were compared with NIST (National Institute of Standards and Technology) data. Through the comparison between the Geant4 simulation results and the NIST data, the improvement of the low energy electromagnetic physics model in Geant4.9.3 was evaluated relative to Geant4.9.2
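
The CSDA range compared in such studies is the integral of the reciprocal total stopping power, R(E) = ∫ dE′/S(E′). The sketch below uses a made-up power-law stopping power S(E) = k·√E (real comparisons, as in the study, would use NIST ESTAR tables) chosen so the integral has a closed form to verify against:

```python
import numpy as np

# CSDA range: integrate 1/S(E) from a low-energy cutoff up to E_max.
k = 2.0                              # hypothetical stopping-power constant
e_min, e_max = 0.01, 10.0            # MeV; the cutoff avoids the E = 0 singularity
energies = np.linspace(e_min, e_max, 20001)
integrand = 1.0 / (k * np.sqrt(energies))

# Trapezoidal rule, written out so it runs on any NumPy version.
r_numeric = float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(energies)))
r_exact = 2.0 * (np.sqrt(e_max) - np.sqrt(e_min)) / k   # closed form for this S(E)
print(r_numeric, r_exact)
```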

  9. Studying autism in rodent models: reconciling endophenotypes with comorbidities.

    Directory of Open Access Journals (Sweden)

    Andrew Argyropoulos

    2013-07-01

Full Text Available Autism spectrum disorder (ASD) patients commonly exhibit a variety of comorbid traits including seizures, anxiety, aggressive behavior, gastrointestinal problems, motor deficits, abnormal sensory processing and sleep disturbances for which the cause is unknown. These features impact negatively on daily life and can exaggerate the effects of the core diagnostic traits (social communication deficits and repetitive behaviors). Studying endophenotypes relevant to both core and comorbid features of ASD in rodent models can provide insight into biological mechanisms underlying these disorders. Here we review the characterization of endophenotypes in a selection of environmental, genetic and behavioural rodent models of ASD. In addition to exhibiting core ASD-like behaviours, each of these animal models displays one or more endophenotypes relevant to comorbid features including altered sensory processing, seizure susceptibility, anxiety-like behaviour and disturbed motor functions, suggesting that these traits are indicators of altered biological pathways in ASD. However, the study of behaviours paralleling comorbid traits in animal models of ASD is an emerging field and further research is needed to assess altered gastrointestinal function, aggression and disorders of sleep onset across models. Future studies should include investigation of these endophenotypes in order to advance our understanding of the etiology of this complex disorder.

  10. QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2016-01-01

Full Text Available Quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) do not fit non-profit activities, policies and programmes very well, because these are much more complex than the environments from which the quality models derive (for example, the assembly line). Quality of an academic study programme is very complex and understood differently by various stakeholders, so we present dimensional evaluation in this article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by the evaluation criteria. Both are then joined into an evaluation model revealing which evaluation criteria can and should be evaluated by which stakeholder. Main research questions are posed and a research method for each dimension is listed.
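
Dimensional evaluation, each stakeholder judging only its assigned criteria, with results reported per dimension rather than as one holistic grade, can be sketched in a few lines. All stakeholder names, dimensions and scores below are hypothetical:

```python
# Each stakeholder scores only the evaluation criteria assigned to it; the
# report keeps scores per dimension instead of collapsing to a single grade.
scores = {
    "students":  {"teaching quality": 4.2, "workload": 3.1},
    "employers": {"graduate competences": 3.8},
    "faculty":   {"curriculum coherence": 4.0, "research links": 3.5},
}

def dimension_report(scores):
    """Flatten stakeholder scores into a per-dimension report."""
    return {dim: value
            for dims in scores.values()
            for dim, value in dims.items()}

report = dimension_report(scores)
print(report)
```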

  11. Geochemical modeling of uranium mill tailings: a case study

    International Nuclear Information System (INIS)

    Peterson, S.R.; Felmy, A.R.; Serne, R.J.; Gee, G.W.

    1983-08-01

Liner failure was not found to be a problem when various acidic tailings solutions leached through liner materials for periods of up to 3 years. On the contrary, materials that contained over 30% clay showed a decrease in permeability with time in the laboratory columns. The decreases in permeability noted above are attributed to pore plugging resulting from the precipitation of minerals and solids. This precipitation takes place due to the increase in pH of the tailings solution brought about by the buffering capacity of the soil. Geochemical modeling predicts, and X-ray characterization confirms, that precipitation of solids from solution is occurring in the acidic tailings solution/liner interactions studied. X-ray diffraction identified gypsum and alunite group minerals, such as jarosite, as having precipitated after acidic tailings solutions reacted with clay liners. The geochemical modeling and experimental work described above were used to construct an equilibrium conceptual model consisting of minerals and solid phases. This model was developed to represent a soil column. A computer program was used as a tool to solve the system of mathematical equations imposed by the conceptual chemical model. The combined conceptual model and computer program were used to predict aqueous phase compositions of effluent solutions from permeability cells packed with geologic materials and percolated with uranium mill tailings solutions. An initial conclusion drawn from these studies is that the laboratory experiments and geochemical modeling predictions were capable of simulating field observations. The same mineralogical changes and contaminant reductions observed in the laboratory studies were found at a drained evaporation pond (Lucky Mc in Wyoming) with a 10-year history of acid attack. 24 references, 5 figures, 5 tables
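
The precipitation check at the heart of such geochemical modeling is a saturation index, SI = log10(IAP) − log10(Ksp): positive SI means the solution is supersaturated and the mineral may precipitate. The sketch below does this for gypsum; the activities and the log Ksp value are illustrative assumptions, not the study's data:

```python
import math

# Saturation index for gypsum (CaSO4·2H2O): SI = log10(IAP) - log10(Ksp),
# with IAP = {Ca2+}{SO4 2-}. The activities and log_ksp below are assumed
# illustrative values, not measurements from the tailings study.
def saturation_index(activity_ca, activity_so4, log_ksp=-4.58):
    iap = activity_ca * activity_so4          # ion activity product
    return math.log10(iap) - log_ksp          # SI > 0 -> supersaturated

si = saturation_index(activity_ca=1e-2, activity_so4=1e-2)
print(f"SI(gypsum) = {si:.2f}")               # positive: precipitation expected
```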

  12. Southeast Atmosphere Studies: learning from model-observation syntheses

    Directory of Open Access Journals (Sweden)

    J. Mao

    2018-02-01

Full Text Available Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and

  13. Southeast Atmosphere Studies: learning from model-observation syntheses

    Science.gov (United States)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-02-01

Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  14. The contribution of animal models to the study of obesity.

    Science.gov (United States)

    Speakman, John; Hambly, Catherine; Mitchell, Sharon; Król, Elzbieta

    2008-10-01

    Obesity results from prolonged imbalance of energy intake and energy expenditure. Animal models have provided a fundamental contribution to the historical development of understanding the basic parameters that regulate the components of our energy balance. Five different types of animal model have been employed in the study of the physiological and genetic basis of obesity. The first models reflect single gene mutations that have arisen spontaneously in rodent colonies and have subsequently been characterized. The second approach is to speed up the random mutation rate artificially by treating rodents with mutagens or exposing them to radiation. The third type of models are mice and rats where a specific gene has been disrupted or over-expressed as a deliberate act. Such genetically-engineered disruptions may be generated through the entire body for the entire life (global transgenic manipulations) or restricted in both time and to certain tissue or cell types. In all these genetically-engineered scenarios, there are two types of situation that lead to insights: where a specific gene hypothesized to play a role in the regulation of energy balance is targeted, and where a gene is disrupted for a different purpose, but the consequence is an unexpected obese or lean phenotype. A fourth group of animal models concern experiments where selective breeding has been utilized to derive strains of rodents that differ in their degree of fatness. Finally, studies have been made of other species including non-human primates and dogs. In addition to studies of the physiological and genetic basis of obesity, studies of animal models have also informed us about the environmental aspects of the condition. 
Studies in this context include exploring the responses of animals to high fat or high fat/high sugar (Cafeteria) diets, investigations of the effects of dietary restriction on body mass and fat loss, and studies of the impact of candidate pharmaceuticals on components of energy

  15. Climate and transboundary water management issues

    International Nuclear Information System (INIS)

    Bjonback, D.

    1991-01-01

    The potential effects of climate change on transboundary river systems, major water uses, interjurisdictional arrangements, and water issues affecting water management in the Great Plains of Canada are discussed. Three atmospheric general circulation models (GCM) were applied under a doubled carbon dioxide concentration scenario for the Saskatchewan River system: the Goddard Institute for Space Studies (GISS) model, the Geophysical Fluid Dynamics Laboratory (GFDL) model, and the Oregon State University (OSU) model. For all models, soil moisture on the plains was reduced. The GISS model predicted slightly higher runoff for plains-originating streams, and a substantial increase in runoff (32%) in the Rockies. The GFDL model predicted lower runoff in the plains and Rockies, with some locations near the Alberta-Saskatchewan border indicating zero runoff. The OSU model results generally bracketed the GISS and GFDL results, with total runoff approximating the 1951-1980 mean. The GISS model indicated an increase in net basin supply of 28%, while the GFDL model, due to lower runoff and high soil moisture deficits, showed a decrease of 38%. For policy making, monitoring, and research, the GFDL model results can provide important guidelines. Greater attention to demand management and conservation will have short-term benefits in stretching the limited water resource base to support a larger economy, while providing flexibility to cope with future climate as it evolves. 1 ref

  16. STUDY OF INSTRUCTIONAL MODELS AND SYNTAX AS AN EFFORT FOR DEVELOPING ‘OIDDE’ INSTRUCTIONAL MODEL

    Directory of Open Access Journals (Sweden)

    Atok Miftachul Hudha

    2016-07-01

    Full Text Available The 21st century requires human resources with seven skills or competences (Maftuh, 2016): (1) critical thinking and problem-solving skills, (2) creativity and innovation, (3) ethical behavior, (4) flexibility and quick adaptation, (5) competence in ICT and literacy, (6) interpersonal and collaborative capabilities, and (7) social skills and cross-cultural interaction. Ethical behavior, one of these competences, should be established and developed through learning that includes the study of ethics, because ethical behavior is not innate: it must develop through problem solving, and especially through the resolution of ethical dilemmas arising from ethical problems. The fundamental obstacle to achieving this competence through learning is that teachers have not yet found a suitable learning model for teaching ethical values as expected in character education (Hudha et al., 2014a, 2014b, 2014c). A feasible learning model (valid, practical, and effective) is therefore needed so that ethics learning can form human resources who behave ethically. To that end, this study analyzes and modifies the learning steps (syntax) of existing learning models in order to develop a new syntax. The resulting model is based on analysis and modification of the syntax of the social learning model and the behavior systems family of models (Joyce and Weil, 1980; Joyce et al., 2009), as well as the syntax of the Tri Prakoro learning model (Akbar, 2013). The modified syntax yields the 'OIDDE' learning model, an acronym of orientation, identification, discussion, decision, and engagement in behavior.

  17. Modeling study on geological environment at Horonobe URL site

    International Nuclear Information System (INIS)

    Shimo, Michito; Yamamoto, Hajime; Kumamoto, Sou; Fujiwara, Yasushi; Ono, Makoto

    2005-02-01

    The Horonobe underground research project has been operated by the Japan Nuclear Cycle Development Institute to study the geological environment of sedimentary rocks deep underground. The objectives of this study are to develop a geological environment model, which incorporates the current findings and the data obtained through the geological, geophysical, and borehole investigations at the Horonobe site, and to predict the hydrological and geochemical impacts of the URL shaft excavation on the surrounding area. A three-dimensional geological structure model was constructed, integrating a large-scale model (25 km x 15 km) and a high-resolution site-scale model (4 km x 4 km) that have been developed by JNC. The constructed model includes surface topography, geologic formations (such as the Yuchi, Koetoi, Wakkanai, and Masuporo Formations), and two major faults (the Ohomagari fault and the N1 fault). In the hydrogeological modeling, water-conductive fractures identified in the Wakkanai Formation are modeled stochastically using the EHCM (Equivalent Heterogeneous Continuum Model) approach, to represent hydraulic heterogeneity and anisotropy in the fractured rock mass. The numerical code EQUIV FLO (Shimo et al., 1996), a 3D unsaturated-saturated groundwater simulator capable of EHCM, was used to simulate the regional groundwater flow, and the same model and code were used to predict the transient hydrological changes caused by the shaft excavations. Geochemical data from the Horonobe site, such as water chemistry and the mineral compositions of rocks, were collected and summarized into digital datasets. The M3 (Multivariate, Mixing and Mass-balance) method developed by SKB (Laaksoharju et al., 1999) was used to identify waters of different origins and to infer the mixing ratios of these end-members that reproduce each sample's chemistry. 
Thermodynamic codes such as PHREEQC, GWB, and EQ3/6 were used to model the chemical reactions that explain the present minerals and aqueous concentrations observed at the site
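The M3 mixing step described above, inferring the fractions of end-member waters that best reproduce a sample's chemistry, is at its core a small constrained least-squares problem. A minimal sketch (the end-member compositions and tracer choices below are made-up illustrations, not Horonobe data):

```python
import numpy as np

# Hypothetical end-member compositions (columns: end-members, e.g. seawater,
# meteoric water, deep brine; rows: tracers). Values are illustrative only.
endmembers = np.array([
    [19000.0,   5.0,  350.0],   # Cl- (mg/L)
    [10800.0,   3.0,  120.0],   # Na+ (mg/L)
    [    0.0, -14.0,   -8.0],   # d18O (permil)
])

# A sample constructed as an exact 20/50/30 mixture of the three end-members.
true_frac = np.array([0.2, 0.5, 0.3])
sample = endmembers @ true_frac

# Least squares with a sum-to-one constraint appended as a heavily
# weighted extra equation (a common simple trick).
w = 1e6
A = np.vstack([endmembers, w * np.ones((1, 3))])
b = np.concatenate([sample, [w]])
frac, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(frac, 3))  # recovers the mixing ratios
```

A production M3 analysis would additionally enforce non-negative fractions (e.g. with a non-negative least-squares solver) and propagate analytical uncertainty; this sketch shows only the mixing algebra.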

  18. Application of Model Animals in the Study of Drug Toxicology

    Science.gov (United States)

    Song, Yagang; Miao, Mingsan

    2018-01-01

    Drug safety is a key factor in drug research and development, and drug toxicology testing is the main method for evaluating drug safety. The physiological condition of an animal has important implications for the results of such studies. Previous toxicological studies of drugs were carried out in normal animals, which deviates greatly from clinical practice. The purpose of this study is to investigate the necessity of using model animals as a substitute for normal animals in toxicological studies, and it is expected to provide practical guidance for future drug safety evaluation.

  19. An updated summary of MATHEW/ADPIC model evaluation studies

    International Nuclear Information System (INIS)

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released as both surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms, have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time, depending on the complexity of the meteorology and terrain and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs
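The "within a factor of two/five" skill scores quoted above are simple to compute: the fraction of model/measurement pairs whose ratio lies in [1/N, N]. A minimal sketch with illustrative numbers (not ARAC data):

```python
import numpy as np

def within_factor(model, observed, n):
    """Fraction of pairs with model/observed ratio inside [1/n, n]."""
    model = np.asarray(model, dtype=float)
    observed = np.asarray(observed, dtype=float)
    ratio = model / observed
    return np.mean((ratio >= 1.0 / n) & (ratio <= n))

# Illustrative tracer concentrations (arbitrary units).
obs = np.array([1.0, 2.0, 5.0, 10.0])
mod = np.array([1.5, 0.8, 20.0, 60.0])

print(within_factor(mod, obs, 2))  # fraction within a factor of two
print(within_factor(mod, obs, 5))  # fraction within a factor of five
```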

  20. Deformed shell model studies of spectroscopic properties of Zn and ...

    Indian Academy of Sciences (India)

    2014-04-05

    Apr 5, 2014 ... April 2014 physics pp. 757–767. Deformed shell model studies of ... experiments without isotopic enrichment thereby reducing the cost considerably. By taking a large mass of the sample because of its low cost, one can ...

  1. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  2. Capillary microreactors for lactic acid extraction: experimental and modelling study

    NARCIS (Netherlands)

    Susanti, Susanti; Winkelman, Jozef; Schuur, Boelo; Heeres, Hero; Yue, Jun

    2015-01-01

    Lactic acid is an important biobased chemical and, among others, is used for the production of poly-lactic acid. Down-stream processing using state of the art technology is energy intensive and leads to the formation of large amounts of salts. In this presentation, experimental and modeling studies

  3. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  4. Study of primitive universe in the Bianchi IX model

    International Nuclear Information System (INIS)

    Matsas, G.E.A.

    1988-03-01

    The theory of general relativity is used to study the homogeneous cosmological model Bianchi IX, with isometry group SO(3), near the cosmological singularity. The Bogoyavlenskii-Novikov formalism is introduced to explain the unusual behaviour of the Liapunov exponent associated with this chaotic system. (author) [pt

  5. Studying historical occupational careers with multilevel growth models

    NARCIS (Netherlands)

    Schulz, W.; Maas, I.

    2010-01-01

    In this article we propose to study occupational careers with historical data by using multilevel growth models. Historical career data are often characterized by a lack of information on the timing of occupational changes and by different numbers of observations of occupations per individual.

  6. Conflicts Management Model in School: A Mixed Design Study

    Science.gov (United States)

    Dogan, Soner

    2016-01-01

    The object of this study is to evaluate the reasons for conflicts occurring in schools, according to the perceptions and views of teachers, together with the resolution strategies used for those conflicts, and to build a model based on the results obtained. In the research, an explanatory design including quantitative and qualitative methods has been used. The quantitative part…

  7. Modelling studies of horizontal steam generator PGV-1000 with Cathare

    Energy Technology Data Exchange (ETDEWEB)

    Karppinen, I. [VTT Energy, Espoo (Finland)

    1995-12-31

    To perform thermal-hydraulic studies applied to nuclear power plants equipped with VVER, a program of qualification and assessment of the CATHARE computer code is in progress at the Institute of Protection and Nuclear Safety (IPSN). In this paper studies of modelling horizontal steam generator of VVER-1000 with the CATHARE computer code are presented. Steady state results are compared with measured data from the fifth unit of Novovoronezh nuclear power plant. (orig.). 10 refs.

  8. Modelling studies of horizontal steam generator PGV-1000 with Cathare

    Energy Technology Data Exchange (ETDEWEB)

    Karppinen, I [VTT Energy, Espoo (Finland)

    1996-12-31

    To perform thermal-hydraulic studies applied to nuclear power plants equipped with VVER, a program of qualification and assessment of the CATHARE computer code is in progress at the Institute of Protection and Nuclear Safety (IPSN). In this paper studies of modelling horizontal steam generator of VVER-1000 with the CATHARE computer code are presented. Steady state results are compared with measured data from the fifth unit of Novovoronezh nuclear power plant. (orig.). 10 refs.

  9. Multisite Case Study of Florida's Millennium High School Reform Model

    Directory of Open Access Journals (Sweden)

    Carol A. Mullen

    2002-10-01

    Full Text Available This study should have immediate utility for the United States and beyond its borders. School-to-work approaches to comprehensive reform are increasingly expected of schools while legislative funding for this purpose gets pulled back. This multisite case study launches the first analysis of the New Millennium High School (NMHS model in Florida. This improvement program relies upon exemplary leadership for preparing students for postsecondary education

  10. Phenomenological study of Z′ in the minimal B−L model at LHC

    Indian Academy of Sciences (India)

    K M Balasubramaniam

    2017-10-05

    Oct 5, 2017 ... Phenomenological study of Z in the minimal B − L model at LHC ... The phenomenological study of neutral heavy gauge boson (Z. B−L) of the ...... JHEP10(2015)076, arXiv:1506.06767 [hep-ph] ... [15] ATLAS Collaboration: G Aad et al, Phys. Rev. D 90(5) ... [19] C W Chiang, N D Christensen, G J Ding and T.

  11. Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA

    Science.gov (United States)

    Gupta, Garima

    2011-01-01

    NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they are different. The initial survey included a study of the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capable of risk/sensitivity analysis), and for what types of missions and for what phases of a mission lifecycle each model is capable of estimating cost. The models taken into consideration consisted of both those that were developed by NASA and those that were commercially developed: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step in the project was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed in a visual manner with Venn diagrams. All of the materials produced in the process of this study were then posted on the Ground Segment Team (GST) Wiki.

  12. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  13. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital: Earth is only habitable because the atmosphere and oceans distribute energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impact of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex-system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  14. Cooling problems of thermal power plants. Physical model studies

    International Nuclear Information System (INIS)

    Neale, L.C.

    1975-01-01

    The Alden Research Laboratories of Worcester Polytechnic Institute has for many years conducted physical model studies, which are normally classified as river or structural hydraulic studies. Since 1952 one aspect of these studies has involved the heated discharge from steam power plants. The early studies on such problems concentrated on improving the thermal efficiency of the system. This was accomplished by minimizing recirculation and by assuring full use of available cold water supplies. With the growing awareness of the impact of thermal power generation on the environment, attention has been redirected to reducing the effect of heated discharges on the biology of the receiving body of water. More specifically, the efforts of designers and operators of power plants are aimed at meeting or complying with standards established by various governmental agencies. Thus the studies involve developing means of minimizing surface temperatures at an outfall or establishing a local area of higher temperature with limits specified in terms of areas or distances. The physical models used for these studies have varied widely in scope, size, and operating features. These models have covered large areas with both distorted geometric scales and uniform dimensions. Instrumentation has also varied, from simple mercury thermometers to computer control and processing of hundreds of thermocouple indicators

  15. Development of hydrological models and surface process modelization: a case study on high mountain slopes

    International Nuclear Information System (INIS)

    Loaiza, Juan Carlos; Pauwels, Valentijn R

    2011-01-01

    Hydrological models are useful because they allow fluxes in hydrological systems to be predicted, which helps to forecast floods and other violent phenomena associated with water fluxes, especially in highly weathered materials. Combining these models with meteorological forecasts, especially rainfall models, makes it possible to model water behavior in the soil. In most cases, this type of model is highly sensitive to evapotranspiration. In climate studies, surface processes have to be represented adequately, and calibration and validation of these models are necessary to obtain reliable results. This paper is a practical exercise in applying complete hydrological information at a detailed scale in a high-mountain catchment, considering the most representative soil uses and types. Soil moisture, infiltration, runoff, and rainfall data are used to calibrate and validate the TOPLATS hydrological model for simulating soil moisture behavior. The findings show that it is possible to implement a hydrological model using soil moisture data together with a calibration equation based on the Extended Kalman Filter (EKF).
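The EKF-based calibration mentioned above rests on a forecast-update cycle: the model forecasts the state, then each observation corrects it in proportion to the relative uncertainties. For a linear scalar toy model the extended filter reduces to the ordinary Kalman filter; all numbers below are illustrative, not TOPLATS parameters:

```python
# Minimal scalar sketch of the Kalman forecast-update cycle for soil moisture.
def kalman_step(x, P, z, q=1e-4, r=4e-4, decay=0.98):
    # Forecast: simple linear drying model; process noise inflates variance.
    x_f = decay * x
    P_f = decay**2 * P + q
    # Update: blend forecast and observation z by their uncertainties.
    K = P_f / (P_f + r)          # Kalman gain in [0, 1]
    x_a = x_f + K * (z - x_f)
    P_a = (1.0 - K) * P_f
    return x_a, P_a

x, P = 0.30, 1e-2                    # initial soil moisture and its variance
for z in [0.28, 0.27, 0.25, 0.26]:   # synthetic soil-moisture observations
    x, P = kalman_step(x, P, z)

print(round(x, 3), P < 1e-3)  # analysis converges toward the observations
```

Note the gain K shrinks as the state estimate becomes more certain, so later observations nudge the state less: the same behavior that makes EKF updating attractive for assimilating sparse soil-moisture data.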

  16. Basic models modeling resistance training: an update for basic scientists interested in the study of skeletal muscle hypertrophy.

    Science.gov (United States)

    Cholewa, Jason; Guimarães-Ferreira, Lucas; da Silva Teixeira, Tamiris; Naimo, Marshall Alan; Zhi, Xia; de Sá, Rafaele Bis Dal Ponte; Lodetti, Alice; Cardozo, Mayara Quadros; Zanchi, Nelo Eidy

    2014-09-01

    Human muscle hypertrophy brought about by voluntary exercise under laboratory conditions is the most common way to study resistance exercise training, especially because of its reliability, stimulus control, and easy application to resistance training sessions at fitness centers. However, because of the complexity of the blood factors and organs involved, invasive data are difficult to obtain in human exercise training studies, which integrate several organs including adipose tissue, liver, brain, and skeletal muscle. In contrast, studying skeletal muscle remodeling in animal models is easier, as the organs can be readily obtained after euthanasia; however, not all models of resistance training in animals display a robust capacity to hypertrophy the desired muscle. Moreover, some models of resistance training rely on voluntary effort, which complicates the interpretation of results, since voluntary capacity is theoretically impossible to measure in rodents. With this in mind, we review the modalities used to simulate resistance training in animals, in order to present investigators with the benefits and risks of different animal models capable of provoking skeletal muscle hypertrophy. Our second objective is to help investigators analyze and select the experimental resistance training model that best fits the research question and desired endpoints. © 2013 Wiley Periodicals, Inc.

  17. A Study On Traditional And Evolutionary Software Development Models

    Directory of Open Access Journals (Sweden)

    Kamran Rasheed

    2017-07-01

    Full Text Available Computing technologies have become central to organizations and to individual productivity: in addition to computing devices, we need software. A set of instructions, or computer program, is known as software, and software is developed through traditional models or through newer, evolutionary models. Software development is now a key and successful business, and without software all hardware is useless. The collective steps performed in development are known as the software development life cycle (SDLC). Development models may be predictive or adaptive. Predictive models follow a plan known in advance, such as the Waterfall, Spiral, Prototype, and V-shaped models, while adaptive models include Agile and Scrum. The methodologies in both families have their own procedures and steps. Predictive models are static and adaptive models are dynamic: changes cannot easily be made within a predictive process, while adaptive processes are designed to accommodate change. The purpose of this study is to become familiar with all of these models and to discuss their uses and development steps. This discussion should help in deciding which model to use in which circumstances and what development steps each model includes.

  18. Pre implanted mouse embryos as model for uranium toxicology studies

    International Nuclear Information System (INIS)

    Kundt, Miriam S.

    2001-01-01

    Full text: The search for 'in vitro' toxicology models that can predict toxicological effects 'in vivo' is a permanent challenge. An experimental toxicology model must fulfil certain requirements: it should have predictive power, allow appropriate controls to facilitate the interpretation of data among experimental groups, and permit control of the independent variables that can interfere with or modify the results under analysis. Preimplantation embryos possess many advantages in this respect: they are a simple model that begins its development from a single cell, and the 'in vitro' model successfully reproduces the 'in vivo' situation. Owing to the similarity among mammalian embryos during this period, the model is practically valid for other species. The embryo is itself a stem cell, the toxicological effects are observed early in its clonal development, and the physico-chemical parameters are easily controllable. The purpose of this presentation is to explain the properties of the preimplantation embryo model for toxicological studies of uranium and to show our experimental results. 'In vitro' culture of mouse embryos with uranyl nitrate demonstrated that, at concentrations from 13 μg U/ml, uranium causes developmental delay, a decrease in the number of cells per embryo, and hypoploidy in the embryonic blastomeres. (author)

  19. Bioresorbable polymer coated drug eluting stent: a model study.

    Science.gov (United States)

    Rossi, Filippo; Casalini, Tommaso; Raffa, Edoardo; Masi, Maurizio; Perale, Giuseppe

    2012-07-02

    In drug eluting stent technologies, increased demand for better control, higher reliability, and enhanced performance of drug delivery systems has emerged in recent years, offering the opportunity to introduce model-based approaches that overcome the considerable limits of trial-and-error methods. In this context a mathematical model was studied, based on detailed conservation equations and taking into account the main physical-chemical mechanisms involved in polymeric coating degradation, drug release, and restenosis inhibition. It highlights the interdependence between the factors affecting each of these phenomena and, in particular, the influence of stent design parameters on drug antirestenotic efficacy. The model proposed here simulates diffusional release under both in vitro and in vivo conditions; results were verified against various literature data, confirming the reliability of the parameter estimation procedure. The hierarchical structure of the model also allows the set of equations describing restenosis evolution to be modified easily, enhancing model reliability and taking advantage of a deep understanding of the physiological mechanisms governing the different stages of smooth muscle cell growth and proliferation. In addition, thanks to its simplicity and very low system requirements and central processing unit (CPU) time, the model provides immediate views of system behavior.
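The diffusional-release component of such coating models can be illustrated with a one-dimensional Fickian sketch: drug diffuses through a coating of thickness L, with zero flux at the strut wall and a perfect sink at the tissue interface. The geometry, boundary conditions and parameter values below are illustrative assumptions, not those of the paper:

```python
import numpy as np

L, D = 10e-6, 1e-14          # coating thickness (m), drug diffusivity (m^2/s)
n = 50                       # grid cells across the coating
dx = L / n
dt = 0.2 * dx**2 / D         # explicit time step, stable since D*dt/dx^2 < 0.5
c = np.ones(n)               # initial dimensionless drug concentration

for _ in range(20000):       # ~4.4 hours of simulated release
    c_new = c.copy()
    # Interior: explicit finite-difference form of Fick's second law.
    c_new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c_new[0] = c_new[1]      # zero-flux boundary at the strut wall
    c_new[-1] = 0.0          # perfect-sink boundary at the tissue interface
    c = c_new

fraction_released = 1.0 - c.mean()
print(round(fraction_released, 2))  # most of the load is released by now
```

The full model in the paper couples this transport to coating degradation and a restenosis submodel; this sketch isolates only the diffusion step.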

  20. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, the structural safety of a package against the free-drop impact accident must be verified. LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of such packages. LS-INGRID is a pre-processor for LS-DYNA with considerable capability for dealing with complex geometries, and it allows parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although LS-INGRID may seem difficult to use relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases; it therefore has to be used with LS-DYNA. This report presents basic explanations of the structure and commands of LS-INGRID, basic modelling examples, and advanced modelling for use in the impact analysis of various packages. New users can easily build complex models by studying the examples presented in this report, from modelling through to loading and constraint conditions.

  1. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Yu sun, Park

    2007-01-01

    The buoyancy-driven convective flow fields are steady circulatory flows set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications, where exploiting natural convection can reduce costs and effort remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. Using the commercial CFD code FLUENT, various turbulence models were applied to the turbulent flow, and the results from each model are compared in terms of grid resolution and flow characteristics. It has been shown that: -) obtaining the general flow characteristics is possible with a relatively coarse grid; -) results do not differ significantly once the near-wall grid is refined beyond a threshold in y+, where y+ is defined as y+ = ρ*u*y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ being respectively the fluid density and the fluid viscosity; -) the K-ε models show flow characteristics different from the K-ω models and from the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for selecting the appropriate turbulence model to apply within the simulation
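The y+ parameter above is a one-line formula; as a quick helper, with illustrative air-like property values (not from the study):

```python
def y_plus(rho, u_tau, y, mu):
    """Dimensionless wall distance y+ = rho * u_tau * y / mu."""
    return rho * u_tau * y / mu

# First-cell-center distance of 2e-5 m in air at a friction velocity of 0.5 m/s.
yp = y_plus(rho=1.2, u_tau=0.5, y=2e-5, mu=1.8e-5)
print(round(yp, 2))  # → 0.67, i.e. the cell center sits in the viscous sublayer
```

Checking this value against the validity range of each turbulence model's wall treatment is exactly the selection criterion the abstract emphasizes.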

  2. Application of Learning Curves for Didactic Model Evaluation: Case Studies

    Directory of Open Access Journals (Sweden)

    Felix Mödritscher

    2013-01-01

    Full Text Available The success of (online) courses depends, among other factors, on the underlying didactical models, which have always been evaluated with qualitative and quantitative research methods. Several new evaluation techniques have been developed and established in recent years. One of them is 'learning curves', which aim at measuring the error rates of users as they interact with adaptive educational systems, thereby enabling the underlying models to be evaluated and improved. In this paper, we report how we applied this new method in two case studies to show that learning curves are useful for evaluating didactical models and their implementation in educational platforms. Results show that, if the didactical model of an instructional unit is valid, the error rates follow a power-law distribution with each additional attempt. Furthermore, the initial error rate, the slope of the curve and the goodness of fit of the curve are valid indicators of the difficulty level of a course and the quality of its didactical model. In conclusion, applying learning curves to evaluate didactical models on the basis of usage data is considered valuable for supporting teachers and learning content providers in improving their online courses.
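The power-law check described above amounts to a straight-line fit in log-log space: if error rates follow E(n) = a * n**(-b) over attempts n, the fit recovers the initial error rate a and slope b. A minimal sketch on synthetic (not the paper's) error-rate data:

```python
import numpy as np

attempts = np.arange(1, 11)
a_true, b_true = 0.8, 0.5
errors = a_true * attempts ** (-b_true)   # synthetic power-law error rates

# log E = log a - b * log n, so a linear fit in log-log space gives (a, b).
slope, intercept = np.polyfit(np.log(attempts), np.log(errors), 1)
b_fit, a_fit = -slope, np.exp(intercept)

print(round(a_fit, 2), round(b_fit, 2))  # initial error rate and curve slope
```

On real usage data the goodness of fit of this regression (e.g. R²) is the third indicator the abstract mentions, alongside a and b.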

  3. Parametric study of the Incompletely Stirred Reactor modeling

    Energy Technology Data Exchange (ETDEWEB)

    Mobini, K. [Department of Mechanical Engineering, Shahid Rajaee University, Lavizan, Tehran (Iran); Bilger, R.W. [School of Aerospace, Mechanical and Mechatronic Engineering, University of Sydney, Sydney (Australia)

    2009-09-15

    The Incompletely Stirred Reactor (ISR) is a generalization of the widely used Perfectly Stirred Reactor (PSR) model and allows for incomplete mixing within the reactor. Its formulation is based on the Conditional Moment Closure (CMC) method. The model is applicable to nonpremixed combustion with strong recirculation, such as in a gas turbine combustor primary zone. It uses the simplifying assumption that the conditionally averaged reactive-scalar concentrations are independent of position in the reactor, which results in ordinary differential equations in mixture-fraction space. The simplicity of the model permits the use of very complex chemical mechanisms, so the effects of detailed chemistry can be captured while still including the effects of micromixing. A parametric study is performed here on an ISR for combustion of methane at overall stoichiometric conditions to investigate the sensitivity of the model to different parameters, with a focus on emissions of nitric oxide and carbon monoxide. It is shown that the most important parameters in the ISR model are the reactor residence time, the chemical mechanism and the core-averaged Probability Density Function (PDF). Using several different shapes for the core-averaged PDF, it is shown that a bimodal PDF with a low minimum at stoichiometric mixture fraction and a large variance leads to lower nitric oxide formation. The 'rich-plus-lean' mixing, or staged combustion, strategy is thus supported. (author)
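
The role of the core-averaged PDF can be illustrated with a small sketch: weighting an assumed NO-like conditional profile (a hypothetical stand-in, not the paper's chemistry) by beta PDFs of equal mean but different variance shows why a broad PDF lowers the predicted NO level:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import beta

def pdf_weighted_mean(q_cond, eta, a, b):
    """Unconditional mean <Q> = integral of Q(eta) * P(eta) d(eta) for a
    conditionally averaged scalar Q(eta), with P(eta) an assumed beta PDF
    over mixture fraction eta standing in for the core-averaged PDF."""
    return trapezoid(q_cond * beta.pdf(eta, a, b), eta)

eta = np.linspace(1e-4, 1.0 - 1e-4, 2000)
# Hypothetical NO-like conditional profile peaked near stoichiometric
q_no = np.exp(-((eta - 0.055) / 0.03) ** 2)

narrow = pdf_weighted_mean(q_no, eta, 40.0, 680.0)  # same mean, small variance
wide = pdf_weighted_mean(q_no, eta, 2.0, 34.0)      # same mean, large variance
print(narrow > wide)  # True: a broad PDF lowers the PDF-weighted NO level
```

A large-variance PDF places less probability mass near the stoichiometric peak, which is the mechanism behind the rich-plus-lean result quoted above.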

  4. General circulation model study of atmospheric carbon monoxide

    International Nuclear Information System (INIS)

    Pinto, J.P.; Yung, Y.L.; Rind, D.; Russell, G.L.; Lerner, J.A.; Hansen, J.E.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low-latitude plant source of about 1.3 × 10¹⁵ g yr⁻¹, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7 × 10⁵ cm⁻³. Models that calculate globally averaged OH concentrations much lower than our nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.

  5. Organizational home care models across Europe: A cross sectional study.

    Science.gov (United States)

    Van Eenoo, Liza; van der Roest, Henriëtte; Onder, Graziano; Finne-Soveri, Harriet; Garms-Homolova, Vjenka; Jonsson, Palmi V; Draisma, Stasja; van Hout, Hein; Declercq, Anja

    2018-01-01

    Decision makers are searching for models to redesign home care and to organize health care in a more sustainable way. The aim of this study is to identify and characterize home care models within and across European countries by means of structural characteristics and care processes at the policy and the organization level. At the policy level, variables that reflected variation in health care policy were included based on a literature review on the home care policy for older persons in six European countries: Belgium, Finland, Germany, Iceland, Italy, and the Netherlands. At the organizational level, data on the structural characteristics and the care processes were collected from 36 home care organizations by means of a survey. Data were collected between 2013 and 2015 during the IBenC project. An observational, cross sectional, quantitative design was used. The analyses consisted of a principal component analysis followed by a hierarchical cluster analysis. Fifteen variables at the organizational level, spread across three components, explained 75.4% of the total variance. The three components made it possible to distribute home care organizations into six care models that differ on the level of patient-centered care delivery, the availability of specialized care professionals, and the level of monitoring care performance. Policy level variables did not contribute to distinguishing between home care models. Six home care models were identified and characterized. These models can be used to describe best practices.
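
The analysis pipeline described above (principal component analysis followed by hierarchical clustering into six models) can be sketched as follows; the data are synthetic stand-ins for the 36-organization, 15-variable survey:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Synthetic stand-in: 36 organizations x 15 structural/process variables
X = rng.normal(size=(36, 15))
X[:12] += 3.0          # crude artificial groups so clusters are visible
X[12:24] -= 3.0

# Principal component analysis via SVD on the centred data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)     # variance explained per component
scores = Xc @ Vt[:3].T              # keep three components, as in the study

# Hierarchical (Ward) clustering of the component scores
Z = linkage(scores, method="ward")
labels = fcluster(Z, t=6, criterion="maxclust")   # at most six care models
print(sorted(set(labels)))          # cluster labels assigned to organizations
```

With the real survey data, the cluster memberships would define the six care models and `explained` would reproduce the 75.4% figure for three components.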

  6. Projected effect of 2000-2050 changes in climate and emissions on aerosol levels in China and associated transboundary transport

    Science.gov (United States)

    We investigate projected 2000–2050 changes in concentrations of aerosols in China and the associated transboundary aerosol transport by using the chemical transport model GEOS-Chem driven by the Goddard Institute for Space Studies (GISS) general circulation model (GCM) 3 at 4° × ...

  7. Study on competitive interaction models in Cayley tree

    International Nuclear Information System (INIS)

    Moreira, J.G.M.A.

    1987-12-01

    We propose two kinds of models on the Cayley tree to simulate Ising models with axial anisotropy on the cubic lattice. The interaction in the direction of the anisotropy is simulated by the interaction along the branches of the tree. In the first model, the interaction in the planes perpendicular to the anisotropy direction is simulated by interactions between spins in neighbouring branches of the same generation arising from the same site of the previous generation. In the second model, the interaction in the planes is simulated by mean-field interactions among all spins at sites of the same generation arising from the same site of the previous generations. We study these models in the limit of infinite coordination number. First, we analyse the case of antiferromagnetic interactions along the branches between first neighbours only, and find the analogue of a metamagnetic Ising model. Next, we introduce competing interactions between first and second neighbours along the branches to simulate the ANNNI model. We obtain a difference equation relating the magnetization of one generation to the magnetizations of the two previous generations, which permits a detailed study of the modulated-phase region. We note that the wave number of the modulation, at fixed temperature, changes with the competition parameter to form a devil's staircase whose fractal dimension increases with temperature. We discuss the existence of strange attractors, related to a possible chaotic phase. Finally, we show the results obtained when interactions along the branches extend to three neighbours. (author)
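
A two-generation difference equation of the kind described can be illustrated with a hypothetical mean-field map (the paper's actual recursion is not reproduced here); competing first- and second-neighbour couplings produce a modulated orbit whose wave number can be read off an FFT:

```python
import numpy as np

def modulation_wavenumber(j1, j2, beta, n_gen=4096, discard=2048):
    """Iterate an illustrative two-generation mean-field recursion
        m[n+1] = tanh(beta * (j1*m[n] + j2*m[n-1]))
    (a hypothetical form, standing in for the paper's difference
    equation) and estimate the dominant modulation wave number per
    generation from the FFT of the late-time orbit."""
    m = [0.1, 0.2]
    for _ in range(n_gen):
        m.append(np.tanh(beta * (j1 * m[-1] + j2 * m[-2])))
    sig = np.array(m[discard:]) - np.mean(m[discard:])
    spec = np.abs(np.fft.rfft(sig))
    return np.fft.rfftfreq(sig.size)[np.argmax(spec)]

# Competing ferro (j1) and antiferro (j2) couplings select a modulation
q = modulation_wavenumber(j1=1.0, j2=-0.8, beta=2.0)
print(0.0 <= q <= 0.5)  # wave number per generation lies in [0, 1/2]
```

Scanning `q` against the competition parameter `j2/j1` at fixed `beta` is the kind of sweep that would reveal a devil's-staircase structure.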

  8. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

    Maintenance technology has progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM, therefore, hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of ways to implement models, the models can normally be classified into two categories in terms of their origins: those using physical principles and those using historical observations. I have focused on the latter method (sometimes referred to as the empirical model, based on statistical learning) because of practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I recognized the need for a novel strategy for grouping correlated variables such that an empirical model can incorporate not only statistical correlation but also some degree of physical knowledge of the system. Detailed examples of the problems are as follows: (1) omission of important signals from a group caused by the lack of observations, (2) signals with time delays, and (3) selection of the optimal kernel bandwidth. In this study, an improved statistical learning framework including the proposed strategy is presented, along with case studies illustrating the performance of the method.
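
The kernel-bandwidth issue mentioned above can be illustrated with a minimal Nadaraya-Watson kernel regression, a common ingredient of such empirical monitoring models (the thesis's actual method may differ):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, h):
    """Gaussian-kernel (Nadaraya-Watson) regression estimate at x_query.
    The bandwidth h is the tuning knob discussed above: too small gives
    a noisy estimate, too large oversmooths the signal."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)   # noisy "sensor" signal
xq = np.linspace(0.7, 2.0 * np.pi - 0.7, 50)    # interior query points
smooth = nadaraya_watson(xq, x, y, h=0.3)
print(float(np.max(np.abs(smooth - np.sin(xq)))))  # small residual error
```

Cross-validating `h` against held-out residuals is the usual way to address problem (3) in the list above.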

  9. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    Science.gov (United States)

    Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.

    2015-01-01

    This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method, and a rational function approximation is then computed using Roger's approximation. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information, so little interaction with the model developer is required; all parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation, which are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and to an independently conducted experimental ground vibration test analysis. Since aeroservoelastic analysis is the ultimate goal of this tool, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and to wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. The state space model is verified by comparison with damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model's response to a 1-cos gust.
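
Roger's approximation step can be sketched for a single generalized-force entry; the lag roots and data below are synthetic (the real tool fits full matrices), and the least-squares structure is the standard one:

```python
import numpy as np

def roger_fit(k, Q, lags):
    """Least-squares Roger rational-function fit of tabulated
    generalized forces Q(ik) (scalar here for clarity):
        Q(ik) ~ A0 + A1*(ik) + A2*(ik)**2 + sum_j Aj * ik/(ik + b_j)
    with prescribed lag roots b_j. Returns the coefficients A."""
    ik = 1j * np.asarray(k, dtype=float)
    cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in lags]
    M = np.column_stack(cols)
    # Stack real and imaginary parts for a real-valued least squares
    A, *_ = np.linalg.lstsq(
        np.vstack([M.real, M.imag]),
        np.concatenate([Q.real, Q.imag]),
        rcond=None,
    )
    return A

# Synthetic check: data generated from a known Roger form is recovered
k = np.linspace(0.01, 1.0, 25)      # reduced frequencies
ik = 1j * k
lags = [0.2, 0.6]
Q = 1.5 - 0.4 * ik + 0.1 * ik**2 + 0.8 * ik / (ik + 0.2) - 0.3 * ik / (ik + 0.6)
A = roger_fit(k, Q, lags)
print(np.round(A, 3))  # ~[1.5, -0.4, 0.1, 0.8, -0.3]
```

The lag terms become additional aerodynamic states when the fit is assembled into the state space model.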

  10. Results of the eruptive column model inter-comparison study

    Science.gov (United States)

    Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza

    2016-01-01

    This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.

  11. Ensembles modeling approach to study Climate Change impacts on Wheat

    Science.gov (United States)

    Ahmed, Mukhtar; Claudio, Stöckle O.; Nelson, Roger; Higgins, Stewart

    2017-04-01

    Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for two representative concentration pathways (RCP4.5 and RCP8.5, corresponding to radiative forcings of 4.5 and 8.5 W m⁻²) in the Pacific Northwest (PNW), USA. The aim was to assess how accurately different process-based crop models can estimate winter wheat growth, development and yield. First, all models were calibrated for high-rainfall, medium-rainfall, low-rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC, with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2, but this effect was more prominent under rainfed conditions than under irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield across the 14 GCMs for the three prediction periods (2030, 2050 and 2070). We conclude that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.
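
Quantifying ensemble spread across crop models and GCMs amounts to simple statistics over the simulation array; the yields below are synthetic placeholders, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic yields: 5 crop models x 14 GCMs x 2 RCPs (t/ha), stand-in data
yields = 6.0 + rng.normal(scale=0.8, size=(5, 14, 2))

ens_mean = yields.mean(axis=(0, 1))             # per-RCP ensemble mean
ens_std = yields.std(axis=(0, 1))               # overall spread (uncertainty)
model_spread = yields.mean(axis=1).std(axis=0)  # crop-model disagreement
gcm_spread = yields.mean(axis=0).std(axis=0)    # GCM-driven disagreement
print(ens_mean.shape, ens_std.shape)            # one value per RCP
```

Partitioning the spread this way shows whether crop-model structure or climate forcing dominates the projection uncertainty.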

  12. Phenomenological study of extended seesaw model for light sterile neutrino

    International Nuclear Information System (INIS)

    Nath, Newton; Ghosh, Monojit; Goswami, Srubabati; Gupta, Shivani

    2017-01-01

    We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model, which can give rise to ∼ eV scale sterile neutrinos. In this model, three right-handed neutrinos and one extra singlet S are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, m_ν, depends on the Dirac neutrino mass matrix (M_D), the Majorana neutrino mass matrix (M_R) and the mass matrix (M_S) coupling the right-handed neutrinos and the singlet. The model predicts that one of the light neutrino masses vanishes. We systematically investigate the zero textures in M_D and observe that at most five zeros in M_D can lead to viable zero textures in m_ν. For this study we consider four different forms of M_R (one diagonal and three off-diagonal) and two different forms of M_S containing one zero. Remarkably, we obtain only two allowed forms of m_ν (m_eτ = 0 and m_ττ = 0), both having an inverted hierarchical mass spectrum. We re-analyze the phenomenological implications of these two allowed textures of m_ν in the light of recent neutrino oscillation data. In the context of the MES model, we also express the low-energy mass matrix, the mass of the sterile neutrino and the active-sterile mixing in terms of the parameters of the allowed Yukawa matrices. The MES model leads to some extra correlations which disallow some of the Yukawa textures obtained earlier, even though they give allowed one-zero forms of m_ν. We show that the allowed textures in our study can be realized in a simple way in a model based on the MES mechanism with a discrete Abelian flavor symmetry group Z_8 × Z_2.

  13. Shear viscosity from Kubo formalism: NJL model study

    International Nuclear Information System (INIS)

    Lang, Robert; Weise, Wolfram

    2014-01-01

    A large-N_c expansion is combined with the Kubo formalism to study the shear viscosity η of strongly interacting matter in the two-flavor NJL model. We discuss analytical and numerical approaches to η and systematically investigate its strong dependence on the spectral width and the momentum-space cutoff. Thermal effects on the constituent quark mass from spontaneous chiral symmetry breaking are included. The ratio η/s and its thermal dependence are derived for different parameterizations of the spectral width and for an explicit one-loop calculation including mesonic modes within the NJL model. (orig.)

  14. Molecular level in silico studies for oncology. Direct models review

    Science.gov (United States)

    Psakhie, S. G.; Tsukanov, A. A.

    2017-09-01

    The combination of therapy and diagnostics in one process, "theranostics", is a trend in modern medicine, especially in oncology. Such an approach requires the development and use of multifunctional hybrid nanoparticles with a hierarchical structure. Numerical methods and mathematical models play a significant role in the design of hierarchical nanoparticles and allow looking inside the nanoscale mechanisms of agent-cell interactions. The current position of the in silico approach in biomedicine and oncology is discussed, and a review of molecular-level in silico studies in oncology using direct models is presented.

  15. Volatile particles formation during PartEmis: a modelling study

    Directory of Open Access Journals (Sweden)

    X. Vancassel

    2004-01-01

    Full Text Available A modelling study of the formation of volatile particles in a combustor exhaust has been carried out in the framework of the PartEmis European project. A kinetic model has been used to investigate the nucleation efficiency of the H2O-H2SO4 binary mixture in the sampling system. A value for the fraction of the fuel sulphur S(IV) converted into S(VI) has been deduced indirectly from comparisons between model results and measurements. In the present study, this fraction ranges between roughly 2.5% and 6%, depending on the combustor settings and on the value assumed for the parameter describing sulphuric acid wall losses. Soot particle hygroscopicity has also been investigated, as particle activation is a key parameter for contrail formation. Growth factors of monodisperse particles exposed to high relative humidity (95%) have been calculated and compared with experimental results. The modelling study confirms that the growth factor increases as the soot particle size decreases.
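
Growth factors like those computed in the study can be sketched with single-parameter kappa-Köhler theory; this is an assumed parameterization for illustration (curvature effects neglected), not the paper's microphysics:

```python
def growth_factor(kappa, rh):
    """Hygroscopic diameter growth factor from single-parameter
    kappa-Koehler theory, with the Kelvin (curvature) effect neglected:
        GF = (1 + kappa * aw / (1 - aw)) ** (1/3),  aw = RH (0..1).
    kappa encodes hygroscopicity; higher kappa means more water uptake."""
    aw = rh
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# At 95% RH, a partly sulphate-processed particle grows much more than
# nearly hydrophobic fresh soot (kappa values are illustrative)
print(growth_factor(0.01, 0.95))   # fresh, weakly hygroscopic soot
print(growth_factor(0.10, 0.95))   # more processed, sulphate-coated particle
```

Comparing measured growth factors at 95% RH with such curves is one way to infer how much sulphate has condensed on the soot.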

  16. HOMOLOGY MODELING AND MOLECULAR DYNAMICS STUDY OF MYCOBACTERIUM TUBERCULOSIS UREASE

    Directory of Open Access Journals (Sweden)

    Lisnyak Yu. V.

    2017-10-01

    Full Text Available Introduction. M. tuberculosis urease (MTU) is an attractive target for chemotherapeutic intervention in tuberculosis by designing new safe and efficient enzyme inhibitors. A prerequisite for designing such inhibitors is an understanding of the urease's three-dimensional (3D) structure organization. The 3D structure of M. tuberculosis urease is unknown. When the experimental three-dimensional structure of a protein is not known, homology modeling, the most commonly used computational structure prediction method, is the technique of choice. This paper aimed to build a 3D structure of M. tuberculosis urease by homology modeling and to study its stability by molecular dynamics simulations. Materials and methods. To build the MTU model, five high-resolution X-ray structures of bacterial ureases with three-subunit composition (2KAU, 5G4H, 4UBP, 4CEU, and 4EPB) were selected as templates. For each template five stochastic alignments were created, and for each alignment a three-dimensional model was built. Each model was then energy minimized, and the models were ranked by quality Z-score. The MTU model with the highest quality estimate amongst the 25 candidate models was selected. To further improve structure quality, the model was refined by a short molecular dynamics simulation that produced 20 snapshots, which were rated according to their energy and quality Z-score. The best-scoring, minimum-energy model was chosen as the final homology model of the 3D structure of M. tuberculosis urease. The final model of MTU was also validated using the PDBsum and QMEAN servers. These checks confirmed the good quality of the MTU homology model. Results and discussion. The homology model of MTU is a nonamer (a homotrimer of heterotrimers, (αβγ)3) consisting of 2349 residues. In the MTU heterotrimer, subunits α, β, and γ tightly interact with each other over a surface of approximately 3000 Å². Subunit α contains the enzyme active site with two Ni atoms coordinated by amino acid residues His347, His

  17. Modeling studies of the Indo-Pacific warm pool

    International Nuclear Information System (INIS)

    Barnett, T.P.; Schneider N.; Tyree, M.; Ritchie, J.; Ramanathan, V.; Sherwood, S.; Zhang, G.; Flatau, M.

    1994-01-01

    A wide variety of modeling studies are being conducted, aimed at understanding the interactions of clouds, radiation, and the ocean in the region of the Indo-Pacific warm pool, the flywheel of the global climate system. These studies are designed to identify the important physical processes operating in the ocean and atmosphere in the region. A stand-alone atmospheric GCM (AGCM), forced by observed sea surface temperatures, has been used for several purposes. One study with the AGCM shows the high sensitivity of the tropical circulation to variations in mid- to high-level clouds. A stand-alone ocean general circulation model (OGCM) is being used to study the relative role of shortwave radiation changes in the buoyancy-flux forcing of the upper ocean. Complete studies of the warm pool can only be conducted with a fully coupled ocean/atmosphere model. The latest version of the Hamburg CGCM produces realistic simulations of the ocean/atmosphere system in the Indo-Pacific without use of a flux-correction scheme.

  18. Credible baseline analysis for multi-model public policy studies

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, S.C.; Gass, S.I.

    1981-01-01

    The nature of public decision-making and resource allocation is such that many complex interactions can best be examined and understood by quantitative analysis. Most organizations do not possess the totality of models and needed analytical skills to perform detailed and systematic quantitative analysis. Hence, the need for coordinated, multi-organization studies that support public decision-making has grown in recent years. This trend is expected not only to continue, but to increase. This paper describes the authors' views on the process of multi-model analysis based on their participation in an analytical exercise, the ORNL/MITRE Study. One of the authors was the exercise coordinator. During the study, the authors were concerned with the issue of measuring and conveying credibility of the analysis. This work led them to identify several key determinants, described in this paper, that could be used to develop a rating of credibility.

  19. Escompte Pre-modelling Studies In The Marseille Area.

    Science.gov (United States)

    Meleux, F.; Rosset, R.

    In June and July 2001, the ESCOMPTE campaign took place in the Marseille area in the south of France, with the aim of generating a detailed 3-D data base for the study of the dynamics and chemistry of high-pollution events, so as to validate and improve air quality models. Prior to this field experiment, a pre-modelling exercise was performed to document the dynamic interactions between sea and land breezes and orographic flows over this topographically complex area. The study was carried out with a nesting procedure at local and regional scales using the MESO-NH model (jointly developed by Laboratoire d'Aérologie and Météo-France at Toulouse). Tracers emitted at various locations in the Marseille and Etang de Berre areas were first followed; then, in a second step, full-chemistry simulations were run for two selected periods in June and July 1999, quite similar to the meteorological situations met during IOP2a and IOP4 of the 2001 campaign. The performance of the model has been assessed by comparing measured with simulated data for meteorological parameters and ozone. The general ability of the model to correctly simulate these two situations allows further, more detailed study of ozone plume developments. In particular, these studies bear upon the relative roles of O3 transport versus O3 chemical production as a function of distance within the plume to anthropogenic and biogenic emissions, together with ozone daily variations and peak values observed at rural sites.

  20. How do humans inspect BPMN models: an exploratory study

    DEFF Research Database (Denmark)

    Haisjackl, Cornelia; Soffer, Pnina; Lim, Shao Yi

    2016-01-01

    Even though considerable progress regarding the technical perspective on modeling and supporting business processes has been achieved, it appears that the human perspective is still often left aside. In particular, we do not have an in-depth understanding of how process models are inspected by humans, what strategies are taken, what challenges arise, and what cognitive processes are involved. This paper contributes toward such an understanding and reports an exploratory study investigating how humans identify and classify quality issues in BPMN process models. Providing preliminary answers to initial research questions, we also indicate other research questions that can be investigated using this approach. Our qualitative analysis shows that humans adopt different strategies for identifying quality issues. In addition, we observed several challenges that appear when humans inspect process models.

  1. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  2. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei

    2016-01-01

    Full Text Available In the conditions of the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of their internal business processes and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement and to prioritize the efforts to optimize overall company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less well known. This article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  3. Experimental Study and Dynamic Modeling of Metal Rubber Isolating Bearing

    International Nuclear Information System (INIS)

    Zhang, Ke; Zhou, Yanguo; Jiang, Jian

    2015-01-01

    In this paper, the dynamic shear mechanical properties of a new metal rubber (MR) isolating bearing are tested and studied. A mixed damping model is proposed for theoretical modeling of the MR isolating bearing; it allows the shear stiffness and damping characteristics of the MR bearing to be analyzed separately and easily, and proves to be a rather effective approach. The test results indicate that loading frequency has little effect on the shear properties of the metal rubber isolating bearing, while the total energy consumption of the bearing increases with loading amplitude. As the loading amplitude increases, the stiffness of the isolating bearing decreases, showing its "soft property", and the damping force gradually changes to become close to dry friction. The "soft property" and dry-friction energy consumption of the metal rubber isolating bearing are very useful in practical engineering applications. (paper)

  4. Light aircraft sound transmission studies - Noise reduction model

    Science.gov (United States)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room-equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also assessed. The room-equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room-equation model predictions were considered good enough to be used for preliminary acoustical design studies.
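
The room-equation power balance underlying the model can be sketched in its standard form; the symbols and numbers below are illustrative assumptions, not the paper's measured values:

```python
import math

def interior_spl(Lw, Q, r, S, alpha_bar):
    """Room-equation estimate of interior sound pressure level:
        Lp = Lw + 10*log10( Q/(4*pi*r**2) + 4/Rc ),  Rc = S*a/(1 - a)
    Lw: source sound power level (dB), Q: source directivity factor,
    r: distance from source (m), S: total cabin surface area (m^2),
    alpha_bar: mean absorption coefficient of the cabin surfaces."""
    Rc = S * alpha_bar / (1.0 - alpha_bar)  # room constant (m^2)
    return Lw + 10.0 * math.log10(Q / (4.0 * math.pi * r**2) + 4.0 / Rc)

# Doubling cabin absorption lowers the reverberant-field contribution
quiet = interior_spl(Lw=100.0, Q=2.0, r=1.0, S=20.0, alpha_bar=0.4)
loud = interior_spl(Lw=100.0, Q=2.0, r=1.0, S=20.0, alpha_bar=0.2)
print(round(loud - quiet, 1))  # ~3.2 dB reduction from added absorption
```

This is exactly the trade-off the abstract describes: raising transmission loss lowers the effective `Lw`, while adding absorption raises `alpha_bar`.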

  5. Modeling study of the Pauzhetsky geothermal field, Kamchatka, Russia

    Energy Technology Data Exchange (ETDEWEB)

    Kiryukhin, A.V. [Institute of Volcanology, Kamchatsky (Russian Federation); Yampolsky, V.A. [Kamchatskburgeotermia State Enterprise, Elizovo (Russian Federation)

    2004-08-01

Exploitation of the Pauzhetsky geothermal field started in 1966 with a 5 MW{sub e} power plant. A hydrogeological model of the Pauzhetsky field has been developed based on an integrated analysis of data on lithological units, temperature, pressure, production zones and natural discharge distributions. A one-layer 'well by well' model with specified vertical heat and mass exchange conditions has been used to represent the main features of the production reservoir. Numerical model development was based on the TOUGH2 code [Pruess, 1991. TOUGH2 - A General Purpose Numerical Simulator for Multiphase Fluid and Heat Flow, Lawrence Berkeley National Laboratory Report, Berkeley, CA; Pruess et al., 1999. TOUGH2 User's Guide, Version 2.0, Report LBNL-43134, Lawrence Berkeley National Laboratory, Berkeley, CA] coupled with tables generated by the HOLA wellbore simulator [Aunzo et al., 1991. Wellbore Models GWELL, GWNACL, and HOLA, Users Guide, Draft, 81 pp.]. The Lahey Fortran-90 compiler and computer graphics packages (Didger-3, Surfer-8, Grapher-3) were also used in the model development process. The natural-state modeling study targeted a match to the temperature distribution in order to estimate the parameters of the natural high-temperature upflow: the mass flow rate was estimated at 220 kg/s with an enthalpy of 830-920 kJ/kg. The modeling study of the 1964-2000 exploitation period of the Pauzhetsky geothermal field targeted matching the transient reservoir pressure and flowing enthalpies of the production wells. The modeling study of exploitation confirmed that 'double porosity' in the reservoir, with a 10-20% active volume of 'fractures', and a thermo-mechanical response to reinjection (including changes in porosity due to compressibility and expansivity), were the key parameters of the model. The calibrated model of the Pauzhetsky geothermal field was used to forecast reservoir behavior under different exploitation scenarios.

  6. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

A simplified microbial model ecosystem and its computer simulation model are introduced as eco-toxicity tests for assessing environmental responses to environmental impacts. To take into account effects on the interactions between species and the environment, one option is to select a keystone species on the basis of ecological knowledge and subject it to a single-species toxicity test. Another option, proposed here, is to conduct eco-toxicity tests as an experimental micro-ecosystem study combined with a theoretical model ecosystem analysis. With these tests, stressors that are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of ecosystems should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  7. An empirical and model study on automobile market in Taiwan

    Science.gov (United States)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

We have done an empirical investigation of the automobile market in Taiwan, including the development of the possession rates of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. Based on the empirical study, a dynamic model describing the competition between the companies is suggested. In the model, each company is assigned a long-term competition factor (such as technology, capital, and scale) and a short-term competition factor (such as management, service, and advertisement). The companies then play games under certain rules in order to gain more possession rate in the market. Numerical simulations based on the model display a developing competition process that agrees qualitatively and quantitatively with our empirical results.
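The abstract only outlines the game rules, so the toy simulation below is a hypothetical illustration of the general idea: each company carries a fixed long-term factor and a fluctuating short-term factor, and possession rates are reallocated in proportion to overall competitiveness (all numeric ranges are invented for illustration):

```python
import random

def simulate_market(n_companies=5, steps=200, seed=1):
    """Toy market-share dynamics.  Each company has a fixed long-term
    competition factor and a per-step random short-term factor; shares
    are reallocated in proportion to factor * current share, giving a
    rich-get-richer competition process.  Hypothetical sketch only --
    not the paper's actual game rules."""
    rng = random.Random(seed)
    long_term = [rng.uniform(0.5, 1.5) for _ in range(n_companies)]
    shares = [1.0 / n_companies] * n_companies
    for _ in range(steps):
        short_term = [rng.uniform(0.8, 1.2) for _ in range(n_companies)]
        strength = [lt * st * sh for lt, st, sh in zip(long_term, short_term, shares)]
        total = sum(strength)
        shares = [s / total for s in strength]
    return shares

shares = simulate_market()
assert abs(sum(shares) - 1.0) < 1e-9  # possession rates always sum to one
```

Running the sketch shows the company with the largest long-term factor tending to accumulate the largest possession rate, qualitatively matching the "largest possession rate" trajectory the authors track.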

  8. A new in situ model to study erosive enamel wear, a clinical pilot study.

    NARCIS (Netherlands)

    Ruben, J.L.; Truin, G.J.; Bronkhorst, E.M.; Huysmans, M.C.D.N.J.M.

    2017-01-01

    OBJECTIVES: To develop an in situ model for erosive wear research which allows for more clinically relevant exposure parameters than other in situ models and to show tooth site-specific erosive wear effect of an acid challenge of orange juice on enamel. METHODS: This pilot study included 6

  9. Study of the nonlinear imperfect software debugging model

    International Nuclear Information System (INIS)

    Wang, Jinyong; Wu, Zhibo

    2016-01-01

In recent years there has been a dramatic proliferation of research on imperfect software debugging phenomena. Software debugging is a complex process affected by a variety of factors, including the environment, resources, personnel skills, and personnel psychology. The simple assumption that debugging is perfect is therefore inconsistent with the actual software debugging process, in which a new fault can be introduced while removing a fault. Furthermore, the fault introduction process is nonlinear, and the cumulative number of introduced faults increases nonlinearly over time. Thus, this paper proposes a nonlinear NHPP imperfect software debugging model that treats fault introduction as a nonlinear process. The fitting and predictive power of the proposed NHPP-based model are validated through related experiments. Experimental results show that this model displays better fitting and predicting performance than traditional NHPP-based perfect and imperfect software debugging models. S-confidence bounds are set to analyze the performance of the proposed model. This study also examines and discusses optimal software release-time policy comprehensively. In addition, this research on the nonlinear process of fault introduction is significant given the recent surge of studies on software-intensive products, such as cloud computing and big data. - Highlights: • Fault introduction is a nonlinear process during the debugging phase. • The assumption that the fault introduction process is nonlinear is credible. • The proposed model better fits and more accurately predicts software failure behavior. • Research on fault introduction is significant for software-intensive products.
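The paper's exact mean value function is not given in this record, but a generic imperfect-debugging NHPP can be sketched as follows: the fault content a(t) grows nonlinearly as debugging introduces new faults, and detected faults follow m'(t) = b (a(t) − m(t)). The functional form and all parameter values below are illustrative assumptions, not the paper's model:

```python
import math

def mean_failures(t, a0=100.0, b=0.1, alpha=0.02, dt=0.01):
    """Mean value function m(t) of a generic imperfect-debugging NHPP:
      m'(t) = b * (a(t) - m(t)),  a(t) = a0 * exp(alpha * t),
    i.e. fault content grows nonlinearly while detection proceeds at
    rate b.  Integrated with forward Euler.  Textbook-style sketch,
    not necessarily the model proposed in the paper."""
    m, s = 0.0, 0.0
    while s < t:
        m += dt * b * (a0 * math.exp(alpha * s) - m)
        s += dt
    return m

m10, m50 = mean_failures(10), mean_failures(50)
assert 0 < m10 < m50  # cumulative detected faults increase over time
```

Because a(t) keeps growing, m(t) never saturates at the initial fault count a0, which is the qualitative signature of imperfect debugging compared with the classical perfect-debugging NHPP.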

  10. Modeling and numerical study of two phase flow

    International Nuclear Information System (INIS)

    Champmartin, A.

    2011-01-01

This thesis describes the modeling and simulation of two-phase systems composed of droplets moving in a gas. The two phases interact with each other, and the type of model to consider depends directly on the type of simulation targeted. In the first part, the two phases are treated as fluids and described using a mixture model with a drift relation (to follow the relative velocity between the two phases and thus account for two velocities); the two-phase flow is assumed to be at equilibrium in temperature and pressure. This part of the manuscript consists of the derivation of the equations, the construction of a numerical scheme associated with this set of equations, a study of this scheme, and simulations. A mathematical study of this model (hyperbolicity in a simplified framework, linear stability analysis of the system around a steady state) was conducted in a setting where the gas is assumed barotropic. The second part is devoted to modeling the effect of inelastic collisions on the particles when the simulation time is shorter and the droplets can no longer be treated as a fluid. We introduce a model of inelastic collisions for droplets in a spray, leading to a specific Boltzmann kernel. We then build caricatures of this kernel of BGK type, which mimic the behavior of the first moments of the solution of the Boltzmann equation (mass, momentum, directional temperatures, and variance of the internal energy). The quality of these caricatures is tested numerically at the end. (author) [fr

  11. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that spectra for many earthquakes, especially small ones, can be analyzed at once and compared with each other. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, the two regimes bordered by a corner frequency. The corner frequency has often been converted to stress drop under the assumption of a circular crack model. However, recent studies have claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than that studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectrum we now focus on is the falloff rate at high frequencies, which affects seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in northern Ibaraki Prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We record seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovering and confirming deviations from the standard omega-square model, an updated earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
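The omega-square model described above, and a generic double-corner variant of the kind the abstract alludes to, can be written down in a few lines (the double-corner form and all corner-frequency values are illustrative, not the specific parameterization of the cited studies):

```python
def omega_square(f, omega0=1.0, fc=5.0, n=2.0):
    """Source displacement spectrum: flat level omega0 below the corner
    frequency fc, falling off as f^-n above it (n = 2 gives the
    standard omega-square model)."""
    return omega0 / (1.0 + (f / fc) ** n)

def double_corner(f, omega0=1.0, f1=2.0, f2=20.0):
    """Generic double-corner spectrum: an intermediate f^-1 falloff
    between two corner frequencies f1 < f2, and f^-2 above f2.
    Illustrative form only."""
    return omega0 / (((1.0 + (f / f1) ** 2) ** 0.5) *
                     ((1.0 + (f / f2) ** 2) ** 0.5))

assert abs(omega_square(0.0) - 1.0) < 1e-12           # flat at low frequency
assert omega_square(50.0) < 0.01 * omega_square(0.0)  # ~f^-2 high-frequency falloff
```

Fitting recorded spectra with both forms and comparing residuals is the usual way to test whether an event deviates from the single-corner omega-square model.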

  12. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

/24 degree, if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher-resolution model at different time steps. To this end, four VIC models were constructed for the Thur basin in north-eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model and three spatially distributed models with resolutions of 1x1 km, 5x5 km, and 10x10 km, respectively. All models are run at an hourly time step and aggregated and calibrated at different time steps (hourly, daily, monthly, yearly) using a novel hierarchical Latin hypercube sampling technique (Vořechovský, 2014). For each time and space scale, several diagnostics, such as the Nash-Sutcliffe efficiency, the Kling-Gupta efficiency, and the quantiles of the discharge, are calculated in order to compare model performance across time and space scales for extreme events such as floods and droughts. In addition, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to identify optimal time and space scale combinations.
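The Nash-Sutcliffe and Kling-Gupta efficiencies named above are standard runoff diagnostics with well-known definitions; a minimal sketch of both follows (the toy observation/simulation series is illustrative only):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    Equals 1 for a perfect simulation."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def kling_gupta(obs, sim):
    """Kling-Gupta efficiency (Gupta et al., 2009 form): combines the
    correlation r, the variability ratio alpha, and the bias ratio beta."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = (sum((o - mo) ** 2 for o in obs) / n) ** 0.5
    ss = (sum((s - ms) ** 2 for s in sim) / n) ** 0.5
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5

obs = [1.0, 2.0, 3.0, 4.0, 5.0]       # toy discharge series
assert abs(nash_sutcliffe(obs, obs) - 1.0) < 1e-12
assert abs(kling_gupta(obs, obs) - 1.0) < 1e-12
```

Computing these scores on discharge aggregated to each time step (hourly through yearly) is exactly the kind of cross-scale comparison the study describes.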

  13. Mechanism study of pulsus paradoxus using mechanical models.

    Directory of Open Access Journals (Sweden)

    Chang-yang Xing

Full Text Available Pulsus paradoxus is an exaggeration of the normal inspiratory decrease in systolic blood pressure. Despite a century of attempts to explain this sign, consensus is still lacking. To resolve the controversy and reveal the exact mechanism, we reexamined the characteristic anatomic arrangement of the circulatory system in the chest and designed mechanical models based on related hydromechanical principles. Model 1 was designed to observe the primary influence of respiratory intrathoracic pressure change (RIPC) on the systemic and pulmonary venous return systems (SVR and PVR, respectively). Model 2, an equivalent mechanical model of septal swing, was used to study the secondary influence of RIPC on the motion of the interventricular septum (IVS), which might be the direct cause of pulsus paradoxus. Model 1 demonstrated that the simulated RIPC influenced the simulated SVR and PVR differently: it increased the volume of the simulated right ventricle (SRV) when the internal pressure was kept constant (8.16 cmH2O), while it had the opposite effect on the PVR. Model 2 revealed three major factors determining the respiratory displacement of the IVS in normal and pathophysiological conditions: the magnitude of the RIPC, the pressure difference between the two ventricles, and the intrapericardial pressure. Our models demonstrate that the different anatomical arrangement of the two venous return systems leads to a different effect of RIPC on the right and left ventricles, and thus a pressure gradient across the IVS that tends to shift it leftwards and rightwards. When the leftward displacement of the IVS reaches a considerable amplitude in some pathologic conditions, such as cardiac tamponade, pulsus paradoxus occurs.

  14. Flow regulation in coronary vascular tree: a model study.

    Directory of Open Access Journals (Sweden)

    Xinzhou Xie

Full Text Available Coronary blood flow can always be matched to the metabolic demand of the myocardium due to the regulation of vasoactive segments. Myocardial compressive forces play an important role in determining coronary blood flow, but their impact on flow regulation is still unknown. The purpose of this study was to develop a coronary-specific flow regulation model, which can integrate myocardial compressive forces and other identified regulatory factors, to further investigate coronary blood flow regulation behavior. A theoretical coronary flow regulation model including the myogenic, shear-dependent and metabolic responses was developed. Myocardial compressive forces were included in the modified wall tension model. The shear-dependent response was estimated using experimental data from the coronary circulation. Capillary density and basal oxygen consumption were specified to correspond to those of the coronary circulation. Zero-flow pressure was also modeled using a simplified capillary model. Pressure-flow relations predicted by the proposed model are consistent with previous experimental data. The predicted diameter changes in small arteries are in good agreement with experimental observations under adenosine infusion and inhibition of NO synthesis. The results demonstrate that the myocardial compressive forces acting on the vessel wall extend the autoregulatory range by decreasing the myogenic tone at a given perfusion pressure. Myocardial compressive forces have a great impact on coronary autoregulation. The proposed model is consistent with experimental observations and can be employed to investigate coronary blood flow regulation in physiological and pathophysiological conditions.

  15. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

This document presents the differences between two HLW glass formulation models (GFMs): the 1996 GFM and the 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass-forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers, such as the total mass of HLW glass and the waste oxide loading, are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of HLW glass.

  16. Surface energy balances of three general circulation models: Current climate and response to increasing atmospheric CO2

    International Nuclear Information System (INIS)

    Gutowski, W.J.; Gutzler, D.S.; Portman, D.; Wang, W.C.

    1988-04-01

The surface energy balance simulated by state-of-the-art general circulation models at GFDL, GISS, and NCAR is examined for climates with the current level of atmospheric CO2 concentration (control climate) and with twice that level. The work is part of an effort sponsored by the US Department of Energy to assess climate simulations produced by these models. The surface energy balance enables us to diagnose differences between models in surface temperature climatology and in sensitivity to doubling CO2 in terms of the processes that control surface temperature. Our analysis compares the simulated balances by averaging the fields of interest over a hierarchy of spatial domains ranging from the entire globe down to regions a few hundred kilometers across.

  17. Model Studies of the Dynamics of Bacterial Flagellar Motors

    Energy Technology Data Exchange (ETDEWEB)

    Bai, F; Lo, C; Berry, R; Xing, J

    2009-03-19

The bacterial flagellar motor is a rotary molecular machine that rotates the helical filaments which propel swimming bacteria. Extensive experimental and theoretical studies exist on the structure, assembly, energy input, power generation and switching mechanism of the motor. In our previous paper, we explained the general physics underlying the observed torque-speed curves with a simple two-state Fokker-Planck model. Here we analyze this model further. We show that (1) the model predicts that the two components of the ion motive force can affect the motor dynamics differently, in agreement with the latest experiment by Lo et al.; (2) with explicit consideration of the stator spring, the model also explains the lack of dependence of the zero-load speed on stator number in the proton motor, recently observed by Yuan and Berg; and (3) the model reproduces the stepping behavior of the motor even in the presence of the stator springs and predicts the dwell-time distribution. The predicted stepping behavior of motors with two stators is discussed, and we suggest future experimental verification.

  18. Animal models as tools to study the pathophysiology of depression

    Directory of Open Access Journals (Sweden)

    Helena M. Abelaira

    2013-01-01

    Full Text Available The incidence of depressive illness is high worldwide, and the inadequacy of currently available drug treatments contributes to the significant health burden associated with depression. A basic understanding of the underlying disease processes in depression is lacking; therefore, recreating the disease in animal models is not possible. Popular current models of depression creatively merge ethologically valid behavioral assays with the latest technological advances in molecular biology. Within this context, this study aims to evaluate animal models of depression and determine which has the best face, construct, and predictive validity. These models differ in the degree to which they produce features that resemble a depressive-like state, and models that include stress exposure are widely used. Paradigms that employ acute or sub-chronic stress exposure include learned helplessness, the forced swimming test, the tail suspension test, maternal deprivation, chronic mild stress, and sleep deprivation, to name but a few, all of which employ relatively short-term exposure to inescapable or uncontrollable stress and can reliably detect antidepressant drug response.

  19. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

Full Text Available During the recent financial crisis in the insurance domain, new aspects concerning risk management and surveillance activity had to be taken into account. Insurance companies may develop internal models to determine the minimum capital requirement imposed by the new regulations to be adopted on 1 January 2016. In this respect, the purpose of this research paper is to present and compare the main solvency regulation systems used worldwide, with the accent on their common characteristics and current tendencies. Thereby, we would like to offer a better understanding of the similarities and differences between the existing solvency regimes in order to develop the best solvency regime for Romania within the Solvency II project. The study shows that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and also points out that, even though the key principles supporting the new solvency regimes are convergent, there are many approaches to applying these principles. In this context, the questions we try to answer are "how could global solvency models be useful to the financial surveillance authority of Romania in implementing the general model and in developing internal solvency models according to the requirements of Solvency II" and "what would be the requirements for implementing this type of approach?". This makes the analysis of solvency models an interesting exercise.

  20. A discrete model to study reaction-diffusion-mechanics systems.

    Science.gov (United States)

    Weise, Louis D; Nash, Martyn P; Panfilov, Alexander V

    2011-01-01

This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework couples a FitzHugh-Nagumo type RD model to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are given by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference scheme for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity previously obtained with a continuous RD mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependence on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects.
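The two numerical ingredients named in the abstract, explicit finite differences for the FitzHugh-Nagumo RD equations and Verlet integration for a mass lattice, can be sketched in one dimension as follows. All parameter values are illustrative, and the RD-mechanics coupling of the actual dRDM model is omitted here:

```python
# 1D sketch: explicit finite differences for FitzHugh-Nagumo reaction-
# diffusion, plus a Verlet step for a mass-spring chain.  Illustrative
# parameters; the dRDM coupling between the two systems is not included.
N, dx, dt = 100, 0.5, 0.01
a, b, eps, D = 0.1, 0.5, 0.01, 1.0

u = [0.0] * N          # activator
v = [0.0] * N          # recovery variable
u[0] = 1.0             # stimulate one end

def fhn_step(u, v):
    """One explicit finite-difference step of the FitzHugh-Nagumo system."""
    lap = [0.0] * N
    for i in range(1, N - 1):
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
    lap[0], lap[-1] = lap[1], lap[-2]                     # no-flux boundaries
    un = [u[i] + dt * (D * lap[i] + u[i] * (1 - u[i]) * (u[i] - a) - v[i])
          for i in range(N)]
    vn = [v[i] + dt * eps * (b * u[i] - v[i]) for i in range(N)]
    return un, vn

x = [i * dx for i in range(N)]                            # lattice positions
x_prev = x[:]

def verlet_step(x, x_prev):
    """One position-Verlet step for unit masses joined by unit springs."""
    f = [0.0] * N
    for i in range(1, N - 1):
        f[i] = (x[i + 1] - x[i] - dx) - (x[i] - x[i - 1] - dx)
    x_new = [2 * x[i] - x_prev[i] + dt ** 2 * f[i] for i in range(N)]
    x_new[0], x_new[-1] = x[0], x[-1]                     # fixed ends
    return x_new, x

for _ in range(100):
    u, v = fhn_step(u, v)
    x, x_prev = verlet_step(x, x_prev)

assert -0.5 <= min(u) and max(u) <= 1.5  # excitation stays bounded
```

In the full dRDM model the lattice spring forces would additionally depend on the RD state (active stress) and the diffusion operator would act on the deformed geometry; those couplings are what this sketch leaves out.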

  1. Model Studies of the Dynamics of Bacterial Flagellar Motors

    Science.gov (United States)

    Bai, Fan; Lo, Chien-Jung; Berry, Richard M.; Xing, Jianhua

    2009-01-01

    Abstract The bacterial flagellar motor is a rotary molecular machine that rotates the helical filaments that propel swimming bacteria. Extensive experimental and theoretical studies exist on the structure, assembly, energy input, power generation, and switching mechanism of the motor. In a previous article, we explained the general physics underneath the observed torque-speed curves with a simple two-state Fokker-Planck model. Here, we further analyze that model, showing that 1), the model predicts that the two components of the ion motive force can affect the motor dynamics differently, in agreement with latest experiments; 2), with explicit consideration of the stator spring, the model also explains the lack of dependence of the zero-load speed on stator number in the proton motor, as recently observed; and 3), the model reproduces the stepping behavior of the motor even with the existence of the stator springs and predicts the dwell-time distribution. The predicted stepping behavior of motors with two stators is discussed, and we suggest future experimental procedures for verification. PMID:19383460

  2. A discrete model to study reaction-diffusion-mechanics systems.

    Directory of Open Access Journals (Sweden)

    Louis D Weise

Full Text Available This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework couples a FitzHugh-Nagumo type RD model to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are given by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference scheme for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity previously obtained with a continuous RD mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependence on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects.

  3. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

Real-time monitoring and crisis management of oil slicks or drifting floating structures require a good knowledge of local winds, waves and currents, used as input data for operational drift models. Fortunately, thanks to their world-wide and all-weather coverage, satellite measurements have recently enabled new methods for remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that combines satellite measurements with metocean models in order to provide marine operators' drift models with reliable wind, wave and current analyses and short-term forecasts. In particular, a model now allows calculation of the drift current under the joint action of wind and sea state, radically improving on the classical laws. This global procedure either uses satellite wind and wave measurements directly (if available in the study area) or uses them indirectly, to calibrate metocean model results that are brought to the oil slick or floating structure location. The operational use of this procedure is reported here with an example of a floating structure drifting offshore from the Brittany coast.

  4. Process modeling for the Integrated Thermal Treatment System (ITTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Liebelt, K.H.; Brown, B.W.; Quapp, W.J.

    1995-09-01

This report describes the process modeling done in support of the integrated thermal treatment system (ITTS) study, Phases 1 and 2. ITTS consists of an integrated systems engineering approach for uniform comparison of widely varying thermal treatment technologies proposed for treatment of the contact-handled mixed low-level wastes (MLLW) currently stored in the U.S. Department of Energy complex. In the overall study, 19 systems were evaluated. Preconceptual designs were developed that included all of the various subsystems necessary for a complete installation, from waste receiving through to primary and secondary stabilization and disposal of the processed wastes. Each system included the necessary auxiliary treatment subsystems so that all of the waste categories in the complex were fully processed. The objective of the modeling task was to perform mass and energy balances of the major material components in each system. Modeling of trace materials, such as pollutants and radioactive isotopes, was beyond the present scope. The modeling of the main and secondary thermal treatment, air pollution control, and metal melting subsystems was done using the ASPEN PLUS process simulation code, Version 9.1-3. These results were combined with calculations for the remainder of the subsystems to achieve the final results, which included offgas volumes, and mass and volume waste reduction ratios.

  5. Dynamic modelling and experimental study of cantilever beam with clearance

    International Nuclear Information System (INIS)

    Li, B; Jin, W; Han, L; He, Z

    2012-01-01

Clearances occur in almost all mechanical systems, a typical example being the clearance between the slide plate of a gun barrel and its guide. Studying the clearances of mechanisms is therefore very important for increasing their working performance and lifetime. In this paper, rigid-body dynamic modelling of a cantilever with clearance was carried out for the subject investigated. In the rigid dynamic model, the clearance is represented by an equivalent spring-dashpot model, and the impact between the beam and the boundary face is also taken into consideration. The dynamic simulation was carried out in the ADAMS software according to the model above, simulating the movement of the cantilever with clearance under external excitation. The research found that the larger the clearance, the larger the impact force becomes. To study how the stiffness of the cantilever's support influences the natural frequency of the system, an Euler beam restrained by a tension spring and a torsion spring at its end was considered. Through numerical calculation, the relationship between natural frequency and stiffness was found; when the stiffness approaches its limit value, the corresponding boundary condition is illustrated. An ADAMS experiment was carried out to check the theory and the simulation.
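The spring-dashpot equivalence used for the clearance can be illustrated with a generic Kelvin-Voigt contact law, a common choice in the clearance-joint literature: no force while the parts move freely inside the clearance, and a spring-plus-damper force during penetration (the stiffness and damping values below are illustrative, not taken from the paper):

```python
def contact_force(delta, delta_dot, k=1e5, c=50.0):
    """Kelvin-Voigt spring-dashpot contact force for a clearance joint.
    delta      : penetration depth (m); <= 0 means the parts are separated
    delta_dot  : penetration velocity (m/s)
    k, c       : illustrative contact stiffness (N/m) and damping (N*s/m)
    Returns 0 during free flight inside the clearance, and a spring plus
    damper force during contact."""
    if delta <= 0.0:
        return 0.0
    return k * delta + c * delta_dot

assert contact_force(-0.001, 0.1) == 0.0                        # free flight
assert contact_force(0.001, 0.1) > contact_force(0.0005, 0.1)   # deeper -> larger force
```

This piecewise law is also why a larger clearance produces a larger impact force, as the paper reports: a wider gap lets the beam build up more approach velocity before contact, so the damper term (and the subsequent spring compression) is larger at impact.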

  6. Holes in the t-Jz model: A diagrammatic study

    International Nuclear Information System (INIS)

    Chernyshev, A.L.; Leung, P.W.

    1999-01-01

The t-Jz model is the strongly anisotropic limit of the t-J model, which captures some general properties of doped antiferromagnets (AFs). The absence of spin fluctuations simplifies the analytical treatment of hole motion in an AF background, and allows us to calculate single- and two-hole spectra with high accuracy using a regular diagram technique combined with a real-space approach. At the same time, numerical studies of this model via exact diagonalization on small clusters show negligible finite-size effects for a number of quantities, thus allowing a direct comparison between analytical and numerical results. Both approaches demonstrate that the holes have a tendency to pair in p- and d-wave channels at realistic values of t/J. Interactions leading to pairing and effects selecting p and d waves are thoroughly investigated. The role of transverse spin fluctuations is considered using perturbation theory. Based on the results of the present study, we discuss the pairing problem in the realistic t-J-like model. Possible implications for preformed pair formation and phase separation are drawn. copyright 1999 The American Physical Society

  7. Process modeling for the Integrated Thermal Treatment System (ITTS) study

    International Nuclear Information System (INIS)

    Liebelt, K.H.; Brown, B.W.; Quapp, W.J.

    1995-09-01

This report describes the process modeling done in support of the integrated thermal treatment system (ITTS) study, Phases 1 and 2. ITTS consists of an integrated systems engineering approach for uniform comparison of widely varying thermal treatment technologies proposed for treatment of the contact-handled mixed low-level wastes (MLLW) currently stored in the U.S. Department of Energy complex. In the overall study, 19 systems were evaluated. Preconceptual designs were developed that included all of the various subsystems necessary for a complete installation, from waste receiving through to primary and secondary stabilization and disposal of the processed wastes. Each system included the necessary auxiliary treatment subsystems so that all of the waste categories in the complex were fully processed. The objective of the modeling task was to perform mass and energy balances of the major material components in each system. Modeling of trace materials, such as pollutants and radioactive isotopes, was beyond the present scope. The modeling of the main and secondary thermal treatment, air pollution control, and metal melting subsystems was done using the ASPEN PLUS process simulation code, Version 9.1-3. These results were combined with calculations for the remainder of the subsystems to achieve the final results, which included offgas volumes, and mass and volume waste reduction ratios

  8. Dynamic modelling and experimental study of cantilever beam with clearance

    Science.gov (United States)

    Li, B.; Jin, W.; Han, L.; He, Z.

    2012-05-01

    Clearances occur in almost all mechanical systems, a typical example being the clearance between the slide plate of a gun barrel and its guide. Studying clearances in mechanisms is therefore important for improving their working performance and lifetime. In this paper, a rigid dynamic model of a cantilever with clearance was developed. In this model the clearance is represented by an equivalent spring-dashpot contact model, and the impact between the beam and the boundary face is taken into account. A dynamic simulation based on this model was carried out in the ADAMS software, which simulated the motion of the cantilever with clearance under external excitation. The research found that the larger the clearance, the larger the impact force becomes. To study how the stiffness of the cantilever's supporting part influences the natural frequency of the system, an Euler beam restrained at its end by a draught spring and a torsion spring was proposed. Numerical calculation established the relationship between natural frequency and stiffness, and the boundary condition corresponding to the limiting value of the stiffness is illustrated. An ADAMS experiment was carried out to verify the theory and the simulation.
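    The spring-dashpot equivalence used for the clearance can be sketched as a short contact-force routine. This is a minimal illustration of the standard Kelvin-Voigt contact law, not the paper's actual model; the stiffness, damping and clearance values are illustrative assumptions.

    ```python
    def contact_force(x, v, clearance, k=1.0e5, c=50.0):
        """Spring-dashpot (Kelvin-Voigt) contact force for a clearance joint.

        x, v      -- displacement and velocity of the beam end relative to
                     the centre of the gap
        clearance -- half-width of the gap; no contact while |x| < clearance
        k, c      -- contact stiffness and damping (illustrative values)
        """
        if x > clearance:                 # impact on the upper boundary face
            return -(k * (x - clearance) + c * v)
        if x < -clearance:                # impact on the lower boundary face
            return -(k * (x + clearance) + c * v)
        return 0.0                        # free flight inside the gap
    ```

    Because a larger clearance lets the beam gather more speed before impact, the impact velocity v, and with it the damping term c*v at the moment of contact, grows with the clearance, consistent with the study's finding that a larger clearance produces a larger impact force.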

  9. Study of gap conductance model for thermo mechanical fully coupled finite element model

    International Nuclear Information System (INIS)

    Kim, Hyo Cha; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun

    2012-01-01

    accurately, a gap conductance model for thermo-mechanically fully coupled FE analysis should be developed. However, gap conductance can be a difficult issue for convergence in FE, because every element positioned in the gap has a different gap conductance at each iteration step. It is clear that our code should include a gap conductance model for thermo-mechanically fully coupled FE in three dimensions. In this paper, a gap conductance model for thermo-mechanically coupled FE has been built with a commercial FE code in order to understand gap conductance modelling in FE. Because the commercial code does not provide an iterative gap conductance model, it was implemented using APDL. Through this model, the convergence parameters and characteristics were studied.
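    The convergence difficulty described above comes from the two-way dependence between the gap width (from the mechanical solution) and the gap conductance (in the thermal solution). A minimal fixed-point sketch of that coupling is given below; the inverse-width conductance law, the heat flux and the closure coefficient are all illustrative assumptions, not the paper's model or its APDL implementation.

    ```python
    def gap_conductance(gap, k_gas=0.3, h_min=1.0e3):
        """Illustrative conductance law: h = k_gas / gap, with a floor h_min."""
        return max(k_gas / max(gap, 1.0e-9), h_min)

    def solve_coupled(gap0=50.0e-6, q=1.0e6, alpha=1.0e-7,
                      tol=1.0e-12, max_iter=200):
        """Iterate until gap width and gap conductance are mutually consistent.

        q     -- heat flux across the gap (W/m2)
        alpha -- illustrative closure coefficient: how far thermal expansion,
                 driven by the temperature jump dT = q/h, closes the open gap
        Each pass mimics one thermo-mechanical iteration step in which every
        gap element updates its own conductance.
        """
        gap = gap0
        for _ in range(max_iter):
            h = gap_conductance(gap)                   # thermal step
            d_temp = q / h                             # jump across the gap
            new_gap = max(gap0 - alpha * d_temp, 0.0)  # mechanical step
            if abs(new_gap - gap) < tol:
                return new_gap, gap_conductance(new_gap)
            gap = new_gap
        return gap, gap_conductance(gap)
    ```

    With these made-up numbers the iteration is a contraction (the gap error shrinks by a factor of three per pass, converging to a fixed point near 37.5 μm), which is exactly the kind of behaviour a convergence study of such a model would examine.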

  10. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
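    The re-initialization rule described above (reference state replaced by the analysis, perturbed state by analysis plus the perturbed-minus-reference difference) can be sketched generically; the toy one-step model in the demo is an assumption standing in for the shallow-water model.

    ```python
    import numpy as np

    def piecewise_run(model_step, x_ref0, x_pert0, analyses, steps_per_sub):
        """Piecewise integration with re-initialization at subinterval ends.

        model_step -- advances a state vector by one time step
        analyses   -- analysis (observation-based) states, one per subinterval
        At each re-initialization the reference state is replaced by the
        analysis, and the perturbed state by analysis + (perturbed - reference),
        so model drift is removed while the perturbation signal is kept.
        """
        x_ref = np.asarray(x_ref0, dtype=float)
        x_pert = np.asarray(x_pert0, dtype=float)
        for analysis in analyses:
            for _ in range(steps_per_sub):      # short continuous simulation
                x_ref = model_step(x_ref)
                x_pert = model_step(x_pert)
            delta = x_pert - x_ref              # modelled sensitivity so far
            x_ref = np.asarray(analysis, dtype=float)   # update reference
            x_pert = x_ref + delta              # update perturbed state
        return x_ref, x_pert

    # Toy demo: a purely drifting "model" (x -> x + 1). Re-initialization with
    # a drift-free analysis removes the drift but preserves the perturbation:
    ref, pert = piecewise_run(lambda x: x + 1.0, [0.0], [2.0], [[0.0]], 3)
    print(pert[0] - ref[0])  # 2.0 -- the perturbation survives re-initialization
    ```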

  11. Using an experimental model for the study of therapeutic touch.

    Science.gov (United States)

    dos Santos, Daniella Soares; Marta, Ilda Estéfani Ribeiro; Cárnio, Evelin Capellari; de Quadros, Andreza Urba; Cunha, Thiago Mattar; de Carvalho, Emilia Campos

    2013-02-01

    To verify whether the Paw Edema Model can be used in investigations of the effects of Therapeutic Touch on inflammation, measuring the variables pain, edema and neutrophil migration, a pilot experimental study was carried out involving ten male mice of the same genetic strain, divided into an experimental and a control group and subjected to chemical induction of local inflammation in the right hind paw. The experimental group received a daily administration of Therapeutic Touch for 15 minutes over three days. The data showed statistically significant differences in the nociceptive threshold and in the paw circumference of the animals from the experimental group on the second day of the experiment. The experimental model involving animals can contribute to the study of the effects of Therapeutic Touch on inflammation; adjustments are suggested in the treatment duration, number of sessions and experiment duration.

  12. Penson-Kolb-Hubbard model: a renormalisation group study

    International Nuclear Information System (INIS)

    Bhattacharyya, Bibhas; Roy, G.K.

    1995-01-01

    The Penson-Kolb-Hubbard (PKH) model in one dimension (1d) has been studied at half filling by means of a real-space renormalisation group (RG) method. Different phases are identified by studying the RG flow pattern, the energy gap and different correlation functions. The phase diagram consists of four phases: a spin density wave (SDW), a strong-coupling superconducting phase (SSC), a weak-coupling superconducting phase (WSC) and a nearly metallic phase. For negative values of the pair-hopping amplitude introduced in this model, the pair-pair correlation indicates a superconducting phase in which the centre of mass of the pairs moves with momentum π. (author). 7 refs., 4 figs

  13. Overview of the reactor safety study consequence model

    International Nuclear Information System (INIS)

    Wall, I.B.; Yaniv, S.S.; Blond, R.M.; McGrath, P.E.; Church, H.W.; Wayland, J.R.

    1977-01-01

    The Reactor Safety Study (WASH-1400) is a comprehensive assessment of the potential risk to the public from accidents in light water power reactors. The engineering analysis of the plants is described in detail in the Reactor Safety Study: it provides an estimate of the probability versus magnitude of the release of radioactive material. The consequence model, which is the subject of this paper, describes the progression of the postulated accident after the release of the radioactive material from the containment. A brief discussion of the manner in which the consequence calculations are performed is presented. The emphasis in the description is on the models and data that differ significantly from those previously used for these types of assessments. The results of the risk calculations for 100 light water power reactors are summarized

  14. Cold flow model study of an oxyfuel combustion pilot plant

    Energy Technology Data Exchange (ETDEWEB)

    Guio-Perez, D.C.; Tondl, G.; Hoeltl, W.; Proell, T.; Hofbauer, H. [Vienna University of Technology, Institute of Chemical Engineering, Vienna (Austria)

    2011-12-15

    The fluid-dynamic behavior of a circulating fluidized bed pilot plant for oxyfuel combustion was studied in a cold flow model, down-scaled using Glicksman's criteria. Pressures along the unit and the global circulation rate were used for characterization. The influence of five operating parameters on the system was analysed, namely the total solids inventory and the air velocities of the primary, secondary, loop seal and support fluidizations. The cold flow model study shows that the reactor design allows stable operation over a wide range of fluidization rates, with results that agree well with previous observations described in the literature. (© 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  15. Modelling of protective actions in the German Risk Study (FRG)

    International Nuclear Information System (INIS)

    Burkart, A.K.

    1981-01-01

    An emergency response model for nuclear accidents has to allow for a great number of widely different emergency conditions. In addition, it should be compatible with the pertinent laws, regulations, ordinances, guidelines, criteria and reference levels. The German (FRG) guidelines are basic and flexible rather than precise, many decisions being left to the emergency management. In the Risk Study these decisions had to be anticipated. After a brief discussion of the basis of the emergency response model employed in the German Risk Study (FRG), the essential requirements to be met are listed. The main part of the paper deals with the rationale and specification of protective actions. As a result of the calculations the numbers of persons and sizes of areas involved in protective actions are presented. The last section deals with the variation of input data. (author)

  16. Paradigms of knowledge management with systems modelling case studies

    CERN Document Server

    Pandey, Krishna Nath

    2016-01-01

    This book has been written by studying the knowledge management implementation at POWERGRID India, one of the largest power distribution companies in the world. The patterns which have led to models, both hypothesized and data-enabled, have been provided. The book suggests ways and means to follow for knowledge management implementation, especially for organizations with multiple business verticals to follow. The book underlines that knowledge is both an entity and organizational asset which can be managed. A holistic view of knowledge management implementation has been provided. It also emphasizes the phenomenological importance of human resource parameters as compared to that of technological parameters. Various hypotheses have been tested to validate the significant models hypothesized. This work will prove useful to corporations, researchers, and independent professionals working to study or implement knowledge management paradigms.

  17. Space engineering modeling and optimization with case studies

    CERN Document Server

    Pintér, János

    2016-01-01

    This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...

  18. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-­learning unit. The teams declared they were using the same fast-­prototyping design and development model, and were composed of the same roles (although with a different number of SMEs).…

  19. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  20. Using Computational and Mechanical Models to Study Animal Locomotion

    OpenAIRE

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locom...

  1. Study of ATES thermal behavior using a steady flow model

    Science.gov (United States)

    Doughty, C.; Hellstroem, G.; Tsang, C. F.; Claesson, J.

    1981-01-01

    The thermal behavior of a single well aquifer thermal energy storage system in which buoyancy flow is neglected is studied. A dimensionless formulation of the energy transport equations for the aquifer system is presented, and the key dimensionless parameters are discussed. A simple numerical model is used to generate graphs showing the thermal behavior of the system as a function of these parameters. Some comparisons with field experiments are given to illustrate the use of the dimensionless groups and graphs.

  2. Vertical circulation and thermospheric composition: a modelling study

    OpenAIRE

    H. Rishbeth; I. C. F. Müller-Wodarg; I. C. F. Müller-Wodarg

    1999-01-01

    The coupled thermosphere-ionosphere-plasmasphere model CTIP is used to study the global three-dimensional circulation and its effect on neutral composition in the midlatitude F-layer. At equinox, the vertical air motion is basically up by day, down by night, and the atomic oxygen/molecular nitrogen [O/N2] concentration ratio is symmetrical about the equator. At solstice there is a summer-to-winter flow of air, with downwelling at subauroral latitudes in winter that produc...

  3. Physical Model Study of Cross Vanes and Ice

    Science.gov (United States)

    2009-08-01

    spacing since, in the pre-scour state, experiments and the HEC-RAS hydraulic model (USACE 2002b) found that the water surface elevation merged with the... USACE (2002b) HEC-RAS, Hydraulic Reference Manual. US Army Corps of Engineers, Hydrologic Engineering Center. Currently little design guidance is available for constructing these structures on ice-affected rivers. This study used physical and numerical

  4. The green seaweed Ulva: A model system to study morphogenesis

    OpenAIRE

    Thomas Wichard; Benedicte Charrier; Frédéric Mineur; John Henry Bothwell; Olivier De Clerck; Juliet C. Coates

    2015-01-01

    Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva or sea lettuce species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless the genus also remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal growth, development and...

  5. The green seaweed Ulva: a model system to study morphogenesis

    OpenAIRE

    Wichard, Thomas; Charrier, Bénédicte; Mineur, Frédéric; Bothwell, John H; De Clerck, Olivier; Coates, Juliet C

    2015-01-01

    Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva or sea lettuce species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless the genus also remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal...

  6. Using animal models to study post-partum psychiatric disorders.

    Science.gov (United States)

    Perani, C V; Slattery, D A

    2014-10-01

    The post-partum period represents a time during which all maternal organisms undergo substantial plasticity in a wide variety of systems in order to ensure the well-being of the offspring. Although this time is generally associated with increased calmness and decreased stress responses, for a substantial subset of mothers, this period represents a time of particular risk for the onset of psychiatric disorders. Thus, post-partum anxiety, depression and, to a lesser extent, psychosis may develop, and not only affect the well-being of the mother but also place at risk the long-term health of the infant. Although the risk factors for these disorders, as well as normal peripartum-associated adaptations, are well known, the underlying aetiology of post-partum psychiatric disorders remains poorly understood. However, there have been a number of attempts to model these disorders in basic research, which aim to reveal their underlying mechanisms. In the following review, we first discuss known peripartum adaptations and then describe post-partum mood and anxiety disorders, including their risk factors, prevalence and symptoms. Thereafter, we discuss the animal models that have been designed in order to study them and what they have revealed about their aetiology to date. Overall, these studies show that it is feasible to study such complex disorders in animal models, but that more needs to be done in order to increase our knowledge of these severe and debilitating mood and anxiety disorders. © 2014 The British Pharmacological Society.

  7. Spatial Temporal Modelling of Particulate Matter for Health Effects Studies

    Science.gov (United States)

    Hamm, N. A. S.

    2016-10-01

    Epidemiological studies of the health effects of air pollution require estimation of individual exposure. It is not possible to obtain measurements at all relevant locations, so it is necessary to predict at these space-time locations, either on the basis of dispersion from emission sources or by interpolating observations. This study used data obtained from a low-cost sensor network of 32 air quality monitoring stations in the Dutch city of Eindhoven, which make up the ILM (innovative air quality measurement system). These stations currently provide PM10 and PM2.5 (particulate matter less than 10 and 2.5 μm in diameter), aggregated to hourly means. The data provide an unprecedented level of spatial and temporal detail for a city of this size. Despite these benefits, the time series of measurements is characterized by missing and noisy values. In this paper a space-time analysis is presented that is based on a dynamic model for the temporal component and a Gaussian process geostatistical model for the spatial component. Spatial-temporal variability was dominated by the temporal component, although the spatial variability was also substantial. The model delivered accurate predictions for both isolated missing values and 24-hour periods of missing values (RMSE = 1.4 μg m-3 and 1.8 μg m-3, respectively). Outliers could be detected by comparison to the 95% prediction interval. The model shows promise for predicting missing values, for outlier detection and for mapping to support health impact studies.
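    The outlier check mentioned above (comparison to the 95% prediction interval) can be sketched in a few lines; the predictive means and standard deviations here are made-up stand-ins for the output of the space-time model, not values from the study.

    ```python
    import numpy as np

    def flag_outliers(obs, pred_mean, pred_sd, z=1.96):
        """Flag observations outside the central 95% prediction interval.

        obs, pred_mean, pred_sd -- hourly observations and the model's
        predictive mean / standard deviation at the same space-time points.
        NaN observations (missing values) are never flagged; they are
        candidates for gap filling by the model instead.
        """
        obs = np.asarray(obs, dtype=float)
        lower = pred_mean - z * pred_sd
        upper = pred_mean + z * pred_sd
        with np.errstate(invalid="ignore"):   # NaN comparisons yield False
            return (obs < lower) | (obs > upper)

    obs = np.array([12.0, 35.0, np.nan, 13.5])        # one spike, one gap
    mask = flag_outliers(obs, np.full(4, 12.0), np.full(4, 1.5))
    print(mask)  # only the 35.0 spike falls outside the interval
    ```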

  8. Regional scale groundwater modelling study for Ganga River basin

    Science.gov (United States)

    Maheswaran, R.; Khosa, R.; Gosain, A. K.; Lahari, S.; Sinha, S. K.; Chahar, B. R.; Dhanya, C. T.

    2016-10-01

    Subsurface movement of water within the alluvial formations of the Ganga Basin System of North and East India, extending over an area of 1 million km2, was simulated using a Visual MODFLOW-based transient numerical model. The study incorporates historical groundwater developments as recorded by various concerned agencies and also accommodates the role of some of the major tributaries of the River Ganga as geo-hydrological boundaries. Geo-stratigraphic structures, along with corresponding hydrological parameters, were obtained from the Central Groundwater Board, India, and used in the study, which was carried out over a time horizon of 4.5 years. The model parameters were fine-tuned for calibration using Parameter Estimation (PEST) simulations. Analysis of the stream-aquifer interaction using Zone Budget has allowed demarcation of the losing and gaining stretches along the main stem of the River Ganga as well as some of its principal tributaries. From a management perspective, and entirely consistent with general understanding, it is seen that unabated long-term groundwater extraction within the study basin has induced a sharp decrease in critical dry-weather base flow contributions. In view of a surge in demand for dry-season irrigation water for agriculture in the area, numerical models can be a useful tool to generate not only an understanding of the underlying groundwater system but also to facilitate development of basin-wide detailed impact scenarios as inputs for management and policy action.

  9. Analytical, Experimental, and Modelling Studies of Lunar and Terrestrial Rocks

    Science.gov (United States)

    Haskin, Larry A.

    1997-01-01

    The goal of our research has been to understand the paths and the processes of planetary evolution that produced planetary surface materials as we find them. Most of our work has been on lunar materials and processes. We have done studies that obtain geological knowledge from detailed examination of regolith materials and we have reported implications for future sample-collecting and on-surface robotic sensing missions. Our approach has been to study a suite of materials that we have chosen in order to answer specific geologic questions. We continue this work under NAG5-4172. The foundation of our work has been the study of materials with precise chemical and petrographic analyses, emphasizing analysis for trace chemical elements. We have used quantitative models as tests to account for the chemical compositions and mineralogical properties of the materials in terms of regolith processes and igneous processes. We have done experiments as needed to provide values for geochemical parameters used in the models. Our models take explicitly into account the physical as well as the chemical processes that produced or modified the materials. Our approach to planetary geoscience owes much to our experience in terrestrial geoscience, where samples can be collected in field context and sampling sites revisited if necessary. Through studies of terrestrial analog materials, we have tested our ideas about the origins of lunar materials. We have been mainly concerned with the materials of the lunar highland regolith, their properties, their modes of origin, their provenance, and how to extrapolate from their characteristics to learn about the origin and evolution of the Moon's early igneous crust. From this work a modified model for the Moon's structure and evolution is emerging, one of globally asymmetric differentiation of the crust and mantle to produce a crust consisting mainly of ferroan and magnesian igneous rocks containing on average 70-80% plagioclase, with a large

  10. Simulation study of a rectifying bipolar ion channel: Detailed model versus reduced model

    Directory of Open Access Journals (Sweden)

    Z. Ható

    2016-02-01

    We study a rectifying mutant of the OmpF porin ion channel using both all-atom and reduced models. The mutant was created by Miedema et al. [Nano Lett., 2007, 7, 2886] on the basis of the NP semiconductor diode, in which an NP junction is formed. The mutant contains a pore region with positive amino acids on the left-hand side and negative amino acids on the right-hand side. Experiments show that this mutant rectifies. Although we do not know the structure of this mutant, we can build an all-atom model for it on the basis of the structure of the wild-type channel. Interestingly, molecular dynamics simulations for this all-atom model do not produce rectification. A reduced model that contains only the important degrees of freedom (the positive and negative amino acids and free ions in an implicit solvent), on the other hand, exhibits rectification. Our calculations for the reduced model (using the Nernst-Planck equation coupled to Local Equilibrium Monte Carlo simulations) reveal a rectification mechanism that is different from that seen in semiconductor diodes. The basic reason is that the ions are different in nature from electrons and holes (they do not recombine). We provide explanations for the failure of the all-atom model, including the effect of all the other atoms in the system as a noise that inhibits the response of the ions (which would be necessary for rectification) to the polarizing external field.

  11. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and their vertical distribution are crucial to the simulated concentration and deposition fields, as is the choice of the Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and in the emission vertical distribution are those near major sources, but also the Aegean and Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence was also evident over the Mediterranean, the North Sea and the Baltic Sea, and some influence is seen over continental Europe, while the difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3/OH oxidation mechanism.
The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  12. Study and mathematical model of ultra-low gas burner

    International Nuclear Information System (INIS)

    Gueorguieva, A.

    2001-01-01

    The main objective of this project is the prediction and reduction of NOx and CO2 emissions below the levels recommended by European standards for gas combustion processes. A mathematical model of the burner and combustion chamber is developed based on the interacting fluid-dynamic processes: turbulent flow, gas-phase chemical reactions, and heat and radiation transfer. A NOx prediction model for prompt and thermal NOx is developed. The validation of the CFD (computational fluid dynamics) simulations corresponds to the 5 MWI burner type TEA, installed on the CASPER boiler. This burner is a three-stream air distribution burner with swirl effect, designed by ENEL to meet future NOx emission standards. For the combustion computer modelling, the FLUENT CFD code is preferred because of its capability to accurately describe a large number of rapidly interacting processes (turbulent flow, phase chemical reactions and heat transfer) and for its wide range of calculation and graphical output reporting options. The computational tool used in this study is FLUENT version 5.4.1, installed on fs 8200 UNIX systems. The work includes a study of the effectiveness of low-NOx concepts and of the impact of combustion and swirl air distribution and flue gas recirculation on peak flame temperatures, flame structure and fuel/air mixing. A finite-rate combustion model, the Eddy-Dissipation (Magnussen-Hjertager) chemical model for 1- and 2-step chemical reactions on a two-dimensional (2D) grid, is developed along with NOx and CO2 predictions. The experimental part of the project consists of participation in combustion tests at experimental facilities located in Livorno. The results of the experiments are used to obtain a better view of the combustion process at small scale and to collect the necessary input data for further FLUENT simulations.

  13. Foothills model forest grizzly bear study : project update

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-01-01

    This report updates a five year study launched in 1999 to ensure the continued healthy existence of grizzly bears in west-central Alberta by integrating their needs into land management decisions. The objective was to gather better information and to develop computer-based maps and models regarding grizzly bear migration, habitat use and response to human activities. The study area covers 9,700 square km in west-central Alberta where 66 to 147 grizzly bears exist. During the first 3 field seasons, researchers captured and radio collared 60 bears. Researchers at the University of Calgary used remote sensing tools and satellite images to develop grizzly bear habitat maps. Collaborators at the University of Washington used trained dogs to find bear scat which was analyzed for DNA, stress levels and reproductive hormones. Resource Selection Function models are being developed by researchers at the University of Alberta to identify bear locations and to see how habitat is influenced by vegetation cover and oil, gas, forestry and mining activities. The health of the bears is being studied by researchers at the University of Saskatchewan and the Canadian Cooperative Wildlife Health Centre. The study has already advanced the scientific knowledge of grizzly bear behaviour. Preliminary results indicate that grizzlies continue to find mates, reproduce and gain weight and establish dens. These are all good indicators of a healthy population. Most bear deaths have been related to poaching. The study will continue for another two years. 1 fig.

  14. Comprehensive School Reform Models: A Study Guide for Comparing CSR Models (and How Well They Meet Minnesota's Learning Standards).

    Science.gov (United States)

    St. John, Edward P.; Loescher, Siri; Jacob, Stacy; Cekic, Osman; Kupersmith, Leigh; Musoba, Glenda Droogsma

    A growing number of schools are exploring the prospect of applying for funding to implement a Comprehensive School Reform (CSR) model. But the process of selecting a CSR model can be complicated because it frequently involves self-study and a review of models to determine which models best meet the needs of the school. This study guide is intended…

  15. A study of spatial resolution in pollution exposure modelling

    Directory of Open Access Journals (Sweden)

    Gustafsson Susanna

    2007-06-01

    Full Text Available Abstract Background This study is part of several ongoing projects concerning epidemiological research into the health effects of exposure to air pollutants in the region of Scania, southern Sweden. The aim is to investigate the optimal spatial resolution, with respect to temporal resolution, for a pollutant database of NOx values to be used mainly for epidemiological studies with durations of days, weeks or longer periods. The fact that a pollutant database has a fixed spatial resolution makes the choice critical for the future use of the database. Results The results showed that the agreement between the modelled concentrations of the reference grid with high spatial resolution (100 m), denoted the fine grid, and the coarser grids (200, 400, 800 and 1600 m) improved with increasing spatial resolution. When the pollutant values were aggregated in time (from hours to days and weeks), the disagreement between the fine grid and the coarser grids was significantly reduced. The results also illustrate a considerable difference in optimal spatial resolution depending on the character of the study area (rural or urban). To estimate the accuracy of the modelled values, comparisons were made with measured NOx values. The mean difference between the modelled and the measured values was 0.6 μg/m3, and the standard deviation of the daily difference was 5.9 μg/m3. Conclusion The choice of spatial resolution should not considerably deteriorate the accuracy of the modelled NOx values. Considering the comparison between modelled and measured values, we estimate that an error due to coarse resolution greater than 1 μg/m3 is inadvisable if a time resolution of one day is used. Based on the study of different spatial resolutions, we conclude that for urban areas a spatial resolution of 200–400 m is suitable, while for rural areas the resolution could be coarser (about 1600 m). This implies that we should develop a pollutant
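The fine-versus-coarse grid comparison described above can be illustrated with a small sketch: average a fine grid into larger blocks and measure the mean absolute disagreement each fine cell sees against the coarse cell covering it. The grid values below are invented:

```python
# Illustrative sketch (not the study's actual code): aggregate a fine NOx grid
# to a coarser resolution and measure the disagreement that coarsening introduces.

def coarsen(grid, factor):
    """Average non-overlapping factor x factor blocks of a square grid."""
    n = len(grid)
    out = []
    for i in range(0, n, factor):
        row = []
        for j in range(0, n, factor):
            block = [grid[a][b] for a in range(i, i + factor)
                                for b in range(j, j + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def mean_abs_error(fine, coarse, factor):
    """Compare each fine cell with the coarse cell that covers it."""
    n = len(fine)
    diffs = [abs(fine[i][j] - coarse[i // factor][j // factor])
             for i in range(n) for j in range(n)]
    return sum(diffs) / len(diffs)

# Invented 4x4 "fine" concentrations: heterogeneous (urban-like) top rows,
# homogeneous (rural-like) bottom rows.
fine = [[10, 12, 30, 34],
        [11, 13, 31, 35],
        [2, 2, 8, 8],
        [2, 2, 8, 8]]
coarse = coarsen(fine, 2)
err = mean_abs_error(fine, coarse, 2)
```

The error comes entirely from the heterogeneous rows, echoing the study's finding that urban areas need finer resolution than rural ones.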

  16. Xenopus: An Emerging Model for Studying Congenital Heart Disease

    Science.gov (United States)

    Kaltenbrun, Erin; Tandon, Panna; Amin, Nirav M.; Waldron, Lauren; Showell, Chris; Conlon, Frank L.

    2011-01-01

    Congenital heart defects affect nearly 1% of all newborns and are a significant cause of infant death. Clinical studies have identified a number of congenital heart syndromes associated with mutations in genes that are involved in the complex process of cardiogenesis. The African clawed frog, Xenopus, has been instrumental in studies of vertebrate heart development and provides a valuable tool to investigate the molecular mechanisms underlying human congenital heart diseases. In this review, we discuss the methodologies that make Xenopus an ideal model system to investigate heart development and disease. We also outline congenital heart conditions linked to cardiac genes that have been well-studied in Xenopus and describe some emerging technologies that will further aid in the study of these complex syndromes. PMID:21538812

  17. ANIMAL MODELS FOR THE STUDY OF LEISHMANIASIS IMMUNOLOGY

    Directory of Open Access Journals (Sweden)

    Elsy Nalleli Loria-Cervera

    2014-01-01

    Full Text Available Leishmaniasis remains a major public health problem worldwide and is classified as Category I by the TDR/WHO, mainly due to the absence of control. Many experimental models, such as rodents, dogs and monkeys, have been developed, each with specific features, in order to characterize the immune response to Leishmania species, but none reproduces the pathology observed in human disease. Conflicting data may arise in part because different parasite strains or species are being examined, different tissue targets (mouse footpad, ear, or base of tail) are being infected, and different numbers (“low”, 1×10², and “high”, 1×10⁶) of metacyclic promastigotes have been inoculated. Recently, new approaches have been proposed to provide more meaningful data regarding the host response and pathogenesis that parallel human disease. The use of sand fly saliva and low numbers of parasites in experimental infections has made it possible to mimic natural transmission and to find new molecules and immune mechanisms that should be considered when designing vaccines and control strategies. Moreover, the use of wild rodents as experimental models has been proposed as a good alternative for studying host-pathogen relationships and for testing candidate vaccines. To date, using natural reservoirs to study Leishmania infection has been challenging because immunologic reagents for use in wild rodents are lacking. This review discusses the principal immunological findings against Leishmania infection in different animal models, highlighting the importance of using experimental conditions similar to natural transmission and reservoir species as experimental models to study the immunopathology of the disease.

  18. A crowdsourcing model for creating preclinical medical education study tools.

    Science.gov (United States)

    Bow, Hansen C; Dattilo, Jonathan R; Jonas, Andrea M; Lehmann, Christoph U

    2013-06-01

    During their preclinical course work, medical students must memorize and recall substantial amounts of information. Recent trends in medical education emphasize collaboration through team-based learning. In the technology world, the trend toward collaboration has been characterized by the crowdsourcing movement. In 2011, the authors developed an innovative approach to team-based learning that combined students' use of flashcards to master large volumes of content with a crowdsourcing model, using a simple informatics system to enable those students to share in the effort of generating concise, high-yield study materials. The authors used Google Drive and developed a simple Java software program that enabled students to simultaneously access and edit sets of questions and answers in the form of flashcards. Through this crowdsourcing model, medical students in the class of 2014 at the Johns Hopkins University School of Medicine created a database of over 16,000 questions that corresponded to the Genes to Society basic science curriculum. An analysis of exam scores revealed that students in the class of 2014 outperformed those in the class of 2013, who did not have access to the flashcard system, and a survey of students demonstrated that users were generally satisfied with the system and found it a valuable study tool. In this article, the authors describe the development and implementation of their crowdsourcing model for creating study materials, emphasize its simplicity and user-friendliness, describe its impact on students' exam performance, and discuss how students in any educational discipline could implement a similar model of collaborative learning.

  19. Detailed kinetic modeling study of n-pentanol oxidation

    KAUST Repository

    Heufer, Karl Alexander; Sarathy, Mani; Curran, Henry J.; Davis, Alexander C.; Westbrook, Charles K.; Pitz, William J.

    2012-01-01

    To help overcome the world's dependence upon fossil fuels, suitable biofuels are promising alternatives that can be used in the transportation sector. Recent research on internal combustion engines shows that short-chain alcohol fuels (e.g., ethanol or n-butanol) have reduced pollutant emissions and increased knock resistance compared to fossil fuels. Although higher molecular weight alcohols (e.g., n-pentanol and n-hexanol) exhibit higher reactivity that lowers their knock resistance, they are suitable for diesel engines or advanced engine concepts, such as homogeneous charge compression ignition (HCCI), where higher reactivity at lower temperatures is necessary for engine operation. This study presents a detailed kinetic model for n-pentanol based on modeling rules previously presented for n-butanol. The approach was initially validated using quantum chemistry calculations to verify the most stable n-pentanol conformation and to obtain C-H and C-C bond dissociation energies. The proposed model has been validated against ignition delay time data, speciation data from a jet-stirred reactor, and laminar flame velocity measurements. Overall, the model shows good agreement with the experiments and permits a detailed discussion of the differences between alcohols and alkanes. © 2012 American Chemical Society.
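Rate coefficients in detailed kinetic mechanisms of this kind are typically evaluated with a modified Arrhenius expression, k = A·T^n·exp(−Ea/RT). A hedged sketch follows; the coefficients are placeholders, not values from the n-pentanol model:

```python
# Minimal sketch of how elementary rate constants in a detailed kinetic
# mechanism are commonly evaluated. The A, n, Ea values below are invented
# placeholders, not parameters of the n-pentanol mechanism.
import math

R_CAL = 1.987  # gas constant in cal/(mol K), the usual CHEMKIN-style unit convention

def arrhenius(temp, a, n, ea):
    """Rate constant from modified Arrhenius parameters (Ea in cal/mol)."""
    return a * temp**n * math.exp(-ea / (R_CAL * temp))

# Rate constants rise steeply with temperature; the low-temperature branch of
# the chemistry is what matters for HCCI-relevant reactivity.
k_900 = arrhenius(900.0, a=1.0e13, n=0.0, ea=30000.0)
k_1200 = arrhenius(1200.0, a=1.0e13, n=0.0, ea=30000.0)
```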

  20. Study on modeling of Energy-Economy-Environment system

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seung Jin [Korea Energy Economics Institute, Euiwang (Korea)

    1999-07-01

    This study analyzed the effects of policies to reduce the carbon dioxide generated by energy use, by developing a new operational general equilibrium model. The model is a multi-sector successive dynamic model, designed to forecast economic variables such as GDP, energy consumption and carbon dioxide emissions up to 2030 in five-year steps. Using this model, three greenhouse gas reduction policy scenarios were analyzed: the introduction of a single world carbon tax, the setting of limits on greenhouse gas emissions, and the introduction of an international emissions permit trading system. The analysis shows that implementing greenhouse gas reduction policy with domestic policy instruments alone places a heavy burden on the Korean economy. It is therefore considered necessary to reduce greenhouse gases cost-effectively by actively using the Kyoto Protocol mechanisms, such as international permit trading, joint implementation and the Clean Development Mechanism, when greenhouse gas reduction imposes a heavy burden. Moreover, a policy that relies only on price mechanisms, such as a carbon tax or permit trading, to reduce greenhouse gases is very costly and has limitations. To relieve some of the burden on the economy, non-price measures such as energy technology development and restructuring of industry and the transportation system should be implemented simultaneously. (author). 70 refs., 11 figs., 34 tabs.

  1. Experimental study and modelling of iron ore reduction by hydrogen

    International Nuclear Information System (INIS)

    Wagner, D.

    2008-01-01

    In an effort to find new ways to drastically reduce the CO2 emissions from the steel industry (ULCOS project), the reduction of iron ore by pure hydrogen in a shaft furnace was investigated. The work consisted of literature, experimental, and modelling studies. The chemical reaction and its kinetics were analysed on the basis of thermogravimetric experiments and physicochemical characterizations of partially reduced samples. A specific kinetic model was designed, which simulates the successive reactions, the different steps of mass transport, and possible iron sintering, at the particle scale. Finally, a 2-dimensional numerical model of a shaft furnace was developed. It depicts the variation of the solid and gas temperatures and compositions throughout the reactor. One original feature of the model is the use of the law of additive characteristic times for calculating the reaction rates, which allowed us to handle both the particle and the reactor scales while keeping the calculation time reasonable. From the simulation results, the influence of the process parameters was assessed. Optimal operating conditions were identified, which reveal the efficiency of the hydrogen process. (author)
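The law of additive characteristic times used in the reactor model can be sketched simply: when the steps governing a particle's conversion (external transfer, pore diffusion, chemical reaction) act in series, their characteristic times add, and the slowest step dominates the overall rate. The values below are illustrative, not the thesis's fitted parameters:

```python
# Sketch of the law of additive characteristic times: the overall conversion
# rate is the inverse of the sum of the characteristic times of the series
# steps. All times here are invented for illustration.

def overall_rate(x_max, t_ext, t_diff, t_chem):
    """Overall conversion rate [1/s] for resistances acting in series.

    x_max  : attainable conversion (dimensionless)
    t_ext  : characteristic time of external mass transfer [s]
    t_diff : characteristic time of pore diffusion [s]
    t_chem : characteristic time of the chemical reaction [s]
    """
    return x_max / (t_ext + t_diff + t_chem)

rate = overall_rate(x_max=1.0, t_ext=5.0, t_diff=20.0, t_chem=15.0)
# Pore diffusion contributes half of the total characteristic time here,
# so it is the step that most limits the overall rate.
```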

  2. Experimental and modelling studies of radionuclide migration from contaminated groundwaters

    International Nuclear Information System (INIS)

    Tompkins, J. A.; Butler, A. P.; Wheater, H. S.; Shaw, G.; Wadey, P.; Bell, J. N. B.

    1994-01-01

    Lysimeter-based studies of radionuclide uptake by winter wheat are being undertaken to investigate soil-to-plant transfer processes. A five-year multi-disciplinary research project has concentrated on the upward migration of contaminants from near-surface water tables and their subsequent uptake by a winter wheat crop. A weighted transfer factor approach and a physically based modelling methodology for the simulation and prediction of radionuclide uptake have been developed, which offer alternatives to the traditional transfer factor approach. Integrated hydrological and solute transport models are used to simulate contaminant movement and subsequent root uptake. This approach enables prediction of radionuclide transport for a wide range of soil, plant and radionuclide types. This paper presents simulated results of 22Na plant uptake and soil activity profiles, which are verified against lysimeter data. The results demonstrate that a simple modelling approach can describe the variability in radioactivity in both the harvested crop and the soil profile, without recourse to a large number of empirical parameters. The proposed modelling technique should be readily applicable to a range of scales and conditions, since it embodies an understanding of the underlying physical processes of the system. This work constitutes part of an ongoing research programme being undertaken by UK Nirex Ltd., to assess the long-term safety of a deep-level repository for low and intermediate level nuclear waste. (author)

  3. A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    Full Text Available We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), together with population census data and covariates describing vegetation and altitude. The aims are to estimate a risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics of a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem, which is thoroughly discussed, is the choice of a model for the background population density. The risk map depends clearly on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
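The model structure, a log Gaussian Cox process with covariates, can be sketched on a discretized grid: the log intensity is a population offset plus covariate effects plus a Gaussian random field. All numbers below are invented, and the MCMC posterior inference used in the study is omitted:

```python
# Minimal sketch of a discretized log Gaussian Cox process intensity.
# Coefficients, covariates and counts are invented for illustration; the
# study infers the regression effects and the spatial field by MCMC.
import math
import random

random.seed(42)

n_cells = 5
population = [120, 80, 200, 50, 150]    # census counts per grid cell
vegetation = [0.9, 0.2, 0.6, 0.1, 0.8]  # covariate, e.g. forest cover fraction
beta0, beta_veg = -4.0, 1.5             # hypothetical regression effects

intensity = []
for pop, veg in zip(population, vegetation):
    # Stand-in draw for the Gaussian random field value in this cell; in the
    # real model these draws are spatially correlated.
    gauss = random.gauss(0.0, 0.3)
    log_risk = beta0 + beta_veg * veg + gauss
    # Expected TBE case count per cell: population offset times exp(log risk).
    intensity.append(pop * math.exp(log_risk))
```

Observed case counts would then be modelled as Poisson draws with these cell intensities, which is what ties the point pattern to the covariates and the field.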

  4. Online modelling of water distribution systems: a UK case study

    Directory of Open Access Journals (Sweden)

    J. Machell

    2010-03-01

    Full Text Available Hydraulic simulation models of water distribution networks are routinely used for operational investigations and network design purposes. However, their full potential is often never realised because, in the majority of cases, they have been calibrated with data collected manually from the field during a single historic time period and, as such, reflect the network operational conditions that were prevalent at that time; they are then applied as part of a reactive, desktop investigation. In order to use a hydraulic model to assist proactive distribution network management, its element asset information must be up to date and it should be able to access current network information to drive simulations. Historically this advance has been restricted by the high cost of collecting and transferring the necessary field measurements. However, recent innovation and cost reductions associated with data transfer are resulting in the collection of data from increasing numbers of sensors in water supply systems, and automatic transfer of the data to the point of use. This means engineers potentially have access to a constant stream of current network data that enables a new era of "on-line" modelling that can be used to continually assess standards of service compliance for pressure and reduce the impact of network events, such as mains bursts, on customers. A case study is presented here that shows how an online modelling system can give timely warning of changes from normal network operation, providing capacity to minimise customer impact.

  5. A multiple-compartment model for biokinetics studies in plants

    International Nuclear Information System (INIS)

    Garcia, Fermin; Pietrobron, Flavio; Fonseca, Agnes M.F.; Mol, Anderson W.; Rodriguez, Oscar; Guzman, Fernando

    2001-01-01

    In the present work, the system of linear equations based on the general Assimakopoulos GMCM model is used to develop a new method for determining flow parameters and transfer coefficients in plants. The need for mathematical models to quantify the penetration of a trace substance into animals and plants has often been stressed in the literature. Usually, in radiological environment studies, the mean value of contaminant concentrations over the whole plant body, or its edible part, is used without taking into account the regularities of vegetable physiology. In this work, the concepts and mathematical formulation of a Vegetable Multi-Compartment Model (VMCM), which takes the plant's physiological regularities into account, are presented. The model, based on the general ideas of the GMCM and the statistical least-squares method STATFLUX, is proposed for use in the inverse sense: the experimental time dependence of the concentration in each compartment is the input, and the parameters are determined from these data in a statistical approach. The case of uranium metabolism is discussed. (author)
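The compartment structure behind such models can be sketched with a hypothetical two-compartment example: a linear system dC/dt = A·C integrated by forward Euler, with invented transfer coefficients (the VMCM itself involves more compartments and a statistical inverse fit):

```python
# Hypothetical two-compartment sketch of a linear biokinetic system:
# contaminant leaves compartment 1 (e.g. roots) at rate k12 into
# compartment 2 (e.g. shoots), which loses it at rate k2_out.
# All coefficients are invented for illustration.

def simulate(k12, k2_out, c0, dt, steps):
    """Forward-Euler integration of dC1/dt = -k12*C1, dC2/dt = k12*C1 - k2_out*C2."""
    c1, c2 = c0, 0.0
    history = [(c1, c2)]
    for _ in range(steps):
        dc1 = -k12 * c1                 # outflow from compartment 1
        dc2 = k12 * c1 - k2_out * c2    # inflow minus loss in compartment 2
        c1 += dt * dc1
        c2 += dt * dc2
        history.append((c1, c2))
    return history

traj = simulate(k12=0.1, k2_out=0.02, c0=100.0, dt=0.1, steps=1000)
```

In the inverse problem described above, simulated concentration-time curves like these would be fitted to the measured compartment concentrations to estimate the transfer coefficients statistically.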

  6. Detailed kinetic modeling study of n-pentanol oxidation

    KAUST Repository

    Heufer, Karl Alexander

    2012-10-18

    To help overcome the world's dependence upon fossil fuels, suitable biofuels are promising alternatives that can be used in the transportation sector. Recent research on internal combustion engines shows that short-chain alcohol fuels (e.g., ethanol or n-butanol) have reduced pollutant emissions and increased knock resistance compared to fossil fuels. Although higher molecular weight alcohols (e.g., n-pentanol and n-hexanol) exhibit higher reactivity that lowers their knock resistance, they are suitable for diesel engines or advanced engine concepts, such as homogeneous charge compression ignition (HCCI), where higher reactivity at lower temperatures is necessary for engine operation. This study presents a detailed kinetic model for n-pentanol based on modeling rules previously presented for n-butanol. The approach was initially validated using quantum chemistry calculations to verify the most stable n-pentanol conformation and to obtain C-H and C-C bond dissociation energies. The proposed model has been validated against ignition delay time data, speciation data from a jet-stirred reactor, and laminar flame velocity measurements. Overall, the model shows good agreement with the experiments and permits a detailed discussion of the differences between alcohols and alkanes. © 2012 American Chemical Society.

  7. Phenomenological study of extended seesaw model for light sterile neutrino

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Newton [Physical Research Laboratory,Navarangpura, Ahmedabad 380 009 (India); Indian Institute of Technology,Gandhinagar, Ahmedabad-382424 (India); Ghosh, Monojit [Department of Physics, Tokyo Metropolitan University,Hachioji, Tokyo 192-0397 (Japan); Goswami, Srubabati [Physical Research Laboratory,Navarangpura, Ahmedabad 380 009 (India); Gupta, Shivani [Center of Excellence for Particle Physics (CoEPP), University of Adelaide,Adelaide SA 5005 (Australia)

    2017-03-14

    We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model which can give rise to ∼ eV scale sterile neutrinos. In this model, three right-handed neutrinos and one extra singlet S are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, m_ν, depends on the Dirac neutrino mass matrix (M_D), the Majorana neutrino mass matrix (M_R) and the mass matrix (M_S) coupling the right-handed neutrinos and the singlet. The model predicts one of the light neutrino masses to vanish. We systematically investigate the zero textures in M_D and observe that a maximum of five zeros in M_D can lead to viable zero textures in m_ν. For this study we consider four different forms for M_R (one diagonal and three off-diagonal) and two different forms of M_S containing one zero. Remarkably, we obtain only two allowed forms of m_ν (m_eτ = 0 and m_ττ = 0), both having an inverted hierarchical mass spectrum. We re-analyze the phenomenological implications of these two allowed textures of m_ν in the light of recent neutrino oscillation data. In the context of the MES model, we also express the low energy mass matrix, the mass of the sterile neutrino and the active-sterile mixing in terms of the parameters of the allowed Yukawa matrices. The MES model leads to some extra correlations which disallow some of the Yukawa textures obtained earlier, even though they give allowed one-zero forms of m_ν. We show that the allowed textures in our study can be realized in a simple way in a model based on the MES mechanism with a discrete Abelian flavor symmetry group Z_8×Z_2.

  8. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

    Schiau (Macavei), Laura-Liana

    2011-07-01

    Full Text Available In the last two decades, flat tax systems have spread around the globe, from Eastern and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake, as the new systems are only pale imitations of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context, this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features of and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on comparative qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries around the globe with fiscal systems considered to be flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model built following an economic logic rather than a legal sense, being at the same time a simple and transparent system. Other countries, such as Slovakia, Albania and Macedonia in Central and Eastern Europe, fulfill the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  9. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas, including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistics literatures. In this report we provide a fairly detailed account of one approach to model validation and prediction, applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction development efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  10. A magnetospheric specification model validation study: Geosynchronous electrons

    Science.gov (United States)

    Hilmer, R. V.; Ginet, G. P.

    2000-09-01

    The Rice University Magnetospheric Specification Model (MSM) is an operational space environment model of the inner and middle magnetosphere designed to specify charged particle fluxes up to 100 keV. Validation test data, taken between January 1996 and June 1998, consist of electron fluxes measured by a charge control system (CCS) on a Defense Satellite Communications System (DSCS) spacecraft. The CCS includes both electrostatic analyzers to measure the particle environment and surface potential monitors to track differential charging between various materials and vehicle ground. While typical RMS error analysis methods provide a sense of the model's overall abilities, they do not specifically address physical situations critical to operations, i.e., how well the model specifies when a high differential charging state is probable. In this validation study, differential charging states observed by DSCS are used to determine several threshold fluxes for the associated 20-50 keV electrons, and joint probability distributions are constructed to determine Hit, Miss, and False Alarm rates for the model. An MSM run covering the two-and-one-half-year interval is performed using the minimum required input parameter set, consisting of only the magnetic activity index Kp, in order to statistically examine the model's seasonal and yearly performance. In addition, the relative merits of the input parameters, i.e., Kp, Dst, the equatorward boundary of the diffuse aurora at midnight, the cross-polar cap potential, solar wind density and velocity, and interplanetary magnetic field values, are evaluated as drivers of shorter model runs of 100 d each. In an effort to develop operational tools that can address spacecraft charging issues, we also identify temporal features in the model output that can be directly linked to input parameter variations and model boundary conditions. All model output is interpreted using the full three-dimensional, dipole tilt-dependent algorithms currently in
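The Hit/Miss/False Alarm bookkeeping described above amounts to a 2×2 contingency table over threshold exceedances. A minimal sketch with invented flux values (in the study, the thresholds and fluxes come from DSCS observations and MSM output):

```python
# Sketch of skill-score bookkeeping for a flux-threshold forecast: classify
# each paired (observed, modelled) sample against a charging-relevant
# threshold and derive hit and false-alarm rates. All data are invented.

def contingency(observed, modelled, threshold):
    """Return (hit_rate, false_alarm_rate) from paired exceedances."""
    hits = misses = false_alarms = correct_nulls = 0
    for obs, mod in zip(observed, modelled):
        if obs >= threshold and mod >= threshold:
            hits += 1            # event observed and predicted
        elif obs >= threshold:
            misses += 1          # event observed but not predicted
        elif mod >= threshold:
            false_alarms += 1    # event predicted but not observed
        else:
            correct_nulls += 1   # correctly quiet
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_rate = false_alarms / max(false_alarms + correct_nulls, 1)
    return hit_rate, false_alarm_rate

obs = [1.2, 0.4, 2.0, 0.1, 1.5, 0.3]   # invented normalized observed fluxes
mod = [1.0, 0.9, 1.8, 0.2, 0.5, 0.2]   # invented modelled fluxes
hr, far = contingency(obs, mod, threshold=0.8)
```

Here the sketch yields 2 hits, 1 miss, 1 false alarm and 2 correct nulls, giving a hit rate of 2/3 and a false alarm rate of 1/3.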

  11. Cellular Automata Models Applied to the Study of Landslide Dynamics

    Science.gov (United States)

    Liucci, Luisa; Melelli, Laura; Suteanu, Cristian

    2015-04-01

    Landslides are caused by complex processes controlled by the interaction of numerous factors. Increasing efforts are being made to understand the spatial and temporal evolution of this phenomenon, and the use of remote sensing data is making significant contributions to improving forecasts. This paper studies landslides seen as complex dynamic systems, in order to investigate their potential Self-Organized Critical (SOC) behavior, and in particular the scale-invariant aspects of the processes governing the spatial development of landslides and their temporal evolution, as well as the mechanisms involved in driving the system and keeping it in a critical state. For this purpose, we build Cellular Automata models, which have been shown to be capable of reproducing the complexity of real-world features using a small number of variables and simple rules, thus allowing a reduction in the number of input parameters commonly used in the study of the processes governing landslide evolution, such as those linked to the geomechanical properties of soils. This type of model has already been successfully applied in studying the dynamics of other natural hazards, such as earthquakes and forest fires. The basic structure of the model is composed of three modules: (i) An initialization module, which defines the topographic surface at time zero as a grid of square cells, each described by an altitude value; the surface is acquired from real Digital Elevation Models (DEMs). (ii) A transition function, which defines the rules used by the model to update the state of the system at each iteration. The rules use a stability criterion based on the slope angle and introduce a variable describing the weakening of the material over time, caused for example by rainfall. The weakening brings some sites of the system out of equilibrium, thus triggering landslides, which propagate through the system via local interactions between neighboring cells. By using different rates of
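The transition function described in module (ii) can be illustrated with a toy one-dimensional automaton (our sketch, not the authors' code): a cell fails when its slope to the downslope neighbour exceeds a strength threshold that weakens at each iteration, and the failed mass moves downslope:

```python
# Toy 1-D cellular automaton sketch of a slope-stability transition rule.
# The threshold, weakening rate and profile are invented for illustration.

def step(elev, strength, weakening, cell_size=1.0):
    """One synchronous update of a 1-D elevation profile.

    A cell fails when its slope to the next cell exceeds `strength`;
    half of the height difference then moves downslope. The returned
    strength is reduced by `weakening` (e.g. rainfall-driven)."""
    new = elev[:]
    for i in range(len(elev) - 1):
        slope = (elev[i] - elev[i + 1]) / cell_size
        if slope > strength:                 # stability criterion violated
            moved = 0.5 * (elev[i] - elev[i + 1])
            new[i] -= moved                  # landslide: mass leaves the cell
            new[i + 1] += moved              # and piles up downslope
    return new, strength - weakening

profile = [10.0, 9.0, 5.0, 4.5, 1.0]
strength = 3.0
for _ in range(5):
    profile, strength = step(profile, strength, weakening=0.2)
```

Because the rule only redistributes material, total mass is conserved, while the progressive weakening keeps driving new failures, the ingredient associated with keeping such systems near a critical state.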

  12. A study of pilot modeling in multi-controller tasks

    Science.gov (United States)

    Whitbeck, R. F.; Knight, J. R.

    1972-01-01

    A modeling approach that utilizes a matrix of transfer functions to describe the human pilot in multiple-input, multiple-output control situations is studied. The approach was to extend a well-established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use the rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle, while another was computed for the case where error and yaw angle are the inputs. Several anomalies were observed in the experimental data, described by the terms roll up, break up, and roll down. Due to these algorithm-induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.

  13. Modeling eBook acceptance: A study on mathematics teachers

    Science.gov (United States)

    Jalal, Azlin Abd; Ayub, Ahmad Fauzi Mohd; Tarmizi, Rohani Ahmad

    2014-12-01

    The integration and effectiveness of eBook utilization in mathematics teaching and learning rely greatly upon the teachers, hence the need to understand their perceptions and beliefs. The eBook, an individual laptop equipped with digitized textbook software, was provided for each student in line with the concept of 1 student : 1 laptop. This study focuses on predicting a model of the acceptance of the eBook among mathematics teachers. Data were collected from 304 mathematics teachers in selected schools using a survey questionnaire. The selection was based on proportionate stratified sampling. Structural Equation Modeling (SEM) was employed; the model was tested and evaluated and was found to have a good fit. The variance explained for the teachers' attitude towards the eBook is approximately 69.1%, with perceived usefulness appearing to be a stronger determinant than perceived ease of use. This study concluded that the attitude of mathematics teachers towards the eBook depends largely on the perception of how useful the eBook is in improving their teaching performance, implying that teachers should be kept updated with the latest mathematical applications and software to use with the eBook to ensure a positive attitude towards using it in class.

  14. Modelling and Simulation of TCPAR for Power System Flow Studies

    Directory of Open Access Journals (Sweden)

    Narimen Lahaçani AOUZELLAG

    2012-12-01

    Full Text Available In this paper, the modelling of the Thyristor Controlled Phase Angle Regulator ‘TCPAR’ for power flow studies and the role of that modelling in the study of Flexible Alternating Current Transmission Systems ‘FACTS’ for power flow control are discussed. In order to investigate the impact of the TCPAR on power systems effectively, it is essential to formulate a correct and appropriate model for it. The TCPAR makes it possible to increase or decrease considerably the power carried by the line where it is inserted, which makes it an ideal tool for this kind of application. Since the TCPAR does not inject any active power, it offers a good solution with low power consumption. One of the adverse effects of the TCPAR is the voltage drop it causes in the network, although this is not significant. To overcome this disadvantage, it is enough to introduce a Static VAR Compensator ‘SVC’ into the electrical network, which will compensate the voltage drops and bring them back to an acceptable level.
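
    The way a phase-angle regulator raises or lowers line flow can be seen in the standard DC power-flow approximation, where the active power on a line with a TCPAR is P ≈ (δi − δj + φ)/X. The numbers below are hypothetical, not from the paper; they only illustrate the mechanism.

```python
# DC power-flow view of a phase shifter: the injected angle phi directly
# boosts or reduces the active power transported by the line.
X = 0.10                        # line reactance (p.u.)
delta_i, delta_j = 0.08, 0.02   # bus voltage angles (rad)

def line_flow(phi):
    """Active power (p.u.) through the line for a TCPAR angle phi (rad)."""
    return (delta_i - delta_j + phi) / X

base = line_flow(0.0)       # flow without phase shifting
boosted = line_flow(0.03)   # TCPAR advances the angle
reduced = line_flow(-0.03)  # TCPAR retards the angle

print(f"base {base:.2f} p.u., boosted {boosted:.2f}, reduced {reduced:.2f}")
```

Note the model injects no active power of its own: the TCPAR only redistributes flow by shifting the angle, which is why a separate SVC is needed to support voltage.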

  15. Information System Model as a Mobbing Prevention: A Case Study

    Directory of Open Access Journals (Sweden)

    Ersin Karaman

    2014-06-01

    Full Text Available In this study, it is aimed to detect mobbing issues in the Atatürk University Faculty of Economics and Administrative Sciences and to provide an information system model to prevent mobbing and reduce its risk. The study consists of two parts: (i) detecting the mobbing situation via a questionnaire, and (ii) designing an information system based on the findings of the first part. The questionnaire was applied to research assistants in the faculty. Five factors were analyzed, and it is concluded that the research assistants have not been exposed to mobbing, except that they perceive mobbing in the task-assignment process. Results show that task operational difficulty, task time and task period are the common mobbing issues. In order to develop an information system to cope with these issues, the exam-proctor assignment process is addressed. Exam time, instructor location, classroom location and exam duration are considered as decision variables in the developed linear programming (LP) model. Coefficients of these variables and constraints of the LP model are specified in accordance with the findings. It is recommended that the assignment of tasks to research assistants be conducted using this method, to prevent and reduce the risk of mobbing perception in the organization.
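
    The assignment problem behind the LP can be sketched in miniature. The 4×4 cost matrix below is hypothetical (cost[a][e] = "burden" of assistant a proctoring exam e, aggregating factors like exam time and location); a real deployment would use an LP/ILP solver rather than brute force, but the objective being minimized is the same.

```python
from itertools import permutations

# Toy exam-proctor assignment: find the one-to-one assignment of assistants
# to exams that minimizes total burden (a stand-in for the paper's LP).
cost = [
    [4, 2, 8, 5],
    [3, 7, 2, 6],
    [6, 4, 3, 7],
    [5, 6, 4, 2],
]

n = len(cost)
best_total, best_assign = min(
    (sum(cost[a][e] for a, e in enumerate(perm)), perm)
    for perm in permutations(range(n))
)
print("assignment (assistant -> exam):", best_assign, "total burden:", best_total)
```

Brute force is exponential in the number of assistants; at realistic sizes one would hand the same objective and one-exam-per-assistant constraints to an LP solver or use the Hungarian algorithm.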

  16. Ports: Definition and study of types, sizes and business models

    Directory of Open Access Journals (Sweden)

    Ivan Roa

    2013-09-01

    Full Text Available Purpose: In the world today there are thousands of port facilities of different types and sizes, competing to capture market share of freight carried mainly by sea. This article aims to determine the most common port type and size, in order to find out which business model is applied in that segment and what the legal status of the companies running such infrastructure is. Design/methodology/approach: To achieve this goal, we conducted research on a representative sample of 800 ports worldwide, which together handle 90% of containerized port traffic, and then determined the legal status of the companies that manage them. Findings: The results indicate a dominant port type and size, mostly managed by companies subject to a concession model. Research limitations/implications: In this research, we study only those ports that handle freight (basically containerized), ignoring other activities such as fishing, military, tourism or recreational uses. Originality/value: This investigation shows that the vast majority of port facilities in the studied segment are governed by a similar corporate model and subject to pressure from the markets, which increasingly demand efficiency and service. Consequently, terminals tend to be concessioned to private operators in a process that might be called privatization, although in the strictest sense of the term this is not entirely accurate, because ownership of the land never ceases to be public.

  17. Foresight Model of Turkey's Defense Industries' Space Studies until 2040

    Science.gov (United States)

    Yuksel, Nurdan; Cifci, Hasan; Cakir, Serhat

    2016-07-01

    Being advanced in science and technology is an inevitable necessity in order to have a voice in the globalized world. Therefore, for countries, making policies consistent with their societies' intellectual, economic and political infrastructure, and attributing them to a vision embraced by all parties of society, is quite crucial for success. The generated policies are supposed to ensure the use of a country's resources in the most effective and fastest way, determine the priorities and needs of society, and set goals and related roadmaps. In this sense, technology foresight studies based on justified forecasting in science and technology play critical roles in the process of developing policies. In this article, a foresight model of Turkey's defense industry's space studies up to 2040 is presented; space has turned out to be an important part of community life and the fundamental background of many technologies. Turkey entered space technology studies late. Hence, to be fast and efficient in using its national resources cost-effectively, within national and international collaboration, it should be directed toward its pre-set goals. Taking all these factors into consideration, the technology foresight model of Turkey's defense industry's space studies is presented in this study. In the model, the present condition of space studies in the world and in Turkey was analyzed; a literature survey and PEST analysis were made. The PEST analysis will form the input to a SWOT analysis, and a Delphi questionnaire will be used in the study. A two-round Delphi survey will be applied to participants from universities and from public and private organizations active in space studies in the defense industry. Critical space technologies will be distinguished according to critical technology measures determined by an expert survey; space technology fields and goals will be established according to their importance and feasibility indexes. Finally, for the

  18. Molecular dynamics study of thermal disorder in a bicrystal model

    International Nuclear Information System (INIS)

    Nguyen, T.; Ho, P.S.; Kwok, T.; Yip, S.

    1990-01-01

    This paper studies a (310) θ = 36.86° ⟨001⟩ symmetrical-tilt bicrystal model using an Embedded Atom Method aluminum potential. Based on explicit results obtained from the simulations regarding structural order, energy, and mobility, the authors find that their bicrystal model shows no evidence of pre-melting. Both the surface and the grain-boundary interface exhibit thermal disorder at temperatures below T_m, with complete melting occurring only at, or very near, T_m. Concerning the details of the onset of melting, the data show considerable disordering in the interfacial region starting at about 0.93 T_m. The interfaces exhibit metastable behavior in this temperature range, and the temperature variation of the interfacial thickness suggests that the disordering induced by the interface is a continuous transition, a behavior that has been predicted by a theoretical analysis.

  19. An experimental and modeling study of n-octanol combustion

    KAUST Repository

    Cai, Liming

    2015-01-01

    This study presents the first investigation on the combustion chemistry of n-octanol, a long chain alcohol. Ignition delay times were determined experimentally in a high-pressure shock tube, and stable species concentration profiles were obtained in a jet stirred reactor for a range of initial conditions. A detailed kinetic model was developed to describe the oxidation of n-octanol at both low and high temperatures, and the model shows good agreement with the present dataset. The fuel's combustion characteristics are compared to those of n-alkanes and to short chain alcohols to illustrate the effects of the hydroxyl moiety and the carbon chain length on important combustion properties. Finally, the results are discussed in detail. © 2014 The Combustion Institute.

  20. The green seaweed Ulva: a model system to study morphogenesis.

    Science.gov (United States)

    Wichard, Thomas; Charrier, Bénédicte; Mineur, Frédéric; Bothwell, John H; Clerck, Olivier De; Coates, Juliet C

    2015-01-01

    Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva or sea lettuce species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless the genus also remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal growth, development and morphogenesis as well as mutualistic interactions. The key reasons that Ulva is potentially such a good model system are: (i) patterns of Ulva development can drive ecologically important events, such as the increasing number of green tides observed worldwide as a result of eutrophication of coastal waters, (ii) Ulva growth is symbiotic, with proper development requiring close association with bacterial epiphytes, (iii) Ulva is extremely developmentally plastic, which can shed light on the transition from simple to complex multicellularity and (iv) Ulva will provide additional information about the evolution of the green lineage.

  1. Functional renormalization group study of the Anderson–Holstein model

    International Nuclear Information System (INIS)

    Laakso, M A; Kennes, D M; Jakobs, S G; Meden, V

    2014-01-01

    We present a comprehensive study of the spectral and transport properties in the Anderson–Holstein model both in and out of equilibrium using the functional renormalization group (fRG). We show how the previously established machinery of Matsubara and Keldysh fRG can be extended to include the local phonon mode. Based on the analysis of spectral properties in equilibrium we identify different regimes depending on the strength of the electron–phonon interaction and the frequency of the phonon mode. We supplement these considerations with analytical results from the Kondo model. We also calculate the nonlinear differential conductance through the Anderson–Holstein quantum dot and find clear signatures of the presence of the phonon mode. (paper)

  2. Study on the development of geological environmental model

    International Nuclear Information System (INIS)

    Tsujimoto, Keiichi; Shinohara, Yoshinori; Ueta, Shinzo; Saito, Shigeyuki; Kawamura, Yuji; Tomiyama, Shingo; Ohashi, Toyo

    2002-03-01

    In conventional research and development on geological disposal, safety performance assessment was carried out for a generic geological environment, but the importance of safety assessment based on repository design and scenarios that consider a concrete geological environment will increase in the future. Research linking the three major fields of geological disposal (investigation of the geological environment, repository design, and safety performance assessment) is a contemporary worldwide research theme. Hence it is important to organize an information flow that contains the whole chain of information processing, from data production to analysis, in the three fields, and to systemize a knowledge base that unifies this information flow hierarchically. The purpose of this research is to support the development of a unified analysis system for geological disposal. The geological environment modeling technologies studied for JNC's second progress report are organized and examined with a view to developing a database system, considering its suitability for the deep underground research facility. The geological environment investigation technology and the methodology for building geological structure and hydrogeological structure models are organized and systemized. Furthermore, quality assurance methods for building geological environment models are examined. The information used and stored in the unified analysis system is examined in order to design the database structure of the system, based on the organized methodology for building geological environment models. The graphic processing function for data stored in the unified database is examined. Furthermore, future research subjects for the development of detailed models for geological disposal are surveyed to organize the safety performance system. (author)

  3. A transgenic Xenopus laevis reporter model to study lymphangiogenesis

    Directory of Open Access Journals (Sweden)

    Annelii Ny

    2013-07-01

    The importance of the blood- and lymph vessels in the transport of essential fluids, gases, macromolecules and cells in vertebrates warrants optimal insight into the regulatory mechanisms underlying their development. Mouse and zebrafish models of lymphatic development are instrumental for gene discovery and gene characterization but are challenging for certain aspects, e.g. no direct accessibility of embryonic stages, or non-straightforward visualization of early lymphatic sprouting, respectively. We previously demonstrated that the Xenopus tadpole is a valuable model to study the processes of lymphatic development. However, a fluorescent Xenopus reporter directly visualizing the lymph vessels was lacking. Here, we created transgenic Tg(Flk1:eGFP) Xenopus laevis reporter lines expressing green fluorescent protein (GFP) in blood- and lymph vessels driven by the Flk1 (VEGFR-2) promoter. We also established a high-resolution fluorescent dye labeling technique selectively and persistently visualizing lymphatic endothelial cells, even in conditions of impaired lymph vessel formation or drainage function upon silencing of lymphangiogenic factors. Next, we applied the model to dynamically document blood and lymphatic sprouting and patterning of the initially avascular tadpole fin. Furthermore, quantifiable models of spontaneous or induced lymphatic sprouting into the tadpole fin were developed for dynamic analysis of loss-of-function and gain-of-function phenotypes using pharmacologic or genetic manipulation. Together with angiography and lymphangiography to assess functionality, Tg(Flk1:eGFP) reporter tadpoles readily allowed detailed lymphatic phenotyping of live tadpoles by fluorescence microscopy. The Tg(Flk1:eGFP) tadpoles represent a versatile model for functional lymph/angiogenomics and drug screening.

  4. Enhanced phytoremediation in the vadose zone: Modeling and column studies

    Science.gov (United States)

    Sung, K.; Chang, Y.; Corapcioglu, M.; Cho, C.

    2002-05-01

    Phytoremediation is a plant-based technique with potential for enhancing the remediation of vadose zone soils contaminated by pollutants. The use of deep-rooted plants is an alternative to conventional methodologies. However, when phytoremediation is applied to the vadose zone, it may face some restrictions, since it relies solely on naturally driven energy and mechanisms, in addition to the complexity of the vadose zone. As a more innovative technique than conventional phytoremediation methods, an air-injected phytoremediation technique is introduced to enhance remediation efficiency or to apply at former soil vapor extraction or bioventing sites. The effects of air injection, vegetation treatment, and air injection combined with vegetation on the removal of hydrocarbon were investigated by column studies simulating the field situation. Both the removal efficiency and the microbial activity were highest in air-injected and vegetated column soils. It was suggested that increased microbial activity, stimulated by plant root exudates, enhanced the biodegradation of hydrocarbon compounds. Air injection provided sufficient opportunity for promoting microbial activity at depths where conditions are anaerobic. Air injection can enhance the physicochemical properties of the medium and contaminant and increase the bioavailability, i.e., the plant and microbial accessibility to the contaminant. A mathematical model that can be applied to phytoremediation, especially air-injected phytoremediation, for simulating the fate and transport of a diesel contaminant in the vadose zone is developed. The approach includes a two-phase model of water flow in vegetated and unplanted vadose zone soil. A time-specific root distribution model and a microbial growth model in the rhizosphere of vegetated soil were combined with an unsaturated soil water flow equation as well as with a contaminant transport equation.
The proposed model showed a satisfactory representation of

  5. Experimental study of mass boiling in a porous medium model

    International Nuclear Information System (INIS)

    Sapin, Paul

    2014-01-01

    This manuscript presents a pore-scale experimental study of convective boiling heat transfer in a two-dimensional porous medium. The purpose is to deepen the understanding of the thermohydraulics of porous media saturated with multiple fluid phases, in order to enhance the management of severe accidents in nuclear reactors. Indeed, following a long-lasting failure in the cooling system of a pressurized water reactor (PWR) or a boiling water reactor (BWR), and despite the lowering of the control rods that stops the fission reaction, residual power due to radioactive decay keeps heating up the core. This induces water evaporation, which leads to the drying and degradation of the fuel rods. The resulting hot debris bed, comparable to a porous heat-generating medium, can be cooled down by reflooding, provided a water source is available. This process involves intense boiling mechanisms that must be modelled properly. The experimental study of boiling in porous media presented in this thesis focuses on the influence of different pore-scale boiling regimes on local heat transfer. The experimental setup is a model porous medium made of a bundle of heating cylinders randomly placed between two ceramic plates, one of which is transparent. Each cylinder is a resistance temperature detector (RTD) used to provide temperature measurements as well as heat generation. Thermal measurements and high-speed image acquisition allow the effective heat exchanges to be characterized according to the observed local boiling regimes. This provides precious indications for the type of correlations used in the non-equilibrium macroscopic model of the reflooding process. (author) [fr]
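
    The dual use of the heating cylinders as sensors rests on the resistance-temperature law of an RTD. A minimal sketch, using the common linear approximation R(T) = R0·(1 + α·T) for a Pt100 element (α ≈ 0.00385 /°C per IEC 60751); this is illustrative, not a description of the thesis's actual instrumentation.

```python
# Invert the linear RTD law to recover temperature from a resistance reading.
R0 = 100.0        # resistance at 0 °C (ohm), Pt100 element
ALPHA = 0.00385   # temperature coefficient (1/°C), IEC 60751 nominal value

def temperature_from_resistance(r_ohm):
    """Temperature in °C from measured RTD resistance (linear approximation)."""
    return (r_ohm / R0 - 1.0) / ALPHA

print(temperature_from_resistance(138.5))   # a Pt100 reads ~138.5 ohm near 100 °C
```

Over wide ranges, real Pt100 conversion uses the quadratic Callendar-Van Dusen equation rather than this linear form; the linear fit is adequate only for modest temperature spans.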

  6. Modeling CICR in rat ventricular myocytes: voltage clamp studies

    Directory of Open Access Journals (Sweden)

    Palade Philip T

    2010-11-01

    Full Text Available Abstract Background The past thirty-five years have seen an intense search for the molecular mechanisms underlying calcium-induced calcium-release (CICR) in cardiac myocytes, with voltage clamp (VC) studies being the leading tool employed. Several VC protocols, including lowering of extracellular calcium to affect Ca2+ loading of the sarcoplasmic reticulum (SR), and administration of the blockers caffeine and thapsigargin, have been utilized to probe the phenomena surrounding SR Ca2+ release. Here, we develop a deterministic mathematical model of a rat ventricular myocyte under VC conditions, to better understand mechanisms underlying the response of an isolated cell to calcium perturbation. Motivation for the study was to pinpoint key control variables influencing CICR and examine the role of CICR in the context of a physiological control system regulating cytosolic Ca2+ concentration ([Ca2+]myo). Methods The cell model consists of an electrical-equivalent model for the cell membrane and a fluid-compartment model describing the flux of ionic species between the extracellular space and several intracellular compartments (cell cytosol, SR and the dyadic coupling unit (DCU)), in which resides the mechanistic basis of CICR. The DCU is described as a controller-actuator mechanism, internally stabilized by negative feedback control of the unit's two diametrically-opposed Ca2+ channels (trigger-channel and release-channel). It releases Ca2+ flux into the cytoplasm and is in turn enclosed within a negative feedback loop involving the SERCA pump, regulating [Ca2+]myo. Results Our model reproduces measured VC data published by several laboratories, and generates graded Ca2+ release at high Ca2+ gain in a homeostatically-controlled environment where [Ca2+]myo is precisely regulated. We elucidate the importance of the DCU elements in this process, particularly the role of the ryanodine receptor in controlling SR Ca2+ release, its activation by trigger Ca2+, and its
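
    The feedback structure described in the Methods (trigger-gated release from the SR, SERCA-like reuptake, leak) can be caricatured with a two-compartment ODE integrated by forward Euler. All rate constants below are hypothetical placeholders, not the paper's fitted parameters; the sketch only shows the control-loop idea, not the full VC model.

```python
# Minimal cytosol <-> SR calcium sketch: a trigger-dependent, Ca-gated
# release flux (RyR-like), a saturable reuptake pump (SERCA-like) and a leak,
# integrated with forward Euler.
def simulate(trigger, dt=0.001, t_end=1.0):
    ca_myo, ca_sr = 0.1, 100.0       # cytosolic / SR free Ca2+ (uM)
    trace = []
    for k in range(int(t_end / dt)):
        t = k * dt
        trig = trigger if 0.05 < t < 0.15 else 0.0   # brief trigger influx (uM/s)
        # RyR-like release: proportional to SR load, gated by cytosolic Ca2+
        release = 20.0 * ca_sr * (ca_myo ** 2 / (ca_myo ** 2 + 0.6 ** 2)) * 0.001
        # SERCA-like uptake: Hill-type saturable pump
        uptake = 200.0 * ca_myo ** 2 / (ca_myo ** 2 + 0.25 ** 2) * 0.01
        leak = 0.005 * (ca_sr - ca_myo) * 0.01
        ca_myo += dt * (trig + release + leak - uptake)
        ca_sr += dt * (uptake - release - leak)
        trace.append(ca_myo)
    return trace

resting = simulate(trigger=0.0)
stimulated = simulate(trigger=20.0)
print(f"peak [Ca2+]myo: rest {max(resting):.2f} uM, stim {max(stimulated):.2f} uM")
```

The trigger flux plays the role of the trigger-channel current; because release is gated by cytosolic Ca2+, a small trigger recruits a larger SR release, which is the graded amplification at the heart of CICR.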

  7. Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model

    International Nuclear Information System (INIS)

    Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S

    1996-01-01

    Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach, based on non-parametric indicator statistics. Highly permeable fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel, which is currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after the tunnel emplacement, and assuming steady-state net infiltration as well as episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a good alternative model capable of describing heterogeneous flow processes in unsaturated fractured tuff at Yucca Mountain.
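
    The indicator idea behind such a model can be sketched crudely: generate a binary field on a grid in which "fracture" cells form features elongated along a prescribed direction. The moving-average trick and all numbers below are illustrative assumptions, not the paper's geostatistical method (which uses non-parametric indicator statistics).

```python
import random

# Binary indicator field with directional correlation: smooth white noise
# along the x-axis, then threshold at a quantile so a target fraction of
# cells becomes "fracture" (indicator = 1), the rest "matrix" (indicator = 0).
random.seed(7)
NX, NZ = 40, 40
WINDOW = 9            # averaging window along x -> horizontal correlation
P_FRACTURE = 0.15     # target proportion of fracture cells

noise = [[random.random() for _ in range(NX)] for _ in range(NZ)]

def row_smooth(row, w):
    half = w // 2
    out = []
    for j in range(len(row)):
        seg = row[max(0, j - half):j + half + 1]
        out.append(sum(seg) / len(seg))
    return out

smooth = [row_smooth(r, WINDOW) for r in noise]
flat = sorted(v for r in smooth for v in r)
cut = flat[int((1 - P_FRACTURE) * len(flat))]       # indicator threshold
indicator = [[1 if v >= cut else 0 for v in r] for r in smooth]
frac = sum(map(sum, indicator)) / (NX * NZ)
print(f"fracture-cell fraction: {frac:.2f}")
```

Because the smoothing acts only along rows, high-value (fracture) cells cluster into horizontal streaks; rotating the averaging direction would orient the correlated features along any prescribed direction.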

  8. Development of a competency coaching model for prospective mathematics teachers through lesson study

    Directory of Open Access Journals (Sweden)

    Rahmad Bustanul Anwar

    2014-06-01

    Full Text Available Education has a very important role in improving the quality of human resources. Therefore, education is expected to be one of the ways to prepare generations of qualified human resources with the ability to deal with the progress of time and the development of technology. In order to enhance students' mastery of competencies in the development of prospective teachers, this study applies lesson study activities to the lecture process. Lesson study is a coaching model for educators, both teachers and lecturers, based on collaborative learning and assessment aimed at building sustainable learning communities. The purpose of this research is to improve the competence of prospective mathematics teachers through lesson study. More specifically, this study aims to describe the efforts made to improve the pedagogical, professional, social and personal competences of prospective mathematics teachers through lesson study. The subjects in this study were 15 students who took the micro-teaching course, divided into 3 groups. This is a qualitative descriptive study aimed at developing the competences of prospective mathematics teachers through lesson study, conducted in combination with action research activities. The result of this research is that implementing lesson study in the micro-teaching course improved the competences of the prospective mathematics teachers as follows: pedagogical competence, 80% in the medium category and 20% low; professional competence, 46.7% medium and 53.3% low; personal competence, 100% medium; and social competence, 86.7% medium and 13.3% low.

  9. Mathematical and computational modeling and simulation fundamentals and case studies

    CERN Document Server

    Moeller, Dietmar P F

    2004-01-01

    Mathematical and Computational Modeling and Simulation - a highly multi-disciplinary field with ubiquitous applications in science and engineering - is one of the key enabling technologies of the 21st century. This book introduces the use of Mathematical and Computational Modeling and Simulation in order to develop an understanding of the solution characteristics of a broad class of real-world problems. The relevant basic and advanced methodologies are explained in detail, with special emphasis on ill-defined problems. Some 15 simulation systems are presented at the language and the logical level. Moreover, the reader can accumulate experience by studying a wide variety of case studies. The latter are briefly described within the book, but their full versions as well as some simulation software demos are available on the Web. The book can be used for university courses at different levels as well as for self-study. Advanced sections are marked and can be skipped in a first reading or in undergraduate courses...

  10. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly embedded, the behavior of those structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin-layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced-vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes.

  11. Applications of the FIV Model to Study HIV Pathogenesis

    Directory of Open Access Journals (Sweden)

    Craig Miller

    2018-04-01

    Full Text Available Feline immunodeficiency virus (FIV is a naturally-occurring retrovirus that infects domestic and non-domestic feline species, producing progressive immune depletion that results in an acquired immunodeficiency syndrome (AIDS. Much has been learned about FIV since it was first described in 1987, particularly in regard to its application as a model to study the closely related lentivirus, human immunodeficiency virus (HIV. In particular, FIV and HIV share remarkable structure and sequence organization, utilize parallel modes of receptor-mediated entry, and result in a similar spectrum of immunodeficiency-related diseases due to analogous modes of immune dysfunction. This review summarizes current knowledge of FIV infection kinetics and the mechanisms of immune dysfunction in relation to opportunistic disease, specifically in regard to studying HIV pathogenesis. Furthermore, we present data that highlight changes in the oral microbiota and oral immune system during FIV infection, and outline the potential for the feline model of oral AIDS manifestations to elucidate pathogenic mechanisms of HIV-induced oral disease. Finally, we discuss advances in molecular biology, vaccine development, neurologic dysfunction, and the ability to apply pharmacologic interventions and sophisticated imaging technologies to study experimental and naturally occurring FIV, which provide an excellent, but often overlooked, resource for advancing therapies and the management of HIV/AIDS.

  12. Parameter study on dynamic behavior of ITER tokamak scaled model

    International Nuclear Information System (INIS)

    Nakahira, Masataka; Takeda, Nobukazu

    2004-12-01

    This report summarizes a study on the dynamic behavior of the ITER tokamak scaled model, based on a parametric analysis of base plate thickness, aimed at finding a reasonable solution that gives sufficient rigidity without affecting the dynamic behavior. For this purpose, modal analyses were performed, changing the base plate thickness from the present design value of 55 mm to 100 mm, 150 mm and 190 mm. Using these results, a modification plan for the plate thickness was studied. It was found that a thickness of 150 mm brings the 1st natural frequency to about 90% of the ideal rigid case. A modification study was then performed to find an adequate plate thickness. Considering material availability, transportation and weldability, it was found that 300 mm would be the practical limit. The analysis of the 300 mm case showed the 1st natural frequency reaching 97% of the ideal rigid case. It was, however, found that the required bolt length was too long and introduced an additional twisting mode. As a result, it was concluded that a base plate thickness of 150 mm or 190 mm gives sufficient rigidity for the dynamic behavior of the scaled model. (author)
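
    The trend reported above (stiffer base plate pushes the 1st natural frequency toward the rigid-base value) can be illustrated with a two-degree-of-freedom chain: a "structure" mass on a spring, mounted on a base mass whose support stiffness K is varied. All numbers are hypothetical, not the ITER model's.

```python
import math

# 2-DOF chain: structure (m, k) on a base plate (M) supported by stiffness K.
# As K grows, the lowest natural frequency approaches the rigid-base value
# sqrt(k/m)/(2*pi).
m, k = 1000.0, 4.0e7   # structure mass (kg) and stiffness (N/m)
M = 2000.0             # base plate mass (kg)

def first_frequency(K):
    """Lowest natural frequency (Hz) of the chain for base stiffness K (N/m)."""
    # characteristic equation: (k - m w^2)(k + K - M w^2) - k^2 = 0,
    # a quadratic in w^2; take the smaller root.
    a = m * M
    b = -(m * (k + K) + M * k)
    c = k * K
    w2 = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return math.sqrt(w2) / (2 * math.pi)

f_rigid = math.sqrt(k / m) / (2 * math.pi)
for K in (1e8, 1e9, 1e10):
    f1 = first_frequency(K)
    print(f"K = {K:.0e} N/m -> f1 = {f1:.1f} Hz ({100 * f1 / f_rigid:.0f}% of rigid)")
```

The diminishing returns are visible directly: each tenfold increase in K closes only part of the remaining gap to the rigid-base frequency, which mirrors why a finite plate thickness (150-190 mm) was judged sufficient.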

  13. Characterization of a Novel Murine Model to Study Zika Virus.

    Science.gov (United States)

    Rossi, Shannan L; Tesh, Robert B; Azar, Sasha R; Muruato, Antonio E; Hanley, Kathryn A; Auguste, Albert J; Langsjoen, Rose M; Paessler, Slobodan; Vasilakis, Nikos; Weaver, Scott C

    2016-06-01

    The mosquito-borne Zika virus (ZIKV) is responsible for an explosive ongoing outbreak of febrile illness across the Americas. ZIKV was previously thought to cause only a mild, flu-like illness, but during the current outbreak, an association with Guillain-Barré syndrome and microcephaly in neonates has been detected. A previous study showed that ZIKV requires murine adaptation to generate reproducible murine disease. In our study, a low-passage Cambodian isolate caused disease and mortality in mice lacking the interferon (IFN) alpha receptor (A129 mice) in an age-dependent manner, but not in similarly aged immunocompetent mice. In A129 mice, viremia peaked at ∼10^7 plaque-forming units/mL by day 2 postinfection (PI) and reached high titers in the spleen by day 1. ZIKV was detected in the brain on day 3 PI and caused signs of neurologic disease, including tremors, by day 6. Robust replication was also noted in the testis. In this model, all mice infected at the youngest age (3 weeks) succumbed to illness by day 7 PI. Older mice (11 weeks) showed signs of illness, viremia, and weight loss but recovered starting on day 8. In addition, AG129 mice, which lack both type I and II IFN responses, supported similar infection kinetics to A129 mice, but with exaggerated disease signs. This characterization of an Asian-lineage ZIKV strain in a murine model, one of the few studies reporting a model of Zika disease and demonstrating age-dependent morbidity and mortality, could provide a platform for testing the efficacy of antivirals and vaccines. © The American Society of Tropical Medicine and Hygiene.

  14. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    Science.gov (United States)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance through its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search for the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which limits the dimensionality of the calibration problem and enhances the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions for reproducing the observed streamflow for all watersheds. The final optimal solutions showed significant improvement over the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40
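
    The screening stage described above can be illustrated generically. The sketch below is a minimal LH-OAT-style sensitivity screen in plain Python, not the authors' CREST/MARS implementation; the toy response function, sample size, and perturbation fraction are all assumptions for illustration.

```python
import random

def toy_model(p):
    # Hypothetical watershed response: p[0] and p[1] matter, p[2] is inert.
    return 3.0 * p[0] + p[1] ** 2

def lh_sample(n_points, n_params, rng):
    """Latin hypercube sample in the unit cube."""
    cols = []
    for _ in range(n_params):
        strata = [(i + rng.random()) / n_points for i in range(n_points)]
        rng.shuffle(strata)
        cols.append(strata)
    return [[cols[j][i] for j in range(n_params)] for i in range(n_points)]

def lh_oat_screening(model, n_params, n_points=50, frac=0.05, seed=1):
    """LH-OAT screening: at each Latin-hypercube base point, perturb one
    parameter at a time and average the absolute relative partial effects."""
    rng = random.Random(seed)
    effects = [0.0] * n_params
    for base in lh_sample(n_points, n_params, rng):
        y0 = model(base)
        for j in range(n_params):
            pert = list(base)
            pert[j] += frac
            denom = abs(y0) if y0 != 0 else 1.0
            effects[j] += abs(model(pert) - y0) / denom / frac
    return [e / n_points for e in effects]

sensitivities = lh_oat_screening(toy_model, n_params=3)
# The inert third parameter scores zero and would be screened out.
```

    Parameters scoring near zero would be fixed at default values before the surrogate-based calibration stage.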

  15. Pulse radiolysis in model studies toward radiation processing

    Energy Technology Data Exchange (ETDEWEB)

    Sonntag, C Von; Bothe, E; Ulanski, P; Deeble, D J [Max-Planck-Institut fuer Strahlenchemie, Muelheim an der Ruhr (Germany)]

    1995-10-01

    Using the pulse radiolysis technique, the OH-radical-induced reactions of poly(vinyl alcohol) PVAL, poly(acrylic acid) PAA, poly(methacrylic acid) PMA, and hyaluronic acid have been investigated in dilute aqueous solution. The reactions of the free-radical intermediates were followed by UV spectroscopy and low-angle laser light-scattering; the scission of the charged polymers was also monitored by conductometry. For more detailed product studies, model systems such as 2,4-dihydroxypentane (for PVAL) and 2,4-dimethylglutaric acid (for PAA) were also investigated. (author).

  16. Studies on 14C labelled chlorpyrifos in model marine ecosystem

    International Nuclear Information System (INIS)

    Pandit, G.G.; Mohan Rao, A.M.; Kale, S.P.; Murthy, N.B.K.; Raghu, K.

    1997-01-01

    Chlorpyrifos is one of the most widely used organophosphorus insecticides in tropical countries. Experiments were conducted with 14C-labelled chlorpyrifos to study the distribution of this compound in a model marine ecosystem. Less than 50 per cent of the applied activity remained in the water within 24 h. The major portion of the applied chlorpyrifos (about 4.2% residue per g) accumulated in the clams, with the sediment containing a maximum of 5 to 6 per cent of the applied compound. No degradation of chlorpyrifos was observed in water or sediment samples; however, metabolic products were formed in clams. (author). 4 refs., 3 tabs

  17. Polarized Airway Epithelial Models for Immunological Co-Culture Studies

    DEFF Research Database (Denmark)

    Papazian, Dick; Würtzen, Peter A; Hansen, Søren Werner Karlskov

    2016-01-01

    Epithelial cells line all cavities and surfaces throughout the body and play a substantial role in maintaining tissue homeostasis. Asthma and other atopic diseases are increasing worldwide and allergic disorders are hypothesized to be a consequence of a combination of dysregulation...... of the epithelial response towards environmental antigens and genetic susceptibility, resulting in inflammation and T cell-derived immune responses. In vivo animal models have long been used to study immune homeostasis of the airways but are limited by species restriction and lack of exposure to a natural...

  18. Deschutes estuary feasibility study: hydrodynamics and sediment transport modeling

    Science.gov (United States)

    George, Douglas A.; Gelfenbaum, Guy; Lesser, Giles; Stevens, Andrew W.

    2006-01-01

    Continual sediment accumulation in Capitol Lake since the damming of the Deschutes River in 1951 has altered the initial morphology of the basin. As part of the Deschutes River Estuary Feasibility Study (DEFS), the United States Geological Survey (USGS) was tasked with modeling how tidal and storm processes would influence the river, lake and lower Budd Inlet should estuary restoration occur. Understanding these mechanisms will assist in developing a scientifically sound assessment of the feasibility of restoring the estuary. Among the goals of the DEFS is to increase understanding of the estuary alternative to the same level as that of managing the lake environment.

  19. Decerebrate mouse model for studies of the spinal cord circuits

    DEFF Research Database (Denmark)

    Meehan, Claire Francesca; Mayr, Kyle A; Manuel, Marin

    2017-01-01

    The adult decerebrate mouse model (a mouse with the cerebrum removed) enables the study of sensory-motor integration and motor output from the spinal cord for several hours without compromising these functions with anesthesia. For example, the decerebrate mouse is ideal for examining locomotor be......, which is ample time to perform most short-term procedures. These protocols can be modified for those interested in cardiovascular or respiratory function in addition to motor function and can be performed by trainees with some previous experience in animal surgery....

  20. Model-independent study of light cone current commutators

    International Nuclear Information System (INIS)

    Gautam, S.R.; Dicus, D.A.

    1974-01-01

    An attempt is made to extract information on the nature of light cone current commutators (L.C.C.) in a model-independent manner. Using simple assumptions on the validity of the DGS representation for the structure functions of deep inelastic scattering, and using the Bjorken-Johnson-Low theorem, it is shown that in principle the L.C.C. may be constructed from the experimental electron-proton scattering data. On the other hand, the scaling behavior of the structure functions is used to study the consistency of a vanishing value for various L.C.C. under mild assumptions on the behavior of the DGS spectral moments. (U.S.)

  1. Shell-model Monte Carlo studies of nuclei

    International Nuclear Information System (INIS)

    Dean, D.J.

    1997-01-01

    The pair content and structure of nuclei near N = Z are described in the framework of shell-model Monte Carlo (SMMC) calculations. Results include the enhancement of J=0, T=1 proton-neutron pairing in N=Z nuclei, and the marked difference in thermal properties between even-even and odd-odd N=Z nuclei. Additionally, a study of the rotational properties of the T=1 (ground state) and T=0 band mixing seen in 74Rb is presented

  2. Modeled Urea Distribution Volume and Mortality in the HEMO Study

    Science.gov (United States)

    Greene, Tom; Depner, Thomas A.; Levin, Nathan W.; Chertow, Glenn M.

    2011-01-01

    Summary Background and objectives In the Hemodialysis (HEMO) Study, observed small decreases in achieved equilibrated Kt/Vurea were noncausally associated with markedly increased mortality. Here we examine the association of mortality with modeled volume (Vm), the denominator of equilibrated Kt/Vurea. Design, setting, participants, & measurements Parameters derived from modeled urea kinetics (including Vm) and blood pressure (BP) were obtained monthly in 1846 patients. Case mix–adjusted time-dependent Cox regressions were used to relate the relative mortality hazard at each time point to Vm and to the change in Vm over the preceding 6 months. Mixed effects models were used to relate Vm to changes in intradialytic systolic BP and to other factors at each follow-up visit. Results Mortality was associated with Vm and change in Vm over the preceding 6 months. The association between change in Vm and mortality was independent of vascular access complications. In contrast, mortality was inversely associated with V calculated from anthropometric measurements (Vant). In case mix–adjusted analysis using Vm as a time-dependent covariate, the association of mortality with Vm strengthened after statistical adjustment for Vant. After adjustment for Vant, higher Vm was associated with slightly smaller reductions in intradialytic systolic BP and with risk factors for mortality including recent hospitalization and reductions in serum albumin concentration and body weight. Conclusions An increase in Vm is a marker for illness and mortality risk in hemodialysis patients. PMID:21511841
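
    Modeled volume Vm is the denominator of Kt/V, so given a dialyzer clearance K and session time t it can be backed out from an estimated Kt/V. The sketch below uses the standard second-generation Daugirdas formula for single-pool Kt/V as a stand-in for the HEMO Study's formal urea kinetic modeling, which it is not; all input values are hypothetical.

```python
import math

def daugirdas_spktv(bun_pre, bun_post, hours, uf_litres, weight_kg):
    """Second-generation Daugirdas estimate of single-pool Kt/V from
    pre-/post-dialysis BUN, session length, ultrafiltration and weight."""
    r = bun_post / bun_pre
    return -math.log(r - 0.008 * hours) + (4.0 - 3.5 * r) * uf_litres / weight_kg

def modeled_volume_litres(clearance_ml_min, minutes, ktv):
    """Vm is the denominator of Kt/V: Vm = K*t / (Kt/V)."""
    return clearance_ml_min * minutes / ktv / 1000.0

# Hypothetical session: BUN 70 -> 25 mg/dL over 4 h, 2 L removed,
# 70 kg patient, effective urea clearance 240 mL/min.
ktv = daugirdas_spktv(70.0, 25.0, 4.0, 2.0, 70.0)
vm = modeled_volume_litres(240.0, 240.0, ktv)
```

    An unexplained rise in Vm at fixed K and t corresponds to a fall in delivered Kt/V, which is the association the study probes.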

  3. Modeling AEC—New Approaches to Study Rare Genetic Disorders

    Science.gov (United States)

    Koch, Peter J.; Dinella, Jason; Fete, Mary; Siegfried, Elaine C.; Koster, Maranke I.

    2015-01-01

    Ankyloblepharon-ectodermal defects-cleft lip/palate (AEC) syndrome is a rare monogenetic disorder that is characterized by severe abnormalities in ectoderm-derived tissues, such as skin and its appendages. A major cause of morbidity among affected infants is severe and chronic skin erosions. Currently, supportive care is the only available treatment option for AEC patients. Mutations in TP63, a gene that encodes key regulators of epidermal development, are the genetic cause of AEC. However, it is currently not clear how mutations in TP63 lead to the various defects seen in the patients’ skin. In this review, we will discuss current knowledge of the AEC disease mechanism obtained by studying patient tissue and genetically engineered mouse models designed to mimic aspects of the disorder. We will then focus on new approaches to model AEC, including the use of patient cells and stem cell technology to replicate the disease in a human tissue culture model. The latter approach will advance our understanding of the disease and will allow for the development of new in vitro systems to identify drugs for the treatment of skin erosions in AEC patients. Further, the use of stem cell technology, in particular induced pluripotent stem cells (iPSC), will enable researchers to develop new therapeutic approaches to treat the disease using the patient’s own cells (autologous keratinocyte transplantation) after correction of the disease-causing mutations. PMID:24665072

  4. Experimental study and modelization of a propane storage tank depressurization

    International Nuclear Information System (INIS)

    Veneau, Tania

    1995-01-01

    The risks associated with the fast depressurization of propane storage tanks reveal the importance of determining the 'source term'. This term is directly linked, among others, to the characteristics of the jet that develops downstream of the breach. The first aim of this work was to provide an original data bank of drop velocity and diameter distributions in a propane jet. For this purpose, a phase Doppler anemometer has been implemented on an experimental set-up. Propane blowdowns were performed with different breach sizes and several initial pressures in the storage tank. Drop diameter and velocity distributions were investigated at different locations in the jet zone, and these measurements exhibited the fragmentation and vaporisation trends in the jet. The second aim of this work concerned the 'source term' itself. It required studying the coupling between the fluid behaviour inside the tank and the flow through the breach. The model took into account the phase exchange when flashing occurred in the tank, and the flow at the breach was described with a homogeneous relaxation model. This coupled model has been successfully and exhaustively validated; its originality lies in its application to propane flows. (author) [fr

  5. Numerical study of similarity in prototype and model pumped turbines

    International Nuclear Information System (INIS)

    Li, Z J; Wang, Z W; Bi, H L

    2014-01-01

    Similarity study of prototype and model pumped turbines are performed by numerical simulation and the partial discharge case is analysed in detail. It is found out that in the RSI (rotor-stator interaction) region where the flow is convectively accelerated with minor flow separation, a high level of similarity in flow patterns and pressure fluctuation appear with relative pressure fluctuation amplitude of model turbine slightly higher than that of prototype turbine. As for the condition in the runner where the flow is convectively accelerated with severe separation, similarity fades substantially due to different topology of flow separation and vortex formation brought by distinctive Reynolds numbers of the two turbines. In the draft tube where the flow is diffusively decelerated, similarity becomes debilitated owing to different vortex rope formation impacted by Reynolds number. It is noted that the pressure fluctuation amplitude and characteristic frequency of model turbine are larger than those of prototype turbine. The differences in pressure fluctuation characteristics are discussed theoretically through dimensionless Navier-Stokes equation. The above conclusions are all made based on simulation without regard to the penstock response and resonance

  6. A Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand all over the world for small-scale world maps in large formats. Further study of automated mapping technology, and especially the realization of small-scale production of a global map, is a key problem that the cartographic field needs to solve. In light of this, this paper adopts an improved model (with the map and data separated) for map generalization, which separates geographic data from mapping data; it mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical features in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21,845 basic algorithms and over 2500 related functional modules. To evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take world map generalization at small scale as an example. After the generalization process, combining and simplifying the scattered islands makes the map more legible at the 1:2.1 billion scale, and the map features become more complete and accurate. The model not only significantly enhances map generalization at various scales, but also achieves integration among map production at various scales, suggesting that it provides a reference for cartographic generalization across scales.
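
    The island simplification mentioned above is performed by the authors' knowledge engine, whose algorithms are not given here. As a generic illustration of the kind of line-generalization step involved, this is the classic Douglas-Peucker algorithm applied to a made-up "coastline"; the tolerance and coordinates are arbitrary.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    """Classic recursive simplification: keep the farthest interior point
    if it exceeds tol, otherwise collapse the run to its endpoints."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > tol:
        left = douglas_peucker(points[:i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]

coast = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 0.1), (5, 0)]
simplified = douglas_peucker(coast, tol=0.5)
# Small wiggles are dropped; the prominent headland at (3, 5) survives.
```

    A production generalization engine layers many such operators (selection, aggregation, displacement) with rules deciding when each applies.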

  7. The accident consequence model of the German safety study

    International Nuclear Information System (INIS)

    Huebschmann, W.

    1977-01-01

    The accident consequence model essentially describes a) the diffusion in the atmosphere, and deposition on the soil, of radioactive material released from the reactor into the atmosphere; and b) the radiation exposure and health consequences for the persons affected. It is used to calculate c) the number of persons suffering acute or late damage, taking into account possible countermeasures such as relocation or evacuation, and d) the total risk to the population from the various types of accident. The model and its underlying parameters and assumptions are described. The bone marrow dose distribution is shown for the case of late overpressure containment failure, which is discussed in the paper of Heuser/Kotthoff, combined with four typical weather conditions. The probability distribution functions for acute mortality, late incidence of cancer and genetic damage are evaluated, assuming a characteristic population distribution. The aim of these calculations is, first, to present some results of the consequence model as an example, and second, to identify problems which may need to be evaluated in more detail in a second phase of the study. (orig.) [de

  8. A study of Doppler waveform using pulsatile flow model

    International Nuclear Information System (INIS)

    Chung, Hye Won; Chung, Myung Jin; Park, Jae Hyung; Chung, Jin Wook; Lee, Dong Hyuk; Min, Byoung Goo

    1997-01-01

    The aim was to construct a pulsatile flow model, using an artificial heart pump and a stenosis, that demonstrates the triphasic Doppler waveform and simulates in vivo conditions, and to evaluate the relationship between Doppler waveform and vascular compliance. The flow model was constructed using a flowmeter, rubber tube, glass tube with a stenosis, and an artificial heart pump. The Doppler study was carried out at the prestenotic, poststenotic, and distal segments; compliance was changed by changing the length of the rubber tube. With increasing proximal compliance, the Doppler waveforms showed decreasing peak velocity of the first phase and slightly delayed acceleration time, but the waveform itself did not change significantly. Distal compliance influenced the second phase and was important for the formation of pulsus tardus and parvus, which did not develop without poststenotic vascular compliance. The peak velocity of the first phase was inversely proportional to proximal compliance, and those of the second and third phases were directly proportional to distal compliance. After constructing this pulsatile flow model, we were able to explain the relationship between vascular compliance and Doppler waveform, and also to better understand the formation of pulsus tardus and parvus
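
    The qualitative dependence of pulse amplitude on compliance can be reproduced with a standard two-element Windkessel lumped model. This is a textbook analogue, not a model of the authors' physical rig; the inflow shape, resistance, and compliance values below are illustrative assumptions.

```python
import math

def windkessel_trace(compliance, resistance=1.0, beats=5, dt=1e-3):
    """Two-element Windkessel, C*dP/dt = Q(t) - P/R, driven by a half-sine
    systolic inflow; returns the pressure trace of the final beat."""
    period, systole = 1.0, 0.3
    p, trace = 80.0, []
    n_steps = int(beats * period / dt)
    last_beat = n_steps - int(period / dt)
    for i in range(n_steps):
        t = (i * dt) % period
        q = 400.0 * math.sin(math.pi * t / systole) if t < systole else 0.0
        p += dt * (q - p / resistance) / compliance  # explicit Euler step
        if i >= last_beat:
            trace.append(p)
    return trace

stiff = windkessel_trace(compliance=0.5)
compliant = windkessel_trace(compliance=2.0)
pulse_stiff = max(stiff) - min(stiff)
pulse_compliant = max(compliant) - min(compliant)
# Higher compliance damps the pulse amplitude, analogous to the damped
# second phase observed distal to a compliant segment.
```

    Lumped models of this kind only capture amplitude and phase trends, not the spatial waveform changes across a stenosis.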

  9. Study on the development of geological environmental model. 2

    International Nuclear Information System (INIS)

    Tsujimoto, Keiichi; Shinohara, Yoshinori; Saito, Shigeyuki; Ueta, Shinzo; Ohashi, Toyo; Sasaki, Ryouichi; Tomiyama, Shingo

    2003-02-01

    The safety performance assessment was carried out for an imaginary geological environment in conventional research and development on geological disposal, but the importance of safety assessment based on repository designs and scenarios that consider a concrete geological environment will increase in the future. Research considering the linkage of the three major fields of geological disposal, namely investigation of the geological environment, repository design, and safety performance assessment, is a contemporary worldwide research theme. Hence it is important to organize an information flow that covers the series of information processing steps from data production to analysis in the three fields, and to systematize a knowledge base that unifies the information flow hierarchically. The information flow for the geological environment model generation process was examined and modified based on the products of the 2002 report 'Study on the development of geological environment model'. The work flow diagrams for geological structure and hydrology were modified, and those for geochemistry and rock properties were examined from scratch. Furthermore, a database design was examined to build a geological environment database (knowledge base) based on the results of the systematization of the environment model generation technology. The geological environment database was designed and a prototype system was built to support the database design. (author)

  10. A comprehensive experimental and modeling study of 2-methylbutanol combustion

    KAUST Repository

    Park, Sungwoo

    2015-05-01

    2-Methylbutanol (2-methyl-1-butanol) is one of several next-generation biofuels that can be used as an alternative fuel or blending component for combustion engines. This paper presents new experimental data for 2-methylbutanol, including ignition delay times in a high-pressure shock tube and premixed laminar flame speeds in a constant volume combustion vessel. Shock tube ignition delay times were measured for 2-methylbutanol/air mixtures at three equivalence ratios, temperatures ranging from 750 to 1250 K, and nominal pressures near 20 and 40 bar. Laminar flame speed data were obtained using the spherically propagating premixed flame configuration at pressures of 1, 2, and 5 bar. A detailed chemical kinetic model for 2-methylbutanol oxidation was developed, including high- and low-temperature chemistry, based on previous modeling studies of butanol and pentanol isomers. The proposed model was tested against new and existing experimental data at pressures of 1-40 atm, temperatures of 740-1636 K, and equivalence ratios of 0.25-2.0. Reaction path and sensitivity analyses were conducted to identify key reactions at various combustion conditions and to obtain a better understanding of the combustion characteristics of larger alcohols.
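
    Shock-tube ignition delay data of this kind are commonly correlated with an Arrhenius-type expression, tau = A * exp(Ea / (R*T)), fit as a straight line in ln(tau) versus 1/T. The sketch below fits such a correlation to synthetic data; it is a generic illustration, not the paper's detailed kinetic model, and the A and Ea values are invented.

```python
import math

R_GAS = 8.314  # J/(mol*K)

def fit_arrhenius(temps_K, delays):
    """Least-squares fit of ln(tau) = ln(A) + Ea/(R*T); returns (A, Ea)."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(d) for d in delays]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), slope * R_GAS

# Synthetic delay times (ms) generated from an assumed pre-exponential
# factor and activation energy, then recovered by the fit.
temps = [900.0, 1000.0, 1100.0, 1200.0]
true_A, true_Ea = 1.0e-4, 1.5e5
delays = [true_A * math.exp(true_Ea / (R_GAS * t)) for t in temps]
A_fit, Ea_fit = fit_arrhenius(temps, delays)
```

    Real delay data also depend on pressure and equivalence ratio, which correlations typically absorb through extra power-law factors.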

  11. Pharmacokinetic-Pharmacodynamic Modeling to Study the Antipyretic Effect of Qingkailing Injection on Pyrexia Model Rats

    Directory of Open Access Journals (Sweden)

    Zhixin Zhang

    2016-03-01

    Qingkailing injection (QKLI) is a modern Chinese medicine preparation derived from a well-known classical formulation, An-Gong-Niu-Huang Wan. Although the clinical efficacy of QKLI is well established, severe adverse drug reactions (ADRs) have been reported with increasing frequency. Through thorough attempts to reduce ADR rates, it has been realized that effect-based rational use plays the key role in clinical practice. Hence, a pharmacokinetic-pharmacodynamic (PK-PD) model was introduced in the present study, aiming to link the pharmacokinetic profiles with the therapeutic outcomes of QKLI, and subsequently to provide guidelines for the rational use of QKLI in clinical settings. The PK properties of the six dominant ingredients in QKLI were compared between the normal treated group (NTG) and the pyrexia model group (MTG). Rectal temperatures were measured in parallel with blood sampling for the NTG, MTG, model control group (MCG), and normal control group (NCG). Baicalin and geniposide exhibited appropriate PK parameters and were selected as the PK markers to map the antipyretic effect of QKLI. A PK-PD model was then constructed on the baicalin and geniposide plasma concentrations versus the rectal temperature variation values, using a two-compartment PK model with a sigmoid Emax PD model to explain the time delay between the plasma concentrations of the PK markers and the antipyretic effect after a single-dose administration of QKLI. The findings would provide fundamental information for proposing a more reasonable dosage regimen and improving the level of individualized drug therapy in clinical settings.
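
    The link-model structure named in the abstract (two-compartment PK, effect compartment, sigmoid Emax PD) can be sketched generically. All parameter values below are invented for illustration and are not the fitted QKLI parameters; the point is the hysteresis: the effect peaks after the plasma concentration does.

```python
import math

def plasma_conc(t):
    """Bi-exponential two-compartment profile after an IV bolus
    (assumed macro-constants, arbitrary concentration units)."""
    return 10.0 * math.exp(-1.2 * t) + 2.0 * math.exp(-0.15 * t)

def effect_course(ke0=0.8, emax=1.5, ec50=3.0, hill=2.0, t_end=12.0, dt=0.01):
    """Effect-compartment link (Euler integration) feeding a sigmoid Emax
    model; returns the list of (time, effect) pairs."""
    ce, course = 0.0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        ce += dt * ke0 * (plasma_conc(t) - ce)        # dCe/dt = ke0*(C - Ce)
        effect = emax * ce**hill / (ec50**hill + ce**hill)
        course.append((t, effect))
    return course

course = effect_course()
t_peak_effect = max(course, key=lambda p: p[1])[0]
# Plasma concentration peaks at t = 0 (bolus), but the effect peaks later:
# the delay the effect compartment is there to capture.
```

    In a fitted model, ke0, EC50, Emax and the Hill coefficient would be estimated from the paired concentration and rectal-temperature data.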

  12. Monte Carlo study of the hull distribution for the q = 1 Brauer model

    NARCIS (Netherlands)

    Kager, W.; Nienhuis, B.

    2006-01-01

    We study a special case of the Brauer model in which every path of the model has weight q = 1. The model has been studied before as a solvable lattice model and can be viewed as a Lorentz lattice gas. The paths of the model are also called self-avoiding trails. We consider the model in a triangle

  13. Model sensitivity studies of the decrease in atmospheric carbon tetrachloride

    Directory of Open Access Journals (Sweden)

    M. P. Chipperfield

    2016-12-01

    Carbon tetrachloride (CCl4) is an ozone-depleting substance which is controlled by the Montreal Protocol and whose atmospheric abundance is decreasing. However, the currently observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74% of the total), but a reported 10% uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9% of the total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17% of the total) has the largest impact on modelled CCl4 decay, due to its sizeable contribution to CCl4 loss and its large lifetime uncertainty range (147 to 241 years). With an assumed CCl4 emission rate of 39 Gg year⁻¹, the reference simulation with the best estimate of the loss processes still underestimates the observed CCl4 (i.e. overestimates the decay) over the past 2 decades, but to a smaller extent than in previous studies. Changes to the rates of the CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year⁻¹. Further progress in constraining the CCl4 budget is partly limited by
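
    Because loss rates add, partial lifetimes combine as 1/tau_total = sum(1/tau_i), and the budget logic can be explored with a one-box model. The sketch below uses illustrative partial lifetimes chosen to match the quoted 74%/17%/9% split, and an initial burden that is purely an assumption; it is not the paper's 3-D transport model.

```python
def total_lifetime(partial_lifetimes):
    """Loss rates add, so 1/tau_total = sum(1/tau_i)."""
    return 1.0 / sum(1.0 / tau for tau in partial_lifetimes)

def box_model(burden, emission, tau, years, dt=0.05):
    """One-box budget dB/dt = E - B/tau, integrated with Euler steps
    (burden in Gg, emission in Gg/yr, lifetimes in years)."""
    for _ in range(int(years / dt)):
        burden += dt * (emission - burden / tau)
    return burden

# Illustrative partial lifetimes (years) for photolysis/ocean/soil chosen
# so each contributes ~74% / 17% / 9% of the total loss, as quoted above.
tau_total = total_lifetime([42.0, 183.0, 346.0])
burden_after_20yr = box_model(burden=2500.0, emission=39.0,
                              tau=tau_total, years=20.0)
```

    With these numbers the burden relaxes toward the steady state E*tau, so either a longer effective lifetime or higher emissions slows the modelled decay, which is the trade-off the abstract describes.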

  14. Modelling catchment areas for secondary care providers: a case study.

    Science.gov (United States)

    Jones, Simon; Wardlaw, Jessica; Crouch, Susan; Carolan, Michelle

    2011-09-01

    Hospitals need to understand patient flows in an increasingly competitive health economy. New initiatives like Patient Choice and the Darzi Review further increase this demand. Essential to understanding patient flows are the demographic and geographic profiles of health care service providers, known as 'catchment areas' and 'catchment populations'. This information helps Primary Care Trusts (PCTs) to review how their populations are accessing services, measure inequalities and commission services; likewise it assists Secondary Care Providers (SCPs) to measure and assess potential gains in market share, redesign services, evaluate admission thresholds and plan financial budgets. Unlike PCTs, SCPs do not operate within fixed geographic boundaries. Traditionally, SCPs have used administrative boundaries or arbitrary drive times to model catchment areas. Neither approach satisfactorily represents current patient flows. Furthermore, these techniques are time-consuming and can be challenging for healthcare managers to exploit. This paper presents three different approaches to defining catchment areas, each more detailed than the previous method. The first approach, 'First Past the Post', defines catchment areas by allocating a dominant SCP to each Census Output Area (OA): the SCP with the highest proportion of activity within each OA is considered the dominant SCP. The second approach, 'Proportional Flow', allocates activity proportionally to each OA, which allows cross-boundary flows to be captured in a catchment area. The third and final approach uses a gravity model to define a catchment area, incorporating drive or travel time into the analysis. Comparing approaches helps healthcare providers to understand whether using more traditional and simplistic approaches to define catchment areas and populations achieves the same or similar results as complex mathematical modelling. This paper has demonstrated, using a case study of Manchester, that when estimating
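
    The first two catchment definitions can be stated compactly in code. The sketch below uses invented OA-level activity and population figures; it is a schematic of the 'First Past the Post' and 'Proportional Flow' rules as described above, not the authors' implementation.

```python
from collections import defaultdict

# Hypothetical activity counts: patients from each Output Area (OA)
# attending each secondary care provider, plus OA populations.
activity = {
    "OA1": {"HospA": 80, "HospB": 20},
    "OA2": {"HospA": 30, "HospB": 70},
    "OA3": {"HospA": 55, "HospB": 45},
}
population = {"OA1": 1500, "OA2": 2000, "OA3": 1000}

def first_past_the_post(activity, population):
    """Each OA's whole population is assigned to its dominant provider."""
    catch = defaultdict(float)
    for oa, flows in activity.items():
        dominant = max(flows, key=flows.get)
        catch[dominant] += population[oa]
    return dict(catch)

def proportional_flow(activity, population):
    """Population is split in proportion to observed flows, so
    cross-boundary flows are captured."""
    catch = defaultdict(float)
    for oa, flows in activity.items():
        total = sum(flows.values())
        for provider, n in flows.items():
            catch[provider] += population[oa] * n / total
    return dict(catch)

fptp = first_past_the_post(activity, population)
prop = proportional_flow(activity, population)
```

    A gravity model would go one step further, weighting each OA-provider pair by a decreasing function of travel time before normalizing.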

  15. A study of quality measures for protein threading models

    Directory of Open Access Journals (Sweden)

    Rychlewski Leszek

    2001-08-01

    Abstract Background Prediction of protein structures is one of the fundamental challenges in biology today. To fully understand how well different prediction methods perform, it is necessary to use measures that evaluate their performance. Every two years, starting in 1994, the CASP (Critical Assessment of protein Structure Prediction) process has been organized to evaluate the ability of different predictors to blindly predict the structure of proteins. To capture different features of the models, several measures have been developed during the CASP processes. However, these measures have not been examined in detail before. In an attempt to develop fully automatic measures that can be used in CASP, as well as in other types of benchmarking experiments, we have compared twenty-one measures. These include the measures used in CASP3 and CASP2 as well as measures introduced later. We have studied their ability to distinguish between the better and worse models submitted to CASP3 and the correlation between them. Results Using a small set of 1340 models for 23 different targets, we show that most measures correlate with each other. Most pairs of measures show a correlation coefficient of about 0.5. The correlation is slightly higher for measures of similar types. We found that a significant problem when developing automatic measures is how to deal with proteins of different length. The comparison between different measures is also complicated because many measures depend on the size of the target. We show that the manual assessment can be reproduced to about 70% using automatic measures. Alignment-independent measures detect slightly more of the models with the correct fold, while alignment-dependent measures agree better when selecting the best models for each target. Finally, we show that using automatic measures would, to a large extent, reproduce the assessors' ranking of the predictors at CASP3. Conclusions We show that given a
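
    The pairwise comparison of measures reported above reduces to computing a correlation coefficient over per-model scores. A minimal Pearson sketch, with invented scores for six hypothetical models under two hypothetical measures:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation between two per-model score lists."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    var_x = sum((x - xbar) ** 2 for x in xs)
    var_y = sum((y - ybar) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Invented scores, e.g. an alignment-dependent measure (a) versus an
# alignment-independent one (b) over the same six models.
measure_a = [0.82, 0.55, 0.73, 0.40, 0.91, 0.62]
measure_b = [0.78, 0.60, 0.65, 0.48, 0.85, 0.55]
r = pearson(measure_a, measure_b)
```

    Because many CASP measures scale with target length, correlations in practice are often computed per target or on length-normalized scores.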

  16. Alveolocapillary model system to study alveolar re-epithelialization

    Energy Technology Data Exchange (ETDEWEB)

    Willems, Coen H.M.P.; Zimmermann, Luc J.I.; Sanders, Patricia J.L.T.; Wagendorp, Margot; Kloosterboer, Nico [Department of Paediatrics, School for Oncology and Developmental Biology (GROW), Maastricht University Medical Centre, Maastricht (Netherlands)]; Cohen Tervaert, Jan Willem [Division of Clinical and Experimental Immunology, Department of Internal Medicine, Maastricht University Medical Centre, Maastricht (Netherlands)]; Duimel, Hans J.Q.; Verheyen, Fons K.C.P. [Electron Microscopy Unit, Department of Molecular Cell Biology, Maastricht University Medical Centre, Maastricht (Netherlands)]; Iwaarden, J. Freek van, E-mail: f.vaniwaarden@maastrichtuniversity.nl [Department of Paediatrics, School for Oncology and Developmental Biology (GROW), Maastricht University Medical Centre, Maastricht (Netherlands)]

    2013-01-01

    In the present study an in vitro bilayer model system of the pulmonary alveolocapillary barrier was established to investigate the role of the microvascular endothelium on re-epithelialization. The model system, confluent monolayer cultures on opposing sides of a porous membrane, consisted of a human microvascular endothelial cell line (HPMEC-ST1.6R) and an alveolar type II like cell line (A549), stably expressing EGFP and mCherry, respectively. These fluorescent proteins allowed the real time assessment of the integrity of the monolayers and the automated analysis of the wound healing process after a scratch injury. The HPMECs significantly attenuated the speed of re-epithelialization, which was associated with the proximity to the A549 layer. Examination of cross-sectional transmission electron micrographs of the model system revealed protrusions through the membrane pores and close contact between the A549 cells and the HPMECs. Immunohistochemical analysis showed that these close contacts consisted of heterocellular gap-, tight- and adherens-junctions. Additional analysis, using a fluorescent probe to assess gap-junctional communication, revealed that the HPMECs and A549 cells were able to exchange the fluorophore, which could be abrogated by disrupting the gap junctions using connexin mimetic peptides. These data suggest that the pulmonary microvascular endothelium may impact the re-epithelialization process. -- Highlights: ► Model system for vital imaging and high throughput screening. ► Microvascular endothelium influences re-epithelialization. ► A549 cells form protrusions through membrane to contact HPMEC. ► A549 cells and HPMECs form heterocellular tight-, gap- and adherens-junctions.

  17. In vitro placental model optimization for nanoparticle transport studies

    Directory of Open Access Journals (Sweden)

    Cartwright L

    2012-01-01

    Full Text Available Laura Cartwright,1 Marie Sønnegaard Poulsen,2 Hanne Mørck Nielsen,3 Giulio Pojana,4 Lisbeth E Knudsen,2 Margaret Saunders,1 Erik Rytting2,5 (1Bristol Initiative for Research of Child Health (BIRCH), Biophysics Research Unit, St Michael's Hospital, UH Bristol NHS Foundation Trust, Bristol, UK; 2University of Copenhagen, Faculty of Health Sciences, Department of Public Health; 3University of Copenhagen, Faculty of Pharmaceutical Sciences, Department of Pharmaceutics and Analytical Chemistry, Copenhagen, Denmark; 4Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Venice, Italy; 5Department of Obstetrics and Gynecology, University of Texas Medical Branch, Galveston, Texas, USA). Background: Advances in biomedical nanotechnology raise hopes in patient populations but may also raise questions regarding biodistribution and biocompatibility, especially during pregnancy. Special consideration must be given to the placenta as a biological barrier because a pregnant woman's exposure to nanoparticles could have significant effects on the fetus developing in the womb. Therefore, the purpose of this study is to optimize an in vitro model for characterizing the transport of nanoparticles across human placental trophoblast cells. Methods: The growth of BeWo (clone b30) human placental choriocarcinoma cells for nanoparticle transport studies was characterized in terms of optimized Transwell® insert type and pore size, the investigation of barrier properties by transmission electron microscopy, tight junction staining, transepithelial electrical resistance, and fluorescein sodium transport. Following the determination of nontoxic concentrations of fluorescent polystyrene nanoparticles, the cellular uptake and transport of 50 nm and 100 nm diameter particles was measured using the in vitro BeWo cell model. Results: Particle size measurements, fluorescence readings, and confocal microscopy indicated both cellular uptake of

  18. Fidelity study of superconductivity in extended Hubbard models

    Science.gov (United States)

    Plonka, N.; Jia, C. J.; Wang, Y.; Moritz, B.; Devereaux, T. P.

    2015-07-01

    The Hubbard model with local on-site repulsion is generally thought to possess a superconducting ground state for appropriate parameters, but the effects of more realistic long-range Coulomb interactions have not been studied extensively. We study the influence of these interactions on superconductivity by including nearest- and next-nearest-neighbor extended Hubbard interactions in addition to the usual on-site terms. Utilizing numerical exact diagonalization, we analyze the signatures of superconductivity in the ground states through the fidelity metric of quantum information theory. We find that nearest- and next-nearest-neighbor interactions have thresholds above which they destabilize superconductivity regardless of whether they are attractive or repulsive, seemingly due to competing charge fluctuations.
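As a toy illustration of the fidelity metric used in this study, the sketch below computes the overlap between ground states of a two-level Hamiltonian at nearby parameter values; the Hamiltonian and all numbers are illustrative stand-ins, not the extended Hubbard model of the paper. The fidelity stays near 1 where the ground state varies smoothly and dips where the state reorganizes.

```python
import numpy as np

def ground_state(lmbda):
    """Ground state of a toy two-level Hamiltonian H = sigma_x + lmbda * sigma_z."""
    H = np.array([[lmbda, 1.0],
                  [1.0, -lmbda]])
    vals, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    return vecs[:, 0]                # eigenvector of the lowest eigenvalue

def fidelity(lmbda, delta=1e-3):
    """Overlap |<psi0(lmbda)|psi0(lmbda+delta)>| between nearby ground states."""
    return abs(np.dot(ground_state(lmbda), ground_state(lmbda + delta)))

# The dip is strongest where the ground state changes fastest (here, lmbda = 0).
far = fidelity(5.0)
near = fidelity(0.0)
```

In a many-body setting the same quantity, evaluated across a parameter sweep, flags the boundaries where one ground-state phase gives way to another.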

  19. Comparative study of cost models for tokamak DEMO fusion reactors

    International Nuclear Information System (INIS)

    Oishi, Tetsutarou; Yamazaki, Kozo; Arimoto, Hideki; Ban, Kanae; Kondo, Takuya; Tobita, Kenji; Goto, Takuya

    2012-01-01

    Cost evaluation analysis of the tokamak-type demonstration reactor DEMO using the PEC (physics-engineering-cost) system code is underway to establish a cost evaluation model for the DEMO reactor design. As a reference case, a DEMO reactor based on the SSTR (steady state tokamak reactor) was designed using the PEC code. The calculated total capital cost was of the same order as that proposed previously in cost evaluation studies for the SSTR. Design parameter scanning analysis and multiple regression analysis illustrated the effect of the parameters on the total capital cost. The capital cost was predicted to lie within the range of several thousand M$ in this study. (author)

  20. System studies for micro grid design: modeling and simulation examples

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Arlei Lucas S. [Federal University of Juiz de Fora (UFJF), MG (Brazil); Ribeiro, Paulo F. [Calvin College, Grand Rapids, MI (United States). Electrical Engineering Dept.

    2009-07-01

    The search for new energy sources to replace or augment the existing power plants of the traditional system leads to changes in concepts of energy generation and consumption. In this context, the concept of the Microgrid has been developed and opens the opportunity for local power generation. The Microgrid can operate connected to the existing distribution network, increasing the reliability and safety of the system. Control measures and electronic interfaces are employed to maintain the network as a single, strong unit under any perturbation. This paper presents the typical system studies required to investigate the performance of a Microgrid operating under different system conditions (e.g. interconnected with or isolated from the utility grid, and under system disturbance). Load flow and electromagnetic transient studies are used for modeling and simulation of a typical Microgrid configuration. (author)
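A minimal sketch of the kind of load flow study mentioned above, using a two-bus Gauss-Seidel iteration in per-unit quantities; the line impedance and load values are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Bus 1: slack bus (utility grid), fixed at V1 = 1.0 pu.
# Bus 2: microgrid load bus, voltage solved iteratively.
V1 = 1.0 + 0.0j
z_line = 0.01 + 0.05j            # assumed line impedance, pu
y = 1.0 / z_line
Y21, Y22 = -y, y                 # admittance-matrix entries seen from bus 2
S_load = 0.50 + 0.20j            # assumed load, pu; net injection is -S_load
P_inj, Q_inj = -S_load.real, -S_load.imag

V2 = 1.0 + 0.0j                  # flat start
for _ in range(100):
    # Standard Gauss-Seidel update: Vi = (1/Yii) [ (Pi - jQi)/Vi* - sum(Yik Vk) ]
    V2 = ((P_inj - 1j * Q_inj) / np.conj(V2) - Y21 * V1) / Y22

# Power mismatch at bus 2 (should be ~0 at convergence).
I2 = Y21 * V1 + Y22 * V2
S_calc = V2 * np.conj(I2)
mismatch = abs(S_calc - (P_inj + 1j * Q_inj))
```

For this light load the iteration converges quickly to a bus-2 voltage just below 1.0 pu; a production study would of course use a full multi-bus solver rather than this two-bus sketch.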

  1. SVM and ANFIS Models for Precipitation Modeling (Case Study: Gonbad Kavouse)

    Directory of Open Access Journals (Sweden)

    N. Zabet Pishkhani

    2016-10-01

    Full Text Available Introduction: In recent years, intelligent models have attracted increasing attention as new techniques and tools for modeling hydrological processes such as precipitation forecasting. The ANFIS model has good training, construction, and classification ability, and has the advantage of allowing fuzzy rules to be extracted from numerical information or expert knowledge. Another intelligent technique used in various areas in recent years is the support vector machine (SVM). In this paper, the ability of artificial intelligence methods, including the support vector machine (SVM) and the adaptive neuro-fuzzy inference system (ANFIS), was analyzed for monthly precipitation prediction. Materials and Methods: The study area was the city of Gonbad in Golestan Province. The climate is temperate in the southern highlands, temperate and humid in the southern plains and mountains, and semi-arid north of the Gorganroud River; overall, the city's climate is temperate and humid. In the present study, monthly precipitation in Gonbad was modeled using ANFIS and SVM, and two different input structures were designed. In the first structure, the input layer consisted of mean temperature, relative humidity, pressure and wind speed at the Gonbad station. In the second structure, based on the Pearson coefficient, monthly precipitation data were used from the four stations (Arazkoose, Bahalke, Tamar and Aqqala) whose precipitation had the highest correlation with that of the Gonbad station. Precipitation data from 1995 to 2012 were used; 80% of the data were used for model training and the remaining 20% for validation. SVM was developed by Vapnik in the 1990s and has been widely recognized as a powerful tool for function fitting problems. An adaptive neuro-fuzzy inference system (ANFIS) refers, in general, to an adaptive network which performs the function of a fuzzy inference system. The most commonly used fuzzy system in ANFIS architectures is the Sugeno model.
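As a rough illustration of the kernel-based regression underlying SVM-style precipitation models, the sketch below fits a kernel ridge regressor, a simpler stand-in for a full SVR that shares the same RBF kernel idea, to synthetic data; the data and hyperparameters are invented for illustration and bear no relation to the Gonbad station records.

```python
import numpy as np

# Synthetic stand-in for the meteorological predictor/target pairs.
X = np.linspace(0.0, 6.0, 80)[:, None]
y = np.sin(X[:, 0])

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Kernel ridge fit: alpha = (K + lam*I)^{-1} y
K = rbf_kernel(X, X)
lam = 1e-4
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xq):
    return rbf_kernel(Xq, X) @ alpha

pred = predict(np.array([[np.pi / 2]]))[0]   # true value is sin(pi/2) = 1
```

An actual SVR replaces the squared loss with an epsilon-insensitive loss, which yields a sparse set of support vectors, but the kernel machinery, and the sensitivity to the kernel width and regularization constant that such studies tune, is the same.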

  2. Surface science studies of ethene containing model interstellar ices

    Science.gov (United States)

    Puletti, F.; Whelan, M.; Brown, W. A.

    2011-05-01

    The formation of saturated hydrocarbons in the interstellar medium (ISM) is difficult to explain only by taking into account gas phase reactions. This is mostly due to the fact that carbonium ions only react with H_2 to make unsaturated hydrocarbons, and hence no viable route to saturated hydrocarbons has been postulated to date. It is therefore likely that saturation processes occur via surface reactions that take place on interstellar dust grains. One of the species of interest in this family of reactions is C_2H_4 (ethene) which is an intermediate in several molecular formation routes (e.g. C_2H_2 → C_2H_6). To help to understand some of the surface processes involving ethene, a study of ethene deposited on a dust grain analogue surface (highly oriented pyrolytic graphite) held under ultra-high vacuum at 20 K has been performed. The adsorption and desorption of ethene has been studied both in water-free and water-dominated model interstellar ices. A combination of temperature programmed desorption (TPD) and reflection absorption infrared spectroscopy (RAIRS) has been used to identify the adsorbed and trapped species and to determine the kinetics of the desorption processes. In all cases, ethene is found to physisorb on the carbonaceous surface. As expected, water has a very strong influence on the desorption of ethene, as previously observed for other model interstellar ice systems.

  3. Interpersonal social responsibility model of service learning: A longitudinal study.

    Science.gov (United States)

    Lahav, Orit; Daniely, Noa; Yalon-Chamovitz, Shira

    2018-01-01

    Service-learning (SL) is commonly used in Occupational Therapy (OT) programs worldwide as a community placement educational strategy. However, most SL models are not clearly defined in terms of either methodology or learning outcomes. This longitudinal study explores a structured model of service-learning (Interpersonal Social Responsibility-Service Learning: ISR-SL) aimed at developing professional identity among OT students, drawing on students' experiences from the end of the course through later stages as mature students and professionals. A qualitative research design was used to explore the perceptions and experiences of 150 first-, second-, and third-year OT students and graduates who had participated in ISR-SL during their first academic year. Our findings suggest that the structured, long-term relationship with a person with a disability in the natural environment, which is the core of the ISR-SL, allowed students to develop a professional identity based on seeing the person as a whole and recognizing his/her centrality in the therapeutic relationship. This study suggests ISR-SL as a future direction, or next step, for implementing SL in OT and other healthcare discipline programs.

  4. Hydropower recovery in water supply systems: Models and case study

    International Nuclear Information System (INIS)

    Vilanova, Mateus Ricardo Nogueira; Balestieri, José Antônio Perrella

    2014-01-01

    Highlights: • We present hydropower recovery models for water supply systems. • Hydropower recovery potential in water supply systems is highly variable. • The case studied could make the supply systems self-sufficient in terms of energy. • Hydropower recovery can reduce GHG emissions and generate carbon credits. - Abstract: The energy efficiency of water supply systems can be increased through the recovery of the hydraulic energy implicit in the volumes of water transported at various stages of the supply process, which can be converted into electricity through hydroelectric recovery systems. Such a process allows the use of a clean energy source that is usually neglected in water supplies, reducing the system's dependence on energy from the local network and its operation costs. This article evaluates the possibilities and benefits of the use of water supply facilities, structures and equipment for hydraulic energy recovery, addressing several applicable hydroelectric models. A real case study was developed in Brazil to illustrate the technical, economic and environmental aspects of hydropower recovery in water supply systems.
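The recoverable hydraulic power behind such schemes follows the standard relation P = ρgQHη; a small sketch with illustrative flow, head, and efficiency values (none taken from the Brazilian case study):

```python
def hydro_power_kw(flow_m3s, head_m, efficiency=0.85, rho=1000.0, g=9.81):
    """Recoverable hydraulic power P = rho * g * Q * H * eta, returned in kW."""
    return rho * g * flow_m3s * head_m * efficiency / 1000.0

# Example: a pressure-reducing point with 0.1 m^3/s of flow and 50 m of excess head.
p_kw = hydro_power_kw(0.1, 50.0)   # about 41.7 kW
```

The product of flow and excess head is what varies so widely between supply systems, which is why the recovery potential reported in such studies is itself highly variable.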

  5. Impacts modeling using the SPH particulate method. Case study

    International Nuclear Information System (INIS)

    Debord, R.

    1999-01-01

    The aim of this study is the modeling of the impact of melted metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it requires coupling with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. Then, the theoretical and numerical reliability of the SPH method is assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations. Also, the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct. The domain of validity of these coefficients was determined for a low-speed impact. (J.S.)

  6. Mouse models for the study of postnatal cardiac hypertrophy

    Directory of Open Access Journals (Sweden)

    A. Del Olmo-Turrubiarte

    2015-06-01

    Full Text Available The main objective of this study was to create a postnatal model for cardiac hypertrophy (CH), in order to explain the mechanisms present in childhood cardiac hypertrophy. Five days after implantation, intraperitoneal (IP) isoproterenol (ISO) was injected into pregnant female mice for 7 days. The fetuses were obtained at 15, 17 and 19 dpc from both groups, as were newborns (NB), neonates (7–15 days) and young adults (6 weeks of age). Histopathological exams were done on the hearts. Immunohistochemistry and western blot demonstrated GATA4 and PCNA protein expression, and real-time qPCR the mRNA of the adrenergic receptors (α-AR and β-AR), alpha and beta myosins (α-MHC, β-MHC) and GATA4. After the administration of ISO, there was no change in the number of offspring. We observed significant structural changes in the size of the offspring hearts. Morphometric analysis revealed an increase in the size of the left ventricular wall and interventricular septum (IVS). Histopathological analysis demonstrated loss of cellular compaction and the presence of small fibrous foci in the left ventricle after birth. Adrenergic receptors might be responsible for changing a physiological into a pathological hypertrophy; however, GATA4 seemed to be the determining factor in the pathology. A new animal model was established for the study of pathologic CH in early postnatal stages.

  7. Modeling digital breast tomosynthesis imaging systems for optimization studies

    Science.gov (United States)

    Lau, Beverly Amy

    Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is the optimal parameter settings to obtain images ideal for detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point response functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a
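The convolution-based scatter model described above can be sketched as follows; the image, the Gaussian sPSF, and the SPR value are illustrative stand-ins for the Monte Carlo-derived quantities in the dissertation.

```python
import numpy as np

# Primary image of a uniform object with a small low-contrast detail (arbitrary units).
primary = np.ones((64, 64))
primary[30:34, 30:34] = 0.5

# Stand-in for an angle-dependent scatter PSF: a broad, normalized Gaussian.
yy, xx = np.mgrid[-32:32, -32:32]
spsf = np.exp(-(xx**2 + yy**2) / (2 * 12.0**2))
spsf /= spsf.sum()                 # normalize so convolution preserves total signal

SPR = 0.4                          # assumed scatter-to-primary ratio for this angle

# Circular FFT convolution of primary with the sPSF, then scaling to the target SPR.
scatter = SPR * np.real(
    np.fft.ifft2(np.fft.fft2(primary) * np.fft.fft2(np.fft.ifftshift(spsf)))
)
image = primary + scatter
```

Because the kernel is normalized, the total scatter signal equals SPR times the total primary signal by construction; in the full simulation both the kernel shape and the SPR vary with projection angle.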

  8. Alaska North Slope Tundra Travel Model and Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    The Alaska Department of Natural Resources (DNR), Division of Mining, Land, and Water manages cross-country travel, typically associated with hydrocarbon exploration and development, on Alaska's arctic North Slope. This project is intended to provide natural resource managers with objective, quantitative data to assist decision making regarding opening of the tundra to cross-country travel. DNR designed standardized, controlled field trials, with baseline data, to investigate the relationships present between winter exploration vehicle treatments and the independent variables of ground hardness, snow depth, and snow slab thickness, as they relate to the dependent variables of active layer depth, soil moisture, and photosynthetically active radiation (a proxy for plant disturbance). Changes in the dependent variables were used as indicators of tundra disturbance. Two main tundra community types were studied: Coastal Plain (wet graminoid/moist sedge shrub) and Foothills (tussock). DNR constructed four models to address physical soil properties: two models for each main community type, one predicting change in depth of active layer and a second predicting change in soil moisture. DNR also investigated the limited potential management utility in using soil temperature, the amount of photosynthetically active radiation (PAR) absorbed by plants, and changes in microphotography as tools for the identification of disturbance in the field. DNR operated under the assumption that changes in the abiotic factors of active layer depth and soil moisture drive alteration in tundra vegetation structure and composition. Statistically significant differences in depth of active layer, soil moisture at a 15 cm depth, soil temperature at a 15 cm depth, and the absorption of photosynthetically active radiation were found among treatment cells and among treatment types. The models were unable to thoroughly investigate the interacting role between snow depth and disturbance due to a

  9. Studies on sulfate attack: Mechanisms, test methods, and modeling

    Science.gov (United States)

    Santhanam, Manu

    The objective of this research study was to investigate various issues pertaining to the mechanism, testing methods, and modeling of sulfate attack in concrete. The study was divided into the following segments: (1) effect of gypsum formation on the expansion of mortars, (2) attack by the magnesium ion, (3) sulfate attack in the presence of chloride ions---differentiating seawater and groundwater attack, (4) use of admixtures to mitigate sulfate attack---entrained air, sodium citrate, silica fume, and metakaolin, (5) effects of temperature and concentration of the attack solution, (6) development of new test methods using concrete specimens, and (7) modeling of the sulfate attack phenomenon. Mortar specimens using portland cement (PC) and tricalcium silicate (C3S), with or without mineral admixtures, were prepared and immersed in different sulfate solutions. In addition to this, portland cement concrete specimens were also prepared and subjected to complete and partial immersion in sulfate solutions. Physical measurements, chemical analyses and microstructural studies were performed periodically on the specimens. Gypsum formation was seen to cause expansion of the C3S mortar specimens. Statistical analyses of the data also indicated that the quantity of gypsum was the most significant factor controlling the expansion of mortar bars. The attack by magnesium ion was found to drive the reaction towards the formation of brucite. Decalcification of the C-S-H and its subsequent conversion to the non-cementitious M-S-H was identified as the mechanism of destruction in magnesium sulfate attack. Mineral admixtures were beneficial in combating sodium sulfate attack, while reducing the resistance to magnesium sulfate attack. Air entrainment did not change the measured physical properties, but reduced the visible distress of the mortars. Sodium citrate caused a substantial reduction in the rate of damage of the mortars due to its retarding effect. Temperature and

  10. Fundamental study on interfacial area transport model (I) (contract research)

    International Nuclear Information System (INIS)

    Mishima, Kaichiro; Nakamura, Hideo

    2001-03-01

    Recently, improvement of the best-estimate (BE) code predictive capability has been attempted by incorporating the interfacial area transport model (IATM) into a one-dimensional two-fluid model to represent gas-liquid two-phase flows in detail, with less uncertainty in the flow predictions. Internationally, the Nuclear Regulatory Commission (NRC) and Purdue University in the U.S.A. and CEA in France have promoted the renewal of their BE codes, such as TRAC, RELAP5 and CATHARE, by introducing the IATM in a cooperative manner. In Japan, JAERI is developing a one-dimensional code based primarily on the IATM for the licensing procedures of next-generation nuclear reactors. The IATM has the potential to correctly predict flow transients along the flow path for flows such as developing flows, multi-dimensional flows, transitional flows and boiling flows, which are difficult to predict accurately with the two-fluid models employed in the current BE codes. A newly developed code with the IATM would dramatically improve the accuracy of flow prediction. The model, however, is under development and requires great effort to overcome many difficulties, with extensive theoretical consideration based on databases yet to be acquired. This study attempts to measure the interfacial area in air-water two-phase flows in a large-diameter tube, to understand the characteristics of the multi-dimensional flows that usually appear in large-diameter tubes, and to provide databases contributing to the development of the IATM. The results obtained by institutes such as Purdue University and CEA in France were reviewed first. The current status and problems of the IATM are clarified, along with the basics and practical methods of measuring interfacial area using multi-sensor miniature local probes: metal needle electro-resistance probes and fiber-optic probes. It was found that the applicability of the IATM is limited mostly to one-dimensional bubbly flow, and is far from satisfactory for multi
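As a much-simplified sketch of what an interfacial area transport closure looks like, the code below marches a one-dimensional, steady-state balance of a breakup source and a coalescence sink along a pipe; the closure form and every coefficient are illustrative assumptions, not the models discussed in the report.

```python
import numpy as np

# One-dimensional, steady-state sketch of interfacial area transport:
#   v * da/dz = phi_B(a) - phi_C(a)
# with toy breakup/coalescence closures (all coefficients are illustrative).
v = 1.0                  # mixture velocity, m/s
c_b, c_c = 2.0, 0.01     # toy breakup and coalescence coefficients

def rhs(a):
    """Net interfacial area source per unit length: breakup minus coalescence."""
    return (c_b - c_c * a**2) / v

z = np.linspace(0.0, 5.0, 501)
dz = z[1] - z[0]
a = np.empty_like(z)
a[0] = 50.0              # inlet interfacial area concentration, 1/m
for i in range(len(z) - 1):
    a[i + 1] = a[i] + dz * rhs(a[i])   # explicit Euler march along the pipe

# The profile relaxes toward the equilibrium where breakup balances coalescence.
a_eq = np.sqrt(c_b / c_c)
```

Real IATMs carry mechanistic source terms (random collision, wake entrainment, turbulent impact, and so on) and are coupled back to the two-fluid momentum equations; the point of the sketch is only the structure of the transport balance.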

  11. Feasibility Study of a Lunar Analog Bed Rest Model

    Science.gov (United States)

    Cromwell, Ronita L.; Platts, Steven H.; Yarbough, Patrice; Buccello-Stout, Regina

    2010-01-01

    The purpose of this study was to determine the feasibility of using a 9.5deg head-up tilt bed rest model to simulate the effects on the human body of the 1/6 g load that exists on the lunar surface. The lunar analog bed rest model utilized a modified hospital bed. The modifications included mounting the mattress on a sled that rolled on bearings to provide freedom of movement. The weight of the sled was off-loaded using a counterweight system to ensure that 1/6 body weight was applied along the long axis (z-axis) of the body. Force was verified through use of a force plate mounted at the foot of the bed. A seating assembly was added to the bed to permit periods of sitting. Subjects alternated between standing and sitting positions throughout the day; a total of 35% of the day was spent in the standing position and 65% was spent sitting. In an effort to achieve the physiologic fluid shifts expected for a 1/6 g environment, subjects wore compression stockings and performed unloaded foot and ankle exercises. Eight subjects (3 females and 5 males) participated in this study. Subjects spent 13 days in the pre-bed rest phase, 6 days in bed rest and 3 days post bed rest, and consumed a standardized diet throughout the study. To determine feasibility, measures of subject comfort, force and plasma volume were collected. Subject comfort was assessed using a Likert scale: subjects were asked to rate their level of comfort (0-100) for 11 body regions and provide an overall rating. Results indicated minimal to no discomfort, as most subjects reported scores of zero. Force measures were performed for each standing position and were validated against each subject's calculated 1/6 body weight (r² = 0.993). The carbon monoxide rebreathing technique was used to assess plasma volume during pre-bed rest and on the last day of bed rest. Plasma volume results indicated a significant decrease (p = 0.001) from pre to post bed rest values. Subjects lost on average 8.3% (sd = 6.1%) during the

  12. Laparoscopic kidney orthotopic transplant: preclinical study in the pig model.

    Science.gov (United States)

    He, B; Musk, G C; Mou, L; Waneck, G L; Delriviere, L

    2013-06-01

    Laparoscopic surgery has rapidly expanded in clinical practice, replacing conventional open surgery over the last three decades. Laparoscopic donor nephrectomy has been favored due to its multiple benefits. The aim of this study was to explore the safety and feasibility of kidney transplantation by a laparoscopic technique in a pig model. The study was approved by the university animal ethics committee. Eight female pigs (Sus scrofa, weighing 45-50 kg) were divided into 2 groups: group I included 4 animals that underwent laparoscopic orthotopic kidney transplantation on the left side; the right kidney remained functional in situ. The pigs recovered and were observed for 1 week. The 4 group II pigs underwent laparoscopic kidney transplantation on the left side, with simultaneous clipping of the right ureter. After recovery, these pigs were observed for 4 weeks. A laparotomy for examination was performed prior to euthanasia. All 4 group I pigs survived for 1 week. The laparotomy showed normal graft perfusion with a patent renal artery and vein, as well as satisfactory urine output upon transection of the ureter, in 3 animals; renal artery stenosis occurred in one pig. Immediate kidney graft function was achieved in 3 group II pigs; the fourth died following extubation due to laryngospasm despite a functional graft. The average creatinine levels were 195.5 μmol/L on day 3; 224.5 μmol/L at week 1; 127 μmol/L at week 2; 182.7 μmol/L at week 3; and 154.7 μmol/L at week 4. Laparoscopic kidney transplantation was feasible and safe in a pig model, with immediate graft function. This study will provide further evidence to support the application of the laparoscopic technique to human kidney transplantation. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Study on team evaluation. Team process model for team evaluation

    International Nuclear Information System (INIS)

    Sasou Kunihide; Ebisu, Mitsuhiro; Hirose, Ayako

    2004-01-01

    Several studies have been done to evaluate or improve team performance in the nuclear and aviation industries; crew resource management is a typical example. In addition, team evaluation has recently gathered interest for other teams of lawyers, medical staff, accountants, psychiatrists, executives, etc. However, most evaluation methods focus on the results of team behavior that can be observed through training or actual business situations. What is expected of a team is not only resolving problems but also training younger members destined to lead the next generation. Therefore, the authors set as the final goal of this study the establishment of a series of methods to evaluate and improve teams inclusively, covering decision making, motivation, staffing, etc. As the first step, this study develops a team process model describing viewpoints for the evaluation. A team process is defined as a kind of force that activates or inactivates the competencies of individuals, which are the components of the team's competency. To identify the team processes, the authors discussed the merits of team behavior with experienced training instructors and shift supervisors of nuclear/thermal power plants. The discussion identified four team merits and many components that realize those merits. Classifying those components into eight groups of team processes, 'Orientation', 'Decision Making', 'Power and Responsibility', 'Workload Management', 'Professional Trust', 'Motivation', 'Training' and 'Staffing', the authors propose a Team Process Model with two to four sub-processes in each team process. In the future, the authors will develop methods to evaluate some of the team processes for nuclear/thermal power plant operation teams. (author)

  14. A gastrointestinal rotavirus infection mouse model for immune modulation studies

    Directory of Open Access Journals (Sweden)

    van Amerongen Geert

    2011-03-01

    Full Text Available Abstract Background Rotaviruses are the single most important cause of severe diarrhea in young children worldwide. The current study was conducted to assess whether colostrum containing rotavirus-specific antibodies (Gastrogard-R®) could protect against rotavirus infection. In addition, this illness model was used to study the modulatory effects of the intervention on several immune parameters after re-infection. Methods BALB/c mice were treated by gavage once daily with Gastrogard-R® from the age of 4 to 10 days, and were inoculated with rhesus rotavirus (RRV) at 7 days of age. A secondary inoculation with epizootic-diarrhea infant-mouse (EDIM) virus was administered at 17 days of age. Disease symptoms were scored daily and viral shedding was measured in fecal samples during the post-inoculation periods. Rotavirus-specific IgM, IgG and IgG subclasses in serum, T cell proliferation and rotavirus-specific delayed-type hypersensitivity (DTH) responses were also measured. Results Primary inoculation with RRV induced a mild but consistent level of diarrhea during 3-4 days post-inoculation. All mice receiving Gastrogard-R® were 100% protected against rotavirus-induced diarrhea. Mice receiving both RRV and EDIM inoculations had a lower fecal viral load following EDIM inoculation than mice receiving EDIM alone or Gastrogard-R®. Mice receiving Gastrogard-R®, however, displayed an enhanced rotavirus-specific T-cell proliferation, whereas rotavirus-specific antibody subtypes were not affected. Conclusions Preventing RRV-induced diarrhea with Gastrogard-R® early in life resulted in diminished protection against EDIM re-infection, but a rotavirus-specific immune response developed, including both B cell and T cell responses. In general, this intervention model can be used for studying clinical symptoms as well as the immune responses required for protection against viral re-infection.

  15. Regime-switching models to study psychological process

    NARCIS (Netherlands)

    Hamaker, E.L.; Grasman, R.P.P.P.; Kamphuis, J.H.

    2010-01-01

    Many psychological processes are characterized by recurrent shifts between different states. To model these processes at the level of the individual, regime-switching models may prove useful. In this chapter we discuss two of these models: the threshold autoregressive model and the Markov regime-switching model.
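A minimal simulation of the first of these, the threshold autoregressive model, with two regimes; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-regime threshold autoregressive (TAR) process: the AR(1) coefficient
# switches depending on whether the previous state is below the threshold.
n, threshold = 2000, 0.0
phi_low, phi_high = 0.9, -0.4        # AR(1) coefficients in the two regimes
x = np.zeros(n)
for t in range(1, n):
    phi = phi_low if x[t - 1] < threshold else phi_high
    x[t] = phi * x[t - 1] + rng.normal(scale=0.5)

share_low = np.mean(x < threshold)   # fraction of time spent in the lower regime
```

The switching rule is observable (it depends on the lagged state itself), which is what distinguishes the TAR model from a Markov regime-switching model, where the regime is a latent state with its own transition probabilities.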

  16. Implementation of IEC Standard Models for Power System Stability Studies

    DEFF Research Database (Denmark)

    Margaris, Ioannis; Hansen, Anca Daniela; Bech, John

    2012-01-01

    , namely a model for a variable speed wind turbine with full-scale power converter (WTG), including a 2-mass mechanical model. The generic models for fixed and variable speed WTGs are suitable for fundamental-frequency positive-sequence response simulations during short events in the power system...

  17. Hydrodynamic and Ecological Assessment of Nearshore Restoration: A Modeling Study

    International Nuclear Information System (INIS)

    Yang, Zhaoqing; Sobocinski, Kathryn L.; Heatwole, Danelle W.; Khangaonkar, Tarang; Thom, Ronald M.; Fuller, Roger

    2010-01-01

    Along the Pacific Northwest coast, much of the estuarine habitat has been diked over the last century for agricultural land use, residential and commercial development, and transportation corridors. As a result, many of the ecological processes and functions have been disrupted. To protect coastal habitats that are vital to aquatic species, many restoration projects are currently underway to restore the estuarine and coastal ecosystems through dike breaches, setbacks, and removals. Information on physical processes and hydrodynamic conditions is critical for the assessment of the success of restoration actions. Restoration of a 160-acre property at the mouth of the Stillaguamish River in Puget Sound has been proposed. The goal is to restore native tidal habitats and estuary-scale ecological processes by removing the dike. In this study, a three-dimensional hydrodynamic model was developed for the Stillaguamish River estuary to simulate estuarine processes. The model was calibrated to observed tide, current, and salinity data for existing conditions and applied to simulate the hydrodynamic responses to two restoration alternatives. Responses were evaluated at the scale of the restoration footprint. Model data were combined with biophysical data to predict habitat responses at the site. Results showed that the proposed dike removal would result in desired tidal flushing and conditions that would support four habitat types on the restoration footprint. At the estuary scale, restoration would substantially increase the proportion of area flushed with freshwater (< 5 ppt) at flood tide. Potential implications of predicted changes in salinity and flow dynamics are discussed relative to the distribution of tidal marsh habitat.

  18. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

    This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance. Copyright © 2013 Elsevier Ltd. All rights reserved.
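The retention mechanism described above can be sketched as a single-bucket water balance. The following is a hedged illustration, not the paper's calibrated model: rainfall fills a substrate store up to capacity, any excess spills as runoff, and evapotranspiration, scaled by relative moisture content in the spirit of the paper's ET function, restores retention capacity between events. The capacity and PET values are invented for illustration.

```python
def simulate_green_roof(rainfall_mm, capacity_mm=20.0, pet_mm_per_step=0.15):
    """Single-bucket green roof retention sketch (illustrative parameters).

    Rainfall fills the substrate store up to `capacity_mm`; excess becomes
    runoff; actual evapotranspiration is potential ET scaled by relative
    moisture content, restoring capacity during dry periods.
    """
    storage = 0.0
    runoff = []
    for rain in rainfall_mm:
        storage += rain
        spill = max(0.0, storage - capacity_mm)
        storage -= spill
        runoff.append(spill)
        # ET limited by available moisture: AET = PET * (storage / capacity)
        et = pet_mm_per_step * (storage / capacity_mm)
        storage = max(0.0, storage - et)
    retained = 1.0 - sum(runoff) / sum(rainfall_mm)
    return runoff, retained

# Hourly steps: a dry spell, one 30 mm storm, another dry spell, small events
rain = [0.0] * 48 + [30.0] + [0.0] * 48 + [2.0, 1.0, 0.5]
runoff, retention = simulate_green_roof(rain)
print(f"volumetric retention: {retention:.2f}")
```

Even this toy version reproduces the qualitative result reported above: small events are retained entirely, large events overwhelm the store, and overall volumetric retention depends on how quickly ET restores capacity between storms.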

  19. Electrochemical Corrosion Studies for Modeling Metallic Waste Form Release Rates

    International Nuclear Information System (INIS)

    Poineau, Frederic; Tamalis, Dimitri

    2016-01-01

    The isotope ⁹⁹Tc is an important fission product generated from nuclear power production. Because of its long half-life (t₁/₂ = 2.13 × 10⁵ years) and beta-radiotoxicity (β⁻ = 292 keV), it is a major concern in the long-term management of spent nuclear fuel. In the spent nuclear fuel, Tc is present as an alloy with Mo, Ru, Rh, and Pd called the epsilon-phase, the relative amount of which increases with fuel burn-up. In some separation schemes for spent nuclear fuel, Tc would be separated from the spent fuel and disposed of in a durable waste form. Technetium waste forms under consideration include metallic alloys, oxide ceramics and borosilicate glass. In the development of a metallic waste form, after separation from the spent fuel, Tc would be converted to the metal, incorporated into an alloy and the resulting waste form stored in a repository. Metallic alloys under consideration include Tc–Zr alloys, Tc–stainless steel alloys and Tc–Inconel alloys (Inconel is an alloy of Ni, Cr and iron which is resistant to corrosion). To predict the long-term behavior of the metallic Tc waste form, understanding the corrosion properties of Tc metal and Tc alloys in various chemical environments is needed, but efforts to model the behavior of Tc metallic alloys are limited. One parameter that should also be considered in predicting the long-term behavior of the Tc waste form is the ingrowth of stable Ru that occurs from the radioactive decay of ⁹⁹Tc (⁹⁹Tc → ⁹⁹Ru + β⁻). After a geological period of time, significant amounts of Ru will be present in the Tc and may affect its corrosion properties. Studying the effect of Ru on the corrosion behavior of Tc is also of importance. In this context, we studied the electrochemical behavior of Tc metal, Tc-Ni alloys (to model Tc-Inconel alloy) and Tc-Ru alloys in acidic media. The study of Tc-U alloys has also been performed in order to better understand the nature of Tc in metallic spent fuel. Computational modeling

  20. Electrochemical Corrosion Studies for Modeling Metallic Waste Form Release Rates

    Energy Technology Data Exchange (ETDEWEB)

    Poineau, Frederic [Univ. of Nevada, Las Vegas, NV (United States); Tamalis, Dimitri [Florida Memorial Univ., Miami Gardens, FL (United States)

    2016-08-01

    The isotope ⁹⁹Tc is an important fission product generated from nuclear power production. Because of its long half-life (t₁/₂ = 2.13 × 10⁵ years) and beta-radiotoxicity (β⁻ = 292 keV), it is a major concern in the long-term management of spent nuclear fuel. In the spent nuclear fuel, Tc is present as an alloy with Mo, Ru, Rh, and Pd called the epsilon-phase, the relative amount of which increases with fuel burn-up. In some separation schemes for spent nuclear fuel, Tc would be separated from the spent fuel and disposed of in a durable waste form. Technetium waste forms under consideration include metallic alloys, oxide ceramics and borosilicate glass. In the development of a metallic waste form, after separation from the spent fuel, Tc would be converted to the metal, incorporated into an alloy and the resulting waste form stored in a repository. Metallic alloys under consideration include Tc–Zr alloys, Tc–stainless steel alloys and Tc–Inconel alloys (Inconel is an alloy of Ni, Cr and iron which is resistant to corrosion). To predict the long-term behavior of the metallic Tc waste form, understanding the corrosion properties of Tc metal and Tc alloys in various chemical environments is needed, but efforts to model the behavior of Tc metallic alloys are limited. One parameter that should also be considered in predicting the long-term behavior of the Tc waste form is the ingrowth of stable Ru that occurs from the radioactive decay of ⁹⁹Tc (⁹⁹Tc → ⁹⁹Ru + β⁻). After a geological period of time, significant amounts of Ru will be present in the Tc and may affect its corrosion properties. Studying the effect of Ru on the corrosion behavior of Tc is also of importance. In this context, we studied the electrochemical behavior of Tc metal, Tc-Ni alloys (to model Tc-Inconel alloy) and Tc-Ru alloys in acidic media. The study of Tc-U alloys has also been performed in order to better understand the

  1. Coupled Aerosol-Chemistry-Climate Twentieth-Century Transient Model Investigation: Trends in Short-Lived Species and Climate Responses

    Science.gov (United States)

    Koch, Dorothy; Bauer, Susanne E.; Del Genio, Anthony; Faluvegi, Greg; McConnell, Joseph R.; Menon, Surabi; Miller, Ronald L.; Rind, David; Ruedy, Reto; Schmidt, Gavin A.; hide

    2011-01-01

    The authors simulate transient twentieth-century climate in the Goddard Institute for Space Studies (GISS) GCM, with aerosol and ozone chemistry fully coupled to one another and to climate including a full dynamic ocean. Aerosols include sulfate, black carbon (BC), organic carbon, nitrate, sea salt, and dust. Direct and BC snow-albedo radiative effects are included. Model BC and sulfur trends agree fairly well with records from Greenland and European ice cores and with sulfur deposition in North America; however, the model underestimates the sulfur decline at the end of the century in Greenland. Global BC effects peak early in the century (1940s); afterward the BC effects decrease at high latitudes of the Northern Hemisphere but continue to increase at lower latitudes. The largest increase in aerosol optical depth occurs in the middle of the century (1940s-80s) when sulfate forcing peaks and causes global dimming. After this, aerosols decrease in eastern North America and northern Eurasia leading to regional positive forcing changes and brightening. These surface forcing changes have the correct trend but are too weak. Over the century, the net aerosol direct effect is -0.41 Watts per square meter, the BC-albedo effect is -0.02 Watts per square meter, and the net ozone forcing is +0.24 Watts per square meter. The model polar stratospheric ozone depletion develops, beginning in the 1970s. Concurrently, the sea salt load and negative radiative flux increase over the oceans around Antarctica. Net warming over the century is modeled fairly well; however, the model fails to capture the dynamics of the observed midcentury cooling followed by the late-century warming. Over the century, 20% of Arctic warming and snow and ice cover loss is attributed to the BC albedo effect. However, the decrease in this effect at the end of the century contributes to Arctic cooling. To test the climate responses to sulfate and BC pollution, two experiments were branched from 1970 that removed

  2. Comparative Study of Injury Models for Studying Muscle Regeneration in Mice.

    Directory of Open Access Journals (Sweden)

    David Hardy

    Full Text Available A longstanding goal in regenerative medicine is to reconstitute functional tissues or organs after injury or disease. Attention has focused on the identification and relative contribution of tissue-specific stem cells to the regeneration process. Relatively little is known about how the physiological process is regulated by other tissue constituents. Numerous injury models are used to investigate tissue regeneration; however, these models are often poorly understood. Specifically, for skeletal muscle regeneration several models are reported in the literature, yet their relative impact on muscle physiology and on the distinct cell types has not been extensively characterised. We used transgenic Tg:Pax7nGFP and Flk1GFP/+ mouse models to count, respectively, the number of muscle stem (satellite) cells (SC) and the number/shape of vessels by confocal microscopy. We performed histological and immunostaining analyses to assess the differences in the key regeneration steps. Infiltration of immune cells and production of chemokines and cytokines were assessed in vivo by Luminex®. We compared the 4 most commonly used injury models, i.e. freeze injury (FI), barium chloride (BaCl2), notexin (NTX) and cardiotoxin (CTX). The FI was the most damaging: in this model, up to 96% of the SCs are destroyed along with their surrounding environment (basal lamina and vasculature), leaving a "dead zone" devoid of viable cells. The regeneration process itself is fulfilled in all 4 models, with virtually no fibrosis 28 days post-injury except in the FI model. Inflammatory cells return to basal levels in the CTX and BaCl2 models but remain significantly elevated 1 month post-injury in the FI and NTX models. Interestingly, the number of SCs returned to normal only in the FI model 1 month post-injury, with SCs still cycling up to 3 months after the induction of injury in the other models. Our studies show that the nature of the injury model should be chosen carefully depending on the experimental design and desired

  3. The green seaweed Ulva: A model system to study morphogenesis

    Directory of Open Access Journals (Sweden)

    Thomas eWichard

    2015-02-01

    Full Text Available Green macroalgae, mostly represented by the Ulvophyceae, the main multicellular branch of the Chlorophyceae, constitute important primary producers of marine and brackish coastal ecosystems. Ulva or sea lettuce species are some of the most abundant representatives, being ubiquitous in coastal benthic communities around the world. Nonetheless, the genus also remains largely understudied. This review highlights Ulva as an exciting novel model organism for studies of algal growth, development and morphogenesis as well as mutualistic interactions. The key reasons that Ulva is potentially such a good model system are: (i) patterns of Ulva development can drive ecologically important events, such as the increasing number of green tides observed worldwide as a result of eutrophication of coastal waters, (ii) Ulva growth is symbiotic, with proper development requiring close association with bacterial epiphytes, (iii) Ulva is extremely developmentally plastic, which can shed light on the transition from simple to complex multicellularity and (iv) Ulva will provide additional information about the evolution of the green lineage.

  4. Comparative study of computational model for pipe whip analysis

    International Nuclear Information System (INIS)

    Koh, Sugoong; Lee, Young-Shin

    1993-01-01

    Many types of pipe whip restraints are installed to protect the structural components from the anticipated pipe whip phenomena of high energy lines in nuclear power plants. It is necessary to investigate these phenomena accurately in order to evaluate the acceptability of the pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify the validity of the methods. In this study, various calculational models in ANSYS code and in ADLPIPE code, the general purpose finite element computer programs, were used to simulate the postulated pipe whips to obtain impact loads and the calculated results were compared with the specific experimental results from the sample pipe whip test for the U-shaped pipe whip restraints. Some calculational models, having the spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of the pipe whip restraint design. (author)

  5. Static response of deformable microchannels: a comparative modelling study

    Science.gov (United States)

    Shidhore, Tanmay C.; Christov, Ivan C.

    2018-02-01

    We present a comparative modelling study of fluid-structure interactions in microchannels. Through a mathematical analysis based on plate theory and the lubrication approximation for low-Reynolds-number flow, we derive models for the flow rate-pressure drop relation for long shallow microchannels with both thin and thick deformable top walls. These relations are tested against full three-dimensional two-way-coupled fluid-structure interaction simulations. Three types of microchannels, representing different elasticity regimes and having been experimentally characterized previously, are chosen as benchmarks for our theory and simulations. Good agreement is found in most cases for the predicted, simulated and measured flow rate-pressure drop relationships. The numerical simulations performed allow us to also carefully examine the deformation profile of the top wall of the microchannel in any cross section, showing good agreement with the theory. Specifically, the prediction that span-wise displacement in a long shallow microchannel decouples from the flow-wise deformation is confirmed, and the predicted scaling of the maximum displacement with the hydrodynamic pressure and the various material and geometric parameters is validated.

  6. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing the effects of various policy options and to enable optimization of countermeasures applied to different parts of the source-risk chain. There are several advantages to developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of the processes and sources to be included in the source-risk chain, the models presently available in the Netherlands are investigated. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing the most suitable model for each part of the source-risk chain, a source-risk chain model for radon can be realized. However, the calculation of dose from the radon concentrations and the status of the validation of most models should be improved. At present, calculations with the proposed source-risk model will yield estimates with large uncertainty. For further development of the source-risk model, an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners, who execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs

  7. Computational Fluid Dynamics Modeling Of Scaled Hanford Double Shell Tank Mixing - CFD Modeling Sensitivity Study Results

    International Nuclear Information System (INIS)

    Jackson, V.L.

    2011-01-01

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  8. Freight Calculation Model: A Case Study of Coal Distribution

    Science.gov (United States)

    Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.

    2018-03-01

    Coal is one of the energy alternatives used as an energy source by several power plants in Indonesia. Transporting coal from mine sites to power plant locations requires eligible shipping services that can provide the best freight rate. Therefore, this study aims to derive standardized formulations for determining ocean freight, especially for coal distribution, based on theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel and self-propelled barge. The results show that two cost components dominate the freight value, with a combined proportion reaching 90% or more: time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, the time charter rate and the fuel oil price.
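The cost structure described above can be sketched as a simple voyage estimate. The function below is an illustrative sketch, not the paper's exact formulation: it sums time charter hire over the whole voyage (sea time plus port waiting/handling time) and fuel cost over sea time, then divides by the cargo lifted. All parameter names and the example figures are assumptions.

```python
def ocean_freight_per_tonne(
    cargo_tonnes: float,
    distance_nm: float,
    speed_knots: float,
    charter_rate_per_day: float,   # time charter hire (USD/day), assumed
    fuel_tonnes_per_day: float,    # fuel consumption at sea, assumed
    fuel_price_per_tonne: float,
    port_days: float,              # waiting + loading/discharge time
    other_cost: float = 0.0,       # port dues, insurance, etc.
) -> float:
    """Estimate freight (USD/tonne) as (charter hire + fuel + other) / cargo."""
    sea_days = distance_nm / speed_knots / 24.0
    voyage_days = sea_days + port_days
    charter_cost = charter_rate_per_day * voyage_days
    fuel_cost = fuel_tonnes_per_day * fuel_price_per_tonne * sea_days
    total = charter_cost + fuel_cost + other_cost
    return total / cargo_tonnes

# Hypothetical tug-barge set carrying 8,000 t of coal over 400 nm at 5 kn
rate = ocean_freight_per_tonne(
    cargo_tonnes=8000, distance_nm=400, speed_knots=5,
    charter_rate_per_day=4500, fuel_tonnes_per_day=3.5,
    fuel_price_per_tonne=550, port_days=4)
print(f"freight: {rate:.2f} USD/tonne")
```

With these illustrative numbers, charter hire and fuel together make up essentially the whole freight, consistent with the dominance the study reports, and the sensitivity to port waiting time, charter rate and fuel price falls out directly from the three inputs.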

  9. Bias-correction in vector autoregressive models: A simulation study

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple... and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also... of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias...
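The mechanics of analytical bias correction are easy to see in the univariate special case. The sketch below is an illustration under stated assumptions, not the paper's VAR-level method: it uses a standard first-order result for the no-intercept Gaussian AR(1), E[phi_hat] ≈ phi − 2·phi/T, and adds the estimated bias back to the OLS estimate. The Monte Carlo design (phi = 0.9, T = 50) is invented for illustration.

```python
import random

def simulate_ar1(phi, n, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t with N(0,1) shocks."""
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def ols_ar1(x):
    """OLS estimate of phi in the no-intercept AR(1) regression."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    return num / den

def bias_corrected(phi_hat, n):
    """First-order analytical correction: E[phi_hat] ~ phi - 2*phi/n for the
    no-intercept Gaussian AR(1), so add the estimated bias back."""
    return phi_hat + 2.0 * phi_hat / n

rng = random.Random(7)
phi_true, n, reps = 0.9, 50, 2000
raw, corrected = [], []
for _ in range(reps):
    est = ols_ar1(simulate_ar1(phi_true, n, rng))
    raw.append(est)
    corrected.append(bias_corrected(est, n))
mean_raw = sum(raw) / reps
mean_corr = sum(corrected) / reps
print(f"true phi {phi_true}: mean OLS {mean_raw:.3f}, mean corrected {mean_corr:.3f}")
```

Note the stationarity caveat the abstract raises: for phi_hat near 1, adding the bias back can push the corrected estimate into the non-stationary region, which is why bias-correction procedures typically truncate the corrected value at the stationarity boundary.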

  10. Stable isotope composition of atmospheric carbon monoxide. A modelling study

    International Nuclear Information System (INIS)

    Gromov, Sergey S.

    2014-01-01

    This study aims at an improved understanding of the stable carbon and oxygen isotope composition of the carbon monoxide (CO) in the global atmosphere by means of numerical simulations. At first, a new kinetic chemistry tagging technique for the most complete parameterisation of isotope effects has been introduced into the Modular Earth Submodel System (MESSy) framework. Incorporated into the ECHAM/MESSy Atmospheric Chemistry (EMAC) general circulation model, an explicit treatment of the isotope effects on the global scale is now possible. The expanded model system has been applied to simulate the chemical system containing up to five isotopologues of all carbon- and oxygen-bearing species, which ultimately determine the δ¹³C, δ¹⁸O and Δ¹⁷O isotopic signatures of atmospheric CO. As model input, a new stable isotope-inclusive emission inventory for the relevant trace gases has been compiled. The uncertainties of the emission estimates and of the resulting simulated mixing and isotope ratios have been analysed. The simulated CO mixing and stable isotope ratios have been compared to in-situ measurements from ground-based observatories and from the civil-aircraft-mounted CARIBIC-1 measurement platform. The systematically underestimated ¹³CO/¹²CO ratios of earlier, simplified modelling studies can now be partly explained. The EMAC simulations do not support the inferences of those studies, which suggest for CO a reduced input from the methane oxidation source, which is highly depleted in ¹³C. In particular, a high average yield of 0.94 CO per reacted methane (CH₄) molecule is simulated in the troposphere, to a large extent due to the competition between the deposition and convective transport processes affecting the CH₄ to CO reaction chain intermediates. None of the other factors assumed or disregarded in previous studies, though hypothesised to have the potential to enrich tropospheric CO in ¹³C, were found significant when explicitly simulated. The

  11. A comparative study of machine learning models for ethnicity classification

    Science.gov (United States)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.
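The kind of baseline comparison the paper describes can be sketched with a minimal classifier. The code below is an illustrative from-scratch logistic regression trained by batch gradient descent on a synthetic two-class dataset; it is not the authors' implementation, and the data, learning rate and epoch count are invented for illustration.

```python
import math
import random

def train_logistic(points, labels, lr=0.1, epochs=200):
    """Fit a 2-feature logistic regression by batch gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    n = len(points)
    for _ in range(epochs):
        gw, gb = [0.0, 0.0], 0.0
        for (x1, x2), y in zip(points, labels):
            # Sigmoid of the linear score gives P(class 1 | x)
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y
            gw[0] += err * x1
            gw[1] += err * x2
            gb += err
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Classify by the sign of the linear score."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Two well-separated synthetic "classes" standing in for feature vectors
rng = random.Random(0)
pts = [(rng.gauss(-2, 0.5), rng.gauss(-2, 0.5)) for _ in range(50)] + \
      [(rng.gauss(2, 0.5), rng.gauss(2, 0.5)) for _ in range(50)]
ys = [0] * 50 + [1] * 50
w, b = train_logistic(pts, ys)
acc = sum(predict(w, b, p) == y for p, y in zip(pts, ys)) / len(ys)
print(f"training accuracy: {acc:.2f}")
```

In practice, a study like this one would extract face-image features first and compare such a logistic baseline against an SVM on held-out data rather than training accuracy; the sketch only shows the classifier mechanics.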

  12. Testicular Damage following Testicular Sperm Retrieval: A Ram Model Study

    Directory of Open Access Journals (Sweden)

    Jens Fedder

    2017-01-01

    Full Text Available The aim of this study was to evaluate the possible development of histological abnormalities such as fibrosis and microcalcifications after sperm retrieval in a ram model. Fourteen testicles in nine rams were exposed to open biopsy, multiple TESAs, or TESE, and the remaining four testicles were left unoperated on as controls. Three months after sperm retrieval, the testicles were removed, fixed, and cut into 1/2 cm thick slices and systematically put onto a glass plate exposing macroscopic abnormalities. Tissue from abnormal areas was cut into 3 μm sections and stained for histological evaluation. Pathological abnormalities were observed in testicles exposed to sperm retrieval (≥11 of 14 compared to 0 of 4 control testicles. Testicular damage was found independently of the kind of intervention used. Therefore, cryopreservation of excess sperm should be considered while retrieving sperm.

  13. Two new rodent models for actinide toxicity studies

    International Nuclear Information System (INIS)

    Taylor, G.N.; Jones, C.W.; Gardner, P.A.; Lloyd, R.D.; Mays, C.W.; Charrier, K.E.

    1981-01-01

    Two small rodent species, the grasshopper mouse (Onychomys leucogaster) and the deer mouse (Peromyscus maniculatus), show tenacious, high retention of plutonium and americium in the liver and skeleton following intraperitoneal injection of Pu and Am in citrate solution. Liver retention of Pu and Am is higher in the grasshopper mouse than in the deer mouse. Both of these rodents are relatively long-lived, breed well in captivity, and adapt well to laboratory conditions. It is suggested that these two species of mice, in which plutonium retention is high and prolonged in both the skeleton and liver, as it is in man, may be useful animal models for actinide toxicity studies

  14. Azolla--a model organism for plant genomic studies.

    Science.gov (United States)

    Qiu, Yin-Long; Yu, Jun

    2003-02-01

    The aquatic ferns of the genus Azolla are nitrogen-fixing plants that have great potential in agricultural production and environmental conservation. Azolla is in many aspects qualified to serve as a model organism for genomic studies because of its importance in agriculture, its unique position in plant evolution, its symbiotic relationship with the N2-fixing cyanobacterium, Anabaena azollae, and its moderate-sized genome. The goals of this genome project are not only to understand the biology of the Azolla genome to