WorldWideScience

Sample records for physics-based system-level model

  1. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345: Physics-Based Modeling of Compressible Turbulence. Parviz Moin, Leland Stanford Junior University, CA. Final report, 09/13/2016, on the AFOSR project (FA9550-11-1-0111) entitled "Physics based modeling of compressible turbulence". The period of performance was June 15, 2011...

  2. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    ...called ForSyDe. ForSyDe is available under the open-source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system-level modeling of a simple industrial use case, and we...

  3. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory; Pierrard, Viviane [BELGIUM; Goldstein, Jerry [SWRI; André, Nicolas [ESTEC/ESA; Kotova, Galina A [SRI, RUSSIA; Lemaire, Joseph F [BELGIUM; Liemohn, Mike W [U OF MICHIGAN; Matsui, H [UNIV OF NEW HAMPSHIRE]

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves for transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  4. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    Science.gov (United States)

    2017-09-19

    NUWC-NPT Technical Report 12,236, 19 September 2017. Structural Acoustic Physics-Based Modeling of Curved Composite Shells. Rachel E. Hesse... The objective of this study was to use physics-based modeling (PBM) to investigate wave propagation through curved shells that are subjected to acoustic excitation. An

  5. Physics-Based Pneumatic Hammer Instability Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Florida Turbine Technologies (FTT) proposes to conduct research necessary to develop a physics-based pneumatic hammer instability model for hydrostatic bearings...

  6. Physics-Based Pneumatic Hammer Instability Model, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to develop a physics-based pneumatic hammer instability model that accurately predicts the stability of hydrostatic bearings...

  7. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  8. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with potential for use in estimating phosphorus losses in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  9. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measured data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane by using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
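The pixel-by-pixel comparison and ROC-plane evaluation described in this record can be sketched in a few lines. This is an illustrative stand-in, not the NewAge-JGrass/OMS component API: binary susceptibility and landslide-scar rasters are flattened to 0/1 sequences and reduced to a confusion matrix, giving one point in the ROC plane.

```python
# Hedged sketch: compare a binary predicted-susceptibility raster against
# observed landslide pixels, yielding the confusion-matrix counts and a
# point (FPR, TPR) in the ROC plane. Data below are made up.

def roc_point(predicted, observed):
    """Return (false_positive_rate, true_positive_rate) for two
    equal-length binary rasters flattened to sequences of 0/1."""
    tp = sum(1 for p, o in zip(predicted, observed) if p == 1 and o == 1)
    fp = sum(1 for p, o in zip(predicted, observed) if p == 1 and o == 0)
    fn = sum(1 for p, o in zip(predicted, observed) if p == 0 and o == 1)
    tn = sum(1 for p, o in zip(predicted, observed) if p == 0 and o == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return fpr, tpr

predicted = [1, 1, 0, 0, 1, 0]
observed  = [1, 0, 0, 0, 1, 1]
fpr, tpr = roc_point(predicted, observed)  # one point in the ROC plane
```

Optimizing a GOF index then amounts to searching model parameters that move this point toward the (0, 1) corner of the ROC plane.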

  10. Interactive physically-based structural modeling of hydrocarbon systems

    International Nuclear Information System (INIS)

    Bosson, Mael; Grudinin, Sergei; Bouju, Xavier; Redon, Stephane

    2012-01-01

    Hydrocarbon systems have been intensively studied via numerical methods, including electronic structure computations, molecular dynamics and Monte Carlo simulations. Typically, these methods require an initial structural model (atomic positions and types, topology, etc.) that may be produced using scripts and/or modeling tools. For many systems, however, these building methods may be ineffective, as the user may have to specify the positions of numerous atoms while maintaining structural plausibility. In this paper, we present an interactive physically-based modeling tool to construct structural models of hydrocarbon systems. As the user edits the geometry of the system, atomic positions are also influenced by the Brenner potential, a well-known bond-order reactive potential. In order to be able to interactively edit systems containing numerous atoms, we introduce a new adaptive simulation algorithm, as well as a novel algorithm to incrementally update the forces and the total potential energy based on the list of updated relative atomic positions. The computational cost of the adaptive simulation algorithm depends on user-defined error thresholds, and our potential update algorithm scales linearly with the number of updated bonds. This enables efficient physically-based editing, since the computational cost is decoupled from the number of atoms in the system. We show that our approach may be used to effectively build realistic models of hydrocarbon structures that would be difficult or impossible to produce using other tools.
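The incremental-update idea behind this record (cost linear in the number of updated bonds rather than in system size) can be sketched generically. The pair potential below is a toy placeholder, not the Brenner bond-order potential, and the class names are illustrative:

```python
# Hedged sketch of incremental potential-energy updates: cache per-bond
# energy terms; when a set of bonds moves, subtract their old contributions
# and add the recomputed ones. Cost scales with the number of updated
# bonds, not with the total number of atoms.

def pair_energy(r):
    return (1.0 / r) ** 12 - 2.0 * (1.0 / r) ** 6  # toy Lennard-Jones-like term

class IncrementalPotential:
    def __init__(self, bond_lengths):
        # bond_lengths: dict mapping bond (i, j) -> current bond length
        self.terms = {b: pair_energy(r) for b, r in bond_lengths.items()}
        self.total = sum(self.terms.values())

    def update(self, changed_lengths):
        """Recompute only the energy terms of the bonds that moved."""
        for bond, r in changed_lengths.items():
            self.total -= self.terms[bond]
            self.terms[bond] = pair_energy(r)
            self.total += self.terms[bond]
        return self.total

pot = IncrementalPotential({(0, 1): 1.0, (1, 2): 1.1})
pot.update({(0, 1): 1.05})  # only bond (0, 1) is recomputed
```

After the update, `pot.total` matches a full recomputation over all bonds, which is the invariant an incremental scheme must preserve.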

  11. Application of Physically based landslide susceptibility models in Brazil

    Science.gov (United States)

    Carvalho Vieira, Bianca; Martins, Tiago D.

    2017-04-01

    Shallow landslides and floods are the processes responsible for most material and environmental damage in Brazil. In recent decades, some landslide events have caused large numbers of deaths (e.g., over 1,000 deaths in a single event) and incalculable social and economic losses. Therefore, the prediction of these processes is considered an important tool for land-use planning. Among different methods, physically based landslide susceptibility models have been widely used in many countries, but in Brazil their use is still incipient compared to other approaches, such as statistical tools and frequency analyses. Thus, the main objective of this research was to assess the application of physically based landslide susceptibility models in Brazil, identifying their main results, the efficiency of susceptibility mapping, the parameters used, and the limitations of the tropical humid environment. To achieve that, we evaluated the SHALSTAB, SINMAP and TRIGRS models in studies in Brazil, along with the geotechnical values, scales, DEM grid resolutions, and the results, based on analysis of the agreement between predicted susceptibility and landslide scar maps. Most of the studies in Brazil applied SHALSTAB, SINMAP and, to a lesser extent, the TRIGRS model. Most research is concentrated in the Serra do Mar mountain range, a system of escarpments and rugged mountains that extends more than 1,500 km along the southern and southeastern Brazilian coast and is regularly affected by heavy rainfall that generates widespread mass movements. Most of these studies used conventional topographic maps with scales ranging from 1:2,000 to 1:50,000 and DEM grid resolutions between 2 and 20 m. Regarding the geotechnical and hydrological values, few studies use field-collected data, which could produce more efficient results, as indicated by the international literature.
Therefore, even though they have enormous potential in susceptibility mapping, even for comparison

  12. System-level modeling of acetone-butanol-ethanol fermentation.

    Science.gov (United States)

    Liao, Chen; Seo, Seung-Oh; Lu, Ting

    2016-05-01

    Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation.
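The "early simple models" surveyed in this record are kinetic ODE systems. A minimal toy sketch of that style of model, with a Monod-type uptake rate and made-up rate constants (not parameters from the review), integrated with forward Euler:

```python
# Hedged sketch of a two-state kinetic fermentation model: substrate S is
# consumed at a Monod-type rate and solvent B is formed with a fixed yield.
# All constants are illustrative, not from the reviewed models.

def simulate(s0=50.0, b0=0.0, mu_max=0.5, ks=2.0, yield_b=0.3,
             dt=0.01, steps=2000):
    s, b = s0, b0
    for _ in range(steps):
        rate = mu_max * s / (ks + s)   # Monod-type substrate uptake rate
        s = max(s - rate * dt, 0.0)    # substrate consumed
        b += yield_b * rate * dt       # solvent formed from consumed substrate
    return s, b

s_final, b_final = simulate()
```

The system-level models highlighted in the review extend this skeleton with full metabolic reaction networks, gene regulation, and environmental inputs.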

  13. A physically based model of global freshwater surface temperature

    Science.gov (United States)

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
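The core of the surface water energy balance described in this record is that the temperature of a well-mixed water column changes with the net surface flux divided by the column's heat capacity. A hedged single-step sketch (flux values and depth are illustrative; the actual model also includes advection along the drainage network, ice formation, and river hydraulics):

```python
# Minimal energy-balance step for a well-mixed water column.

RHO_W = 1000.0   # water density, kg/m^3
CP_W = 4186.0    # specific heat of water, J/(kg K)

def temperature_step(t_water, depth_m, net_flux_w_m2, dt_s):
    """Advance water temperature (deg C or K) by one time step.

    net_flux_w_m2: net of shortwave, longwave, sensible and latent fluxes
    (positive = heating), in W/m^2.
    """
    return t_water + net_flux_w_m2 * dt_s / (RHO_W * CP_W * depth_m)

# e.g., a net 200 W/m^2 over a 2 m column for one day warms it by ~2 K
warmed = temperature_step(10.0, 2.0, 200.0, 86400.0)
```

At daily time steps on a 0.5° grid, the model applies this balance cell by cell with the flux terms computed from globally available forcing data.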

  14. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang, Yang; Endreny, Theodore A.; Nowak, David J.

    2013-09-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat storage based on semiempirical functions and generates spatially distributed estimates based on inputs of topography, land cover, and the weather data measured at a reference site. The model assumes that for all grids under the same mesoscale climate, grid air temperature and humidity are modified by local variation in absorbed solar radiation and the partitioning of sensible and latent heat. The model uses a reference grid site for time series meteorological data and the air temperature and humidity of any other grid can be obtained by solving the heat flux network equations. PASATH was coupled with the USDA iTree-Hydro water balance model to obtain evapotranspiration terms and run from 20 to 29 August 2010 at a 360 m by 360 m grid scale and hourly time step across a 285 km2 watershed including the urban area of Syracuse, NY. PASATH predictions were tested at nine urban weather stations representing variability in urban topography and land cover. The PASATH model predictive efficiency R2 ranged from 0.81 to 0.99 for air temperature and 0.77 to 0.97 for dew point temperature. PASATH is expected to have broad applications in environmental and ecological modeling.

  15. System-level Modeling of Wireless Integrated Sensor Networks

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Hansen, Knud; Madsen, Jan

    2005-01-01

    Wireless integrated sensor networks have emerged as a promising infrastructure for a new generation of monitoring and tracking applications. In order to efficiently utilize the extremely limited resources of wireless sensor nodes, accurate modeling of the key aspects of wireless sensor networks is necessary so that system-level design decisions can be made about the hardware and the software (applications and real-time operating system) architecture of sensor nodes. In this paper, we present a SystemC-based abstract modeling framework that enables system-level modeling of sensor network behavior by modeling the applications, real-time operating system, sensors, processor, and radio transceiver at the sensor node level and environmental phenomena, including radio signal propagation, at the sensor network level. We demonstrate the potential of our modeling framework by simulating and analyzing a small...

  16. Physics-based Entry, Descent and Landing Risk Model

    Science.gov (United States)

    Gee, Ken; Huynh, Loc C.; Manning, Ted

    2014-01-01

    A physics-based risk model was developed to assess the risk associated with thermal protection system (TPS) failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool, and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris (MMOD) impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effects of material property uncertainty and MMOD damage on the risk of failure were analyzed. A comparison of the direct computation and response surface approaches was undertaken.
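The direct-computation Monte Carlo approach in this record reduces to: sample the uncertain inputs, run the tool chain, and count cases where the bondline temperature exceeds the limit. A hedged sketch in which a made-up linear surrogate stands in for the trajectory/aerothermal/thermal-response tools:

```python
# Illustrative Monte Carlo risk computation. The bondline_temp surrogate
# and all distributions/limits below are invented for the sketch; the real
# model chains a 3-DOF trajectory tool, an engineering-level heating tool,
# and a 1-D thermal response tool.

import random

def bondline_temp(heat_load, tps_thickness):
    return 300.0 + 0.8 * heat_load / tps_thickness  # toy surrogate, K

def failure_probability(n=10000, limit_k=560.0, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        q = rng.gauss(100.0, 15.0)   # uncertain integrated heat load
        t = rng.gauss(0.35, 0.02)    # uncertain TPS thickness
        if bondline_temp(q, t) > limit_k:
            failures += 1
    return failures / n

p = failure_probability(n=2000)
```

The response-surface approach replaces the expensive tool chain with a fitted surrogate, trading fidelity for many more affordable samples.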

  17. Advancing reservoir operation description in physically based hydrological models

    Science.gov (United States)

    Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo

    2016-04-01

    Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision-making processes in response to, e.g., fluctuations of energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision-making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show the extent to which the hydrological regime in the catchment is affected by different behavioural models and reservoir
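The simple target-level rule curve that this record argues against can itself be written in a few lines, which makes its limitations easy to see. A hedged sketch with made-up numbers and a linear storage-level mapping (not the authors' behavioural models):

```python
# Illustrative target-level rule-curve release policy: release enough water
# to move the current level toward the seasonal target over one time step,
# bounded below by zero and above by outlet capacity.

def rule_curve_release(level_m, target_level_m, area_m2,
                       dt_s, max_release_m3s, inflow_m3s=0.0):
    """Release decision (m^3/s) tracking the target level."""
    surplus_m3 = (level_m - target_level_m) * area_m2   # water above target
    desired = inflow_m3s + surplus_m3 / dt_s
    return min(max(desired, 0.0), max_release_m3s)

# 0.5 m above target on a 1 km^2 reservoir, daily step: modest drawdown
release = rule_curve_release(100.5, 100.0, 1_000_000.0, 86400.0, 50.0)
```

Because the decision depends only on the level relative to the target, the policy is blind to energy prices, plant outages, and snowpack, which is precisely what the behavioural models in this record add.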

  18. Physically-based modelling of polycrystalline semiconductor devices

    International Nuclear Information System (INIS)

    Lee, S.

    2000-01-01

    Thin-film technology using polycrystalline semiconductors has been widely applied to active-matrix-addressed liquid crystal displays (AMLCDs), where thin-film transistors act as digital pixel switches. Research and development is in progress to integrate the driver circuits around the periphery of the display, resulting in significant cost reduction of connections between rows and columns and the peripheral circuitry. For this latter application, where for instance it is important to control the greyscale voltage level delivered to the pixel, an understanding of device behaviour is required so that models can be developed for analogue circuit simulation. For this purpose, various analytical models have been developed based on that of Seto, who considered the effect of monoenergetic trap states and grain boundaries in polycrystalline materials but not the contribution of the grains to the electrical properties. The principal aim of this thesis is to describe the use of a numerical device simulator (ATLAS) as a tool to investigate the physics of the trapping process involved in the device operation, which additionally takes into account the effect of multienergetic trapping levels and the contribution of the grains in the modelling. A study of the conventional analytical models is presented, and an alternative approach is introduced which takes into account the grain regions to enhance the accuracy of the analytical modelling. A physically-based discrete-grain-boundary model and characterisation method are introduced to study the effects of the multienergetic trap states on the electrical characteristics of poly-TFTs, using CdSe devices as the experimental example, and the electrical parameters such as the density distribution of the trapping states are extracted. The results show excellent agreement between the simulation and experimental data. The limitations of this proposed physical model are also studied and discussed. (author)

  19. Simplified Physics Based Models Research Topical Report on Task #2

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Ganesh, Priya

    2014-10-31

    We present a simplified-physics approach, in which only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir, and the pressure buildup within the caprock.

  20. System level modeling and component level control of fuel cells

    Science.gov (United States)

    Xue, Xingjian

    This dissertation investigates fuel cell systems and related technologies in three aspects: (1) system-level dynamic modeling of both PEM fuel cells (PEMFC) and solid oxide fuel cells (SOFC); (2) development of a condition monitoring scheme for PEM fuel cell systems using a model-based statistical method; and (3) development of strategies and algorithms for precision control with potential application in energy systems. The dissertation first presents a system-level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operation: it makes the membrane function appropriately and improves durability. The low-temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two-phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon and builds a comprehensive model for PEM fuel cells at the system level. The model features a complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental results from the open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. Anode-supported tubular SOFCs are also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard to incorporate a modified Nernst potential expression and the heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the

  1. Sorption isotherms: A review on physical bases, modeling and measurement

    Energy Technology Data Exchange (ETDEWEB)

    Limousin, G. [Atomic Energy Commission, Tracers Technology Laboratory, 38054 Grenoble Cedex (France) and Laboratoire d'etude des Transferts en Hydrologie et Environnement (CNRS-INPG-IRD-UJF), BP 53, 38041 Grenoble Cedex (France)]. E-mail: guillaumelimousin@yahoo.fr; Gaudet, J.-P. [Laboratoire d'etude des Transferts en Hydrologie et Environnement (CNRS-INPG-IRD-UJF), BP 53, 38041 Grenoble Cedex (France); Charlet, L. [Laboratoire de Geophysique Interne et Tectonophysique - CNRS-IRD-LCPC-UJF-Universite de Savoie, BP 53, 38041 Grenoble Cedex (France); Szenknect, S. [Atomic Energy Commission, Tracers Technology Laboratory, 38054 Grenoble Cedex (France); Barthes, V. [Atomic Energy Commission, Tracers Technology Laboratory, 38054 Grenoble Cedex (France); Krimissa, M. [Electricite de France, Division Recherche et Developpement, Laboratoire National d'Hydraulique et d'Environnement - P78, 6 quai Watier, 78401 Chatou (France)

    2007-02-15

    The retention (or release) of a liquid compound on a solid controls the mobility of many substances in the environment and has been quantified in terms of the 'sorption isotherm'. This paper does not review the different sorption mechanisms. It presents the physical bases underlying the definition of a sorption isotherm, different empirical or mechanistic models, and details several experimental methods to acquire a sorption isotherm. For appropriate measurement and interpretation of isotherm data, this review emphasizes four main points: (i) the adsorption (or desorption) isotherm does not by itself provide any information about the reactions involved in the sorption phenomenon, so mechanistic interpretations must be carefully verified. (ii) Among studies, the range of reaction times is extremely wide, and this can lead to misinterpretations regarding the irreversibility of the reaction: a pseudo-hysteresis of the release compared with the retention is often observed. Comparing the mean characteristic time of the reaction with the mean residence time of the mobile phase in the natural system makes it possible to determine whether the studied retention/release phenomenon should be considered instantaneous and reversible, almost irreversible, or whether reaction kinetics must be taken into account. (iii) When the concentration of the retained substance is low enough, the composition of the bulk solution remains constant and a single-species isotherm is often sufficient, although it remains strongly dependent on the background medium. At higher concentrations, sorption may be driven by competition between several species that affect the composition of the bulk solution. (iv) The measurement method has a great influence. In particular, the background ionic medium, the solid/solution ratio and the use of flow-through or closed reactors are of major importance. The chosen method should balance easy-to-use features and representativity of the studied
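Two classic single-species isotherm forms of the kind this review classifies (one mechanistic, one empirical) can be written in a few lines. This is a hedged illustration with made-up parameter values, not a reproduction of the review's model catalogue:

```python
# Two standard sorption isotherm models: q = sorbed concentration,
# c = solution concentration at equilibrium.

def langmuir(c, q_max, k):
    """Mechanistic form: a finite number of identical sorption sites,
    saturating at q_max as c grows."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """Empirical power-law form; does not saturate."""
    return k_f * c ** (1.0 / n)
```

Fitting either form to batch data says nothing by itself about the underlying reaction, which is exactly point (i) of the review: a good fit is not a mechanistic identification.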

  2. Physics-Based Modeling of Meteor Entry and Breakup

    Science.gov (United States)

    Prabhu, Dinesh K.; Agrawal, Parul; Allen, Gary A., Jr.; Bauschlicher, Charles W., Jr.; Brandis, Aaron M.; Chen, Yih-Kang; Jaffe, Richard L.; Palmer, Grant E.; Saunders, David A.; Stern, Eric C.; et al.

    2015-01-01

    A new research effort at NASA Ames Research Center has been initiated in Planetary Defense, which integrates the disciplines of planetary science, atmospheric entry physics, and physics-based risk assessment. This paper describes work within the new program and is focused on meteor entry and breakup. Over the last six decades, significant effort was expended in the US and in Europe to understand meteor entry, including ablation, fragmentation and airburst (if any), for various types of meteors ranging from stony to iron spectral types. These efforts have produced primarily empirical mathematical models based on observations. Weaknesses of these models, apart from their empiricism, are reliance on idealized shapes (spheres, cylinders, etc.) and simplified models for the thermal response of meteoritic materials to aerodynamic and radiative heating. Furthermore, the fragmentation and energy release of meteors (airburst) is poorly understood. On the other hand, the flight of human-made atmospheric entry capsules is well understood. The capsules and their requisite heatshields are designed and margined to survive entry. However, the highest-speed Earth entry for capsules is 13 km/s (Stardust). Furthermore, Earth entry capsules have never exceeded diameters of 5 m, nor have their peak aerothermal environments exceeded 0.3 atm and 1 kW/sq cm. The aims of the current work are: (i) to define the aerothermal environments for objects with entry velocities from 13 to 20 km/s; (ii) to explore various hypotheses of fragmentation and airburst of stony meteors in the near term; (iii) to explore the possibility of performing relevant ground-based tests to verify candidate hypotheses; and (iv) to quantify the energy released in airbursts. The results of the new simulations will be used to anchor the risk assessment analyses. With these aims in mind, state-of-the-art entry capsule design tools are being extended for meteor entries. We describe: (i) applications of current simulation tools to

  3. Physics based Degradation Modeling and Prognostics of Electrolytic Capacitors under Electrical Overstress Conditions

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper proposes a physics based degradation modeling and prognostics approach for electrolytic capacitors. Electrolytic capacitors are critical components in...

  4. A Physics-Based Modeling Framework for Prognostic Studies

    Science.gov (United States)

    Kulkarni, Chetan S.

    2014-01-01

    Prognostics and Health Management (PHM) methodologies have emerged as one of the key enablers for achieving efficient system-level maintenance as part of a busy operations schedule, and for lowering overall life cycle costs. PHM is also emerging as a high-priority issue in critical applications, where the focus is on conducting fundamental research in the field of integrated systems health management. The term diagnostics relates to the ability to detect and isolate faults or failures in a system. Prognostics, on the other hand, is the process of predicting health condition and remaining useful life based on the current state, previous conditions and future operating conditions. PHM methods combine sensing, data collection, and interpretation of environmental, operational, and performance-related parameters to indicate system health under actual application conditions. The development of prognostics methodologies for the electronics field has become more important as more electrical systems are being used to replace traditional systems in several applications in the aeronautics, maritime, and automotive fields. The development of prognostics methods for electronics presents several challenges due to the great variety of components used in a system, the continuous development of new electronics technologies, and a general lack of understanding of how electronics fail. Similarly, with electric unmanned aerial vehicles, electric/hybrid cars, and commercial passenger aircraft, we are witnessing a drastic increase in the usage of batteries to power vehicles. However, for battery-powered vehicles to operate at maximum efficiency and reliability, it becomes crucial to both monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. We develop an electrochemistry-based model of Li-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable

  5. A system-level model for the microbial regulatory genome.

    Science.gov (United States)

    Brooks, Aaron N; Reiss, David J; Allard, Antoine; Wu, Wei-Ju; Salvanha, Diego M; Plaisier, Christopher L; Chandrasekaran, Sriram; Pan, Min; Kaur, Amardeep; Baliga, Nitin S

    2014-07-15

    Microbes can tailor transcriptional responses to diverse environmental challenges despite having streamlined genomes and a limited number of regulators. Here, we present data-driven models that capture the dynamic interplay of the environment and genome-encoded regulatory programs of two types of prokaryotes: Escherichia coli (a bacterium) and Halobacterium salinarum (an archaeon). The models reveal how the genome-wide distributions of cis-acting gene regulatory elements and the conditional influences of transcription factors at each of those elements encode programs for eliciting a wide array of environment-specific responses. We demonstrate how these programs partition transcriptional regulation of genes within regulons and operons to re-organize gene-gene functional associations in each environment. The models capture fitness-relevant co-regulation by different transcriptional control mechanisms acting across the entire genome, to define a generalized, system-level organizing principle for prokaryotic gene regulatory networks that goes well beyond existing paradigms of gene regulation. An online resource (http://egrin2.systemsbiology.net) has been developed to facilitate multiscale exploration of conditional gene regulation in the two prokaryotes. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.

  6. PREFACE: Physics-Based Mathematical Models for Nanotechnology

    Science.gov (United States)

    Voon, Lok C. Lew Yan; Melnik, Roderick; Willatzen, Morten

    2008-03-01

    stain-resistant clothing, but with thousands more anticipated. The focus of this interdisciplinary workshop was on determining what kind of new theoretical and computational tools will be needed to advance the science and engineering of nanomaterials and nanostructures. Thanks to the stimulating environment of the BIRS, participants of the workshop had plenty of opportunity to exchange new ideas on one of the main topics of this workshop—physics-based mathematical models for the description of low-dimensional semiconductor nanostructures (LDSNs) that are becoming increasingly important in technological innovations. The main objective of the workshop was to bring together some of the world leading experts in the field from each of the key research communities working on different aspects of LDSNs in order to (a) summarize the state-of-the-art models and computational techniques for modeling LDSNs, (b) identify critical problems of major importance that require solution and prioritize them, (c) analyze feasibility of existing mathematical and computational methodologies for the solution of some such problems, and (d) use some of the workshop working sessions to explore promising approaches in addressing identified challenges. With the possibility of growing practically any shape and size of heterostructures, it becomes essential to understand the mathematical properties of quantum-confined structures including properties of bulk states, interface states, and surface states as a function of shape, size, and internal strain. This workshop put strong emphasis on discussions of the new mathematics needed in nanotechnology especially in relation to geometry and material-combination optimization of device properties such as electronic, optical, and magnetic properties. 
The problems that were addressed at this meeting are of immense importance in determining such quantum-mechanical properties and the group of invited participants covered very well all the relevant disciplines

  7. Virtual design and optimization studies for industrial silicon microphones applying tailored system-level modeling

    Science.gov (United States)

    Kuenzig, Thomas; Dehé, Alfons; Krumbein, Ulrich; Schrag, Gabriele

    2018-05-01

    Maxing out the technological limits in order to satisfy customers’ demands and obtain the best performance of micro-devices and -systems is a challenge for today’s manufacturers. Dedicated system simulation is key to investigating the potential of device and system concepts in order to identify the best design w.r.t. the given requirements. We present a tailored, physics-based system-level modeling approach combining lumped and distributed models that provides detailed insight into device and system operation at low computational expense. The resulting transparent, scalable (i.e. reusable) and modularly composed models explicitly contain the physical dependency on all relevant parameters, and are thus well suited for dedicated investigation and optimization of MEMS devices and systems. This is demonstrated for an industrial capacitive silicon microphone. The performance of such microphones is determined by distributed effects like viscous damping and inhomogeneous capacitance variation across the membrane, as well as by system-level phenomena like package-induced acoustic effects and the impact of the electronic circuitry for biasing and read-out. The model presented here covers all relevant figures of merit and thus makes it possible to evaluate the optimization potential of silicon microphones for high-fidelity applications. This work was carried out at the Technical University of Munich, Chair for Physics of Electrotechnology. Thomas Kuenzig is now with Infineon Technologies AG, Neubiberg.

  8. Enabling full field physics based OPC via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-03-01

    As EUV lithography marches closer to reality for high volume production, its peculiar modeling challenges related to both inter- and intra-field effects have necessitated building OPC infrastructure that operates with field-position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise-constant models, where static input models are assigned to specific x/y positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7nm and 5nm feature sizes, small discontinuities in OPC from piecewise-constant model changes can cause unacceptable levels of edge placement error (EPE). The introduction of Dynamic Model Generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for EMF, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  9. A Physics-Based Starting Model for Gas Turbine Engines, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is to demonstrate the feasibility of producing an integrated starting model for gas turbine engines using a new physics-based...

  10. Physics-based Modeling Tools for Life Prediction and Durability Assessment of Advanced Materials, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical objectives of this program are: (1) to develop a set of physics-based modeling tools to predict the initiation of hot corrosion and to address pit and...

  11. Short review of runoff and erosion physically based models

    Directory of Open Access Journals (Sweden)

    Gabrić Ognjen

    2015-01-01

    Processes of runoff and erosion are among the main research subjects in hydrological science. Based on field and laboratory measurements, and in step with the development of computational techniques, runoff and erosion models based on equations describing the physics of the process have also been developed. Several runoff and erosion models that describe the entire process of sediment genesis and transport on the catchment are described and compared.

  12. A simplified physics-based model for nickel hydrogen battery

    Science.gov (United States)

    Liu, Shengyi; Dougal, Roger A.; Weidner, John W.; Gao, Lijun

    This paper presents a simplified model of a nickel hydrogen battery based on a first approximation. The battery is assumed uniform throughout. The reversible potential is considered primarily due to the one-electron transfer redox reaction of nickel hydroxide and nickel oxyhydroxide. The non-ideality due to phase reactions is characterized by two-parameter activity coefficients. The overcharge process is characterized by the oxygen reaction. The overpotentials are lumped into a tunable resistive drop to fit particular battery designs. The model is implemented in the Virtual Test Bed environment, and the simulated characteristics of the battery are in good agreement with experimental data within the normal operating regime. The model can be used for battery dynamic simulation and design in a satellite power system, an example of which is given.
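
A minimal sketch of the lumped formulation described above: a Nernst-type one-electron reversible potential, a two-parameter activity-coefficient correction for non-ideality, and a single tunable resistive drop for the lumped overpotentials. All parameter values (`e0`, `a1`, `a2`, `r_lumped`) are hypothetical placeholders, not values from the paper, and the overcharge/oxygen reaction is omitted:

```python
import math

R, T, F = 8.314, 298.15, 96485.0  # gas constant, temperature (K), Faraday constant

def reversible_potential(soc, e0=1.35, a1=0.5, a2=-0.2):
    """One-electron Ni(OH)2/NiOOH redox potential vs. state of charge (soc in (0,1))."""
    soc = min(max(soc, 1e-6), 1.0 - 1e-6)          # clamp away from singularities
    nernst = (R * T / F) * math.log(soc / (1.0 - soc))
    # two-parameter activity-coefficient correction (hypothetical polynomial form)
    nonideal = a1 * (2.0 * soc - 1.0) + a2 * (6.0 * soc**2 - 6.0 * soc + 1.0)
    return e0 + nernst + nonideal

def terminal_voltage(soc, current, r_lumped=0.05):
    """Terminal voltage with all overpotentials lumped into one resistive drop.
    current > 0 denotes discharge."""
    return reversible_potential(soc) - current * r_lumped
```

As expected, the sketch reproduces the qualitative behavior: voltage rises with state of charge and sags under discharge current.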

  13. Physics Based Modeling and Prognostics of Electrolytic Capacitors

    Science.gov (United States)

    Kulkarni, Chetan; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    This paper proposes a first-principles-based modeling and prognostics approach for electrolytic capacitors. Electrolytic capacitors have become critical components in electronics systems in aeronautics and other domains. Degradations and faults in the DC-DC converter unit propagate to the GPS and navigation subsystems and affect the overall solution. Capacitors and MOSFETs are the two major components that cause degradations and failures in DC-DC converters. This type of capacitor is known for its low reliability and frequent breakdown in critical systems like power supplies of avionics equipment and electrical drivers of electromechanical actuators of control surfaces. Some of the more prevalent fault effects, such as a ripple voltage surge at the power supply output, can cause glitches in the GPS position and velocity output, which, in turn, if not corrected, will propagate and distort the navigation solution. In this work, we study the effects of accelerated aging due to thermal stress on different sets of capacitors under different conditions. Our focus is on deriving first-principles degradation models for thermal stress conditions. Data collected from simultaneous experiments are used to validate the derived models. Our overall goal is to derive accurate models of capacitor degradation, and use them to predict performance changes in DC-DC converters.

  14. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    Science.gov (United States)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology, where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
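
One common way to guide a generator with a physics-based model, in the spirit described above, is to add a physics-consistency penalty to the usual adversarial objective. The sketch below is a generic illustration of that idea, not the authors' method; the weighting `lam` and the squared-error penalty form are assumptions:

```python
import math

def generator_loss(discriminator_score, generated, physics_output, lam=0.5):
    """Hybrid generator loss: adversarial term plus physics-consistency penalty.

    discriminator_score -- D's probability (in (0,1)) that the sample is real
    generated           -- the generator's output sequence
    physics_output      -- the physics-based model's simulation of the same quantity
    lam                 -- hypothetical weight trading off the two terms
    """
    # standard non-saturating adversarial term: fool the discriminator
    adversarial = -math.log(discriminator_score)
    # physics-consistency term: mean squared deviation from the physics model
    physics = sum((g - p) ** 2 for g, p in zip(generated, physics_output)) / len(generated)
    return adversarial + lam * physics
```

With the discriminator held fixed, a generator output that agrees with the physics-based simulation incurs a strictly lower loss than one that contradicts it.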

  15. System level permeability modeling of porous hydrogen storage materials.

    Energy Technology Data Exchange (ETDEWEB)

    Kanouff, Michael P.; Dedrick, Daniel E.; Voskuilen, Tyler (Purdue University, West Lafayette, IN)

    2010-01-01

    A permeability model for hydrogen transport in a porous material is successfully applied to both laboratory-scale and vehicle-scale sodium alanate hydrogen storage systems. The use of a Knudsen number dependent relationship for permeability of the material in conjunction with a constant area fraction channeling model is shown to accurately predict hydrogen flow through the reactors. Generally applicable model parameters were obtained by numerically fitting experimental measurements from reactors of different sizes and aspect ratios. The degree of channeling was experimentally determined from the measurements and found to be 2.08% of total cross-sectional area. Use of this constant area channeling model and the Knudsen dependent Young & Todd permeability model allows for accurate prediction of the hydrogen uptake performance of full-scale sodium alanate and similar metal hydride systems.
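
The record above combines a Knudsen-number-dependent permeability with a constant area-fraction channeling model (2.08% of the cross-section). The sketch below illustrates the general structure of such a model under stated assumptions: the mean free path uses the standard kinetic-theory formula, the linear-in-Kn slip correction and the channel permeability ratio are hypothetical placeholders, and no claim is made that this matches the Young & Todd form used in the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(temp_k, pressure_pa, pore_diameter_m, molecule_diameter_m=2.9e-10):
    """Kn = mean free path / pore diameter, with lambda = kT / (sqrt(2) pi d^2 P)."""
    mean_free_path = K_B * temp_k / (
        math.sqrt(2.0) * math.pi * molecule_diameter_m**2 * pressure_pa)
    return mean_free_path / pore_diameter_m

def effective_permeability(k_darcy, kn, channel_fraction=0.0208, slip_coeff=4.0,
                           channel_ratio=100.0):
    """Area-weighted parallel combination of slip-corrected bed and channels.

    channel_fraction -- 2.08% channeling area fraction reported in the abstract
    slip_coeff, channel_ratio -- hypothetical illustration values
    """
    k_bed = k_darcy * (1.0 + slip_coeff * kn)          # rarefaction enhances flow
    k_channel = channel_ratio * k_darcy                # channels conduct far better
    return (1.0 - channel_fraction) * k_bed + channel_fraction * k_channel
```

For hydrogen at 300 K and 1 bar in micron-scale pores this places the flow in the slip regime (Kn of order 0.1), where the Knudsen correction matters.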

  16. Evaluating crown fire rate of spread predictions from physics-based models

    Science.gov (United States)

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  17. A physically-based constitutive model for SA508-III steel: Modeling and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Dingqian [National Die & Mold CAD Engineering Research Center, Shanghai Jiao Tong University, 1954 Huashan Rd., Shanghai 200030 (China); Chen, Fei, E-mail: feechn@gmail.com [National Die & Mold CAD Engineering Research Center, Shanghai Jiao Tong University, 1954 Huashan Rd., Shanghai 200030 (China); Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Cui, Zhenshan, E-mail: cuizs@sjtu.edu.cn [National Die & Mold CAD Engineering Research Center, Shanghai Jiao Tong University, 1954 Huashan Rd., Shanghai 200030 (China)

    2015-05-14

    Due to its good toughness and high weldability, SA508-III steel has been widely used in the components manufacturing of reactor pressure vessels (RPV) and steam generators (SG). In this study, the hot deformation behaviors of SA508-III steel are investigated by isothermal hot compression tests with forming temperature of (950–1250)°C and strain rate of (0.001–0.1)s{sup −1}, and the corresponding flow stress curves are obtained. According to the experimental results, quantitative analysis of work hardening and dynamic softening behaviors is presented. The critical stress and critical strain for initiation of dynamic recrystallization are calculated by setting the second derivative of the third order polynomial. Based on the classical stress–dislocation relation and the kinetics of dynamic recrystallization, a two-stage constitutive model is developed to predict the flow stress of SA508-III steel. Comparisons between the predicted and measured flow stress indicate that the established physically-based constitutive model can accurately characterize the hot deformations for the steel. Furthermore, a successful numerical simulation of the industrial upsetting process is carried out by implementing the developed constitutive model into a commercial software, which evidences that the physically-based constitutive model is practical and promising to promote industrial forging process for nuclear components.

  18. A physically-based constitutive model for SA508-III steel: Modeling and experimental verification

    International Nuclear Information System (INIS)

    Dong, Dingqian; Chen, Fei; Cui, Zhenshan

    2015-01-01

    Due to its good toughness and high weldability, SA508-III steel has been widely used in the components manufacturing of reactor pressure vessels (RPV) and steam generators (SG). In this study, the hot deformation behaviors of SA508-III steel are investigated by isothermal hot compression tests with forming temperature of (950–1250)°C and strain rate of (0.001–0.1)s −1 , and the corresponding flow stress curves are obtained. According to the experimental results, quantitative analysis of work hardening and dynamic softening behaviors is presented. The critical stress and critical strain for initiation of dynamic recrystallization are calculated by setting the second derivative of the third order polynomial. Based on the classical stress–dislocation relation and the kinetics of dynamic recrystallization, a two-stage constitutive model is developed to predict the flow stress of SA508-III steel. Comparisons between the predicted and measured flow stress indicate that the established physically-based constitutive model can accurately characterize the hot deformations for the steel. Furthermore, a successful numerical simulation of the industrial upsetting process is carried out by implementing the developed constitutive model into a commercial software, which evidences that the physically-based constitutive model is practical and promising to promote industrial forging process for nuclear components
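
The two-stage structure described in the two records above (a stress-dislocation work-hardening/dynamic-recovery stage, then dynamic recrystallization softening) is commonly written with an Estrin-Mecking solution for stage one and Avrami kinetics for the recrystallized fraction. The sketch below uses those standard functional forms; every numerical parameter is a hypothetical placeholder, not a fitted SA508-III value:

```python
import math

def flow_stress(strain, sigma0=80.0, sigma_sat=150.0, sigma_ss=120.0,
                omega=5.0, eps_c=0.15, eps_p=0.3, k=0.7, n=2.0):
    """Two-stage constitutive model (MPa) at fixed temperature and strain rate.

    Stage 1: Estrin-Mecking work hardening / dynamic recovery toward sigma_sat.
    Stage 2: beyond the critical strain eps_c, Avrami-type DRX softening
             toward the steady-state stress sigma_ss.
    """
    sig_wh = math.sqrt(sigma_sat**2 + (sigma0**2 - sigma_sat**2) * math.exp(-omega * strain))
    if strain <= eps_c:
        return sig_wh
    # recrystallized volume fraction (Avrami kinetics)
    x_drx = 1.0 - math.exp(-k * ((strain - eps_c) / eps_p) ** n)
    return sig_wh - x_drx * (sigma_sat - sigma_ss)
```

The sketch reproduces the characteristic single-peak curve: hardening up to a peak past the critical strain, then softening to a steady-state plateau.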

  19. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bonvini, Marco [Whisker Labs, Oakland, CA (United States); Piette, Mary Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Page, Janie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lin, Guanjing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hu, R. Lilly [Univ. of California, Berkeley, CA (United States)

    2017-08-11

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  20. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating ever more functionality onto a single chip forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so...... does the design time and effort. This challenge is widely recognized throughout academia and the industry and in order to address this, novel frameworks and methods, which will automate design steps as well as raise the level of abstraction used to design systems, are being called upon. To support...... is carried out in collaboration with the Danish company and DaNES partner, Bang & Olufsen ICEpower. Bang & Olufsen ICEpower provides industrial case studies which will allow the proposed modelling framework to be exercised and assessed in terms of ease of use, production speed, accuracy and efficiency...

  1. Development and validation of a physics-based urban fire spread model

    OpenAIRE

    HIMOTO, Keisuke; TANAKA, Takeyoshi

    2008-01-01

    A computational model for fire spread in a densely built urban area is developed. The model is distinct from existing models in that it explicitly describes fire spread phenomena with physics-based knowledge achieved in the field of fire safety engineering. In the model, urban fire is interpreted as an ensemble of multiple building fires; that is, the fire spread is simulated by predicting behaviors of individual building fires under the thermal influence of neighboring building fires. Adopte...

  2. Sustainable Manufacturing via Multi-Scale, Physics-Based Process Modeling and Manufacturing-Informed Design

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-04-01

    This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.

  3. A physics-based potential and electric field model of a nanoscale ...

    Indian Academy of Sciences (India)

    In this paper, we have developed a physics-based model for surface potential, channel potential, electric field and drain current for AlGaN/GaN high electron mobility transistor with high-K gate dielectric using two-dimensional Poisson equation under full depletion approximation with the inclusion of effect of polarization ...

  4. A physics-based potential and electric field model of a nanoscale ...

    Indian Academy of Sciences (India)

    ... paper, we have developed a physics-based model for surface potential, channel potential, electric field and drain current for AlGaN/GaN high electron mobility transistor with high-K gate dielectric using two-dimensional Poisson equation under full depletion approximation with the inclusion of effect of polarization charges.

  5. Physics-based statistical model and simulation method of RF propagation in urban environments

    Science.gov (United States)

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient close-formed parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  6. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
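
The local one-at-a-time (OAT) sensitivity analysis mentioned above perturbs each parameter individually about a base point and measures the resulting output change. A generic sketch follows; the toy `model` and its parameters stand in for the coupled hydrological-geomechanical model and are purely illustrative:

```python
def model(params):
    # hypothetical stand-in for a slope stability output (e.g. a factor of safety)
    return (params["cohesion"] * 0.5
            + params["friction_angle"] * 0.1
            - params["conductivity"] * 2.0)

def oat_sensitivity(model, base, perturbation=0.1):
    """Normalized local one-at-a-time sensitivities at the base parameter set.

    Each parameter is perturbed by +10% while all others stay fixed; the
    sensitivity is the relative output change per relative input change.
    """
    base_out = model(base)
    sensitivities = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1.0 + perturbation)
        sensitivities[name] = ((model(perturbed) - base_out) / base_out) / perturbation
    return sensitivities

base = {"cohesion": 10.0, "friction_angle": 30.0, "conductivity": 1.0}
sens = oat_sensitivity(model, base)
```

Parameters with the largest absolute sensitivities (here, cohesion) would then be selected for systematic sampling and calibration, as done in the study.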

  7. Integrating 3D geological information with a national physically-based hydrological modelling system

    Science.gov (United States)

    Lewis, Elizabeth; Parkin, Geoff; Kessler, Holger; Whiteman, Mark

    2016-04-01

    Robust numerical models are an essential tool for informing flood and water management and policy around the world. Physically-based hydrological models have traditionally not been used for such applications due to prohibitively large data, time and computational resource requirements. Given recent advances in computing power and data availability, a robust, physically-based hydrological modelling system for Great Britain using the SHETRAN model and national datasets has been created. Such a model has several advantages over less complex systems. Firstly, compared with conceptual models, a national physically-based model is more readily applicable to ungauged catchments, in which hydrological predictions are also required. Secondly, the results of a physically-based system may be more robust under changing conditions such as climate and land cover, as physical processes and relationships are explicitly accounted for. Finally, a fully integrated surface and subsurface model such as SHETRAN offers a wider range of applications compared with simpler schemes, such as assessments of groundwater resources, sediment and nutrient transport and flooding from multiple sources. As such, SHETRAN provides a robust means of simulating numerous terrestrial system processes which will add physical realism when coupled to the JULES land surface model. 306 catchments spanning Great Britain have been modelled using this system. The standard configuration of this system performs satisfactorily (NSE > 0.5) for 72% of catchments and well (NSE > 0.7) for 48%. The remaining 28% of catchments performed relatively poorly (NSE < 0.5). The system also supports land cover change studies and integrated assessments of groundwater and surface water resources.
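
The Nash-Sutcliffe efficiency (NSE) used above to score the catchment simulations compares the model's squared errors against the variance of the observations; NSE = 1 is a perfect fit, and NSE > 0.5 / > 0.7 are the "satisfactory" / "good" thresholds quoted. A minimal implementation (the sample flow series are illustrative only):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean_obs)^2)."""
    mean_obs = sum(observed) / len(observed)
    ss_residual = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_total = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_residual / ss_total

# illustrative observed vs. simulated discharge values
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [1.1, 1.9, 3.2, 3.8, 5.1]
score = nse(obs, sim)  # close to 1 for a good fit
```

Note that a model predicting the observed mean everywhere scores exactly 0, so NSE > 0.5 requires genuine skill beyond that baseline.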

  8. Physics-Based Identification, Modeling and Risk Management for Aeroelastic Flutter and Limit-Cycle Oscillations (LCO), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed research program will develop a physics-based identification, modeling and risk management infrastructure for aeroelastic transonic flutter and...

  9. Physics-based distributed snow models in the operational arena: Current and future challenges

    Science.gov (United States)

    Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.

    2017-12-01

    The demand for modeling tools robust to climate change and weather extremes along with coincident increases in computational capabilities have led to an increase in the use of physics-based snow models in operational applications. Current operational applications include those of the WSL-SLF across Switzerland, the ASO in California, and the USDA-ARS in Idaho. While the physics-based approaches offer many advantages there remain limitations and modeling challenges. The most evident limitation remains computation times that often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amidst the assumption that, being founded on physical principles, these models require little to no calibration. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snowcover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, the need for advanced and spatially flexible methods and parameterizations, and prompt the community for open dialogue and future collaborations to further modeling capabilities.

  10. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    Science.gov (United States)

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicolson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
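A toy sketch of this kind of benchmark comparison, using synthetic data and ordinary least squares as a stand-in learner; the skill-score form assumed here (1 − MSE of model / MSE of reference) is one common definition and is not necessarily the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: "transmission loss" as a noisy function of two
# hypothetical inputs (e.g. a refraction parameter and a ground-impedance proxy).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 20.0 + 6.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0.0, 0.5, 200)

# Reference prediction: a constant, standing in for the homogeneous-atmosphere case.
baseline = np.full_like(y, y.mean())

# Simple learner stand-in: ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

def skill_score(y_true, y_pred, y_ref):
    """1 - MSE(model)/MSE(reference); 100% = perfect, 0% = no better than reference."""
    return 1.0 - np.mean((y_true - y_pred) ** 2) / np.mean((y_true - y_ref) ** 2)

print(f"skill vs. baseline: {100 * skill_score(y, pred, baseline):.1f}%")
```

A learner that merely reproduces the reference scores 0%, which is why the simple engineering models above can even score negatively.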

  11. Comparison of a Conceptual Groundwater Model and Physically Based Groundwater Model

    Science.gov (United States)

    Yang, J.; Zammit, C.; Griffiths, J.; Moore, C.; Woods, R. A.

    2017-12-01

    Groundwater is a vital resource for human activities including agricultural practice and urban water demand. Hydrologic modelling is an important way to study groundwater recharge, movement and discharge, and its response to both human activity and climate change. To understand the groundwater hydrologic processes nationally in New Zealand, we have developed a conceptually based groundwater flow model, which is fully integrated into a national surface-water model (TopNet), and is able to simulate groundwater recharge, movement, and interaction with surface water. To demonstrate the capability of this groundwater model (TopNet-GW), we applied the model to an irrigated area with water shortage and pollution problems in the upper Ruamahanga catchment in the Greater Wellington Region, New Zealand, and compared its performance with a physically-based groundwater model (MODFLOW). The comparison includes river flow at flow gauging sites, and interaction between groundwater and river. Results showed that the TopNet-GW produced flow and groundwater interaction patterns similar to those of the MODFLOW model, but took less computation time. This shows the conceptually-based groundwater model has the potential to simulate national groundwater processes, and could be used as a surrogate for the more physically based model.

  12. An Improved Physics-Based Model for Topographic Correction of Landsat TM Images

    Directory of Open Access Journals (Sweden)

    Ainong Li

    2015-05-01

    Optical remotely sensed images in mountainous areas are subject to radiometric distortions induced by topographic effects, which need to be corrected before quantitative applications. Based on the Li model and the Sandmeier model, this paper proposed an improved physics-based model for the topographic correction of Landsat Thematic Mapper (TM) images. The model employed Normalized Difference Vegetation Index (NDVI) thresholds to approximately divide land targets into eleven groups, due to NDVI's lower sensitivity to topography and its significant role in indicating land cover type. Within each group of terrestrial targets, corresponding MODIS BRDF (Bidirectional Reflectance Distribution Function) products were used to account for the land surface's BRDF effect, and topographic effects were corrected without the Lambertian assumption. The methodology was tested with two TM scenes of severely rugged mountain areas acquired under different sun elevation angles. Results demonstrated that reflectance of sun-averted slopes was evidently enhanced, and the overall quality of images was improved with topographic effects being effectively suppressed. Correlation coefficients between Near Infra-Red band reflectance and illumination condition reduced almost to zero, and coefficients of variance also showed some reduction. By comparison with the other two physics-based models (the Sandmeier model and the Li model), the proposed model showed favorable results on the two tested Landsat scenes. With the almost half-century accumulation of Landsat data and the successive launch and operation of Landsat 8, the improved model in this paper can be potentially helpful for the topographic correction of Landsat and Landsat-like data.

  13. Physically based model for extracting dual permeability parameters using non-Newtonian fluids

    Science.gov (United States)

    Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.

    2017-12-01

    Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant structures. The major challenge to those models remains in the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically-based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils, by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (wi) corresponding to the representative macro and micro pores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
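Why a second, non-Newtonian fluid helps separate the two pore classes can be illustrated with standard laminar tube-flow formulas (Hagen-Poiseuille for the Newtonian case and its power-law analogue); the radii, pressure gradient and fluid properties below are illustrative choices, not values from the study:

```python
import numpy as np

def q_newtonian(r, dP, L, mu):
    """Hagen-Poiseuille flow rate through one cylindrical pore of radius r."""
    return np.pi * r**4 * dP / (8.0 * mu * L)

def q_power_law(r, dP, L, K, n):
    """Laminar tube flow of a power-law fluid (consistency K, index n)."""
    tau_w = dP * r / (2.0 * L)                      # wall shear stress
    return (np.pi * n * r**3 / (3.0 * n + 1.0)) * (tau_w / K) ** (1.0 / n)

dP, L = 1.0e4, 0.1                                   # Pa, m (illustrative)
r_macro, r_micro = 5.0e-4, 5.0e-5                    # m

# With water alone the macro/micro flow ratio scales as r^4 ...
ratio_water = q_newtonian(r_macro, dP, L, 1.0e-3) / q_newtonian(r_micro, dP, L, 1.0e-3)
# ... while a shear-thinning fluid (n = 0.5) scales as r^(3 + 1/n) = r^5,
# so the two fluids together constrain the pore radii independently.
ratio_pl = q_power_law(r_macro, dP, L, 0.1, 0.5) / q_power_law(r_micro, dP, L, 0.1, 0.5)
print(ratio_water, ratio_pl)
```

Because the two fluids weight the same pore radii differently, flow experiments with both yield independent equations for the effective radii and the flow fractions, which is the degeneracy the paper's solver exploits.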

  14. Evaluating performance of simplified physically based models for shallow landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2016-11-01

    Rainfall-induced shallow landslides can lead to loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations that might be susceptible to shallow landslides is a complex task and involves many disciplines: hydrology, geotechnical science, geology, hydrogeology, geomorphology, and statistics. Two main approaches are commonly used: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analyses. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and model performance indicators in order to identify and select the models whose behavior is the most reliable for particular case studies. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. The integration of the package in NewAge-JGrass uses other components, such as geographic information system tools, to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied for a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia. The area is extensively subject to rainfall-induced shallow landslides mainly because of its complex geology and climatology. The analysis was carried out considering all the combinations of the eight optimized indices and the three models. Parameter calibration, verification, and model performance assessment were performed by comparison with a detailed landslide inventory.

  15. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    Science.gov (United States)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  16. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    Science.gov (United States)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

    Dislocation-density-dependent physically based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on essential phenomena dominating the deformation, they have a larger range of usability and validity. Also, they are suitable for manufacturing chain simulations since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain and also interpass periods and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using adequate equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.

  17. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Ivo Prah

    2016-09-01

    The paper outlines a procedure for the computer-controlled calibration of the combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Therein, physically based methods were used for steering the division of the integral ICE into several sub-models and for determining parameters of selected components considering their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely solely on optimization techniques, for successful calibration of a large number of input parameters with low time consumption. Therefore, the proposed method is suitable for efficient calibration of simulation models of advanced ICEs.
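The multistage idea, pinning some parameters from governing equations and searching only over the remainder, can be sketched on a toy sub-model; every name and number below is hypothetical, not from the paper's engine model:

```python
import numpy as np

rng = np.random.default_rng(1)
speed = np.linspace(1000.0, 5000.0, 20)            # rpm (toy input)
true_gain = 2.5e-4
measured = 1.0 + true_gain * speed + rng.normal(0.0, 0.01, speed.size)

# Stage 1 (physics-based): the offset is pinned by a governing relation,
# here taken as an ambient pressure of 1 bar, so it is NOT a free parameter.
offset = 1.0

# Stage 2 (optimization): only the remaining parameter is searched,
# minimizing the RMS difference between simulation and measurement.
gains = np.linspace(1.0e-4, 4.0e-4, 301)
rms = [np.sqrt(np.mean((offset + g * speed - measured) ** 2)) for g in gains]
best = gains[int(np.argmin(rms))]
print(f"calibrated gain: {best:.2e}")
```

Fixing the offset physically halves the search space here; in the paper's setting the same division is what keeps the calibration of many parameters tractable.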

  18. A review of selected topics in physics based modeling for tunnel field-effect transistors

    Science.gov (United States)

    Esseni, David; Pala, Marco; Palestri, Pierpaolo; Alper, Cem; Rollo, Tommaso

    2017-08-01

    The research field on tunnel-FETs (TFETs) has been rapidly developing in the last ten years, driven by the quest for a new electronic switch operating at a supply voltage well below 1 V and thus delivering substantial improvements in the energy efficiency of integrated circuits. This paper reviews several aspects related to physics based modeling in TFETs, and shows how the description of these transistors implies a remarkable innovation and poses new challenges compared to conventional MOSFETs. A hierarchy of numerical models exists for TFETs covering a wide range of predictive capabilities and computational complexities. We start by reviewing seminal contributions on direct and indirect band-to-band tunneling (BTBT) modeling in semiconductors, from which most TCAD models have been actually derived. Then we move to the features and limitations of TCAD models themselves and to the discussion of what we define non-self-consistent quantum models, where BTBT is computed with rigorous quantum-mechanical models starting from frozen potential profiles and closed-boundary Schrödinger equation problems. We will then address models that solve the open-boundary Schrödinger equation problem, based either on the non-equilibrium Green's function (NEGF) formalism or on the quantum-transmitting-boundary formalism, and show how the computational burden of these models may vary in a wide range depending on the Hamiltonian employed in the calculations. A specific section is devoted to TFETs based on 2D crystals and van der Waals hetero-structures. The main goal of this paper is to provide the reader with an introduction to the most important physics based models for TFETs, and with a possible guidance to the wide and rapidly developing literature in this exciting research field.

  19. System-Level Modelling and Simulation of MEMS-Based Sensors

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Madsen, Jan; Shafique, Mohammad

    2005-01-01

    The growing complexity of MEMS devices and their increased use in embedded systems (e.g., wireless integrated sensor networks) demands a disciplined approach for MEMS design as well as the development of techniques for system-level modeling of these devices so that a seamless integration with the existing embedded system design methodologies is possible. In this paper, we present a MEMS design methodology that uses a VHDL-AMS based system-level model of a MEMS device as a starting point and combines the top-down and bottom-up design approaches for design, verification, and optimization...

  20. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty especially at a regional scale, resulting in unacceptable uncertainties of Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and shows more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can be potentially packaged into a forecasting system for shallow landslides providing technological support for the mitigation of these disasters at regional scale.
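The Monte Carlo scheme can be sketched with a textbook infinite-slope safety factor; the slope geometry and parameter intervals below are illustrative, not the paper's calibrated values, and pore pressure is ignored for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Infinite-slope safety factor for a dry slope (illustrative numbers):
theta = np.radians(35.0)                        # slope angle
gamma, z = 18.0, 2.0                            # unit weight (kN/m^3), soil depth (m)

# Uncertain soil parameters drawn from defined intervals:
c = rng.uniform(2.0, 8.0, n)                    # cohesion (kPa)
phi = np.radians(rng.uniform(28.0, 38.0, n))    # internal friction angle

fs = (c + gamma * z * np.cos(theta) ** 2 * np.tan(phi)) / (
    gamma * z * np.sin(theta) * np.cos(theta))

# The fraction of simulations with Fs < 1 is taken as the landslide probability.
p_landslide = float(np.mean(fs < 1.0))
print(f"P(Fs < 1) = {p_landslide:.3f}")
```

Applied pixel by pixel over a DEM, this turns a deterministic Fs map into the probability map the paper advocates.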

  1. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    Science.gov (United States)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty especially at a regional scale, resulting in unacceptable uncertainties of Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and shows more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can be potentially packaged into a forecasting system for shallow landslides providing technological support for the mitigation of these disasters at regional scale.

  2. Comparison of Lithium-Ion Anode Materials Using an Experimentally Verified Physics-Based Electrochemical Model

    Directory of Open Access Journals (Sweden)

    Rujian Fu

    2017-12-01

    Researchers are in search of parameters inside Li-ion batteries that can be utilized to control their external behavior. A physics-based electrochemical model can bridge the gap between Li+ transport and distribution inside the battery and the battery's external performance. In this paper, two commercially available Li-ion anode materials, graphite and lithium titanate (Li4Ti5O12, or LTO), were selected and a physics-based electrochemical model was developed based on half-cell assembly and testing. It is found that LTO has a smaller diffusion coefficient (Ds) than graphite, which causes a larger overpotential, leading to a smaller capacity utilization and, correspondingly, a shorter duration of constant current charge or discharge. However, in large current applications, LTO performs better than graphite because its effective particle radius decreases with increasing current, leading to enhanced diffusion. In addition, LTO has a higher activation overpotential in its side reactions; its degradation rate is expected to be much smaller than that of graphite, indicating a longer life span.

  3. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools on the assumption that the produced differences (residuals or errors) are white noise. In practice, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure has been developed to deal with PBM systematic errors, studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of CCM to a PBM shows a decrease in local biases, a better distribution of errors and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology appears to be an interesting way to improve a PBM while avoiding the work and cost of interfering with its internal structure.
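The complementary-correction idea, fitting the structure of the residuals externally and adding the fitted correction back, can be sketched as follows; a linear residual model and synthetic head data are assumed purely for illustration, and the CCM's actual error model may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
observed = rng.uniform(50.0, 100.0, 200)         # e.g. observed heads (m)

# A PBM stand-in whose residuals are NOT white noise: a constant bias plus
# an error component that grows with the state, plus random noise.
simulated = observed + 2.0 + 0.05 * (observed - 75.0) + rng.normal(0.0, 0.3, 200)

residuals = observed - simulated
# Complementary correction: model the systematic error structure externally
# (here a linear fit of residual vs. simulated head) and add it back,
# without touching the PBM's internals.
A = np.column_stack([np.ones_like(simulated), simulated])
coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
corrected = simulated + A @ coef

rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(rms(observed - simulated), rms(observed - corrected))
```

Only the white-noise component survives the correction, which is exactly the residual behavior a well-posed PBM is supposed to exhibit.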

  4. A physics based method for combining multiple anatomy models with application to medical simulation.

    Science.gov (United States)

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
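A minimal mass-spring relaxation of the kind used to keep component shapes and spatial relationships valid might look like this; the geometry, stiffness and forcing are invented for illustration, and in the paper the external forces come from Gradient Vector Flow and distance-transform maps rather than a constant pull:

```python
import numpy as np

# Three nodes in a chain with unit-rest-length springs; node 0 is anchored
# (a fixed anatomy component) and node 2 is pulled by an external force.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
vel = np.zeros_like(pos)
springs = [(0, 1), (1, 2)]
rest_len = 1.0
k, damping, dt = 50.0, 0.9, 0.01
external = np.array([[0.0, 0.0], [0.0, 0.0], [0.5, 0.0]])

for _ in range(3000):
    force = external.copy()
    for i, j in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest_len) * d / length    # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    force[0] = 0.0                                  # node 0 stays anchored
    vel = damping * (vel + dt * force)              # damped explicit step, unit masses
    pos = pos + dt * vel

print(pos)   # each spring ends up stretched by external / k = 0.01
```

At equilibrium both springs carry the external load, so the chain deforms smoothly instead of tearing, which is the property that keeps combined components consistent.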

  5. Ionic polymer-metal composite torsional sensor: physics-based modeling and experimental validation

    Science.gov (United States)

    Aidi Sharif, Montassar; Lei, Hong; Khalid Al-Rubaiai, Mohammed; Tan, Xiaobo

    2018-07-01

    Ionic polymer-metal composites (IPMCs) have intrinsic sensing and actuation properties. Typical IPMC sensors are in the shape of beams and only respond to stimuli acting along beam-bending directions. Rod or tube-shaped IPMCs have been explored as omnidirectional bending actuators or sensors. In this paper, physics-based modeling is studied for a tubular IPMC sensor under pure torsional stimulus. The Poisson–Nernst–Planck model is used to describe the fundamental physics within the IPMC, where it is hypothesized that the anion concentration is coupled to the sum of shear strains induced by the torsional stimulus. Finite element simulation is conducted to solve for the torsional sensing response, where some of the key parameters are identified based on experimental measurements using an artificial neural network. Additional experimental results suggest that the proposed model is able to capture the torsional sensing dynamics for different amplitudes and rates of the torsional stimulus.

  6. Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm

    Science.gov (United States)

    Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara

    2014-01-01

    We report progress in the development of a physics based model for cryogenic chilldown and loading. The chilldown and loading is modeled as a fully separated non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution closely follows the nearly-implicit and semi-implicit algorithms developed for autonomous control of thermal-hydraulic systems by the Idaho National Laboratory. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. The nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.

  7. Comparison of physically based constitutive models characterizing armor steel over wide temperature and strain rate ranges

    International Nuclear Information System (INIS)

    Xu, Zejian; Huang, Fenglei

    2012-01-01

    Both descriptive and predictive capabilities of five physically based constitutive models (PB, NNL, ZA, VA, and RK) are investigated and compared systematically, in characterizing plastic behavior of the 603 steel at temperatures ranging from 288 to 873 K and strain rates ranging from 0.001 to 4500 s⁻¹. Determination of the constitutive parameters is introduced in detail for each model. The validity of the established models is checked by strain rate jump tests performed under different loading conditions. The results show that the RK and NNL models have better performance in the description of material behavior, especially the work-hardening effect, while the PB and VA models predict better. The inconsistency that is observed between the capabilities of description and prediction of the models indicates the existence of a minimum number of required fitting data, reflecting the degree of a model's requirement for basic data in parameter calibration. It is also found that the description capability of a model depends to a large extent on both its form and the number of its constitutive parameters, while the precision of prediction relies largely on the performance of description. In the selection of constitutive models, the experimental data and the constitutive models should be considered jointly to obtain a better efficiency in material behavior characterization

  8. A system-level modelling perspective of the KwaZulu-Natal Bight ...

    African Journals Online (AJOL)

    Requirements to take the hydrodynamic, biogeochemical and first ecosystem modelling efforts towards a meaningful predictive capability are discussed. The importance of adopting a system-level view of the bight and its connected systems for realistic exploration of global change scenarios is highlighted.

  9. Simple physics-based models of compensatory plant water uptake: concepts and eco-hydrological consequences

    Directory of Open Access Journals (Sweden)

    N. J. Jarvis

    2011-11-01

    Many land surface schemes and simulation models of plant growth designed for practical use employ simple empirical sub-models of root water uptake that cannot adequately reflect the critical role water uptake from sparsely rooted deep subsoil plays in meeting atmospheric transpiration demand in water-limited environments, especially in the presence of shallow groundwater. A failure to account for this so-called "compensatory" water uptake may have serious consequences for both local and global modeling of water and energy fluxes, carbon balances and climate. Some purely empirical compensatory root water uptake models have been proposed, but they are of limited use in global modeling exercises since their parameters cannot be related to measurable soil and vegetation properties. A parsimonious physics-based model of uptake compensation has been developed that requires no more parameters than empirical approaches. This model is described and some aspects of its behavior are illustrated with the help of example simulations. These analyses demonstrate that hydraulic lift can be considered as an extreme form of compensation and that the degree of compensation is principally a function of soil capillarity and the ratio of total effective root length to potential transpiration. Thus, uptake compensation increases as root to leaf area ratios increase, since potential transpiration depends on leaf area. Results of "scenario" simulations for two case studies, one at the local scale (riparian vegetation growing above shallow water tables in seasonally dry or arid climates) and one at a global scale (water balances across an aridity gradient in the continental USA), are presented to illustrate biases in model predictions that arise when water uptake compensation is neglected. In the first case, it is shown that only a compensated model can match the strong relationships between water table depth and leaf area and transpiration observed in riparian forest

  10. A physically based constitutive model for a V-4Cr-4Ti alloy

    International Nuclear Information System (INIS)

    Donahue, E.G.; Odette, G.R.; Lucas, G.E.

    2000-01-01

    A constitutive model for low-to-intermediate temperatures, strains, and strain rates is developed for the program heat of V-4Cr-4Ti. The basic form of the model is derived from more general dislocation-based models of yield stress and strain hardening. The physically based forms are fit to a database derived from tensile tests carried out over a wide range of temperatures and strain rates. Yield and post-yield strain-hardening contributions to the flow stress are additive. The yield stress has both thermally activated and athermal components. The former is described by a two-mechanism activated dislocation slip model, with contributions that appear to arise from both lattice friction (at lower temperatures) and dislocation pinning by interstitial impurities (at higher temperatures). The yield stress data can be correlated using a strain rate-compensated temperature. The model uses a temperature-weighted average of the two mechanisms. Post-yield strain hardening was found to be approximately athermal. Strain hardening is fit to a two-component modified Voce-type saturating flow stress model. The constitutive model is also used to determine the flow stability limits as estimates of uniform tensile strains. The relatively compact, but mechanism-based, semi-empirical model has a number of both fundamental and practical advantages that are briefly outlined.
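The additive structure described in this record (yield stress plus saturating, Voce-type post-yield hardening) can be sketched in code. The functional form below is the generic two-component Voce law; all parameter values are illustrative assumptions, not the fitted V-4Cr-4Ti constants from the paper's database.

```python
import math

def voce_hardening(eps_p, components):
    """Two-component modified Voce-type saturating strain hardening:
    each component rises from zero toward a saturation stress s_sat
    with a characteristic strain e_c."""
    return sum(s_sat * (1.0 - math.exp(-eps_p / e_c))
               for s_sat, e_c in components)

def flow_stress(eps_p, sigma_y, components):
    """Additive flow stress: yield stress (thermal + athermal parts,
    folded into sigma_y here) plus athermal post-yield hardening."""
    return sigma_y + voce_hardening(eps_p, components)

# Illustrative numbers only (stresses in MPa, strains dimensionless).
components = [(120.0, 0.02), (80.0, 0.15)]
print(flow_stress(0.00, 350.0, components))  # yield: 350.0
print(flow_stress(0.10, 350.0, components))  # hardened flow stress
```

Because each Voce component saturates, the hardening rate vanishes at large strain, which is what makes the flow-stability (uniform-strain) estimate mentioned above well defined.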

  11. Landslide susceptibility evaluation on agricultural terraces of Douro Valley (Portugal), using physically based mathematical models.

    Science.gov (United States)

    Faria, Ana; Bateira, Carlos; Soares, Laura; Fernandes, Joana; Gonçalves, José; Marques, Fernando

    2016-04-01

    This work focuses on the evaluation of landslide susceptibility on the agricultural terraces of the Douro Region, supported by dry stone walls and earth embankments, using two physically based models. The applied models, SHALSTAB (Montgomery et al., 1994; Dietrich et al., 1995) and SINMAP (Pack et al., 2005), combine an infinite slope stability model with a steady state hydrological model, and both use the following geotechnical parameters: cohesion, friction angle, specific weight and soil thickness. The definition of the contributing areas differs between the two models. The D∞ methodology used by the SINMAP model suggests a great influence of the terrace morphology, producing a much more diffuse internal flow. The MD8 method used in SHALSTAB promotes a greater degree of flow concentration, representing internal flow along preferential runoff paths as the areas more susceptible to saturation processes. Model validation is made through the contingency matrix method (Fawcett, 2006; Raia et al., 2014) and implies confrontation with the inventory of past landslides. The True Positive Rate shows that SHALSTAB classifies 77% of the landslides in high susceptibility areas, while SINMAP reaches 90%. SINMAP has a False Positive Rate (the percentage of area classified as unstable where no landslides occurred) of 83%, while SHALSTAB has 67%. The reliability (the fraction of the total area correctly classified) of SHALSTAB is better (33% against 18% for SINMAP). Regarding precision (the ratio of the correctly classified slipped area over the whole area classified as unstable), SHALSTAB has better results (0.00298 against 0.00283 for SINMAP). The TPR/FPR index was also computed, with better results obtained by SHALSTAB (1.14 against 1.09 for SINMAP). SHALSTAB thus shows a better performance in identifying the areas most prone to instability processes. One of the reasons for the difference of
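The skill scores quoted in this record all derive from a standard contingency matrix. A minimal sketch, using hypothetical cell counts (not the Douro inventory) chosen only to illustrate the calculation:

```python
def contingency_metrics(tp, fp, fn, tn):
    """Skill scores from a landslide contingency matrix (cf. Fawcett, 2006):
    tp = cells classified unstable with observed landslides,
    fp = classified unstable without landslides,
    fn = classified stable with landslides,
    tn = classified stable without landslides."""
    tpr = tp / (tp + fn)              # fraction of landslides captured
    fpr = fp / (fp + tn)              # false alarms over non-landslide area
    reliability = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    return {"TPR": tpr, "FPR": fpr, "reliability": reliability,
            "precision": precision, "TPR/FPR": tpr / fpr}

# Hypothetical counts, for illustration only.
m = contingency_metrics(tp=77, fp=6700, fn=23, tn=3300)
print(m["TPR"], m["FPR"], m["TPR/FPR"])
```

The TPR/FPR ratio used above rewards models that capture landslides (high TPR) without flagging most of the basin as unstable (low FPR).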

  12. System-level modeling for economic evaluation of geological CO2 storage in gas reservoirs

    International Nuclear Information System (INIS)

    Zhang, Yingqi; Oldenburg, Curtis M.; Finsterle, Stefan; Bodvarsson, Gudmundur S.

    2007-01-01

    One way to reduce the effects of anthropogenic greenhouse gases on climate is to inject carbon dioxide (CO2) from industrial sources into deep geological formations such as brine aquifers or depleted oil or gas reservoirs. Research is being conducted to improve understanding of factors affecting particular aspects of geological CO2 storage (such as storage performance, storage capacity, and health, safety and environmental (HSE) issues) as well as to lower the cost of CO2 capture and related processes. However, there has been less emphasis to date on system-level analyses of geological CO2 storage that consider geological, economic, and environmental issues by linking detailed process models to representations of engineering components and associated economic models. The objective of this study is to develop a system-level model for geological CO2 storage, including CO2 capture and separation, compression, pipeline transportation to the storage site, and CO2 injection. Within our system model we are incorporating detailed reservoir simulations of CO2 injection into a gas reservoir and related enhanced production of methane. Potential leakage and associated environmental impacts are also considered. The platform for the system-level model is GoldSim [GoldSim User's Guide. GoldSim Technology Group; 2006, http://www.goldsim.com]. The application of the system model focuses on evaluating the feasibility of carbon sequestration with enhanced gas recovery (CSEGR) in the Rio Vista region of California. The reservoir simulations are performed using a special module of the TOUGH2 simulator, EOS7C, for multicomponent gas mixtures of methane and CO2. Using a system-level modeling approach, the economic benefits of enhanced gas recovery can be directly weighed against the costs and benefits of CO2 injection

  13. A Physically Based Distributed Hydrologic Model with a Non-Conventional Terrain Analysis

    Science.gov (United States)

    Rulli, M.; Menduni, G.; Rosso, R.

    2003-12-01

    A physically based distributed hydrological model is presented. Starting from a contour-based terrain analysis, the model uses a non-conventional discretization of the terrain. From the maximum-slope lines, obtained using the principles of minimum distance and orthogonality, the model derives a structure of stream tubes. The model automatically identifies the morphological features of the terrain, e.g. peaks and saddles, and handles them consistently with the stream flow. Using this discretization, the model divides the elements in which water flows into two classes: cells, which are mixtilinear polygons where overland flow is modelled as sheet flow, and channels, obtained from the interception of two or more stream tubes; wherever surface runoff occurs, it is channelised. The permanent drainage paths can be calculated using one of the most common methods: threshold area, variable threshold area, or curvature. Subsurface flow is modelled using the simplified bucket model. The model considers three types of overland flow, depending on how it is produced: infiltration excess, saturation of the superficial soil layer, and exfiltration of subsurface flow from upstream. Surface and subsurface flow across an element are routed according to the one-dimensional kinematic wave equation. The model also considers the spatial variability of channel geometry with the flow: channels have a rectangular section, with the length of the base decreasing with distance from the outlet and depending on a power of the flow. The model was tested on the Rio Gallina and Missiaga catchments, and the results showed good model performance.
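The kinematic wave routing mentioned in this record can be sketched with a simple explicit upwind step; the rating law Q = alpha * A**m and all numbers below are generic illustrative choices, not the model's calibrated values.

```python
def kinematic_wave_step(A, dt, dx, alpha, m, q_lat=0.0):
    """One explicit upwind step of the 1D kinematic wave
    dA/dt + dQ/dx = q_lat, with the rating curve Q = alpha * A**m.
    A is the list of flow cross-section areas along the element;
    stability requires the CFL condition c*dt/dx <= 1, where
    c = alpha*m*A**(m-1) is the kinematic celerity."""
    Q = [alpha * a ** m for a in A]
    new = [A[0]]  # upstream boundary held fixed (illustrative choice)
    for i in range(1, len(A)):
        new.append(A[i] - dt / dx * (Q[i] - Q[i - 1]) + dt * q_lat)
    return new

# Route a steady upstream inflow down a 10-node element.
A = [0.0] * 10
A[0] = 1.0
for _ in range(20):
    A = kinematic_wave_step(A, dt=1.0, dx=10.0, alpha=1.0, m=5.0 / 3.0)
print(A)  # profile decreasing downstream as the wave advances
```

The exponent m = 5/3 corresponds to a wide-channel Manning rating, a common choice for both the sheet-flow cells and the rectangular channels described above.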

  14. Tidal Simulations of an Incised-Valley Fluvial System with a Physics-Based Geologic Model

    Science.gov (United States)

    Ghayour, K.; Sun, T.

    2012-12-01

    Physics-based geologic modeling approaches use fluid flow in conjunction with sediment transport and deposition models to devise evolutionary geologic models that focus on underlying physical processes and attempt to resolve them at pertinent spatial and temporal scales. Physics-based models are particularly useful when the evolution of a depositional system is driven by the interplay of autogenic processes and their response to allogenic controls. This interplay can potentially create complex reservoir architectures with high permeability sedimentary bodies bounded by a hierarchy of shales that can effectively impede flow in the subsurface. The complex stratigraphy of tide-influenced fluvial systems is an example of such co-existing and interacting environments of deposition. The focus of this talk is a novel formulation of boundary conditions for hydrodynamics-driven models of sedimentary systems. In tidal simulations, a time-accurate boundary treatment is essential for proper imposition of tidal forcing and fluvial inlet conditions where the flow may be reversed at times within a tidal cycle. As such, the boundary treatment at the inlet has to accommodate a smooth transition from inflow to outflow and vice versa without creating numerical artifacts. Our numerical experiments showed that boundary condition treatments based on a local (frozen) one-dimensional approach along the boundary normal, which does not account for the variation of flow quantities in the tangential direction, often lead to unsatisfactory results corrupted by numerical artifacts. In this talk, we propose a new boundary treatment that retains all spatial and temporal terms in the model and as such is capable of accounting for nonlinearities and sharp variations of model variables near boundaries. The proposed approach borrows heavily from the idea set forth by J. Sesterhenn for compressible Navier-Stokes equations. The methodology is successfully applied to a tide-influenced incised

  15. On the effects of adaptive reservoir operating rules in hydrological physically-based models

    Science.gov (United States)

    Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo

    2017-04-01

    Recent years have seen a significant increase of the human influence on natural systems at both the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterizing the real system dynamics and anticipating future potential changes to hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with a high level of detail and high spatiotemporal resolution. Yet, they lack sophistication in the behavioral component, and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent the peculiar conditions which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, or dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable to explore the water system response to changing climate and socio-economic contexts, because they assume business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoir operating rules into physically-based hydrological models contributes to the proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We first perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling

  16. Green roof rainfall-runoff modelling: is the comparison between conceptual and physically based approaches relevant?

    Science.gov (United States)

    Versini, Pierre-Antoine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    Green roofs are commonly considered efficient tools to mitigate urban runoff, as they can store precipitation and consequently provide retention and detention performance. Designed as a compromise between water holding capacity, weight and hydraulic conductivity, their substrate is usually an artificial medium that differs significantly from a traditional soil. Many models have been developed to assess the hydrological performance of green roofs. Classified into two categories (conceptual and physically based), they are usually applied to reproduce the discharge of a particular monitored green roof considered as homogeneous. Although the resulting simulations can be satisfactory, the question of the robustness and consistency of the calibrated parameters is often not addressed. Here, a modeling framework has been developed to assess the efficiency and the robustness of both modelling approaches (conceptual and physically based) in reproducing green roof hydrological behaviour. The SWMM and VS2DT models have been used for this purpose. This work also benefits from an experimental setup in which several green roofs, differentiated by their substrate thickness and vegetation cover, are monitored. Based on the data collected for several rainfall events, it has been studied how the calibrated parameters are effectively linked to their physical properties and how they can vary from one green roof configuration to another. Although both models correctly reproduce the observed discharges in most cases, their calibrated parameters exhibit a high inconsistency. For a given green roof configuration, these parameters can vary significantly from one rainfall event to another, even though they are supposed to be linked to the green roof characteristics (roughness or residual moisture content, for instance). They can also differ from one green roof configuration to another although the implemented substrate is the same. Finally, it appears very difficult to find any

  17. Using large hydrological datasets to create a robust, physically based, spatially distributed model for Great Britain

    Science.gov (United States)

    Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley

    2014-05-01

    The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships amenable to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration. SHETRAN has been updated to accept gridded rainfall

  18. Physics-based electromechanical model of IPMC considering various underlying currents

    Science.gov (United States)

    Pugal, D.; Kim, K. J.; Palmre, V.; Leang, K. K.; Aabloo, A.

    2012-04-01

    Experiments indicate that the electrodes affect the charge dynamics, and therefore the actuation, of ionic polymer-metal composite (IPMC) via three different types of currents: electric potential induced ionic current, leakage current, and, if a voltage higher than approximately 2 V is applied to a typical 200 μm thick IPMC, electrochemical current. The ionic current, via charge accumulation near the electrodes, is the direct cause of the osmotic and electrostatic stresses in the polymer and therefore plays the major role in the actuation of IPMC. However, the leakage and electrochemical currents (electrolysis, in the case of water-based IPMCs) do not affect the actuation dynamics as directly, but cause potential gradients on the electrodes; these in turn affect the ionic current. A physics-based finite element (FE) model was developed to incorporate the effect of the electrodes and the three different types of currents in the actuation calculations. The Poisson-Nernst-Planck system of equations is used in the model to describe the ionic current, and the Butler-Volmer relation is used to describe the electrolysis current for different applied voltages and IPMC thicknesses. To validate the model, the calculated tip deflection, applied net current, and potential drop for various IPMC thicknesses and applied voltages are compared to experimental data.
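The Butler-Volmer relation used in this record for the electrolysis current can be written as a one-line function of overpotential. The transfer coefficients and exchange current density below are generic placeholders, not the fitted IPMC electrode parameters.

```python
import math

F = 96485.33  # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer(eta, i0, alpha_a=0.5, alpha_c=0.5, T=293.15):
    """Butler-Volmer electrode current density as a function of
    overpotential eta (V): anodic branch minus cathodic branch.
    i0 is the exchange current density (same units as the result)."""
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

print(butler_volmer(0.00, i0=1.0))  # zero net current at equilibrium
print(butler_volmer(0.05, i0=1.0))  # anodic branch dominates
```

At small overpotential the expression is nearly linear in eta; at larger applied voltages the exponential growth of the electrolysis current is what couples back into the electrode potential gradients described above.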

  19. Steering disturbance rejection using a physics-based neuromusculoskeletal driver model

    Science.gov (United States)

    Mehrabi, Naser; Sharif Razavian, Reza; McPhee, John

    2015-10-01

    The aim of this work is to develop a comprehensive yet practical driver model to be used in studying driver-vehicle interactions. Drivers interact with their vehicle and the road through the steering wheel. This interaction forms a closed-loop coupled human-machine system, which influences the driver's steering feel and control performance. A hierarchical approach is proposed here to capture the complexity of the driver's neuromuscular dynamics and the central nervous system in the coordination of the driver's upper extremity activities, especially in the presence of external disturbance. The proposed motor control framework has three layers: the first (path planning) plans a desired vehicle trajectory and the steering angles required to follow it; the second (the musculoskeletal controller) actuates the musculoskeletal arm to rotate the steering wheel accordingly; and the final layer ensures the precision control and disturbance rejection of the motor control units. The physics-based driver model presented here can also provide insights into vehicle control in relaxed and tensed driving conditions, which are simulated by adjusting driver model parameters such as cognition delay and muscle co-contraction dynamics.

  20. A system-level multiprocessor system-on-chip modeling framework

    DEFF Research Database (Denmark)

    Virk, Kashif Munir; Madsen, Jan

    2004-01-01

    We present a system-level modeling framework to model system-on-chips (SoC) consisting of heterogeneous multiprocessors and network-on-chip communication structures in order to enable the developers of today's SoC designs to take advantage of the flexibility and scalability of network-on-chip and … SoC design. We show how a hand-held multimedia terminal, consisting of JPEG, MP3 and GSM applications, can be modeled as a multiprocessor SoC in our framework.

  1. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users’ battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates … and power levels. The first comprehensive Discontinuous Reception (DRX) power consumption measurements are reported together with cell bandwidth, screen and CPU power consumption. The transmit power level and to some extent the receive data rate constitute the overall power consumption, while DRX proves

  2. Robust Building Energy Load Forecasting Using Physically-Based Kernel Models

    Directory of Open Access Journals (Sweden)

    Anand Krishnan Prakash

    2018-04-01

    Full Text Available Robust and accurate building energy load forecasting is important for helping building managers and utilities to plan, budget, and strategize energy resources in advance. With the recent prevalent adoption of smart meters in buildings, a significant amount of building energy consumption data has become available. Many studies have developed physics-based white box models and data-driven black box models to predict building energy consumption; however, they require extensive prior knowledge about the building system, need a large set of training data, or lack robustness to different forecasting scenarios. In this paper, we introduce a new building energy forecasting method based on Gaussian Process Regression (GPR) that incorporates physical insights about load data characteristics to improve accuracy while reducing training requirements. GPR is a non-parametric regression method that models the data as a joint Gaussian distribution with mean and covariance functions and forecasts using Bayesian updating. We model the covariance function of the GPR to reflect the data patterns in different forecasting horizon scenarios, as prior knowledge. Our method takes advantage of the modeling flexibility and computational efficiency of the GPR while benefiting from the physical insights to further improve the training efficiency and accuracy. We evaluate our method with three field datasets from two university campuses (Carnegie Mellon University and Stanford University) for both short- and long-term load forecasting. The results show that our method performs more accurately, especially when the training dataset is small, compared to other state-of-the-art forecasting models (up to 2.95 times smaller prediction error).
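One way to encode load-data patterns in a GPR covariance function, as this record describes, is a product of a squared-exponential term (slow trend) and a periodic term (daily cycle). This particular kernel form and its hyperparameters are illustrative assumptions, not the authors' fitted model.

```python
import math

def se_kernel(t1, t2, length=48.0):
    """Squared-exponential term: slowly varying load trend (t in hours)."""
    return math.exp(-((t1 - t2) ** 2) / (2.0 * length ** 2))

def periodic_kernel(t1, t2, period=24.0, length=0.5):
    """Periodic term: daily repetition of the load profile."""
    s = math.sin(math.pi * (t1 - t2) / period)
    return math.exp(-2.0 * s * s / length ** 2)

def load_covariance(t1, t2):
    """Composite prior covariance: loads are correlated when lags are
    small OR close to a multiple of 24 h."""
    return se_kernel(t1, t2) * periodic_kernel(t1, t2)

print(load_covariance(0.0, 0.0))   # 1.0 by construction
print(load_covariance(0.0, 24.0))  # high: same hour, next day
print(load_covariance(0.0, 12.0))  # low: opposite phase of the day
```

A kernel like this injects the "same hour yesterday is informative" prior directly into the GP, which is one way a small training set can still yield accurate forecasts.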

  3. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    Science.gov (United States)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  4. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
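The abstract does not give the paper's exact trajectory-adjustment rule, but a standard quantity computed from the linear inverted pendulum model for this kind of online balance adjustment is the instantaneous capture point; the sketch below illustrates it under that assumption, with made-up numbers.

```python
import math

def capture_point(x_com, v_com, z_com, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum:
    the ground location where the support point must be placed to
    bring the pendulum to rest. x_com, v_com are the horizontal CoM
    position and velocity; z_com is the (constant) CoM height."""
    omega = math.sqrt(g / z_com)  # natural frequency of the pendulum
    return x_com + v_com / omega

# A character pushed forward at 0.5 m/s with its CoM at 0.9 m must
# step roughly 15 cm ahead of the CoM to recover balance.
print(capture_point(0.0, 0.5, 0.9))
```

Recomputing this target every frame from the simulated CoM state is one simple way an inverted-pendulum model can adapt a motion-capture trajectory to pushes from a dynamic environment.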

  5. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
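The last step described above, combining per-rupture peak intensity measures with rupture probabilities into a hazard curve, can be sketched as follows; the independence assumption and all numerical values are illustrative, not CyberShake's actual rupture set.

```python
def hazard_curve(ruptures, im_levels):
    """Annual exceedance probability at each intensity level, assuming
    independent ruptures: P(IM > x) = 1 - prod(1 - p_i), taken over
    ruptures whose simulated peak intensity exceeds x.
    ruptures: list of (annual_probability, peak_intensity) pairs."""
    curve = []
    for x in im_levels:
        p_no_exceed = 1.0
        for p_annual, im in ruptures:
            if im > x:
                p_no_exceed *= 1.0 - p_annual
        curve.append(1.0 - p_no_exceed)
    return curve

# Hypothetical (annual probability, peak intensity) pairs.
ruptures = [(0.01, 0.35), (0.002, 0.60), (0.05, 0.10)]
print(hazard_curve(ruptures, [0.05, 0.2, 0.5]))
```

In a site-based calculation like CyberShake's, each rupture variation contributes its own simulated intensity measure, so the ground-motion variability enters through the spread of `im` values rather than through an empirical GMPE sigma.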

  6. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    Science.gov (United States)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
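The heart of the up-scaling technique described above is a stage-volume relationship built inside each coarse cell from the full-resolution DEM. A minimal sketch, with hypothetical subcell elevations:

```python
def stage_volume(z_cells, cell_area):
    """Stage-volume relation for a coarse cell aggregated from
    full-resolution DEM elevations: the water volume stored below
    free-surface elevation h retains the subgrid topographic detail.
    Returns a function V(h)."""
    def volume(h):
        return sum(max(0.0, h - z) * cell_area for z in z_cells)
    return volume

# Four 30 m x 30 m subcells aggregated into one coarse cell
# (elevations in metres; values hypothetical).
V = stage_volume([10.0, 10.5, 11.0, 12.0], cell_area=900.0)
print(V(10.0))  # 0.0: water surface at the lowest subcell
print(V(11.0))  # partial flooding of the coarse cell
```

Because V(h) is piecewise linear in the subcell elevations rather than a single flat bed, the coarse solver floods the cell gradually, which is what preserves much of the fine-DEM accuracy at coarse-grid cost.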

  7. On the use of a physically-based baseflow timescale in land surface models.

    Science.gov (United States)

    Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.

    2017-12-01

    Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for modelling accurate large-scale water balances in land surface models (LSMs). First-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of available calibration data at global scale. Hydraulic groundwater theory provides an analytical framework to relate the baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale, as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable and physically-based τ in the ORCHIDEE surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the GLHYMPS high-resolution datasets, whereas flow length is derived from an estimation of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant topographic-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions in the analytical method are valid.
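The long-time solution of the linearized Boussinesq equation gives a timescale proportional to L²/D, with D = T/f the hydraulic diffusivity. The 4/π² prefactor used below corresponds to one common linearization geometry (no-flow divide, fixed-head stream) and may differ from the study's exact formulation; the numbers are illustrative.

```python
import math

def baseflow_timescale(L, T, f):
    """Slowest-mode decay timescale of the linearized Boussinesq
    equation for an aquifer of flow length L (stream to divide),
    transmissivity T and drainable porosity f:
    tau = 4 f L^2 / (pi^2 T), i.e. proportional to L^2 / D."""
    return 4.0 * f * L ** 2 / (math.pi ** 2 * T)

def baseflow(q0, t, tau):
    """Linear-reservoir recession driven by that timescale:
    Q(t) = Q0 * exp(-t / tau)."""
    return q0 * math.exp(-t / tau)

# L = 500 m, T = 50 m^2/day, f = 0.05  ->  tau in days.
tau = baseflow_timescale(500.0, 50.0, 0.05)
print(tau)                      # characteristic timescale, ~100 days
print(baseflow(1.0, tau, tau))  # ~1/e of the initial discharge
```

Mapping L from drainage density and T, f from hydrogeological datasets, as the study does, turns this single-parameter linear store into a spatially variable, physically based parametrization.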

  8. Catchment-scale Validation of a Physically-based, Post-fire Runoff and Erosion Model

    Science.gov (United States)

    Quinn, D.; Brooks, E. S.; Robichaud, P. R.; Dobre, M.; Brown, R. E.; Wagenbrenner, J.

    2017-12-01

    The cascading consequences of fire-induced ecological changes have profound impacts on both natural and managed forest ecosystems. Forest managers tasked with implementing post-fire mitigation strategies need robust tools to evaluate the effectiveness of their decisions, particularly those affecting hydrological recovery. Various hillslope-scale interfaces of the physically-based Water Erosion Prediction Project (WEPP) model have been successfully validated for this purpose using fire-affected plot experiments; however, these interfaces are explicitly designed to simulate single hillslopes. Spatially-distributed, catchment-scale WEPP interfaces have been developed over the past decade, but none have been validated for post-fire simulations, posing a barrier to adoption by forest managers. In this validation study, we compare WEPP simulations with pre- and post-fire hydrological records for three forested catchments (W. Willow, N. Thomas, and S. Thomas) that burned in the 2011 Wallow Fire in northeastern Arizona, USA. Simulations were conducted using two approaches: the first used automatically created inputs from an online, spatial, post-fire WEPP interface, and the second used manually created inputs incorporating the spatial variability of fire effects observed in the field. Both approaches were compared to five years of observed post-fire sediment and flow data to assess goodness of fit.

  9. Evaluation of physics-based numerical modelling for diverse design architecture of perovskite solar cells

    Science.gov (United States)

    Mishra, A. K.; Catalan, Jorge; Camacho, Diana; Martinez, Miguel; Hodges, D.

    2017-08-01

    Solution-processed organic-inorganic metal halide perovskite-based solar cells are emerging as a new cost-effective photovoltaic technology. In the context of increasing the power conversion efficiency (PCE) and sustainability of perovskite solar cell (PSC) devices, we comprehensively analyzed physics-based numerical models for doped and un-doped PSC devices. Our analysis emphasized the role of the different charge carrier layers from the viewpoint of interfacial adhesion and its influence on the charge extraction rate and the charge recombination mechanism. Morphological and charge transport properties of the perovskite thin film as a function of device architecture are also considered to investigate the photovoltaic properties of PSCs. We observed that the photocurrent is dominantly influenced by the interfacial recombination process and that the photovoltage has a functional relationship with the defect density of the perovskite absorption layer. A novel contour mapping method to understand the characteristics of current density-voltage (J-V) curves for each device as a function of perovskite layer thickness provides important insight into the distribution spectrum of photovoltaic properties. The functional relationships of device efficiency and fill factor with absorption layer thickness are also discussed.

  10. Physical-based analytical model of flexible a-IGZO TFTs accounting for both charge injection and transport

    NARCIS (Netherlands)

    Ghittorelli, M.; Torricelli, F.; Steen, J.L. van der; Garripoli, C.; Tripathi, A.; Gelinck, G.H.; Cantatore, E.; Kovacs-Vajna, Z.M.

    2016-01-01

    Here we show a new physical-based analytical model of a-IGZO TFTs. TFTs scaling from L=200 μm to L=15 μm and fabricated on plastic foil are accurately reproduced with a unique set of parameters. The model is used to design a zero-VGS inverter. It is a valuable tool for circuit design and technology

  11. Physical-based analytical model of flexible a-IGZO TFTs accounting for both charge injection and transport

    NARCIS (Netherlands)

    Ghittorelli, M.; Torricelli, F.; van der Steen, J.-L.; Garripoli, C.; Tripathi, A.K.; Gelinck, G.; Cantatore, E.; Kovács-Vajna, Z.M.

    2015-01-01

    Here we show a new physical-based analytical model of a-IGZO TFTs. TFTs scaling from L=200 μm to L=15 μm and fabricated on plastic foil are accurately reproduced with a unique set of parameters. The model is used to design a zero-VGS inverter. It is a valuable tool for circuit design and technology

  12. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    Science.gov (United States)

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  13. Erosion prediction for alpine slopes: a symbiosis of remote sensing and a physical based erosion model

    Science.gov (United States)

    Kaiser, Andreas; Neugirg, Fabian; Haas, Florian; Schindewolf, Marcus; Schmidt, Jürgen

    2014-05-01

    As rainfall simulations represent an established tool for quantifying soil detachment on cultivated areas in lowlands and low mountain ranges, they are rarely used on steep slopes in high mountain ranges. Yet this terrain comprises highly productive sediment sources with intense morphodynamics. A quantitative differentiation between gravitationally and fluvially relocated material poses a major challenge in understanding erosion on steep slopes: does solifluction as a result of melting in spring, or do heavy convective rainstorms during summer, cause the essential erosion processes? This paper aims to answer this question by separating gravitational mass movement (solifluction, landslides, mudflow and needle ice) from runoff-induced detachment. First, simulated rainstorm experiments are used to assess sediment production on bare soil on a strongly inclined plot (1 m², 42°) in the northern limestone Alps. Throughout the precipitation experiments, runoff and the related suspended sediment were quantified. In order to virtually enlarge the slope length to around 20 m, a runoff-feeding device was additionally implemented. Soil physical parameters were derived from on-site sampling. The generated data are introduced to the physically based, catchment-scale erosion model EROSION 3D to upscale from plot size to small-watershed conditions. Thus infiltration, runoff, detachment, transport and finally deposition can be predicted for single rainstorm events and storm sequences. Secondly, in order to separate gravitational mass movements from water erosion, a LiDAR- and structure-from-motion-based monitoring approach is carried out to produce high-resolution digital elevation models. A time series analysis of detachment and deposition between different points in time is implemented. Absolute volume losses are then compared to the sediment losses calculated by the erosion model, as the latter only generates data connected to water-induced hillslope erosion. 
This methodology will be applied in other watersheds
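The separation step, comparing DEM-of-difference volumes against the model's water-driven share, can be sketched as follows (hypothetical helper functions; the actual workflow, level of detection, and accounting may differ):

```python
import numpy as np

def dod_volumes(dem_t0, dem_t1, cell_area, lod=0.02):
    """Erosion and deposition volumes [m^3] from two DEMs; elevation
    changes below the level of detection `lod` [m] are ignored."""
    dz = np.asarray(dem_t1, dtype=float) - np.asarray(dem_t0, dtype=float)
    dz[np.abs(dz) < lod] = 0.0
    erosion = -dz[dz < 0.0].sum() * cell_area
    deposition = dz[dz > 0.0].sum() * cell_area
    return erosion, deposition

def gravitational_share(total_erosion, modelled_fluvial_erosion):
    """Remainder attributed to gravitational mass movement, since the
    erosion model only accounts for water-induced detachment."""
    return max(total_erosion - modelled_fluvial_erosion, 0.0)
```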

  14. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    International Nuclear Information System (INIS)

    Chen, Le; MacDonald, Erin

    2014-01-01

    Highlights: • We model the role of landowners in determining the success of wind projects. • A cost-of-energy (COE) model with realistic landowner remittances is developed. • These models are included in a system-level wind farm layout optimization. • Basic verification indicates the optimal COE is in-line with real-world data. • Land plots crucial to a project’s success can be identified with the approach. - Abstract: This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangle land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability
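The encoding of landowner participation as a binary string can be illustrated with a deliberately small sketch. All cost and energy numbers below are invented for the example and are not the NREL/LBNL/Windustry parameters:

```python
import itertools

ANNUAL_COST_PER_TURBINE = 120_000   # annualized capital + O&M [$/yr] (invented)
REMITTANCE_PER_PLOT = 8_000         # landowner fee per participating plot [$/yr]
BASE_ENERGY_PER_TURBINE = 6.0e6     # [kWh/yr] at the reference wind resource
TURBINES_PER_PLOT = 2
PLOT_WIND_FACTOR = [0.9, 1.1, 1.0, 0.8]   # relative wind resource per land plot

def cost_of_energy(bits):
    """COE [$/kWh] for one binary participation string."""
    if not any(bits):
        return float("inf")
    turbines = sum(bits) * TURBINES_PER_PLOT
    cost = turbines * ANNUAL_COST_PER_TURBINE + sum(bits) * REMITTANCE_PER_PLOT
    energy = sum(TURBINES_PER_PLOT * BASE_ENERGY_PER_TURBINE * w
                 for b, w in zip(bits, PLOT_WIND_FACTOR) if b)
    return cost / energy

def best_layout():
    """Exhaustive search over participation strings (fine for a few plots)."""
    return min(itertools.product([0, 1], repeat=len(PLOT_WIND_FACTOR)),
               key=cost_of_energy)
```

In this toy setting the search simply selects the windiest plot; with realistic wake, cabling and remittance models the trade-off becomes non-trivial.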

  15. System-level modeling and simulation of the cell culture microfluidic biochip ProCell

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan

    2010-01-01

    Microfluidic biochips offer a promising alternative to a conventional biochemical laboratory. There are two technologies for microfluidic biochips: droplet-based and flow-based. In this paper we are interested in flow-based microfluidic biochips, where the liquid flows continuously through pre-defined micro-channels using valves and pumps. We present an approach to the system-level modeling and simulation of a cell culture microfluidic biochip called ProCell, Programmable Cell Culture Chip. ProCell contains a cell culture chamber, which is envisioned to run 256 simultaneous experiments (viewed

  16. A Physically-based Model for Predicting Soil Moisture Dynamics in Wetlands

    Science.gov (United States)

    Kalin, L.; Rezaeianzadeh, M.; Hantush, M. M.

    2017-12-01

    Wetlands are promoted as green infrastructure because of their capacity to retain and filter water. In wetlands going through wetting/drying cycles, simulation of nutrient processes and biogeochemical reactions in both ponded and unsaturated wetland zones is needed for an improved understanding of wetland functioning for water quality improvement. The physically-based WetQual model can simulate the hydrology and the nutrient and sediment cycles in natural and constructed wetlands. WetQual can be used in continuously flooded environments or in wetlands going through wetting/drying cycles. Currently, WetQual relies on the 1-D Richards' equation (RE) to simulate soil moisture dynamics in unponded parts of the wetlands. This is unnecessarily complex because, as a lumped model, WetQual only requires average moisture contents. In this paper, we present a depth-averaged solution to the 1-D RE, called DARE, to simulate the average moisture content of the root zone and the layer below it in unsaturated parts of wetlands. DARE converts the PDE of the RE into ODEs and is thus computationally more efficient. This method takes into account plant uptake and groundwater table fluctuations, which are commonly overlooked in hydrologic models dealing with wetlands undergoing wetting and drying cycles. For verification purposes, DARE solutions were compared to the Hydrus-1D model, which solves the full RE, under both a gravity-drainage-only assumption and the full-term equations. Model verifications were carried out under various top boundary conditions: no ponding at all, ponding at some point, and no rain. Through hypothetical scenarios and actual atmospheric data, the utility of DARE was demonstrated. The gravity-drainage version of DARE compared well with Hydrus-1D under all of the assigned atmospheric boundary conditions of varying fluxes for all examined soil types (sandy loam, loam, sandy clay loam, and sand). 
The full-term version of DARE offers reasonable accuracy compared to the
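The gravity-drainage-only flavour of a depth-averaged Richards' equation reduces to a single ODE per layer. The sketch below is a minimal illustration with an assumed power-law conductivity closure and invented soil parameters, not the WetQual/DARE code:

```python
def k_unsat(theta, theta_r=0.05, theta_s=0.45, k_s=1e-6, n=3.5):
    """Unsaturated hydraulic conductivity [m/s]; power-law closure (assumption)."""
    s = min(max((theta - theta_r) / (theta_s - theta_r), 0.0), 1.0)
    return k_s * s ** n

def step(theta, infiltration, plant_uptake, dt=3600.0, z=0.5):
    """One explicit Euler step of  Z * dtheta/dt = infil - K(theta) - uptake
    for a root zone of thickness z [m]; all fluxes in [m/s]."""
    dtheta = (infiltration - k_unsat(theta) - plant_uptake) * dt / z
    return min(max(theta + dtheta, 0.05), 0.45)
```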

  17. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has greater mathematical rigor and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible in incorporating available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches on model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past

  18. Goal-directed behaviour and instrumental devaluation: a neural system-level computational model

    Directory of Open Access Journals (Sweden)

    Francesco Mannella

    2016-10-01

    Full Text Available Devaluation is the key experimental paradigm used to demonstrate the presence of instrumental behaviours guided by goals in mammals. We propose a neural system-level computational model to address the question of which brain mechanisms allow the current value of rewards to control instrumental actions. The model pivots on, and shows the computational soundness of, the hypothesis that the internal representation of instrumental manipulanda (e.g., levers) activates the representation of rewards (or 'action-outcomes', e.g. foods) while attributing to them a value which depends on the current internal state of the animal (e.g., satiation for some but not all foods). The model also proposes an initial hypothesis of the integrated system of key brain components supporting this process and allowing the recalled outcomes to bias action selection: (a) the sub-system formed by the basolateral amygdala and insular cortex, acquiring the manipulanda-outcome associations and attributing the current value to the outcomes; (b) the three basal ganglia-cortical loops, selecting respectively goals, associative sensory representations, and actions; (c) the cortico-cortical and striato-nigro-striatal neural pathways supporting the selection, and selection learning, of actions based on habits and goals. The model reproduces and integrates the results of different devaluation experiments carried out with control rats and rats with pre- and post-training lesions of the basolateral amygdala, the nucleus accumbens core, the prelimbic cortex, and the dorso-medial striatum. The results support the soundness of the hypotheses of the model and show its capacity to integrate, at the system level, the operations of the key brain structures underlying devaluation. Based on its hypotheses and predictions, the model also represents an operational framework to support the design and analysis of new experiments on the motivational aspects of goal-directed behaviour.

  19. Physically-Based Modelling of the Post-Fire Runoff Response of a Forest Catchment in Central Portugal

    NARCIS (Netherlands)

    Eck, Van Christel M.; Nunes, Joao P.; Vieira, Diana C.S.; Keesstra, Saskia; Keizer, Jan Jacob

    2016-01-01

    Forest fires are a recurrent phenomenon in Mediterranean forests, with impacts for human landscapes and communities, which must be understood before they can be managed. This study used the physically based Limburg Soil Erosion Model (LISEM) to simulate rainfall–runoff response, under soil water

  20. Physics-based modeling of live wildland fuel ignition experiments in the Forced Ignition and Flame Spread Test apparatus

    Science.gov (United States)

    C. Anand; B. Shotorban; S. Mahalingam; S. McAllister; D. R. Weise

    2017-01-01

    A computational study was performed to improve our understanding of the ignition of live fuel in the forced ignition and flame spread test apparatus, a setup where the impact of the heating mode is investigated by subjecting the fuel to forced convection and radiation. An improvement was first made in the physics-based model WFDS where the fuel is treated as fixed...

  1. Physically based multiscale-viscoplastic model for metals and steel alloys: Theory and computation

    Science.gov (United States)

    Abed, Farid H.

    The main requirement of large deformation problems such as high-speed machining, impact, and various primary metal-forming processes is to develop constitutive relations which are widely applicable and capable of accounting for complex paths of deformation. Achieving such goals for materials like metals and steel alloys involves a comprehensive study of their microstructures and experimental observations under different loading conditions. In general, metal structures display a strong rate- and temperature-dependence when deformed non-uniformly into the inelastic range. This effect has important implications for an increasing number of applications in structural and engineering mechanics. The mechanical behavior of these applications cannot be characterized by classical (rate-independent) continuum theories because they incorporate no 'material length scales'. It is therefore necessary to develop a rate-dependent (viscoplasticity) continuum theory bridging the gap between the classical continuum theories and the microstructure simulations. Physically based viscoplasticity models for different types of metals (body-centered cubic, face-centered cubic and hexagonal close-packed) and steel alloys are derived in this work for this purpose. We adopt a multi-scale, hierarchical, thermodynamically consistent framework to construct the material constitutive relations for the rate-dependent behavior. The concepts of thermal activation energy, dislocation interaction mechanisms and the role of dislocation dynamics in crystals are used in the derivation process, taking into consideration the contribution of the plastic-strain-driven evolution of dislocation density to the flow stress of polycrystalline metals. Material length scales are implicitly introduced into the governing equations through material rate-dependency (viscosity). The proposed framework is implemented into the well-known commercial finite element software ABAQUS. The finite element simulations of material
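A representative physically based viscoplastic flow-stress form of the kind derived in such work, in the spirit of a Zerilli-Armstrong relation for bcc metals, couples temperature and strain rate through a thermal-activation term. The coefficients below are illustrative only, not fitted to any alloy:

```python
import math

def flow_stress_bcc(strain, strain_rate, temp_k,
                    c0=50.0, c1=1000.0, c3=6.0e-3, c4=3.0e-4,
                    c5=300.0, n=0.25):
    """Flow stress [MPa] = athermal + thermally activated + strain hardening.
    The exp(-c3*T + c4*T*ln(rate)) term captures thermal activation of
    dislocation motion; strain_rate must be positive."""
    thermal = c1 * math.exp(-c3 * temp_k + c4 * temp_k * math.log(strain_rate))
    return c0 + thermal + c5 * strain ** n
```

The form reproduces the qualitative trends discussed above: stress falls with temperature and rises with strain rate and accumulated plastic strain.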

  2. Efficient Uplink Modeling for Dynamic System-Level Simulations of Cellular and Mobile Networks

    Directory of Open Access Journals (Sweden)

    Lobinger Andreas

    2010-01-01

    A novel theoretical framework for uplink simulations is proposed. It allows investigations which have to cover a very long (real) time span and which at the same time require a certain level of accuracy in terms of radio resource management, quality of service, and mobility. This is of particular importance for simulations of self-organizing networks. For this purpose, conventional system-level simulators are not suitable due to simulation speeds far slower than real time. Simpler, snapshot-based tools lack the aforementioned accuracy. The runtime improvements are achieved by deriving abstract theoretical models for the MAC layer behavior. The focus in this work is Long Term Evolution (LTE), and the most important uplink effects such as fluctuating interference, power control, power limitation, adaptive transmission bandwidth, and control channel limitations are considered. Limitations of the abstract models are discussed as well. Exemplary results are given at the end to demonstrate the capability of the derived framework.
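One of the uplink effects named here, power control with a power limitation, can be sketched with the LTE open-loop fractional power-control rule. This is a simplification (the standardized formula also includes bandwidth and closed-loop terms), and the fixed interference level is an assumption for the example:

```python
import math

def tx_power_dbm(path_loss_db, p0=-80.0, alpha=0.8, p_max=23.0):
    """UE transmit power [dBm]: P = min(Pmax, P0 + alpha * PL)."""
    return min(p_max, p0 + alpha * path_loss_db)

def sinr_db(path_loss_db, interference_dbm=-105.0, noise_dbm=-110.0):
    """Received SINR [dB] against a fixed interference-plus-noise floor."""
    rx_dbm = tx_power_dbm(path_loss_db) - path_loss_db
    lin = lambda d: 10.0 ** (d / 10.0)
    return 10.0 * math.log10(lin(rx_dbm) / (lin(interference_dbm) + lin(noise_dbm)))
```

Cell-edge users hit the 23 dBm cap, so their SINR degrades with path loss, which is exactly the power-limitation effect an abstract MAC model has to reproduce.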

  3. System-Level Model for OFDM WiMAX Transceiver in Radiation Environment

    International Nuclear Information System (INIS)

    Abdel Alim, O.; Elboghdadly, N.; Ashour, M.M.; Elaskary, A.M.

    2008-01-01

    WiMAX (Worldwide Interoperability for Microwave Access), an evolving standard for point-to-multipoint wireless networking, provides last-mile connections, replacing optical fiber network technology without the need to add more infrastructure within crowded areas. Optical fiber technology is seriously considered for communication and monitoring applications in space and around nuclear reactors. Space and nuclear environments are characterized, in particular, by the presence of ionizing radiation fields. Therefore the influence of radiation on such networks needs to be investigated. This paper has the objective of building a system-level model for a WiMAX OFDM (Orthogonal Frequency Division Multiplexing) based transceiver. Irradiation noise is modeled as an external effect added to the additive white Gaussian noise (AWGN). The results are then analyzed and discussed based on a qualitative performance evaluation using BER calculations for the radiation environment

  4. LISEM: a physically based model to simulate runoff and soil erosion in catchments: model structure

    NARCIS (Netherlands)

    Roo, de A.P.J.; Wesseling, C.G.; Cremers, N.H.D.T.; Verzandvoort, M.A.; Ritsema, C.J.; Oostindie, K.

    1996-01-01

    The Limburg Soil Erosion Model (LISEM) is described as a way of simulating hydrological and soil erosion processes during single rainfall events at the catchment scale. Sensitivity analysis of the model shows that the initial matric pressure potential, the hydraulic conductivity of the soil and

  5. Virtual Systems Pharmacology (ViSP) software for mechanistic system-level model simulations

    Directory of Open Access Journals (Sweden)

    Sergey eErmakov

    2014-10-01

    Multiple software programs are available for designing and running the large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. It is therefore desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user’s particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.

  6. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    Science.gov (United States)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  7. Linkage of a Physically Based Distributed Watershed Model and a Dynamic Plant Growth Model

    National Research Council Canada - National Science Library

    Johnson, Billy E; Coldren, Cade L

    2006-01-01

    The impact of hydrological alteration on vegetation and of vegetation on water quality can be greatly facilitated by linking existing water engines with general ecosystem models designed to make long...

  8. Sediment transport modelling in a distributed physically based hydrological catchment model

    Directory of Open Access Journals (Sweden)

    M. Konz

    2011-09-01

    Bedload sediment transport and erosion processes in channels are important components of water-induced natural hazards in alpine environments. A raster-based distributed hydrological model, TOPKAPI, has been further developed to support continuous simulations of river bed erosion and deposition processes. The hydrological model simulates all relevant components of the water cycle, and non-linear reservoir methods are applied for water fluxes in the soil, on the ground surface and in the channel. The sediment transport simulations are performed on a sub-grid level, which allows for a better discretization of the channel geometry, whereas water fluxes are calculated on the grid level in order to be CPU efficient. Several transport equations as well as the effects of an armour layer on the transport threshold discharge are considered. Flow resistance due to macro roughness is also considered. The advantage of this approach is the integrated simulation of the entire basin runoff response combined with hillslope-channel coupled erosion and transport simulation. The comparison with the modelling tool SETRAC demonstrates the reliability of the modelling concept. The devised technique is very fast and of comparable accuracy to the more specialised sediment transport model SETRAC.
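The abstract does not list the specific transport equations, so the classic Meyer-Peter and Mueller (1948) bedload formula stands in here as one of the "several transport equations" such a model can use; the armour-layer effect is represented only as a raised critical Shields stress:

```python
import math

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density, gravity

def shields_stress(depth, slope, d50):
    """Dimensionless bed shear stress for uniform flow."""
    tau = RHO_W * G * depth * slope
    return tau / ((RHO_S - RHO_W) * G * d50)

def mpm_bedload(depth, slope, d50, theta_c=0.047):
    """Volumetric bedload transport rate per unit width [m^2/s];
    zero below the threshold of motion (an armour layer raises theta_c)."""
    excess = shields_stress(depth, slope, d50) - theta_c
    if excess <= 0.0:
        return 0.0
    s = RHO_S / RHO_W
    return 8.0 * excess ** 1.5 * math.sqrt((s - 1.0) * G * d50 ** 3)
```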

  9. Linkage of a Physically Based Distributed Watershed Model and a Dynamic Plant Growth Model

    Science.gov (United States)

    2006-12-01

    i.e., Universal Soil Loss Equation (USLE) factors K, C, and P. The K, C, and P factors are empirical coefficients with the same conceptual...with general ecosystem models designed to make long-term projections of ecosystem dynamics. This development effort investigated the linkage of soil ... EDYS soil module

  10. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    Science.gov (United States)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. 
Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based
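The forecasting idea, a deterministic recovery model combined with stochastically sampled event durations, can be caricatured in a few lines. The functional forms and all numbers are assumptions for illustration, not the author's model:

```python
import math
import random

def erupted_volume(q0, pause_duration, t_end, tau=8 * 3600.0):
    """Volume [m^3] erupted by t_end: zero eruption rate during the pause,
    then exponential recovery toward the supply rate q0 [m^3/s]."""
    t = max(t_end - pause_duration, 0.0)
    return q0 * (t - tau * (1.0 - math.exp(-t / tau)))

def forecast(q0, past_durations, t_end, n=1000, seed=0):
    """Monte Carlo forecast: a distribution of cumulative erupted volume
    at t_end, with event durations resampled from past observations."""
    rng = random.Random(seed)
    return sorted(erupted_volume(q0, rng.choice(past_durations), t_end)
                  for _ in range(n))
```

Percentiles of the sorted output give the probabilistic forecast, e.g. a 5th to 95th percentile band of erupted volume a day ahead.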

  11. Female mating preferences determine system-level evolution in a gene network model.

    Science.gov (United States)

    Fierst, Janna L

    2013-06-01

    Environmental patterns of directional, stabilizing and fluctuating selection can influence the evolution of system-level properties like evolvability and mutational robustness. Intersexual selection produces strong phenotypic selection, and these dynamics may also affect the response to mutation and the potential for future adaptation. In order to assess the influence of mating preferences on these evolutionary properties, I modeled a male trait and female preference determined by separate gene regulatory networks. I studied three sexual selection scenarios: sexual conflict, a Gaussian model of the Fisher process described in Lande (Proc Natl Acad Sci 78(6):3721-3725, 1981) and a good genes model in which the male trait signalled his mutational condition. I measured the effects these mating preferences had on the potential for traits and preferences to evolve towards new states, and on the mutational robustness of both the phenotype and the individual's overall viability. All types of sexual selection increased male phenotypic robustness relative to a randomly mating population. The Fisher model also reduced male evolvability and mutational robustness for viability. Under good genes sexual selection, males evolved an increased mutational robustness for viability. Females choosing their mates is a scenario sufficient to create selective forces that impact genetic evolution and shape the evolutionary response to mutation and environmental selection. These dynamics will inevitably develop in any population where sexual selection is operating, and they affect the potential for future adaptation.

  12. Improving Simulations of Extreme Flows by Coupling a Physically-based Hydrologic Model with a Machine Learning Model

    Science.gov (United States)

    Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.

    2017-12-01

With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is an important task, as floods and droughts are closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try to better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967
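The offline coupling amounts to training a small regression network on pairs of simulated and observed flows and then using it to post-process new simulations. Below is a minimal sketch with synthetic data standing in for SWAT output and observations (no SWAT or GBM data are used; the hand-rolled network is a stand-in for the paper's ANN):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: "observed" flows have sharper extremes than the
# process-model simulation (illustrative only; no actual SWAT output is used).
t = np.arange(2000)
season = np.maximum(np.sin(2 * np.pi * t / 365.0), 0.0)
obs = 100.0 + 80.0 * season ** 3 + rng.normal(0.0, 5.0, t.size)
sim = 100.0 + 50.0 * season          # the process model damps the flood peaks

# Offline coupling: features are the simulated flow and two lags,
# the target is the observed flow; both are standardized.
X = np.column_stack([sim[2:], sim[1:-1], sim[:-2]])
y = obs[2:]
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

# Minimal one-hidden-layer network (tanh), trained by full-batch gradient
# descent on the mean squared error.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(8000):
    H = np.tanh(Xn @ W1 + b1)
    err = (H @ W2 + b2) - yn[:, None]
    g = 2.0 * err / yn.size                  # d(MSE)/d(prediction)
    gH = (g @ W2.T) * (1.0 - H ** 2)         # backprop through tanh
    W2 -= lr * (H.T @ g);  b2 -= lr * g.sum(0)
    W1 -= lr * (Xn.T @ gH); b1 -= lr * gH.sum(0)

# Post-processed (corrected) flows, mapped back to physical units.
corrected = (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel() * y.std() + y.mean()
mae_raw = np.mean(np.abs(sim[2:] - y))       # error of the raw simulation
mae_cor = np.mean(np.abs(corrected - y))     # error after ANN post-processing
```

In this synthetic setting the corrected flows track the peaks that the damped simulation misses, which is the mechanism the paper exploits for extreme flows.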

  13. Modeling systems-level dynamics: Understanding without mechanistic explanation in integrative systems biology.

    Science.gov (United States)

    MacLeod, Miles; Nersessian, Nancy J

    2015-02-01

    In this paper we draw upon rich ethnographic data of two systems biology labs to explore the roles of explanation and understanding in large-scale systems modeling. We illustrate practices that depart from the goal of dynamic mechanistic explanation for the sake of more limited modeling goals. These processes use abstract mathematical formulations of bio-molecular interactions and data fitting techniques which we call top-down abstraction to trade away accurate mechanistic accounts of large-scale systems for specific information about aspects of those systems. We characterize these practices as pragmatic responses to the constraints many modelers of large-scale systems face, which in turn generate more limited pragmatic non-mechanistic forms of understanding of systems. These forms aim at knowledge of how to predict system responses in order to manipulate and control some aspects of them. We propose that this analysis of understanding provides a way to interpret what many systems biologists are aiming for in practice when they talk about the objective of a "systems-level understanding." Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Power monitors: A framework for system-level power estimation using heterogeneous power models

    NARCIS (Netherlands)

    Bansal, N.; Lahiri, K.; Raghunathan, A.; Chakradhar, S.T.

    2005-01-01

Power analysis early in the design cycle is critical for the design of low-power systems. With the move to system-level specifications and design methodologies, there has been significant research interest in system-level power estimation. However, as demonstrated in this paper, the addition of

  15. A simplified physically-based breach model for a high concrete-faced rockfill dam: A case study

    OpenAIRE

    Qi-ming Zhong; Sheng-shui Chen; Zhao Deng

    2018-01-01

    A simplified physically-based model was developed to simulate the breaching process of the Gouhou concrete-faced rockfill dam (CFRD), which is the only breach case of a high CFRD in the world. Considering the dam height, a hydraulic method was chosen to simulate the initial scour position on the downstream slope, with the steepening of the downstream slope taken into account; a headcut erosion formula was adopted to simulate the backward erosion as well. The moment equilibrium method was util...

  16. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
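The Ogata-Banks solution used as the data model has a closed form, sketched below with illustrative parameter values; the site-specific step would be fitting the velocity v and dispersion coefficient D to the monitoring series:

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advective-dispersive transport with constant inlet concentration c0.

    C(x,t) = (c0/2) * [erfc((x - v*t) / (2*sqrt(D*t)))
                       + exp(v*x/D) * erfc((x + v*t) / (2*sqrt(D*t)))]
    x: distance (m), t: time (d), v: seepage velocity (m/d),
    D: dispersion coefficient (m^2/d). Units and values are illustrative.
    """
    s = 2.0 * sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + exp(v * x / D) * erfc((x + v * t) / s))
```

With v = 1 m/d and D = 0.5 m^2/d, the profile at t = 10 d decays monotonically from the inlet concentration toward zero ahead of the advancing front, which is the behavior an ensemble of such fits propagates into the concentration forecast.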

  17. Evaluation of SCS-CN method using a fully distributed physically based coupled surface-subsurface flow model

    Science.gov (United States)

    Shokri, Ali

    2017-04-01

The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of natural connections between surface water and groundwater, historically, these processes have been studied separately. The current trend in hydrological distributed physically based model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about its ambiguity in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalence of the SCS-CN solution to the DrainFlow model, which is a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable solution to runoff prediction through SCS-CN and DrainFlow, the variance between runoff predictions by the two models is minimized by changing Curve Number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation to CN and Ia.
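For reference, the SCS-CN direct-runoff computation that the study maps onto physically based parameters is only a few lines. Here Ia is left as a free parameter, mirroring the joint calibration of (CN, Ia); the standard default Ia = 0.2*S is used when none is given:

```python
def scs_cn_runoff(p_mm, cn, ia=None):
    """Direct runoff depth (mm) from storm rainfall p_mm by the SCS-CN method.

    S = 25400/CN - 254 (mm) is the potential maximum retention; Ia defaults
    to the conventional 0.2*S but is treated as calibratable, as in the study.
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0.
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s if ia is None else ia
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

A quick sanity check: CN = 100 implies S = 0 and all rainfall becomes runoff, while lower CN values abstract progressively more of the storm.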

  18. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    Science.gov (United States)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable to study complex underlying processes and to predict future environmental changes. The increase of climate change adaptation efforts following the release of the latest IPCC report indicates that the communication of facts about what has already changed is an appropriate tool to trigger climate change adaptation. Therefore we suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier to comprehend for non-scientific stakeholders. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion about lag times between the hydrological parameters can increase the awareness of local stakeholders for delayed environmental responses.

  19. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Physically-based modeling of topographic effects on spatial evapotranspiration and soil moisture patterns through radiation and wind

    Directory of Open Access Journals (Sweden)

    M. Liu

    2012-02-01

Full Text Available In this paper, simulations with the Soil Water Atmosphere Plant (SWAP) model are performed to quantify the spatial variability of both potential and actual evapotranspiration (ET) and soil moisture content (SMC) caused by topography-induced spatial wind and radiation differences. To obtain the spatially distributed ET/SMC patterns, the field-scale SWAP model is applied in a distributed way for both pointwise and catchment-wide simulations. An adapted radiation model from r.sun and the physically-based meso-scale wind model METRAS PC are applied to obtain the spatial radiation and wind patterns, which show significant spatial variation and correlate with aspect and elevation, respectively. Such topographic dependences and spatial variations further propagate to ET/SMC. A strong spatial, season-dependent, scale-relevant intra-catchment variability in daily/annual ET, and less variability in SMC, can be observed in the numerical experiments. The study concludes that topography has a significant effect on ET/SMC in the humid region, where ET is an energy-limited rather than a water-availability-limited process. It affects spatial runoff generation through spatial radiation and wind, and should therefore be taken into account in hydrological model development. In addition, the methodology used in the study can serve as a general method for physically-based ET estimation for data-sparse regions.

  1. Assessment of nitrate pollution in the Grand Morin aquifers (France): Combined use of geostatistics and physically based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Flipo, Nicolas [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France)]. E-mail: nicolas.flipo@ensmp.fr; Jeannee, Nicolas [Geovariances, 49 bis, avenue Franklin Roosevelt, F-77212 Avon (France); Poulin, Michel [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France); Even, Stephanie [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France); Ledoux, Emmanuel [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France)

    2007-03-15

The objective of this work is to combine several approaches to better understand nitrate fate in the Grand Morin aquifers (2700 km²), part of the Seine basin. CAWAQS results from the coupling of the hydrogeological model NEWSAM with the hydrodynamic and biogeochemical model of river PROSE. CAWAQS is coupled with the agronomic model STICS in order to simulate nitrate migration in basins. First, kriging provides a satisfactory representation of aquifer nitrate contamination from local observations, to set initial conditions for the physically based model. Then associated confidence intervals, derived from data using geostatistics, are used to validate CAWAQS results. Results and evaluation obtained from the combination of these approaches are given (period 1977-1988). Then CAWAQS is used to simulate nitrate fate for a 20-year period (1977-1996). The mean increase in aquifer nitrate concentrations is 0.09 mgN L⁻¹ yr⁻¹, resulting from an average infiltration flux of 3500 kgN km⁻² yr⁻¹. - Combined use of geostatistics and physically based modeling allows assessment of nitrate concentrations in aquifer systems.

  2. Assessment of nitrate pollution in the Grand Morin aquifers (France): Combined use of geostatistics and physically based modeling

    International Nuclear Information System (INIS)

    Flipo, Nicolas; Jeannee, Nicolas; Poulin, Michel; Even, Stephanie; Ledoux, Emmanuel

    2007-01-01

The objective of this work is to combine several approaches to better understand nitrate fate in the Grand Morin aquifers (2700 km²), part of the Seine basin. CAWAQS results from the coupling of the hydrogeological model NEWSAM with the hydrodynamic and biogeochemical model of river PROSE. CAWAQS is coupled with the agronomic model STICS in order to simulate nitrate migration in basins. First, kriging provides a satisfactory representation of aquifer nitrate contamination from local observations, to set initial conditions for the physically based model. Then associated confidence intervals, derived from data using geostatistics, are used to validate CAWAQS results. Results and evaluation obtained from the combination of these approaches are given (period 1977-1988). Then CAWAQS is used to simulate nitrate fate for a 20-year period (1977-1996). The mean increase in aquifer nitrate concentrations is 0.09 mgN L⁻¹ yr⁻¹, resulting from an average infiltration flux of 3500 kgN km⁻² yr⁻¹. - Combined use of geostatistics and physically based modeling allows assessment of nitrate concentrations in aquifer systems.

  3. Development of a physically-based planar inductors VHDL-AMS model for integrated power converter design

    Science.gov (United States)

    Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé

    2014-05-01

Design of integrated power converters needs prototype-less approaches. Specific simulations are required for the investigation and validation process. Simulation relies on active and passive device models. Models of planar devices, for instance, are still not available in power simulator tools. There is, thus, a specific limitation in the simulation process of integrated power systems. The paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when the skin, proximity and parasitic capacitance effects are taken into account. A heterogeneous simulation scheme, including circuit and device models, is successfully implemented in the VHDL-AMS language and simulated in the Simplorer platform. The mixed simulation results have been favorably tested and compared with practical measurements. It is found that the multi-domain simulation results and measurement data are in close agreement.

  4. System-level modelling of dynamic reconfigurable designs using functional programming abstractions

    NARCIS (Netherlands)

    Uchevler, B.N.; Svarstad, Kjetil; Kuper, Jan; Baaij, C.P.R.

    With the increasing size and complexity of designs in electronics, new approaches are required for the description and verification of digital circuits, specifically at the system level. Functional HDLs can appear as an advantageous choice for formal verification and high-level descriptions. In this

  5. ARTS: A System-Level Framework for Modeling MPSoC Components and Analysis of their Causality

    DEFF Research Database (Denmark)

    Mahadevan, Shankar; Storgaard, Michael; Madsen, Jan

    2005-01-01

Designing complex heterogeneous multiprocessor System-on-Chip (MPSoC) requires support for modeling and analysis of the different layers, i.e., application, operating system (OS) and platform architecture. This paper presents an abstract system-level modeling framework, called ARTS, to support...

  6. Predicting the Water Level Fluctuation in an Alpine Lake Using Physically Based, Artificial Neural Network, and Time Series Forecasting Models

    Directory of Open Access Journals (Sweden)

    Chih-Chieh Young

    2015-01-01

Full Text Available Accurate prediction of water level fluctuation is important in lake management due to its significant impacts in various aspects. This study utilizes four model approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (back-propagation neural network, BPNN), a time series forecasting model (autoregressive moving average with exogenous inputs, ARMAX), and a combined hydrodynamic and ANN model. Particularly, the black-box ANN model and the physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) were collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performance. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not the validation stage. The ANN and ARMAX models predict the water level better than the hydrodynamic model does. Meanwhile, the results from the ANN model are superior to those from the ARMAX model in both training and validation phases. The proposed approach of using a three-dimensional hydrodynamic model in conjunction with an ANN model clearly improves the prediction accuracy for water level fluctuation.
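The three statistical indicators used for model evaluation can be computed directly:

```python
import numpy as np

def indicators(obs, pred):
    """Mean absolute error, root mean square error, and correlation coefficient
    between observed and predicted series (e.g., hourly water levels)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mae = np.mean(np.abs(pred - obs))
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    r = np.corrcoef(obs, pred)[0, 1]
    return mae, rmse, r
```

A perfect prediction gives (0, 0, 1); note that a biased but perfectly correlated prediction still yields r = 1, which is why the error measures are reported alongside the correlation.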

  7. In situ measurement and modeling of biomechanical response of human cadaveric soft tissues for physics-based surgical simulation.

    Science.gov (United States)

    Lim, Yi-Je; Deo, Dhanannjay; Singh, Tejinder P; Jones, Daniel B; De, Suvranu

    2009-06-01

Development of a laparoscopic surgery simulator that delivers high-fidelity visual and haptic (force) feedback, based on the physical models of soft tissues, requires the use of empirical data on the mechanical behavior of intra-abdominal organs under the action of external forces. As experiments on live human patients present significant risks, the use of cadavers presents an alternative. We present techniques of measuring and modeling the mechanical response of human cadaveric tissue for the purpose of developing a realistic model. The major contribution of this paper is the development of physics-based models of soft tissues that range from linear elastic models to nonlinear viscoelastic models which are efficient for application within the framework of a real-time surgery simulator. To investigate the in situ mechanical, static, and dynamic properties of intra-abdominal organs, we have developed a high-precision instrument by retrofitting a robotic device from Sensable Technologies (position resolution of 0.03 mm) with a six-axis Nano 17 force-torque sensor from ATI Industrial Automation (force resolution of 1/1,280 N along each axis), and used it to apply precise displacement stimuli and record the force response of liver and stomach of ten fresh human cadavers. The mean elastic modulus of liver and stomach is estimated as 5.9359 kPa and 1.9119 kPa, respectively, over the range of indentation depths tested. We have also obtained the parameters of a quasilinear viscoelastic (QLV) model to represent the nonlinear viscoelastic behavior of the cadaver stomach and liver over a range of indentation depths and speeds. The models are found to have an excellent goodness of fit (with R² > 0.99). The data and models presented here, together with additional ones based on the same principles, would result in realistic physics-based surgical simulators.
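As an illustration of how a single elastic modulus can be identified from such indentation data, the sketch below fits a linear force-displacement stiffness and converts it with the flat-ended cylindrical punch solution F = 2*a*E*d/(1 - nu^2). This is a standard linear-elastic contact model, not necessarily the identification procedure used in the paper, and nu = 0.5 assumes incompressible tissue:

```python
import numpy as np

def modulus_from_indentation(depth_m, force_n, radius_m, nu=0.5):
    """Estimate an effective Young's modulus from quasi-static indentation data.

    Assumes the flat-ended cylindrical punch solution for a linear elastic
    half-space: F = 2 * a * E / (1 - nu^2) * d, with punch radius a and
    indentation depth d. A different contact geometry or a QLV model would
    require a different identification.
    """
    d = np.asarray(depth_m, float)
    f = np.asarray(force_n, float)
    k = float(d @ f) / float(d @ d)          # least-squares stiffness F = k*d
    return k * (1.0 - nu ** 2) / (2.0 * radius_m)
```

With noise-free synthetic data generated from a liver-like modulus of 5.9 kPa, the fit recovers the modulus exactly; real indentation records would scatter around the fitted line.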

  8. Physically based modelling and optimal operation for product drying during post-harvest processing.

    NARCIS (Netherlands)

    Boxtel, van A.J.B.; Lukasse, L.; Farkas, I.; Rendik, Z.

    1996-01-01

    The development of new procedures for crop production and post-harvest processing requires models. Models based on physical backgrounds are most useful for this purpose because of their extrapolation potential. An optimal procedure is developed for alfalfa drying using a physical model. The model

  9. Physics-Based Modeling of Electric Operation, Heat Transfer, and Scrap Melting in an AC Electric Arc Furnace

    Science.gov (United States)

    Opitz, Florian; Treffinger, Peter

    2016-04-01

Electric arc furnaces (EAF) are complex industrial plants whose actual behavior depends upon numerous factors. Due to its energy-intensive operation, the EAF process has always been subject to optimization efforts. For these reasons, several models have been proposed in the literature to analyze and predict different modes of operation. Most of these models focus on the processes inside the vessel itself. The present paper introduces a dynamic, physics-based model of a complete EAF plant, which consists of four subsystems: vessel, electric system, electrode regulation, and off-gas system. Furthermore, the solid phase is not treated as homogeneous; instead, a simple spatial discretization is employed. Hence it is possible to simulate the energy input by electric arcs and fossil fuel burners depending on the state of the melting progress. The model is implemented in the object-oriented, equation-based language Modelica. The simulation results are compared to literature data.

  10. A local-community-level, physically-based model of end-use energy consumption by Australian housing stock

    International Nuclear Information System (INIS)

    Ren Zhengen; Paevere, Phillip; McNamara, Cheryl

    2012-01-01

We developed a physics-based bottom-up model to estimate annual housing stock energy consumption at a local community level (Census Collection District, CCD) with an hourly resolution. Total energy consumption, including space heating and cooling, water heating, lighting and other household appliances, was simulated by considering building construction and materials, equipment and appliances, local climates and occupancy patterns. The model was used to analyse energy use by private dwellings in more than five thousand CCDs in the state of New South Wales (NSW), Australia. The predicted results focus on electricity consumption (natural gas and other fuel sources were excluded as the data are not available) and track the actual electricity consumption at CCD level with an error of 9.2% when summed to state level. For NSW and Victoria in 2006, the predicted state electricity consumption is close to the published model (within 6%) and statistical data (within 10%). A key feature of the model is that it can be used to predict hourly electricity consumption and peak demand at fine geographic scales, which is important for grid planning and designing local energy efficiency or demand response strategies. - Highlights: ► We developed a physics-based model to estimate housing stock energy consumption. ► House type and vintage, family type and occupancy time were considered. ► The model results are close to actual energy consumption at local community level. ► Its results agree well with the published model and statistical data at state level. ► The model can provide residential energy consumption estimates at hourly to annual resolution.
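The bottom-up aggregation step, dwelling-level hourly load profiles scaled by dwelling counts and summed to community and state level, can be sketched with synthetic numbers (all values below are placeholders, not NSW data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hourly end-use load per dwelling archetype (kWh per dwelling-hour), and
# dwelling counts per community (CCD). Both are synthetic placeholders.
hours = 24
n_archetypes, n_ccd = 4, 10
load = rng.uniform(0.2, 1.5, (n_archetypes, hours))
counts = rng.integers(50, 400, (n_ccd, n_archetypes))

ccd_hourly = counts @ load              # (n_ccd, hours): community-level demand
state_hourly = ccd_hourly.sum(axis=0)   # state-level hourly profile
peak_hour = int(state_hourly.argmax())  # hour of peak demand, for grid planning
```

The same matrix product scales to thousands of CCDs; the hourly resolution is what makes peak-demand analysis possible, rather than only annual totals.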

  11. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz; Saputra, Wardana; Kirati, Wissem

    2017-01-01

    and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from the horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model

  12. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

As extreme ultraviolet lithography comes closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise-constant models, where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise-constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
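The difference between piecewise-constant model assignment and a DMG-style near continuum can be illustrated on a single hypothetical field-dependent parameter (the positions and values below are invented, not calibrated scanner data):

```python
import numpy as np

# Hypothetical field-dependent model parameter (e.g., an aberration term)
# sampled at a few calibration positions across the field (arbitrary units).
x_cal = np.array([-12.0, -6.0, 0.0, 6.0, 12.0])   # field x-positions, mm
p_cal = np.array([1.8, 1.2, 1.0, 1.3, 1.9])        # calibrated parameter values

def piecewise_constant(x):
    """Static model assignment: nearest calibrated model (previous practice)."""
    return p_cal[np.argmin(np.abs(x_cal - x[:, None]), axis=1)]

def dynamic(x):
    """DMG-style near continuum: interpolate the parameter per simulation region."""
    return np.interp(x, x_cal, p_cal)

x = np.linspace(-12.0, 12.0, 481)
jump_pw = np.max(np.abs(np.diff(piecewise_constant(x))))  # model dislocations
jump_dm = np.max(np.abs(np.diff(dynamic(x))))             # smooth variation
```

The piecewise-constant assignment produces step discontinuities at model boundaries (the OPC "dislocations" described above), while the interpolated parameter varies smoothly through the field.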

  13. A Comparison between Physics-based and Polytropic MHD Models for Stellar Coronae and Stellar Winds of Solar Analogs

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, O. [Lowell Center for Space Science and Technology, University of Massachusetts, Lowell, MA 01854 (United States)

    2017-02-01

The development of the Zeeman–Doppler Imaging (ZDI) technique has provided synoptic observations of surface magnetic fields of low-mass stars. This led the stellar astrophysics community to adopt modeling techniques that have been used in solar physics with solar magnetograms. However, many of these techniques have been neglected by the solar community due to their failure to reproduce solar observations. Nevertheless, some of these techniques are still used to simulate the coronae and winds of solar analogs. Here we present a comparative study between two MHD models for the solar corona and solar wind. The first type of model is a polytropic wind model, and the second is the physics-based AWSOM model. We show that while the AWSOM model consistently reproduces many solar observations, the polytropic model fails to reproduce many of them, and in the cases where it does, its solutions are unphysical. Our recommendation is that polytropic models, which are used to estimate mass-loss rates and other parameters of solar analogs, must first be calibrated with solar observations. Alternatively, these models can be calibrated with models that capture more detailed physics of the solar corona (such as the AWSOM model) and that can reproduce solar observations in a consistent manner. Without such a calibration, the results of the polytropic models cannot be validated, and they may be wrongly used by others.

  14. Prognostics Health Management and Physics based failure Models for Electrolytic Capacitors

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper proposes first principles based modeling and prognostics approach for electrolytic capacitors. Electrolytic capacitors and MOSFETs are the two major...

  15. Physics Based Model for Online Fault Detection in Autonomous Cryogenic Loading System

    Science.gov (United States)

    Kashani, Ali; Devine, Ekaterina Viktorovna P; Luchinsky, Dmitry Georgievich; Smelyanskiy, Vadim; Sass, Jared P.; Brown, Barbara L.; Patterson-Hine, Ann

    2013-01-01

We report progress in the development of the chilldown model for the rapid cryogenic loading system developed at KSC. The nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The two-phase flow model of the chilldown is approximated as one-dimensional homogeneous fluid flow with a no-slip condition for the interphase velocity. The model is built using the commercial SINDA/FLUINT software. The results of the numerical predictions are in good agreement with the experimental time traces. The obtained results pave the way to the application of the SINDA/FLUINT model as a verification tool for the design and algorithm development required for autonomous loading operation.

  16. A method for physically based model analysis of conjunctive use in response to potential climate changes

    Science.gov (United States)

    Hanson, R.T.; Flint, L.E.; Flint, A.L.; Dettinger, M.D.; Faunt, C.C.; Cayan, D.; Schmid, W.

    2012-01-01

    Potential climate change effects on aspects of conjunctive management of water resources can be evaluated by linking climate models with fully integrated groundwater-surface water models. The objective of this study is to develop a modeling system that links global climate models with regional hydrologic models, using the California Central Valley as a case study. The new method is a supply and demand modeling framework that can be used to simulate and analyze potential climate change and conjunctive use. Supply-constrained and demand-driven linkages in the water system in the Central Valley are represented with the linked climate models, precipitation-runoff models, agricultural and native vegetation water use, and hydrologic flow models to demonstrate the feasibility of this method. Simulated precipitation and temperature from the GFDL-A2 climate change scenario were used through the 21st century to drive a regional water-balance mountain hydrologic watershed model (MHWM) for the surrounding watersheds, in combination with a regional integrated hydrologic model of the Central Valley (CVHM). Application of this method demonstrates the potential transition from predominantly surface water to groundwater supply for agriculture, with secondary effects that may limit this transition of conjunctive use. The particular scenario considered includes intermittent climatic droughts in the first half of the 21st century followed by severe persistent droughts in the second half of the 21st century. These climatic droughts do not yield a valley-wide operational drought but do cause reduced surface water deliveries and increased groundwater abstractions that may cause additional land subsidence, reduced water for riparian habitat, or changes in flows at the Sacramento-San Joaquin River Delta. The method developed here can be used to explore conjunctive use adaptation options and hydrologic risk assessments in regional hydrologic systems throughout the world.

  17. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments

    Directory of Open Access Journals (Sweden)

    H. Roux

    2011-09-01

    A spatially distributed hydrological model, dedicated to flood simulation, is developed on the basis of physical process representation (infiltration, overland flow, channel routing). Estimation of model parameters requires data concerning topography, soil properties, vegetation and land use. Four parameters are calibrated for the entire catchment using one flood event. Model sensitivity to individual parameters is assessed using Monte-Carlo simulations. Results of this sensitivity analysis, with a criterion based on the Nash efficiency coefficient and the error of peak time and runoff, are used to calibrate the model. This procedure is tested on the Gardon d'Anduze catchment, located in the Mediterranean zone of southern France. A first validation is conducted using three flood events with different hydrometeorological characteristics. This sensitivity analysis along with validation tests illustrates the predictive capability of the model and points out possible improvements to the model's structure and parameterization for flash flood forecasting, especially in ungauged basins. Concerning the model structure, results show that water transfer through the subsurface zone also contributes to the hydrograph response to an extreme event, especially during the recession period. Maps of soil saturation emphasize the impact of rainfall and soil property variability on these dynamics. Adding a subsurface flow component to the simulation also greatly impacts the spatial distribution of soil saturation and shows the importance of the drainage network. Measurements of such distributed variables would help discriminate between different possible model structures.
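
The calibration criteria named in the abstract (the Nash efficiency coefficient and the error of peak time) are simple to compute; a minimal sketch follows, assuming hourly hydrographs. The function names and the toy discharge values are illustrative, not taken from the paper.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the simulation is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def peak_time_error(observed, simulated, dt_hours=1.0):
    """Signed difference between simulated and observed peak times, in hours."""
    return (int(np.argmax(simulated)) - int(np.argmax(observed))) * dt_hours

# Toy hourly hydrographs (m^3/s) for a single flood event
obs = [5, 20, 80, 140, 90, 40, 15, 8]
sim = [6, 25, 70, 150, 85, 45, 18, 9]
print(nash_sutcliffe(obs, sim))   # close to 1 for a good fit
print(peak_time_error(obs, sim))  # 0.0 hours for these toy series
```

In a Monte-Carlo calibration, these criteria would be evaluated for each random parameter set and the best-scoring sets retained.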

  18. A physically based compact I-V model for monolayer TMDC channel MOSFET and DMFET biosensor.

    Science.gov (United States)

    Rahman, Ehsanur; Shadman, Abir; Ahmed, Imtiaz; Khan, Saeed Uz Zaman; Khosru, Quazi D M

    2018-06-08

    In this work, a compact transport model has been developed for monolayer transition metal dichalcogenide (TMDC) channel MOSFET. The analytical model solves the Poisson's equation for the inversion charge density to get the electrostatic potential in the channel. Current is then calculated by solving the drift-diffusion equation. The model makes the gradual channel approximation to simplify the solution procedure. The appropriate density of states obtained from first-principles density functional theory simulation has been considered to keep the model physically accurate for monolayer TMDC channel FET. The outcome of the model has been benchmarked against both experimental and numerical quantum simulation results with the help of a few fitting parameters. Using the compact model, detailed output and transfer characteristics of monolayer WSe2 FET have been studied, and various performance parameters have been determined. The study confirms excellent ON and OFF state performances of monolayer WSe2 FET which could be viable for next-generation high-speed, low-power applications. Also, the proposed model has been extended to study the operation of a biosensor. A monolayer MoS2 channel-based dielectric-modulated FET is investigated using the compact model for detection of a biomolecule in a dry environment.
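
To illustrate the kind of closed-form I-V expression a gradual-channel compact model produces, here is the textbook long-channel square-law result. This is a generic sketch, not the paper's TMDC-specific model (which uses a 2-D density of states and fitting parameters); the threshold voltage and transconductance factor below are arbitrary illustrative values.

```python
def drain_current(vgs, vds, vth=0.4, mu_cox_wl=1e-3):
    """Square-law drain current (A) under the gradual channel
    approximation. Generic long-channel sketch: triode for
    vds < vgs - vth, saturation beyond; subthreshold neglected."""
    if vgs <= vth:
        return 0.0  # device off (leakage ignored in this sketch)
    vov = vgs - vth                       # overdrive voltage
    if vds < vov:                         # triode (linear) region
        return mu_cox_wl * (vov * vds - 0.5 * vds ** 2)
    return 0.5 * mu_cox_wl * vov ** 2     # saturation region
```

A compact model of this shape is what gets swept over (vgs, vds) to produce the output and transfer characteristics mentioned in the abstract.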

  19. A physically based compact I–V model for monolayer TMDC channel MOSFET and DMFET biosensor

    Science.gov (United States)

    Rahman, Ehsanur; Shadman, Abir; Ahmed, Imtiaz; Zaman Khan, Saeed Uz; Khosru, Quazi D. M.

    2018-06-01

    In this work, a compact transport model has been developed for monolayer transition metal dichalcogenide (TMDC) channel MOSFET. The analytical model solves the Poisson’s equation for the inversion charge density to get the electrostatic potential in the channel. Current is then calculated by solving the drift–diffusion equation. The model makes the gradual channel approximation to simplify the solution procedure. The appropriate density of states obtained from first-principles density functional theory simulation has been considered to keep the model physically accurate for monolayer TMDC channel FET. The outcome of the model has been benchmarked against both experimental and numerical quantum simulation results with the help of a few fitting parameters. Using the compact model, detailed output and transfer characteristics of monolayer WSe2 FET have been studied, and various performance parameters have been determined. The study confirms excellent ON and OFF state performances of monolayer WSe2 FET which could be viable for next-generation high-speed, low-power applications. Also, the proposed model has been extended to study the operation of a biosensor. A monolayer MoS2 channel-based dielectric-modulated FET is investigated using the compact model for detection of a biomolecule in a dry environment.

  20. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. 
The outputs
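
The cell-by-cell probabilistic computation described above can be sketched with the standard infinite-slope factor of safety wrapped in Monte Carlo sampling. This is a schematic in the spirit of TRIGRS-P, not its implementation: the formula is the common infinite-slope expression, and all parameter values, distributions, and function names below are illustrative.

```python
import math
import random

def factor_of_safety(phi_deg, c_kpa, slope_deg, depth_m,
                     psi_m=0.0, gamma_s=20.0, gamma_w=9.81):
    """Infinite-slope factor of safety. psi_m is the pressure head (m)
    at the slip depth; positive (wetter) values are destabilising.
    gamma_s, gamma_w are soil and water unit weights (kN/m^3)."""
    phi = math.radians(phi_deg)
    beta = math.radians(slope_deg)
    return (math.tan(phi) / math.tan(beta)
            + (c_kpa - psi_m * gamma_w * math.tan(phi))
            / (gamma_s * depth_m * math.sin(beta) * math.cos(beta)))

def failure_probability(n=20000, seed=1):
    """Sample cohesion and friction angle from assumed normal
    distributions for one grid cell and count FS < 1 outcomes."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        c = max(0.1, rng.gauss(5.0, 2.0))      # effective cohesion, kPa
        phi = max(5.0, rng.gauss(30.0, 4.0))   # friction angle, degrees
        if factor_of_safety(phi, c, slope_deg=35.0,
                            depth_m=2.0, psi_m=1.0) < 1.0:
            fails += 1
    return fails / n
```

Repeating this over every cell of a grid, with pore pressures supplied by the infiltration solution, yields a map of failure probability rather than a single deterministic FS value.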

  1. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For this purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. 
The outputs of several model runs obtained varying the input parameters

  2. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    International Nuclear Information System (INIS)

    Chen, W.-Y.; Tsai, J.-W.; Ju, Y.-R.; Liao, C.-M.

    2010-01-01

    The purpose of this paper was to use a quantitative systems-level approach, employing a biotic ligand model-based threshold damage model, to examine physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested the present model and its triggering mechanisms by carrying out a series of modeling experiments in which we used periodic pulses and sine waves as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for freshwater clam. We found that tilapia increase bioenergetic costs to maintain acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns also has implications for optimizing species growth, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling of pulsed and fluctuating arsenic exposures.

  3. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    Energy Technology Data Exchange (ETDEWEB)

    Chen, W.-Y. [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Tsai, J.-W. [Institute of Ecology and Evolutionary Ecology, China Medical University, Taichung 40402, Taiwan (China); Ju, Y.-R. [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Liao, C.-M., E-mail: cmliao@ntu.edu.t [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China)

    2010-05-15

    The purpose of this paper was to use a quantitative systems-level approach, employing a biotic ligand model-based threshold damage model, to examine physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested the present model and its triggering mechanisms by carrying out a series of modeling experiments in which we used periodic pulses and sine waves as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for freshwater clam. We found that tilapia increase bioenergetic costs to maintain acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns also has implications for optimizing species growth, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling of pulsed and fluctuating arsenic exposures.

  4. Physics-based process model approach for detecting discontinuity during friction stir welding

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Amber; Pfefferkorn, Frank E.; Duffie, Neil A.; Ferrier, Nicola J.; Smith, Christopher B.; Malukhin, Kostya; Zinn, Michael

    2015-02-12

    The goal of this work is to develop a method for detecting the creation of discontinuities during friction stir welding. This in situ weld monitoring method could significantly reduce the need for post-process inspection. A process force model and a discontinuity force model were created based on the state-of-the-art understanding of flow around a friction stir welding (FSW) tool. These models are used to predict the FSW forces and size of discontinuities formed in the weld. Friction stir welds with discontinuities and welds without discontinuities were created, and the differences in force dynamics were observed. In this paper, discontinuities were generated by reducing the tool rotation frequency and increasing the tool traverse speed in order to create "cold" welds. Experimental force data for welds with discontinuities and welds without discontinuities compared favorably with the predicted forces. The model currently overpredicts the discontinuity size.

  5. Deep-depletion physics-based analytical model for scanning capacitance microscopy carrier profile extraction

    International Nuclear Information System (INIS)

    Wong, K. M.; Chim, W. K.

    2007-01-01

    An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and the dopant profile from secondary ion mass spectrometry measurements.

  6. The Use of Physics-Based Models to Predict Fragmentation of Ordnance

    National Research Council Canada - National Science Library

    Ott, Garland

    1997-01-01

    These same data collected from arena tests are also used as input for models used by other technical communities such as effectiveness, collateral damage, force protection, and weapon design. (1...

  7. Creating physically-based three-dimensional microstructures: Bridging phase-field and crystal plasticity models.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Owen, Steven J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Abdeljawad, Fadi F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hanks, Byron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    In order to better incorporate microstructures in continuum scale models, we use a novel finite element (FE) meshing technique to generate three-dimensional polycrystalline aggregates from a phase field grain growth model of grain microstructures. The proposed meshing technique creates hexahedral FE meshes that capture smooth interfaces between adjacent grains. Three-dimensional realizations of grain microstructures from the phase field model are used in crystal plasticity-finite element (CP-FE) simulations of polycrystalline α-iron. We show that the interface conformal meshes significantly reduce artificial stress localizations in voxelated meshes that exhibit the so-called "wedding cake" interfaces. This framework provides a direct link between two mesoscale models - phase field and crystal plasticity - and for the first time allows mechanics simulations of polycrystalline materials using three-dimensional hexahedral finite element meshes with realistic topological features.

  8. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz

    2017-10-18

    Over the last six years, crude oil production from shales and ultra-deep GOM in the United States has accounted for most of the net increase of global oil production. Therefore, it is important to have a good predictive model of oil production and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model to thousands of wells in the Eagle Ford shale. Given well geometry, we obtain a one-dimensional nonlinear pressure diffusion equation that governs flow of mostly oil and solution gas. In principle, solutions of this equation depend on many parameters, but in practice and within a given oil shale, all but three can be fixed at typical values, leading to a nonlinear diffusion problem we linearize and solve exactly with a scaling
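
The practical consequence of such a scaling solution is that each well's cumulative production collapses onto one master curve: square-root growth during linear flow, flattening after the fracture-interference time. The sketch below uses tanh(sqrt(x)) as a smooth illustrative stand-in for the exact master curve, which it is not; the interference time and amplitude values are hypothetical.

```python
import math

def dimensionless_cumulative(t_years, tau_years=8.0):
    """Master-curve sketch for a hydrofractured horizontal well:
    grows like sqrt(t) while flow is linear (t << tau) and flattens
    after the interference time tau. tanh(sqrt(x)) is an
    illustrative approximation, not the exact scaling solution."""
    x = t_years / tau_years
    return math.tanh(math.sqrt(x))

def forecast(q_scale_mbbl, t_years, tau_years=8.0):
    """Cumulative oil (thousand bbl). q_scale_mbbl is the single
    per-well amplitude fitted from early production (hypothetical)."""
    return q_scale_mbbl * dimensionless_cumulative(t_years, tau_years)
```

Fitting only the amplitude (and, if needed, tau) per well is what makes this style of model tractable for thousands of wells at once.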

  9. System-Level Coupled Modeling of Piezoelectric Vibration Energy Harvesting Systems by Joint Finite Element and Circuit Analysis

    Directory of Open Access Journals (Sweden)

    Congcong Cheng

    2016-01-01

    A practical piezoelectric vibration energy harvesting (PVEH) system is usually composed of two coupled parts: a harvesting structure and an interface circuit. Thus, it is necessary to build system-level coupled models for analyzing PVEH systems, so that the whole PVEH system can be optimized to obtain a high overall efficiency. In this paper, two classes of coupled models are proposed by joint finite element and circuit analysis. The first is to integrate the equivalent circuit model of the harvesting structure with the interface circuit, and the second is to integrate the equivalent electrical impedance of the interface circuit into the finite element model of the harvesting structure. Then equivalent circuit model parameters of the harvesting structure are estimated by finite element analysis, and the equivalent electrical impedance of the interface circuit is derived by circuit analysis. In the end, simulations are done to validate and compare the proposed two classes of system-level coupled models. The results demonstrate that harvested powers from the two classes of coupled models are close to theoretical values. Thus, the proposed coupled models can be used for system-level optimizations in engineering applications.
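
The simplest equivalent-circuit view of a piezo harvester, often used as the starting point for such coupled models, is a sinusoidal current source in parallel with the clamped capacitance Cp feeding a resistive load. The sketch below computes the resulting average power and the load that maximises it; this is the classic uncoupled textbook circuit, not the paper's joint FE-circuit model, and all numeric values in the test are illustrative.

```python
import math

def harvested_power(i_amp, freq_hz, cp_farad, r_load):
    """Average power (W) delivered to a resistive load by a sinusoidal
    current source of amplitude i_amp (A) in parallel with the
    clamped capacitance Cp: P = 0.5*I^2*R / (1 + (w*R*Cp)^2)."""
    w = 2.0 * math.pi * freq_hz
    return 0.5 * i_amp ** 2 * r_load / (1.0 + (w * r_load * cp_farad) ** 2)

def optimal_load(freq_hz, cp_farad):
    """Load resistance maximising the expression above: R = 1/(w*Cp)."""
    return 1.0 / (2.0 * math.pi * freq_hz * cp_farad)
```

A system-level coupled model refines this picture by letting the finite element solution supply the source and capacitance parameters, and by replacing the resistor with the actual interface circuit impedance.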

  10. Physically Based Modeling of Delta Island Consumptive Use: Fabian Tract and Staten Island, California

    Directory of Open Access Journals (Sweden)

    Lucas J. Siegfried

    2014-12-01

    doi: http://dx.doi.org/10.15447/sfews.2014v12iss4art2. Water use estimation is central to managing most water problems. To better understand water use in California’s Sacramento–San Joaquin Delta, a collaborative, integrated approach was used to predict Delta island diversion, consumption, and return of water at a more detailed temporal and spatial resolution. Fabian Tract and Staten Island were selected for this pilot study based on available data and island accessibility. Historical diversion and return location data, water rights claims, LiDAR digital elevation model data, and Google Earth were used to predict island diversion and return locations, which were tested and improved through ground-truthing. Soil and land-use characteristics as well as weather data were incorporated with the Integrated Water Flow Model Demand Calculator to estimate water use and runoff returns from input agricultural lands. For modeling, the islands were divided into grid cells forming subregions, representing fields, levees, ditches, and roads. The subregions were joined hydrographically to form diversion and return watersheds related to return and diversion locations. Diversions and returns were limited by physical capacities. Differences between initial model and measured results point to the importance of seepage into deeply subsided islands. The capabilities of the models presented far exceeded current knowledge of agricultural practices within the Delta, demonstrating the need for more data collection to enable improvements upon current Delta Island Consumptive Use estimates.

  11. Physics Based Electrolytic Capacitor Degradation Models for Prognostic Studies under Thermal Overstress

    Science.gov (United States)

    Kulkarni, Chetan S.; Celaya, Jose R.; Goebel, Kai; Biswas, Gautam

    2012-01-01

    Electrolytic capacitors are used in several applications ranging from power supplies on safety critical avionics equipment to power drivers for electro-mechanical actuators. This makes them good candidates for prognostics and health management research. Prognostics provides a way to assess the remaining useful life of components or systems based on their current state of health and their anticipated future use and operational conditions. Past experience shows that capacitors tend to degrade and fail faster under the high electrical and thermal stress conditions that they are often subjected to during operation. In this work, we study the effects of accelerated aging due to thermal stress on different sets of capacitors under different conditions. Our focus is on deriving first-principles degradation models for thermal stress conditions. Data collected from simultaneous experiments are used to validate the desired models. Our overall goal is to derive accurate models of capacitor degradation, and use them to predict performance changes in DC-DC converters.
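
A common first-principles ingredient in thermal-overstress capacitor models is that electrolyte evaporation follows an Arrhenius law in temperature, with capacitance falling as electrolyte is lost. The sketch below illustrates that structure only; the activation energy, pre-exponential factor, and linear capacitance-loss form are illustrative assumptions, not the paper's derived or fitted model.

```python
import math

def capacitance_at(t_hours, temp_c, c0_uf=2200.0, ea_ev=0.94, a=2e8):
    """Capacitance (uF) after t_hours at a constant case temperature.
    Evaporation rate ~ a * exp(-Ea / (kB * T)) (fraction of electrolyte
    lost per hour); capacitance assumed to fall linearly with loss.
    ea_ev and a are illustrative constants, not fitted values."""
    k_b = 8.617e-5                              # Boltzmann constant, eV/K
    t_kelvin = temp_c + 273.15
    rate = a * math.exp(-ea_ev / (k_b * t_kelvin))
    return c0_uf * max(0.0, 1.0 - rate * t_hours)
```

With constants of this shape, aging accelerates sharply with temperature, which is why thermal-overstress experiments can compress years of field aging into weeks.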

  12. A physics-based potential and electric field model of a nanoscale ...

    Indian Academy of Sciences (India)


  13. Finite-element modelling of physics-based hillslope hydrology, Keith Beven, and beyond

    Science.gov (United States)

    Loague, Keith; Ebel, Brian A.

    2016-01-01

    Keith Beven is a voice of reason on the intelligent use of models and the subsequent acknowledgement/assessment of the uncertainties associated with environmental simulation. With several books and hundreds of papers, Keith's work is widespread, well known, and highly referenced. Four of Keith's most notable contributions are the iconic TOPMODEL (Beven and Kirkby, 1979), classic papers on macropores and preferential flow (Beven and Germann, 1982, 2013), two editions of the rainfall-runoff modelling bible (Beven, 2000a, 2012), and the selection/commentary for the first volume from the Benchmark Papers in Hydrology series (Beven, 2006b). Remarkably, the thirty-one papers in his benchmark volume, entitled Streamflow Generation Processes, are not tales of modelling wizardry but describe measurements designed to better understand the dynamics of near-surface systems (quintessential Keith). The impetus for this commentary is Keith's PhD research (Beven, 1975), where he developed a new finite-element model and conducted concept-development simulations based upon the processes identified by, for example, Richards (1931), Horton (1933), Hubbert (1940), Hewlett and Hibbert (1963), and Dunne and Black (1970a,b). Readers not familiar with the different mechanisms of streamflow generation are referred to Dunne (1978).

  14. A physically based 3-D model of ice cliff evolution over debris-covered glaciers

    NARCIS (Netherlands)

    Buri, Pascal; Miles, Evan S.; Steiner, J.F.; Immerzeel, W.W.; Wagnon, Patrick; Pellicciotti, Francesca

    2016-01-01

    We use high-resolution digital elevation models (DEMs) from unmanned aerial vehicle (UAV) surveys to document the evolution of four ice cliffs on the debris-covered tongue of Lirung Glacier, Nepal, over one ablation season. Observations show that out of four cliffs, three different patterns of

  15. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    Science.gov (United States)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, given the short history of these technologies and the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time- and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face
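
Creep-fatigue life estimates for solder joints of the kind mentioned above are often built on a Coffin-Manson relation between the inelastic shear strain range per thermal cycle and cycles to failure. The sketch below uses the widely quoted Sn-Pb solder coefficients as illustrative defaults; they are not the values fitted in this research.

```python
def coffin_manson_cycles(delta_gamma, eps_f=0.325, c=-0.442):
    """Coffin-Manson low-cycle fatigue estimate for a solder joint:
    N_f = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c), where
    delta_gamma is the inelastic shear strain range per cycle,
    eps_f the fatigue ductility coefficient, c the ductility exponent.
    Defaults are commonly quoted Sn-Pb values (illustrative only)."""
    return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)
```

In an integrated process-reliability model, delta_gamma would come from a finite element thermal-cycling simulation that includes the process-induced residual stresses, rather than from a hand estimate.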

  16. A physically based 3-D model of ice cliff evolution over debris-covered glaciers

    Science.gov (United States)

    Buri, Pascal; Miles, Evan S.; Steiner, Jakob F.; Immerzeel, Walter W.; Wagnon, Patrick; Pellicciotti, Francesca

    2016-12-01

    We use high-resolution digital elevation models (DEMs) from unmanned aerial vehicle (UAV) surveys to document the evolution of four ice cliffs on the debris-covered tongue of Lirung Glacier, Nepal, over one ablation season. Observations show that, across the four cliffs, three different patterns of evolution emerge: (i) reclining cliffs that flatten during the ablation season; (ii) stable cliffs that maintain a self-similar geometry; and (iii) growing cliffs, expanding laterally. We use the insights from this unique data set to develop a 3-D model of cliff backwasting and evolution that is validated against observations and an independent data set of volume losses. The model includes ablation at the cliff surface driven by energy exchange with the atmosphere, reburial of cliff cells by surrounding debris, and the effect of adjacent ponds. The cliff geometry is updated monthly to account for the modifications induced by each of those processes. Model results indicate that a major factor affecting the survival of steep cliffs is the coupling with ponded water at their base, which prevents progressive flattening and possible disappearance of a cliff. The radial growth observed at one cliff is explained by higher receipts of longwave and shortwave radiation, calculated taking into account atmospheric fluxes, shading, and the emission of longwave radiation from debris surfaces. The model is a clear step forward compared to existing static approaches that calculate atmospheric melt over an invariant cliff geometry and can be used for long-term simulations of cliff evolution and to test existing hypotheses about cliffs' survival.
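
The atmosphere-driven ablation term in such models reduces, per cliff cell, to converting the net surface energy flux into a melt rate. The sketch below shows that conversion only; the flux variable names are generic placeholders, and real models like the one above add shading, view factors, and debris longwave emission to each term.

```python
def cliff_melt_rate(sw_net, lw_net, qh, qle):
    """Melt rate (m water equivalent per day) from the surface energy
    balance of an ice cliff cell. Inputs in W/m^2: net shortwave,
    net longwave, sensible heat, latent heat. Negative net energy
    produces no melt in this sketch (refreezing ignored)."""
    rho_w = 1000.0        # density of water, kg/m^3
    l_f = 3.34e5          # latent heat of fusion, J/kg
    q_net = sw_net + lw_net + qh + qle
    return max(0.0, q_net) / (rho_w * l_f) * 86400.0
```

Applying this per cell, per time step, and then regridding the lowered surface is what distinguishes a dynamic-geometry cliff model from the static approaches criticised in the abstract.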

  17. Prediction of power ramp defects - development of a physically based model and evaluation of existing criteria

    International Nuclear Information System (INIS)

    Notley, M.J.F.; Kohn, E.

    2001-01-01

    Power-ramp induced fuel failure is not a problem in present CANDU reactors. The current empirical correlations that define the probability of failure do not agree with one another and do not allow extrapolation outside the database. A new methodology, based on physical processes, is presented and compared to data. The methodology calculates the pre-ramp sheath stress and the incremental stress during the ramp, and predicts whether or not a defect forms based on a threshold failure stress. The proposed model confirms the deductions made by daSilva from an empirical 'fit' to data from the 1988 PNGS power ramp failure incident. It is recommended that daSilva's correlation be used as the reference for OPG (Ontario Power Generation) power reactor fuel, and that extrapolation be performed using the new model. (author)

  18. Solar Radiation Received by Slopes Using COMS Imagery, a Physically Based Radiation Model, and GLOBE

    Directory of Open Access Journals (Sweden)

    Jong-Min Yeom

    2016-01-01

    Full Text Available This study mapped the solar radiation received by slopes for all of Korea, including areas not covered by ground station measurements, using satellite and topographical data. When estimating insolation from satellite data, we used a physical model to derive hourly solar surface insolation. Furthermore, we considered the effects of topography using the Global Land One-Kilometer Base Elevation (GLOBE) digital elevation model (DEM) to obtain the actual amount of incident solar radiation according to solar geometry. The surface insolation mapping, performed by integrating a physical model with Communication, Ocean, and Meteorological Satellite (COMS) Meteorological Imager (MI) imagery, was evaluated through a comparative analysis with ground-based observation data (pyranometer). Original and topographically corrected solar radiation maps were created and their characteristics analyzed. Both the original and the topographically corrected solar energy resource maps captured the temporal variations in atmospheric conditions, such as the movement of seasonal rain fronts during summer. In contrast, although the original solar radiation map had low insolation values over mountain areas with a high rate of cloudiness, the topographically corrected solar radiation map provided a better description of the actual surface geometric characteristics.
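The core of the topographic correction described above is the standard solar-geometry relation: direct-beam irradiance on a tilted surface scales with the cosine of the local incidence angle. A minimal sketch (angles in degrees; generic formula, not COMS-specific code):

```python
import math

def incidence_cosine(zenith, sun_azimuth, slope, aspect):
    """cos of the angle between the sun and the slope normal."""
    z, a, s, asp = (math.radians(x) for x in (zenith, sun_azimuth, slope, aspect))
    return math.cos(z) * math.cos(s) + math.sin(z) * math.sin(s) * math.cos(a - asp)

def direct_beam_on_slope(horizontal_beam, zenith, sun_azimuth, slope, aspect):
    """Scale horizontal direct-beam irradiance onto a tilted surface."""
    cos_i = incidence_cosine(zenith, sun_azimuth, slope, aspect)
    cos_z = math.cos(math.radians(zenith))
    if cos_i <= 0 or cos_z <= 0:
        return 0.0  # slope faces away from the sun, or sun below horizon
    return horizontal_beam * cos_i / cos_z
```

A flat pixel (slope 0) is returned unchanged, while a slope tilted toward the sun receives more than the horizontal value, which is the behaviour the corrected map captures over mountain terrain.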

  19. A Physically-based Model For Rainfall-triggered Landslides At A Regional Scale

    Science.gov (United States)

    Teles, V.; Capolongo, D.; Bras, R. L.

    Rainfall has long been recognized as a major cause of landslides. Historical records show that large rainfall events can generate hundreds of landslides over hundreds of square kilometers. Although a great body of work has documented the morphology and mechanics of individual slope failures, few studies have considered the process at basin and regional scales. A landslide model is integrated in the landscape evolution model CHILD and simulates rainfall-triggered events based on a geotechnical index, the factor of safety, which takes into account the slope, the soil effective cohesion and weight, the friction angle, the regolith thickness and the saturated thickness. The saturated thickness is represented by the wetness index developed in TOPMODEL. The topography is represented by a Triangulated Irregular Network (TIN). The factor of safety is computed at each node of the TIN. If the factor of safety is lower than 1, a landslide is initiated at this node. The regolith is then moved downstream. We applied the model to the Fortore basin, whose valley cuts the flysch terrain that constitutes the framework of the so-called "sub-Apennine" chain, the easternmost part of the Southern Apennines (Italy). We will assess the model's value through its sensitivity to the parameters used and compare its results with the data available for this basin.
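The infinite-slope factor of safety named above can be sketched with the saturated fraction of the regolith supplied by a TOPMODEL-style wetness index. Parameter names and the default unit weights are illustrative assumptions, not the values used in CHILD:

```python
import math

def factor_of_safety(slope_deg, cohesion, friction_angle_deg,
                     regolith_depth, wetness,
                     unit_weight_soil=18e3, unit_weight_water=9.81e3):
    """Infinite-slope factor of safety at one TIN node.

    wetness: saturated fraction of the regolith column (0..1),
    e.g. derived from a TOPMODEL wetness index. SI units (N/m^3, m, Pa).
    """
    beta = math.radians(slope_deg)
    phi = math.radians(friction_angle_deg)
    sat_depth = wetness * regolith_depth
    # resisting stress: cohesion plus frictional strength of the
    # effective (buoyancy-reduced) normal load
    resisting = cohesion + (math.cos(beta) ** 2) * (
        unit_weight_soil * regolith_depth
        - unit_weight_water * sat_depth) * math.tan(phi)
    # driving stress from the downslope component of the soil weight
    driving = unit_weight_soil * regolith_depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# A landslide is initiated at a node when factor_of_safety(...) < 1.
```

A dry, gentle slope stays stable while a fully saturated steep slope fails, matching the triggering rule in the abstract.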

  20. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    Science.gov (United States)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  1. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    Science.gov (United States)

    Lepore, C.; Arnone, E.; Noto, L. V.; Sivandran, G.; Bras, R. L.

    2013-09-01

    This paper presents the development of a rainfall-triggered landslide module within an existing physically based, spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at the temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The validity of the tRIBS-VEGGIE model in a tropical environment is shown with an evaluation of its performance against direct observations made within the study area of Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.
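One common way to extend the factor of safety to unsaturated conditions, as discussed above, is to let matric suction contribute an apparent cohesion (c_app = suction × effective saturation × tan φ). The sketch below uses that form with illustrative symbols and defaults; the exact formulation implemented in tRIBS-VEGGIE may differ:

```python
import math

def fs_unsaturated(slope_deg, cohesion, friction_angle_deg,
                   soil_depth, matric_suction, effective_saturation,
                   unit_weight_soil=18e3):
    """Infinite-slope FS with a matric-suction apparent-cohesion term.

    matric_suction in Pa, effective_saturation in 0..1. As the soil
    wets up, suction vanishes and FS drops toward the saturated value.
    """
    beta = math.radians(slope_deg)
    phi = math.radians(friction_angle_deg)
    # apparent cohesion contributed by matric suction (one common form)
    c_app = matric_suction * effective_saturation * math.tan(phi)
    resisting = (cohesion + c_app
                 + (math.cos(beta) ** 2) * unit_weight_soil * soil_depth
                 * math.tan(phi))
    driving = unit_weight_soil * soil_depth * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

The key behaviour is that nonzero suction raises FS relative to the zero-suction case, which is why neglecting unsaturated conditions over-predicts failures.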

  2. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    Directory of Open Access Journals (Sweden)

    C. Lepore

    2013-09-01

    Full Text Available This paper presents the development of a rainfall-triggered landslide module within an existing physically based spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at a temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The validity of the tRIBS-VEGGIE model to a tropical environment is shown with an evaluation of its performance against direct observations made within the study area of Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards' equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.

  3. Flash Floods Simulation using a Physical-Based Hydrological Model at Different Hydroclimatic Regions

    Science.gov (United States)

    Saber, Mohamed; Kamil Yilmaz, Koray

    2016-04-01

    Flash floods are increasing in frequency and severity in many regions of the world. This study therefore focuses on two case studies: Wadi Abu Subeira, Egypt, as an arid environment, and the Karpuz basin, Turkey, as a Mediterranean environment. The main objective of this work is to simulate flash floods in both catchments, considering the hydrometeorological differences between them, which in turn affect their flash flood behaviors. An integrated methodology incorporating the Hydrological River Basin Environmental Assessment Model (Hydro-BEAM) and remote sensing observations was devised. Global Satellite Mapping of Precipitation (GSMaP) estimates were compared with the rain gauge networks at the target basins to estimate the bias, in an effort to use them effectively in the simulation of flash floods. Based on the preliminary results of flash flood simulations in both basins, we found that the runoff behaviors of flash floods differ due to the impacts of climatological, hydrological and topographical conditions. The simulated surface runoff hydrographs also coincide reasonably well with the observed ones. Consequently, mitigation strategies relying on this study could be introduced to help reduce flash flood disasters in different climatic regions. This comparison of basins in different climates also provides insight into the potential impact of climate change on flash flood frequency and occurrence.
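The gauge-versus-satellite bias estimation mentioned above is often done with a simple multiplicative factor over paired totals, then applied to the satellite series. This is a purely illustrative sketch; the study's actual bias scheme is not specified in the abstract:

```python
def bias_factor(gauge_totals, satellite_totals):
    """Multiplicative bias: summed gauge rainfall over summed satellite rainfall."""
    g, s = sum(gauge_totals), sum(satellite_totals)
    if s == 0:
        raise ValueError("satellite totals sum to zero; bias undefined")
    return g / s

def correct(satellite_series, factor):
    """Apply the bias factor to a satellite rainfall time series."""
    return [v * factor for v in satellite_series]
```

For instance, if gauges recorded twice the satellite totals over the calibration period, every satellite value is doubled before driving the runoff model.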

  4. Reconciling the understanding of 'hydrophobicity' with physics-based models of proteins.

    Science.gov (United States)

    Harris, Robert C; Pettitt, B Montgomery

    2016-03-02

    The idea that a 'hydrophobic energy' drives protein folding, aggregation, and binding by favoring the sequestration of bulky residues from water into the protein interior is widespread. The solvation free energies (ΔGsolv) of small nonpolar solutes increase with surface area (A), and the free energies of creating macroscopic cavities in water increase linearly with A. These observations seem to imply that there is a hydrophobic component (ΔGhyd) of ΔGsolv that increases linearly with A, and this assumption is widely used in implicit solvent models. However, some explicit-solvent molecular dynamics studies appear to contradict these ideas. For example, one definition (ΔG(LJ)) of ΔGhyd is that it is the free energy of turning on the Lennard-Jones (LJ) interactions between the solute and solvent. However, ΔG(LJ) decreases with A for alanine and glycine peptides. Here we argue that these apparent contradictions can be reconciled by defining ΔGhyd to be a near hard core insertion energy (ΔGrep), as in the partitioning proposed by Weeks, Chandler, and Andersen. However, recent results have shown that ΔGrep is not a simple function of geometric properties of the molecule, such as A and the molecular volume, and that the free energy of turning on the attractive part of the LJ potential cannot be computed from first-order perturbation theory for proteins. The theories that have been developed from these assumptions to predict ΔGhyd are therefore inadequate for proteins.

  5. Physically-based modeling of the cyclic macroscopic behaviour of metals

    International Nuclear Information System (INIS)

    Sauzay, M.; Evrard, P.; Steckmeyer, A.; Ferrie, E.

    2010-01-01

    Grain size seems to have only a minor influence on the cyclic stress-strain curves (CSSCs) of metallic polycrystals of medium to high stacking fault energy (SFE). That is why many authors have tried to deduce the macroscopic CSSCs from the single-crystal ones. Either crystals oriented for single slip or crystals oriented for multiple slip could be considered. In addition, a scale transition law should be used (from the grain scale to the macroscopic scale). Authors generally used either the Sachs rule (homogeneous single slip) or the Taylor rule (homogeneous plastic strain, multiple slip). But the predicted macroscopic CSSCs do not generally agree with the experimental data for metals and alloys presenting various SFE values. In order to avoid the choice of a particular scale transition rule, many finite element (FE) computations have been carried out using meshes of polycrystals including more than one hundred grains without texture. This allows the study of the influence of the crystalline constitutive laws on the macroscopic CSSCs. Activation of a secondary slip system in grains oriented for single slip is either allowed or hindered (slip planarity), which strongly affects the macroscopic CSSCs. The more planar the slip, the higher the predicted macroscopic stress amplitudes. If grains oriented for single slip obey slip planarity and two crystalline CSSCs are used (one for single-slip grains and one for multiple-slip grains), then the predicted macroscopic CSSCs agree well with experimental data provided the SFE is not too low (316L, copper, nickel, aluminium). Finally, the incremental self-consistent Hill-Hutchinson homogenization model is used for predicting CSS curves and is partially validated with respect to the curves computed by the FE method. (authors)

  6. A neural network construction method for surrogate modeling of physics-based analysis

    Science.gov (United States)

    Sung, Woong Je

    In this thesis existing methodologies related to the developmental methods of neural networks have been surveyed and their approaches to network sizing and structuring carefully examined. This literature review covers the constructive methods, the pruning methods, and the evolutionary methods, and questions the basic assumption intrinsic to the conventional neural network learning paradigm, which is primarily devoted to optimization of connection weights (or synaptic strengths) for a pre-determined connection structure of the network. The main research hypothesis governing this thesis is that, without breaking the prevailing dichotomy between weights and connectivity of the network during the learning phase, the efficient design of a task-specific neural network is hard to achieve because, as long as connectivity and weights are searched by separate means, a structural optimization of the neural network requires either repetitive re-training procedures or computationally expensive topological meta-search cycles. The main contribution of this thesis is designing and testing a novel learning mechanism which efficiently learns not only weight parameters but also connection structure from a given training data set, and positioning this learning mechanism within the surrogate modeling practice. In this work, a simple and straightforward extension to the conventional error Back-Propagation (BP) algorithm has been formulated to enable simultaneous learning of both connectivity and weights of the Generalized Multilayer Perceptron (GMLP) in supervised learning tasks. A particular objective is to achieve a task-specific network having reasonable generalization performance with minimal training time. The dichotomy between architectural design and weight optimization is reconciled by a mechanism establishing a new connection for a neuron pair which has a potentially higher error gradient than one of the existing connections. 
Interpreting an instance of the absence of
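The growth rule described above can be illustrated in a deliberately tiny setting: for a single linear output unit, the error gradient of a candidate (absent) connection at weight zero is estimated from the training data, and the candidate is added when its gradient magnitude beats that of an existing connection. All names and the linear-unit simplification are assumptions for illustration; the thesis applies the idea to a full GMLP:

```python
def mean_gradient(inputs, targets, weights, candidate_idx):
    """Average dE/dw of one input connection for squared error
    E = 0.5*(t - y)^2 with a linear output y = sum(w_i * x_i)."""
    grad = 0.0
    for x, t in zip(inputs, targets):
        y = sum(w * x[i] for i, w in weights.items())
        grad += -(t - y) * x[candidate_idx]
    return grad / len(inputs)

def maybe_grow(inputs, targets, weights, candidates):
    """Add the candidate connection whose gradient magnitude exceeds
    the weakest gradient among the existing connections."""
    existing = {i: abs(mean_gradient(inputs, targets, weights, i))
                for i in weights}
    best = max(candidates,
               key=lambda i: abs(mean_gradient(inputs, targets, weights, i)))
    best_grad = abs(mean_gradient(inputs, targets, weights, best))
    if not existing or best_grad > min(existing.values()):
        weights[best] = 0.0  # new connection starts at zero weight
    return weights
```

Subsequent ordinary gradient descent then trains the newly added weight together with the old ones, so structure and weights are searched by a single mechanism.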

  7. A Physically Based Analytical Model to Describe Effective Excess Charge for Streaming Potential Generation in Water Saturated Porous Media

    Science.gov (United States)

    Guarracino, L.; Jougnot, D.

    2018-01-01

    Among the different contributions generating self-potential, the streaming potential is of particular interest in hydrogeology for its sensitivity to water flow. Estimating water flux in porous media using streaming potential data relies on our capacity to understand, model, and upscale the electrokinetic coupling at the mineral-solution interface. Different approaches have been proposed to predict streaming potential generation in porous media. One of these approaches is flux averaging, which is based on determining the excess charge that is effectively dragged in the medium by the water flow. In this study, we develop a physically based analytical model to predict the effective excess charge in saturated porous media using a flux-averaging approach in a bundle of capillary tubes with a fractal pore size distribution. The proposed model allows the determination of the effective excess charge as a function of pore water ionic concentration and hydrogeological parameters like porosity, permeability, and tortuosity. The new model has been successfully tested against different sets of experimental data from the literature. One of the main findings of this study is a mechanistic explanation for the empirical dependence between the effective excess charge and the permeability that has been reported by several researchers. The proposed model also highlights the link to other lithological properties, and it is able to reproduce the evolution of the effective excess charge with electrolyte concentration.
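The flux-averaging idea behind the model can be illustrated numerically for a single capillary: the excess charge density, concentrated within a Debye length of the wall, is weighted by the Poiseuille velocity profile, so charge sitting near the slow-moving wall contributes little. The exponential charge profile and all parameter values below are illustrative assumptions, not the paper's analytical solution:

```python
import math

def effective_excess_charge(radius, debye_length, wall_charge_density,
                            n_steps=2000):
    """Flux-averaged excess charge in one capillary tube.

    Integrates rho(r)*v(r) over the cross-section and divides by the
    total flux, with v(r) a normalized Poiseuille profile and rho(r)
    decaying exponentially away from the wall over one Debye length.
    """
    dr = radius / n_steps
    num = den = 0.0
    for k in range(n_steps):
        r = (k + 0.5) * dr
        v = 1.0 - (r / radius) ** 2                      # Poiseuille profile
        rho = wall_charge_density * math.exp(-(radius - r) / debye_length)
        area = 2.0 * math.pi * r * dr
        num += rho * v * area
        den += v * area
    return num / den
```

A thinner double layer (smaller Debye length relative to the tube radius) yields a smaller effective excess charge, which is the qualitative origin of the excess-charge-permeability dependence the abstract explains mechanistically.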

  8. Design of Soil Salinity Policies with Tinamit, a Flexible and Rapid Tool to Couple Stakeholder-Built System Dynamics Models with Physically-Based Models

    Science.gov (United States)

    Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with

  9. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick

    2016-01-26

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that applies stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue load spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is used in this paper only to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme-load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations, in addition to extreme loads, can be brought into a systems engineering optimization.

  10. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    Full Text Available This article presents an efficient method to capture an abstract performance model of streaming data real-time embedded systems (RTESs). Unified Modeling Language version 2 (UML2) is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN) model and is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and the platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the profile for Modeling and Analysis of Real-Time and Embedded (MARTE) systems from OMG and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates of task response times and processing element, memory, and on-chip network utilizations, among other information that is used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines a transformation between UML activity diagrams and streaming data application workload meta-models and successfully adopts it for RTES performance evaluation.

  11. Dynamic restoration mechanism and physically based constitutive model of 2050 Al–Li alloy during hot compression

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Ruihua; Liu, Qing [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Li, Jinfeng, E-mail: lijinfeng@csu.edu.cn [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Xiang, Sheng [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Chen, Yonglai; Zhang, Xuhu [Aerospace Research Institute of Materials and Processing Technology, Beijing 100076 (China)

    2015-11-25

    The dynamic restoration mechanism of 2050 Al–Li alloy and its constitutive model were investigated by means of hot compression simulation at deformation temperatures ranging from 340 to 500 °C and strain rates of 0.001–10 s{sup −1}. The microstructures of the compressed samples were observed using optical microscopy and transmission electron microscopy. On the basis of dislocation density theory and Avrami kinetics, a physically based constitutive model was established. The results show that dynamic recovery (DRV) and dynamic recrystallization (DRX) are co-responsible for the dynamic restoration during the hot compression process under all compression conditions. The dynamic precipitation (DPN) of T1 and σ phases was observed after deformation at 340 °C. This is the first experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys. Particle-stimulated nucleation of DRX (PSN-DRX) due to large Al–Cu–Mn particles was also observed. The error analysis suggests that the established constitutive model can adequately describe the flow stress dependence on strain rate, temperature and strain during the hot deformation process. - Highlights: • The experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys was found. • PSN-DRX due to large Al–Cu–Mn particles was observed. • A novel method was proposed to calculate the stress multiplier α.
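The two ingredients named above can be sketched compactly: a Kocks-Mecking-type dislocation-density evolution supplies the dynamic-recovery (DRV) flow stress via the Taylor relation, and an Avrami equation softens it once dynamic recrystallization (DRX) begins beyond a critical strain. All coefficients below are illustrative, not the fitted values for the 2050 Al–Li alloy:

```python
import math

def flow_stress(strain, k1=1e8, k2=5.0, rho0=1e10,
                taylor_prefactor=1e-5, drx_softening=0.3,
                eps_c=0.2, avrami_k=2.0, avrami_n=2.0):
    """Flow stress at a given strain from DRV hardening plus DRX softening."""
    # integrate d(rho)/d(eps) = k1*sqrt(rho) - k2*rho  (storage - recovery)
    rho, deps, eps = rho0, 1e-4, 0.0
    while eps < strain:
        rho += (k1 * math.sqrt(rho) - k2 * rho) * deps
        eps += deps
    sigma_drv = taylor_prefactor * math.sqrt(rho)   # Taylor relation (lumped alpha*M*G*b)
    if strain <= eps_c:
        return sigma_drv                            # below the DRX critical strain
    # Avrami recrystallized fraction softens the flow stress
    x_drx = 1.0 - math.exp(-avrami_k * (strain - eps_c) ** avrami_n)
    return sigma_drv * (1.0 - drx_softening * x_drx)
```

Setting `avrami_k=0` disables DRX and recovers the pure DRV curve, which makes the softening contribution of recrystallization explicit.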

  12. Investigating ice cliff evolution and contribution to glacier mass-balance using a physically-based dynamic model

    Science.gov (United States)

    Buri, Pascal; Miles, Evan; Ragettli, Silvan; Brun, Fanny; Steiner, Jakob; Pellicciotti, Francesca

    2016-04-01

    Supraglacial cliffs are a surface feature typical of debris-covered glaciers, affecting surface evolution, glacier downwasting and mass balance by providing a direct ice-atmosphere interface. As a result, melt rates can be very high and ice cliffs may account for a significant portion of the total glacier mass loss. However, their contribution to glacier mass balance has rarely been quantified through physically based models. Most cliff energy balance models are point-scale models which calculate energy fluxes at individual cliff locations. Results from the only grid-based model to date accurately reflect energy fluxes and cliff melt, but modelled backwasting patterns are in some cases unrealistic, as the distribution of melt rates would lead to progressive shallowing and disappearance of cliffs. Based on a unique multitemporal dataset of cliff topography and backwasting obtained from high-resolution terrestrial and aerial Structure-from-Motion analysis on Lirung Glacier in Nepal, it is apparent that cliffs exhibit a range of behaviours but most do not rapidly disappear. The patterns of evolution cannot be explained satisfactorily by atmospheric melt alone, and are moderated by the presence of supraglacial ponds at the base of cliffs and by cliff reburial with debris. Here, we document the distinct patterns of evolution, including disappearance, growth and stability. We then use these observations to improve the grid-based energy balance model, implementing periodic updates of the cliff geometry resulting from modelled melt perpendicular to the ice surface. Based on a slope threshold, pixels can be reburied by debris or become debris-free. The effect of ponds is taken into account through enhanced melt rates in the horizontal direction on pixels selected by an algorithm that considers distance to the water surface, slope and lake level. We use the dynamic model to first study the evolution of selected cliffs for which accurate, high-resolution DEMs are available

  13. Where and why hyporheic exchange is important: Inferences from a parsimonious, physically-based river network model

    Science.gov (United States)

    Gomez-Velez, J. D.; Harvey, J. W.

    2014-12-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.

  14. Coupling physically based and data-driven models for assessing freshwater inflow into the Small Aral Sea

    Science.gov (United States)

    Ayzel, Georgy; Izhitskiy, Alexander

    2018-06-01

    The Aral Sea desiccation and related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern Aral Sea basin's hydrological regime is their discontinuous nature - only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea based on coupling a stack of hydrological and data-driven models. Results show good prediction skill and confirm the possibility of developing a valuable water assessment tool which utilizes the power of both classical physically based and modern machine learning models for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on a GitHub page (https://github.com/SMASHIproject/IWRM2018).
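The coupling pattern described above, a physically based simulation post-processed by a data-driven correction, can be sketched with toy stand-ins: a minimal bucket model predicts runoff from precipitation, and a least-squares linear map fitted to the residuals plays the role of the machine-learning component. Both models and all parameter values are illustrative assumptions, not the study's actual hydrological or ML models:

```python
def bucket_runoff(precip, capacity=50.0, frac=0.5):
    """Toy conceptual model: a storage bucket spills runoff above capacity."""
    storage, runoff = 0.0, []
    for p in precip:
        storage += p
        q = frac * max(storage - capacity, 0.0)
        storage -= q
        runoff.append(q)
    return runoff

def fit_residual_correction(simulated, observed):
    """Least-squares slope and intercept mapping simulated -> observed."""
    n = len(simulated)
    mx = sum(simulated) / n
    my = sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in simulated)
    sxy = sum((x - mx) * (y - my) for x, y in zip(simulated, observed))
    slope = sxy / sxx if sxx else 1.0
    return slope, my - slope * mx

def corrected(simulated, slope, intercept):
    """Apply the data-driven correction to the physical-model output."""
    return [slope * q + intercept for q in simulated]
```

In the real system the correction stage would be a trained ML model, but the workflow is the same: run the physical model, then learn and apply a mapping from its output to observations.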

  15. Enhancement of Physics-of-Failure Prognostic Models with System Level Features

    National Research Council Canada - National Science Library

    Kacprzynski, Gregory

    2002-01-01

    .... The novelty in the current prognostic tool development is that predictions are made through the fusion of stochastic physics-of-failure models, relevant system or component level health monitoring...

  16. Models of governance in multihospital systems. Implications for hospital and system-level decision-making.

    Science.gov (United States)

    Morlock, L L; Alexander, J A

    1986-12-01

    This study utilizes data from a national survey of 159 multihospital systems in order to describe the types of governance structures currently being utilized, and to compare the policy making process for various types of decisions in systems with different approaches to governance. Survey results indicate that multihospital systems most often use one of three governance models. Forty-one percent of the systems (including 33% of system hospitals) use a parent holding company model in which there is a system-wide corporate governing board and separate governing boards for each member hospital. Twenty-two percent of systems in the sample (but 47% of all system hospitals) utilize what we have termed a modified parent holding company model in which there is one system-wide governing board, but advisory boards are substituted for governing boards at the local hospital level. Twenty-three percent of the sampled systems (including 11% of system hospitals) use a corporate model in which there is one system-wide governing board but no other governing or advisory boards at either the divisional, regional or local hospital levels. A comparison of systems using these three governance approaches found significant variation in terms of system size, ownership and the geographic proximity of member hospitals. In order to examine the relationship between alternative approaches to governance and patterns of decision-making, the three model types were compared with respect to the percentages of systems reporting that local boards, corporate management and/or system-wide corporate boards have responsibility for decision-making in a number of specific issue areas. Study results indicate that, regardless of model type, corporate boards are most likely to have responsibility for decisions regarding the transfer, pledging and sale of assets; the formation of new companies; purchase of assets greater than $100,000; changes in hospital bylaws; and the appointment of local board members. 
In…

  17. COSMOS: A System-Level Modelling and Simulation Framework for Coprocessor-Coupled Reconfigurable Systems

    DEFF Research Database (Denmark)

    Wu, Kehuai; Madsen, Jan

    2007-01-01

    and resource management, and iii) present a SystemC based framework to model and simulate coprocessor-coupled reconfigurable systems. We illustrate how COSMOS may be used to capture the dynamic behavior of such systems and emphasize the need for capturing the system aspects of such systems in order to deal...

  18. Underwater wireless optical communications: From system-level demonstrations to channel modelling

    KAUST Repository

    Oubei, Hassan M.

    2018-01-09

In this paper, we discuss recent experimental advances in underwater wireless optical communications (UWOC) over various underwater channel water types using different modulation schemes, as well as the modelling and characterization of the statistical properties of turbulence-induced fading in underwater wireless optical channels using laser beam intensity fluctuation measurements.

  19. Application of the PMI Model at the System Level: Evaluation of a Systemwide Program Implementation.

    Science.gov (United States)

    Cobb, Herman, Jr.

    A practical application of the Planning, Monitoring, and Implementation Model (PMI) is illustrated in the evaluation of the District of Columbia Public Schools' Student Progress Plan. The plan adheres to the principle that the student be encouraged to move along an instructional continuum at his or her individual rate. The Division of Research and…

  20. Multi-scale Drivers of Variations in Atmospheric Evaporative Demand Based on Observations and Physically-based Modeling

    Science.gov (United States)

    Peng, L.; Sheffield, J.; Li, D.

    2015-12-01

    Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.
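The spectrum-analysis step described above can be illustrated with a toy periodogram; the series below is synthetic (an annual cycle plus weaker roughly 5-year variability and noise), not the study's station records:

```python
import numpy as np

# Hypothetical 46-year monthly AED-like series (synthetic, illustrative only)
rng = np.random.default_rng(0)
months = np.arange(46 * 12)
series = (2.0 * np.sin(2 * np.pi * months / 12)      # annual cycle
          + 0.5 * np.sin(2 * np.pi * months / 60)    # ~5-year variability
          + 0.2 * rng.standard_normal(months.size))  # noise

# Periodogram: spectral power at each frequency (cycles per month)
anom = series - series.mean()
power = np.abs(np.fft.rfft(anom)) ** 2
freqs = np.fft.rfftfreq(months.size, d=1.0)          # d = 1 month

# Skip the zero frequency, then report the dominant timescale
dominant_period = 1.0 / freqs[np.argmax(power[1:]) + 1]
print(f"dominant period: {dominant_period:.1f} months")
```

The same decomposition, applied band by band, is what lets the drivers at monthly, seasonal, and inter-annual scales be separated.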

  1. Modeling Subsurface Behavior at the System Level: Considerations and a Path Forward

    Science.gov (United States)

    Geesey, G.

    2005-12-01

The subsurface is an obscure but essential resource to life on Earth. It is an important region for carbon production and sequestration, a source and reservoir for energy, minerals and metals, and potable water. There is a growing need to better understand the subsurface processes that control the exploitation and security of these resources. Our best models often fail to predict these processes at the field scale because of limited understanding of 1) the processes and the controlling parameters, 2) how processes are coupled at the field scale, 3) geological heterogeneities that control hydrological, geochemical and microbiological processes at the field scale, and 4) lack of data sets to calibrate and validate numerical models. There is a need for experimental data obtained at scales larger than the laboratory bench that take into account the influence of hydrodynamics, geochemical reactions including complexation and chelation/adsorption/precipitation/ion exchange/oxidation-reduction/colloid formation and dissolution, and reactions of microbial origin. Furthermore, the coupling of each of these processes and reactions needs to be evaluated experimentally at a scale that produces data that can be used to calibrate numerical models so that they accurately describe field-scale system behavior. Establishing the relevant experimental scale for collection of data from coupled processes remains a challenge; it will likely be process-dependent and involve iterations of experimentation and data collection at different intermediate scales until the models calibrated with the appropriate data sets achieve an acceptable level of performance. 
Assuming that the geophysicists will soon develop technologies to define geological heterogeneities over a wide range of scales in the subsurface, geochemists need to continue to develop techniques to remotely measure abiotic reactions, while geomicrobiologists need to continue their development of complementary technologies

  2. A Mathematical Model of Metabolism and Regulation Provides a Systems-Level View of How Escherichia coli Responds to Oxygen

    Directory of Open Access Journals (Sweden)

    Michael eEderer

    2014-03-01

The efficient redesign of bacteria for biotechnological purposes, such as biofuel production, waste disposal or specific biocatalytic functions, requires a quantitative systems-level understanding of energy supply, carbon and redox metabolism. The measurement of transcript levels, metabolite concentrations and metabolic fluxes per se gives an incomplete picture. An appreciation of the interdependencies between the different measurement values is essential for systems-level understanding. Mathematical modeling has the potential to provide a coherent and quantitative description of the interplay between gene expression, metabolite concentrations and metabolic fluxes. Escherichia coli undergoes major adaptations in central metabolism when the availability of oxygen changes. Thus, an integrated description of the oxygen response provides a benchmark of our understanding of carbon, energy and redox metabolism. We present the first comprehensive model of the central metabolism of E. coli that describes steady-state metabolism at different levels of oxygen availability. Variables of the model are metabolite concentrations, gene expression levels, transcription factor activities, metabolic fluxes and biomass concentration. We analyze the model with respect to the production capabilities of central metabolism of E. coli. In particular, we predict how precursor and biomass concentration are affected by product formation.

  3. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    Science.gov (United States)

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing HYDRUS-1D software. Such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by the Nash-Sutcliffe Efficiency index, which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be jointly used to extend the applicability of the simple conceptual model to a wider set of conditions than the available experimental data and to support green roof design.
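The Nash-Sutcliffe Efficiency index used to score both models is simple to compute; a minimal sketch with hypothetical hydrograph values (not the study's data):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect match; > 0.70 is the threshold cited in the study."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical runoff hydrographs (mm/h), illustrative only
obs = [0.0, 0.4, 1.6, 2.9, 2.1, 1.0, 0.3]
sim = [0.0, 0.5, 1.4, 2.7, 2.3, 0.9, 0.2]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```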

  4. Using Agent-Based Modeling to Enhance System-Level Real-time Control of Urban Stormwater Systems

    Science.gov (United States)

    Rimer, S.; Mullapudi, A. M.; Kerkez, B.

    2017-12-01

The ability to reduce combined-sewer overflow (CSO) events is an issue that challenges over 800 U.S. municipalities. When the volume of a combined sewer system or wastewater treatment plant is exceeded, untreated wastewater overflows (a CSO event) into nearby streams, rivers, or other water bodies, causing localized urban flooding and pollution. The likelihood and impact of CSO events have only been exacerbated by urbanization, population growth, climate change, aging infrastructure, and system complexity. Thus, there is an urgent need for urban areas to manage CSO events. Traditionally, mitigating CSO events has been carried out via time-intensive and expensive structural interventions such as retention basins or sewer separation, which are able to reduce CSO events but are costly, arduous, and only provide a fixed solution to a dynamic problem. Real-time control (RTC) of urban drainage systems using sensor and actuator networks has served as an inexpensive and versatile alternative to traditional CSO intervention. In particular, retrofitting individual stormwater elements for sensing and automated active distributed control has been shown to significantly reduce the volume of discharge during CSO events, with some RTC models demonstrating a reduction upwards of 90% when compared to traditional passive systems. As more stormwater elements become retrofitted for RTC, system-level RTC across complete watersheds is an attainable possibility. However, when considering the diverse set of control needs of each of these individual stormwater elements, such system-level RTC becomes a far more complex problem. To address such diverse control needs, agent-based modeling is employed such that each individual stormwater element is treated as an autonomous agent with diverse decision-making capabilities. We present preliminary results and limitations of utilizing the agent-based modeling computational framework for the system-level control of diverse, interacting
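The agent-based framing described above can be sketched in a few lines; everything here (agent class, release rule, thresholds) is an illustrative assumption, not the study's control logic:

```python
# Minimal sketch of treating each stormwater storage element as an
# autonomous agent. Names, rules and numbers are illustrative assumptions.

class BasinAgent:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # storage capacity (m^3)
        self.volume = 0.0

    def decide_release(self, downstream_headroom):
        """Hypothetical local rule: try to drain half the stored volume,
        but never exceed what the downstream element can accept."""
        return min(0.5 * self.volume, max(0.0, downstream_headroom))

    def step(self, inflow, downstream_headroom):
        release = self.decide_release(downstream_headroom)
        self.volume += inflow - release
        overflow = max(0.0, self.volume - self.capacity)   # a CSO event
        self.volume -= overflow
        return release, overflow

upstream = BasinAgent("upstream", capacity=100.0)
downstream = BasinAgent("downstream", capacity=80.0)
conduit_limit = 10.0   # m^3 per step the interceptor can accept

total_cso = 0.0
for rain in [30.0, 40.0, 20.0, 0.0, 0.0]:
    # each agent consults the headroom of the element below it
    up_release, up_cso = upstream.step(rain, downstream.capacity - downstream.volume)
    down_release, down_cso = downstream.step(up_release, conduit_limit)
    total_cso += up_cso + down_cso
print(f"total CSO volume: {total_cso:.1f} m^3")
```

Because each agent only needs local state plus a downstream signal, the same pattern scales from two basins to a full watershed of heterogeneous elements.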

  5. Tritium transport modeling at system level for the EUROfusion dual coolant lithium-lead breeding blanket

    Science.gov (United States)

    Urgorri, F. R.; Moreno, C.; Carella, E.; Rapisarda, D.; Fernández-Berceruelo, I.; Palermo, I.; Ibarra, A.

    2017-11-01

The dual coolant lithium lead (DCLL) breeding blanket is one of the four breeder blanket concepts under consideration within the framework of EUROfusion consortium activities. The aim of this work is to develop a model that can dynamically track tritium concentrations and fluxes along each part of the DCLL blanket and the ancillary systems associated with it at any time. Because of the nature of tritium, the phenomena of diffusion, dissociation, recombination and solubilisation have been modeled in order to describe the interaction between the lead-lithium channels, the structural material, the flow channel inserts and the helium channels that are present in the breeding blanket. Results have been obtained for a pulsed generation scenario for DEMO. The tritium inventory in different parts of the blanket, the permeation rates from the breeder to the secondary coolant and the amount of tritium extracted from the lead-lithium loop have been computed. Results present an oscillating behavior around mean values. The obtained average permeation rate from the liquid metal to the helium is 1.66 mg h-1, while the mean tritium inventory in the whole system is 417 mg. Besides the reference case results, parametric studies of the lead-lithium mass flow rate, the tritium extraction efficiency and the tritium solubility in lead-lithium have been performed, showing the reaction of the system to the variation of these parameters.
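The oscillating inventory behavior described above can be reproduced qualitatively with a toy first-order balance; the generation rate, rate constants and pulse timing below are assumptions for illustration, not the paper's DCLL parameters:

```python
import numpy as np

# Toy single-compartment tritium balance (illustrative only): tritium
# generated in the lead-lithium during plasma pulses is removed by the
# extraction system and, much more slowly, by permeation to the helium.
dt = 10.0                     # time step (s)
t_end = 8 * 3600.0            # simulate 8 hours
gen_rate = 1.0e-6             # generation during a pulse (g/s), assumed
k_extract = 2.0e-4            # extraction rate constant (1/s), assumed
k_perm = 1.0e-6               # permeation rate constant (1/s), assumed

pulse_period, pulse_on = 2 * 3600.0, 1.5 * 3600.0   # 2 h cycle, 1.5 h burn

inventory = 0.0               # tritium in the lead-lithium loop (g)
history = []
for t in np.arange(0.0, t_end, dt):
    source = gen_rate if (t % pulse_period) < pulse_on else 0.0
    inventory += dt * (source - (k_extract + k_perm) * inventory)
    history.append(inventory)

print(f"final inventory: {history[-1] * 1e3:.2f} mg")
```

The inventory rises during each burn toward the balance point gen_rate/(k_extract + k_perm) and decays between pulses, giving the oscillation around a mean value that the paper reports for the full multi-compartment model.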

  6. 2008 GEM Modeling Challenge: Metrics Study of the Dst Index in Physics-Based Magnetosphere and Ring Current Models and in Statistical and Analytic Specifications

    Science.gov (United States)

    Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.; hide

    2011-01-01

In this paper the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical, climatological, or physics-based (e.g. MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
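The hourly comparison mode can be sketched as follows; the values are synthetic, not Challenge data, and the prediction-efficiency metric is one common skill score, not necessarily the Challenge's exact formulation:

```python
import numpy as np

# Average synthetic one-minute model Dst output to one hour, then score it
# against a synthetic hourly Dst index with RMSE and prediction efficiency.
rng = np.random.default_rng(1)
minutes = 6 * 60
model_1min = (-50.0 + 30.0 * np.cos(np.linspace(0, np.pi, minutes))
              + 5.0 * rng.standard_normal(minutes))              # nT
obs_hourly = np.array([-22.0, -30.0, -44.0, -58.0, -70.0, -78.0])  # nT

model_hourly = model_1min.reshape(-1, 60).mean(axis=1)  # one-hour averages

rmse = np.sqrt(np.mean((model_hourly - obs_hourly) ** 2))
pe = 1.0 - np.sum((model_hourly - obs_hourly) ** 2) / np.sum((obs_hourly - obs_hourly.mean()) ** 2)
print(f"RMSE = {rmse:.1f} nT, PE = {pe:.2f}")
```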

  7. A physically-based analytical model to describe effective excess charge for streaming potential generation in saturated porous media

    Science.gov (United States)

    Jougnot, D.; Guarracino, L.

    2016-12-01

The self-potential (SP) method is considered by most researchers to be the only geophysical method that is directly sensitive to groundwater flow. One source of SP signals, the so-called streaming potential, results from the presence of an electrical double layer at the mineral-pore water interface. When water flows through the pore space, it gives rise to a streaming current and a resulting measurable electrical voltage. Different approaches have been proposed to predict streaming potentials in porous media. One approach is based on the excess charge which is effectively dragged in the medium by the water flow. Following a recent theoretical framework, we developed a physically-based analytical model to predict the effective excess charge in saturated porous media. In this study, the porous medium is described by a bundle of capillary tubes with a fractal pore-size distribution. First, an analytical relationship is derived to determine the effective excess charge for a single capillary tube as a function of the pore water salinity. Then, this relationship is used to obtain both exact and approximated expressions for the effective excess charge at the Representative Elementary Volume (REV) scale. The resulting analytical relationship allows the determination of the effective excess charge as a function of pore water salinity, fractal dimension and hydraulic parameters like porosity and permeability, which are also obtained at the REV scale. This new model has been successfully tested against data from the literature of different sources. One of the main findings of this study is that it provides a mechanistic explanation for the empirical dependence between the effective excess charge and the permeability that has been found by various researchers. The proposed petrophysical relationship also contributes to understanding the role of porosity and water salinity in the effective excess charge and will help push further the use of streaming potentials to monitor groundwater flow.

  8. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

Breast radiation therapy is typically delivered to the patient in either the supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device and a control module for the device during everyday positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU-based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the springs modeled to represent a hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high-resolution nature. The subject-specific elasticity is then estimated from a CT scan in prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer-designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
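The mass-spring-damper formulation can be illustrated with a single damped spring between two mass elements; all parameter values are illustrative assumptions, not the paper's subject-specific estimates:

```python
import numpy as np

# One spring of a mass-spring-damper deformable model: two mass elements
# joined by a damped spring, advanced with semi-implicit (symplectic)
# Euler at 10 substeps per 60 Hz deformation frame. Values are assumed.
m = 0.01           # mass per element (kg), assumed
k = 50.0           # spring stiffness (N/m), assumed
c = 0.05           # damping coefficient (N s/m), assumed
rest = 0.01        # spring rest length (m)
dt = 1.0 / 600.0   # substep duration (s)

x = np.array([0.0, 0.015])   # positions (m); second mass displaced 5 mm
v = np.zeros(2)

for _ in range(6000):        # 10 s of simulated time
    stretch = (x[1] - x[0]) - rest
    rel_v = v[1] - v[0]
    f = k * stretch + c * rel_v          # scalar spring-damper force
    a = np.array([f / m, -f / m])        # equal and opposite accelerations
    v += a * dt                          # update velocity first,
    x += v * dt                          # then position with the new velocity
print(f"final separation: {(x[1] - x[0]) * 1000:.2f} mm")
```

The damped pair settles back to its rest length, the same relaxation that, repeated per voxel on a GPU, lets the full model run at real-time rates.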

  9. Multi-Hydro: a multi module physically based model to evaluate effect of implementation of the flood resilience measure.

    Science.gov (United States)

    Giangola-Murzyn, A.; Gires, A.; Richard, J.; Tchiguirinskaia, I.; Schertzer, D.

    2012-04-01

Nowadays cities are rapidly growing, gradually transforming nearby rural areas into peri-urban areas where the urbanization rate keeps increasing. Many of these areas are located in floodplains. In this context, and to facilitate the choice of protection measures for the buildings in these areas, the European SMARTeST project (Smart Resilient Technologies and System Tools) aims to create a guideline cataloguing the existing systems and their conditions of use in different situations. Against this background, the Multi-Hydro model was improved and tested to evaluate the effect of implementing flood resilience measures. The model couples different modules relying on existing, validated hydrological and physically based models for runoff processes, sewer system discharge and subsurface processes. The basic data are rainfall and GIS data of elevation, land use or soil description. However, the data necessary to run this type of model can be difficult to access. The missing data, which can be approximated by average values, can cause inaccuracies in the simulated water levels. Yet even if the water levels cannot be tied to survey measurements, the location of this water is very useful for understanding the hydrological behavior of the study area. The ability to work around missing data enables the portability of the model, which is a major advantage for the SMARTeST project; Multi-Hydro can thus be a tool usable by all project partners. The model was implemented on a case study in the Paris area, the city of Villecresnes. Various scenarios in terms of implementation of protection measures were tested under a fixed rainfall scenario. The results of these simulations, analyzed as series of risk maps and by an advanced statistical analysis, show that depending on the selected measures (single barrier or perimeter), the behavior of the watershed is modified. Indeed, the modifications of the land use of the

  10. A physics-based model for maintenance of the pH gradient in the gastric mucus layer.

    Science.gov (United States)

    Lewis, Owen L; Keener, James P; Fogelson, Aaron L

    2017-12-01

    It is generally accepted that the gastric mucus layer provides a protective barrier between the lumen and the mucosa, shielding the mucosa from acid and digestive enzymes and preventing autodigestion of the stomach epithelium. However, the precise mechanisms that contribute to this protective function are still up for debate. In particular, it is not clear what physical processes are responsible for transporting hydrogen protons, secreted within the gastric pits, across the mucus layer to the lumen without acidifying the environment adjacent to the epithelium. One hypothesis is that hydrogen may be bound to the mucin polymers themselves as they are convected away from the mucosal surface and eventually degraded in the stomach lumen. It is also not clear what mechanisms prevent hydrogen from diffusing back toward the mucosal surface, thereby lowering the local pH. In this work we investigate a physics-based model of ion transport within the mucosal layer based on a Nernst-Planck-like equation. Analysis of this model shows that the mechanism of transporting protons bound to the mucus gel is capable of reproducing the trans-mucus pH gradients reported in the literature. Furthermore, when coupled with ion exchange at the epithelial surface, our analysis shows that bicarbonate secretion alone is capable of neutralizing the epithelial pH, even in the face of enormous diffusive gradients of hydrogen. Maintenance of the pH gradient is found to be robust to a wide array of perturbations in both physiological and phenomenological model parameters, suggesting a robust physiological control mechanism. NEW & NOTEWORTHY This work combines modeling techniques based on physical principles, as well as novel numerical simulations to test the plausibility of one hypothesized mechanism for proton transport across the gastric mucus layer. Results show that this mechanism is able to maintain the extreme pH gradient seen in in vivo experiments and suggests a highly robust regulation
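The Nernst-Planck-like transport underlying such a model combines diffusion, electromigration, and advection with the mucus gel; in generic notation (symbols are standard and not taken from the paper):

```latex
% Nernst--Planck flux for ion species $i$ with an advective term:
% $\mathbf{J}_i$ = ion flux, $D_i$ = diffusivity, $c_i$ = concentration,
% $z_i$ = valence, $F$ = Faraday constant, $R$ = gas constant,
% $T$ = temperature, $\phi$ = electric potential, $\mathbf{u}$ = gel velocity.
\mathbf{J}_i \;=\; -\,D_i \nabla c_i \;-\; \frac{D_i z_i F}{R T}\, c_i \nabla \phi \;+\; c_i \mathbf{u}
```

The hypothesis tested above corresponds to the advective term carrying mucin-bound protons lumenward while the diffusive and electromigrative terms set the back-flux the epithelial bicarbonate secretion must neutralize.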

  11. Unified System-Level Modeling of Intermittent Renewable Energy Sources and Energy Storage for Power System Operation

    DEFF Research Database (Denmark)

    Heussen, Kai; Koch, Stephan; Ulbig, Andreas

    2011-01-01

The system-level consideration of intermittent renewable energy sources and small-scale energy storage in power systems remains a challenge as either type is incompatible with traditional operation concepts. Non-controllability and energy-constraints are still considered contingent cases...... in market-based operation. The design of operation strategies for up to 100 % renewable energy systems requires an explicit consideration of non-dispatchable generation and storage capacities, as well as the evaluation of operational performance in terms of energy efficiency, reliability, environmental...... impact and cost. By abstracting from technology-dependent and physical unit properties, the modeling framework presented and extended in this paper allows the modeling of a technologically diverse unit portfolio with a unified approach, whilst establishing the feasibility of energy-storage consideration...

  12. Reconstructing the early 19th-century Waal River by means of a 2D physics-based numerical model

    NARCIS (Netherlands)

    Montes Arboleda, A.; Crosato, A.; Middelkoop, H.

    2010-01-01

    Suspended-sediment concentration data are a missing link in reconstructions of the River Waal in the early 1800s. These reconstructions serve as a basis for assessing the long-term effects of major interventions carried out between 1850 AD and the early 20th century. We used a 2D physics-based

  13. A system-level mathematical model of Basal Ganglia motor-circuit for kinematic planning of arm movements.

    Science.gov (United States)

    Salimi-Badr, Armin; Ebadzadeh, Mohammad Mehdi; Darlot, Christian

    2018-01-01

In this paper, a novel system-level mathematical model of the Basal Ganglia (BG) for kinematic planning is proposed. An arm composed of several segments presents a geometric redundancy. Thus, selecting one trajectory among an infinite number of possible ones requires overcoming redundancy, according to some kind of optimization. Solving this optimization is assumed to be the function of the BG in planning. In the proposed model, first, a mathematical solution of kinematic planning is proposed for movements of a redundant arm in a plane, based on minimizing energy consumption. Next, the function of each part of the model is interpreted as a possible role of a nucleus of the BG. Since the kinematic variables are considered as vectors, the proposed model is presented based on vector calculus. This vector model predicts different neuronal populations in the BG, which is in accordance with some recent experimental studies. According to the proposed model, the function of the direct pathway is to calculate the necessary rotation of each joint, and the function of the indirect pathway is to control each joint rotation considering the movement of the other joints. In the proposed model, the local feedback loop between the Subthalamic Nucleus and Globus Pallidus externus is interpreted as a local memory to store the previous amounts of movement of the other joints, which are utilized by the indirect pathway. In this model, activities of dopaminergic neurons would encode, at short term, the error between the desired and actual positions of the end-effector. The short-term modulating effect of dopamine on the Striatum is also modeled as a cross product. The model is simulated to generate the commands of a redundant manipulator. The performance of the model is studied for different reaching movements between 8 points in a plane. Finally, some symptoms of Parkinson's disease such as bradykinesia and akinesia are simulated by modifying the model parameters, inspired by the dopamine depletion
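The redundancy-resolution problem the model assigns to the BG can be illustrated generically with the textbook least-effort rule for a planar three-link arm, the minimum-norm joint-velocity solution via the Moore-Penrose pseudoinverse; this is a standard kinematic sketch, not the paper's neural circuitry:

```python
import numpy as np

# Planar 3-link arm: a 2D hand velocity leaves one degree of redundancy.
# Among the infinitely many joint-velocity solutions, pick the minimum-norm
# one (a generic least-effort criterion). Lengths and angles are assumed.

def jacobian(q, lengths):
    """2x3 Jacobian of the planar end-effector position w.r.t. joint angles."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        for j in range(i, len(q)):
            angle = np.sum(q[:j + 1])          # absolute angle of link j
            J[0, i] -= lengths[j] * np.sin(angle)
            J[1, i] += lengths[j] * np.cos(angle)
    return J

lengths = np.array([0.3, 0.25, 0.15])       # segment lengths (m), assumed
q = np.array([0.3, 0.4, 0.2])               # current joint angles (rad)
xdot_desired = np.array([0.05, -0.02])      # desired hand velocity (m/s)

J = jacobian(q, lengths)
qdot = np.linalg.pinv(J) @ xdot_desired     # minimum-norm joint velocities
print("task velocity reproduced:", np.allclose(J @ qdot, xdot_desired))
```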

  14. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. of the Dora Baltea river floodplain to 4810 m a.s.l. of Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa) measured, together with the median ks (10E-6 m/s) value, are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating input maps of parameters for HIRESS (static data). Additional static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
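Physically based slope-stability simulators of this kind are typically built on an infinite-slope factor-of-safety balance, which a root-cohesion term extends naturally; the sketch below uses that standard formulation with illustrative values inside the study's measured ranges, and is not necessarily HIRESS's exact equation set:

```python
import math

# Standard infinite-slope factor of safety with a root-cohesion term:
# FS = [c' + c_r + (gamma*z - gamma_w*m*z) * cos^2(beta) * tan(phi')]
#      / (gamma * z * sin(beta) * cos(beta))

def factor_of_safety(c_eff, c_root, phi_eff_deg, beta_deg,
                     z, m, gamma=19.0, gamma_w=9.81):
    """c_eff, c_root: effective and root cohesion (kPa); angles in degrees;
    z: soil depth (m); m: saturated fraction of the soil column (0..1);
    gamma, gamma_w: soil and water unit weights (kN/m^3), assumed values."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    resisting = (c_eff + c_root
                 + (gamma * z - gamma_w * m * z) * math.cos(beta) ** 2
                 * math.tan(phi))
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative values: phi' = 30 deg and c' = 5 kPa lie within the measured
# ranges; 35-degree slope, 1.5 m soil depth, 2 kPa assumed root cohesion.
dry = factor_of_safety(5.0, 2.0, 30.0, 35.0, z=1.5, m=0.0)
wet = factor_of_safety(5.0, 2.0, 30.0, 35.0, z=1.5, m=1.0)
print(f"FS dry = {dry:.2f}, FS saturated = {wet:.2f}")
```

Saturating the column drops FS below 1, which is why rainfall is the main trigger in the study area.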

  15. Modelling technical snow production for skiing areas in the Austrian Alps with the physically based snow model AMUNDSEN

    Science.gov (United States)

    Hanzer, F.; Marke, T.; Steiger, R.; Strasser, U.

    2012-04-01

Tourism, and particularly winter tourism, is a key factor for the Austrian economy. Judging from currently available climate simulations, the Austrian Alps show a particularly high vulnerability to climatic changes. To reduce the exposure of ski areas to changes in natural snow conditions, as well as to generally enhance snow conditions at skiing sites, technical snowmaking is widely utilized across Austrian ski areas. While such measures result in better snow conditions at the skiing sites and are important for the local skiing industry, their economic efficiency also has to be taken into account. The current work emerges from the project CC-Snow II, where improved future climate scenario simulations are used to determine future natural and artificial snow conditions and their effects on tourism and economy in the Austrian Alps. In a first step, a simple technical snowmaking approach is incorporated into the process-based snow model AMUNDSEN, which operates at a spatial resolution of 10-50 m and a temporal resolution of 1-3 hours. Locations of skiing slopes within a ski area in Styria, Austria, were digitized and imported into the model environment. During a predefined time frame at the beginning of the ski season, the model produces the maximum possible amount of technical snow and distributes it on the slopes, whereas afterwards, until the end of the ski season, the model tries to maintain a certain snow depth threshold value on the slopes. Because only a few input parameters are required, this approach is easily transferable to other ski areas. In our poster contribution, we present first results of this snowmaking approach and give an overview of the data and methodology applied. 
In a further step in CC-Snow, this simple bulk approach will be extended to consider actual snow cannon locations and technical specifications, which will allow a more detailed description of technical snow production as well as cannon-based recordings of water and energy
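The two-phase rule described above (maximum production during a base-layer window, then maintenance of a depth threshold) can be sketched as follows; all rates, thresholds and the step structure are illustrative assumptions, not AMUNDSEN's parameters:

```python
# Illustrative two-phase technical snowmaking rule. Numbers are assumed.

def snowmaking(natural_changes, base_steps=30, max_prod=2.0, threshold=50.0):
    """natural_changes: per-step change in natural snow depth (cm),
    negative for melt/settling. Returns slope snow depth after each step."""
    depth, out = 0.0, []
    for step, delta in enumerate(natural_changes):
        if step < base_steps:
            produced = max_prod                              # base-layer phase
        else:                                                # maintenance phase:
            produced = min(max_prod,                         # top up only the
                           max(0.0, threshold - (depth + delta)))  # deficit
        depth = max(0.0, depth + delta + produced)
        out.append(depth)
    return out

# A season of 120 steps with steady melt/settling of 1 cm per step
# and no natural snowfall:
season = snowmaking([-1.0] * 120)
print(f"after base phase: {season[29]:.0f} cm, "
      f"end of season: {season[-1]:.0f} cm")
```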

  16. Modeling radiocesium transport from a river catchment based on a physically-based distributed hydrological and sediment erosion model.

    Science.gov (United States)

    Kinouchi, Tsuyoshi; Yoshimura, Kazuya; Omata, Teppei

    2015-01-01

    The accident at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) in March 2011 resulted in the deposition of large quantities of radionuclides, such as (134)Cs and (137)Cs, over parts of eastern Japan. Since then high levels of radioactive contamination have been detected in large areas, including forests, agricultural land, and residential areas. Due to the strong adsorption capability of radiocesium to soil particles, radiocesium migrates with eroded sediments, follows the surface flow paths, and is delivered to more populated downstream regions and eventually to the Pacific Ocean. It is therefore important to understand the transport of contaminated sediments in the hydrological system and to predict changes in the spatial distribution of radiocesium concentrations by taking the land-surface processes related to sediment migration into consideration. In this study, we developed a distributed model to simulate the transport of water and contaminated sediment in a watershed hydrological system, and applied this model to a partially forested mountain catchment located in an area highly contaminated by the radioactive fallout. Observed discharge, sediment concentration, and cesium concentration measured from June 2011 until December 2012 were used for calibration of model parameters. The simulated discharge and sediment concentration both agreed well with observed values, while the cesium concentration was underestimated in the initial period following the accident. This result suggests that the leaching of radiocesium from the forest canopy, which was not considered in the model, played a significant role in its transport from the catchment. Based on the simulation results, we quantified the long-term fate of radiocesium over the study area and estimated that the effective half-life of (137)Cs deposited in the study area will be approximately 22 y due to the export of contaminated sediment by land-surface processes, and the amount of (137)Cs remaining in the
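The reported 22-year effective half-life can be decomposed, assuming first-order export alongside physical decay (a standard decomposition, not spelled out in the abstract), into its two components:

```python
import math

# 1/T_eff = 1/T_phys + 1/T_env for first-order processes acting in parallel,
# so the environmental (export-driven) half-life implied by the study is:
T_eff = 22.0     # y, effective half-life reported for the catchment
T_phys = 30.17   # y, physical half-life of 137Cs
T_env = 1.0 / (1.0 / T_eff - 1.0 / T_phys)
print(f"environmental removal half-life ~ {T_env:.0f} y")

# Fraction of the initial deposit remaining after 10 years at T_eff:
remaining = 0.5 ** (10.0 / T_eff)
print(f"remaining after 10 y: {remaining:.0%}")
```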

  17. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    Science.gov (United States)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. The method identifies individual rotor harmonic noise sources and characterizes them in terms of their non-dimensional governing parameters. It is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors, and is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made across a range of operating conditions from a small number of measurements.

  18. A mathematical model of metabolism and regulation provides a systems-level view of how Escherichia coli responds to oxygen

    NARCIS (Netherlands)

    Ederer, M.; Steinsiek, S.; Stagge, S.; Rolfe, M.D.; ter Beek, A.; Knies, D.; Teixeira De Mattos, M.J.; Sauter, T.; Green, J.; Poole, R.K.; Bettenbrock, K.; Sawodny, O.

    2014-01-01

    The efficient redesign of bacteria for biotechnological purposes, such as biofuel production, waste disposal or specific biocatalytic functions, requires a quantitative systems-level understanding of energy supply, carbon, and redox metabolism. The measurement of transcript levels, metabolite

  19. Implementation of a Sage-Based Stirling Model Into a System-Level Numerical Model of the Fission Power System Technology Demonstration Unit

    Science.gov (United States)

    Briggs, Maxwell H.

    2011-01-01

    The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.

  20. Integrating pro-environmental behavior with transportation network modeling: User and system level strategies, implementation, and evaluation

    Science.gov (United States)

    Aziz, H. M. Abdul

    Personal transport is a leading contributor to fossil fuel consumption and greenhouse gas (GHG) emissions in the U.S. The U.S. Energy Information Administration (EIA) reports that light-duty vehicles (LDV) were responsible for 61% of all transportation-related energy consumption in 2012, equivalent to 8.4 million barrels of oil (fossil fuel) per day. The carbon content of fossil fuels is the primary source of the GHG emissions linked to the challenge of climate change. There is therefore a pressing need to develop actionable and innovative strategies to reduce fuel consumption and GHG emissions from road transportation networks. This dissertation integrates the broader goal of minimizing energy and emissions into the transportation planning process using novel systems modeling approaches. This research aims to find, investigate, and evaluate strategies that minimize carbon-based fuel consumption and emissions for a transportation network. We propose user- and system-level strategies that can influence travel decisions and reinforce pro-environmental attitudes of road users. Further, we develop strategies that system operators can implement to optimize traffic operations with an emissions minimization goal. To complete the framework, we develop an integrated traffic-emissions (EPA-MOVES) simulation framework that can assess the effectiveness of the strategies with computational efficiency and reasonable accuracy. The dissertation begins by exploring the trade-off between emissions and travel time in the context of daily travel decisions and its heterogeneous nature. Data are collected from a web-based survey, and the trade-off values, indicating the average additional travel minutes a person is willing to accept to reduce a pound of GHG emissions, are estimated from random parameter models. Results indicate different trade-off values for male and female groups. 
Further, participants from high-income households are found to have higher trade-off values

  1. An Incremental Physically-Based Model of P91 Steel Flow Behaviour for the Numerical Analysis of Hot-Working Processes

    Directory of Open Access Journals (Sweden)

    Alberto Murillo-Marrodán

    2018-04-01

    This paper is aimed at modelling the flow behaviour of P91 steel at high temperature and a wide range of strain rates for constant and also variable strain-rate deformation conditions, such as those in real hot-working processes. For this purpose, an incremental physically-based model is proposed for the P91 steel flow behaviour. This formulation considers the effects of dynamic recovery (DRV) and dynamic recrystallization (DRX) on the mechanical properties of the material, using only the flow stress, strain rate and temperature as state variables, and not the accumulated strain. Therefore, it accurately reproduces the flow stress, work hardening and work softening not only under constant, but also under transient deformation conditions. To accomplish this study, the material is characterised experimentally by means of uniaxial compression tests, conducted at a temperature range of 900–1270 °C and at strain rates in the range of 0.005–10 s−1. Finally, the proposed model is implemented in commercial finite element (FE) software to provide evidence of the performance of the proposed formulation. The experimental compression tests are simulated using the novel model and the well-known Hansel–Spittel formulation. In conclusion, the incremental physically-based model shows accurate results when work softening is present, especially under variable strain-rate deformation conditions. Hence, the present formulation is appropriate for the simulation of the hot-working processes typically conducted at industrial scale.
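
    The incremental idea described above can be sketched with a toy Voce-type hardening update, in which the stress increment depends only on the current stress, strain rate and temperature, never on accumulated strain. The hardening law and every constant below are illustrative assumptions for the sketch, not the paper's fitted P91 model.

```python
import numpy as np

def saturation_stress(strain_rate, T):
    # Assumed rate/temperature sensitivity (Zener-Hollomon-like trend);
    # the constants are invented for illustration.
    return 120.0 * (strain_rate / 1.0) ** 0.1 * np.exp(300.0 / T)

def step(sigma, strain_rate, T, d_eps, h=800.0):
    # Incremental update: d(sigma) depends on current stress, rate and T only,
    # so variable strain-rate paths are handled naturally.
    sig_sat = saturation_stress(strain_rate, T)
    return sigma + h * (1.0 - sigma / sig_sat) * d_eps

sigma, history = 20.0, []
for _ in range(400):                      # constant-rate segment at 1 s^-1, 1273 K
    sigma = step(sigma, 1.0, 1273.0, 0.002)
    history.append(sigma)
```

    Because the state is only the current stress, a change of strain rate mid-deformation simply changes the saturation stress the update relaxes towards.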

  2. Bayesian inversion of data from effusive volcanic eruptions using physics-based models: Application to Mount St. Helens 2004--2008

    Science.gov (United States)

    Anderson, Kyle; Segall, Paul

    2013-01-01

    Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km³ with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
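
    The Bayesian sampling step referenced above can be illustrated with a minimal random-walk Metropolis sampler. Everything here is a stand-in: the one-parameter "forward model", the synthetic observation, the noise level and the prior bounds are all assumptions for the sketch, not the paper's volcano model.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(chamber_pressure):
    # Stand-in for a physics-based prediction (e.g. surface deformation).
    return 2.0 * chamber_pressure

observed = 10.0     # synthetic observation
sigma = 1.0         # assumed 1-sigma observation noise

def log_posterior(p):
    if not (0.0 < p < 20.0):          # assumed uniform prior bounds
        return -np.inf
    residual = observed - forward_model(p)
    return -0.5 * (residual / sigma) ** 2

samples, p = [], 1.0
for _ in range(5000):
    proposal = p + rng.normal(0.0, 0.5)                      # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(p):
        p = proposal                                          # Metropolis accept
    samples.append(p)

posterior = np.array(samples[1000:])   # discard burn-in; histogram approximates p(p|data)
```

    The chain concentrates around the pressure that reproduces the observation, with a spread set by the data noise, which is exactly the probabilistic parameter estimate the inversion delivers.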

  3. Component- and system-level degradation modeling of digital Instrumentation and Control systems based on a Multi-State Physics Modeling Approach

    International Nuclear Information System (INIS)

    Wang, Wei; Di Maio, Francesco; Zio, Enrico

    2016-01-01

    Highlights: • A Multi-State Physics Modeling (MSPM) framework for reliability assessment is proposed. • Monte Carlo (MC) simulation is utilized to estimate the degradation state probability. • Due account is given to stochastic uncertainty and deterministic degradation progression. • The MSPM framework is applied to the reliability assessment of a digital I&C system. • Results are compared with the results obtained with a Markov Chain Model (MCM). - Abstract: A system-level degradation model is proposed for the reliability assessment of digital Instrumentation and Control (I&C) systems in Nuclear Power Plants (NPPs). At the component level, we focus on the reliability assessment of a Resistance Temperature Detector (RTD), an important digital I&C component used to guarantee the safe operation of NPPs. A Multi-State Physics Model (MSPM) is built to describe this component's degradation progression towards failure, and Monte Carlo (MC) simulation is used to estimate the probability of sojourn in any of the previously defined degradation states, accounting for both the stochastic and deterministic processes that affect the degradation progression. The MC simulation relies on an integrated modeling of stochastic processes with deterministic aging of components, which proves fundamental for estimating the joint cumulative probability distribution of finding the component in any of the possible degradation states. The results of applying the proposed degradation model to a digital I&C system from the literature are compared with the results obtained by a Markov Chain Model (MCM). The integrated stochastic-deterministic process proposed here to drive the MC simulation makes it viable to integrate component-level models into a system-level model that considers inter-system and/or inter-component dependencies and uncertainties.
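
    The core computation, estimating degradation-state probabilities by Monte Carlo while combining stochastic transitions with deterministic ageing, can be sketched for a generic 3-state component (good → degraded → failed). The transition rates and the ageing factor are illustrative assumptions, not the paper's RTD model.

```python
import numpy as np

rng = np.random.default_rng(1)

BASE_RATES = [0.10, 0.20]     # per year: good->degraded, degraded->failed (assumed)

def ageing(t):
    # Deterministic ageing accelerates the stochastic transitions over time.
    return 1.0 + 0.05 * t

def simulate_state(t_end, dt=0.01):
    # One random history: small-step approximation of the time-varying hazards.
    t, state = 0.0, 0
    while t < t_end and state < 2:
        rate = BASE_RATES[state] * ageing(t)
        if rng.uniform() < rate * dt:   # transition in this small time step
            state += 1
        t += dt
    return state

n = 2000
states = [simulate_state(10.0) for _ in range(n)]
probs = [states.count(s) / n for s in range(3)]   # estimated P(state) at t = 10 y
```

    Replaying many such histories yields the probability of sojourn in each degradation state at any time of interest, which is the quantity the MSPM framework feeds into the system-level model.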

  4. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    International Nuclear Information System (INIS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^−11.4 m² to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  5. Integrating Operational Energy Implications into System-Level Combat Effects Modeling: Assessing the Combat Effectiveness and Fuel Use of ABCT 2020 and Current ABCT

    Science.gov (United States)

    2015-01-01

    Endy M. Daehner, John Matsumura, Thomas J. Herbert, Jeremy R. Kurz, Keith Walters Integrating Operational Energy Implications into System-Level... George Guthridge, and Megan Corso for their clear guidance and assistance throughout the study. We also received valuable information and insights from... helped with processing modeling and simulation outputs. Laura Novacic and Donna Mead provided invaluable administrative assistance and help with

  6. Wear-out Failure Analysis of an Impedance-Source PV Microinverter Based on System-Level Electro-Thermal Modeling

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Chub, Andrii; Wang, Huai

    2018-01-01

    and system-level finite element method (FEM) simulations, the electro-thermal models are built for the most reliability-critical components, i.e., power semi-conductor devices and capacitors. The dependence of the power loss on the junction/hotspot temperature is considered, the enclosure temperature...

  7. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with access to space science models, even if they are not model owners themselves. The second focus of CCMC activities is on the validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products that may be useful to space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step toward making real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  8. System level ESD protection

    CERN Document Server

    Vashchenko, Vladislav

    2014-01-01

    This book addresses key aspects of analog integrated circuits and systems design related to system level electrostatic discharge (ESD) protection.  It is an invaluable reference for anyone developing systems-on-chip (SoC) and systems-on-package (SoP), integrated with system-level ESD protection. The book focuses on both the design of semiconductor integrated circuit (IC) components with embedded, on-chip system level protection and IC-system co-design. The readers will be enabled to bring the system level ESD protection solutions to the level of integrated circuits, thereby reducing or completely eliminating the need for additional, discrete components on the printed circuit board (PCB) and meeting system-level ESD requirements. The authors take a systematic approach, based on IC-system ESD protection co-design. A detailed description of the available IC-level ESD testing methods is provided, together with a discussion of the correlation between IC-level and system-level ESD testing methods. The IC-level ESD...

  9. A multilayer physically based snowpack model simulating direct and indirect radiative impacts of light-absorbing impurities in snow

    Science.gov (United States)

    Tuzet, Francois; Dumont, Marie; Lafaysse, Matthieu; Picard, Ghislain; Arnaud, Laurent; Voisin, Didier; Lejeune, Yves; Charrois, Luc; Nabat, Pierre; Morin, Samuel

    2017-11-01

    Light-absorbing impurities (LAIs) decrease snow albedo, increasing the amount of solar energy absorbed by the snowpack. Their most intuitive and direct impact is to accelerate snowmelt. Enhanced energy absorption in snow also modifies snow metamorphism, which can indirectly drive further variations of snow albedo in the near-infrared part of the solar spectrum because of the evolution of the near-surface snow microstructure. New capabilities have been implemented in the detailed snowpack model SURFEX/ISBA-Crocus (referred to as Crocus) to account for impurities' deposition and evolution within the snowpack and their direct and indirect impacts. Once deposited, the model computes impurities' mass evolution until snow melts out, accounting for scavenging by meltwater. Taking advantage of the recent inclusion of the spectral radiative transfer model TARTES (Two-stream Analytical Radiative TransfEr in Snow model) in Crocus, the model explicitly represents the radiative impacts of light-absorbing impurities in snow. The model was evaluated at the Col de Porte experimental site (French Alps) during the 2013-2014 snow season against in situ standard snow measurements and spectral albedo measurements. In situ meteorological measurements were used to drive the snowpack model, except for aerosol deposition fluxes. Black carbon (BC) and dust deposition fluxes used to drive the model were extracted from simulations of the atmospheric model ALADIN-Climate. The model simulates snowpack evolution reasonably well, providing performance similar to our reference Crocus version in terms of snow depth, snow water equivalent (SWE), near-surface specific surface area (SSA) and shortwave albedo. Since the reference empirical albedo scheme was calibrated at the Col de Porte, improvements were not expected to be significant in this study. We show that the deposition fluxes from the ALADIN-Climate model provide a reasonable estimate of the amount of light-absorbing impurities deposited on the

  10. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passives component reliabilities.

  11. A new physics-based self-heating effect model for 4H-SiC MESFETs

    International Nuclear Information System (INIS)

    Cao Quanjun; Zhang Yimen; Zhang Yuming

    2008-01-01

    A new self-heating effect model for 4H-SiC MESFETs is proposed based on a combination of an analytical and a computer-aided design (CAD) oriented drain current model. Circuit-oriented expressions for the temperature-dependent 4H-SiC low-field electron mobility and incomplete ionization rate are presented and used to estimate the self-heating effect of 4H-SiC MESFETs. The model is verified, and good agreement between simulated results and measured DC I–V curves with the self-heating effect is obtained.
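
    A temperature-dependent low-field mobility of the power-law form commonly used in circuit-oriented SiC device models can be sketched as follows; the 300 K mobility and the exponent are generic textbook-style assumptions, not the paper's fitted parameters.

```python
def mobility(T, mu_300=950.0, alpha=2.4):
    """Low-field electron mobility (cm^2/V.s) at lattice temperature T (K).

    Power-law lattice-scattering form: mu(T) = mu_300 * (T/300)^-alpha.
    Both mu_300 and alpha are illustrative assumptions.
    """
    return mu_300 * (T / 300.0) ** -alpha
```

    In a self-heating calculation, the channel temperature raised by dissipated power is fed back through such an expression, lowering the mobility and hence the drain current at high power.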

  12. An integrated approach coupling physically based models and probabilistic method to assess quantitatively landslide susceptibility at different scale: application to different geomorphological environments

    Science.gov (United States)

    Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine

    2016-04-01

    Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists in assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions. Some models can also integrate land-use and climate change. Conversely, the major drawbacks are the large amounts of reliable and detailed data required (especially material types, their thicknesses and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) spatial variation of physical parameters, and (iii) different landslide types, the French Geological Survey (i.e. BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach. The
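
    The Monte Carlo treatment of mechanical-parameter variability can be illustrated with the simplest stability model, an infinite-slope factor of safety with randomly sampled cohesion and friction angle. The distributions, slope geometry and the infinite-slope formula itself are generic assumptions for the sketch, not the ALICE® implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

beta = np.radians(30.0)   # slope angle (assumed)
gamma = 19.0              # soil unit weight, kN/m^3 (assumed)
z = 2.0                   # slip-surface depth, m (assumed)

n = 10_000
c = rng.normal(5.0, 1.0, n).clip(min=0.0)     # effective cohesion, kPa
phi = np.radians(rng.normal(32.0, 2.0, n))    # effective friction angle

# Dry infinite-slope factor of safety for each random parameter draw.
fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * z * np.sin(beta) * np.cos(beta)
)
p_failure = float((fs < 1.0).mean())          # estimated P(FS < 1)
```

    Mapping `p_failure` cell by cell over a grid is what turns a deterministic safety-factor model into a probabilistic susceptibility map.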

  13. A physics-based crystallographic modeling framework for describing the thermal creep behavior of Fe-Cr alloys

    Energy Technology Data Exchange (ETDEWEB)

    Wen, Wei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Capolungo, Laurent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patra, Anirban [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-02

    This Report addresses the Milestone M2MS-16LA0501032 of NEAMS Program (“Develop hardening model for FeCrAl cladding”), with a deadline of 09/30/2016. Here we report a constitutive law for thermal creep of FeCrAl. This Report adds to and complements the one for Milestone M3MS-16LA0501034 (“Interface hardening models with MOOSE-BISON”), where we presented a hardening law for irradiated FeCrAl. The last component of our polycrystal-based constitutive behavior, namely, an irradiation creep model for FeCrAl, will be developed as part of the FY17 Milestones, and the three regimes will be coupled and interfaced with MOOSE-BISON.

  14. Hydrologic response to multimodel climate output using a physically based model of groundwater/surface water interactions

    Science.gov (United States)

    Sulis, M.; Paniconi, C.; Marrocu, M.; Huard, D.; Chaumont, D.

    2012-12-01

    General circulation models (GCMs) are the primary instruments for obtaining projections of future global climate change. Outputs from GCMs, aided by dynamical and/or statistical downscaling techniques, have long been used to simulate changes in regional climate systems over wide spatiotemporal scales. Numerous studies have acknowledged the disagreements between the various GCMs and between the different downscaling methods designed to compensate for the mismatch between climate model output and the spatial scale at which hydrological models are applied. Very little is known, however, about the importance of these differences once they have been input or assimilated by a nonlinear hydrological model. This issue is investigated here at the catchment scale using a process-based model of integrated surface and subsurface hydrologic response driven by outputs from 12 members of a multimodel climate ensemble. The data set consists of daily values of precipitation and min/max temperatures obtained by combining four regional climate models and five GCMs. The regional scenarios were downscaled using a quantile scaling bias-correction technique. The hydrologic response was simulated for the 690 km² des Anglais catchment in southwestern Quebec, Canada. The results show that different hydrological components (river discharge, aquifer recharge, and soil moisture storage) respond differently to precipitation and temperature anomalies in the multimodel climate output, with greater variability for annual discharge compared to recharge and soil moisture storage. We also find that runoff generation and extreme event-driven peak hydrograph flows are highly sensitive to any uncertainty in climate data. Finally, the results show the significant impact of changing sequences of rainy days on groundwater recharge fluxes and the influence of longer dry spells in modifying soil moisture spatial variability.

  15. Incorporation of a physically based melt pond scheme into the sea ice component of a climate model

    OpenAIRE

    Flocco, Daniela; Feltham, Danny; Turner, Adrian K.

    2010-01-01

    The extent and thickness of the Arctic sea ice cover has decreased dramatically in the past few decades, with minima in sea ice extent in September 2005 and 2007. These minima were not predicted in the IPCC AR4 report, suggesting that the sea ice component of climate models should more realistically represent the processes controlling the sea ice mass balance. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on t...

  16. Parental concern about vaccine safety in Canadian children partially immunized at age 2: a multivariable model including system level factors.

    Science.gov (United States)

    MacDonald, Shannon E; Schopflocher, Donald P; Vaudry, Wendy

    2014-01-01

    Children who begin but do not fully complete the recommended series of childhood vaccines by 2 y of age are a much larger group than those who receive no vaccines. While parents who refuse all vaccines typically express concern about vaccine safety, it is critical to determine what influences parents of 'partially' immunized children. This case-control study examined whether parental concern about vaccine safety was responsible for partial immunization, and whether other personal or system-level factors played an important role. A random sample of parents of partially and completely immunized 2 y old children was selected from a Canadian regional immunization registry, and these parents completed a postal survey assessing various personal and system-level factors. Unadjusted odds ratios (OR) and adjusted ORs (aOR) were calculated with logistic regression. While vaccine safety concern was associated with partial immunization (OR 7.338, 95% CI 4.138-13.012), other variables were more strongly associated and reduced the strength of the relationship between concern and partial immunization in multivariable analysis (aOR 2.829, 95% CI 1.151-6.957). Other important factors included perceived disease susceptibility and severity (aOR 4.629, 95% CI 2.017-10.625), residential mobility (aOR 3.908, 95% CI 2.075-7.358), daycare use (aOR 0.310, 95% CI 0.144-0.671), number of needles administered at each visit (aOR 7.734, 95% CI 2.598-23.025) and access to a regular physician (aOR 0.219, 95% CI 0.057-0.846). While concern about vaccine safety may be addressed through educational strategies, this study suggests that additional program- and policy-level strategies may positively impact immunization uptake.
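
    The unadjusted odds ratio with a 95% confidence interval can be computed directly from a 2×2 case-control table; the counts below are invented for illustration and are not the study's data.

```python
import math

# Hypothetical 2x2 table: rows = outcome (partially / completely immunized),
# columns = exposure (safety concern present / absent).
a, b = 40, 10   # partially immunized: concerned / not concerned
c, d = 30, 55   # completely immunized: concerned / not concerned

or_ = (a * d) / (b * c)                               # cross-product odds ratio
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR) (Woolf method)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)       # 95% CI lower bound
hi = math.exp(math.log(or_) + 1.96 * se_log_or)       # 95% CI upper bound
```

    The same OR also falls out of a logistic regression as exp(coefficient); the multivariable aORs in the study are the adjusted analogue, with the other covariates held fixed.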

  17. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Roč. 72, č. 3 (2014), s. 645-661 ISSN 1866-6280 Institutional support: RVO:67985891 Keywords : debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.765, year: 2014

  18. An introduction to the European Hydrological System — Systeme Hydrologique Europeen, ``SHE'', 2: Structure of a physically-based, distributed modelling system

    Science.gov (United States)

    Abbott, M. B.; Bathurst, J. C.; Cunge, J. A.; O'Connell, P. E.; Rasmussen, J.

    1986-10-01

    The paper forms the second part of an introduction to the SHE, a physically-based, distributed catchment modelling system produced jointly by the Danish Hydraulic Institute, the British Institute of Hydrology and SOGREAH (France) with the financial support of the Commission of the European Communities. The SHE is physically-based in the sense that the hydrological processes of water movement are modelled either by finite difference representations of the partial differential equations of mass, momentum and energy conservation, or by empirical equations derived from independent experimental research. Spatial distribution of catchment parameters, rainfall input and hydrological response is achieved in the horizontal by an orthogonal grid network and in the vertical by a column of horizontal layers at each grid square. Each of the primary processes of the land phase of the hydrological cycle is modelled in a separate component as follows: interception, by the Rutter accounting procedure; evapotranspiration, by the Penman-Monteith equation; overland and channel flow, by simplifications of the St. Venant equations; unsaturated zone flow, by the one-dimensional Richards equation; saturated zone flow, by the two-dimensional Boussinesq equation; snowmelt, by an energy budget method. Overall control of the parallel running of the components and the information exchanges between them is managed by a FRAME component. Careful attention has been devoted to a modular construction so that improvements or additional components (e.g. water quality and sediment yield) can be added in the future. Considerable operating flexibility is provided through the ability to vary the level of sophistication of the calculation mode to match the availability or quality of the data.

  19. An instructional model for the teaching of physics, based on a meaningful learning theory and class experiences

    Directory of Open Access Journals (Sweden)

    Ricardo Chrobak

    1997-05-01

    Practically all research studies concerning the teaching of Physics point out that conventional instructional models fail to achieve their objectives. Many attempts have been made to change this situation, frequently with disappointing results. This work, the experimental stage of a larger research project, represents an effort to move to a model based on a cognitive learning theory, known as the Ausubel-Novak-Gowin theory, making use of the metacognitive tools that emerge from this theory. The results of this work indicate that the students react positively to the goals of meaningful learning, showing substantial understanding of Newtonian Mechanics. An important reduction in the study time required to pass the course has also been reported.

  20. Physics-Based Modeling of Permeation: Simulation of Low-Volatility Agent Permeation and Aerosol Vapor Liquid Assessment Group Experiments

    Science.gov (United States)

    2015-06-01

    methylphosphonothiolate (VX) through natural latex rubber and neoprene resulting from LVAP tests. 2. The permeation model is used to study the sensitivity of... Styrene–Butadiene–Rubber, Ethylene–Propylene–Diene Terpolymer, and Natural Rubber Versus Hydrocarbons (C8–C16). Macromolecules 1991, 24 (9), 2598–2605... 22 14. Harogoppad, S.B.; Aminabhavi, T.M. Diffusion and Sorption of Organic Liquids through Polymer Membranes 2. Neoprene, SBR, EPDM, NBR, and

  1. Physics based modeling of a series parallel battery pack for asymmetry analysis, predictive control and life extension

    Science.gov (United States)

    Ganesan, Nandhini; Basu, Suman; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Yeo, Taejung; Sohn, Dong Kee; Doo, Seokgwang

    2016-08-01

    Lithium-ion batteries used for electric vehicle applications are subject to large currents and varied operating conditions, making battery pack design and life extension a challenging problem. With increasing complexity, modeling and simulation can lead to insights that ensure optimal performance and life extension. In this manuscript, an electrochemical-thermal (ECT) coupled model for a 6 series × 5 parallel pack is developed for Li-ion cells with NCA/C electrodes and validated against experimental data. The contribution of the cathode to overall degradation at various operating conditions is assessed. Pack asymmetry is analyzed from a design and an operational perspective. Design-based asymmetry leads to a new approach of obtaining the individual cell responses of the pack from an average ECT output. Operational asymmetry is demonstrated in terms of the effects of thermal gradients on cycle life, and an efficient model predictive control technique is developed. The concept of a reconfigurable battery pack is studied using detailed simulations, which can be used for effective monitoring and extension of battery pack life.
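The pack-level asymmetry discussed above ultimately rests on Kirchhoff's laws for parallel branches. A minimal equivalent-circuit sketch, an illustrative simplification rather than the paper's electrochemical-thermal model, of how a demanded current splits across parallel cells with unequal internal resistances:

```python
def parallel_branch_currents(ocv, r_int, i_total):
    """Split a demanded pack current across parallel cells.

    Each cell is reduced to an open-circuit voltage (ocv, V) in series
    with an internal resistance (r_int, ohm); Kirchhoff's laws give the
    shared terminal voltage and the per-branch currents (discharge
    positive).
    """
    g = [1.0 / r for r in r_int]  # branch conductances
    v_term = (sum(o * gi for o, gi in zip(ocv, g)) - i_total) / sum(g)
    return v_term, [(o - v_term) * gi for o, gi in zip(ocv, g)]
```

With equal open-circuit voltages, the branch with the lower resistance carries proportionally more of the load, which is one driver of the cycle-life asymmetry the abstract describes.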

  2. Assessment of land-use change on streamflow using GIS, remote sensing and a physically-based model, SWAT

    Directory of Open Access Journals (Sweden)

    J. Y. G. Dos Santos

    2014-09-01

    Full Text Available This study aims to assess the impact of the land-use changes between the periods 1967−1974 and 1997−2008 on the streamflow of the Tapacurá catchment (northeastern Brazil using the Soil and Water Assessment Tool (SWAT model. The results show that the most sensitive parameters were the baseflow, Manning factor, time of concentration and soil evaporation compensation factor, which affect the catchment hydrology. The model calibration and validation were performed on a monthly basis, and the streamflow simulation showed a good level of accuracy for both periods. The obtained R² and Nash-Sutcliffe Efficiency values for each period were, respectively, 0.82 and 0.81 for 1967−1974, and 0.93 and 0.92 for 1997−2008. The evaluation of the SWAT model response to the land cover has shown that the mean monthly flow during the rainy seasons for 1967−1974 decreased when compared to 1997−2008.
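The goodness-of-fit statistics quoted above can be reproduced in a few lines; a minimal sketch of the Nash-Sutcliffe Efficiency commonly used to score hydrological calibrations such as this SWAT run:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```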

  3. Integrated Physics-based Modeling and Experiments for Improved Prediction of Combustion Dynamics in Low-Emission Systems

    Science.gov (United States)

    Anderson, William E.; Lucht, Robert P.; Mongia, Hukam

    2015-01-01

    Concurrent simulations and experiments were undertaken to assess the ability of a hybrid RANS-LES model to predict combustion dynamics in a single-element lean direct-inject (LDI) combustor showing self-excited instabilities. High-frequency pressure modes produced by Fourier and modal decomposition analysis were compared quantitatively, and trends with equivalence ratio and inlet temperature were compared qualitatively. High-frequency OH PLIF and PIV measurements were also taken. Submodels for chemical kinetics and primary and secondary atomization were also tested against the measured behavior. For a point-wise comparison, the amplitudes matched within a factor of two, and the dependence on equivalence ratio was matched. Preliminary results from a simulation using an 18-reaction kinetics model indicated instability amplitudes closer to measurement. Analysis of the simulations suggested that a band of modes around 1400 Hz was due to vortex bubble breakdown and a band of modes around 6 kHz was due to a precessing vortex core hydrodynamic instability. The primary needs are directly coupled and validated ab initio models of the atomizer free-surface flow and the primary atomization processes, and more detailed study of the coupling between the 3D swirling flow and the local thermoacoustics in the diverging venturi section.

  4. Correlating electroluminescence characterization and physics-based models of InGaN/GaN LEDs: Pitfalls and open issues

    Energy Technology Data Exchange (ETDEWEB)

    Calciati, Marco; Vallone, Marco; Zhou, Xiangyu; Ghione, Giovanni [Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, corso Duca degli Abruzzi 24, 10129 Torino (Italy); Goano, Michele, E-mail: michele.goano@polito.it; Bertazzi, Francesco [Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, corso Duca degli Abruzzi 24, 10129 Torino (Italy); IEIIT-CNR, Politecnico di Torino, corso Duca degli Abruzzi 24, 10129 Torino (Italy); Meneghini, Matteo; Meneghesso, Gaudenzio; Zanoni, Enrico [Dipartimento di Ingegneria dell' Informazione, Università di Padova, Via Gradenigo 6/B, 35131 Padova (Italy); Bellotti, Enrico [Department of Electrical and Computer Engineering, Boston University, 8 Saint Mary' s Street, 02215 Boston, MA (United States); Verzellesi, Giovanni [Dipartimento di Scienze e Metodi dell' Ingegneria, Università di Modena e Reggio Emilia, 42122 Reggio Emilia (Italy); Zhu, Dandan; Humphreys, Colin [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom)

    2014-06-15

    Electroluminescence (EL) characterization of InGaN/GaN light-emitting diodes (LEDs), coupled with numerical device models of different sophistication, is routinely adopted not only to establish correlations between device efficiency and structural features, but also to make inferences about the loss mechanisms responsible for LED efficiency droop at high driving currents. The limits of this investigative approach are discussed here in a case study based on a comprehensive set of current- and temperature-dependent EL data from blue LEDs with low and high densities of threading dislocations (TDs). First, the effects limiting the applicability of simpler (closed-form and/or one-dimensional) classes of models are addressed, like lateral current crowding, vertical carrier distribution nonuniformity, and interband transition broadening. Then, the major sources of uncertainty affecting state-of-the-art numerical device simulation are reviewed and discussed, including (i) the approximations in the transport description through the multi-quantum-well active region, (ii) the alternative valence band parametrizations proposed to calculate the spontaneous emission rate, (iii) the difficulties in defining the Auger coefficients due to inadequacies in the microscopic quantum well description and the possible presence of extra, non-Auger high-current-density recombination mechanisms and/or Auger-induced leakage. In the case of the present LED structures, the application of three-dimensional numerical-simulation-based analysis to the EL data leads to an explanation of efficiency droop in terms of TD-related and Auger-like nonradiative losses, with a C coefficient in the 10⁻³⁰ cm⁶/s range at room temperature, close to the larger theoretical calculations reported so far. However, a study of the combined effects of structural and model uncertainties suggests that the C values thus determined could be overestimated by about an order of magnitude. This preliminary

  5. Correlating electroluminescence characterization and physics-based models of InGaN/GaN LEDs: Pitfalls and open issues

    Directory of Open Access Journals (Sweden)

    Marco Calciati

    2014-06-01

    Full Text Available Electroluminescence (EL) characterization of InGaN/GaN light-emitting diodes (LEDs), coupled with numerical device models of different sophistication, is routinely adopted not only to establish correlations between device efficiency and structural features, but also to make inferences about the loss mechanisms responsible for LED efficiency droop at high driving currents. The limits of this investigative approach are discussed here in a case study based on a comprehensive set of current- and temperature-dependent EL data from blue LEDs with low and high densities of threading dislocations (TDs). First, the effects limiting the applicability of simpler (closed-form and/or one-dimensional) classes of models are addressed, like lateral current crowding, vertical carrier distribution nonuniformity, and interband transition broadening. Then, the major sources of uncertainty affecting state-of-the-art numerical device simulation are reviewed and discussed, including (i) the approximations in the transport description through the multi-quantum-well active region, (ii) the alternative valence band parametrizations proposed to calculate the spontaneous emission rate, (iii) the difficulties in defining the Auger coefficients due to inadequacies in the microscopic quantum well description and the possible presence of extra, non-Auger high-current-density recombination mechanisms and/or Auger-induced leakage. In the case of the present LED structures, the application of three-dimensional numerical-simulation-based analysis to the EL data leads to an explanation of efficiency droop in terms of TD-related and Auger-like nonradiative losses, with a C coefficient in the 10⁻³⁰ cm⁶/s range at room temperature, close to the larger theoretical calculations reported so far. However, a study of the combined effects of structural and model uncertainties suggests that the C values thus determined could be overestimated by about an order of magnitude. This preliminary

  6. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables.
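The estimator described above combines a model-based prediction with a voltage-measurement correction. A generic extended Kalman filter step, a sketch of the standard predict/update recursion rather than the authors' reduced-order cell model, might look like:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    f, h are the (possibly nonlinear) state-transition and measurement
    models; F, H return their Jacobians at the current estimate;
    Q, R are the process and measurement noise covariances.
    """
    # Predict: propagate the state and its covariance through the model
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct with the measurement innovation z - h(x_pred)
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

For a battery application, `x` would hold the reduced-order electrochemical states, `u` the applied current, and `h` the model's voltage prediction.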

  7. Implementation of a physically based water percolation routine in the Crocus/SURFEX (V7.3) snowpack model

    Directory of Open Access Journals (Sweden)

    C. J. L. D'Amboise

    2017-09-01

    Full Text Available We present a new water percolation routine added to the one-dimensional snowpack model Crocus as an alternative to the empirical bucket routine. This routine solves the Richards equation, which describes flow of water through unsaturated porous snow governed by capillary suction, gravity and the hydraulic conductivity of the snow layers. We tested the Richards routine on two data sets, one recorded from an automatic weather station over the winter of 2013–2014 at Filefjell, Norway, and the other an idealized synthetic data set. Model results using the Richards routine generally lead to higher water contents in the snow layers. Snow layers often reached a point at which the ice crystals' surface area is completely covered by a thin film of water (the transition between pendular and funicular regimes, at which feedbacks from the snow metamorphism and compaction routines are expected to be nonlinear. With the synthetic simulation 18 % of snow layers obtained a saturation of > 10 % and 0.57 % of layers reached a saturation of > 15 %. The Richards routine had a maximum liquid water content of 173.6 kg m⁻³ whereas the bucket routine had a maximum of 42.1 kg m⁻³. We found that wet-snow processes, such as wet-snow metamorphism and wet-snow compaction rates, are not accurately represented at higher water contents. These routines feed back on the Richards routine, which relies heavily on grain size and snow density. The parameter sets for the water retention curve and hydraulic conductivity of snow layers, which are used in the Richards routine, do not represent all the snow types that can be found in a natural snowpack. We show that the new routine has been implemented in the Crocus model, but due to feedback amplification and parameter uncertainties, meaningful applicability is limited. Updating or adapting other routines in Crocus, specifically the snow compaction routine and the grain metamorphism routine, is needed.
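Solving the Richards equation requires a water retention curve linking pressure head to water content. The van Genuchten form sketched below is a common choice for porous media; the abstract does not state which parametrization Crocus uses, so the form and parameter values here are illustrative:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content from pressure head h (m, negative when
    unsaturated) via the van Genuchten water retention curve.

    theta_r, theta_s : residual and saturated water contents (-)
    alpha (1/m), n   : shape parameters of the curve
    """
    if h >= 0.0:  # saturated: full water content
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se
```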

  8. Physically-based impedance modeling of the negative electrode in All-Vanadium Redox Flow Batteries: insight into mass transport issues

    International Nuclear Information System (INIS)

    Zago, M.; Casalegno, A.

    2017-01-01

    Highlights: •Performance losses induced by migration through the porous electrode are negligible. •Convection at the carbon fiber results in a linear branch at low frequency in the Nyquist plot. •When the reaction is concentrated, diffusion losses through the electrode diminish. •The diffusion process in the pores becomes more limiting at high current. •Charge transfer resistance decreases with increasing current. -- Abstract: Mass transport of the electrolyte through the porous electrode is one of the most critical issues hindering Vanadium Redox Flow Battery commercialization, leading to increased overpotential at high current and limiting system power density. In this work, a 1D physically based impedance model of the Vanadium Redox Flow Battery negative electrode is developed, taking into account electrochemical reactions, convection at the carbon fiber, diffusion in the pores, and migration and diffusion through the electrode thickness. The model is validated against experimental data measured in symmetric cell hardware, which allows the State of Charge to be kept constant during the measurement. The physically based approach permits elucidation of the origin of the different impedance features and quantification of the corresponding losses. Charge transfer resistance decreases with increasing current and is generally lower than the resistances related to mass transport phenomena. Migration losses through the porous electrode are negligible, while convection at the carbon fiber is relevant and results in a linear branch at low frequency in the Nyquist plot. In the presence of significant convection losses the reaction tends to concentrate close to the channel: this leads to a reduction of diffusion losses through the electrode, while the diffusion process in the pores becomes more limiting.
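To illustrate how impedance features map onto regions of a Nyquist plot, a generic Randles cell can be evaluated: an ohmic resistance in series with a faradaic branch (charge-transfer resistance plus a semi-infinite Warburg diffusion element) in parallel with the double-layer capacitance. This is a textbook sketch, not the paper's 1D electrode model:

```python
import math

def randles_impedance(freq, r_ohm, r_ct, c_dl, sigma):
    """Complex impedance (ohm) of a Randles cell at frequency freq (Hz).

    r_ohm : series ohmic resistance
    r_ct  : charge-transfer resistance
    c_dl  : double-layer capacitance (F)
    sigma : Warburg coefficient (ohm s^-1/2)
    """
    w = 2.0 * math.pi * freq
    z_w = sigma / math.sqrt(w) * (1 - 1j)  # semi-infinite Warburg element
    z_f = r_ct + z_w                       # faradaic branch
    z_dl = 1.0 / (1j * w * c_dl)           # double-layer capacitance
    return r_ohm + (z_f * z_dl) / (z_f + z_dl)
```

At high frequency the real part collapses onto the ohmic resistance, while at low frequency the diffusion (Warburg) term dominates, producing the inclined low-frequency branch analogous to the convection/diffusion feature described above.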

  9. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    Science.gov (United States)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate the regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model; hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time, without losing the detailed knowledge from the nitrogen transport modeling. This was validated with modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and the modeled excess nitrogen correspond at 94 %. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
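Regression trees of the kind produced by GUIDE partition the predictor space and assign a constant response (here, excess nitrogen) to each leaf. A depth-1 toy version that fits the single best split by squared error is sketched below; it is purely illustrative, as GUIDE's split-selection statistics are considerably more elaborate:

```python
def fit_stump(x, y):
    """Fit a depth-1 regression tree (a stump): choose the split on x
    that minimises the summed squared error of the two leaf means,
    and return a predictor function."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]
    best = None  # (sse, threshold, left mean, right mean)
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        thr = 0.5 * (xs[k - 1] + xs[k])
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda v: ml if v <= thr else mr
```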

  10. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    Science.gov (United States)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of metal end products. Controlling manufacturing-process-induced changes to a product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of manufactured parts by utilizing physics-based process simulations and other computational methods. Two different manufacturing processes, one conventional and one advanced, are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D finite element (FE) process simulations are developed, and experimental data are generated to validate these process simulation models against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: (i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, (ii) predicting microhardness using non-linear regression models and the Random Forests method, and (iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: (i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, (ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, (iii) predicting the thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, and (iv) predicting localized solidification using the phase field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of
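The JMAK model mentioned above expresses the recrystallised fraction as X(t) = 1 − exp(−k·tⁿ). A minimal sketch follows; the rate constant k and Avrami exponent n are material- and temperature-dependent inputs that would come from calibration:

```python
import math

def jmak_fraction(k, n, t):
    """JMAK recrystallised fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

def jmak_time(k, n, x):
    """Time to reach recrystallised fraction x (inverse of the above)."""
    return (-math.log(1.0 - x) / k) ** (1.0 / n)
```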

  11. Diagnosis of the hydrology of a small Arctic basin at the tundra-taiga transition using a physically based hydrological model

    Science.gov (United States)

    Krogh, Sebastian A.; Pomeroy, John W.; Marsh, Philip

    2017-07-01

    A better understanding of cold regions hydrological processes and regimes in transitional environments is critical for predicting future Arctic freshwater fluxes under climate and vegetation change. A physically based hydrological model using the Cold Regions Hydrological Model platform was created for a small Arctic basin in the tundra-taiga transition region. The model represents snow redistribution and sublimation by wind and vegetation, snowmelt energy budget, evapotranspiration, subsurface flow through organic terrain, infiltration to frozen soils, freezing and thawing of soils, permafrost and streamflow routing. The model was used to reconstruct the basin water cycle over 28 years to understand and quantify the mass fluxes controlling its hydrological regime. Model structure and parameters were set from the current understanding of Arctic hydrology, remote sensing, field research in the basin and region, and calibration against streamflow observations. Calibration was restricted to subsurface hydraulic and storage parameters. Multi-objective evaluation of the model using observed streamflow, snow accumulation and ground freeze/thaw state showed adequate simulation. Significant spatial variability in the winter mass fluxes was found between tundra, shrub and forested sites, particularly due to the substantial blowing snow redistribution and sublimation from the wind-swept upper basin, as well as sublimation of canopy-intercepted snow from the forest (about 17% of snowfall). At the basin scale, the model showed that evapotranspiration is the largest loss of water (47%), followed by streamflow (39%) and sublimation (14%). The model's streamflow performance sensitivity to a set of parameters was analysed, as well as the mean annual mass balance uncertainty associated with these parameters.

  12. Examining the Self-Assembly of Rod-Coil Block Copolymers via Physics Based Polymer Models and Polarized X-Ray Scattering

    Science.gov (United States)

    Hannon, Adam; Sunday, Daniel; Windover, Donald; Liman, Christopher; Bowen, Alec; Khaira, Gurdaman; de Pablo, Juan; Delongchamp, Dean; Kline, R. Joseph

    Photovoltaics, flexible electronics, and stimuli-responsive materials all require enhanced methodology to examine their nanoscale molecular orientation. The mechanical, electronic, optical, and transport properties of devices made from these materials are all a function of this orientation. The polymer chains in these materials are best modeled as semi-flexible to rigid rods. Characterizing the rigidity and molecular orientation of these polymers non-invasively is currently being pursued by using polarized resonant soft X-ray scattering (P-RSoXS). In this presentation, we show recent work on implementing such a characterization process using a rod-coil block copolymer system in the rigid-rod limit. We first demonstrate how we have used physics-based models such as self-consistent field theory (SCFT) in non-polarized RSoXS work to fit scattering profiles for thin film coil-coil PS-b-PMMA block copolymer systems. We then show that by using a wormlike chain partition function in the SCFT formalism to model the rigid-rod block, the methodology can be used there as well to extract the molecular orientation of the rod block from a simulated P-RSoXS experiment. The results from the work show the potential of the technique to extract thermodynamic and morphological sample information.

  13. Characterization of System Level Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    Science.gov (United States)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
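One common way to move a reliability model from the time domain to the particle fluence domain is to let fluence play the role of exposure in a Poisson upset process. The abstract does not specify the model form the authors use, so the exponential reliability model and the parameters below are illustrative assumptions:

```python
import math

def seu_reliability(fluence, cross_section, n_bits):
    """Probability that a device of n_bits has experienced no upset
    after a given particle fluence (particles/cm^2), assuming a
    constant per-bit SEU cross-section (cm^2/bit) and a Poisson upset
    process -- i.e., the exponential reliability model with time
    replaced by fluence."""
    expected_upsets = cross_section * fluence * n_bits
    return math.exp(-expected_upsets)
```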

  14. Validation of a physically based catchment model for application in post-closure radiological safety assessments of deep geological repositories for solid radioactive wastes.

    Science.gov (United States)

    Thorne, M C; Degnan, P; Ewen, J; Parkin, G

    2000-12-01

    The physically based river catchment modelling system SHETRAN incorporates components representing water flow, sediment transport and radionuclide transport both in solution and bound to sediments. The system has been applied to simulate hypothetical future catchments in the context of post-closure radiological safety assessments of a potential site for a deep geological disposal facility for intermediate and certain low-level radioactive wastes at Sellafield, west Cumbria. In order to have confidence in the application of SHETRAN for this purpose, various blind validation studies have been undertaken. In earlier studies, the validation was undertaken against uncertainty bounds in model output predictions set by the modelling team on the basis of how well they expected the model to perform. However, validation can also be carried out with bounds set on the basis of how well the model is required to perform in order to constitute a useful assessment tool. Herein, such an assessment-based validation exercise is reported. This exercise related to a field plot experiment conducted at Calder Hollow, west Cumbria, in which the migration of strontium and lanthanum in subsurface Quaternary deposits was studied on a length scale of a few metres. Blind predictions of tracer migration were compared with experimental results using bounds set by a small group of assessment experts independent of the modelling team. Overall, the SHETRAN system performed well, failing only two out of seven of the imposed tests. Furthermore, of the five tests that were not failed, three were positively passed even when a pessimistic view was taken as to how measurement errors should be taken into account. It is concluded that the SHETRAN system, which is still being developed further, is a powerful tool for application in post-closure radiological safety assessments.

  15. System-Level Heat Transfer Analysis, Thermal- Mechanical Cyclic Stress Analysis, and Environmental Fatigue Modeling of a Two-Loop Pressurized Water Reactor. A Preliminary Study

    Energy Technology Data Exchange (ETDEWEB)

    Mohanty, Subhasish [Argonne National Lab. (ANL), Argonne, IL (United States); Soppet, William [Argonne National Lab. (ANL), Argonne, IL (United States); Majumdar, Saurin [Argonne National Lab. (ANL), Argonne, IL (United States); Natesan, Ken [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-03

    This report provides an update on an assessment of environmentally assisted fatigue for light water reactor components under extended service conditions. It is a deliverable in April 2015 under the work package for environmentally assisted fatigue under DOE's Light Water Reactor Sustainability program. In this report, updates are discussed related to a preliminary system-level finite element model of a two-loop pressurized water reactor (PWR). Based on this model, system-level heat transfer analysis and subsequent thermal-mechanical stress analysis were performed for typical design-basis thermal-mechanical fatigue cycles. The in-air fatigue lives of components, such as the hot and cold legs, were estimated on the basis of stress analysis results, ASME in-air fatigue life estimation criteria, and fatigue design curves. Furthermore, environmental correction factors and the associated PWR-environment fatigue lives of the hot and cold legs were estimated by using the computed stress and strain histories and the approach described in NUREG/CR-6909. The discussed models and results are preliminary, and further development is required for more accurate life prediction of reactor components. This report presents only the finite element modeling activities; multiple tensile and fatigue tests were conducted in the interim, and the related experimental results will be presented in the year-end report.
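Fatigue-life bookkeeping of this kind typically accumulates a cumulative usage factor via Miner's rule, with the allowable in-air cycles for each transient divided by an environmental correction factor Fen. The NUREG/CR-6909 approach computes Fen from strain rate, temperature and dissolved oxygen; in the sketch below Fen values are simply taken as given inputs:

```python
def environmental_usage_factor(cycles, fen):
    """Cumulative fatigue usage factor via Miner's rule with
    environmental correction: each transient's allowable in-air cycle
    count is divided by its Fen, and failure is predicted when the
    summed usage reaches 1.

    cycles : list of (n_applied, n_allowable_in_air) per transient
    fen    : list of environmental correction factors, same order
    """
    return sum(n / (na / f) for (n, na), f in zip(cycles, fen))
```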

  16. Simulations of future runoff conditions for glacierized catchments in the Ötztal Alps (Austria) using the physically based hydroclimatological model AMUNDSEN

    Science.gov (United States)

    Hanzer, Florian; Förster, Kristian; Marke, Thomas; Strasser, Ulrich

    2016-04-01

    Assessing the amount of water resources stored in mountain catchments as snow and ice as well as the timing of meltwater production and the resulting streamflow runoff is of high interest for glaciohydrological investigations and hydropower production. Climate change induced seasonal shifts in snow and ice melt will alter the hydrological regimes in glacierized catchments in terms of both timing and magnitude of discharge. We present the setup of the hydroclimatological model AMUNDSEN for a highly glacierized (24 %), 558 km² study area (1760–3768 m a.s.l.) in the Ötztal Alps (Austria), and first results of simulated future runoff conditions. The study region comprises the headwater catchments of the valleys Ötztal, Pitztal, and Kaunertal, which contribute to the streamflow of the river Inn. AMUNDSEN is a fully distributed physically based model designed to quantify the energy and mass balance of snow and ice surfaces in complex topography as well as streamflow generation for a given catchment. The model has been extensively validated for past conditions and has been extended by an empirical glacier evolution model (Δh approach) for the present study. Statistically downscaled EURO-CORDEX climate simulations covering the RCP4.5 and RCP8.5 scenarios are used as the meteorological forcing for the period 2006–2050. Model results are evaluated in terms of magnitude and change of the contributions of the individual runoff components (snowmelt, ice melt, rain) in the subcatchments as well as the change in glacier volume and area.

  17. The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making.

    Science.gov (United States)

    Hamker, Fred H

    2008-07-15

    Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result appears of specific interest since the model was designed for a completely different purpose than to explain feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception.

  18. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
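The LSTM networks used for this inverse-control task are built from the standard LSTM cell recurrence. A minimal NumPy sketch of a single cell's forward pass follows; the trained architectures of the paper are not specified here, so the shapes and the [input, forget, cell, output] gate ordering are illustrative conventions:

```python
import numpy as np

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """Forward pass of one LSTM cell. W (4n x d) and U (4n x n) map the
    input and previous hidden state onto the stacked
    [input, forget, cell, output] gate pre-activations."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n = h_prev.shape[0]
    gates = W @ x + U @ h_prev + b
    i = sigmoid(gates[:n])        # input gate
    f = sigmoid(gates[n:2 * n])   # forget gate
    g = np.tanh(gates[2 * n:3 * n])  # candidate cell state
    o = sigmoid(gates[3 * n:])    # output gate
    c = f * c_prev + i * g        # new cell state
    h = o * np.tanh(c)            # new hidden state
    return h, c
```

In an inverse-control setting, sequences of desired output sound features would be fed through stacked cells like this one, with the network trained to emit the gesture signals that drive the physics-based synthesizer.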

  19. Towards a system level understanding of non-model organisms sampled from the environment: a network biology approach.

    Directory of Open Access Journals (Sweden)

    Tim D Williams

    2011-08-01

    Full Text Available The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations.

  1. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to the HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR rats can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500
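    The reported capacity differences can be caricatured by a toy fuel-selection rule in which FA oxidation supplies ATP demand up to a transport/oxidation capacity and glucose covers the remainder. The numbers below merely mirror the reported ~50% and 5-fold reductions and are not fitted model values.

```python
def fuel_split(atp_demand, fa_capacity):
    """Toy fuel-selection rule: fatty-acid (FA) oxidation supplies ATP up to a
    transport/oxidation capacity; glucose covers the remainder."""
    fa = min(atp_demand, fa_capacity)
    glucose = atp_demand - fa
    return fa, glucose

# Illustrative capacities: LCR modeled with ~50% of total mitochondrial
# activity and a further 5-fold reduction in FA transport relative to HCR
# (numbers chosen only to mirror the reported ratios, not fitted values).
hcr_fa_capacity = 10.0
lcr_fa_capacity = 10.0 * 0.5 / 5.0   # = 1.0

demand = 8.0
hcr = fuel_split(demand, hcr_fa_capacity)  # HCR keeps burning FA
lcr = fuel_split(demand, lcr_fa_capacity)  # LCR shifts to glucose
```

This reproduces the qualitative switch: the HCR-like parameters meet the whole demand with FA, while the LCR-like parameters push most of the flux onto glucose.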

  2. The classification of lung cancers and their degree of malignancy by FTIR, PCA-LDA analysis, and a physics-based computational model.

    Science.gov (United States)

    Kaznowska, E; Depciuch, J; Łach, K; Kołodziej, M; Koziorowska, A; Vongsvivut, J; Zawlik, I; Cholewa, M; Cebulski, J

    2018-08-15

    Lung cancer has the highest mortality rate of all malignant tumours. Current outcomes of cancer treatment, as well as its diagnostics, remain unsatisfactory. Therefore it is very important to introduce modern diagnostic tools, which will allow for rapid classification of lung cancers and their degree of malignancy. For this purpose, the authors propose the use of Fourier Transform InfraRed (FTIR) spectroscopy combined with Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA) and a physics-based computational model. The FTIR spectra obtained for lung cancer tissues (adenocarcinoma and squamous cell carcinoma) show a shift in wavenumbers compared with control tissue spectra. Furthermore, in the FTIR spectra of adenocarcinoma there are no peaks corresponding to glutamate or phospholipid functional groups. Moreover, in the case of G2 and G3 adenocarcinoma, the absence of an OH-group peak was noted. Thus, FTIR spectroscopy appears to be a valuable tool for classifying lung cancer and determining its degree of malignancy. Copyright © 2018 Elsevier B.V. All rights reserved.
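    A minimal numpy sketch of the PCA-based classification workflow on synthetic "spectra". The data, peak positions, and the nearest-centroid rule standing in for the full LDA step are all illustrative assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": two classes differing by a shifted absorption peak,
# standing in for control vs. tumour FTIR spectra (illustrative data only).
wavenumbers = np.linspace(1000, 1800, 200)

def spectrum(center, n):
    return (np.exp(-((wavenumbers - center) ** 2) / 2e3)
            + 0.05 * rng.normal(size=(n, wavenumbers.size)))

X = np.vstack([spectrum(1400, 30), spectrum(1420, 30)])
y = np.array([0] * 30 + [1] * 30)

# PCA via SVD on mean-centred data; keep 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# Nearest-centroid classification in PC space (a minimal stand-in for LDA).
centroids = np.array([scores[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The wavenumber shift between the two synthetic classes is what the projection picks up, which mirrors how shifted peaks separate tumour and control spectra in PC space.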

  3. Oblique incidence effects in direct x-ray detectors: A first-order approximation using a physics-based analytical model

    International Nuclear Information System (INIS)

    Badano, Aldo; Freed, Melanie; Fang, Yuan

    2011-01-01

    Purpose: The authors describe the modifications to a previously developed analytical model of indirect CsI:Tl-based detector response required for studying oblique x-ray incidence effects in direct semiconductor-based detectors. This first-order approximation analysis allows the authors to describe the associated degradation in resolution in direct detectors and compare the predictions to the published data for indirect detectors. Methods: The proposed model is based on a physics-based analytical description developed by Freed et al. ["A fast, angle-dependent, analytical model of CsI detector response for optimization of 3D x-ray breast imaging systems," Med. Phys. 37(6), 2593-2605 (2010)] that describes detector response functions for indirect detectors and oblique incident x rays. The model, modified in this work to address direct detector response, describes the dependence of the response with x-ray energy, thickness of the transducer layer, and the depth-dependent blur and collection efficiency. Results: The authors report the detector response functions for indirect and direct detector models for typical thicknesses utilized in clinical systems for full-field digital mammography (150 μm for indirect CsI:Tl and 200 μm for a-Se direct detectors). The results suggest that the oblique incidence effect in a semiconductor detector differs from that in indirect detectors in two ways: the direct detector model produces a sharper overall PRF compared to the response corresponding to the indirect detector model for normal x-ray incidence, and a larger relative increase in blur along the x-ray incidence direction compared to that found in indirect detectors with respect to the response at normal incidence angles. Conclusions: Compared to the effect seen in indirect detectors, the direct detector model exhibits a sharper response at normal x-ray incidence and a larger relative increase in blur along the x-ray incidence direction with respect to the blur in the

  4. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    The magnetosphere is a major source of energy for the Earth’s ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Geospace General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-Earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in many respects a more advanced model than CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  5. System-level insights into the cellular interactome of a non-model organism: inferring, modelling and analysing functional gene network of soybean (Glycine max).

    Directory of Open Access Journals (Sweden)

    Yungang Xu

    Cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic, and signal transduction networks, etc., has attracted decades of research focus. However, any single type of network can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and defects, and covering different slices of biological information. The functional gene network (FGN), a consolidated interaction network that models a fuzzy and more generalized notion of gene-gene relations, has been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs in sparsely studied non-model organisms, such as soybean (Glycine max), due to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study on the soybean interactome, which is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: scale-free, small-world architecture and modularization. Verified by co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can provide system-level studies on gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of the study are the basis of our further comprehensive studies on the soybean functional

  6. Physical bases of nuclear medicine

    International Nuclear Information System (INIS)

    Isabelle, D.B.; Ducassou, D.

    1975-01-01

    The physical bases of nuclear medicine are outlined in several chapters devoted successively to: atomic and nuclear structures; nuclear reactions; radioactivity laws; a study of the different types of disintegration; and the interactions of radiation with matter.

  7. Features, Events, and Processes: System Level

    Energy Technology Data Exchange (ETDEWEB)

    D. McGregor

    2004-04-19

    The primary purpose of this analysis is to evaluate System Level features, events, and processes (FEPs). The System Level FEPs typically are overarching in nature, rather than being focused on a particular process or subsystem. As a result, they are best dealt with at the system level rather than addressed within supporting process-level or subsystem-level analyses and model reports. The System Level FEPs also tend to be directly addressed by regulations, guidance documents, or assumptions listed in the regulations; or are addressed in background information used in development of the regulations. This evaluation determines which of the System Level FEPs are excluded from modeling used to support the total system performance assessment for license application (TSPA-LA). The evaluation is based on the information presented in analysis reports, model reports, direct input, or corroborative documents that are cited in the individual FEP discussions in Section 6.2 of this analysis report.

  8. PRISMA: Program of Research to Integrate the Services for the Maintenance of Autonomy. A system-level integration model in Quebec

    Directory of Open Access Journals (Sweden)

    Margaret MacAdam

    2015-09-01

    The Program of Research to Integrate the Services for the Maintenance of Autonomy (PRISMA) began in Quebec in 1999. Evaluation results indicated that the PRISMA Project improved the system of care for the frail elderly at no additional cost. In 2001, the Quebec Ministry of Health and Social Services made implementing the six features of the PRISMA approach a province-wide goal in the programme now known as RSIPA (French acronym). Extensive province-wide progress has been made since then, but ongoing challenges include reducing unmet need for case management and home care services, creating incentives for increased physician participation in care planning and improving the computerized client chart, among others. PRISMA is the only evaluated international model of a coordination approach to integration and one of the few, if not the only, integration model to have been adopted at the system level by policy-makers.

  9. Assessing ecohydrological controls on catchment water storage, flux and age dynamics using tracers in a physically-based, spatially distributed model

    Science.gov (United States)

    Kuppel, S.; Tetzlaff, D.; Maneta, M. P.; Soulsby, C.

    2017-12-01

    Stable water isotope tracing has been extensively used in a wide range of geographical environments as a means to understand the sources, flow paths and ages of water stored and exiting a landscape via evapotranspiration, surface runoff and/or stream flow. Comparisons of isotopic signatures of precipitation and water in streams, soils, groundwater and plant xylem facilitate the assessment of how plant water use may affect preferential hydrologic pathways, storage dynamics and transit times in the critical zone. While tracers are also invaluable for testing model structure and accuracy, in most cases the measured isotopic signatures have been used to guide the calibration of conceptual runoff models with simplified vegetation and energy balance representation, which lacks sufficient detail to constrain key ecohydrological controls on flow paths and water ages. Here, we use a physically-based, distributed ecohydrological model (EcH2O) which we have extended to track 2H and 18O (including fractionation processes), and water age. This work is part of the "VeWa" project which aims at understanding ecohydrological couplings across climatic gradients in the wider North, where the hydrological implications of projected environmental change are essentially unknown though expected to be high. EcH2O combines a hydrologic scheme with an explicit representation of plant growth and phenology while resolving the energy balance across the soil-vegetation-atmosphere continuum. We focus on a montane catchment in Scotland, where unique long-term, high-resolution hydrometric, ecohydrological and isotopic data allow for extensive model testing and projections. Results show the importance of incorporating soil fractionation processes to explain stream isotope dynamics, particularly seasonal enrichment in this humid, energy-limited catchment. This generic process-based approach facilitates analysis of dynamics in isotopes, storage and ages for the different hydrological compartments
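    For context, the lumped convolution approach often used to derive transit-time distributions from such tracer data can be sketched as below. The exponential TTD and the 60-day mean residence time are assumptions chosen for illustration; EcH2O itself tracks tracers and ages internally rather than by convolution.

```python
import numpy as np

dt = 1.0                       # time step (days)
t = np.arange(0, 365, dt)
tau = 60.0                     # assumed mean residence time (days)
ttd = np.exp(-t / tau) / tau   # exponential transit-time distribution

# Seasonal delta-18O input signal in precipitation (illustrative values).
precip_d18o = -8 + 3 * np.sin(2 * np.pi * t / 365)

# Stream signal = precipitation signal convolved with the TTD: the catchment
# damps and lags the seasonal cycle according to its transit times.
stream_d18o = np.convolve(precip_d18o, ttd)[:t.size] * dt

# Mean age implied by the (truncated, discretized) TTD, close to tau.
mean_age = (t * ttd).sum() / ttd.sum()
```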

  10. Physically-based slope stability modelling and parameter sensitivity: a case study in the Quitite and Papagaio catchments, Rio de Janeiro, Brazil

    Science.gov (United States)

    de Lima Neves Seefelder, Carolina; Mergili, Martin

    2016-04-01

    conservative than those yielded with the infinite slope stability model. The sensitivity of AUCROC to variations in the geohydraulic parameters remains small as long as the calculated degree of saturation of the soils is sufficient to result in the prediction of a significant amount of landslide release pixels. Due to the poor sensitivity of AUCROC to variations of the geotechnical and geohydraulic parameters it is hard to optimize the parameters by means of statistics. Instead, the results produced with many different combinations of parameters correspond reasonably well with the distribution of the observed landslide release areas, even though they vary considerably in terms of their conservativeness. Considering the uncertainty inherent in all geotechnical and geohydraulic data, and the impossibility to capture the spatial distribution of the parameters by means of laboratory tests in sufficient detail, we conclude that landslide susceptibility maps yielded by catchment-scale physically-based models should not be interpreted in absolute terms. Building on the assumption that our findings are generally valid, we suggest that efforts to develop better strategies for dealing with the uncertainties in the spatial variation of the key parameters should be given priority in future slope stability modelling efforts.
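    For reference, AUCROC can be computed directly from the Mann-Whitney rank-sum identity; the susceptibility scores and landslide-release labels below are hypothetical.

```python
import numpy as np

def auc_roc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    AUC = P(score of a random positive pixel > score of a random negative)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):      # average ranks over ties
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical susceptibility scores vs. observed landslide-release pixels.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
auc = auc_roc(scores, labels)        # 12 of 16 positive/negative pairs ranked correctly
```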

  11. Physics-Based Probabilistic Design Tool with System-Level Reliability Constraint, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The work proposed herein would develop a set of analytic methodologies and a computer tool suite enabling aerospace hardware designers to rapidly determine optimum...

  12. Physics-Based Probabilistic Design Tool with System-Level Reliability Constraint, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The work proposed herein would establish a concurrent design environment that enables aerospace hardware designers to rapidly determine optimum risk-constrained...

  13. Intrinsically motivated action-outcome learning and goal-based action recall: a system-level bio-constrained computational model.

    Science.gov (United States)

    Baldassarre, Gianluca; Mannella, Francesco; Fiore, Vincenzo G; Redgrave, Peter; Gurney, Kevin; Mirolli, Marco

    2013-05-01

    Reinforcement (trial-and-error) learning in animals is driven by a multitude of processes. Most animals have evolved several sophisticated systems of 'extrinsic motivations' (EMs) that guide them to acquire behaviours allowing them to maintain their bodies, defend against threat, and reproduce. Animals have also evolved various systems of 'intrinsic motivations' (IMs) that allow them to acquire actions in the absence of extrinsic rewards. These actions are used later to pursue such rewards when they become available. Intrinsic motivations have been studied in Psychology for many decades and their biological substrates are now being elucidated by neuroscientists. In the last two decades, investigators in computational modelling, robotics and machine learning have proposed various mechanisms that capture certain aspects of IMs. However, we still lack models of IMs that attempt to integrate all key aspects of intrinsically motivated learning and behaviour while taking into account the relevant neurobiological constraints. This paper proposes a bio-constrained system-level model that contributes a major step towards this integration. The model focusses on three processes related to IMs and on the neural mechanisms underlying them: (a) the acquisition of action-outcome associations (internal models of the agent-environment interaction) driven by phasic dopamine signals caused by sudden, unexpected changes in the environment; (b) the transient focussing of visual gaze and actions on salient portions of the environment; (c) the subsequent recall of actions to pursue extrinsic rewards based on goal-directed reactivation of the representations of their outcomes. The tests of the model, including a series of selective lesions, show how the focussing processes lead to a faster learning of action-outcome associations, and how these associations can be recruited for accomplishing goal-directed behaviours. 
The model, together with the background knowledge reviewed in the paper
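    The surprise-driven acquisition of action-outcome associations and their goal-directed recall can be caricatured in a few lines. Everything here, including the tabular world model, the log-surprise signal, and the four-action environment, is a schematic illustration and not the authors' neural architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
n_actions, n_outcomes = 4, 4
true_outcome = np.array([2, 0, 3, 1])      # hidden action -> outcome mapping
counts = np.ones((n_actions, n_outcomes))  # Dirichlet-style outcome counts

surprise_history = []
for step in range(500):
    a = rng.integers(n_actions)            # exploratory action
    o = true_outcome[a]
    p = counts[a] / counts[a].sum()
    surprise_history.append(-np.log(p[o])) # unexpected outcomes are salient
    counts[a, o] += 1                      # update the internal model

model = counts / counts.sum(axis=1, keepdims=True)

def recall(goal):
    """Goal-directed recall: choose the action whose predicted outcome
    distribution puts the most mass on the desired outcome."""
    return int(np.argmax(model[:, goal]))
```

As learning progresses the surprise signal decays, mirroring the phasic response to no-longer-novel events, while the learned associations can be read out in reverse to serve goals.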

  14. Reviews on Physically Based Controllable Fluid Animation

    Directory of Open Access Journals (Sweden)

    Pizzanu Kanongchaiyos

    2010-04-01

    In computer graphics animation, tools are required for fluid-like motions that are controllable by users or animators, since such techniques are applied to commercial animations such as advertisements and films. Many methods have been proposed to model controllable fluid simulation, driven by the need for realistic motion, robustness, adaptation, and support for richer control models. Physically based models for different states of substances have been applied in order to permit animators to almost effortlessly create interesting, realistic, and plausible animation of natural phenomena such as water flow, smoke spread, etc. In this paper, we introduce methods for simulation based on physical models and techniques for controlling the flow of fluid, with particular focus on particle-based methods. We then discuss the existing control methods against three criteria: controllability, realism, and computation time. Finally, we give a brief overview of current trends in this research area.
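    The core idea behind controllable particle-based animation, blending physical forces with a user-defined control term, can be sketched as follows. All gains, particle counts, and targets here are illustrative choices, not a method from the survey.

```python
import numpy as np

# Minimal particle-system sketch: free physical motion (gravity) blended with
# a spring-like control force that steers particles toward user-specified
# target positions, trading physical realism against controllability.
rng = np.random.default_rng(3)
n = 64
pos = rng.uniform(0, 1, size=(n, 2))     # initial particle positions
vel = np.zeros((n, 2))
# Target shape: a horizontal line the animator wants the "fluid" to form.
targets = np.stack([np.linspace(0, 1, n), np.full(n, 0.8)], axis=1)

gravity = np.array([0.0, -9.8])
k_control, damping, dt = 200.0, 0.9, 0.01

for _ in range(400):
    force = gravity + k_control * (targets - pos)  # physics + control term
    vel = damping * (vel + dt * force)
    pos = pos + dt * vel

mean_dist = np.linalg.norm(pos - targets, axis=1).mean()
```

Raising `k_control` tightens adherence to the target shape at the expense of natural-looking motion, which is exactly the controllability/realism trade-off the review discusses.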

  15. Enhancing the Predicting Accuracy of the Water Stage Using a Physical-Based Model and an Artificial Neural Network-Genetic Algorithm in a River System

    Directory of Open Access Journals (Sweden)

    Wen-Cheng Liu

    2014-06-01

    Accurate simulations of river stages during typhoon events are critically important for flood control and are necessary for disaster prevention and water resources management in Taiwan. This study applies two artificial neural network (ANN) models, the back propagation neural network (BPNN) and genetic algorithm neural network (GANN) techniques, to improve predictions from a one-dimensional flood routing hydrodynamic model of the water stages during typhoon events in the Danshuei River system in northern Taiwan. The hydrodynamic model is driven by freshwater discharges at the upstream boundary conditions and by the water levels at the downstream boundary condition. The model provides a sound physical basis for simulating water stages along the river. The simulated results show that the hydrodynamic model alone cannot accurately reproduce the water stages at different stations during typhoon events in the model calibration and verification phases. The BPNN and GANN models improve the simulated water stages compared with the performance of the hydrodynamic model. The GANN model satisfactorily predicts water stages during the training and verification phases and exhibits the lowest values of mean absolute error, root-mean-square error and peak error compared with the simulated results at different stations using the hydrodynamic model and the BPNN model. Comparison of the simulated results shows that the GANN model can be successfully applied to predict the water stages of the Danshuei River system during typhoon events.
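    The three comparison metrics named in the abstract are straightforward to compute; the hydrographs below are hypothetical stand-ins for observed and simulated stages.

```python
import numpy as np

def stage_errors(observed, simulated):
    """Mean absolute error, root-mean-square error, and peak (stage) error,
    as commonly used to score water-stage predictions against observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    mae = np.abs(sim - obs).mean()
    rmse = np.sqrt(((sim - obs) ** 2).mean())
    peak_error = sim.max() - obs.max()   # error in the flood peak itself
    return mae, rmse, peak_error

# Hypothetical typhoon-event hydrographs (stages in metres).
observed = [1.0, 1.5, 2.8, 4.2, 3.1, 2.0]
simulated = [1.1, 1.4, 2.5, 3.9, 3.3, 2.1]
mae, rmse, peak = stage_errors(observed, simulated)
```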

  16. Modelling of plastic flow localization and damage development in friction stir welded 6005A aluminium alloy using physics based strain hardening law

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau; Pardoen, Thomas; Tvergaard, Viggo

    2010-01-01

    of these zones was extracted from micro-tensile specimens cut parallel to the welding direction. The measured material properties and weld topology were introduced into a 3D finite element model, fully coupled with the damage model. A Voce law hardening model involving a constant stage IV is used within...

  17. A Physically-Based Equivalent Circuit Model for the Impedance of a LiFePO4/Graphite 26650 Cylindrical Cell

    DEFF Research Database (Denmark)

    Scipioni, Roberto; Jørgensen, Peter Stanley; Graves, Christopher R.

    2017-01-01

    In this work an Equivalent Circuit Model (ECM) is developed and used to model impedance spectra measured on a commercial 26650 LiFePO4/Graphite cylindrical cell. The ECM is based on measurements and modeling of impedance spectra recorded separately on cathode (LiFePO4) and anode (Graphite) samples...
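    The basic building block of such an ECM, a series resistance in front of a parallel R||C element, has a closed-form impedance. The element values below are illustrative, not the fitted parameters of the 26650 cell.

```python
import numpy as np

def rc_impedance(freq, r_serial, r_ct, c_dl):
    """Impedance of a series resistance plus one parallel R||C element:
    Z(w) = R_s + R_ct / (1 + j*w*R_ct*C_dl). Values here are illustrative."""
    omega = 2 * np.pi * np.asarray(freq, dtype=float)
    return r_serial + r_ct / (1 + 1j * omega * r_ct * c_dl)

freqs = np.logspace(-2, 4, 7)         # 10 mHz .. 10 kHz
z = rc_impedance(freqs, 0.02, 0.05, 0.5)
# At low frequency Z -> R_s + R_ct; at high frequency the capacitor shorts
# the charge-transfer resistance and Z -> R_s, tracing a semicircle in the
# Nyquist plot -- the feature an ECM fit assigns to each electrode process.
```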

  18. Features, Events, and Processes: System Level

    Energy Technology Data Exchange (ETDEWEB)

    D. McGregor

    2004-10-15

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of the system-level features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment for the license application (TSPA-LA). A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for screening decisions. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.113 (d, e, and f) (DIRS 156605). The system-level FEPs addressed in this report typically are overarching in nature, rather than being focused on a particular process or subsystem. As a result, they are best dealt with at the system level rather than addressed within supporting process-level or subsystem-level analyses and models reports. The system-level FEPs also tend to be directly addressed by regulations, guidance documents, or assumptions listed in the regulations; or are addressed in background information used in development of the regulations. For included FEPs, this analysis summarizes the implementation of the FEP in the TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from the TSPA-LA (i.e., why the FEP is excluded). The initial version of this report (Revision 00) was developed to support the total system performance assessment for site recommendation (TSPA-SR). This revision addresses the license application (LA) FEP List (DIRS 170760).

  19. Features, Events, and Processes: System Level

    International Nuclear Information System (INIS)

    D. McGregor

    2004-01-01

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of the system-level features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment for the license application (TSPA-LA). A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for screening decisions. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.113 (d, e, and f) (DIRS 156605). The system-level FEPs addressed in this report typically are overarching in nature, rather than being focused on a particular process or subsystem. As a result, they are best dealt with at the system level rather than addressed within supporting process-level or subsystem-level analyses and models reports. The system-level FEPs also tend to be directly addressed by regulations, guidance documents, or assumptions listed in the regulations; or are addressed in background information used in development of the regulations. For included FEPs, this analysis summarizes the implementation of the FEP in the TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from the TSPA-LA (i.e., why the FEP is excluded). The initial version of this report (Revision 00) was developed to support the total system performance assessment for site recommendation (TSPA-SR). This revision addresses the license application (LA) FEP List (DIRS 170760)

  20. A physical based equivalent circuit modeling approach for ballasted InP DHBT multi-finger devices at millimeter-wave frequencies

    DEFF Research Database (Denmark)

    Midili, Virginio; Squartecchia, Michele; Johansen, Tom Keinicke

    2016-01-01

    equivalent circuit description. In the first approach, the EM simulations of contact pads and ballasting network are combined with the small-signal model of the intrinsic device. In the second approach, the ballasting network is modeled with lumped components derived from physical analysis of the layout...

  1. Development and assessment of a physics-based simulation model to investigate residential PM2.5 infiltration across the US housing stock

    Science.gov (United States)

    The Lawrence Berkeley National Laboratory Population Impact Assessment Modeling Framework (PIAMF) was expanded to enable determination of indoor PM2.5 concentrations and exposures in a set of 50,000 homes representing the US housing stock. A mass-balance model is used to calculat...

  2. Space elevator systems level analysis

    Energy Technology Data Exchange (ETDEWEB)

    Laubscher, B. E. (Bryan E.)

    2004-01-01

    The Space Elevator (SE) represents a major paradigm shift in space access. It involves new, untried technologies in most of its subsystems. Thus the successful construction of the SE requires a significant amount of development, which in turn implies a high level of risk for the SE. A rational way to manage such a high-risk endeavor is to follow a disciplined approach to the challenges. A systems level analysis informs this process and is the guide to where resources should be applied in the development processes. It is an efficient path that, if followed, minimizes the overall risk of the system's development. One key aspect of a systems level analysis is that the overall system is divided naturally into its subsystems, and those subsystems are further subdivided as appropriate for the analysis. By dealing with the complex system in layers, the parameter space of decisions is kept manageable. Moreover, resources are not expended capriciously; rather, resources are put toward the biggest challenges and most promising solutions. This overall graded approach is a proven road to success. This paper will present a systems level analysis of the SE by subdividing its components into their subsystems to determine their level of technological maturity. The analysis includes topics such as nanotube technology, deployment scenario, power beaming technology, ground-based hardware and operations, ribbon maintenance and repair, and climber technology.

  3. Influence of spatial discretization, underground water storage and glacier melt on a physically-based hydrological model of the Upper Durance River basin

    Science.gov (United States)

    Lafaysse, M.; Hingray, B.; Etchevers, P.; Martin, E.; Obled, C.

    2011-06-01

    The SAFRAN-ISBA-MODCOU hydrological model (Habets et al., 2008) presents severe limitations for alpine catchments. Here we propose possible model adaptations. For the catchment discretization, Relatively Homogeneous Hydrological Units (RHHUs) are used instead of the classical 8 km square grid. They are defined from the delineation of hydrological subbasins, elevation bands, and aspect classes. Glacierized and non-glacierized areas are also treated separately. In addition, new modules are included in the model for the simulation of glacier melt, and retention of underground water. The improvement resulting from each model modification is analysed for the Upper Durance basin. RHHUs allow the model to better account for the high spatial variability of the hydrological processes (e.g. snow cover). The timing and the intensity of the spring snowmelt floods are significantly improved owing to the representation of water retention by aquifers. Despite the relatively small area covered by glaciers, accounting for glacier melt is necessary for simulating the late summer low flows. The modified model is robust over a long simulation period and it produces a good reproduction of the intra- and interannual variability of discharge, which is a necessary condition for its application in a modified climate context.

  4. The value of oxygen-isotope data and multiple discharge records in calibrating a fully-distributed, physically-based rainfall-runoff model (CRUM3) to improve predictive capability

    Science.gov (United States)

    Neill, Aaron; Reaney, Sim

    2015-04-01

    Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was firstly used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. 
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to

  5. Distributed physically-based precipitation-runoff models for continuous simulation of daily runoff in the Columbia River Basin, British Columbia

    International Nuclear Information System (INIS)

    Chin, W.Q.; Salmon, G.M.; Luo, W.

    1997-01-01

    The need to accurately forecast precipitation and water runoff is essential to the operations of hydroelectric power plants. In 1993, BC Hydro established a program to develop, test and improve new and existing atmospheric and hydrologic models that would be suitable for application over the mountainous terrain of British Columbia. The objective was to improve the reliability and accuracy of hydrological models that simulate and forecast precipitation and runoff. Another objective was to develop a modelling system for hydrologic risk assessment in dam safety evaluation. This paper describes progress made in implementing timely measures to resolve problems of reservoir operation in balancing the need for generation of hydroelectric power with conflicting requirements for flood control, fisheries, recreation and other environmental concerns. 23 refs., 11 figs

  6. APPLICATION AND EVALUATION OF AN AGGREGATE PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL FOR QUANTIFYING CHILDREN'S RESIDENTIAL EXPOSURE AND DOSE TO CHLORPYRIFOS

    Science.gov (United States)

    Critical voids in exposure data and models lead risk assessors to rely on conservative assumptions. Risk assessors and managers need improved tools beyond the screening level analysis to address aggregate exposures to pesticides as required by the Food Quality Protection Act o...

  7. Physics Based Modeling in Design and Development for U.S. Defense Held in Denver, Colorado on November 14-17, 2011. Volume 2: Audio and Movie Files

    Science.gov (United States)

    2011-11-17

    Mr. Frank Salvatore, High Performance Technologies FIXED AND ROTARY WING AIRCRAFT 13274 - “CREATE-AV DaVinci : Model-Based Engineering for Systems... Tools for Reliability Improvement and Addressing Modularity Issues in Evaluation and Physical Testing”, Dr. Richard Heine, Army Materiel Systems

  8. Implementation of a physically-based scheme representing light-absorbing impurities deposition, evolution and radiative impacts in the SURFEX/Crocus model

    Science.gov (United States)

    Tuzet, F.; Dumont, M.; Lafaysse, M.; Hagenmuller, P.; Arnaud, L.; Picard, G.; Morin, S.

    2017-12-01

    Light-absorbing impurities decrease snow albedo, increasing the amount of solar energy absorbed by the snowpack. Their most intuitive impact is to accelerate snowmelt. However, the presence of a layer highly concentrated in light-absorbing impurities in the snowpack also modifies its temperature profile, affecting snow metamorphism. New capabilities have been implemented in the detailed snowpack model SURFEX/ISBA-Crocus (referred to as Crocus) to account for impurity deposition and evolution within the snowpack (Tuzet et al., 2017, TCD). Once deposited, the model computes the impurities' mass evolution until the snow melts out. Taking advantage of the recent inclusion of the spectral radiative transfer model TARTES in Crocus, the model explicitly represents the radiative impacts of light-absorbing impurities in snow. In the Pyrenees mountain range, strong sporadic Saharan dust deposition events (referred to as dust outbreaks) can occur during the snow season, leading some snow layers to contain high concentrations of mineral dust. One of the major events of the past years occurred in February 2014, affecting the whole of southern Europe. During the weeks following this dust outbreak, strong avalanche activity was reported in the Aran valley (Pyrenees, Spain). For now, the link between the dust outbreak and the avalanche activity is not demonstrated. We investigate the impact of this dust outbreak on the snowpack stability in the Aran valley using the Crocus model, trying to determine whether the snowpack instability observed after the dust outbreak can be related to the presence of dust. SAFRAN-reanalysis meteorological data are used to drive the model at several altitudes, slopes and aspects. For each slope configuration two different simulations are run: one without dust and one simulating the dust outbreak of February 2014. The two corresponding simulations are then compared to assess the role of impurities in snow metamorphism and stability. On this example, we

  9. On the influence of cell size in physically-based distributed hydrological modelling to assess extreme values in water resource planning

    Directory of Open Access Journals (Sweden)

    M. Egüen

    2012-05-01

    Full Text Available This paper studies the influence of changing spatial resolution on the implementation of distributed hydrological modelling for water resource planning in Mediterranean areas. Different cell sizes were used to investigate variations in the basin hydrologic response given by the model WiMMed, developed in Andalusia (Spain), in a selected watershed. The model was calibrated on a monthly basis from the available daily flow data at the reservoir that closes the watershed, for three different cell sizes, 30, 100, and 500 m, and the effects of this change on the hydrological response of the basin were analysed by means of the comparison of the hydrological variables at different time scales for a 3-yr period, together with the effective values of the calibration parameters obtained for each spatial resolution. The variation in the distribution of the input parameters due to using different spatial resolutions resulted in a change in the obtained hydrological networks and significant differences in other hydrological variables, both at the mean basin scale and at the distributed cell level. Differences in the magnitude of annual and global runoff, together with other hydrological components of the water balance, became apparent. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of a distributed hydrological model to reach a balance between the quality of results and the computational cost; thus, 30- and 100-m cell sizes could be chosen for water resource management without significant decrease in the accuracy of the simulation, but the 500-m cell size resulted in significant overestimation of runoff and, consequently, could involve uncertain decisions based on the expected availability of rainfall excess for storage in the reservoirs. Particular values of the effective calibration parameters are also provided for this hydrological model and the study area.

  10. System-level musings about system-level science (Invited)

    Science.gov (United States)

    Liu, W.

    2009-12-01

    In teleology, a system has a purpose. In physics, a system has a tendency. For example, a mechanical system has a tendency to lower its potential energy. A thermodynamic system has a tendency to increase its entropy. Therefore, if geospace is seen as a system, what is its tendency? Surprisingly or not, there is no simple answer to this question. Or, to flip the statement, the answer is complex, or complexity. We can understand generally why complexity arises, as the geospace boundary is open to influences from the solar wind and Earth’s atmosphere, and components of the system couple to each other in a myriad of ways to make the systemic behavior highly nonlinear. But this still begs the question: What is the system-level approach to geospace science? A reductionist view might assert that as our understanding of a component or subsystem progresses to a certain point, we can couple some together to understand the system on a higher level. However, in practice, a subsystem can almost never be observed in isolation from the others. Even if that were possible, there is no guarantee that the subsystem behavior will not change when coupled to others. Hence, there is no guarantee that a subsystem, such as the ring current, has an innate and intrinsic behavior like a hydrogen atom. An absolutist conclusion from this logic can be sobering, as one would have to trace a flash of aurora to the nucleosynthesis in the solar core. The practical answer, however, is more promising; it is a mix of the common sense we call reductionism and awareness that, especially when strongly coupled, subsystems can experience behavioral changes, breakdowns, and catastrophes. If the stock answer to the systemic tendency of geospace is complexity, the objective of the system-level approach to geospace science is to define, measure, and understand this complexity. I will use the example of magnetotail dynamics to illuminate some key points in this talk.

  11. Impacts of Extreme Space Weather Events on Power Grid Infrastructure: Physics-Based Modelling of Geomagnetically-Induced Currents (GICs) During Carrington-Class Geomagnetic Storms

    Science.gov (United States)

    Henderson, M. G.; Bent, R.; Chen, Y.; Delzanno, G. L.; Jeffery, C. A.; Jordanova, V. K.; Morley, S.; Rivera, M. K.; Toth, G.; Welling, D. T.; Woodroffe, J. R.; Engel, M.

    2017-12-01

    Large geomagnetic storms can have devastating effects on power grids. The largest geomagnetic storm ever recorded - called the Carrington Event - occurred in 1859 and produced Geomagnetically Induced Currents (GICs) strong enough to set fires in telegraph offices. It has been estimated that if such a storm occurred today, it would have devastating, long-lasting effects on the North American power transmission infrastructure. Acutely aware of this imminent threat, the North American Electric Reliability Corporation (NERC) was recently instructed to establish requirements for transmission system performance during geomagnetic disturbance (GMD) events and, although the benchmarks adopted were based on the best available data at the time, they suffer from a severely limited physical understanding of the behavior of GMDs and the resulting GICs for strong events. To rectify these deficiencies, we are developing a first-of-its-kind data-informed modelling capability that will provide transformational understanding of the underlying physical mechanisms responsible for the most harmful intense localized GMDs and their impacts on real power transmission networks. This work is being conducted in two separate modes of operation: (1) using historical, well-observed large storm intervals for which robust data-assimilation can be performed, and (2) extending the modelling into a predictive realm in order to assess impacts of poorly and/or never-before observed Carrington-class events. Results of this work are expected to include a potential replacement for the current NERC benchmarking methodology and the development of mitigation strategies in real power grid networks. We report on progress to date and show some preliminary results of modeling large (but not yet extreme) events.

  12. Bridging Thermal Infrared Sensing and Physically-Based Evapotranspiration Modeling: From Theoretical Implementation to Validation Across an Aridity Gradient in Australian Ecosystems

    DEFF Research Database (Denmark)

    Mallick, Kaniska; Toivonen, Erika; Trebs, Ivonne

    2018-01-01

    model, the Surface Temperature Initiated Closure (STIC1.2), that physically integrates TR observations into a combined Penman‐Monteith Shuttleworth‐Wallace (PM‐SW) framework for directly estimating E, and overcoming the uncertainties associated with T0 and gA determination. An evaluation of STIC1.2 against high temporal frequency SEB flux measurements across an aridity gradient in Australia revealed a systematic error of 10% – 52% in E from mesic to arid ecosystem, and low systematic error in sensible heat fluxes (H) (12% – 25%) in all ecosystems. Uncertainty in TR versus moisture availability...

  13. Multi-scale validation of a new soil freezing scheme for a land-surface model with physically-based hydrology

    Directory of Open Access Journals (Sweden)

    I. Gouttevin

    2012-04-01

    Full Text Available Soil freezing is a major feature of boreal regions with substantial impact on climate. The present paper describes the implementation of the thermal and hydrological effects of soil freezing in the land surface model ORCHIDEE, which includes a physical description of continental hydrology. The new soil freezing scheme is evaluated against analytical solutions and in-situ observations at a variety of scales in order to test its numerical robustness, explore its sensitivity to parameterization choices and confront its performance to field measurements at typical application scales.

    Our soil freezing model exhibits a low sensitivity to the vertical discretization for spatial steps in the range of a few millimetres to a few centimetres. It is however sensitive to the temperature interval around the freezing point where phase change occurs, which should be 1 °C to 2 °C wide. Furthermore, linear and thermodynamical parameterizations of the liquid water content lead to similar results in terms of water redistribution within the soil and thermal evolution under freezing. Our approach does not allow firm discrimination of the performance of one approach over the other.

    The new soil freezing scheme considerably improves the representation of runoff and river discharge in regions underlain by permafrost or subject to seasonal freezing. A thermodynamical parameterization of the liquid water content appears more appropriate for an integrated description of the hydrological processes at the scale of the vast Siberian basins. The use of a subgrid variability approach and the representation of wetlands could help capture the features of the Arctic hydrological regime with more accuracy.

    The modeling of the soil thermal regime is generally improved by the representation of soil freezing processes. In particular, the dynamics of the active layer is captured with more accuracy, which is of crucial importance in the prospect of
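    The abstract's recommendation of a 1 °C to 2 °C phase-change interval, with a linear parameterization of liquid water content, can be sketched as a single function. This is a minimal illustration only; the function name and the 1.5 °C default interval are assumptions, not ORCHIDEE's actual code:

```python
def liquid_fraction(temp_c, t_freeze=0.0, interval=1.5):
    """Linear parameterization of the unfrozen (liquid) water fraction.

    All water is liquid at or above t_freeze, all frozen at or below
    t_freeze - interval, and the fraction varies linearly in between
    (the abstract recommends an interval of 1-2 degC).
    """
    if temp_c >= t_freeze:
        return 1.0
    if temp_c <= t_freeze - interval:
        return 0.0
    return (temp_c - (t_freeze - interval)) / interval
```

A thermodynamical (e.g. freezing-point-depression) parameterization would replace the linear ramp with a curve, but, as the abstract notes, the two give similar water redistribution in practice.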

  14. Satellite Collision Modeling with Physics-Based Hydrocodes: Debris Generation Predictions of the Iridium-Cosmos Collision Event and Other Impact Events

    International Nuclear Information System (INIS)

    Springer, H.K.; Miller, W.O.; Levatin, J.L.; Pertica, A.J.; Olivier, S.S.

    2010-01-01

    Satellite collision debris poses risks to existing space assets and future space missions. Predictive models of debris generated from these hypervelocity collisions are critical for developing accurate space situational awareness tools and effective mitigation strategies. Hypervelocity collisions involve complex phenomena that span several time- and length-scales. We have developed a satellite collision debris modeling approach consisting of a Lagrangian hydrocode enriched with smoothed particle hydrodynamics (SPH), advanced material failure models, detailed satellite mesh models, and massively parallel computers. These computational studies enable us to investigate the influence of satellite center-of-mass (CM) overlap and orientation, relative velocity, and material composition on the size, velocity, and material type distributions of collision debris. We have applied our debris modeling capability to the recent Iridium 33-Cosmos 2251 collision event. While the relative velocity was well understood in this event, the degree of satellite CM overlap and orientation was ill-defined. In our simulations, we varied the collision CM overlap and orientation of the satellites from nearly maximum overlap to partial overlap on the outermost extents of the satellites (i.e., solar panels and gravity boom). As expected, we found that with increased satellite overlap, the overall debris cloud mass and momentum (transfer) increases, the average debris size decreases, and the debris velocity increases. The largest predicted debris can also provide insight into which satellite components were further removed from the impact location. A significant fraction of the momentum transfer is imparted to the smallest debris (< 1-5 mm, dependent on mesh resolution), especially in large CM overlap simulations. While the inclusion of the smallest debris is critical to enforcing mass and momentum conservation in hydrocode simulations, there seems to be relatively little interest in their

  15. A Vulnerability-Based, Bottom-up Assessment of Future Riverine Flood Risk Using a Modified Peaks-Over-Threshold Approach and a Physically Based Hydrologic Model

    Science.gov (United States)

    Knighton, James; Steinschneider, Scott; Walter, M. Todd

    2017-12-01

    There is a chronic disconnection among purely probabilistic flood frequency analysis of flood hazards, flood risks, and hydrological flood mechanisms, which hampers our ability to assess future flood impacts. We present a vulnerability-based approach to estimating riverine flood risk that accommodates a more direct linkage between decision-relevant metrics of risk and the dominant mechanisms that cause riverine flooding. We adapt the conventional peaks-over-threshold (POT) framework to be used with extreme precipitation from different climate processes and rainfall-runoff-based model output. We quantify the probability that at least one adverse hydrologic threshold, potentially defined by stakeholders, will be exceeded within the next N years. This approach allows us to consider flood risk as the summation of risk from separate atmospheric mechanisms, and supports a more direct mapping between hazards and societal outcomes. We perform this analysis within a bottom-up framework to consider the relevance and consequences of information, with varying levels of credibility, on changes to atmospheric patterns driving extreme precipitation events. We demonstrate our proposed approach using a case study for Fall Creek in Ithaca, NY, USA, where we estimate the risk of stakeholder-defined flood metrics from three dominant mechanisms: summer convection, tropical cyclones, and spring rain and snowmelt. Using downscaled climate projections, we determine how flood risk associated with a subset of mechanisms may change in the future, and the resultant shift to annual flood risk. The flood risk approach we propose can provide powerful new insights into future flood threats.
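    The headline quantity here, the probability that at least one threshold exceedance occurs within the next N years when each flood-generating mechanism contributes its own annual exceedance probability, reduces to a short calculation. A hedged sketch (the function name and the independence assumption between mechanisms are illustrative, not the authors' implementation):

```python
def risk_over_horizon(annual_probs, n_years):
    """P(at least one exceedance within n_years), combining independent
    flood-generating mechanisms (e.g. summer convection, tropical
    cyclones, spring rain plus snowmelt), each with its own per-year
    exceedance probability."""
    p_quiet_year = 1.0
    for p in annual_probs:
        p_quiet_year *= 1.0 - p  # no exceedance from this mechanism this year
    # quiet every year for n_years, then take the complement
    return 1.0 - p_quiet_year ** n_years
```

With, say, 2%, 1%, and 3% annual exceedance probabilities for the three mechanisms, the 30-year combined risk is already above 80%, which illustrates why summing risk across mechanisms matters.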

  16. Physical bases of the generation of short-term earthquake precursors: A complex model of ionization-induced geophysical processes in the lithosphere-atmosphere-ionosphere-magnetosphere system

    Science.gov (United States)

    Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.

    2015-07-01

    This paper describes the current understanding of the interaction between geospheres from a complex set of physical and chemical processes under the influence of ionization. The sources of ionization involve the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents in nuclear power plants and radioactive waste storage, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.

  17. Quantifying bioalbedo: a new physically based model and discussion of empirical methods for characterising biological influence on ice and snow albedo

    Science.gov (United States)

    Cook, Joseph M.; Hodson, Andrew J.; Gardner, Alex S.; Flanner, Mark; Tedstone, Andrew J.; Williamson, Christopher; Irvine-Fynn, Tristram D. L.; Nilsson, Johan; Bryant, Robert; Tranter, Martyn

    2017-11-01

    The darkening effects of biological impurities on ice and snow have been recognised as a control on the surface energy balance of terrestrial snow, sea ice, glaciers and ice sheets. With a heightened interest in understanding the impacts of a changing climate on snow and ice processes, quantifying the impact of biological impurities on ice and snow albedo (bioalbedo) and its evolution through time is a rapidly growing field of research. However, rigorous quantification of bioalbedo has remained elusive because of difficulties in isolating the biological contribution to ice albedo from that of inorganic impurities and the variable optical properties of the ice itself. For this reason, isolation of the biological signature in reflectance data obtained from aerial/orbital platforms has not been achieved, even when ground-based biological measurements have been available. This paper provides the cell-specific optical properties that are required to model the spectral signatures and broadband darkening of ice. Applying radiative transfer theory, these properties provide the physical basis needed to link biological and glaciological ground measurements with remotely sensed reflectance data. Using these new capabilities we confirm that biological impurities can influence ice albedo, then we identify 10 challenges to the measurement of bioalbedo in the field with the aim of improving future experimental designs to better quantify bioalbedo feedbacks. These challenges are (1) ambiguity in terminology, (2) characterising snow or ice optical properties, (3) characterising solar irradiance, (4) determining optical properties of cells, (5) measuring biomass, (6) characterising vertical distribution of cells, (7) characterising abiotic impurities, (8) surface anisotropy, (9) measuring indirect albedo feedbacks, and (10) measurement and instrument configurations. 
This paper aims to provide a broad audience of glaciologists and biologists with an overview of radiative transfer and

  18. Quantifying bioalbedo: a new physically based model and discussion of empirical methods for characterising biological influence on ice and snow albedo

    Directory of Open Access Journals (Sweden)

    J. M. Cook

    2017-11-01

    Full Text Available The darkening effects of biological impurities on ice and snow have been recognised as a control on the surface energy balance of terrestrial snow, sea ice, glaciers and ice sheets. With a heightened interest in understanding the impacts of a changing climate on snow and ice processes, quantifying the impact of biological impurities on ice and snow albedo (bioalbedo) and its evolution through time is a rapidly growing field of research. However, rigorous quantification of bioalbedo has remained elusive because of difficulties in isolating the biological contribution to ice albedo from that of inorganic impurities and the variable optical properties of the ice itself. For this reason, isolation of the biological signature in reflectance data obtained from aerial/orbital platforms has not been achieved, even when ground-based biological measurements have been available. This paper provides the cell-specific optical properties that are required to model the spectral signatures and broadband darkening of ice. Applying radiative transfer theory, these properties provide the physical basis needed to link biological and glaciological ground measurements with remotely sensed reflectance data. Using these new capabilities we confirm that biological impurities can influence ice albedo, then we identify 10 challenges to the measurement of bioalbedo in the field with the aim of improving future experimental designs to better quantify bioalbedo feedbacks. These challenges are (1) ambiguity in terminology, (2) characterising snow or ice optical properties, (3) characterising solar irradiance, (4) determining optical properties of cells, (5) measuring biomass, (6) characterising vertical distribution of cells, (7) characterising abiotic impurities, (8) surface anisotropy, (9) measuring indirect albedo feedbacks, and (10) measurement and instrument configurations. This paper aims to provide a broad audience of glaciologists and biologists with an overview of

  19. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    Science.gov (United States)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is especially designed for electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current, as well as the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.
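    At its core, SOE bookkeeping is energy counting: the remaining energy fraction drops by the energy drawn in each step divided by the total available energy. A minimal prediction-step sketch (the names, units, and sign convention are assumptions; the paper's AFEKF additionally corrects this open-loop prediction with voltage feedback through the fractional-order model):

```python
def update_soe(soe, voltage, current, dt, e_total):
    """One energy-counting prediction step for state of energy (SOE).

    soe      -- current state of energy in [0, 1]
    voltage  -- terminal voltage (V)
    current  -- terminal current (A); positive = discharge
    dt       -- time step (s)
    e_total  -- total available energy of the cell (J)
    """
    return soe - voltage * current * dt / e_total
```

For example, discharging a cell whose total energy equals one hour of the applied power drains the SOE from 1.0 to 0.0 over that hour.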

  20. Physics-Based Simulations of Natural Hazards

    Science.gov (United States)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear and multi-scale properties of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales; from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. 
This section then focuses on a resulting collaboration using Virtual Quake for a detailed
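    The frequency-magnitude relation mentioned above is the Gutenberg-Richter law, log10 N(≥M) = a − b·M, and simulated catalogs are long enough to estimate its b-value reliably. A sketch using Aki's maximum-likelihood estimator (assumes a catalog complete above m_min; this is standard seismology practice, not Virtual Quake's own code):

```python
import math

def gutenberg_richter_b(magnitudes, m_min):
    """Maximum-likelihood b-value (Aki 1965) for the frequency-magnitude
    relation log10 N(>=M) = a - b*M, from a catalog of event magnitudes
    at or above the completeness magnitude m_min."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)
```

A b-value near 1, typical of natural seismicity, corresponds to a tenfold drop in event count per unit increase in magnitude; simulated catalogs spanning thousands of years tighten this estimate far beyond what a few hundred years of records allow.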

  1. Earth system modelling on system-level heterogeneous architectures: EMAC (version 2.42) on the Dynamical Exascale Entry Platform (DEEP)

    Science.gov (United States)

    Christou, Michalis; Christoudias, Theodoros; Morillo, Julián; Alvarez, Damian; Merx, Hendrik

    2016-09-01

    We examine an alternative approach to heterogeneous cluster-computing in the many-core era for Earth system models, using the European Centre for Medium-Range Weather Forecasts Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model as a pilot application on the Dynamical Exascale Entry Platform (DEEP). A set of interconnected autonomous coprocessors, called a Booster, complements a conventional HPC Cluster and increases its computing performance, offering extra flexibility to expose multiple levels of parallelism and achieve better scalability. The EMAC model atmospheric chemistry code (Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA)) was taskified with an offload mechanism implemented using OmpSs directives. The model was ported to the MareNostrum 3 supercomputer to allow testing with Intel Xeon Phi accelerators on a production-size machine. The changes proposed in this paper are expected to contribute to the eventual adoption of Cluster-Booster division and Many Integrated Core (MIC) accelerated architectures in presently available implementations of Earth system models, towards exploiting the potential of a fully Exascale-capable platform.

  2. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The method utilizes the deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever to produce a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
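    The model-versus-signal comparison step described above can be sketched as a normalized correlation between the sensed deflection and the deflection model. The saturating-bend model shape, the 0.95 threshold, and both "measured" signals below are hypothetical illustrations, not taken from the patent:

```python
import math

def correlate(signal, template):
    """Normalized correlation between a measured deflection signal and a
    physics-based deflection model (both sampled on the same time grid)."""
    num = sum(s * t for s, t in zip(signal, template))
    den = math.sqrt(sum(s * s for s in signal) * sum(t * t for t in template))
    return num / den

# Hypothetical deflection model: exponentially saturating bend as analyte binds.
model = [1.0 - math.exp(-t / 5.0) for t in range(50)]
measured_hit  = [m * 1.1 + 0.01 for m in model]         # tracks the model
measured_miss = [math.sin(t / 3.0) for t in range(50)]  # does not

detected = correlate(measured_hit, model) > 0.95
print(detected, correlate(measured_miss, model) > 0.95)  # → True False
```

In practice the comparison would run against a bank of deflection models, one per target analyte, and the threshold would be set from the sensor's noise statistics.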

  3. Enriching Triangle Mesh Animations with Physically Based Simulation.

    Science.gov (United States)

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  4. Physics-based deformable organisms for medical image analysis

    Science.gov (United States)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  5. The evaluation of the climate change effects on maize and fennel cultivation by means of an hydrological physically based model: the case study of an irrigated district of southern Italy

    Science.gov (United States)

    Bonfante, A.; Alfieri, M. S.; Basile, A.; De Lorenzi, F.; Fiorentino, N.; Menenti, M.

    2012-04-01

    The effect of climate change on irrigated agricultural systems will differ from area to area depending on factors such as: (i) water availability, (ii) crop water demand, (iii) soil hydrological behavior and (iv) irrigation management strategy. The adaptation of irrigated crop systems to future climate change can be supported by physically based models which simulate the water and heat fluxes in the soil-vegetation-atmosphere system. The aim of this work is to evaluate the effects of climate change on the heat and water balance of a maize-fennel rotation. This was applied to an on-demand irrigation district of Southern Italy ("Destra Sele", Campania Region, 22,645 ha). Two climate scenarios were considered, current climate (1961-1990) and future climate (2021-2050), the latter constructed by applying statistical downscaling to GCM scenarios. For each climate scenario the soil moisture regime of the selected study area was calculated by means of a simulation model of the soil-water-atmosphere system (SWAP). Synthetic indicators of the soil water regimes (e.g., crop water stress index - CWSI, available water content) have been calculated and impacts evaluated taking into account the yield response functions to water availability of different cultivars. Different irrigation delivery strategies were also simulated. The hydrological model SWAP was applied to the representative soils of the whole area (20 soil units), for which the soil hydraulic properties were derived by means of a pedo-transfer function (HYPRES) tested and validated on the typical soils of the study area. Upper boundary conditions were derived from the two climate scenarios, i.e. current and future. A unit gradient in soil water potential was set as the lower boundary condition. Crop-specific input data and model parameters were derived from field experiments in the same area, where the SWAP model was calibrated and validated. The results obtained show a significant increase of CWSI in the future

  6. Two modelling approaches to water-quality simulation in a flooded iron-ore mine (Saizerais, Lorraine, France): a semi-distributed chemical reactor model and a physically based distributed reactive transport pipe network model.

    Science.gov (United States)

    Hamm, V; Collon-Drouaillet, P; Fabriol, R

    2008-02-19

    The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches using different numerical tools that we tested on the Saizerais flooded iron-ore mine (Lorraine, France). A first approach considers the Saizerais Mine as a network of two chemical reactors (NCR). The second approach is based on a physically distributed pipe network model (PNM) built with EPANET 2 software. This approach considers the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate the evolution of water quality in the flooded mine. However, in order to obtain a robust PNM, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between the simulated and measured sulphate concentrations in the drinking-water well and the overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified sulphate dissolution-precipitation model provides better information about the physics controlling the effects of flow and low-flow zones and the time to removal of solid sulphate, whereas the NCR model underestimates clean-up time because of its complete-mixing assumption.
    In conclusion, the detailed NCR model gives a first assessment of chemical processes at the overflow, after which the PNM model will provide more
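    A minimal sketch of one well-mixed reactor of the NCR idea (all parameters are invented, not the calibrated Saizerais model): sulphate dissolves from a finite solid stock into the reservoir until the stock is exhausted, after which inflow gradually flushes the dissolved sulphate out:

```python
def simulate_reactor(volume, flow, c_in, k_diss, solid_mass, dt, steps):
    """One well-mixed reservoir: forward-Euler integration of
    dC/dt = (Q/V) * (c_in - C) + dissolution/V, where dissolution runs at
    rate k_diss until the finite solid sulphate stock is exhausted."""
    c, solid, history = c_in, solid_mass, []
    for _ in range(steps):
        dissolution = k_diss if solid > 0 else 0.0
        dc = (flow / volume) * (c_in - c) + dissolution / volume
        solid = max(0.0, solid - dissolution * dt)
        c += dc * dt
        history.append(c)
    return history

# Hypothetical parameters (m3, m3/day, g/m3, g/day, g, day)
hist = simulate_reactor(volume=1e6, flow=2e3, c_in=100.0,
                        k_diss=5e5, solid_mass=5e8, dt=1.0, steps=5000)
print(round(max(hist), 1), round(hist[-1], 1))
```

The sulphate concentration rises toward a dissolution-controlled plateau, then decays back toward the inflow concentration once the solid stock is gone; the complete-mixing assumption mentioned above is exactly what this single-compartment balance encodes.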

  7. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes a tenth of the memory of a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation
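    Sound synthesis from elastic surface vibrations is commonly built from exponentially damped sinusoidal modes. A minimal modal-synthesis sketch follows (the mode table is invented for illustration, not taken from the dissertation):

```python
import math

def modal_impact(modes, sample_rate, duration):
    """Impact sound as a sum of exponentially damped sinusoids: each mode
    is a (frequency Hz, damping 1/s, amplitude) triple of the struck object."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        samples.append(s)
    return samples

# Hypothetical mode table for a small struck object
modes = [(440.0, 8.0, 1.0), (1210.0, 14.0, 0.5), (2630.0, 25.0, 0.25)]
samples = modal_impact(modes, sample_rate=8000, duration=0.5)
print(len(samples))
```

The perceptual accelerations mentioned in the abstract prune or merge such modes when many objects sound at once; the sketch above is only the unoptimized per-object synthesis loop.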

  8. PHYSICAL BASES OF SYSTEMS CREATION FOR MAGNETIC-IMPULSIVE ATTRACTION OF THIN-WALLED SHEET METALS

    Directory of Open Access Journals (Sweden)

    Y. Batygin

    2009-01-01

    Full Text Available The work is dedicated to the physical bases of creating systems for magnetic-pulse attraction of thin-walled sheet metals. Models of some practical realizations of the author's suggestions are presented.

  9. A Distributed Approach to System-Level Prognostics

    Science.gov (United States)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
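    A toy sketch of the decomposition idea (component names, damage states, and the linear damage model are all invented): each submodel predicts its own remaining useful life (RUL) independently, and the system-level RUL is the minimum over submodels:

```python
def local_rul(damage, rate, threshold=1.0):
    """Remaining useful life of one submodel under constant damage growth:
    time until accumulated damage reaches the failure threshold."""
    return (threshold - damage) / rate

# Structural decomposition (toy): each submodel exposes only its own
# state and inputs, so the local RUL predictions are independent and
# could run on separate processors.
submodels = {
    "left_motor":  {"damage": 0.40, "rate": 0.010},
    "right_motor": {"damage": 0.25, "rate": 0.020},
    "battery":     {"damage": 0.70, "rate": 0.005},
}

local = {name: local_rul(m["damage"], m["rate"]) for name, m in submodels.items()}
system_rul = min(local.values())  # system fails when its first subsystem does
print(local, system_rul)
```

The min-aggregation is the simplest possible system-level combination; richer formulations weight subsystems by their contribution to system function, but the scalability argument (independent subproblems) is the same.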

  10. Physically based rendering from theory to implementation

    CERN Document Server

    Pharr, Matt

    2010-01-01

    "Physically Based Rendering, 2nd Edition" describes both the mathematical theory behind a modern photorealistic rendering system as well as its practical implementation. A method - known as 'literate programming'- combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, you will learn to design and employ a full-featured rendering system for creating stunning imagery. This book features new sections on subsurface scattering, Metropolis light transport, precomputed light transport, multispectral rendering, and much more. It includes a companion site complete with source code for the rendering system described in the book, with support for Windows, OS X, and Linux. Code and text are tightly woven together through a unique indexing feature that lists each function, variable, and method on the page that they are first described.

  11. Use of a novel docosahexaenoic acid (DHA) formulation versus control in a neonatal porcine model of short bowel syndrome leads to greater intestinal absorption and higher systemic levels of DHA

    Science.gov (United States)

    Martin, Camilia R.; Stoll, Barbara; Cluette-Brown, Joanne; Akinkuotu, Adesola C.; Olutoye, Oluyinka O.; Gura, Kathleen M.; Singh, Pratibha; Zaman, Munir M.; Perillo, Michael C.; Puder, Mark; Freedman, Steven D.; Burrin, Doug

    2017-01-01

    Infants with short bowel syndrome (SBS) are at high risk for malabsorption, malnutrition, and failure to thrive. The objective of this study was to evaluate in a porcine model of SBS, the systemic absorption of a novel enteral Docosahexaenoic acid (DHA) formulation that forms micelles independent of bile salts (DHA-ALT®). We hypothesized that enteral delivery of DHA-ALT® would result in higher blood levels of DHA compared to a control DHA preparation due to improved intestinal absorption. SBS was induced in term piglets through a 75% mid-jejunoileal resection and the piglets randomized to either DHA-ALT® or control DHA formulation (N=5 per group) for 4 postoperative days. The median ± IQR difference in final versus starting weight was 696 ± 425g in the DHA-ALT® group compared to 132 ± 278g in the controls (p=.08). Within 12 hours, median ± IQR DHA and eicosapentaenoic acid plasma levels (mol%) were significantly higher in the DHA-ALT® vs. control group (4.1 ± 0.3 vs 2.5 ± 0.5, p=0.009; 0.7 ± 0.3 vs 0.2 ± 0.005, p=0.009, respectively). There were lower fecal losses of DHA and greater ileal tissue incorporation with DHA-ALT® versus the control. Morphometric analyses demonstrated an increase in proximal jejunum and distal ileum villus height in the DHA-ALT® group compared to controls (p=0.01). In a neonatal porcine model of SBS, enteral administration of a novel DHA preparation that forms micelles independent of bile salts resulted in increased fatty acid absorption, increased ileal tissue incorporation, and increased systemic levels of DHA. PMID:28385289

  12. Use of a novel docosahexaenoic acid formulation vs control in a neonatal porcine model of short bowel syndrome leads to greater intestinal absorption and higher systemic levels of DHA.

    Science.gov (United States)

    Martin, Camilia R; Stoll, Barbara; Cluette-Brown, Joanne; Akinkuotu, Adesola C; Olutoye, Oluyinka O; Gura, Kathleen M; Singh, Pratibha; Zaman, Munir M; Perillo, Michael C; Puder, Mark; Freedman, Steven D; Burrin, Doug

    2017-03-01

    Infants with short bowel syndrome (SBS) are at high risk for malabsorption, malnutrition, and failure to thrive. The objective of this study was to evaluate in a porcine model of SBS, the systemic absorption of a novel enteral Docosahexaenoic acid (DHA) formulation that forms micelles independent of bile salts (DHA-ALT®). We hypothesized that enteral delivery of DHA-ALT® would result in higher blood levels of DHA compared to a control DHA preparation due to improved intestinal absorption. SBS was induced in term piglets through a 75% mid-jejunoileal resection and the piglets randomized to either DHA-ALT® or control DHA formulation (N=5 per group) for 4 postoperative days. The median±IQR difference in final vs starting weight was 696±425 g in the DHA-ALT® group compared to 132±278 g in the controls (P=.08). Within 12 hours, median±IQR DHA and eicosapentaenoic acid plasma levels (mol%) were significantly higher in the DHA-ALT® vs control group (4.1±0.3 vs 2.5±0.5, P=.009; 0.7±0.3 vs 0.2±0.005, P=.009, respectively). There were lower fecal losses of DHA and greater ileal tissue incorporation with DHA-ALT® vs the control. Morphometric analyses demonstrated an increase in proximal jejunum and distal ileum villus height in the DHA-ALT® group compared to controls (P=.01). In a neonatal porcine model of SBS, enteral administration of a novel DHA preparation that forms micelles independent of bile salts resulted in increased fatty acid absorption, increased ileal tissue incorporation, and increased systemic levels of DHA. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. System level ESD co-design

    CERN Document Server

    Gossner, Harald

    2015-01-01

    An effective and cost efficient protection of electronic system against ESD stress pulses specified by IEC 61000-4-2 is paramount for any system design. This pioneering book presents the collective knowledge of system designers and system testing experts and state-of-the-art techniques for achieving efficient system-level ESD protection, with minimum impact on the system performance. All categories of system failures ranging from ‘hard’ to ‘soft’ types are considered to review simulation and tool applications that can be used. The principal focus of System Level ESD Co-Design is defining and establishing the importance of co-design efforts from both IC supplier and system builder perspectives. ESD designers often face challenges in meeting customers' system-level ESD requirements and, therefore, a clear understanding of the techniques presented here will facilitate effective simulation approaches leading to better solutions without compromising system performance. With contributions from Robert Asht...

  14. A methodology for physically based rockfall hazard assessment

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Full Text Available Rockfall hazard assessment is not simple to achieve in practice and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at a regional and local scale, both along linear features and within exposed areas. An objective approach based on three-dimensional matrices providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrices has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to make the assessment criteria as objective as possible.
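    The combination of per-cell frequency, velocity and height maps into a positional hazard index can be sketched as follows; the class thresholds and the three-digit encoding are illustrative stand-ins, not the paper's calibrated values:

```python
def classify(value, thresholds):
    """Map a continuous per-cell value onto classes 1..len(thresholds)+1."""
    return 1 + sum(value > t for t in thresholds)

def hazard_index(freq, vel, height):
    """Positional rockfall hazard index: each cell gets a (frequency,
    velocity, height) class triple, summarised as one readable score."""
    f = classify(freq,   (0.1, 1.0))   # passages per year (hypothetical)
    v = classify(vel,    (5.0, 15.0))  # m/s (hypothetical)
    h = classify(height, (1.0, 3.0))   # m (hypothetical)
    return f * 100 + v * 10 + h        # e.g. 231 = class-2 freq, 3 vel, 1 height

# Three example cells: (frequency, velocity, height) from the 3-D model
cells = [(0.05, 3.0, 0.5), (0.5, 12.0, 2.0), (5.0, 20.0, 4.0)]
print([hazard_index(*c) for c in cells])  # → [111, 222, 333]
```

In a GIS this classification would run raster-algebra style over every cell of the three maps; keeping the digits separate preserves the "vector" character of the index rather than collapsing it to a single rank.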

  15. Physically-based Canopy Reflectance Model Inversion of Vegetation Biophysical-Structural Information from Terra-MODIS Imagery in Boreal and Mountainous Terrain for Ecosystem, Climate and Carbon Models using the BIOPHYS-MFM Algorithm

    Science.gov (United States)

    Peddle, D. R.; Hall, F.

    2009-12-01

    The BIOPHYS algorithm provides innovative and flexible methods for the inversion of canopy reflectance models (CRM) to derive essential biophysical structural information (BSI) for quantifying vegetation state and disturbance, and for input to ecosystem, climate and carbon models. Based on spectral, angular, temporal and scene geometry inputs that can be provided or automatically derived, the BIOPHYS Multiple-Forward Mode (MFM) approach generates look-up tables (LUTs) that comprise reflectance data, structural inputs over specified or computed ranges, and the associated CRM output from forward mode runs. Image pixel and model LUT spectral values are then matched. The corresponding BSI retrieved from the LUT matches is output as the BSI results. BIOPHYS-MFM has been extensively used with agencies in Canada and the USA over the past decade (Peddle et al 2000-09; Soenen et al 2005-09; Gamon et al 2004; Cihlar et al 2003), such as CCRS, CFS, AICWR, NASA LEDAPS, BOREAS and MODIS Science Teams, and for the North American Carbon Program. The algorithm generates BSI products such as land cover, biomass, stand volume, stem density, height, crown closure, leaf area index (LAI) and branch area, crown dimension, productivity, topographic correction, structural change from harvest, forest fires and mountain pine beetle damage, and water / hydrology applications. BIOPHYS-MFM has been applied in different locations in Canada (six provinces from Newfoundland to British Columbia) and USA (NASA COVER, MODIS and LEDAPS sites) using 7 different CRM models and a variety of imagery (e.g. MODIS, Landsat, SPOT, IKONOS, airborne MSV, MMR, casi, Probe-1, AISA). In this paper we summarise the BIOPHYS-MFM algorithm and results from Terra-MODIS imagery from MODIS validation sites at Kananaskis Alberta in the Canadian Rocky Mountains, and from the Boreal Ecosystem Atmosphere Study (BOREAS) in Saskatchewan Canada. At the montane Rocky Mountain site, BIOPHYS-MFM density estimates were within

  16. Engineering uses of physics-based ground motion simulations

    Science.gov (United States)

    Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.

    2014-01-01

    This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, and testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations, as well as to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.

  17. Design for testability and diagnosis at the system-level

    Science.gov (United States)

    Simpson, William R.; Sheppard, John W.

    1993-01-01

    The growing complexity of full-scale systems has surpassed the capabilities of most simulation software to provide detailed models or gate-level failure analyses. The process of system-level diagnosis approaches the fault-isolation problem in a manner that differs significantly from the traditional and exhaustive failure mode search. System-level diagnosis is based on a functional representation of the system. For example, one can exercise one portion of a radar algorithm (the Fast Fourier Transform (FFT) function) by injecting several standard input patterns and comparing the results to standardized output results. An anomalous output would point to one of several items (including the FFT circuit) without specifying the gate or failure mode. For system-level repair, identifying an anomalous chip is sufficient. We describe here an information theoretic and dependency modeling approach that discards much of the detailed physical knowledge about the system and analyzes its information flow and functional interrelationships. The approach relies on group and flow associations and, as such, is hierarchical. Its hierarchical nature allows the approach to be applicable to any level of complexity and to any repair level. This approach has been incorporated in a product called STAMP (System Testability and Maintenance Program) which was developed and refined through more than 10 years of field-level applications to complex system diagnosis. The results have been outstanding, even spectacular in some cases. In this paper we describe system-level testability, system-level diagnoses, and the STAMP analysis approach, as well as a few STAMP applications.
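    The information-theoretic side of such dependency-model diagnosis can be sketched by choosing the next functional test as the one whose pass/fail outcome yields the most information about a single fault among the remaining candidate components. The dependency table below is invented, loosely echoing the FFT example; it is not STAMP's actual model format:

```python
import math

def entropy_split(test_cover, candidates):
    """Bits of information gained from a pass/fail test that exercises the
    components in test_cover, assuming one equally likely faulty candidate."""
    n, k = len(candidates), len(candidates & test_cover)
    if k in (0, n):
        return 0.0  # outcome is predetermined: no information
    p = k / n
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Toy dependency model: which components each functional test exercises.
deps = {
    "fft_test":   {"fft", "adc"},
    "adc_test":   {"adc"},
    "end_to_end": {"fft", "adc", "dac", "ctrl"},
}
candidates = {"fft", "adc", "dac", "ctrl"}

best = max(deps, key=lambda t: entropy_split(deps[t], candidates))
print(best)  # → fft_test
```

The test covering half the candidates splits the ambiguity group most evenly (1 bit), which is why an end-to-end test that every candidate can fail is uninformative for isolation despite being ideal for detection.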

  18. Intrinsically Disordered Proteins in a Physics-Based World

    Directory of Open Access Journals (Sweden)

    Jianhan Chen

    2010-12-01

    Full Text Available Intrinsically disordered proteins (IDPs) are a newly recognized class of functional proteins that rely on a lack of stable structure for function. They are highly prevalent in biology, play fundamental roles, and are extensively involved in human diseases. For signaling and regulation, IDPs often fold into stable structures upon binding to specific targets. The mechanisms of these coupled binding and folding processes are of significant importance because they underlie the organization of regulatory networks that dictate various aspects of cellular decision-making. This review first discusses the challenge in detailed experimental characterization of these heterogeneous and dynamic proteins and the unique and exciting opportunity for physics-based modeling to make crucial contributions, and then summarizes key lessons from recent de novo simulations of the structure and interactions of several regulatory IDPs.

  19. A physically based catchment partitioning method for hydrological analysis

    Science.gov (United States)

    Menduni, Giovanni; Riboni, Vittoria

    2000-07-01

    We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.
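    The search for steepest downslope lines can be illustrated on a gridded DEM (the paper works from contour lines, so this D8-style grid version is only an analogy, with a made-up 3x3 elevation grid):

```python
def steepest_descent_path(dem, start):
    """Trace the steepest downslope line on a gridded DEM: from each cell,
    step to the 8-neighbour with the largest distance-weighted drop, and
    stop at a local minimum (where the channel ends)."""
    rows, cols = len(dem), len(dem[0])
    path = [start]
    r, c = start
    while True:
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    dist = (dr * dr + dc * dc) ** 0.5  # diagonal = sqrt(2)
                    drop = (dem[r][c] - dem[nr][nc]) / dist
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:
            return path
        r, c = best
        path.append(best)

dem = [[9, 8, 7],
       [8, 5, 4],
       [7, 4, 1]]
print(steepest_descent_path(dem, (0, 0)))  # → [(0, 0), (1, 1), (2, 2)]
```

The corresponding upslope search (negating the drop criterion) delimits the contributing area of each network node, which is how the irregular-polygon finite elements described above are bounded.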

  20. Hologenomics: Systems-Level Host Biology.

    Science.gov (United States)

    Theis, Kevin R

    2018-01-01

    The hologenome concept of evolution is a hypothesis explaining host evolution in the context of the host microbiomes. As a hypothesis, it needs to be evaluated, especially with respect to the extent of fidelity of transgenerational coassociation of host and microbial lineages and the relative fitness consequences of repeated associations within natural holobiont populations. Behavioral ecologists are in a prime position to test these predictions because they typically focus on animal phenotypes that are quantifiable, conduct studies over multiple generations within natural animal populations, and collect metadata on genetic relatedness and relative reproductive success within these populations. Regardless of the conclusion on the hologenome concept as an evolutionary hypothesis, a hologenomic perspective has applied value as a systems-level framework for host biology, including in medicine. Specifically, it emphasizes investigating the multivarious and dynamic interactions between patient genomes and the genomes of their diverse microbiota when attempting to elucidate etiologies of complex, noninfectious diseases.

  1. System Level Analysis of LTE-Advanced

    DEFF Research Database (Denmark)

    Wang, Yuanye

    This PhD thesis focuses on system level analysis of Multi-Component Carrier (CC) management for Long Term Evolution (LTE)-Advanced. Cases where multiple CCs are aggregated to form a larger bandwidth are studied. The analysis is performed for both local area and wide area networks. In local area...... reduction. Compared to the case of reuse-1, they achieve a gain of 50∼500% in cell edge user throughput, with small or no loss in average cell throughput. For the wide area network, effort is devoted to the downlink of LTE-Advanced. Such a system is assumed to be backwards compatible to LTE release 8, i...... scheme is recommended. It reduces the CQI by 94% at low load, and 79∼93% at medium to high load, with reasonable loss in downlink performance. To reduce the ACK/NACK feedback, multiple ACK/NACKs can be bundled, with slightly degraded downlink throughput....

  2. Physically based arc-circuit interaction

    International Nuclear Information System (INIS)

    Zhong-Lie, L.

    1984-01-01

    An integral arc model is extended in this paper to study the interaction of the gas-blast arc with the test circuit. The deformation in the waveshapes of arc current and voltage around current zero has been formulated to first approximation by using a simple model of arc voltage based on the arc core energy conservation. By supplementing the time scale for radiation, the time rates of arc processes were amended. Both the contributions of the various arc processes and the influence of circuit parameters on the arc-circuit interaction have been estimated with this theory. The analysis yields a new method of calculating test circuit parameters which improves the accuracy of simulated arc-circuit interaction. The new method agrees with the published experimental results

  3. Physically Based Rendering in the Nightshade NG Visualization Platform

    Science.gov (United States)

    Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur

    2015-01-01

    This poster describes our work on creating a physically based rendering model in Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011.Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together.Nightshade software is used in literally hundreds of digital planetarium systems worldwide. Countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public.Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.

  4. An extension to SU_f(3) and Dirac particles of the transformation between physical bases and symmetry bases for dibaryon states

    International Nuclear Information System (INIS)

    Ping Jialun

    1994-01-01

    The transformation between physical bases and symmetry bases is extended from SU_f(2) to SU_f(3). Its application in dibaryon calculations, for both the nonrelativistic and the relativistic quark model, is discussed.

  5. Development of a Physically-Based Methodology for Predicting Material Variability in Fatigue Crack Initiation and Growth Response

    National Research Council Canada - National Science Library

    Chan, Kwai

    2004-01-01

    ... of aerospace structural alloys. In this three-year program, physics-based fatigue crack initiation and growth models were developed and integrated into a probabilistic micromechanical code for treating fatigue life variability...

  6. Physics-based simulations of the impacts forest management practices have on hydrologic response

    Science.gov (United States)

    Adrianne Carr; Keith Loague

    2012-01-01

    The impacts of logging on near-surface hydrologic response at the catchment and watershed scales were examined quantitatively using numerical simulation. The simulations were conducted with the Integrated Hydrology Model (InHM) for the North Fork of Caspar Creek Experimental Watershed, located near Fort Bragg, California. InHM is a comprehensive physics-based...

  7. Constraining magma physical properties and its temporal evolution from InSAR and topographic data only: a physics-based eruption model for the effusive phase of the Cordon Caulle 2011-2012 rhyodacitic eruption

    Science.gov (United States)

    Delgado, F.; Kubanek, J.; Anderson, K. R.; Lundgren, P.; Pritchard, M. E.

    2017-12-01

    The 2011-2012 eruption of Cordón Caulle volcano in Chile is the best scientifically observed rhyodacitic eruption and is thus a key place to understand the dynamics of these rare but powerful explosive rhyodacitic eruptions. Because the volatile phase controls both the eruption temporal evolution and the eruptive style, either explosive or effusive, it is important to constrain the physical parameters that drive these eruptions. The eruption began explosively and after two weeks evolved into a hybrid explosive - lava flow effusion whose volume-time evolution we constrain with a series of TanDEM-X Digital Elevation Models. Our data show the intrusion of a large-volume laccolith or cryptodome during the first 2.5 months of the eruption and lava flow effusion only afterwards, with a total volume of 1.4 km3. InSAR data from the ENVISAT and TerraSAR-X missions show more than 2 m of subsidence during the effusive eruption phase, produced by deflation of a finite spheroidal source at a depth of 5 km. In order to constrain the magma total H2O content, crystal cargo, and reservoir pressure drop, we numerically solve the coupled set of equations of a pressurized magma reservoir, magma conduit flow, and time-dependent density, volatile exsolution and viscosity, which we use to invert the InSAR and topographic data time series. We compare the best-fit model parameters with independent estimates of magma viscosity and total gas content measured from lava samples. Preliminary modeling shows that although it is not possible to fit both the InSAR and the topographic data during the onset of the laccolith emplacement, it is possible to constrain the magma H2O and crystal contents to 4 wt% and 30%, respectively, which agree well with published literature values.

  8. Blind Test of Physics-Based Prediction of Protein Structures

    Science.gov (United States)

    Shell, M. Scott; Ozkan, S. Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A.

    2009-01-01

    We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences. PMID:19186130

  9. The use of a hydrological physically based model to evaluate the vine adaptability to future climate: the case study of a Protected Designation of Origin area (DOC and DOCG) of Southern Italy

    Science.gov (United States)

    Bonfante, Antonello; Basile, Angelo; Menenti, Massimo; Monaco, Eugenia; Alfieri, Silvia Maria; Manna, Piero; Langella, Giuliano; De Lorenzi, Francesca

    2013-04-01

    The quality of grape and wine is variety-specific and depends significantly on the pedoclimatic conditions, and thus on the terroir characteristics. In viticulture the concept of terroir is known to be very complex, and studies of terroir are currently evolving: their spatial analysis is improving by means of studies that account for the spatial distribution of solar radiation and of bioclimatic indexes. Moreover, simulation models are used to study water flow in the soil-plant-atmosphere system in order to determine the water balance of vines as a function of (i) soil physical properties, (ii) climatic regime and (iii) agro-ecosystem characteristics. Future climate evolution may endanger not only yield (IPCC, 2007) but also quality. The effects on quality may be relevant for grape production, since they can affect the sustainability of cultivating grape varieties in the areas where they are currently grown. This study addresses this question by evaluating the adaptive capacity of grape cultivars in a 20000 ha viticultural area in the "Valle Telesina" (Campania Region, Southern Italy). This area has a long tradition in the production of high-quality wines (DOC and DOCG) and is characterized by a complex geomorphology with a large variability of soils and micro-climate. Two climate scenarios were considered: "past" (1961-1990) and "future" (2021-2050), the latter constructed by applying statistical downscaling to GCM scenarios. For each climate scenario the moisture regime of the soils of the study area was calculated by means of a simulation model of the soil-water-atmosphere system (SWAP). The hydrological model SWAP was applied to the representative soils of the entire area (47 soil units); the soil hydraulic properties were estimated (by means of the pedo-transfer function HYPRES) and measured. Upper boundary conditions were derived from the climate scenarios. A unit gradient in soil water potential was set as the lower boundary condition.

  10. Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation

    Directory of Open Access Journals (Sweden)

    Steve Ghan

    2011-10-01

    One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically-based parameterizations of droplet nucleation are designed to quickly estimate the number nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and compare their performances under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations to cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
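    A classical baseline for such parameterizations is Twomey's power-law activation spectrum, in which the number of droplets nucleated is N(s_max) = C·s_max^k at peak supersaturation s_max. A minimal sketch of that baseline form, with illustrative (not fitted) values of C and k, and with s_max prescribed directly rather than computed from the aerosol size distribution, hygroscopicity, and cooling rate as the parameterizations compared in the article do:

```python
def droplets_nucleated(s_max, c=300.0, k=0.6):
    """Twomey power-law baseline: number of droplets activated (cm^-3)
    at peak supersaturation s_max (%), for an aerosol whose cumulative
    activation spectrum is N(s) = c * s**k.

    c and k are illustrative values, not fitted to any real aerosol;
    the parameterizations compared in the article differ mainly in how
    they estimate s_max from aerosol properties and the cooling rate,
    whereas here it is simply prescribed.
    """
    return c * s_max ** k
```

    At fixed s_max, doubling the aerosol prefactor c doubles the nucleated number, which is the lever behind the aerosol indirect effect the article quantifies.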

  11. FEATURES, EVENTS, AND PROCESSES: SYSTEM-LEVEL AND CRITICALITY

    International Nuclear Information System (INIS)

    D.L. McGregor

    2000-01-01

    The primary purpose of this Analysis/Model Report (AMR) is to identify and document the screening analyses for the features, events, and processes (FEPs) that do not easily fit into the existing Process Model Report (PMR) structure. These FEPs include the 31 FEPs designated as System-Level Primary FEPs and the 22 FEPs designated as Criticality Primary FEPs. A list of these FEPs is provided in Section 1.1. This AMR (AN-WIS-MD-000019) documents the Screening Decision and Regulatory Basis, Screening Argument, and Total System Performance Assessment (TSPA) Disposition for each of the subject Primary FEPs. This AMR provides screening information and decisions for the TSPA-SR report and provides the same information for incorporation into a project-specific FEPs database. This AMR may also assist reviewers during the licensing-review process.

  12. FEATURES, EVENTS, AND PROCESSES: SYSTEM-LEVEL AND CRITICALITY

    Energy Technology Data Exchange (ETDEWEB)

    D.L. McGregor

    2000-12-20

    The primary purpose of this Analysis/Model Report (AMR) is to identify and document the screening analyses for the features, events, and processes (FEPs) that do not easily fit into the existing Process Model Report (PMR) structure. These FEPs include the 31 FEPs designated as System-Level Primary FEPs and the 22 FEPs designated as Criticality Primary FEPs. A list of these FEPs is provided in Section 1.1. This AMR (AN-WIS-MD-000019) documents the Screening Decision and Regulatory Basis, Screening Argument, and Total System Performance Assessment (TSPA) Disposition for each of the subject Primary FEPs. This AMR provides screening information and decisions for the TSPA-SR report and provides the same information for incorporation into a project-specific FEPs database. This AMR may also assist reviewers during the licensing-review process.

  13. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    Science.gov (United States)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

    We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and model near-source ground motion correctly; (4) wave propagations and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a "physical-based" approach is ground-motion syntheses derived from physics and an understanding of the earthquake process. This is an overview paper and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. 
Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction.

  14. Analyzing the Biology on the System Level

    OpenAIRE

    Tong, Wei

    2016-01-01

    Although various genome projects have provided us with enormous amounts of static sequence information, understanding sophisticated biology still requires integrating computational modeling, systems analysis, technology development for experiments, and quantitative experiments to analyze the biological architecture on various levels, which is the origin of the systems biology subject. This review discusses the object, its characteristics, and research attentions in systems biology,...

  15. Physics-based shape matching for intraoperative image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Suwelack, Stefan, E-mail: suwelack@kit.edu; Röhl, Sebastian; Bodenstedt, Sebastian; Reichard, Daniel; Dillmann, Rüdiger; Speidel, Stefanie [Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Adenauerring 2, Karlsruhe 76131 (Germany); Santos, Thiago dos; Maier-Hein, Lena [Computer-assisted Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Wagner, Martin; Wünscher, Josephine; Kenngott, Hannes; Müller, Beat P. [General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, Heidelberg 69120 (Germany)

    2014-11-01

    Purpose: Soft-tissue deformations can severely degrade the validity of preoperative planning data during computer-assisted interventions. Intraoperative imaging such as stereo endoscopic, time-of-flight, or laser range scanner data can be used to compensate for these movements. In this context, the intraoperative surface has to be matched to the preoperative model. The shape matching is especially challenging in the intraoperative setting due to noisy sensor data, only partially visible surfaces, ambiguous shape descriptors, and real-time requirements. Methods: A novel physics-based shape matching (PBSM) approach to register intraoperatively acquired surface meshes to preoperative planning data is proposed. The key idea of the method is to describe the nonrigid registration process as an electrostatic–elastic problem, where an elastic body (the preoperative model) that is electrically charged slides into an oppositely charged rigid shape (the intraoperative surface). It is shown that the corresponding energy functional can be efficiently minimized using the finite element (FE) method. It is also demonstrated how PBSM can be combined with rigid registration schemes for robust nonrigid registration of arbitrarily aligned surfaces. Furthermore, it is shown how the approach can be combined with landmark-based methods, and its application to image guidance in laparoscopic interventions is outlined. Results: A thorough analysis of the PBSM scheme based on in silico and phantom data is presented. Simulation studies on several liver models show that the approach is robust to the initial rigid registration and to parameter variations. The studies also reveal that the method achieves submillimeter registration accuracy (mean error between 0.32 and 0.46 mm). An unoptimized, single-core implementation of the approach achieves near real-time performance (2 TPS, 7–19 s total registration time). It outperforms established methods in terms of speed and accuracy. Furthermore, it is shown that the

  16. System-level design methodologies for telecommunication

    CERN Document Server

    Sklavos, Nicolas; Goehringer, Diana; Kitsos, Paris

    2013-01-01

    This book provides a comprehensive overview of modern networks design, from specifications and modeling to implementations and test procedures, including the design and implementation of modern networks on chip, in both wireless and mobile applications.  Topical coverage includes algorithms and methodologies, telecommunications, hardware (including networks on chip), security and privacy, wireless and mobile networks and a variety of modern applications, such as VoLTE and the internet of things.

  17. Physics-based simulation models for EBSD: advances and challenges

    Science.gov (United States)

    Winkelmann, A.; Nolze, G.; Vos, M.; Salvat-Pujol, F.; Werner, W. S. M.

    2016-02-01

    EBSD has evolved into an effective tool for microstructure investigations in the scanning electron microscope. The purpose of this contribution is to give an overview of various simulation approaches for EBSD Kikuchi patterns and to discuss some of the underlying physical mechanisms.

  18. Characterization and Physics-Based Modeling of Electrochemical Memristors

    Science.gov (United States)

    2015-11-16

    Only report-form and table-of-contents fragments were extracted for this record. Recoverable topics: Task 1 covered GexSe1-x chalcogenide glass material technology and Cu-SiO2 devices; the report also assesses potential radiation threats, of which there seem to be few except for some single-event-effect susceptibility. Subject terms: chalcogenide glass.

  19. Systems-Level Synthetic Biology for Advanced Biofuel Production

    Energy Technology Data Exchange (ETDEWEB)

    Ruffing, Anne [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jensen, Travis J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Strickland, Lucas Marshall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tallant, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    Cyanobacteria have been shown to be capable of producing a variety of advanced biofuels; however, product yields remain well below those necessary for large scale production. New genetic tools and high throughput metabolic engineering techniques are needed to optimize cyanobacterial metabolisms for enhanced biofuel production. Towards this goal, this project advances the development of a multiple promoter replacement technique for systems-level optimization of gene expression in a model cyanobacterial host: Synechococcus sp. PCC 7002. To realize this multiple-target approach, key capabilities were developed, including a high throughput detection method for advanced biofuels, enhanced transformation efficiency, and genetic tools for Synechococcus sp. PCC 7002. Moreover, several additional obstacles were identified for realization of this multiple promoter replacement technique. The techniques and tools developed in this project will help to enable future efforts in the advancement of cyanobacterial biofuels.

  20. Combination of statistical and physically based methods to assess shallow slide susceptibility at the basin scale

    Science.gov (United States)

    Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel

    2017-07-01

    Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generate a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km2) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
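    The two families of methods combined above each reduce to a compact core: the infinite slope (IS) method returns a factor of safety from a force balance on a planar shallow slide, and the information value (IV) method scores a predictor class by its landslide density relative to the whole study area. A minimal sketch with illustrative parameter values, not those calibrated for the test site north of Lisbon:

```python
import math

def factor_of_safety(slope_deg, z=2.0, m=0.5, c=5.0, phi_deg=30.0,
                     gamma=18.0, gamma_w=9.81):
    """Infinite slope (IS) factor of safety for a planar shallow slide.
    slope_deg: slope angle (deg); z: failure-plane depth (m);
    m: saturated fraction of z; c: effective cohesion (kPa);
    phi_deg: friction angle (deg); gamma, gamma_w: soil and water unit
    weights (kN/m^3). Defaults are illustrative, not calibrated values.
    FS < 1 indicates predicted instability."""
    b, phi = math.radians(slope_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(b) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(b) * math.cos(b)
    return resisting / driving

def information_value(slide_cells, class_cells, total_slides, total_cells):
    """Information value (IV) of one predictor class:
    IV = ln((Si/Ni) / (S/N)); IV > 0 means landslides are denser in the
    class than in the study area as a whole."""
    return math.log((slide_cells / class_cells) / (total_slides / total_cells))
```

    FS < 1 flags predicted instability and IV > 0 flags a class that favours it; the combined susceptibility map in the article is built by cross-tabulating the classes produced by the two scores.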

  1. System-Level Design Methodologies for Networked Multiprocessor Systems-on-Chip

    DEFF Research Database (Denmark)

    Virk, Kashif Munir

    2008-01-01

    is the first such attempt in the published literature. The second part of the thesis deals with the issues related to the development of system-level design methodologies for networked multiprocessor systems-on-chip at various levels of design abstraction with special focus on the modeling and design...... at the system-level. The multiprocessor modeling framework is then extended to include models of networked multiprocessor systems-on-chip which is then employed to model wireless sensor networks both at the sensor node level as well as the wireless network level. In the third and the final part, the thesis...... to the transaction-level model. The thesis, as a whole makes contributions by describing a design methodology for networked multiprocessor embedded systems at three layers of abstraction from system-level through transaction-level to the cycle accurate level as well as demonstrating it practically by implementing...

  2. Physics-based and human-derived information fusion for analysts

    Science.gov (United States)

    Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael

    2017-05-01

    Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.

  3. A System-Level Throughput Model for Quantum Key Distribution

    Science.gov (United States)

    2015-09-17

    Only bibliography and glossary fragments were extracted for this record. Recoverable items: a discussion of discrete logarithms in a finite field and of the RSA algorithm as arguably the most popular asymmetric encryption scheme; bibliography entries including IEEE Transactions on Information Theory, vol. 22, no. 6, pp. 644-654, 1976, and G. Singh and S. Supriya, 'A Study of Encryption Algorithms (RSA, DES, 3DES and AES) for Information...'; and glossary entries: QKD = Quantum Key Distribution; OTP = One-Time Pad cryptographic algorithm; DES = Data Encryption Standard; 3DES ...

  4. System-level modeling for geological storage of CO2

    OpenAIRE

    Zhang, Yingqi; Oldenburg, Curtis M.; Finsterle, Stefan; Bodvarsson, Gudmundur S.

    2006-01-01

    One way to reduce the effects of anthropogenic greenhouse gases on climate is to inject carbon dioxide (CO2) from industrial sources into deep geological formations such as brine formations or depleted oil or gas reservoirs. Research has and is being conducted to improve understanding of factors affecting particular aspects of geological CO2 storage, such as performance, capacity, and health, safety and environmental (HSE) issues, as well as to lower the cost of CO2 capture and related p...

  5. The Artemis workbench for system-level performance evaluation of embedded systems

    NARCIS (Netherlands)

    Pimentel, A.D.

    2008-01-01

    In this paper, we present an overview of the Artemis workbench, which provides modelling and simulation methods and tools for efficient performance evaluation and exploration of heterogeneous embedded multimedia systems. More specifically, we describe the Artemis system-level modelling methodology,

  6. A Systems-Level Approach to Characterizing Effects of ENMs ...

    Science.gov (United States)

    Engineered nanomaterials (ENMs) represent a new regulatory challenge because of their unique properties and their potential to interact with ecological organisms at various developmental stages, in numerous environmental compartments. Traditional toxicity tests have proven to be unreliable due to their short-term nature and the subtle responses often observed following ENM exposure. In order to fully assess the potential for various ENMs to affect responses in organisms and ecosystems, we are using a systems-level framework to link molecular initiating events with changes in whole-organism responses, and to identify how these changes may translate across scales to disrupt important ecosystem processes. This framework utilizes information from nanoparticle characteristics and exposures to help make linkages across scales. We have used Arabidopsis thaliana as a model organism to identify potential transcriptome changes in response to specific ENMs. In addition, we have focused on plant species of agronomic importance to follow multi-generational changes in physiology and phenology, as well as epigenetic markers to identify possible mechanisms of inheritance. We are employing and developing complementary analytical tools (plasma-based and synchrotron spectroscopies, microscopy, and molecular and stable-isotopic techniques) to follow movement of ENMs and ENM products in plants as they develop. These studies have revealed that changes in gene expression do not a

  7. Evidence for systems-level molecular mechanisms of tumorigenesis

    Directory of Open Access Journals (Sweden)

    Capellá Gabriel

    2007-06-01

    Background: Cancer arises from the consecutive acquisition of genetic alterations. Increasing evidence suggests that, as a consequence of these alterations, molecular interactions are reprogrammed in the context of highly connected and regulated cellular networks. Coordinated reprogramming would allow the cell to acquire the capabilities for malignant growth. Results: Here, we determine the coordinated function of cancer gene products (i.e., proteins encoded by differentially expressed genes in tumors relative to healthy tissue counterparts, hereafter referred to as "CGPs"), defined as their topological properties and organization in the interactome network. We show that CGPs are central to information exchange and propagation and that they are specifically organized to promote tumorigenesis. Centrality is identified by both local (degree) and global (betweenness and closeness) measures, and systematically appears in down-regulated CGPs. Up-regulated CGPs do not consistently exhibit centrality, but both types of cancer products determine the overall integrity of the network structure. In addition to centrality, down-regulated CGPs show topological association that correlates with common biological processes and pathways involved in tumorigenesis. Conclusion: Given the current limited coverage of the human interactome, this study proposes that tumorigenesis takes place in a specific and organized way at the molecular systems level and suggests a model that comprises the precise down-regulation of groups of topologically associated proteins involved in particular functions, orchestrated with the up-regulation of specific proteins.

  8. Physics-based hybrid method for multiscale transport in porous media

    Science.gov (United States)

    Yousefzadeh, Mehrdad; Battiato, Ilenia

    2017-09-01

    Despite advancements in the development of multiscale models for flow and reactive transport in porous media, the accurate, efficient and physics-based coupling of multiple scales in hybrid models remains a major theoretical and computational challenge. Improving the predictivity of macroscale predictions by means of multiscale algorithms, relative to classical at-scale models, is the primary motivation for the development of multiscale simulators. Yet quantitative studies that explicitly address the predictive capability of multiscale coupling algorithms are few, since it is still generally not possible to have a priori estimates of the errors that are present when complex flow processes are modeled. We develop a nonintrusive pore-/continuum-scale hybrid model whose coupling error is bounded by the upscaling error, i.e., we build a predictive, tightly coupled multiscale scheme. This is accomplished by slightly enlarging the subdomain where continuum-scale equations are locally invalid and analytically defining physics-based coupling conditions at the interfaces separating the two computational subdomains, while enforcing state variable and flux continuity. The proposed multiscale coupling approach retains the advantages of domain decomposition approaches, including the use of existing solvers for each subdomain, while it gains flexibility in the choice of the numerical discretization method and keeps the coupling errors bounded by the upscaling error. We implement the coupling in finite volumes and test the proposed method by modeling flow and transport through a reactive channel and past an array of heterogeneously reactive cylinders.
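    For steady diffusive transport, the state-and-flux continuity conditions described above reduce, at a single interface between two finite volumes of different width and diffusivity, to a conductance-weighted interface state. A minimal one-interface sketch, illustrative of the continuity constraint only and not of the authors' full pore-/continuum-scale coupling scheme:

```python
def interface_state(u_left, u_right, d_left, d_right, h_left, h_right):
    """Interface value u_i between two finite volumes that enforces flux
    continuity for steady diffusion:
        d_left*(u_i - u_left)/(h_left/2) == d_right*(u_right - u_i)/(h_right/2)
    u_*: cell-centre states; d_*: diffusivities; h_*: cell widths."""
    w_left = d_left / (h_left / 2.0)     # conductance, left centre -> interface
    w_right = d_right / (h_right / 2.0)  # conductance, interface -> right centre
    return (w_left * u_left + w_right * u_right) / (w_left + w_right)

def interface_flux(u_left, u_right, d_left, d_right, h_left, h_right):
    """Discrete diffusive exchange rate d*du/dx evaluated on the left
    half-cell; by construction the right half-cell gives the same value."""
    u_i = interface_state(u_left, u_right, d_left, d_right, h_left, h_right)
    return d_left * (u_i - u_left) / (h_left / 2.0)
```

    With equal cells and diffusivities the interface state is the plain average; a higher conductance on one side (larger d, smaller h, as in a refined pore-scale patch) pulls the interface value toward that side's cell-centre state.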

  9. Practical options for selecting data-driven or physics-based prognostics algorithms with reviews

    International Nuclear Information System (INIS)

    An, Dawn; Kim, Nam H.; Choi, Joo-Ho

    2015-01-01

    This paper is to provide practical options for prognostics so that beginners can select appropriate methods for their fields of application. To achieve this goal, several popular algorithms are first reviewed in the data-driven and physics-based prognostics methods. Each algorithm’s attributes and pros and cons are analyzed in terms of model definition, model parameter estimation and ability to handle noise and bias in data. Fatigue crack growth examples are then used to illustrate the characteristics of different algorithms. In order to suggest a suitable algorithm, several studies are made based on the number of data sets, the level of noise and bias, availability of loading and physical models, and complexity of the damage growth behavior. Based on the study, it is concluded that the Gaussian process is easy and fast to implement, but works well only when the covariance function is properly defined. The neural network has the advantage in the case of large noise and complex models but only with many training data sets. The particle filter and Bayesian method are superior to the former methods because they are less affected by noise and model complexity, but work only when physical model and loading conditions are available. - Highlights: • Practical review of data-driven and physics-based prognostics are provided. • As common prognostics algorithms, NN, GP, PF and BM are introduced. • Algorithms’ attributes, pros and cons, and applicable conditions are discussed. • This will be helpful to choose the best algorithm for different applications
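    The fatigue crack growth examples referred to above are conventionally driven by the Paris law, da/dN = C(ΔK)^m with ΔK = Δσ√(πa). A minimal forward-integration sketch with illustrative constants; the reviewed algorithms (GP, NN, PF, BM) differ precisely in how they infer C and m from noisy, biased measurements, which is not shown here:

```python
import math

def crack_growth(a0, dsigma, C=1e-10, m=3.0, a_crit=0.05, dN=100):
    """Forward-integrate Paris-law fatigue crack growth:
        da/dN = C * dK**m,  dK = dsigma * sqrt(pi * a)
    a0: initial crack size (m); dsigma: stress range (MPa);
    C, m: Paris constants (illustrative values, units consistent with
    MPa*sqrt(m)); a_crit: critical crack size (m); dN: cycles per step.
    Returns (cycles to reach a_crit, list of crack sizes per step)."""
    a, n = a0, 0
    sizes = [a0]
    while a < a_crit:
        dK = dsigma * math.sqrt(math.pi * a)  # stress intensity factor range
        a += C * dK ** m * dN                 # explicit Euler step
        n += dN
        sizes.append(a)
    return n, sizes
```

    With these constants a 1 mm crack under a 100 MPa stress range reaches the 50 mm critical size in on the order of 10^5 cycles; a prognostics algorithm would re-estimate C and m as each new crack-size measurement arrives and propagate the remaining useful life.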

  10. A physically-based correlation of irradiation-induced transition temperature shifts for RPV steels

    International Nuclear Information System (INIS)

    Eason, E.D.; Odette, G.R.; Nanstad, R.K.; Yamamoto, T.

    2013-01-01

    This paper presents a physically-based, empirically calibrated model for estimating irradiation-induced transition temperature shifts in reactor pressure vessel steels, based on a broader database and a more complete understanding of embrittlement mechanisms than were available for earlier models. Brief descriptions of the underlying radiation damage mechanisms and the database are included, but the emphasis is on the model and the quality of its fit to U.S. power reactor surveillance data. The model is compared to a random sample of surveillance data that were set aside and not used in fitting, and to selected independent data from test reactor irradiations, in both cases showing good ability to predict data that were not used for calibration. The model is a good fit to the surveillance data, with no significant residual error trends for variables included in the model or for additional variables that could be included.

  11. Physics-based Space Weather Forecasting in the Project for Solar-Terrestrial Environment Prediction (PSTEP) in Japan

    Science.gov (United States)

    Kusano, K.

    2016-12-01

    The Project for Solar-Terrestrial Environment Prediction (PSTEP) is a recently launched, Japanese nation-wide research collaboration. PSTEP aims to develop a synergistic interaction between predictive and scientific studies of the solar-terrestrial environment and to establish the basis for next-generation space weather forecasting using state-of-the-art observation systems and physics-based models. For this project, we coordinate four research groups, which develop, respectively, (1) an integrated space weather forecast system, (2) physics-based solar storm prediction, (3) predictive models of magnetosphere and ionosphere dynamics, and (4) a model of solar cycle activity and its impact on climate. In this project, we will build a coordinated physics-based model to answer fundamental questions concerning the onset of solar eruptions and the mechanism of radiation belt dynamics in the Earth's magnetosphere. In this paper, we present the strategy of PSTEP and discuss the role and prospects of the physics-based space weather forecasting system being developed by PSTEP.

  12. Tsunami Early Warning via a Physics-Based Simulation Pipeline

    Science.gov (United States)

    Wilson, J. M.; Rundle, J. B.; Donnellan, A.; Ward, S. N.; Komjathy, A.

    2017-12-01

    Through independent efforts, physics-based simulations of earthquakes, tsunamis, and the atmospheric signatures of these phenomena have been developed. With the goal of producing tsunami forecasts and early warning tools for at-risk regions, we join these three spheres to create a simulation pipeline. The Virtual Quake simulator can produce thousands of years of synthetic seismicity on large, complex fault geometries, as well as the expected surface displacement in tsunamigenic regions. These displacements are used as initial conditions for tsunami simulators, such as Tsunami Squares, to produce catalogs of potential tsunami scenarios with probabilities. Finally, these tsunami scenarios can act as input for simulations of the associated ionospheric total electron content, signals which can be detected by GNSS satellites for purposes of early warning in the event of a real tsunami. We present the most recent developments in this project.

  13. System-Level Sensitivity Analysis of SiNW-bioFET-Based Biosensing Using Lockin Amplification

    DEFF Research Database (Denmark)

    Patou, François; Dimaki, Maria; Kjærgaard, Claus

    2017-01-01

    We carry out for the first time the system-level sensitivity analysis of a generic SiNW-bioFET model coupled to a custom-design instrument based on the lock-in amplifier. By investigating a large parametric space spanning over both sensor and instrumentation specifications, we demonstrate that systemwide...

  14. Physically-based Assessment of Tropical Cyclone Damage and Economic Losses

    Science.gov (United States)

    Lin, N.

    2012-12-01

    Estimating damage and economic losses caused by tropical cyclones (TCs) is a topic of considerable research interest in many scientific fields, including meteorology, structural and coastal engineering, and actuarial sciences. One approach is based on the empirical relationship between TC characteristics and loss data. Another is to model the physical mechanisms of TC-induced damage. In this talk we discuss the physically-based approach to predicting TC damage and losses due to extreme wind and storm surge. We first present an integrated vulnerability model, which, for the first time, explicitly models the essential mechanisms causing wind damage to residential areas during storm passage, including windborne-debris impact and the pressure-debris interaction that may lead, in a chain reaction, to structural failures (Lin and Vanmarcke 2010; Lin et al. 2010a). This model can be used to predict the economic losses in a residential neighborhood (with hundreds of buildings) during a specific TC (Yau et al. 2011) or applied jointly with a TC risk model (e.g., Emanuel et al. 2008) to estimate the expected losses over long time periods. Then we present a TC storm surge risk model that has been applied to New York City (Lin et al. 2010b; Lin et al. 2012; Aerts et al. 2012), Miami-Dade County, Florida (Klima et al. 2011), Galveston, Texas (Lickley, 2012), and other coastal areas around the world (e.g., Tampa, Florida; the Persian Gulf; Darwin, Australia; Shanghai, China). These physically-based models are applicable to various coastal areas and can account for changes in climate and coastal exposure over time. We also point out that, although made computationally efficient for risk assessment, these models are not suitable for regional or global analysis, which has been a focus of the empirically-based economic analysis (e.g., Hsiang and Narita 2012). A future research direction is to simplify the physically-based models, possibly through

  15. Rethinking earthquake-related DC-ULF electromagnetic phenomena: towards a physics-based approach

    Directory of Open Access Journals (Sweden)

    Q. Huang

    2011-11-01

    Full Text Available Numerous electromagnetic changes possibly related to earthquakes have been independently reported, and attempts have even been made to apply them to short-term earthquake prediction. However, there are active debates on this issue because the seismogenic process is rather complicated and the studies have been mainly empirical (i.e. a kind of experience-based approach). Thus, a physics-based study would be helpful for understanding earthquake-related electromagnetic phenomena and strengthening their applications. As a potential physics-based approach, I present an integrated research scheme, taking into account the interaction among observation, methodology, and physical model. For simplicity, this work focuses only on earthquake-related DC-ULF electromagnetic phenomena. The main approach addresses the following key problems: (1) how to perform reliable and appropriate observations with clearly defined physical quantities; (2) how to develop a robust methodology to reveal weak earthquake-related electromagnetic signals against a noisy background; and (3) how to develop plausible physical models, based on theoretical analyses and/or laboratory experiments, for the explanation of the earthquake-related electromagnetic signals observed under field conditions.

  16. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    Science.gov (United States)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
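The kind of conditional-probability statistic computed from long synthetic catalogs can be illustrated on a toy catalog. The sketch below (our own; every parameter is invented) uses a memoryless, uniform-in-time catalog with Gutenberg-Richter magnitudes, for which the probability of another M ≥ 7 event within a year of any given one reduces to 1 - exp(-rate). RSQSim's physics (rate-and-state friction, stress transfer) makes real simulated catalogs depart from this baseline, which is exactly what such comparisons probe.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 10_000.0                      # catalog length in years
n_events = 200_000                # ~20 events/yr background rate
times = np.sort(rng.uniform(0.0, T, n_events))

# Gutenberg-Richter magnitudes with b = 1 above a magnitude-5 floor:
# P(M >= 7) = 10**(-2) = 1%
mags = 5.0 - np.log10(rng.random(n_events))

large = times[mags >= 7.0]        # large-event subcatalog, rate ~0.2/yr
gaps = np.diff(large)             # waiting times between large events
p_next_within_1yr = (gaps <= 1.0).mean()

# Memoryless baseline for comparison with clustered (physics-based) catalogs
rate_large = large.size / T
baseline = 1.0 - np.exp(-rate_large)
print(p_next_within_1yr, baseline)
```

A physics-based catalog with aftershock-like clustering would show an empirical conditional probability well above this Poisson baseline shortly after large events.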

  17. Physics-based scoring of protein-ligand interactions: explicit polarizability, quantum mechanics and free energies.

    Science.gov (United States)

    Bryce, Richard A

    2011-04-01

    The ability to accurately predict the interaction of a ligand with its receptor is a key limitation in computer-aided drug design approaches such as virtual screening and de novo design. In this article, we examine current strategies for a physics-based approach to scoring of protein-ligand affinity, as well as outlining recent developments in force fields and quantum chemical techniques. We also consider advances in the development and application of simulation-based free energy methods to study protein-ligand interactions. Fuelled by recent advances in computational algorithms and hardware, there is the opportunity for increased integration of physics-based scoring approaches at earlier stages in computationally guided drug discovery. Specifically, we envisage increased use of implicit solvent models and simulation-based scoring methods as tools for computing the affinities of large virtual ligand libraries. Approaches based on end point simulations and reference potentials allow the application of more advanced potential energy functions to prediction of protein-ligand binding affinities. Comprehensive evaluation of polarizable force fields and quantum mechanical (QM)/molecular mechanical and QM methods in scoring of protein-ligand interactions is required, particularly in their ability to address challenging targets such as metalloproteins and other proteins that make highly polar interactions. Finally, we anticipate increasingly quantitative free energy perturbation and thermodynamic integration methods that are practical for optimization of hits obtained from screened ligand libraries.

  18. New light field camera based on physical based rendering tracing

    Science.gov (United States)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, these limitations have been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided a way to establish a link between the virtual scenes and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The detailed operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique are presented in detail.

  19. System-level perturbations of cell metabolism using CRISPR/Cas9

    DEFF Research Database (Denmark)

    Jakociunas, Tadas; Jensen, Michael Krogh; Keasling, Jay

    2017-01-01

    CRISPR/Cas9 (clustered regularly interspaced palindromic repeats and the associated protein Cas9) techniques have made genome engineering and transcriptional reprogramming studies more advanced and cost-effective. For metabolic engineering purposes, the CRISPR-based tools have been applied...... previously possible. In this mini-review we highlight recent studies adopting CRISPR/Cas9 for systems-level perturbations and model-guided metabolic engineering....

  20. Gradient Enhanced Physically Based Plasticity: Implementation and Application to a Problem Pertaining Size Effect : Implementation and application to a problem pertaining size effect

    NARCIS (Netherlands)

    Perdahcioglu, Emin Semih; Soyarslan, C.; van den Boogaard, Antonius H.; Bargmann, S.

    2016-01-01

    A physically based plasticity model is implemented which describes work hardening of a material as a function of the total dislocation density. The local part of the model, which involves statistically stored dislocations (SSDs) only, is based on Bergström's original model. The nonlocal part is

  1. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    Science.gov (United States)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
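The derived relationship between exponential time intervals and Poisson counts for a constant-rate Markov process is easy to demonstrate numerically. A minimal sketch (our own, with an arbitrary rate) draws exponential inter-arrival times and checks that counts in fixed windows have Poisson statistics, i.e. mean equal to variance:

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 5.0            # events per second (constant-rate Markov process)
t_max = 10_000.0      # total observation time in seconds

# Draw i.i.d. exponential inter-arrival times and accumulate event times;
# the 1.2 factor over-draws so the cumulative sum safely exceeds t_max
intervals = rng.exponential(1.0 / rate, size=int(rate * t_max * 1.2))
times = np.cumsum(intervals)
times = times[times < t_max]

# Counting statistics: number of events in consecutive 1-second windows
counts = np.histogram(times, bins=np.arange(0.0, t_max + 1.0, 1.0))[0]

print(intervals.mean())             # ~ 1/rate = 0.2 s
print(counts.mean(), counts.var())  # both ~ rate = 5 (Poisson: mean == variance)
```

Overdispersion from electronic noise, as discussed in the paper, would show up here as `counts.var()` exceeding `counts.mean()`, motivating the negative binomial model.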

  2. A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images.

    Science.gov (United States)

    Windrim, Lloyd; Ramakrishnan, Rishi; Melkumyan, Arman; Murphy, Richard J

    2018-02-01

    This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.

  3. Accelerating next generation sequencing data analysis with system level optimizations.

    Science.gov (United States)

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer and CPU frequency scaling are some of the hardware features available in modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system-level parameters before running the application. We studied GATK HaplotypeCaller, a component of common NGS workflows that consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked and the execution time of HaplotypeCaller was optimized through various system-level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing; (ii) architecture-specific tuning in the PairHMM library for vectorization; (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer; and (iv) switching the CPU frequency governor from the default 'on-demand' mode to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7 respectively.

  4. A systems-level approach for investigating organophosphorus pesticide toxicity.

    Science.gov (United States)

    Zhu, Jingbo; Wang, Jing; Ding, Yan; Liu, Baoyue; Xiao, Wei

    2018-03-01

    A full understanding of the single and joint toxicity of the variety of organophosphorus (OP) pesticides is still unavailable because of their extremely complex mechanisms of action. This study established a systems-level approach based on systems toxicology to investigate OP pesticide toxicity by incorporating ADME/T properties, protein prediction, and network and pathway analysis. The results showed that most OP pesticides are highly toxic according to the ADME/T parameters, and can interact with significant receptor proteins to cooperatively lead to various diseases, as shown by the established OP pesticide-protein and protein-disease networks. Furthermore, the finding that multiple OP pesticides potentially act on the same receptor proteins and/or on functionally diverse proteins explains how multiple OP pesticides could act synergistically or additively at a molecular/systems level. Finally, the integrated pathways revealed the mechanisms of toxicity of interacting OP pesticides and elucidated the pathogenesis induced by OP pesticides. This study demonstrates a systems-level approach for investigating OP pesticide toxicity that can be further applied to risk assessments of various toxins, which is of significant interest to food security and environmental protection. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. MEDICARE PAYMENTS AND SYSTEM-LEVEL HEALTH-CARE USE

    Science.gov (United States)

    ROBBINS, JACOB A.

    2015-01-01

    The rapid growth of Medicare managed care over the past decade has the potential to increase the efficiency of health-care delivery. Improvements in care management for some may improve efficiency system-wide, with implications for optimal payment policy in public insurance programs. These system-level effects may depend on local health-care market structure and vary based on patient characteristics. We use exogenous variation in the Medicare payment schedule to isolate the effects of market-level managed care enrollment on the quantity and quality of care delivered. We find that in areas with greater enrollment of Medicare beneficiaries in managed care, the non–managed care beneficiaries have fewer days in the hospital but more outpatient visits, consistent with a substitution of less expensive outpatient care for more expensive inpatient care, particularly at high levels of managed care. We find no evidence that care is of lower quality. Optimal payment policies for Medicare managed care enrollees that account for system-level spillovers may thus be higher than those that do not. PMID:27042687

  6. Measuring healthcare productivity - from unit to system level.

    Science.gov (United States)

    Kämäräinen, Vesa Johannes; Peltokorpi, Antti; Torkki, Paulus; Tallbacka, Kaj

    2016-04-18

    Purpose - Healthcare productivity is a growing issue in most Western countries where healthcare expenditure is rapidly increasing. Accurate productivity metrics are therefore essential to avoid sub-optimization within a healthcare system. The purpose of this paper is to focus on healthcare production system productivity measurement. Design/methodology/approach - Traditionally, healthcare productivity has been studied and measured independently at the unit, organization and system level. Suggesting that productivity measurement should be done at the different levels while simultaneously linking productivity measurement to incentives, this study presents the challenges of productivity measurement at each level. The study introduces different methods to measure productivity in healthcare. In addition, it provides background information on the methods used to measure productivity and the parameters used in these methods. A pilot investigation of productivity measurement is used to illustrate the challenges of measurement, to test the developed measures and to provide practical information for managers. Findings - The study introduces different approaches and methods to measure productivity in healthcare. Practical implications - A pilot investigation of productivity measurement is used to illustrate the challenges of measurement, to test the developed measures and to demonstrate the practical benefits for managers. Originality/value - The authors focus on measurement of the whole healthcare production system while trying to avoid sub-optimization. Additionally considering an individual patient approach, productivity measurement is examined at the unit level, the organizational level and the system level.

  7. "Physically-based" numerical experiment to determine the dominant hillslope processes during floods?

    Science.gov (United States)

    Gaume, Eric; Esclaffer, Thomas; Dangla, Patrick; Payrastre, Olivier

    2016-04-01

    To study the dynamics of hillslope responses during flood events, a fully coupled, "physically-based" model for the combined numerical simulation of surface runoff and underground flows has been developed. Particular attention was given to the selection of appropriate numerical schemes for modelling both processes and their coupling. Surprisingly, the most difficult question to solve from a numerical point of view was not the coupling of two processes with contrasting kinetics, such as surface and underground flows, but the high-gradient infiltration fronts appearing in soils, a source of numerical diffusion, instabilities and sometimes divergence. Once elaborated, the model was successfully tested against the results of high-quality experiments conducted on a laboratory sandy slope in the early eighties, which are still considered a reference hillslope experimental setting (Abdul & Gillham). The model proved able to accurately simulate the pore pressure distributions observed in this 1.5 meter deep and wide laboratory hillslope, as well as the shapes of its outflow hydrographs and the measured respective contributions of direct runoff and groundwater to these hydrographs. Building on this success, the same model was used to simulate the response of a theoretical 100-meter wide, 10% sloped hillslope with a 2 meter deep pervious soil over impervious bedrock. Three rain events were tested: a 100 millimeter rainfall event over 10 days, over 1 day, or over one hour. The simulated responses are not hydrologically realistic; in particular, the fast component of the response, which is generally observed in the real world and explains flood events, is almost absent from the simulated response. On reflection, however, the simulation results appear entirely logical given the proposed model. The simulated response, in fact a recession hydrograph, corresponds to a piston flow of a relatively uniformly

  8. Integrating physically based simulators with Event Detection Systems: Multi-site detection approach.

    Science.gov (United States)

    Housh, Mashor; Ohar, Ziv

    2017-03-01

    The Fault Detection (FD) problem in control theory concerns monitoring a system to identify when a fault has occurred. Two approaches to FD can be distinguished: signal-processing-based FD and model-based FD. The former develops algorithms to infer faults directly from sensors' readings, while the latter uses a simulation model of the real system to analyze the discrepancy between sensors' readings and the values expected from the simulation model. Most contamination Event Detection Systems (EDSs) for water distribution systems have followed signal-processing-based FD, which relies on analyzing the signals from monitoring stations independently of each other rather than evaluating all stations simultaneously within an integrated network. In this study, we show that a model-based EDS, which utilizes physically based water quality and hydraulic simulation models, can outperform a signal-processing-based EDS. We also show that the model-based EDS can facilitate the development of a Multi-Site EDS (MSEDS), which analyzes the data from all monitoring stations simultaneously within an integrated network. The advantage of the joint analysis in the MSEDS is expressed in increased detection accuracy (more true positive alarms and fewer false alarms) and shorter detection time. Copyright © 2016 Elsevier Ltd. All rights reserved.
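The advantage of joint multi-site analysis over independent single-site thresholds can be sketched with synthetic residuals. This toy example (our own; not the authors' water-quality and hydraulic models) injects a weak, spatially correlated contamination signal and compares a per-station 3-sigma rule with the same rule applied to the station-averaged residual:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_steps, sigma = 4, 500, 0.05
event_start = 300

# "Simulation model" output: the expected signal at each monitoring station
model_pred = np.ones((n_stations, n_steps))

# Sensor readings = model + noise, plus a weak contamination signal that
# appears at every station simultaneously after event_start
readings = model_pred + rng.normal(0.0, sigma, (n_stations, n_steps))
readings[:, event_start:] -= 0.10

residuals = readings - model_pred   # model-based FD: reading minus expectation

# Single-site rule: each station alone, 3-sigma threshold
single_alarm = np.abs(residuals) > 3.0 * sigma

# Multi-site rule: average residuals over stations first; the noise std
# shrinks by sqrt(n_stations) while the correlated signal does not
joint = residuals.mean(axis=0)
joint_alarm = np.abs(joint) > 3.0 * sigma / np.sqrt(n_stations)

print(single_alarm[:, event_start:].mean(), joint_alarm[event_start:].mean())
```

The joint rule detects the 0.10 shift far more reliably than any single station at the same false-alarm level, mirroring the accuracy gain the MSEDS obtains from evaluating all stations within one integrated network.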

  9. System-level techniques for analog performance enhancement

    CERN Document Server

    Song, Bang-Sup

    2016-01-01

    This book shows readers how to avoid common mistakes in circuit design, and presents classic circuit concepts and design approaches from the transistor to the system level. The discussion is geared to be accessible and optimized for practical designers who want to learn to create circuits without simulations. Topic by topic, the author guides designers to learn the classic analog design skills by understanding basic electronics principles correctly, and further prepares them to feel confident in designing high-performance, state-of-the-art CMOS analog systems. This book combines and presents all the in-depth information necessary to perform various design tasks, so that readers can grasp the essential material without reading through the entire book. This top-down approach helps readers to build practical design expertise quickly, starting from their understanding of electronics fundamentals.

  10. Process for Selecting System Level Assessments for Human System Technologies

    Science.gov (United States)

    Watts, James; Park, John

    2006-01-01

    The integration of the many life support systems necessary to construct a stable habitat is difficult. The correct identification of the appropriate technologies and corresponding interfaces is an exhaustive process. Once technologies are selected, secondary issues such as mechanical and electrical interfaces must be addressed. The required analytical and testing work must be approached in a piecewise fashion to achieve timely results. A repeatable process has been developed to identify and prioritize system-level assessments and testing needs. This Assessment Selection Process has been defined to assess cross-cutting integration issues at the system or component level. Assessments are used to identify risks, encourage future actions to mitigate risks, or spur further studies.

  11. System-level integration of active silicon photonic biosensors

    Science.gov (United States)

    Laplatine, L.; Al'Mrayat, O.; Luan, E.; Fang, C.; Rezaiezadeh, S.; Ratner, D. M.; Cheung, K.; Dattner, Y.; Chrostowski, L.

    2017-02-01

    Biosensors based on silicon photonic integrated circuits have attracted growing interest in recent years. The use of sub-micron silicon waveguides to propagate near-infrared light allows for a drastic reduction in the size of the optical system, while increasing its complexity and sensitivity. Using silicon as the propagating medium also leverages the fabrication capabilities of CMOS foundries, which offer low-cost mass production. Researchers have deeply investigated photonic sensor devices such as ring resonators, interferometers and photonic crystals, but the practical integration of silicon photonic biochips as part of a complete system has received less attention. Herein, we present a practical system-level architecture which can be employed to integrate the aforementioned photonic biosensors. We describe a system based on 1 mm2 dies that integrate germanium photodetectors and a single light coupling device. The dies are embedded in a 16x16 mm2 epoxy package to enable microfluidic and electrical integration. First, we demonstrate a simple process to mimic Fan-Out Wafer-Level-Packaging, which enables low-cost mass production. We then characterize the photodetectors in photovoltaic mode, which exhibit high sensitivity at low optical power. Finally, we present a new grating coupler concept that relaxes the lateral alignment tolerance to +/- 50 μm at a 1-dB (80%) power penalty, which should permit non-experts to use the biochips in a "plug-and-play" style. The system-level integration demonstrated in this study paves the way towards the mass production of low-cost and highly sensitive biosensors, and can facilitate their wide adoption for biomedical and agro-environmental applications.

  12. Public health preparedness in Alberta: a systems-level study.

    Science.gov (United States)

    Moore, Douglas; Shiell, Alan; Noseworthy, Tom; Russell, Margaret; Predy, Gerald

    2006-12-28

    Recent international and national events have brought critical attention to the Canadian public health system and how prepared the system is to respond to various types of contemporary public health threats. This article describes the study design and methods being used to conduct a systems-level analysis of public health preparedness in the province of Alberta, Canada. The project is being funded under the Health Research Fund, Alberta Heritage Foundation for Medical Research. We use an embedded, multiple-case study design, integrating qualitative and quantitative methods to measure empirically the degree of inter-organizational coordination existing among public health agencies in Alberta, Canada. We situate our measures of inter-organizational network ties within a systems-level framework to assess the relative influence of inter-organizational ties, individual organizational attributes, and institutional environmental features on public health preparedness. The relative contribution of each component is examined for two potential public health threats: pandemic influenza and West Nile virus. The organizational dimensions of public health preparedness depend on a complex mix of individual organizational characteristics, inter-agency relationships, and institutional environmental factors. Our study is designed to discriminate among these different system components and assess the independent influence of each on the other, as well as the overall level of public health preparedness in Alberta. While all agree that competent organizations and functioning networks are important components of public health preparedness, this study is one of the first to use formal network analysis to study the role of inter-agency networks in the development of prepared public health systems.
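The formal network measures such a study relies on, such as tie density and the centrality of each agency, can be computed directly from an adjacency matrix of reported coordination ties. The matrix and agency names below are purely hypothetical, for illustration:

```python
import numpy as np

# Hypothetical symmetric adjacency matrix: entry (i, j) is 1 where two
# agencies report a coordination tie (names are illustrative only)
agencies = ["ProvHealth", "RegionalHA", "PublicLab", "EmergMgmt", "FedLiaison"]
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

n = len(agencies)
# Network density: fraction of possible undirected ties actually present
density = A.sum() / (n * (n - 1))
# Normalized degree centrality: ties per agency over the maximum possible
degree_centrality = A.sum(axis=1) / (n - 1)

print(density)                                       # 0.6
print(agencies[int(np.argmax(degree_centrality))])   # the most central agency
```

In a preparedness analysis these simple measures would be computed per threat scenario (e.g. pandemic influenza vs. West Nile virus) and related to organizational attributes and environmental factors.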

  13. Promoting system-level learning from project-level lessons

    International Nuclear Information System (INIS)

    Jong, Amos A. de; Runhaar, Hens A.C.; Runhaar, Piety R.; Kolhoff, Arend J.; Driessen, Peter P.J.

    2012-01-01

A growing number of low- and middle-income countries (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, many of these EIA systems are characterised by low performance in terms of timely information dissemination, monitoring and enforcement after licensing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by the actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at the EIA system level via project-level donor interventions. This ‘indirect’ learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performance. Our exploratory research in Ghana and the Maldives shows that thus far, ‘indirect’ learning occurs only incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, ‘indirect’ learning seems to flourish best in large projects where donors have achieved a position of influence that they can use to evoke reflection upon system malfunctions. To enhance learning at all levels, donors should present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options, and stimulate the use of their advisory reports to generate organisational memory and ensure a better information

  14. Public health preparedness in Alberta: a systems-level study

    Directory of Open Access Journals (Sweden)

    Noseworthy Tom

    2006-12-01

    Full Text Available Abstract Background Recent international and national events have brought critical attention to the Canadian public health system and how prepared the system is to respond to various types of contemporary public health threats. This article describes the study design and methods being used to conduct a systems-level analysis of public health preparedness in the province of Alberta, Canada. The project is being funded under the Health Research Fund, Alberta Heritage Foundation for Medical Research. Methods/Design We use an embedded, multiple-case study design, integrating qualitative and quantitative methods to measure empirically the degree of inter-organizational coordination existing among public health agencies in Alberta, Canada. We situate our measures of inter-organizational network ties within a systems-level framework to assess the relative influence of inter-organizational ties, individual organizational attributes, and institutional environmental features on public health preparedness. The relative contribution of each component is examined for two potential public health threats: pandemic influenza and West Nile virus. Discussion The organizational dimensions of public health preparedness depend on a complex mix of individual organizational characteristics, inter-agency relationships, and institutional environmental factors. Our study is designed to discriminate among these different system components and assess the independent influence of each on the other, as well as the overall level of public health preparedness in Alberta. While all agree that competent organizations and functioning networks are important components of public health preparedness, this study is one of the first to use formal network analysis to study the role of inter-agency networks in the development of prepared public health systems.

  15. Promoting system-level learning from project-level lessons

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Amos A. de, E-mail: amosdejong@gmail.com [Innovation Management, Utrecht (Netherlands); Runhaar, Hens A.C., E-mail: h.a.c.runhaar@uu.nl [Section of Environmental Governance, Utrecht University, Utrecht (Netherlands); Runhaar, Piety R., E-mail: piety.runhaar@wur.nl [Organisational Psychology and Human Resource Development, University of Twente, Enschede (Netherlands); Kolhoff, Arend J., E-mail: Akolhoff@eia.nl [The Netherlands Commission for Environmental Assessment, Utrecht (Netherlands); Driessen, Peter P.J., E-mail: p.driessen@geo.uu.nl [Department of Innovation and Environment Sciences, Utrecht University, Utrecht (Netherlands)

    2012-02-15

A growing number of low- and middle-income countries (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, many of these EIA systems are characterised by low performance in terms of timely information dissemination, monitoring and enforcement after licensing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by the actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at the EIA system level via project-level donor interventions. This 'indirect' learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performance. Our exploratory research in Ghana and the Maldives shows that thus far, 'indirect' learning occurs only incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, 'indirect' learning seems to flourish best in large projects where donors have achieved a position of influence that they can use to evoke reflection upon system malfunctions. To enhance learning at all levels, donors should present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options, and stimulate the use of their advisory reports to generate organisational memory and ensure a better

  16. Physically-Based Assessment of Intrinsic Groundwater Resource Vulnerability in AN Urban Catchment

    Science.gov (United States)

    Graf, T.; Therrien, R.; Lemieux, J.; Molson, J. W.

    2013-12-01

Several methods exist to assess intrinsic groundwater (re)source vulnerability for the purpose of sustainable groundwater management and protection. However, several of these methods are empirical and limited in their application to specific types of hydrogeological systems. Recent studies suggest that a physically-based approach could be better suited to provide a general, conceptual and operational basis for groundwater vulnerability assessment. A novel method for physically-based assessment of intrinsic aquifer vulnerability is currently under development and being tested to explore the potential of an integrated modelling approach, combining groundwater travel time probability and future scenario modelling with the fully integrated HydroGeoSphere model. To determine the intrinsic groundwater resource vulnerability, a fully coupled 2D surface water and 3D variably-saturated groundwater flow model, in conjunction with a 3D geological model (GoCAD), has been developed for a case study of the Rivière Saint-Charles (Québec, Canada) regional-scale urban watershed. The model has been calibrated under transient flow conditions for the hydrogeological, variably-saturated subsurface system, coupled with the overland flow zone, taking into account monthly recharge variation and evapotranspiration. To better determine the intrinsic groundwater vulnerability, two independent approaches are considered and subsequently combined in a simple, holistic multi-criteria decision analysis. Most data for the model come from an extensive hydrogeological database for the watershed, and data gaps have been filled via field tests and a literature review. The subsurface is composed of nine hydrofacies, ranging from unconsolidated fluvioglacial sediments to low-permeability bedrock. The overland flow zone is divided into five major zones (Urban, Rural, Forest, River and Lake) to simulate differences in land use, whereas the unsaturated zone is represented via the model

  17. Abstract Radio Resource Management Framework for System Level Simulations in LTE-A Systems

    DEFF Research Database (Denmark)

    Fotiadis, Panagiotis; Viering, Ingo; Zanier, Paolo

    2014-01-01

This paper provides a simple mathematical model of different packet scheduling policies in Long Term Evolution-Advanced (LTE-A) systems, by investigating the performance of Proportional Fair (PF) and the generalized cross-Component Carrier scheduler from a theoretical perspective. For that purpose, an abstract Radio Resource Management (RRM) framework has been developed and tested for different ratios of users with Carrier Aggregation (CA) capabilities. The conducted system level simulations confirm that the proposed model can satisfactorily capture the main properties of the aforementioned scheduling policies.

  18. System-level perturbations of cell metabolism using CRISPR/Cas9

    Energy Technology Data Exchange (ETDEWEB)

    Jakočiūnas, Tadas [Technical Univ. of Denmark, Lyngby (Denmark); Jensen, Michael K. [Technical Univ. of Denmark, Lyngby (Denmark); Keasling, Jay D. [Technical Univ. of Denmark, Lyngby (Denmark); Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2017-03-30

CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats and the associated protein Cas9) techniques have made genome engineering and transcriptional reprogramming studies much more advanced and cost-effective. For metabolic engineering purposes, CRISPR-based tools have been applied to single and multiplex pathway modifications and transcriptional regulation. The effectiveness of these tools allows researchers to implement genome-wide perturbations, test model-guided genome editing strategies, and perform transcriptional reprogramming perturbations in a more advanced manner than previously possible. In this mini-review we highlight recent studies adopting CRISPR/Cas9 for systems-level perturbations and model-guided metabolic engineering.

  19. Value of information in sequential decision making: Component inspection, permanent monitoring and system-level scheduling

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo

    2016-01-01

We illustrate how to assess the Value of Information (VoI) in sequential decision making problems modeled by Partially Observable Markov Decision Processes (POMDPs). POMDPs provide a general framework for modeling the management of infrastructure components, including operation and maintenance, when only partial or noisy observations are available; VoI is a key concept for selecting explorative actions, with application to component inspection and monitoring. Furthermore, component-level VoI can serve as an effective heuristic for assigning priorities in system-level inspection scheduling. We introduce two alternative models for the availability of information, and derive the VoI in each setting: the Stochastic Allocation (SA) model assumes that observations are collected with a given probability, while the Fee-based Allocation (FA) model assumes that they are available at a given cost. After presenting these models at the component level, we investigate how they perform for system-level inspection scheduling. - Highlights: • On the Value of Information in POMDPs, for optimal exploration of systems. • A method for assessing the Value of Information of permanent monitoring. • A method for allocating inspections in systems made up of parallel POMDPs.
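As an aside, the one-step VoI calculation that underlies such models can be sketched for a single two-state component. All names, costs and probabilities below are hypothetical illustrations, not the authors' model:

```python
# Illustrative sketch (not the paper's code): one-step Value of Information
# for a two-state component ("intact"/"damaged") with a noisy inspection.
# VoI = expected cost of acting on the prior minus the expected cost of
# acting on the posterior, averaged over possible inspection outcomes.

def expected_cost(belief_damaged, repair_cost=100.0, failure_cost=500.0):
    """Cost of the best action given the current belief:
    either repair now, or do nothing and risk failure."""
    do_nothing = belief_damaged * failure_cost
    return min(do_nothing, repair_cost)

def value_of_information(prior=0.3, hit=0.9, false_alarm=0.1):
    """VoI of a binary inspection with given hit/false-alarm rates."""
    cost_without = expected_cost(prior)
    # Probability of a positive inspection outcome (total probability).
    p_pos = hit * prior + false_alarm * (1.0 - prior)
    p_neg = 1.0 - p_pos
    # Posterior beliefs via Bayes' rule.
    post_pos = hit * prior / p_pos
    post_neg = (1.0 - hit) * prior / p_neg
    cost_with = p_pos * expected_cost(post_pos) + p_neg * expected_cost(post_neg)
    return cost_without - cost_with

print(value_of_information())
```

Because the decision-maker can condition the repair decision on the inspection outcome, the VoI is always non-negative; ranking components by this quantity is the heuristic role the abstract describes.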

  20. Optimal maintenance policy incorporating system level and unit level for mechanical systems

    Science.gov (United States)

    Duan, Chaoqun; Deng, Chao; Wang, Bingran

    2018-04-01

This study develops a multi-level maintenance policy that combines system-level and unit-level maintenance under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold; in addition, each unit undergoes two levels of maintenance: one initiated when the unit exceeds its preventive maintenance (PM) threshold, and the other performed opportunistically whenever any other unit undergoes maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arise because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.
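A minimal single-unit analogue of such threshold optimisation is the classic age-replacement policy evaluated by renewal-reward theory. The sketch below, with invented costs and an invented lifetime distribution (not the paper's semi-Markov model), shows how the long-run expected average cost per unit time is minimised over a PM threshold:

```python
# Hypothetical single-unit analogue (not the paper's SMDP formulation):
# age-replacement with PM threshold T, evaluated by renewal-reward theory:
# long-run average cost = E[cost per cycle] / E[cycle length].

C_PM, C_CM = 1.0, 5.0          # preventive vs corrective replacement cost (made up)
pmf = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.30, 5: 0.35}   # P(lifetime = k periods)

def average_cost(T):
    """Replace preventively at age T; correctively if failure occurs first."""
    e_cost = e_len = 0.0
    for k, p in pmf.items():
        if k < T:                      # unit fails before reaching age T
            e_cost += p * C_CM
            e_len += p * k
        else:                          # unit survives to the PM threshold
            e_cost += p * C_PM
            e_len += p * T
    return e_cost / e_len

best_T = min(range(1, 6), key=average_cost)
print(best_T, round(average_cost(best_T), 3))   # → 3 0.571
```

The paper's policy generalises this idea to reliability thresholds over multiple dependent units, but the cost-per-unit-time objective is the same.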

  1. Autonomous physics-based color learning under daylight

    Science.gov (United States)

    Berube Lauziere, Yves; Gingras, Denis J.; Ferrie, Frank P.

    1999-09-01

An autonomous approach for learning the colors of specific objects assumed to have known body spectral reflectances is developed for daylight illumination conditions. The main issue is to find these objects autonomously in a set of training images captured under a wide variety of daylight illumination conditions, and to extract their colors to determine color space regions that are representative of the objects' colors and their variations. The work begins by modeling color formation under daylight using the color formation equations and the semi-empirical model of Judd, MacAdam and Wyszecki (the CIE daylight model) for representing the typical spectral distributions of daylight. This results in color space regions that serve as prior information in the initial phase of learning, which consists in detecting small reliable clusters of pixels having the appropriate colors. These clusters are then expanded by a region growing technique using broader color space regions than those predicted by the model, so that objects can be detected in a way that accounts for color variations the model cannot capture due to its limitations. Validation on the detected objects is performed to filter out those that are not of interest and to eliminate unreliable pixel color values extracted from the remaining ones. Detection results using the color space regions determined from color values obtained by this procedure are discussed.
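The color formation equations referred to above are standard; in a trichromatic camera model the sensor responses are integrals of illuminant, body reflectance and sensor sensitivity. The following is a textbook formulation, not necessarily this paper's exact notation:

```latex
% Sensor response of channel k (k = R, G, B) for a surface with body
% reflectance S(\lambda) viewed under a daylight illuminant E(\lambda),
% where R_k(\lambda) is the spectral sensitivity of channel k:
\rho_k = \int_{\lambda_{\min}}^{\lambda_{\max}} E(\lambda)\, S(\lambda)\, R_k(\lambda)\, d\lambda
% The CIE daylight model of Judd, MacAdam and Wyszecki represents E(\lambda)
% as a mean daylight spectrum plus a weighted sum of the first two
% characteristic vectors:
E(\lambda) = E_0(\lambda) + M_1\, E_1(\lambda) + M_2\, E_2(\lambda)
```

Sweeping the weights $M_1, M_2$ over their admissible daylight range traces out the prior color space regions used in the initial learning phase.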

  2. System level traffic shaping in disk servers with heterogeneous protocols

    International Nuclear Information System (INIS)

    Cano, Eric; Kruse, Daniele Francesco

    2014-01-01

Disk access and tape migrations compete for network bandwidth in CASTOR's disk servers, over various protocols: RFIO, Xroot, root and GridFTP. As there are a limited number of tape drives, it is important to keep them busy all the time, at their nominal speed. With potentially hundreds of user read streams per server, the bandwidth for tape migrations has to be guaranteed at a controlled level, rather than the fair share the system gives by default. Xroot provides a prioritization mechanism, but using it implies moving exclusively to the Xroot protocol, which is not possible in the short to mid-term time frame, as users are using all protocols equally. The greatest commonality among these protocols is no more than the use of TCP/IP. We therefore investigated the Linux kernel traffic shaper to control TCP/IP bandwidth. The performance and limitations of the traffic shaper have been understood in a test environment. Notably, the negative impact of TCP offload engines on traffic shaping, and the limitations on the length of traffic shaping rules, were discovered and measured. A satisfactory working point has been found, and traffic shaping is now successfully deployed in the CASTOR production systems at CERN. This system-level approach could easily be transposed to other environments.
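The shaping itself is done with the Linux kernel's traffic control facilities in production; as a language-neutral illustration of the underlying mechanism, here is a token-bucket sketch (a generic rate-limiting scheme, not CERN's actual configuration):

```python
# Illustrative sketch (not CERN's code): the token-bucket policy that
# underlies rate limiters such as the Linux kernel traffic shaper.
# Traffic is admitted at an average `rate` (bytes/s), with short bursts
# of up to `burst` bytes allowed.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = float(burst)   # bucket starts full
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True              # send now
        return False                 # delay (queue) the packet

bucket = TokenBucket(rate=1000, burst=1500)
print(bucket.allow(1500, now=0.0))   # full bucket: burst allowed
print(bucket.allow(1500, now=0.5))   # only 500 tokens refilled: delayed
print(bucket.allow(1500, now=1.5))   # 1500 tokens refilled: allowed
```

Delayed packets are queued rather than dropped, which is what guarantees the tape-migration streams a controlled share of the link regardless of protocol.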

  3. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed within a factor of 35 of real hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between simulated and measured timing performance is within 10%, an accuracy that previously could only be attained by cycle-accurate models.

  4. Estimating yield gaps at the cropping system level.

    Science.gov (United States)

    Guilpart, Nicolas; Grassini, Patricio; Sadras, Victor O; Timsina, Jagadish; Cassman, Kenneth G

    2017-05-01

Yield gap analyses of individual crops have been used to estimate opportunities for increasing crop production at local to global scales, thus providing information crucial to food security. However, increases in crop production can also be achieved by improving cropping system yield through modification of the spatial and temporal arrangement of individual crops. In this paper we define the cropping system yield potential as the output from the combination of crops that gives the highest energy yield per unit of land and time, and the cropping system yield gap as the difference between the actual energy yield of an existing cropping system and the cropping system yield potential. We then provide a framework to identify alternative cropping systems which can be evaluated against current ones. A proof-of-concept is provided with irrigated rice-maize systems at four locations in Bangladesh that represent a range of climatic conditions in that country. The proposed framework identified (i) realistic alternative cropping systems at each location, and (ii) two locations where expected improvements in crop production from changes in cropping intensity (number of crops per year) were 43% to 64% higher than from improving the management of individual crops within the current cropping systems. The proposed framework provides a tool to help assess the food production capacity of new systems (e.g. with increased cropping intensity) arising from climate change, and to assess the resource requirements (water and N) and associated environmental footprint per unit of land and production of these new systems. By expanding yield gap analysis from individual crops to the cropping system level and applying it to new systems, this framework could also help bridge the gap between yield gap analysis and cropping/farming system design.
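The definitions above lend themselves to a direct computation. The following sketch, with invented energy yields rather than the paper's data, illustrates the cropping system yield gap for one site:

```python
# Hypothetical sketch of the paper's definitions: the cropping system yield
# potential is the highest energy yield among candidate crop combinations
# (per unit of land and time), and the cropping system yield gap is that
# potential minus the actual energy yield of the current system.
# All numbers below are invented for illustration.

# Annual energy yield (GJ/ha/yr) of candidate crop sequences at one site.
candidate_systems = {
    "rice-rice":        210.0,
    "rice-maize":       265.0,
    "rice-maize-maize": 310.0,   # higher cropping intensity (3 crops/yr)
}

actual_yield = 180.0  # observed on-farm energy yield of the current system

potential = max(candidate_systems.values())
system_yield_gap = potential - actual_yield
print(system_yield_gap)  # 130.0 GJ/ha/yr
```

Note that the gap combines two components the abstract distinguishes: the gap closable by better management of the current system, and the gap closable only by changing the crop combination itself.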

  5. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in embedded system design. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development, so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability of today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying

  6. A new physics-based method for detecting weak nuclear signals via spectral decomposition

    International Nuclear Information System (INIS)

    Chan, Kung-Sik; Li, Jinzheng; Eichinger, William; Bai, Erwei

    2012-01-01

We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclei. We propose a Poisson regression model for deducing which nuclei are present in the observed spectrum. Recognizing that a radiation source generally comprises only a few nuclear materials, the underlying Poisson model is sparse, i.e. most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations using a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNRs as low as −15 dB.
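To make the model concrete, the following is a toy sketch (not the authors' algorithm) of sparse, nonnegative Poisson regression fitted by a proximal-gradient iteration on an invented three-nuclide library:

```python
# Illustrative sketch (not the paper's method): counts y_i ~ Poisson((A x)_i),
# where columns of A are unit subspectra of candidate nuclides, fitted with an
# L1 penalty and a nonnegativity constraint that promote few active nuclides.
# The library A, the data y and all tuning constants are made up.

def fit_sparse_poisson(A, y, lam=0.5, lr=0.01, iters=5000):
    n, p = len(A), len(A[0])
    x = [0.1] * p
    for _ in range(iters):
        mu = [sum(A[i][j] * x[j] for j in range(p)) + 1e-12 for i in range(n)]
        for j in range(p):
            # Gradient of the Poisson negative log-likelihood in x_j.
            g = sum(A[i][j] * (1.0 - y[i] / mu[i]) for i in range(n))
            # Proximal step: L1 shrinkage plus projection onto x_j >= 0.
            x[j] = max(0.0, x[j] - lr * (g + lam))
    return x

# Library: each column is the unit subspectrum of one candidate nuclide.
A = [[5.0, 0.0, 1.0],
     [3.0, 1.0, 0.0],
     [0.0, 4.0, 1.0],
     [1.0, 0.0, 3.0]]
# Noise-free counts generated by nuclide 1 alone (true x = [2, 0, 0]).
y = [10.0, 6.0, 0.0, 2.0]

x_hat = fit_sparse_poisson(A, y)
print([round(v, 2) for v in x_hat])
```

The penalty shrinks the active coefficient slightly below its true value (the usual L1 bias) while driving the inactive nuclides to exactly zero, which is the sparsity behaviour the abstract describes.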

  7. Prediction of shallow landslide occurrence: Validation of a physically-based approach through a real case study.

    Science.gov (United States)

    Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele

    2016-11-01

In recent years, physically-based numerical models have frequently been used in the framework of early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. In this work we therefore describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. To test the reliability of this model, a back analysis of landslide events that occurred in the study area (SW of Messina, northeastern Sicily, Italy) on October 1st, 2009 was performed. The simulation results were compared with those obtained for the same event using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation over a 2-year period was performed for the same area, with the aim of evaluating the performance of SLIP as an early-warning tool. The results confirm the good predictive capability of the model, in terms of both spatial and temporal prediction of the instability phenomena. We therefore recommend an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, based on the use of SLIP calibrated through a specific multi-methodological approach. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Physically based principles of cell adhesion mechanosensitivity in tissues

    International Nuclear Information System (INIS)

    Ladoux, Benoit; Nicolas, Alice

    2012-01-01

    The minimal structural unit that defines living organisms is a single cell. By proliferating and mechanically interacting with each other, cells can build complex organization such as tissues that ultimately organize into even more complex multicellular living organisms, such as mammals, composed of billions of single cells interacting with each other. As opposed to passive materials, living cells actively respond to the mechanical perturbations occurring in their environment. Tissue cell adhesion to its surrounding extracellular matrix or to neighbors is an example of a biological process that adapts to physical cues. The adhesion of tissue cells to their surrounding medium induces the generation of intracellular contraction forces whose amplitude adapts to the mechanical properties of the environment. In turn, solicitation of adhering cells with physical forces, such as blood flow shearing the layer of endothelial cells in the lumen of arteries, reinforces cell adhesion and impacts cell contractility. In biological terms, the sensing of physical signals is transduced into biochemical signaling events that guide cellular responses such as cell differentiation, cell growth and cell death. Regarding the biological and developmental consequences of cell adaptation to mechanical perturbations, understanding mechanotransduction in tissue cell adhesion appears as an important step in numerous fields of biology, such as cancer, regenerative medicine or tissue bioengineering for instance. Physicists were first tempted to view cell adhesion as the wetting transition of a soft bag having a complex, adhesive interaction with the surface. But surprising responses of tissue cell adhesion to mechanical cues challenged this view. This, however, did not exclude that cell adhesion could be understood in physical terms. It meant that new models and descriptions had to be created specifically for these biological issues, and could not straightforwardly be adapted from dead matter

  9. Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters

    Science.gov (United States)

    2017-03-07

Final technical report (with SF 298) for Dr. Erin E. Hackett's ONR grant entitled 'Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters', period of performance Dec 2012 - Dec 2016. The research developed knowledge and capabilities in the use and development of inverse problem techniques to deduce atmospheric parameters.

  10. A physically based criterion for hydraulic hazard mapping

    Science.gov (United States)

    Milanesi, Luca; Pilotti, Marco; Petrucci, Olga

    2013-04-01

thresholds for debris flows, if the simplification of representing the presence of debris through an augmented density of the fluid continuum is accepted. This methodology, which fits most literature experimental datasets for both adults and children impacted by a flood, is then tested with historical data concerning flood events that actually occurred in the past. The data have been mined from a historical database containing approximately 11000 records on the effects of hydro-meteorological events that have occurred in Calabria (southern Italy) since the 19th century, selecting only the events where people were directly involved. These data come from different sources, such as newspapers, archives of national and regional agencies, scientific and technical reports, on-site survey reports, and information collected by interviewing both the people involved and local administrators. Because we are dealing with descriptive information on events that occurred in different historical periods and morpho-climatic sectors of the region, the quantities required to implement the model can be found in only a limited number of recent cases. In order to widen the data set that can be used to validate the proposed methodology, we explore some approaches to indirectly assess the parameters required to implement the model.

  11. Electronic system level design an open-source approach

    CERN Document Server

    Rigo, Sandro; Santos, Luiz

    2014-01-01

This book presents ESL design from the pragmatic perspective of a SystemC-based representation, showing how to build and how to use ESL languages, models and tools. It includes TLM 2.0 and step-by-step examples, and it also addresses power modeling.

  12. Soil erosion modelling: description and data requirements for the LISEM physically based erosion model

    NARCIS (Netherlands)

    Elsen, van den H.G.M.

    2002-01-01

Presentation of an EU-funded project, 'Himalayan Degradation': an interdisciplinary approach to analyse the dynamics of forest and soil degradation and to develop a sustainable agro-ecological strategy for fragile Himalayan watersheds.

  13. A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs

    Science.gov (United States)

    Li, Lei; Zhou, Wanting; Liu, Huihua

    2012-12-01

This paper presents a physics-based engineering approach to estimate the heavy-ion-induced upset cross section for 6T SRAM cells from layout and technology parameters. The approach calculates the effects of radiation with a junction photocurrent derived from device physics, and handles the problem using simple SPICE simulations. First, a standard SPICE program on a typical PC is used to predict the SPICE-simulated curve of the collected charge vs. its affected distance from the drain-body junction with the derived junction photocurrent. This curve is then used to calculate the heavy-ion-induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is more related to a "radius of influence" around a heavy ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The upset cross section calculated with this method is in good agreement with test results for 6T SRAM cells processed using 90 nm process technology.

  14. A Physics-Based Rock Friction Constitutive Law: Steady State Friction

    Science.gov (United States)

    Aharonov, Einat; Scholz, Christopher H.

    2018-02-01

Experiments measuring friction over a wide range of sliding velocities find that the value of the friction coefficient varies widely: friction is high and behaves according to the rate-and-state constitutive law during slow sliding, yet weakens markedly as the sliding velocity approaches seismic slip speeds. We introduce a physics-based theory to explain this behavior. Using conventional microphysics of creep, we calculate the velocity and temperature dependence of contact stresses during sliding, including the thermal effects of shear heating. Contacts are assumed to reach a coupled thermal and mechanical steady state, and friction is calculated for steady sliding. The theory provides good quantitative agreement with reported experimental results for quartz and granite friction over 11 orders of magnitude in velocity. The new model elucidates the physics of friction and connects friction laws to independently determined material parameters. It predicts four frictional regimes as a function of slip rate: at slow velocity, friction is either velocity strengthening or weakening, depending on material parameters, and follows the rate-and-state friction law; differences between surface and volume activation energies are the main control on velocity dependence. At intermediate velocity, for some material parameters, a distinct velocity-strengthening regime emerges. At fast sliding, shear heating produces thermal softening of friction, and at the fastest sliding, melting causes further weakening. This theory, with its four frictional regimes, fits previously published experimental results under low temperature and normal stress well.
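For reference, the rate-and-state law that the slow-sliding regime reduces to is conventionally written as follows (the standard Dieterich aging-law form, not necessarily this paper's notation):

```latex
% Rate-and-state friction with state variable \theta and aging-law evolution:
\mu = \mu_0 + a \ln\!\left(\frac{V}{V_0}\right)
            + b \ln\!\left(\frac{V_0\,\theta}{D_c}\right),
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
% At steady state d\theta/dt = 0, so \theta = D_c/V and
\mu_{ss} = \mu_0 + (a - b)\ln\!\left(\frac{V}{V_0}\right)
% hence a - b > 0 gives velocity strengthening, a - b < 0 velocity weakening.
```

The sign of $a-b$ is what the abstract refers to when it says the slow regime is either velocity strengthening or weakening depending on material parameters.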

  15. UNRES server for physics-based coarse-grained simulations and prediction of protein structure, dynamics and thermodynamics.

    Science.gov (United States)

    Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam

    2018-04-30

    A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, named the UNRES server, is presented. In contrast to most protein coarse-grained models, the UNRES force field, owing to its physics-based origin, can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes the protein sequence and, optionally, restraints from secondary-structure prediction or small-angle X-ray scattering data, together with the simulation type and parameters, which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links, can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectories/ensembles); all output files can also be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.

  16. Close-range laser scanning in forests: towards physically based semantics across scales.

    Science.gov (United States)

    Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E

    2018-04-06

    Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data rich in information waiting to be harvested. However, the transformation of such data into information is not as straightforward as for airborne and space-borne approaches, where empirical models are typically built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused, and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models in shaping the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory, potentially to re-design structural concepts and variables so that they profit most from the rich data that close-range laser scanning provides.

  17. A system-level mechanistic investigation of traditional Chinese ...

    African Journals Online (AJOL)

    Conclusion: Based on these in silico findings, the use of YD for treating respiratory diseases, inflammation and various infections, most probably via the suppression of inflammation, has been established. The approach adopted in this study can serve as a model methodology to develop an innovative TCM candidate drug at ...

  18. A Distributed Approach to System-Level Prognostics

    Science.gov (United States)

    2012-09-01

    the end of (useful) life (EOL) and/or the remaining useful life (RUL) of components, subsystems, or systems. The prognostics problem itself can be...system state estimate, computes EOL and/or RUL. In this paper, we focus on a model-based prognostics approach (Orchard & Vachtsevanos, 2009; Daigle...been focused on individual components, and determining their EOL and RUL, e.g., (Orchard & Vachtsevanos, 2009; Saha & Goebel, 2009; Daigle & Goebel

  19. Centering Pregnancy in Missouri: A System Level Analysis

    Directory of Open Access Journals (Sweden)

    Pamela K. Xaverius

    2014-01-01

    Full Text Available Background. Centering Pregnancy (CP is an effective method of delivering prenatal care, yet providers have been slow to adopt the CP model. Our main hypothesis is that a site’s adoption of CP is contingent upon knowledge of the CP model, characteristics of health care personnel, anticipated patient impact, and system readiness. Methods. Using a matched, pretest-posttest, observational design, 223 people completed pretest and posttest surveys. Our analysis included the effect of the seminar on the groups’ knowledge of CP essential elements, barriers to prenatal care, and perceived value of CP to the patients and to the system of care. Results. Before the CP Seminar only 34% of respondents were aware of the model, while knowledge increased significantly after the Seminar. The three greatest improvements were in understanding that the group is conducted in a circle, the health assessment occurs in the group space, and a facilitative leadership style is used. Child care, transportation, and language issues were the top three barriers. The greatest improvements reported for patients included improvements in timeliness, patient-centeredness and efficiency, although readiness for adoption was influenced by costs, resources, and expertise. Discussion. Readiness to adopt CP will require support for the start-up and sustainability of this model.

  20. Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions

    Science.gov (United States)

    Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph

    2017-04-01

    dynamics of infiltration affects water storage under vegetation and in bare soil. Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as of roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present with as much resolution as possible in order to provide the necessary realism to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach to a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation. Preliminary testing strongly suggests that the model is viable, has the potential of producing new information on the internal dynamics of the system, and allows many of the sources of complexity to be successfully aggregated. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.

  1. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    Science.gov (United States)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  2. System-level Reliability Assessment of Power Stage in Fuel Cell Application

    DEFF Research Database (Denmark)

    Zhou, Dao; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    High-efficiency, low-pollution fuel cell stacks are emerging as strong candidates for the power solution used in mobile base stations. In backup power applications, availability and reliability hold the highest priority. This paper considers reliability metrics from the component level to the system level for the power stage used in a fuel cell application. It starts with an estimation of the annual accumulated damage for the key power electronic components according to the real mission profile of the fuel cell system. Then, considering the parameter variations in both … reliability. In a case study of a 5 kW fuel cell power stage, the parameter variations of the lifetime model prove that the exponential factor of the junction temperature fluctuation is the most sensitive parameter. Besides, if a 5-out-of-6 redundancy is used, it is concluded that both the B10 and the B1 system …
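The 5-out-of-6 redundancy argument can be illustrated with a standard k-out-of-n reliability calculation. The sketch below assumes Weibull unit lifetimes with made-up parameters (the study instead derives lifetimes from mission-profile damage accumulation) and compares the B10 lifetime of a non-redundant 6-out-of-6 stage with a 5-out-of-6 redundant one:

```python
import math

def k_out_of_n_reliability(r_unit, k, n):
    """Reliability of a k-out-of-n redundant stage given one unit's reliability."""
    return sum(math.comb(n, i) * r_unit**i * (1 - r_unit)**(n - i)
               for i in range(k, n + 1))

def bx_lifetime(x_percent, k, n, eta=10.0, beta=2.0, t_max=30.0, steps=30000):
    """Bx lifetime: first time at which system unreliability reaches x%.

    Unit lifetimes are assumed Weibull(eta, beta) (illustrative values,
    in years), scanned on a fine time grid.
    """
    target = 1.0 - x_percent / 100.0
    t = t_max
    for s in range(1, steps + 1):
        t = t_max * s / steps
        r_unit = math.exp(-((t / eta) ** beta))
        if k_out_of_n_reliability(r_unit, k, n) < target:
            return t
    return t

b10_series = bx_lifetime(10, k=6, n=6)      # all 6 modules required
b10_redundant = bx_lifetime(10, k=5, n=6)   # one redundant module
```

With these (hypothetical) unit parameters, adding one redundant module roughly doubles the B10 system lifetime, which is the qualitative effect the paper quantifies.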

  3. System-Level Design Considerations for Carbon Nanotube Electromechanical Resonators

    Directory of Open Access Journals (Sweden)

    Christian Kauth

    2013-01-01

    Full Text Available Despite an evermore complete plethora of complex domain-specific semiempirical models, no succinct recipe for large-scale carbon nanotube electromechanical systems design has been formulated. To combine the benefits of these highly sensitive miniaturized mechanical sensors with the vast functionalities available in electronics, we identify a reduced key parameter set of carbon nanotube properties, nanoelectromechanical system design, and operation that steers the sensor’s performance towards system applications, based on open- and closed-loop topologies. Suspended single-walled carbon nanotubes are reviewed in terms of their electromechanical properties with the objective of evaluating orders of magnitude of the electrical actuation and detection mechanisms. Open-loop time-averaging and 1ω or 2ω mixing methods are completed by a new 4ω actuation and detection technique. A discussion on their extension to closed-loop topologies and system applications concludes the analysis, covering signal-to-noise ratio, and the capability to spectrally isolate the motional information from parasitical feedthrough by contemporary electronic read-out techniques.

  4. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators is commonly led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the simulator's outcomes; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears introducing a small degree of stochasticity on the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such synchronization disappears introducing a small degree of stochasticity on the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
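The fault-interaction component is the standard static Coulomb failure function; a minimal sketch with illustrative stress values (MPa), not values from the simulator:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure function change on a receiver fault,

        dCFF = d_tau + mu' * d_sigma_n

    with d_tau the shear stress change in the slip direction and
    d_sigma_n the normal stress change (positive = unclamping).
    A positive dCFF brings the receiver fault closer to failure.
    """
    return d_shear + mu_eff * d_normal

# A source event loads one neighbouring fault and casts a stress shadow on another
# (hypothetical stress changes, in MPa).
promoted = coulomb_stress_change(0.10, 0.05)    # positive: failure advanced
shadowed = coulomb_stress_change(-0.08, -0.02)  # negative: stress shadow
```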

  5. Physics based simulation of seismicity induced in the vicinity of a high-pressure fluid injection

    Science.gov (United States)

    McCloskey, J.; NicBhloscaidh, M.; Murphy, S.; O'Brien, G. S.; Bean, C. J.

    2013-12-01

    High-pressure fluid injection into the subsurface is known, in some cases, to induce earthquakes in the surrounding volume. The increasing importance of 'fracking' as a potential source of hydrocarbons has made the seismic hazard from this effect an important issue in the adjudication of planning applications, and it is likely that poor understanding of the process will be used as justification for refusal of planning permission in Ireland and the UK. Here we attempt to understand some of the physical controls on the size and frequency of induced earthquakes using a physics-based simulation of the process, and examine the resulting earthquake catalogues. The driver for seismicity in our simulations is identical to that used in the paper by Murphy et al. in this session. Fluid injection is simulated using pore fluid movement throughout a permeable layer from a high-pressure point source using a lattice Boltzmann scheme. Diffusivities and frictional parameters can be defined independently at individual nodes/cells, allowing us to reproduce 3-D geological structures. Active faults in the model follow a fractal size distribution and exhibit characteristic event size, resulting in a power-law frequency-size distribution. The fluid injection is not hydraulically connected to the faults (i.e., fluid does not come into physical contact with them); however, stress perturbations from the injection drive the seismicity model. The duration and pressure-time function of the fluid injection can be adjusted to model any given injection scenario, and the rate of induced seismicity is controlled by the local structures and ambient stress field as well as by the stress perturbations resulting from the fluid injection. Results from the rate and state fault models of Murphy et al. are incorporated to include the effect of fault strengthening in seismically quiet areas. Initial results show similarities with observed induced seismic catalogues.
Seismicity is only induced where the active faults have not been
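The fractal fault population with characteristic event sizes can be sketched by inverse-transform sampling of a truncated power law plus a length-to-magnitude scaling (the exponent and Wells-Coppersmith-style coefficients below are illustrative, not the simulator's actual values):

```python
import math
import random

def sample_fault_lengths(n, l_min=1.0, l_max=50.0, alpha=2.0, seed=7):
    """Draw fault lengths (km) from a truncated power-law (fractal)
    distribution p(L) ~ L^-alpha on [l_min, l_max] by inverse-transform
    sampling (requires alpha != 1)."""
    rng = random.Random(seed)
    a = 1.0 - alpha
    c_min, c_max = l_min ** a, l_max ** a
    return [(c_min + rng.random() * (c_max - c_min)) ** (1.0 / a)
            for _ in range(n)]

def characteristic_magnitude(length_km):
    """Illustrative length-magnitude scaling, M = c0 + c1 * log10(L)."""
    return 4.38 + 1.49 * math.log10(length_km)

lengths = sample_fault_lengths(10000)
mags = [characteristic_magnitude(L) for L in lengths]
# A fractal size distribution yields many small characteristic events, few large ones.
small = sum(m < 5.5 for m in mags)
large = sum(m >= 6.5 for m in mags)
```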

  6. Mapping how local perturbations influence systems-level brain dynamics.

    Science.gov (United States)

    Gollo, Leonardo L; Roberts, James A; Cocchi, Luca

    2017-10-15

    The human brain exhibits a distinct spatiotemporal organization that supports brain function and can be manipulated via local brain stimulation. Such perturbations to local cortical dynamics are globally integrated by distinct neural systems. However, it remains unclear how local changes in neural activity affect large-scale system dynamics. Here, we briefly review empirical and computational studies addressing how localized perturbations affect brain activity. We then systematically analyze a model of large-scale brain dynamics, assessing how localized changes in brain activity at the different sites affect whole-brain dynamics. We find that local stimulation induces changes in brain activity that can be summarized by relatively smooth tuning curves, which relate a region's effectiveness as a stimulation site to its position within the cortical hierarchy. Our results also support the notion that brain hubs, operating in a slower regime, are more resilient to focal perturbations and critically contribute to maintain stability in global brain dynamics. In contrast, perturbations of peripheral regions, characterized by faster activity, have greater impact on functional connectivity. As a parallel with this region-level result, we also find that peripheral systems such as the visual and sensorimotor networks were more affected by local perturbations than high-level systems such as the cingulo-opercular network. Our findings highlight the importance of a periphery-to-core hierarchy to determine the effect of local stimulation on the brain network. This study also provides novel resources to orient empirical work aiming at manipulating functional connectivity using non-invasive brain stimulation. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Is health workforce planning recognising the dynamic interplay between health literacy at an individual, organisation and system level?

    Science.gov (United States)

    Naccarella, Lucio; Wraight, Brenda; Gorman, Des

    2016-02-01

    The growing demands on the health system to adapt to constant change have led to investment in health workforce planning agencies and approaches. Health workforce planning approaches focusing on identifying, predicting and modelling workforce supply and demand are criticised as being simplistic and not contributing to system-level resiliency. Alternative evidence- and needs-based health workforce planning approaches are being suggested. However, to contribute to system-level resiliency, workforce planning approaches need also to adopt system-based approaches. The increased complexity and fragmentation of the healthcare system, especially for patients with complex and chronic conditions, has also led to a focus on health literacy not simply as an individual trait, but as a dynamic product of the interaction between individual (patient, workforce)-, organisational- and system-level health literacy. Although it is absolutely essential that patients have a level of health literacy that enables them to navigate and make decisions, the health workforce, organisations and indeed the system as a whole also need to be health literate. Herein we explore whether health workforce planning is recognising the dynamic interplay between health literacy at an individual, organisation and system level, and the potential for strengthening resiliency across all those levels.

  8. Rockfall hazard assessment integrating probabilistic physically based rockfall source detection (Norddal municipality, Norway).

    Science.gov (United States)

    Yugsi Molina, F. X.; Oppikofer, T.; Fischer, L.; Hermanns, R. L.; Taurisano, A.

    2012-04-01

    Traditional techniques to assess rockfall hazard are partially based on probabilistic analysis. Stochastic methods have been used for run-out analysis of rock blocks to estimate the trajectories that a detached block will follow during its fall until it stops due to kinetic energy loss. However, the selection of rockfall source areas is usually defined either by multivariate analysis or by field observations; in either case, a physically based approach is not used for source area detection. We present an example of rockfall hazard assessment that integrates a probabilistic rockfall run-out analysis with a stochastic assessment of the rockfall source areas using kinematic stability analysis in a GIS environment. The method has been tested for a steep rock wall more than 200 m high, located in the municipality of Norddal (Møre og Romsdal county, Norway), where a large number of people are exposed to snow avalanches, rockfalls, or debris flows. The area was selected following the recently published hazard mapping plan of Norway. The cliff is formed by medium- to coarse-grained quartz-dioritic to granitic gneisses of Proterozoic age. Scree deposits, the product of recent rockfall activity, are found at the bottom of the rock wall. Large blocks can be found several tens of meters away from the cliff in Sylte, the main locality in the Norddal municipality. Structural characterization of the rock wall was done using terrestrial laser scanning (TLS) point clouds in the software Coltop3D (www.terranum.ch), and results were validated with field data. Orientation data sets from the structural characterization were analyzed separately to assess best-fit probability density functions (PDFs) for both the dip angle and the dip direction angle of each discontinuity set. A GIS-based stochastic kinematic analysis was then carried out using the discontinuity set orientations and the friction angle as random variables.
An airborne laser scanning digital elevation model (ALS-DEM) with 1 m
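A GIS-free sketch of such a stochastic kinematic analysis for a single slope cell: draw dip, dip direction and friction angle from fitted PDFs (normal distributions assumed here; all parameter values hypothetical) and count how often planar sliding is kinematically feasible:

```python
import random

def planar_sliding_probability(slope_dip, slope_aspect, dip_mean, dip_sd,
                               dipdir_mean, dipdir_sd, phi_mean, phi_sd,
                               n=20000, seed=42):
    """Monte Carlo kinematic test for planar sliding (Markland-style).

    Discontinuity dip, dip direction and friction angle are drawn from
    normal PDFs (the study fits best-fit PDFs per joint set); sliding is
    kinematically feasible when the joint daylights (phi < dip < slope
    dip) and its dip direction lies within +/-20 deg of the slope aspect.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        dip = rng.gauss(dip_mean, dip_sd)
        dipdir = rng.gauss(dipdir_mean, dipdir_sd)
        phi = rng.gauss(phi_mean, phi_sd)
        daylights = phi < dip < slope_dip
        aligned = abs((dipdir - slope_aspect + 180) % 360 - 180) <= 20
        hits += daylights and aligned
    return hits / n

# Hypothetical joint set on a 70-degree rock wall facing east (090 degrees).
p_fail = planar_sliding_probability(70, 90, dip_mean=55, dip_sd=8,
                                    dipdir_mean=95, dipdir_sd=12,
                                    phi_mean=30, phi_sd=3)
```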

  9. Variability in soil-water retention properties and implications for physics-based simulation of landslide early warning criteria

    Science.gov (United States)

    Thomas, Matthew A.; Mirus, Benjamin B.; Collins, Brian D.; Lu, Ning; Godt, Jonathan W.

    2018-01-01

    Rainfall-induced shallow landsliding is a persistent hazard to human life and property. Despite the observed connection between infiltration through the unsaturated zone and shallow landslide initiation, there is considerable uncertainty in how estimates of unsaturated soil-water retention properties affect slope stability assessment. This source of uncertainty is critical to evaluating the utility of physics-based hydrologic modeling as a tool for landslide early warning. We employ a numerical model of variably saturated groundwater flow parameterized with an ensemble of texture-, laboratory-, and field-based estimates of soil-water retention properties for an extensively monitored landslide-prone site in the San Francisco Bay Area, CA, USA. Simulations of soil-water content, pore-water pressure, and the resultant factor of safety show considerable variability across and within these different parameter estimation techniques. In particular, we demonstrate that with the same permeability structure imposed across all simulations, the variability in soil-water retention properties strongly influences predictions of positive pore-water pressure coincident with widespread shallow landsliding. We also find that the ensemble of soil-water retention properties imposes an order-of-magnitude and nearly two-fold variability in seasonal and event-scale landslide susceptibility, respectively. Despite the reduced factor of safety uncertainty during wet conditions, parameters that control the dry end of the soil-water retention function markedly impact the ability of a hydrologic model to capture soil-water content dynamics observed in the field. These results suggest that variability in soil-water retention properties should be considered for objective physics-based simulation of landslide early warning criteria.
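The propagation of retention-parameter uncertainty into stability can be illustrated with a van Genuchten retention function feeding an infinite-slope factor of safety (a simplified suction-stress formulation; all parameter values are illustrative, not the study's calibrated ones):

```python
import math

def van_genuchten_saturation(psi_m, alpha=0.5, n=1.8):
    """Effective saturation from the van Genuchten retention model,
    Se = [1 + (alpha*psi)^n]^-(1 - 1/n), with psi the matric suction
    head (m), alpha (1/m) and n the retention parameters whose
    uncertainty the study propagates to slope stability."""
    if psi_m <= 0:
        return 1.0
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * psi_m) ** n) ** (-m)

def infinite_slope_fos(slope_deg, depth_m, psi_m, c_kpa=2.0, phi_deg=35.0,
                       gamma=19.0, gamma_w=9.81):
    """Infinite-slope factor of safety with a suction-stress term
    (a simplified variant of a suction-stress-based formulation)."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    se = van_genuchten_saturation(psi_m)
    suction_stress = -se * gamma_w * psi_m          # kPa (negative = stabilizing)
    sigma_n = gamma * depth_m * math.cos(beta) ** 2  # total normal stress, kPa
    tau = gamma * depth_m * math.sin(beta) * math.cos(beta)
    return (c_kpa + (sigma_n - suction_stress) * math.tan(phi)) / tau

fos_wet = infinite_slope_fos(40, 1.5, psi_m=0.05)  # near-saturated: FoS near 1
fos_dry = infinite_slope_fos(40, 1.5, psi_m=2.0)   # dry-season suction: stable
```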

  10. A new physically-based quantification of marine isoprene and primary organic aerosol emissions

    Directory of Open Access Journals (Sweden)

    N. Meskhidze

    2009-07-01

    Full Text Available The global marine sources of organic carbon (OC are estimated here using a physically-based parameterization for the emission of marine isoprene and primary organic matter. The marine isoprene emission model incorporates new physical parameters such as light sensitivity of phytoplankton isoprene production and dynamic euphotic depth to simulate hourly marine isoprene emissions totaling 0.92 Tg C yr⁻¹. Sensitivity studies using different schemes for the euphotic zone depth and ocean phytoplankton speciation produce the upper and the lower range of marine-isoprene emissions of 0.31 to 1.09 Tg C yr⁻¹, respectively. Established relationships between sea spray fractionation of water-insoluble organic carbon (WIOC and chlorophyll-a concentration are used to estimate the total primary sources of marine sub- and super-micron OC of 2.9 and 19.4 Tg C yr⁻¹, respectively. The consistent spatial and temporal resolution of the two emission types allow us, for the first time, to explore the relative contributions of sub- and super-micron organic matter and marine isoprene-derived secondary organic aerosol (SOA to the total OC fraction of marine aerosol. Using a fixed 3% mass yield for the conversion of isoprene to SOA, our emission simulations show minor (<0.2% contribution of marine isoprene to the total marine source of OC on a global scale. However, our model calculations also indicate that over the tropical oceanic regions (30° S to 30° N, marine isoprene SOA may contribute over 30% of the total monthly-averaged sub-micron OC fraction of marine aerosol. The estimated contribution of marine isoprene SOA to hourly-averaged sub-micron marine OC emission is even higher, approaching 50% over the vast regions of the oceans during the midday hours when isoprene emissions are highest. As it is widely believed that sub-micron OC has the potential to influence the cloud droplet activation of marine aerosols, our
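The bookkeeping behind the quoted global contribution is simple: apply the fixed 3% mass yield to the isoprene flux and compare with the primary OC sources. Using the annual totals stated above:

```python
def isoprene_soa_fraction(isoprene_tgc, primary_oc_tgc, soa_mass_yield=0.03):
    """Fraction of a marine OC budget contributed by isoprene-derived SOA,
    using the paper's fixed 3% mass yield for isoprene -> SOA."""
    soa = soa_mass_yield * isoprene_tgc
    return soa / (soa + primary_oc_tgc)

# Annual totals from the study: 0.92 Tg C/yr isoprene, 2.9 (sub-micron)
# and 19.4 (super-micron) Tg C/yr primary OC.
submicron_fraction = isoprene_soa_fraction(0.92, 2.9)
total_fraction = isoprene_soa_fraction(0.92, 2.9 + 19.4)
```

The global-annual `total_fraction` comes out near 0.1%, consistent with the paper's "<0.2%" figure; the much larger regional and hourly sub-micron contributions arise from the spatial and temporal variability that this global average hides.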

  11. DAEDALUS: System-Level Design Methodology for Streaming Multiprocessor Embedded Systems on Chips

    NARCIS (Netherlands)

    Stefanov, T.; Pimentel, A.; Nikolov, H.; Ha, S.; Teich, J.

    2017-01-01

    The complexity of modern embedded systems, which are increasingly based on heterogeneous multiprocessor system-on-chip (MPSoC) architectures, has led to the emergence of system-level design. To cope with this design complexity, system-level design aims at raising the abstraction level of the design

  12. NASA: A generic infrastructure for system-level MP-SoC design space exploration

    NARCIS (Netherlands)

    Jia, Z.J.; Pimentel, A.D.; Thompson, M.; Bautista, T.; Núñez, A.

    2010-01-01

    System-level simulation and design space exploration (DSE) are key ingredients for the design of multiprocessor system-on-chip (MP-SoC) based embedded systems. The efforts in this area, however, typically use ad-hoc software infrastructures to facilitate and support the system-level DSE experiments.

  13. Exploiting Domain Knowledge in System-level MPSoC Design Space Exploration

    NARCIS (Netherlands)

    Thompson, M.; Pimentel, A.D.

    2013-01-01

    System-level design space exploration (DSE), which is performed early in the design process, is of eminent importance to the design of complex multi-processor embedded multimedia systems. During system-level DSE, system parameters like, e.g., the number and type of processors, and the mapping of

  14. Homelessness Outcome Reporting Normative Framework: Systems-Level Evaluation of Progress in Ending Homelessness

    Science.gov (United States)

    Austen, Tyrone; Pauly, Bernie

    2012-01-01

    Homelessness is a serious and growing issue. Evaluations of systemic-level changes are needed to determine progress in reducing or ending homelessness. The report card methodology is one means of systems-level assessment. Rather than solely establishing an enumeration, homelessness report cards can capture pertinent information about structural…

  15. Design space pruning through hybrid analysis in system-level design space exploration

    NARCIS (Netherlands)

    Piscitelli, R.; Pimentel, A.D.

    2012-01-01

    System-level design space exploration (DSE), which is performed early in the design process, is of eminent importance to the design of complex multi-processor embedded system archi- tectures. During system-level DSE, system parameters like, e.g., the number and type of processors, the type and size

  16. Interleaving methods for hybrid system-level MPSoC design space exploration

    NARCIS (Netherlands)

    Piscitelli, R.; Pimentel, A.D.; McAllister, J.; Bhattacharyya, S.

    2012-01-01

    System-level design space exploration (DSE), which is performed early in the design process, is of eminent importance to the design of complex multi-processor embedded system architectures. During system-level DSE, system parameters like, e.g., the number and type of processors, the type and size of

  17. Pruning techniques for multi-objective system-level design space exploration

    NARCIS (Netherlands)

    Piscitelli, R.

    2014-01-01

    System-level design space exploration (DSE), which is performed early in the design process, is of eminent importance to the design of complex multi-processor embedded system architectures. During system-level DSE, system parameters like, e.g., the number and type of processors, the type and size of

  18. Graphical interface for the physics-based generation of inputs to 3D MEEC SGEMP and SREMP simulations

    International Nuclear Information System (INIS)

    Bland, M; Walters, D; Wondra, J

    1999-01-01

    A graphical user interface (GUI) is under development for the MEEC family of SGEMP and SREMP simulation codes [1,2]. These codes are "workhorse" legacy codes that have been in use for nearly two decades, with modifications and enhanced physics models added throughout the years. The MEEC codes are currently being evaluated for use by the DOE in the Dual Revalidation Program and experiments at NIF. The new GUI makes the codes more accessible and less prone to input errors by automatically generating the parameters and grids that previously had to be designed "by hand". Physics-based algorithms define the simulation volume with expanding meshes. Users are able to specify objects, materials, and emission surfaces through dialogs and input boxes. 3D and orthographic views are available to view objects in the volume. Zone slice views are available for stepping through the overlay of objects on the mesh in planes aligned with the primary axes

  19. Physics-based and statistical earthquake forecasting in a continental rift zone: the case study of Corinth Gulf (Greece)

    Science.gov (United States)

    Segou, Margarita

    2016-01-01

    I perform a retrospective forecast experiment in the most rapidly extending continental rift worldwide, the western Corinth Gulf (wCG, Greece), aiming to predict shallow seismicity (depth statistics, four physics-based (CRS) models, combining static stress change estimations and the rate-and-state laboratory law, and one hybrid model. For the latter models, I incorporate the stress changes imparted by 31 earthquakes with magnitude M ≥ 4.5 in the extended area of wCG. Special attention is given to the 3-D representation of active faults, acting as potential receiver planes for the estimation of static stress changes. I use reference seismicity between 1990 and 1995, corresponding to the learning phase of the physics-based models, and I evaluate the forecasts for six months following the 1995 M = 6.4 Aigio earthquake using log-likelihood performance metrics. For the ETAS realizations, I use seismic events with magnitude M ≥ 2.5 within daily update intervals to enhance their predictive power. For assessing the role of background seismicity, I implement a stochastic reconstruction (aka declustering) aiming to answer whether M > 4.5 earthquakes correspond to spontaneous events and identify, if possible, different triggering characteristics between aftershock sequences and swarm-type seismicity periods. I find that: (1) ETAS models outperform CRS models in most time intervals, achieving a very low rejection ratio RN = 6 per cent when I test their efficiency to forecast the total number of events inside the study area; (2) the best rejection ratio for CRS models reaches RN = 17 per cent when I use varying target depths and receiver plane geometry; (3) 75 per cent of the 1995 Aigio aftershocks that occurred within the first month can be explained by static stress changes; (4) highly variable performance of both statistical and physical models is suggested by large confidence intervals of information gain per earthquake; and (5) generic ETAS models can adequately
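The ETAS benchmark rests on a conditional intensity of the familiar epidemic-type form; the sketch below uses illustrative parameter values, not those fitted to the wCG catalogue:

```python
import math

def etas_rate(t, events, mu=0.05, k=0.02, alpha=1.2, c=0.01, p=1.1, m0=2.5):
    """Conditional intensity of a temporal ETAS model,

        lambda(t) = mu + sum_i K * exp(alpha*(M_i - M0)) / (t - t_i + c)^p

    summed over past events (t_i, M_i), with mu the background rate and
    M0 the catalogue completeness magnitude. Parameter values here are
    illustrative.
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += k * math.exp(alpha * (m_i - m0)) / ((t - t_i + c) ** p)
    return rate

# Daily-updated catalogue of M >= 2.5 events: (time in days, magnitude).
catalog = [(0.0, 6.4), (0.5, 4.1), (1.0, 3.2)]
rate_day2 = etas_rate(2.0, catalog)      # elevated by the M6.4 mainshock
rate_background = etas_rate(2.0, [])     # no triggering history -> mu
```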

  20. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems

    Science.gov (United States)

    Ghaffarizadeh, Ahmadreza; Mumenthaler, Shannon M.

    2018-01-01

    Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10⁵–10⁶ cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also

  1. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.

    Science.gov (United States)

    Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul

    2018-02-01

    Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell-physics-based multicellular simulator-is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10⁵–10⁶ cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant
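The core loop of an agent-based cell simulator like the one described can be illustrated with a deliberately minimal sketch (growth, volume-triggered division, stochastic death). This is not PhysiCell's API, just the general pattern; all parameter values are illustrative.

```python
import random

class Cell:
    def __init__(self, volume=1.0):
        self.volume = volume

def step(cells, dt=0.1, growth_rate=0.05, division_volume=2.0, death_prob=0.0):
    """One explicit time step: grow every cell, divide cells above a
    volume threshold, and stochastically remove dying cells."""
    survivors = []
    for cell in cells:
        if random.random() < death_prob:
            continue  # cell dies and is removed from the population
        cell.volume *= 1.0 + growth_rate * dt  # exponential volume growth
        if cell.volume >= division_volume:
            cell.volume /= 2.0
            survivors.append(Cell(cell.volume))  # daughter cell
        survivors.append(cell)
    return survivors

population = [Cell()]
for _ in range(200):
    population = step(population)
```

A real simulator couples this loop to a diffusion solver for substrates and to mechanical interactions between neighbors; here those are omitted for brevity.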

  2. Knowledge-Oriented Physics-Based Motion Planning for Grasping Under Uncertainty

    OpenAIRE

    Ud Din, Muhayy; Akbari, Aliakbar; Rosell Gratacòs, Jan

    2017-01-01

    Grasping an object in unstructured and uncertain environments is a challenging task, particularly when a collision-free trajectory does not exist. High-level knowledge and reasoning processes, as well as allowing interaction between objects, can enhance the planning efficiency in such environments. In this direction, this study proposes a knowledge-oriented physics-based motion planning approach for a hand-arm system that uses high-level knowledge-based reasoning to partition the wor...

  3. A Platform-Based Methodology for System-Level Mixed-Signal Design

    Directory of Open Access Journals (Sweden)

    Alberto Sangiovanni-Vincentelli

    2010-01-01

    The complexity of today's embedded electronic systems, as well as their demanding performance and reliability requirements, is such that their design can no longer be tackled with ad hoc techniques while still meeting tight time-to-market constraints. In this paper, we present a system-level design approach for electronic circuits, utilizing the platform-based design (PBD) paradigm as the natural framework for mixed-domain design formalization. In PBD, a meet-in-the-middle approach allows systematic exploration of the design space through a series of top-down mappings of system constraints onto component feasibility models in a platform library, which is based on bottom-up characterizations. In this framework, new designs can be assembled from the precharacterized library components, giving the highest priority to design reuse, correct assembly, and efficient design flow from specifications to implementation. We apply concepts from design centering to enforce robustness to modeling errors as well as process, voltage, and temperature variations, which currently plague embedded system design in deep-submicron technologies. The effectiveness of our methodology is finally shown on the design of a pipeline A/D converter and two receiver front-ends for UMTS and UWB communications.

  4. Simbios: an NIH national center for physics-based simulation of biological structures.

    Science.gov (United States)

    Delp, Scott L; Ku, Joy P; Pande, Vijay S; Sherman, Michael A; Altman, Russ B

    2012-01-01

    Physics-based simulation provides a powerful framework for understanding biological form and function. Simulations can be used by biologists to study macromolecular assemblies and by clinicians to design treatments for diseases. Simulations help biomedical researchers understand the physical constraints on biological systems as they engineer novel drugs, synthetic tissues, medical devices, and surgical interventions. Although individual biomedical investigators make outstanding contributions to physics-based simulation, the field has been fragmented. Applications are typically limited to a single physical scale, and individual investigators usually must create their own software. These conditions created a major barrier to advancing simulation capabilities. In 2004, we established a National Center for Physics-Based Simulation of Biological Structures (Simbios) to help integrate the field and accelerate biomedical research. In 6 years, Simbios has become a vibrant national center, with collaborators in 16 states and eight countries. Simbios focuses on problems at both the molecular scale and the organismal level, with a long-term goal of uniting these in accurate multiscale simulations.

  5. Gypsies in the palace: Experimentalist's view on the use of 3-D physics-based simulation of hillslope hydrological response

    Science.gov (United States)

    James, A.L.; McDonnell, Jeffery J.; Tromp-Van Meerveld, I.; Peters, N.E.

    2010-01-01

    As a fundamental unit of the landscape, hillslopes are studied for their retention and release of water and nutrients across a wide range of ecosystems. The understanding of these near-surface processes is relevant to issues of runoff generation, groundwater-surface water interactions, catchment export of nutrients, dissolved organic carbon, contaminants (e.g. mercury) and ultimately surface water health. We develop a 3-D physics-based representation of the Panola Mountain Research Watershed experimental hillslope using the TOUGH2 sub-surface flow and transport simulator. A recent investigation of sub-surface flow within this experimental hillslope has generated important knowledge of threshold rainfall-runoff response and its relation to patterns of transient water table development. This work has identified components of the 3-D sub-surface, such as bedrock topography, that contribute to changing connectivity in saturated zones and the generation of sub-surface stormflow. Here, we test the ability of a 3-D hillslope model (both calibrated and uncalibrated) to simulate forested hillslope rainfall-runoff response and internal transient sub-surface stormflow dynamics. We also provide a transparent illustration of physics-based model development, issues of parameterization, examples of model rejection and usefulness of data types (e.g. runoff, mean soil moisture and transient water table depth) to the model enterprise. Our simulations show the inability of an uncalibrated model based on laboratory and field characterization of soil properties and topography to successfully simulate the integrated hydrological response or the distributed water table within the soil profile. Although not an uncommon result, the failure of the field-based characterized model to represent system behaviour is an important challenge that continues to vex scientists at many scales. We focus our attention particularly on examining the influence of bedrock permeability, soil anisotropy and

  6. Bar and channel evolution in meandering and braiding rivers using physics-based modeling

    NARCIS (Netherlands)

    Schuurman, F.

    2015-01-01

    Rivers are among the most dynamic earth surface systems. Some rivers meander, forming bends that migrate, reshape and have inner-bend bars. Other rivers form a complicated braided pattern of branches, islands and mid-channel bars. Thorough understanding of their morphodynamics is important for

  7. Adaptive Modeling of Details for Physically-Based Sound Synthesis and Propagation

    Science.gov (United States)

    2015-03-21


  8. Physics-Based Modeling and Assessment of Mobile Landing Platform System Design

    Science.gov (United States)

    2008-09-01

    These technologies are aimed at addressing the onload, offload, and material management aspects of the Sea... replace the current system of elevators, conveyors, dumbwaiters, chain falls and other handling equipment [4].

  9. A High Performance Computing Framework for Physics-based Modeling and Simulation of Military Ground Vehicles

    Science.gov (United States)

    2011-03-25

    The co-processing idea is the enabler of the heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale...

  10. Physics Based Electrolytic Capacitor Degradation Models for Prognostic Studies under Thermal Overstress

    Data.gov (United States)

    National Aeronautics and Space Administration — Electrolytic capacitors are used in several applications ranging from power supplies on safety critical avionics equipment to power drivers for electro-mechanical...

  11. Physics-based models for measurement correlations: application to an inverse Sturm–Liouville problem

    International Nuclear Information System (INIS)

    Bal, Guillaume; Ren Kui

    2009-01-01

    In many inverse problems, the measurement operator, which maps objects of interest to available measurements, is a smoothing (regularizing) operator. Its inverse is therefore unbounded and, as a consequence, only the low-frequency component of the object of interest is accessible from inevitably noisy measurements. In many inverse problems, however, the neglected high-frequency component may significantly affect the measured data. Using simple scaling arguments, we characterize the influence of the high-frequency component. We then consider situations where the correlation function of such an influence may be estimated by asymptotic expansions, for instance as a random corrector in homogenization theory. This allows us to consistently eliminate the high-frequency component and derive a closed-form, more accurate, inverse problem for the low-frequency component of the object of interest. We present the asymptotic expression of the correlation matrix of the eigenvalues in a Sturm–Liouville problem with unknown potential. We propose an iterative algorithm for the reconstruction of the potential from knowledge of the eigenvalues and show that using the approximate correlation matrix significantly improves the reconstructions.
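As a concrete reminder of the forward problem behind this inverse Sturm–Liouville setting, the eigenvalues of -u'' + q(x)u = λu on (0, 1) with Dirichlet boundary conditions can be approximated by standard finite differences. The discretization choice here is ours, not the paper's.

```python
import numpy as np

def sl_eigenvalues(q, n=200, num=5):
    """Lowest eigenvalues of -u'' + q(x) u = lambda * u on (0, 1) with
    Dirichlet boundary conditions, via second-order finite differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)         # interior grid points
    main = 2.0 / h**2 + q(x)               # diagonal: -u'' stencil + potential
    off = -np.ones(n - 1) / h**2           # off-diagonals of the stencil
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(A))[:num]

# sanity check: for q = 0 the exact eigenvalues are (k * pi)**2
evals = sl_eigenvalues(lambda x: np.zeros_like(x), num=3)
```

An inversion algorithm of the kind described would iterate: guess q, solve this forward problem, compare the computed eigenvalues with the measured ones, and update q.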

  12. Segmentation of vessels cluttered with cells using a physics based model.

    Science.gov (United States)

    Schmugge, Stephen J; Keller, Steve; Nguyen, Nhat; Souvenir, Richard; Huynh, Toan; Clemens, Mark; Shin, Min C

    2008-01-01

    Segmentation of vessels in biomedical images is important as it can provide insight into analysis of vascular morphology and topology, and is required for kinetic analysis of flow velocity and vessel permeability. Intravital microscopy is a powerful tool as it enables in vivo imaging of both vasculature and circulating cells. However, the analysis of vasculature in those images is difficult due to the presence of cells and their image gradient. In this paper, we provide a novel method of segmenting vessels with a high level of cell-related clutter. A set of virtual point pairs ("vessel probes") are moved reacting to forces including Vessel Vector Flow (VVF) and Vessel Boundary Vector Flow (VBVF) forces. Incorporating the cell detection, the VVF force attracts the probes toward the vessel, while the VBVF force attracts the virtual points of the probes to localize the vessel boundary without being distracted by the image features of the cells. The vessel probes are moved according to Newtonian physics, reacting to the net force applied to them. We demonstrate the results on a set of five real in vivo images of liver vasculature cluttered by white blood cells. When compared against the ground truth prepared by the technician, the Root Mean Squared Error (RMSE) of segmentation with VVF and VBVF was 55% lower than that of the method without VVF and VBVF.
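The probe dynamics described, points advancing under a sum of attraction forces with damped Newtonian integration, can be sketched generically. The spring-like attraction below merely stands in for the VVF/VBVF forces; all constants are illustrative.

```python
import numpy as np

def move_probe(pos, vel, forces, mass=1.0, damping=0.8, dt=0.1):
    """One damped Newtonian step: sum the applied forces, update the
    velocity, then the position (semi-implicit Euler)."""
    net_force = np.sum(np.asarray(forces, dtype=float), axis=0)
    vel = damping * (vel + net_force / mass * dt)
    return pos + vel * dt, vel

# toy run: a single probe pulled toward a vessel point at (5, 0) by a
# spring-like attraction standing in for the vessel-attraction force
pos, vel = np.array([0.0, 0.0]), np.zeros(2)
target = np.array([5.0, 0.0])
for _ in range(300):
    pos, vel = move_probe(pos, vel, [2.0 * (target - pos)])
```

With damping below 1 the probe settles at the force equilibrium instead of oscillating indefinitely, which is the behavior a boundary-localizing probe needs.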

  13. Physics-based mathematical models for quantum devices via experimental system identification

    Energy Technology Data Exchange (ETDEWEB)

    Schirmer, S G; Oi, D K L; Devitt, S J [Department of Applied Maths and Theoretical Physics, University of Cambridge, Wilberforce Rd, Cambridge, CB3 0WA (United Kingdom); SUPA, Department of Physics, University of Strathclyde, Glasgow G4 0NG (United Kingdom); National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 (Japan)], E-mail: sgs29@cam.ac.uk

    2008-03-15

    We consider the task of intrinsic control system identification for quantum devices. The problem of experimental determination of subspace confinement is considered, and simple general strategies for full Hamiltonian identification and decoherence characterization of a controlled two-level system are presented.
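A toy version of Hamiltonian identification for a controlled two-level system: assume H = (ω/2)σx with an unknown coupling ω (ħ = 1), "measure" the excited-state population P1(t) = sin²(ωt/2) starting from |0⟩, and recover ω by least squares. The grid search and all numbers are illustrative, not the authors' protocol.

```python
import numpy as np

# Hypothetical identification task: the Hamiltonian is H = (omega/2) * sigma_x,
# so from |0> the excited-state population is P1(t) = sin^2(omega * t / 2).
def excited_population(t, omega):
    return np.sin(omega * t / 2.0) ** 2

omega_true = 2.0                       # the "unknown" value used to fake the data
times = np.linspace(0.0, 5.0, 200)
data = excited_population(times, omega_true)

# identify omega by least squares over a grid of candidate couplings
candidates = np.linspace(0.5, 4.0, 2000)
residuals = [np.sum((excited_population(times, w) - data) ** 2)
             for w in candidates]
omega_est = candidates[int(np.argmin(residuals))]
```

Real experiments add measurement noise and decoherence, which is why the paper's characterization strategies go well beyond a noiseless curve fit.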

  14. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    Science.gov (United States)

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping; i.e., spatial subdivision, discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution the components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to benefit from a two-order-of-magnitude efficiency gain on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not imply its endorsement, recommendation, or favoring by the United States Army.
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.
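The element-proximity computation mentioned above (task (c)) typically relies on a uniform-grid broad phase: spheres are binned into grid cells, and only same-cell or neighbor-cell pairs are checked exactly. A serial sketch with hypothetical inputs follows; GPU implementations parallelize the same binning.

```python
from collections import defaultdict

def contact_pairs(centers, radius, cell_size=None):
    """Broad-phase collision candidates via uniform spatial subdivision,
    followed by an exact sphere-overlap check. Valid for equal-radius
    spheres as long as cell_size >= 2 * radius."""
    cell_size = cell_size or 2.0 * radius
    key = lambda p: tuple(int(c // cell_size) for c in p)
    grid = defaultdict(list)
    for i, p in enumerate(centers):
        grid[key(p)].append(i)
    pairs = []
    for i, p in enumerate(centers):
        cx, cy, cz = key(p)
        # only spheres in the same or an adjacent grid cell can touch
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        if j > i:
                            d2 = sum((a - b) ** 2 for a, b in zip(p, centers[j]))
                            if d2 <= (2.0 * radius) ** 2:
                                pairs.append((i, j))
    return pairs
```

The binning step reduces the naive O(N²) pair check to roughly O(N) for bounded density, which is where the reported GPU speedups come from.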

  15. Ultrafast Laser Diagnostics for Energetic-Material Ignition Mechanisms: Tools for Physics-Based Model Development.

    Energy Technology Data Exchange (ETDEWEB)

    Kearney, Sean Patrick; Jilek, Brook Anton; Kohl, Ian Thomas; Farrow, Darcie; Urayama, Junji

    2014-11-01

    We present the results of an LDRD project to develop diagnostics to perform fundamental measurements of material properties during shock compression of condensed-phase materials at micron spatial scales and picosecond time scales. The report is structured into three main chapters, each focusing on a different diagnostic development effort. Direct picosecond laser drive is used to introduce shock waves into thin films of energetic and inert materials. The resulting laser-driven shock properties are probed via Ultrafast Time Domain Interferometry (UTDI), which can additionally be used to generate shock Hugoniot data in tabletop experiments. Stimulated Raman scattering (SRS) is developed as a temperature diagnostic. A transient absorption spectroscopy setup has been developed to probe shock-induced changes during shock compression. UTDI results are presented under dynamic, direct-laser-drive conditions, and shock Hugoniots are estimated for inert polystyrene samples and for the explosive hexanitroazobenzene, with results from both Sandia and Lawrence Livermore presented here. SRS and transient absorption diagnostics are demonstrated on static thin-film samples, and paths forward to dynamic experiments are presented.

  16. Development of Physics-Based Numerical Models for Uncertainty Quantification of Selective Laser Melting Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the proposed research is to characterize the influence of process parameter variability inherent to Selective Laser Melting (SLM) and performance effect...

  17. Simulation of stretch forming with intermediate heat treatments of aircraft skins - A physically based modeling approach

    NARCIS (Netherlands)

    Kurukuri, S.; Miroux, Alexis; Wisselink, H.H.; van den Boogaard, Antonius H.

    2011-01-01

    In the aerospace industry stretch forming is often used to produce skin parts. During stretch forming a sheet is clamped at two sides and stretched over a die, such that the sheet gets the shape of the die. However for complex shapes it is necessary to use expensive intermediate heat-treatments in

  18. Active Battery Management System with Physics Based life modeling topology, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Robust Data Acquisition on flight applications enables Researchers to rapidly advance technology. Distributed Electric Propulsion (DEP) and Hybrid Electric...

  19. Applications of the BIOPHYS Algorithm for Physically-Based Retrieval of Biophysical, Structural and Forest Disturbance Information

    Science.gov (United States)

    Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.

    2011-01-01

    Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Application outputs included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single-pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provides essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.
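The look-up-table inversion pattern described above can be sketched in a few lines: run a forward canopy reflectance model over the parameter range, then retrieve the parameter whose modeled spectrum best matches an observation. The two-band forward model below is a hypothetical stand-in for GeoSail/GOMS runs, not their actual equations.

```python
import numpy as np

# Hypothetical two-band forward model of canopy reflectance as a function
# of leaf area index (LAI); stands in for physically-based model runs.
def forward_model(lai):
    red = 0.30 * np.exp(-0.5 * lai) + 0.03          # red falls with LAI
    nir = 0.55 * (1.0 - np.exp(-0.4 * lai)) + 0.10  # NIR rises with LAI
    return np.array([red, nir])

# build the look-up table by running the model forward over the parameter range
lai_grid = np.linspace(0.0, 8.0, 801)
table = np.array([forward_model(l) for l in lai_grid])

def invert(observed_reflectance):
    """Retrieve LAI as the best spectral match in the look-up table."""
    idx = np.argmin(np.sum((table - observed_reflectance) ** 2, axis=1))
    return lai_grid[idx]
```

Multi-angle imagery improves this kind of retrieval by adding rows to the spectral match, which constrains parameters that a single view angle leaves ambiguous.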

  20. ASPECT (Automated System-level Performance Evaluation and Characterization Tool), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI has developed a suite of SAA tools and an analysis capability referred to as ASPECT (Automated System-level Performance Evaluation and Characterization Tool)....

  1. A Systems-Level Analysis Reveals Circadian Regulation of Splicing in Colorectal Cancer.

    Science.gov (United States)

    El-Athman, Rukeia; Fuhr, Luise; Relógio, Angela

    2018-06-20

    Accumulating evidence points to a significant role of the circadian clock in the regulation of splicing in various organisms, including mammals. Both dysregulated circadian rhythms and aberrant pre-mRNA splicing are frequently implicated in human disease, in particular in cancer. To investigate the role of the circadian clock in the regulation of splicing in a cancer progression context at the systems-level, we conducted a genome-wide analysis and compared the rhythmic transcriptional profiles of colon carcinoma cell lines SW480 and SW620, derived from primary and metastatic sites of the same patient, respectively. We identified spliceosome components and splicing factors with cell-specific circadian expression patterns including SRSF1, HNRNPLL, ESRP1, and RBM8A, as well as altered alternative splicing events and circadian alternative splicing patterns of output genes (e.g., VEGFA, NCAM1, FGFR2, CD44) in our cellular model. Our data reveals a remarkable interplay between the circadian clock and pre-mRNA splicing with putative consequences in tumor progression and metastasis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Tinnitus: pathology of synaptic plasticity at the cellular and system levels

    Directory of Open Access Journals (Sweden)

    Matthieu J Guitton

    2012-03-01

    Despite being more and more common, and having a high impact on the quality of life of sufferers, tinnitus does not yet have a cure. This has been mostly the result of limited knowledge of the biological mechanisms underlying this pathology. However, the last decade has witnessed tremendous progress in our understanding of the pathophysiology of tinnitus. Animal models have demonstrated that tinnitus is a pathology of neural plasticity with two main components: a molecular, peripheral component related to the initiation phase of tinnitus; and a system-level, central component related to the long-term maintenance of tinnitus. Using the most recent experimental data and the molecular/system dichotomy as a framework, we describe here the biological basis of tinnitus. We then discuss these mechanisms from an evolutionary perspective, highlighting similarities with memory. Finally, we consider how these discoveries can translate into therapies, and we suggest operative strategies to design new and effective combined therapeutic solutions using both pharmacological (local and systemic) and behavioral tools (e.g., using tele-medicine and virtual reality settings).

  3. System Level Analysis of a Water PCM HX Integrated into Orion's Thermal Control System

    Science.gov (United States)

    Navarro, Moses; Hansen, Scott; Seth, Rubik; Ungar, Eugene

    2015-01-01

    In a cyclical heat load environment such as low Lunar orbit, a spacecraft's radiators are not sized to reject the full heat load requirement. Traditionally, a supplemental heat rejection device (SHReD) such as an evaporator or sublimator is used to act as a "topper" to meet the additional heat rejection demands. Utilizing a Phase Change Material (PCM) heat exchanger (HX) as a SHReD provides an attractive alternative to evaporators and sublimators, as PCM HXs do not use a consumable, thereby reducing launch mass and volume requirements. In continued pursuit of water PCM HX development, an Orion system-level analysis was performed using Thermal Desktop for a water PCM HX integrated into Orion's thermal control system in a 100 km Lunar orbit. The study verified the thermal model using a wax PCM and analyzed 1) placing the PCM on the Internal Thermal Control System (ITCS) versus the External Thermal Control System (ETCS), 2) use of 30/70 PGW versus 50/50 PGW, and 3) increasing the radiator area in order to reduce PCM freeze times. The analysis showed that, for the assumed operating and boundary conditions, utilizing a water PCM HX on Orion is not a viable option for any case. Additionally, it was found that the radiator area would have to be increased by at least 40% in order to support a viable water-based PCM HX.
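The attraction of a PCM "topper" comes down to a latent-heat energy balance: the PCM must absorb the heat the radiators cannot reject during the hot part of the orbit. A first-order sizing sketch follows; every number is an illustrative assumption, not a value from the Orion analysis.

```python
# First-order sizing of a water PCM heat exchanger: the ice mass needed to
# absorb an orbital heat pulse through latent heat of fusion alone.
# All numbers below are illustrative assumptions.
LATENT_HEAT_FUSION = 334e3   # J/kg for water ice
excess_heat = 2.5e3          # W beyond radiator capacity in the hot case (assumed)
hot_duration = 45 * 60       # s of sun-side exposure per orbit (assumed)

energy_to_store = excess_heat * hot_duration        # J absorbed per orbit
pcm_mass = energy_to_store / LATENT_HEAT_FUSION     # kg, ignoring sensible heat
```

The companion constraint, which drove the negative conclusion above, is that the same energy must be rejected again (the PCM refrozen) during the cold part of the orbit, which is what forces the radiator area to grow.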

  4. Interventions to Support System-level Implementation of Health Promoting Schools: A Scoping Review

    Directory of Open Access Journals (Sweden)

    Jessie-Lee D. McIsaac

    2016-02-01

    Health promoting schools (HPS) is recognized globally as a multifaceted approach that can support health behaviours. There is increasing clarity around factors that influence HPS at a school level but limited synthesized knowledge on the broader system-level elements that may impact local implementation barriers and support uptake of a HPS approach. This study comprised a scoping review to identify, summarise and disseminate the range of research to support the uptake of a HPS approach across school systems. Two reviewers screened and extracted data according to inclusion/exclusion criteria. Relevant studies were identified using a multi-phased approach including searching electronic bibliographic databases of peer reviewed literature, hand-searching reference lists and article recommendations from experts. In total, 41 articles met the inclusion criteria for the review, representing studies across nine international school systems. Overall, studies described policies that provided high-level direction and resources within school jurisdictions to support implementation of a HPS approach. Various multifaceted organizational and professional interventions were identified, including strategies to enable and restructure school environments through education, training, modelling and incentives. A systematic realist review of the literature may be warranted to identify the types of intervention that work best for whom, in what circumstance to create healthier schools and students.

  5. On-Site Renewable Energy and Green Buildings: A System-Level Analysis.

    Science.gov (United States)

    Al-Ghamdi, Sami G; Bilec, Melissa M

    2016-05-03

    Adopting a green building rating system (GBRS) that strongly considers use of renewable energy can have important environmental consequences, particularly in developing countries. In this paper, we studied on-site renewable energy and GBRSs at the system level to explore potential benefits and challenges. While we have focused on GBRSs, the findings can offer additional insight for renewable incentives across sectors. An energy model was built for 25 sites to compute the potential solar and wind power production on-site, as available within the building footprint and regional climate. A life-cycle approach and cost analysis were then completed to analyze the environmental and economic impacts. Environmental impacts of renewable energy varied dramatically between sites; in some cases the environmental benefits were limited despite a significant economic burden from those on-site renewable systems, and vice versa. Our recommendation for GBRSs, and broader policies and regulations, is to require buildings with higher environmental impacts to achieve higher levels of energy performance and on-site renewable energy utilization, instead of fixed percentages.

  6. Interventions to Support System-level Implementation of Health Promoting Schools: A Scoping Review

    Science.gov (United States)

    McIsaac, Jessie-Lee D.; Hernandez, Kimberley J.; Kirk, Sara F.L.; Curran, Janet A.

    2016-01-01

    Health promoting schools (HPS) is recognized globally as a multifaceted approach that can support health behaviours. There is increasing clarity around factors that influence HPS at a school level but limited synthesized knowledge on the broader system-level elements that may impact local implementation barriers and support uptake of a HPS approach. This study comprised a scoping review to identify, summarise and disseminate the range of research to support the uptake of a HPS approach across school systems. Two reviewers screened and extracted data according to inclusion/exclusion criteria. Relevant studies were identified using a multi-phased approach including searching electronic bibliographic databases of peer reviewed literature, hand-searching reference lists and article recommendations from experts. In total, 41 articles met the inclusion criteria for the review, representing studies across nine international school systems. Overall, studies described policies that provided high-level direction and resources within school jurisdictions to support implementation of a HPS approach. Various multifaceted organizational and professional interventions were identified, including strategies to enable and restructure school environments through education, training, modelling and incentives. A systematic realist review of the literature may be warranted to identify the types of intervention that work best for whom, in what circumstance to create healthier schools and students. PMID:26861376

  7. Physics-Based Scientific Learning Module to Improve Students Motivation and Results

    Directory of Open Access Journals (Sweden)

    Soni Nugroho Yuliono

    2018-02-01

    Teaching materials available in schools for learning physics, especially scientific-based materials, are limited, and this is one of the obstacles to achieving the learning objectives on electromagnetic waves material. The aims of the research are to produce scientific-based physics learning modules for grade XII high school students that meet the eligibility criteria, and to determine the effectiveness of using these modules to improve the motivation and learning outcomes of grade XII high school students. The module was developed using the 4D procedure, which consists of the steps define, design, develop, and disseminate. The definition phase consists of analysing teacher and student needs, analysing the material, and formulating the learning module. In the design phase, the stages of scientific learning are integrated into the module. The development phase consists of developing the module from the design results, validating its feasibility, revising the module, limited testing, and using the scientific-based physics learning module in grade XII IPA 1 of Batik 2 Surakarta Senior High School. The dissemination phase is the distribution of the module to other senior high schools in Surakarta. Data analysis for the study is quantitative descriptive analysis based on score criteria, and the increase in student motivation is analysed through N-gain. The conclusions obtained are: 1) the scientific-based physics learning modules developed meet the eligibility criteria on the aspects of content and presentation, language, graphics, and learning; the module was declared feasible in validation with a percentage of 85.16%, by students and teachers with 83.66%, and in the response phase of the dissemination with 85.93%, which is included in the category "very good"; 2) scientific-based physics learning modules with material scientific

  8. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    Science.gov (United States)

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Advancements toward a Systems Level Understanding of the Human Oral Microbiome

    Directory of Open Access Journals (Sweden)

    Jeffrey Scott Mclean

    2014-07-01

    Oral microbes represent one of the most well-studied microbial communities owing to the fact that they are a fundamental part of human development influencing health and disease, an easily accessible human microbiome, a highly structured and remarkably resilient biofilm, as well as a model of bacteria-bacteria and bacteria-host interactions. In the eighty years since oral plaque was first characterized for its functionally stable physiological properties, such as the highly repeatable rapid pH decrease upon carbohydrate addition and subsequent recovery phase, the fundamental approaches to study the oral microbiome have cycled back and forth between community-level investigations and characterizing individual model isolates. Since that time, many individual species have been well characterized and the development of the early plaque community, which involves many cell–cell binding interactions, has been carefully described. With high-throughput sequencing enabling the enormous diversity of the oral cavity to be realized, a number of new challenges to progress were revealed. The large number of uncultivated oral species, the high interpersonal variability of taxonomic carriage and the possibility of multiple pathways to dysbiosis pose major hurdles to obtaining a systems-level understanding from the community to the gene level. It is now possible, however, to start connecting the insights gained from single species with community-wide approaches. This review will discuss some of the recent insights into the oral microbiome at a fundamental level, existing knowledge gaps, as well as challenges that have surfaced and the approaches to address them.

  10. An Adaptive Physics-Based Method for the Solution of One-Dimensional Wave Motion Problems

    Directory of Open Access Journals (Sweden)

    Masoud Shafiei

    2015-12-01

    In this paper, an adaptive physics-based method is developed for solving wave motion problems in one dimension (i.e., wave propagation in strings, rods and beams). The solution of the problem includes two main parts. In the first part, after discretization of the domain, a physics-based method is developed considering the conservation of mass and the balance of momentum. In the second part, adaptive points are determined using wavelet theory, employing the Deslauriers-Dubuc (D-D) wavelets. In the first step, the domain of the problem is discretized into uniform cells, taking into consideration the load and the characteristics of the structure. After the first trial solution, the D-D interpolation identifies where points are lacking or redundant in the domain; these points are then added or eliminated for the next solution. This process may be repeated to obtain an adaptive mesh for each step. Also, a smoothing spline fit is used to eliminate the noisy portion of the solution. Finally, the results of the proposed method are compared with results available in the literature. The comparison shows excellent agreement between the obtained results and those already reported.
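A minimal sketch of how D-D interpolating wavelets can flag missing or redundant points: predict each fine-grid midpoint from the coarse grid with the 4-point Deslauriers-Dubuc stencil and keep only points whose prediction error (detail) exceeds a threshold. The stencil choice, boundary clamping and tolerance below are assumptions, not the paper's exact scheme:

```python
import numpy as np

def dd4_predict(coarse):
    """Predict midpoints of a coarse sample set with the 4-point
    Deslauriers-Dubuc stencil (-1/16, 9/16, 9/16, -1/16).
    The stencil is clamped at the boundaries (an assumption)."""
    n = len(coarse)
    mid = np.empty(n - 1)
    for i in range(n - 1):
        im1 = max(i - 1, 0)
        ip2 = min(i + 2, n - 1)
        mid[i] = (-coarse[im1] + 9 * coarse[i]
                  + 9 * coarse[i + 1] - coarse[ip2]) / 16.0
    return mid

def adaptive_points(x, u, tol=1e-3):
    """Return fine-grid midpoints whose D-D prediction error exceeds tol,
    plus the detail (error) values. x, u are samples on a uniform fine
    grid whose odd-indexed entries are the midpoints."""
    detail = np.abs(u[1::2] - dd4_predict(u[::2]))
    keep = detail > tol   # True -> point is needed in the adaptive mesh
    return x[1::2][keep], detail
```

On smooth data the details vanish and midpoints can be dropped; near a kink or wave front the details are large and those points are retained, which is the mechanism behind the adaptive mesh described above.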

  11. Estimating fractional vegetation cover and the vegetation index of bare soil and highly dense vegetation with a physically based method

    Science.gov (United States)

    Song, Wanjuan; Mu, Xihan; Ruan, Gaiyan; Gao, Zhan; Li, Linyuan; Yan, Guangjian

    2017-06-01

    Normalized difference vegetation index (NDVI) of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for Fractional Vegetation Cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimations when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced in a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for simulated data is less than 0.117, depending on the type of noise added to the data. In the real data experiment, the RMSD for cropland is 0.127, for grassland 0.075, and for forest 0.107. The experimental areas respectively lack fully vegetated and non-vegetated pixels at 1 km resolution. Consequently, a relatively large uncertainty is found while using the statistical methods and the RMSD ranges from 0.110 to 0.363 based on the real data. The proposed method is convenient for producing NDVIv and NDVIs maps for FVC estimation on regional and global scales.
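The two relations the method combines — the linear NDVI mixture model and a Beer-Lambert gap-fraction model linking FVC to LAI — can be sketched as follows (the extinction coefficient k and the clipping to [0, 1] are assumptions, not values from the paper):

```python
import numpy as np

def fvc_linear_mixture(ndvi, ndvi_s, ndvi_v):
    """Linear mixture model: NDVI = FVC * NDVIv + (1 - FVC) * NDVIs,
    inverted for FVC and clipped to the physical range."""
    return np.clip((ndvi - ndvi_s) / (ndvi_v - ndvi_s), 0.0, 1.0)

def fvc_gap_fraction(lai, k=0.5):
    """Nadir gap-fraction model: cover is one minus the Beer-Lambert
    gap probability; k is an assumed extinction coefficient."""
    return 1.0 - np.exp(-k * lai)
```

The paper's contribution is essentially to solve these two expressions jointly, using MODIS LAI and directional NDVI, so that NDVIv and NDVIs fall out even where no pure vegetated or bare pixels exist.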

  12. Behavioral System Level Power Consumption Modeling of Mobile Video Streaming applications

    OpenAIRE

    Benmoussa , Yahia; Boukhobza , Jalil; Hadjadj-Aoul , Yassine; Lagadec , Loïc; Benazzouz , Djamel

    2012-01-01

    Nowadays, the use of mobile applications and terminals faces fundamental challenges related to energy constraints, due to limited battery lifetime in the face of increasing hardware capabilities. Video streaming is one of the most energy-consuming applications in a mobile system because of its intensive use of bandwidth, memory and processing power. In this work, we aim to propose a methodology for building and validating a high-level global power consumption mo...

  13. System-level causal modelling of widescale resource plundering: Acting on the rhino poaching catastrophe

    CSIR Research Space (South Africa)

    Koen, Hildegarde S

    2017-10-01

    …been uncovered. The rhino poaching problem is even more complex than initially thought, and this paper serves as a reflective piece on how the research methodology for this complex and poorly understood problem was shifted towards...

  14. Underwater Wireless Optical Communications Systems: from System-Level Demonstrations to Channel Modeling

    KAUST Repository

    Oubei, Hassan M.

    2018-01-01

    Approximately two-thirds of earth's surface is covered by water. There is a growing interest from the military and commercial communities in having an efficient, secure and high-bandwidth underwater wireless communication (UWC) system for tactical

  15. Analysis of RF thrusters with TOPICA and a global system-level model

    NARCIS (Netherlands)

    Lancellotti, V.; Vecchi, G.; Maggiora, R.; Pavarin, D.; Rocca, S.; Bramanti, C.

    2007-01-01

    Recent advances in plasma-based propulsion systems have led to the development of electromagnetic (RF) generation and acceleration systems, capable of providing highly controllable and wide-ranging exhaust velocities, and potentially enabling a wide range of missions at power levels from kW to MW. In this

  16. Randomization and resilience of brain functional networks as systems-level endophenotypes of schizophrenia.

    Science.gov (United States)

    Lo, Chun-Yi Zac; Su, Tsung-Wei; Huang, Chu-Chung; Hung, Chia-Chun; Chen, Wei-Ling; Lan, Tsuo-Hung; Lin, Ching-Po; Bullmore, Edward T

    2015-07-21

    Schizophrenia is increasingly conceived as a disorder of brain network organization or dysconnectivity syndrome. Functional MRI (fMRI) networks in schizophrenia have been characterized by abnormally random topology. We tested the hypothesis that network randomization is an endophenotype of schizophrenia and therefore evident also in nonpsychotic relatives of patients. Head movement-corrected, resting-state fMRI data were acquired from 25 patients with schizophrenia, 25 first-degree relatives of patients, and 29 healthy volunteers. Graphs were used to model functional connectivity as a set of edges between regional nodes. We estimated the topological efficiency, clustering, degree distribution, resilience, and connection distance (in millimeters) of each functional network. The schizophrenic group demonstrated significant randomization of global network metrics (reduced clustering, greater efficiency), a shift in the degree distribution to a more homogeneous form (fewer hubs), a shift in the distance distribution (proportionally more long-distance edges), and greater resilience to targeted attack on network hubs. The networks of the relatives also demonstrated abnormal randomization and resilience compared with healthy volunteers, but they were typically less topologically abnormal than the patients' networks and did not have abnormal connection distances. We conclude that schizophrenia is associated with replicable and convergent evidence for functional network randomization, and a similar topological profile was evident also in nonpsychotic relatives, suggesting that this is a systems-level endophenotype or marker of familial risk. We speculate that the greater resilience of brain networks may confer some fitness advantages on nonpsychotic relatives that could explain persistence of this endophenotype in the population.
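The direction of the reported "randomization" (higher efficiency, lower clustering) can be reproduced on a toy Watts-Strogatz model; the sketch below is not the study's fMRI pipeline, and the rewiring scheme and graph sizes are illustrative assumptions:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring lattice: each node linked to k/2 neighbours on each side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    return adj

def rewire(adj, p, seed=0):
    """Watts-Strogatz-style rewiring: with probability p, move one end of
    each edge to a random node (self-loops and duplicates avoided)."""
    rng = random.Random(seed)
    n = len(adj)
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    new = {v: set() for v in range(n)}
    for u, v in edges:
        if rng.random() < p:
            w = rng.randrange(n)
            while w == u or w in new[u]:
                w = rng.randrange(n)
            v = w
        new[u].add(v)
        new[v].add(u)
    return new

def global_efficiency(adj):
    """Mean of 1/d(s, t) over ordered node pairs (BFS hop counts)."""
    n = len(adj)
    total = 0.0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(1.0 / d for v, d in dist.items() if v != s)
    return total / (n * (n - 1))

def avg_clustering(adj):
    """Mean fraction of closed triangles around each node."""
    acc = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        acc += 2.0 * links / (k * (k - 1))
    return acc / len(adj)
```

Fully rewiring a ring lattice lowers its clustering and raises its global efficiency, which is exactly the topological shift the study describes in the patient networks.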

  17. Multiple fMRI system-level baseline connectivity is disrupted in patients with consciousness alterations.

    Science.gov (United States)

    Demertzi, Athena; Gómez, Francisco; Crone, Julia Sophia; Vanhaudenhuyse, Audrey; Tshibanda, Luaba; Noirhomme, Quentin; Thonnard, Marie; Charland-Verville, Vanessa; Kirsch, Murielle; Laureys, Steven; Soddu, Andrea

    2014-03-01

    In healthy conditions, group-level fMRI resting state analyses identify ten resting state networks (RSNs) of cognitive relevance. Here, we aim to assess the ten-network model in severely brain-injured patients suffering from disorders of consciousness and to identify those networks which will be most relevant to discriminate between patients and healthy subjects. 300 fMRI volumes were obtained in 27 healthy controls and 53 patients in minimally conscious state (MCS), vegetative state/unresponsive wakefulness syndrome (VS/UWS) and coma. Independent component analysis (ICA) reduced data dimensionality. The ten networks were identified by means of a multiple template-matching procedure and were tested on neuronal properties (neuronal vs non-neuronal) in a data-driven way. Univariate analyses detected between-group differences in networks' neuronal properties and estimated voxel-wise functional connectivity in the networks, which were significantly less identifiable in patients. A nearest-neighbor "clinical" classifier was used to determine the networks with high between-group discriminative accuracy. Healthy controls were characterized by more neuronal components compared to patients in VS/UWS and in coma. Compared to healthy controls, fewer patients in MCS and VS/UWS showed components of neuronal origin for the left executive control network, default mode network (DMN), auditory, and right executive control network. The "clinical" classifier indicated the DMN and auditory network with the highest accuracy (85.3%) in discriminating patients from healthy subjects. FMRI multiple-network resting state connectivity is disrupted in severely brain-injured patients suffering from disorders of consciousness. When performing ICA, multiple-network testing and control for neuronal properties of the identified RSNs can advance fMRI system-level characterization. Automatic data-driven patient classification is the first step towards future single-subject objective diagnostics.

  18. NASA System-Level Design, Analysis and Simulation Tools Research on NextGen

    Science.gov (United States)

    Bardina, Jorge

    2011-01-01

    A review of the research accomplished in 2009 in the System-Level Design, Analysis and Simulation Tools (SLDAST) of NASA's Airspace Systems Program is presented. This research thrust focuses on the integrated system-level assessment of component-level innovations, concepts and technologies of the Next Generation Air Traffic System (NextGen) under research in the ASP program to enable the development of revolutionary improvements and modernization of the National Airspace System. The review includes the accomplishments on baseline research and the advancements on design studies and system-level assessment, including the cluster analysis as an annualization standard of the air traffic in the U.S. National Airspace, and the ACES-Air MIDAS integration for human-in-the-loop analyses within the NAS air traffic simulation.

  19. Implementing a Multi-Tiered System of Support (MTSS): Collaboration between School Psychologists and Administrators to Promote Systems-Level Change

    Science.gov (United States)

    Eagle, John W.; Dowd-Eagle, Shannon E.; Snyder, Andrew; Holtzman, Elizabeth Gibbons

    2015-01-01

    Current educational reform mandates the implementation of school-based models for early identification and intervention, progress monitoring, and data-based assessment of student progress. This article provides an overview of interdisciplinary collaboration for systems-level consultation within a Multi-Tiered System of Support (MTSS) framework.…

  20. A revolution without tooth and claw-redefining the physical base units.

    Science.gov (United States)

    Pietsch, Wolfgang

    2014-06-01

    A case study is presented of a recent proposal by the major metrology institutes to redefine four of the physical base units, namely kilogram, ampere, mole, and kelvin. The episode shows a number of features that are unusual for progress in an objective science: for example, the progress is not triggered by experimental discoveries or theoretical innovations; also, the new definitions are eventually implemented by means of a voting process. In the philosophical analysis, I will first argue that the episode provides considerable evidence for confirmation holism, i.e. the claim that central statements in fundamental science cannot be tested in isolation; second, that the episode satisfies many of the criteria which Kuhn requires for scientific revolutions even though one would naturally classify it as normal science. These two observations are interrelated since holism can provide within normal science a possible source of future revolutionary periods.

  1. Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects

    Science.gov (United States)

    Beddiaf, Ali; Babahenini, Mohamed Chaouki

    2018-03-01

    Recent interactive rendering approaches aim to produce images efficiently. However, time constraints deeply affect their output accuracy and realism (many light phenomena are poorly supported or not supported at all). To remedy this issue, in this paper, we propose a physically based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based in the sense that no transformation of the particles into a grid is required. This interesting feature makes it able to handle many particle types (water, bubble, foam, and sand). On top of that, a medium with different fluids (color, phase function, etc.) can also be rendered.
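The core of rendering a heterogeneous participating medium directly from particles can be sketched as a ray-marched Beer-Lambert transmittance over an SPH density estimate (the poly6 kernel and all coefficients below are assumptions; the paper's method additionally handles refraction and multiple scattering):

```python
import math

def poly6(r2, h):
    """SPH poly6 smoothing kernel (an assumed kernel choice)."""
    if r2 >= h * h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r2) ** 3

def density(p, particles, mass, h):
    """SPH density estimate at point p, straight from the particle set."""
    return sum(mass * poly6(sum((a - b) ** 2 for a, b in zip(p, q)), h)
               for q in particles)

def transmittance(origin, direction, particles, mass, h, sigma_t, t_max,
                  n_steps=64):
    """Ray-marched Beer-Lambert transmittance T = exp(-integral of
    sigma_t * density ds), with no intermediate grid."""
    ds = t_max / n_steps
    tau = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * ds
        p = tuple(o + t * d for o, d in zip(origin, direction))
        tau += sigma_t * density(p, particles, mass, h) * ds
    return math.exp(-tau)
```

Because the density is evaluated at arbitrary sample points from the particles themselves, the same machinery works for any particle type (water, bubble, foam, sand) by varying mass, extinction and phase function.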

  2. RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research.

    Science.gov (United States)

    Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H

    2014-02-07

    RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.

  3. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.
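A minimal sketch of coupling a genetic search with a first-order (histogram) statistic: evolve a scalar threshold that maximizes Otsu-style between-class variance. The GA operators and parameters below are simplifications for illustration, not the authors' algorithm:

```python
import numpy as np

def between_class_variance(img, t):
    """First-order (histogram) criterion: Otsu's between-class variance."""
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def ga_threshold(img, pop=20, gens=40, seed=0):
    """Evolve a scalar threshold: truncation selection, blend crossover,
    Gaussian mutation (all simplified GA choices)."""
    rng = np.random.default_rng(seed)
    lo, hi = float(img.min()), float(img.max())
    P = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = np.array([between_class_variance(img, t) for t in P])
        elite = P[np.argsort(fit)[-pop // 2:]]          # keep best half
        parents = rng.choice(elite, size=(pop // 2, 2))
        children = parents.mean(axis=1) + rng.normal(0.0, 0.02 * (hi - lo),
                                                     pop // 2)
        P = np.concatenate([elite, np.clip(children, lo, hi)])
    fit = np.array([between_class_variance(img, t) for t in P])
    return float(P[np.argmax(fit)])
```

On a bimodal intensity image (cool background, hot crack) the evolved threshold settles between the two modes, which is the behaviour the genetic control of the segmentation threshold is meant to automate.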

  4. A Probabilistic Approach for the System-Level Design of Multi-ASIP Platforms

    DEFF Research Database (Denmark)

    Micconi, Laura

    introduce a system-level Design Space Exploration (DSE) for the very early phases of the design that automatizes part of the multi-ASIP design flow. Our DSE is responsible for assigning the tasks to the different ASIPs exploring different platform alternatives. We perform a schedulability analysis for each...

  5. System-Level Design of an Integrated Receiver Front End for a Wireless Ultrasound Probe

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; Hemmsen, Martin Christian; Llimos Muntal, Pere

    2016-01-01

    In this paper, a system-level design is presented for an integrated receive circuit for a wireless ultrasound probe, which includes analog front ends and beamformation modules. This paper focuses on the investigation of the effects of architectural design choices on the image quality. The point...

  6. Toward a physics-based rate and state friction law for earthquake nucleation processes in fault zones with granular gouge

    Science.gov (United States)

    Ferdowsi, B.; Rubin, A. M.

    2017-12-01

    Numerical simulations of earthquake nucleation rely on constitutive rate and state evolution laws to model earthquake initiation and propagation processes. The response of different state evolution laws to large velocity increases is an important feature of these constitutive relations that can significantly change the style of earthquake nucleation in numerical models. However, there is currently no rigorous understanding of the physical origins of the response of bare rock or gouge-filled fault zones to large velocity increases. This in turn hinders our ability to design physics-based friction laws that can appropriately describe those responses. We here argue that most fault zones form a granular gouge after an initial shearing phase and that it is the behavior of the gouge layer that controls the fault friction. We perform numerical experiments of a confined sheared granular gouge under a range of confining stresses and driving velocities relevant to fault zones and apply 1-3 orders of magnitude velocity steps to explore the dynamical behavior of the system from grain to macro scales. We compare our numerical observations with experimental data from biaxial double-direct-shear fault gouge experiments under equivalent loading and driving conditions. Our intention is to first investigate the degree to which these numerical experiments, with Hertzian normal and Coulomb friction laws at the grain-grain contact scale and without any time-dependent plasticity, can reproduce experimental fault gouge behavior. We next compare the behavior observed in numerical experiments with predictions of the Dieterich (Aging) and Ruina (Slip) friction laws. Finally, the numerical observations at the grain and meso scales will be used for designing a rate and state evolution law that takes into account recent advances in the rheology of granular systems, including local and non-local effects, for a wide range of shear rates and slow and fast deformation regimes of the fault gouge.
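The Dieterich (Aging) and Ruina (Slip) laws the simulations are compared against can be sketched for an idealized velocity-step test at constant sliding speed (no elastic coupling; the a, b, Dc values below are illustrative textbook choices, not those of the paper):

```python
import numpy as np

def simulate_vstep(law="aging", v1=1e-6, v2=1e-4, dc=1e-5,
                   mu0=0.6, a=0.01, b=0.015, v0=1e-6):
    """Friction mu = mu0 + a ln(V/V0) + b ln(V0 theta/Dc) through a
    velocity step at constant sliding speed (no elastic coupling).
    Explicit Euler; parameter values are illustrative."""
    theta = dc / v1                    # steady state at v1
    dt = 0.01 * dc / v2                # resolve the evolution at v2
    mus = []
    for step in range(4000):
        v = v1 if step < 100 else v2
        if law == "aging":             # Dieterich aging law
            dtheta = 1.0 - v * theta / dc
        else:                          # Ruina slip law
            x = v * theta / dc
            dtheta = -x * np.log(x)
        theta += dt * dtheta
        mus.append(mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc))
    return np.array(mus)
```

For the step from 1e-6 to 1e-4 m/s, both laws show the direct effect a·ln(V2/V1) ≈ 0.046 followed by evolution to the new steady state μss = μ0 + (a−b)·ln(V2/V0); their differing transient shapes are exactly what the paper's velocity-step comparisons probe.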

  7. Non-covalent interactions across organic and biological subsets of chemical space: Physics-based potentials parametrized from machine learning

    Science.gov (United States)

    Bereau, Tristan; DiStasio, Robert A.; Tkatchenko, Alexandre; von Lilienfeld, O. Anatole

    2018-06-01

    Classical intermolecular potentials typically require an extensive parametrization procedure for any new compound considered. To do away with prior parametrization, we propose a combination of physics-based potentials with machine learning (ML), coined IPML, which is transferable across small neutral organic and biologically relevant molecules. ML models provide on-the-fly predictions for environment-dependent local atomic properties: electrostatic multipole coefficients (significant error reduction compared to previously reported), the population and decay rate of valence atomic densities, and polarizabilities across conformations and chemical compositions of H, C, N, and O atoms. These parameters enable accurate calculations of intermolecular contributions—electrostatics, charge penetration, repulsion, induction/polarization, and many-body dispersion. Unlike other potentials, this model is transferable in its ability to handle new molecules and conformations without explicit prior parametrization: All local atomic properties are predicted from ML, leaving only eight global parameters—optimized once and for all across compounds. We validate IPML on various gas-phase dimers at and away from equilibrium separation, where we obtain mean absolute errors between 0.4 and 0.7 kcal/mol for several chemically and conformationally diverse datasets representative of non-covalent interactions in biologically relevant molecules. We further focus on hydrogen-bonded complexes—essential but challenging due to their directional nature—where datasets of DNA base pairs and amino acids yield an extremely encouraging 1.4 kcal/mol error. Finally, and as a first look, we consider IPML for denser systems: water clusters, supramolecular host-guest complexes, and the benzene crystal.

  8. System Level Design of a Continuous-Time Delta-Sigma Modulator for Portable Ultrasound Scanners

    DEFF Research Database (Denmark)

    Llimos Muntal, Pere; Færch, Kjartan; Jørgensen, Ivan Harald Holger

    2015-01-01

    In this paper the system level design of a continuous-time ∆Σ modulator for portable ultrasound scanners is presented. The overall required signal-to-noise ratio (SNR) is derived to be 42 dB and the sampling frequency used is 320 MHz for an oversampling ratio of 16. In order to match these requirements, the performance of the ∆Σ modulator versus various block performance parameters is presented as trade-off curves based on high-level VerilogA simulations. Based on these results, the block specifications are derived.
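The 42 dB target at OSR 16 can be sanity-checked against the textbook peak-SQNR bound for an ideal order-L noise-shaping loop; this bound, not the authors' VerilogA simulations, is the basis of the sketch below:

```python
import math

def ideal_sqnr_db(order, osr, bits=1):
    """Peak SQNR bound for an ideal order-L noise-shaping modulator with
    an N-bit quantizer: 6.02N + 1.76 + 10log10((2L+1)/pi^(2L))
    + (2L+1)*10log10(OSR)."""
    L, N = order, bits
    return (6.02 * N + 1.76
            + 10.0 * math.log10((2 * L + 1) / math.pi ** (2 * L))
            + (2 * L + 1) * 10.0 * math.log10(osr))
```

At OSR = 16 with a 1-bit quantizer the bound gives roughly 38.7 dB at first order and 55.1 dB at second order, so a 42 dB target with margin points to at least a second-order loop under these idealized assumptions.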

  9. System-Level Optimization of a DAC for Hearing-Aid Audio Class D Output Stage

    DEFF Research Database (Denmark)

    Pracný, Peter; Jørgensen, Ivan Harald Holger; Bruun, Erik

    2013-01-01

    This paper deals with system-level optimization of a digital-to-analog converter (DAC) for a hearing-aid audio Class D output stage. We discuss the ΣΔ modulator system-level design parameters: the order, the oversampling ratio (OSR) and the number of bits in the quantizer. We show that combining a reduction of the OSR with an increase of the order results in considerable power savings while the audio quality is kept. For further savings in the ΣΔ modulator, overdesign and subsequent coarse coefficient quantization are used. A figure of merit (FOM) is introduced to confirm this optimization approach by comparing two ΣΔ modulator designs. The proposed optimization has impact on the whole hearing-aid audio back-end system, including less hardware in the interpolation filter and half the switching rate in the digital-pulse-width-modulation (DPWM) block and Class D output stage.

  10. Next Generation Civil Transport Aircraft Design Considerations for Improving Vehicle and System-Level Efficiency

    Science.gov (United States)

    Acosta, Diana M.; Guynn, Mark D.; Wahls, Richard A.; DelRosario, Ruben

    2013-01-01

    The future of aviation will benefit from research in aircraft design and air transportation management aimed at improving efficiency and reducing environmental impacts. This paper presents civil transport aircraft design trends and opportunities for improving vehicle and system-level efficiency. Aircraft design concepts and the emerging technologies critical to reducing thrust specific fuel consumption, reducing weight, and increasing lift to drag ratio currently being developed by NASA are discussed. Advancements in the air transportation system aimed towards system-level efficiency are discussed as well. Finally, the paper describes the relationship between the air transportation system, aircraft, and efficiency. This relationship is characterized by operational constraints imposed by the air transportation system that influence aircraft design, and operational capabilities inherent to an aircraft design that impact the air transportation system.

  11. A system level boundary scan controller board for VME applications [to CERN experiments]

    CERN Document Server

    Cardoso, N; Da Silva, J C

    2000-01-01

    This work is the result of a collaboration between INESC and LIP in the CMS experiment being conducted at CERN. The collaboration addresses the application of boundary scan test at system level, namely the development of a VME boundary scan controller (BSC) board prototype and the corresponding software. This prototype uses the MTM bus existing in the VME64x backplane to apply the 1149.1 test vectors to a system composed of nineteen boards, called here units under test (UUTs). A top-down approach is used to describe our work. The paper begins with some insights about the experiment being conducted at CERN, proceeds with system-level considerations concerning our work, and gives some details about the BSC board. The results obtained so far and the proposed work are reviewed at the end of this contribution. (11 refs).

  12. Competition, liquidity and stability: international evidence at the bank and systemic levels

    OpenAIRE

    Nguyen, Thi Ngoc My

    2017-01-01

    This thesis investigates the impact of market power on bank liquidity; the association between competition and systemic liquidity; and whether the associations between liquidity and stability at both bank- and systemic- levels are affected by competition. The first research question is explored in the context of 101 countries over 1996-2013 while the second and the third, which require listed banks, use a smaller sample of 32 nations during 2001-2013. The Panel Least Squares and the system Ge...

  13. System-level energy efficiency is the greatest barrier to development of the hydrogen economy

    International Nuclear Information System (INIS)

    Page, Shannon; Krumdieck, Susan

    2009-01-01

    Current energy research investment policy in New Zealand is based on assumed benefits of transitioning to hydrogen as a transport fuel and as storage for electricity from renewable resources. The hydrogen economy concept, as set out in recent commissioned research investment policy advice documents, includes a range of hydrogen energy supply and consumption chains for transport and residential energy services. The benefits of research and development investments in these advice documents were not fully analyzed in terms of cost, energy-efficiency improvement, or greenhouse gas emissions reduction. This paper sets out a straightforward method to quantify the system-level efficiency of these energy chains. The method was applied to transportation and stationary heat and power, with hydrogen generated from wind energy, natural gas and coal. The system-level efficiencies for the hydrogen chains were compared to direct use of conventionally generated electricity, and with internal combustion engines operating on gas- or coal-derived fuel. The hydrogen energy chains were shown to provide little or no system-level efficiency improvement over conventional technology. The current research investment policy is aimed at enabling a hydrogen economy without considering the dramatic loss of efficiency that would result from using this energy carrier.
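The system-level efficiency the paper quantifies is essentially the product of the conversion efficiencies of every stage in an energy chain. The sketch below illustrates the arithmetic; the stage values are round-number assumptions chosen for illustration, not the paper's measured figures.

```python
# System-level efficiency of an energy chain = product of stage efficiencies.
# All stage values below are illustrative assumptions, not the paper's data.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies (each in 0..1) into a chain figure."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Hypothetical wind-to-wheels hydrogen chain:
# electrolysis, compression/storage, fuel cell, electric drivetrain.
hydrogen_chain = chain_efficiency([0.70, 0.90, 0.50, 0.90])

# Hypothetical direct-electric chain: transmission, battery, drivetrain.
electric_chain = chain_efficiency([0.93, 0.90, 0.90])

print(round(hydrogen_chain, 3), round(electric_chain, 3))
```

Even with generous stage assumptions, the multiplicative structure makes long hydrogen chains lose badly to the shorter direct-electricity chain, which is the paper's central point.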

  14. Integrating Omics Technologies to Study Pulmonary Physiology and Pathology at the Systems Level

    Directory of Open Access Journals (Sweden)

    Ravi Ramesh Pathak

    2014-04-01

    Full Text Available Assimilation and integration of “omics” technologies, including genomics, epigenomics, proteomics, and metabolomics has readily altered the landscape of medical research in the last decade. The vast and complex nature of omics data can only be interpreted by linking molecular information at the organismic level, forming the foundation of systems biology. Research in pulmonary biology/medicine has necessitated integration of omics, network, systems and computational biology data to differentially diagnose, interpret, and prognosticate pulmonary diseases, facilitating improvement in therapy and treatment modalities. This review describes how to leverage this emerging technology in understanding pulmonary diseases at the systems level, called a “systomic” approach. Considering the operational wholeness of cellular and organ systems, the diseased genome, proteome, and metabolome need to be conceptualized at the systems level to understand disease pathogenesis and progression. Currently available omics technology and resources require a certain degree of training and proficiency in addition to dedicated hardware and applications, making them relatively less user-friendly for the pulmonary biologist and clinicians. Herein, we discuss the various strategies, computational tools and approaches required to study pulmonary diseases at the systems level for biomedical scientists and clinical researchers.

  15. Physics-based preconditioning and the Newton-Krylov method for non-equilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Mousseau, V.A.; Knoll, D.A.; Rider, W.J.

    2000-01-01

    An algorithm is presented for the solution of the time-dependent reaction-diffusion systems which arise in non-equilibrium radiation diffusion applications. This system of nonlinear equations is solved by coupling three numerical methods: Jacobian-free Newton-Krylov, operator splitting, and multigrid linear solvers. An inexact Newton's method is used to solve the system of nonlinear equations. Since building the Jacobian matrix for problems of interest can be challenging, the authors employ a Jacobian-free implementation of Newton's method, where the action of the Jacobian matrix on a vector is approximated by a first-order Taylor series expansion. Preconditioned generalized minimal residual (PGMRES) is the Krylov method used to solve the linear systems that come from the iterations of Newton's method. The preconditioner in this solution method is constructed using a physics-based divide-and-conquer approach, often referred to as operator splitting. This solution procedure inverts the scalar elliptic systems that make up the preconditioner using simple multigrid methods. The preconditioner also addresses the strong coupling between equations with local 2 x 2 block solves. The intra-cell coupling is applied after the inter-cell coupling has already been addressed by the elliptic solves. Results are presented using this solution procedure that demonstrate its efficiency while incurring minimal memory requirements.
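The Jacobian-free trick at the heart of JFNK can be sketched in a few lines: the action of the Jacobian J(u) on a vector v is approximated by a first-order finite difference of the nonlinear residual F, so J is never formed explicitly. The 2x2 residual below is a toy stand-in for the radiation-diffusion system, chosen so the exact Jacobian is easy to check by hand.

```python
# Jacobian-free matrix-vector product: J(u) @ v ~ (F(u + eps*v) - F(u)) / eps.

def residual(u):
    # Toy nonlinear residual; its exact Jacobian is [[2*u0, 1], [1, 2*u1]].
    return [u[0] ** 2 + u[1] - 3.0,
            u[0] + u[1] ** 2 - 5.0]

def jfnk_matvec(F, u, v, eps=1.0e-7):
    """Approximate the Jacobian-vector product by a forward difference."""
    Fu = F(u)
    u_pert = [ui + eps * vi for ui, vi in zip(u, v)]
    return [(fp - f) / eps for fp, f in zip(F(u_pert), Fu)]

jv = jfnk_matvec(residual, [1.0, 2.0], [1.0, 0.0])
print(jv)  # close to [2.0, 1.0], the first column of the exact Jacobian
```

Inside a Krylov solver such as GMRES, this routine replaces the explicit matrix-vector product, which is why the preconditioner becomes the only place where problem structure must be supplied.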

  16. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    Science.gov (United States)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on their broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can learn. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We will show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.
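A deliberately tiny, hypothetical "knowledge bot" in this spirit can be sketched as a logistic classifier that learns to flag time-step choices likely to destabilize an explicit diffusion solve (unstable when dt > dx²/2). The data, features, and stability rule below are synthetic stand-ins; the paper's bots learn from real CFD, trajectory, finite-element, and slosh-dynamics runs.

```python
# Synthetic "knowledge bot": learn to flag unstable (dx, dt) choices.
import math
import random

random.seed(0)
samples = [(random.uniform(0.01, 0.2), random.uniform(0.0, 0.03))
           for _ in range(400)]
labels = [1.0 if dt > dx * dx / 2.0 else 0.0 for dx, dt in samples]

# Using (dx**2, dt) as features makes the true stability boundary linear.
feats = [(dx * dx, dt) for dx, dt in samples]
w0 = w1 = b = 0.0
lr = 2.0
for _ in range(3000):  # full-batch gradient descent on the logistic loss
    g0 = g1 = gb = 0.0
    for (x0, x1), y in zip(feats, labels):
        p = 1.0 / (1.0 + math.exp(-(w0 * x0 + w1 * x1 + b)))
        g0 += (p - y) * x0
        g1 += (p - y) * x1
        gb += p - y
    n = float(len(feats))
    w0 -= lr * g0 / n
    w1 -= lr * g1 / n
    b -= lr * gb / n

def flag_unstable(dx, dt):
    z = w0 * dx * dx + w1 * dt + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

accuracy = sum(flag_unstable(dx, dt) == (y > 0.5)
               for (dx, dt), y in zip(samples, labels)) / len(samples)
print(accuracy)
```

The point of the sketch is the workflow, not the model: the classifier learns the instability region purely from labeled runs, with no knowledge of the CFL condition that generated the labels.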

  17. Coherent tools for physics-based simulation and characterization of noise in semiconductor devices oriented to nonlinear microwave circuit CAD

    Science.gov (United States)

    Riah, Zoheir; Sommet, Raphael; Nallatamby, Jean C.; Prigent, Michel; Obregon, Juan

    2004-05-01

    We present in this paper a set of coherent tools for noise characterization and physics-based analysis of noise in semiconductor devices. This noise toolbox relies on a low frequency noise measurement setup with special high current capabilities thanks to an accurate and original calibration. It relies also on a simulation tool based on the drift diffusion equations and the linear perturbation theory, associated with the Green's function technique. This physics-based noise simulator has been implemented successfully in the Scilab environment and is specifically dedicated to HBTs. Some results are given and compared to those existing in the literature.

  18. System-level power optimization for real-time distributed embedded systems

    Science.gov (United States)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing criticalpath driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient driven slack allocation for a more general scenario. The proposed techniques can achieve closeto-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as
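The energy/delay trade-off exploited by dynamic voltage scaling can be sketched as follows: dynamic energy per task scales with V², and if the supply voltage is assumed to scale linearly with clock frequency, stretching a task into its slack saves energy quadratically. The linear V-f relation and unit effective capacitance below are textbook simplifications, not the thesis's model.

```python
# Quadratic energy savings from running a task at the slowest feasible clock.

def dvs_energy(cycles, deadline, f_max, c_eff=1.0):
    """Energy to finish `cycles` by `deadline` at the slowest feasible clock."""
    f_req = cycles / deadline      # slowest frequency that meets the deadline
    if f_req > f_max:
        raise ValueError("deadline infeasible even at f_max")
    v = f_req / f_max              # normalized supply voltage, assuming V ~ f
    return c_eff * v * v * cycles  # E = C_eff * V^2 * cycles

print(dvs_energy(100.0, 1.0, f_max=100.0))  # no slack: runs at f_max, E = 100.0
print(dvs_energy(100.0, 2.0, f_max=100.0))  # 2x slack: half speed, E = 25.0
```

This quadratic relation is why slack allocation matters: which task receives a unit of slack determines how much energy that unit buys, motivating the energy-gradient-driven allocation described above.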

  19. Network Physics - the only company to provide physics-based network management - secures additional funding and new executives

    CERN Multimedia

    2003-01-01

    "Network Physics, the only provider of physics-based network management products, today announced an additional venture round of $6 million in funding, as well as the addition of David Jones as president and CEO and Tom Dunn as vice president of sales and business development" (1 page).

  20. River channel and bar patterns explained and predicted by an empirical and a physics-based method

    NARCIS (Netherlands)

    Kleinhans, M.G.; Berg, J.H. van den

    2011-01-01

    Our objective is to understand general causes of different river channel patterns. In this paper we compare an empirical stream power-based classification and a physics-based bar pattern predictor. We present a careful selection of data from the literature that contains rivers with discharge and

  1. The effect of decentralized behavioral decision making on system-level risk.

    Science.gov (United States)

    Kaivanto, Kim

    2014-12-01

    Certain classes of system-level risk depend partly on decentralized lay decision making. For instance, an organization's network security risk depends partly on its employees' responses to phishing attacks. On a larger scale, the risk within a financial system depends partly on households' responses to mortgage sales pitches. Behavioral economics shows that lay decisionmakers typically depart in systematic ways from the normative rationality of expected utility (EU), and instead display heuristics and biases as captured in the more descriptively accurate prospect theory (PT). In turn, psychological studies show that successful deception ploys eschew direct logical argumentation and instead employ peripheral-route persuasion, manipulation of visceral emotions, urgency, and familiar contextual cues. The detection of phishing emails and inappropriate mortgage contracts may be framed as a binary classification task. Signal detection theory (SDT) offers the standard normative solution, formulated as an optimal cutoff threshold, for distinguishing between good/bad emails or mortgages. In this article, we extend SDT behaviorally by rederiving the optimal cutoff threshold under PT. Furthermore, we incorporate the psychology of deception into determination of SDT's discriminability parameter. With the neo-additive probability weighting function, the optimal cutoff threshold under PT is rendered unique under well-behaved sampling distributions, tractable in computation, and transparent in interpretation. The PT-based cutoff threshold is (i) independent of loss aversion and (ii) more conservative than the classical SDT cutoff threshold. Independently of any possible misalignment between individual-level and system-level misclassification costs, decentralized behavioral decisionmakers are biased toward underdetection, and system-level risk is consequently greater than in analyses predicated upon normative rationality. © 2014 Society for Risk Analysis.
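The classical SDT baseline that the article extends can be written down directly: with equal-variance Gaussian evidence distributions, the cost-minimizing criterion depends on the base rate and the misclassification costs through the likelihood ratio beta. The article's contribution, re-deriving this cutoff under prospect theory with a neo-additive weighting function, is not reproduced here; this is only the normative starting point.

```python
# Classical equal-variance Gaussian SDT: optimal evidence cutoff.
import math

def optimal_cutoff(d_prime, p_signal, cost_false_alarm, cost_miss):
    """Evidence threshold (in d' units from the noise mean) minimizing
    expected misclassification cost under equal-variance Gaussian SDT."""
    beta = ((1.0 - p_signal) / p_signal) * (cost_false_alarm / cost_miss)
    return d_prime / 2.0 + math.log(beta) / d_prime

# Equal base rates and symmetric costs put the cutoff midway between means.
print(optimal_cutoff(2.0, 0.5, 1.0, 1.0))  # 1.0
```

Per the abstract, the PT-based cutoff replaces the objective probabilities entering beta with neo-additively weighted ones, and the result is more conservative than this classical threshold and independent of loss aversion.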

  2. Study on the system-level test method of digital metering in smart substation

    Science.gov (United States)

    Zhang, Xiang; Yang, Min; Hu, Juan; Li, Fuchao; Luo, Ruixi; Li, Jinsong; Ai, Bing

    2017-03-01

    Nowadays, the test methods for the digital metering system in a smart substation are used to test and evaluate the performance of a single device. These methods can effectively guarantee the accuracy and reliability of the measurement results of a digital metering device operating alone, but they do not completely reflect the performance when the devices are combined into a complete system. This paper introduced the shortcomings of the existing test methods. A system-level test method of digital metering in smart substations was proposed, and the feasibility of the method was proved by an actual test.

  3. Enhanced Discrete-Time Scheduler Engine for MBMS E-UMTS System Level Simulator

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António

    2007-01-01

    In this paper the design of an E-UMTS system level simulator developed for the study of optimization methods for the MBMS is presented. The simulator uses a discrete event based philosophy, which captures the dynamic behavior of the Radio Network System. This dynamic behavior includes the user mobility, radio interfaces and the Radio Access Network. Emphasis is given to the enhancements developed for the simulator core, the Event Scheduler Engine. Two implementations for the Event Scheduler Engine are proposed, one optimized for single-core processors and the other for multi-core ones.
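The core of a discrete-event scheduler engine of the kind described above can be sketched with a priority queue keyed by simulated time, with a tie-breaking counter to keep events at equal timestamps in insertion order. This single-threaded sketch and its event names are assumptions for illustration; the paper's engine adds, among other things, a multi-core variant.

```python
# Minimal discrete-event scheduler: a heap of (time, tiebreak, action) tuples.
import heapq
import itertools

class EventScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker for equal timestamps

    def schedule(self, time, action):
        heapq.heappush(self._queue, (time, next(self._counter), action))

    def run(self):
        fired = []
        while self._queue:
            time, _, action = heapq.heappop(self._queue)
            fired.append((time, action()))
        return fired

sim = EventScheduler()
sim.schedule(5.0, lambda: "handover")      # hypothetical event names
sim.schedule(1.0, lambda: "user_moves")
sim.schedule(3.0, lambda: "radio_update")
events = sim.run()
print(events)  # events fire in simulated-time order, not insertion order
```

In a real simulator the actions would themselves schedule follow-up events, so the heap doubles as the simulation clock.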

  4. Exploration of a digital audio processing platform using a compositional system level performance estimation framework

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan

    2009-01-01

    This paper presents the application of a compositional simulation-based system-level performance estimation framework on a non-trivial industrial case study. The case study is provided by the Danish company Bang & Olufsen ICEpower a/s and focuses on the exploration of a digital mobile audio processing platform. A short overview of the compositional performance estimation framework used is given, followed by a presentation of how it is used for performance estimation using an iterative refinement process towards the final implementation. Finally, an evaluation in terms of accuracy and speed...

  5. Design of power converter in DFIG wind turbine with enhanced system-level reliability

    DEFF Research Database (Denmark)

    Zhou, Dao; Zhang, Guanguan; Blaabjerg, Frede

    2017-01-01

    With the increasing penetration of wind power, reliable and cost-effective wind energy production is of growing importance. As one of the promising configurations, the doubly-fed induction generator based partial-scale wind power converter is still dominating in the existing wind farms...... margin. It can be seen that the B1 lifetimes of the grid-side converter and the rotor-side converter deviate considerably when the electrical stresses are considered, while they become more balanced by using an optimized reliable design. The system-level lifetime significantly increases with an appropriate design...

  6. System Level Power Optimization of Digital Audio Back End for Hearing Aids

    DEFF Research Database (Denmark)

    Pracny, Peter; Jørgensen, Ivan Harald Holger; Bruun, Erik

    2017-01-01

    This work deals with power optimization of the audio processing back end for hearing aids - the interpolation filter (IF), the sigma-delta (SD) modulator, and the Class D power amplifier (PA) as a whole. Specifications are derived and insight into the tradeoffs involved is used to optimize the interpolation filter and the SD modulator on the system level so that the switching frequency of the Class D PA - the main power consumer in the back end - is minimized. A figure-of-merit (FOM) which allows judging the power consumption of the digital part of the back end early in the design process is used...

  7. System-level planning, coordination, and communication: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement.

    Science.gov (United States)

    Dichter, Jeffrey R; Kanter, Robert K; Dries, David; Luyckx, Valerie; Lim, Matthew L; Wilgis, John; Anderson, Michael R; Sarani, Babak; Hupert, Nathaniel; Mutter, Ryan; Devereaux, Asha V; Christian, Michael D; Kissoon, Niranjan

    2014-10-01

    System-level planning involves uniting hospitals and health systems, local/regional government agencies, emergency medical services, and other health-care entities involved in coordinating and enabling care in a major disaster. We reviewed the literature and sought expert opinions concerning system-level planning and engagement for mass critical care due to disasters or pandemics and offer suggestions for system-planning, coordination, communication, and response. The suggestions in this chapter are important for all of those involved in a pandemic or disaster with multiple critically ill or injured patients, including front-line clinicians, hospital administrators, and public health or government officials. The American College of Chest Physicians (CHEST) consensus statement development process was followed in developing suggestions. Task Force members met in person to develop nine key questions believed to be most relevant for system-planning, coordination, and communication. A systematic literature review was then performed for relevant articles and documents, reports, and other publications reported since 1993. No studies of sufficient quality were identified upon which to make evidence-based recommendations. Therefore, the panel developed expert opinion-based suggestions using a modified Delphi process. Suggestions were developed and grouped according to the following thematic elements: (1) national government support of health-care coalitions/regional health authorities (HC/RHAs), (2) teamwork within HC/RHAs, (3) system-level communication, (4) system-level surge capacity and capability, (5) pediatric patients and special populations, (6) HC/RHAs and networks, (7) models of advanced regional care systems, and (8) the use of simulation for preparedness and planning. System-level planning is essential to provide care for large numbers of critically ill patients because of disaster or pandemic. 
It also entails a departure from the routine, independent system and

  8. Human- Versus System-Level Factors and Their Effect on Electronic Work List Variation: Challenging Radiology's Fundamental Attribution Error.

    Science.gov (United States)

    Davenport, Matthew S; Khalatbari, Shokoufeh; Platt, Joel F

    2015-09-01

    The aim of this study was to analyze sources of variation influencing the unread volume on an electronic abdominopelvic CT work list and to compare those results with blinded radiologist perception. The requirement for institutional review board approval was waived for this HIPAA-compliant quality improvement effort. Data pertaining to an electronic abdominopelvic CT work list were analyzed retrospectively from July 1, 2013, to June 30, 2014, and modeled with respect to the unread case total at 6 pm (Monday through Friday, excluding holidays). Eighteen system-level factors outside individual control (eg, number of workers, workload) and 7 human-level factors within individual control (eg, individual productivity) were studied. Attending radiologist perception was assessed with a blinded anonymous survey (n = 12 of 15 surveys completed). The mean daily unread total was 24 (range, 3-72). The upper control limit (48 CT studies [3 SDs above the mean]) was exceeded 10 times. Multivariate analysis revealed that the rate of unread CT studies was affected principally by system-level factors, including the number of experienced trainees on service (postgraduate year 5 residents [odds ratio, 0.83; 95% confidence interval, 0.74-0.92; P = .0008] and fellows [odds ratio, 0.84; 95% confidence interval, 0.74-0.95; P = .005]) and the daily workload (P = .02). System-level factors best predict the variation in unread CT examinations, but blinded faculty radiologists believe that it relates most strongly to variable individual effort. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  9. An investigation into soft error detection efficiency at operating system level.

    Science.gov (United States)

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system itself is reliable, and they focus on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.
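One classic software-level scheme for detecting the control flow errors (CFEs) discussed above is signature monitoring: each basic block gets a compile-time signature, a runtime signature register is updated at every block entry using the XOR difference along legal edges, and an illegal jump leaves the register stale, which the entry check catches. The three-block program, signatures, and edges below are hypothetical, in the spirit of schemes such as CFCSS.

```python
# Simplified signature-monitoring check for control-flow errors.

SIG = {"A": 0b001, "B": 0b011, "C": 0b110}   # compile-time block signatures
EDGES = {("A", "B"), ("B", "C")}             # legal control-flow graph edges
DIFF = {(u, v): SIG[u] ^ SIG[v] for (u, v) in EDGES}

def check_run(path, start="A"):
    """Replay a block trace; return False if a transition breaks the check."""
    g = SIG[start]                 # runtime signature register
    for u, v in zip(path, path[1:]):
        d = DIFF.get((u, v), 0)    # an illegal edge has no valid difference
        g ^= d
        if g != SIG[v]:
            return False           # stale signature: control-flow error
    return True

print(check_run(["A", "B", "C"]))  # True: legal path
print(check_run(["A", "C"]))       # False: illegal jump detected
```

A radiation-induced CFE that diverts execution from A straight to C fails the entry check at C, which is exactly the detection event such studies count.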

  10. Local and System Level Considerations for Plasma-Based Techniques in Hypersonic Flight

    Science.gov (United States)

    Suchomel, Charles; Gaitonde, Datta

    2007-01-01

    The harsh environment encountered due to hypersonic flight, particularly when air-breathing propulsion devices are utilized, poses daunting challenges to successful maturation of suitable technologies. This has spurred the quest for revolutionary solutions, particularly those exploiting the fact that air under these conditions can become electrically conducting either naturally or through artificial enhancement. Optimized development of such concepts must emphasize not only the detailed physics by which the fluid interacts with the imposed electromagnetic fields, but must also simultaneously identify the system-level integration issues and efficiencies that provide the greatest leverage. This paper presents some recent advances at both levels. At the system level, an analysis is summarized that incorporates the interdependencies occurring between weight, power and flow field performance improvements. Cruise performance comparisons highlight how one drag reduction device interacts with the vehicle to improve range. Quantified parameter interactions allow specification of system requirements and energy-consuming technologies that affect overall flight vehicle performance. Results based on the fundamental physics are presented by distilling numerous computational studies into a few guiding principles. These highlight the complex, non-intuitive relationships between the various fluid and electromagnetic fields, together with thermodynamic considerations. Generally, energy extraction is an efficient process, while the reverse is accompanied by significant dissipative heating and inefficiency. Velocity distortions can be detrimental to plasma operation, but can be exploited to tailor flows through innovative electromagnetic configurations.

  11. Self-Driving Cars and Engineering Ethics: The Need for a System Level Analysis.

    Science.gov (United States)

    Borenstein, Jason; Herkert, Joseph R; Miller, Keith W

    2017-11-13

    The literature on self-driving cars and ethics continues to grow. Yet much of it focuses on ethical complexities emerging from an individual vehicle. That is an important but insufficient step towards determining how the technology will impact human lives and society more generally. What must complement ongoing discussions is a broader, system level of analysis that engages with the interactions and effects that these cars will have on one another and on the socio-technical systems in which they are embedded. To bring the conversation of self-driving cars to the system level, we make use of two traffic scenarios which highlight some of the complexities that designers, policymakers, and others should consider related to the technology. We then describe three approaches that could be used to address such complexities and their associated shortcomings. We conclude by bringing attention to the "Moral Responsibility for Computing Artifacts: The Rules", a framework that can provide insight into how to approach ethical issues related to self-driving cars.

  12. An Investigation into Soft Error Detection Efficiency at Operating System Level

    Directory of Open Access Journals (Sweden)

    Seyyed Amir Asghari

    2014-01-01

    Full Text Available Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system itself is reliable, and they focus on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  13. A System-level Infrastructure for Multi-dimensional MP-SoC Design Space Co-exploration

    NARCIS (Netherlands)

    Jia, Z.J.; Bautista, T.; Nunez, A.; Pimentel, A.D.; Thompson, M.

    2013-01-01

    In this article, we present a flexible and extensible system-level MP-SoC design space exploration (DSE) infrastructure, called NASA. This highly modular framework uses well-defined interfaces to easily integrate different system-level simulation tools as well as different combinations of search

  14. Physics-based approach to chemical source localization using mobile robotic swarms

    Science.gov (United States)

    Zarzhitsky, Dimitri

    2008-07-01

    Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically-inspired CPT methods: chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
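The chemotaxis strategy that fluxotaxis builds on can be sketched as a greedy gradient climb: sample the chemical concentration at neighboring grid cells and step toward the steepest increase. The concentration field below is a synthetic, noise-free plume peaked at (3, 2); real CPT must cope with turbulent, intermittent measurements, which is precisely where the flux-based approach improves on this naive rule.

```python
# Toy chemotaxis: greedy hill-climb on a synthetic chemical concentration field.

def concentration(x, y):
    # Synthetic plume with its source (maximum) at grid cell (3, 2).
    return -((x - 3) ** 2 + (y - 2) ** 2)

def chemotaxis_step(x, y):
    """Move one cell (or stay) toward the largest concentration gain."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
    return max(((x + dx, y + dy) for dx, dy in moves),
               key=lambda p: concentration(*p))

pos = (0, 0)
for _ in range(6):
    pos = chemotaxis_step(*pos)
print(pos)  # the robot has climbed the gradient to the source, (3, 2)
```

In a swarm setting each robot would combine such local samples with neighbors' readings; the thesis's fluxotaxis rule additionally folds in the wind (mass flux) information that anemotaxis uses.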

  15. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    Science.gov (United States)

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
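A heavily simplified, hypothetical stand-in for such a physics-based total-dose model is sketched below: central-axis dose falls off exponentially with depth via an effective attenuation coefficient, and the lateral profile is flat inside the field with a Gaussian penumbra plus a small stray-radiation floor outside it. All parameter values and the functional form are illustrative assumptions, not the fitted model from the paper.

```python
# Illustrative physics-based in-field + out-of-field dose model (assumed form).
import math

def total_dose(depth_cm, offaxis_cm, mu=0.05, sigma=0.5,
               field_half_width=5.0, stray_fraction=0.01):
    primary = math.exp(-mu * depth_cm)          # exponential depth falloff
    if abs(offaxis_cm) <= field_half_width:
        lateral = 1.0                           # idealized flat field interior
    else:
        edge = abs(offaxis_cm) - field_half_width
        lateral = math.exp(-edge * edge / (2.0 * sigma * sigma))  # penumbra
    stray = stray_fraction * primary            # crude stray-radiation floor
    return primary * lateral + stray

in_field = total_dose(10.0, 0.0)
out_of_field = total_dose(10.0, 20.0)
print(in_field, out_of_field)
```

The appeal of a parameterized physical form like this, as the abstract argues, is that the same few coefficients can be refit to different treatment units, unlike a per-machine empirical lookup table.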

  16. Fused Silica Final Optics for Inertial Fusion Energy: Radiation Studies and System-Level Analysis

    International Nuclear Information System (INIS)

    Latkowski, Jeffery F.; Kubota, Alison; Caturla, Maria J.; Dixit, Sham N.; Speth, Joel A.; Payne, Stephen A.

    2003-01-01

    The survivability of the final optic, which must sit in the line of sight of high-energy neutrons and gamma rays, is a key issue for any laser-driven inertial fusion energy (IFE) concept. Previous work has concentrated on the use of reflective optics. Here, we introduce and analyze the use of a transmissive final optic for the IFE application. Our experimental work has been conducted at a range of doses and dose rates, including those comparable to the conditions at the IFE final optic. The experimental work, in conjunction with detailed analysis, suggests that a thin, fused silica Fresnel lens may be an attractive option when used at a wavelength of 351 nm. Our measurements and molecular dynamics simulations provide convincing evidence that the radiation damage, which leads to optical absorption, not only saturates but that a 'radiation annealing' effect is observed. A system-level description is provided, including Fresnel lens and phase plate designs.

  17. Unravelling evolutionary strategies of yeast for improving galactose utilization through integrated systems level analysis

    DEFF Research Database (Denmark)

    Hong, Kuk-Ki; Vongsangnak, Wanwipa; Vemuri, Goutham N

    2011-01-01

    Identification of the underlying molecular mechanisms for a derived phenotype by adaptive evolution is difficult. Here, we performed a systems-level inquiry into the metabolic changes occurring in the yeast Saccharomyces cerevisiae as a result of its adaptive evolution to increase its specific...... showed changes in ergosterol biosynthesis. Mutations were identified in proteins involved in the global carbon sensing Ras/PKA pathway, which is known to regulate the reserve carbohydrates metabolism. We evaluated one of the identified mutations, RAS2(Tyr112), and this mutation resulted in an increased...... design in bioengineering of improved strains and, that through systems biology, it is possible to identify mutations in evolved strain that can serve as unforeseen metabolic engineering targets for improving microbial strains for production of biofuels and chemicals....

  18. Systems-level mechanisms of action of Panax ginseng: a network pharmacological approach.

    Science.gov (United States)

    Park, Sa-Yoon; Park, Ji-Hun; Kim, Hyo-Su; Lee, Choong-Yeol; Lee, Hae-Jeung; Kang, Ki Sung; Kim, Chang-Eop

    2018-01-01

    Panax ginseng has been used since ancient times based on traditional Asian medicine theory and clinical experience, and is currently one of the most popular herbs in the world. To date, most of the studies concerning P. ginseng have focused on specific mechanisms of action of individual constituents. However, in spite of many studies on the molecular mechanisms of P. ginseng, it still remains unclear how multiple active ingredients of P. ginseng interact with multiple targets simultaneously, producing multidimensional effects on various conditions and diseases. In order to decipher the systems-level mechanism of multiple ingredients of P. ginseng, a novel approach is needed beyond conventional reductive analysis. We aim to review the systems-level mechanism of P. ginseng by adopting a novel analytical framework: network pharmacology. Here, we constructed a compound-target network of P. ginseng using experimentally validated and machine learning-based prediction results. The targets of the network were analyzed in terms of related biological processes, pathways, and diseases. The majority of targets were found to be related to primary metabolic process, signal transduction, nitrogen compound metabolic process, blood circulation, immune system process, cell-cell signaling, biosynthetic process, and neurological system process. In pathway enrichment analysis of targets, mainly the terms related to neural activity showed significant enrichment and formed a cluster. Finally, relative degree analysis of the target-disease associations of P. ginseng revealed several categories of related diseases, including respiratory, psychiatric, and cardiovascular diseases.
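    The degree analysis at the heart of such a network-pharmacology study reduces to counting, for each target, how many compounds hit it. A minimal sketch follows; the compound and target names are made up for illustration and are not taken from the paper's network.

```python
# Toy compound-target network as a dict of sets (bipartite adjacency).
compound_targets = {
    "ginsenoside_Rb1": {"ACHE", "NOS3", "TNF"},
    "ginsenoside_Rg1": {"NOS3", "TNF", "IL6"},
    "ginsenoside_Re":  {"TNF"},
}

def target_degrees(network):
    """Degree of each target = number of compounds that act on it."""
    degrees = {}
    for targets in network.values():
        for t in targets:
            degrees[t] = degrees.get(t, 0) + 1
    return degrees

deg = target_degrees(compound_targets)
# TNF is hit by all three compounds, making it the hub of this toy network.
print(sorted(deg.items(), key=lambda kv: -kv[1]))
```

    High-degree targets in the real network are the candidates for explaining multi-ingredient, multi-target effects.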

  19. System-level hazard analysis using the sequence-tree method

    International Nuclear Information System (INIS)

    Huang, H.-W.; Shih Chunkuan; Yih Swu; Chen, M.-H.

    2008-01-01

    A system-level preliminary hazard analysis (PHA) using the sequence-tree method is presented to perform safety-related digital I and C (instrumentation and control) system SSA. The conventional PHA involves brainstorming among experts on various portions of the system to identify hazards through discussions. However, since the conventional PHA is not a systematic technique, the analysis results depend strongly on the experts' subjective opinions. The quality of analysis cannot be appropriately controlled. Therefore, this study presents a system-level, sequence-tree-based PHA, which can clarify the relationship among the major digital I and C systems. This sequence-tree-based technique has two major phases. The first phase adopts a table to analyze each event in SAR Chapter 15 for a specific safety-related I and C system, such as the RPS. The second phase adopts a sequence tree to recognize the I and C systems involved in the event, how the safety-related systems work, and how the backup systems can be activated to mitigate the consequence if the primary safety systems fail. The defense-in-depth echelons, namely the Control echelon, Reactor trip echelon, ESFAS echelon, and Monitoring and indicator echelon, are arranged to build the sequence-tree structure. All the related I and C systems, including the digital systems and the analog back-up systems, are allocated in their specific echelons. This system-centric sequence-tree analysis not only systematically identifies preliminary hazards, but also vulnerabilities in a nuclear power plant. Hence, an effective simplified D3 evaluation can also be conducted.
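    The branch structure described above (each echelon either mitigates the event or fails, handing off to the next echelon) can be enumerated mechanically. This sketch only illustrates the tree shape; a real analysis would be driven by the specific SAR Chapter 15 events and the I and C systems allocated to each echelon.

```python
# The four defense-in-depth echelons named in the abstract, in order.
ECHELONS = ["Control", "Reactor trip", "ESFAS", "Monitoring and indicator"]

def sequence_tree(echelons):
    """Enumerate the linear branches of the sequence tree: in branch k the
    first k echelons fail and echelon k mitigates the event; the final
    branch is the unmitigated path where every echelon fails."""
    branches = []
    for k, e in enumerate(echelons):
        branches.append([x + ": fails" for x in echelons[:k]] + [e + ": mitigates"])
    branches.append([x + ": fails" for x in echelons])
    return branches

for branch in sequence_tree(ECHELONS):
    print(" -> ".join(branch))
```

    Walking the branches makes explicit which backup systems must activate for each failure combination, which is what the method exploits to surface vulnerabilities systematically rather than by expert brainstorming.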

  20. Physics-Based Predictions for Coherent Change Detection Using X-Band Synthetic Aperture Radar

    Directory of Open Access Journals (Sweden)

    Mark Preiss

    2005-12-01

    A theoretical model is developed to describe the interferometric coherency between pairs of SAR images of rough soil surfaces. The model is derived using a dyadic form for the surface reflectivity in the Kirchhoff approximation, which permits the combination of Kirchhoff theory and spotlight synthetic aperture radar (SAR) image formation theory. The resulting model is applied to SAR images formed before and after surface changes observed by a repeat-pass SAR system. The surface change associated with a tyre track left by vehicle passage is modelled and SAR coherency estimates are obtained. Predicted coherency distributions for both the change and no-change scenarios are used to estimate receiver operating characteristic curves for the detection of the changes using a high-resolution, X-band SAR system.
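    Coherent change detection of this kind ultimately thresholds an estimate of the interferometric coherence between the two passes: coherence near 1 indicates an unchanged surface, while decorrelation (e.g. a tyre track) drives it toward 0. Below is the standard sample-coherence magnitude in pure Python; the complex pixel values are synthetic, not from the paper.

```python
import math

def coherence(s1, s2):
    """Magnitude of the normalized complex cross-correlation of two
    image patches, each flattened to a list of complex samples."""
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    den = math.sqrt(sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2))
    return abs(num) / den

patch = [1 + 1j, 2 - 1j, -1 + 0.5j, 0.5 + 2j]
print(round(coherence(patch, patch), 3))                    # identical patches → 1.0
print(round(coherence(patch, [1j * p for p in patch]), 3))  # common phase ramp → still 1.0
```

    Because the estimator takes a magnitude after the complex sum, a phase shift common to the whole patch (e.g. a path-length change between passes) does not reduce coherence; only sample-to-sample decorrelation does.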

  1. The challenge of measuring emergency preparedness: integrating component metrics to build system-level measures for strategic national stockpile operations.

    Science.gov (United States)

    Jackson, Brian A; Faith, Kay Sullivan

    2013-02-01

    Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems, failure mode and effects analysis, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing to provide a view of likely future response performance.
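    The "simple simulation example" idea, combining per-component failure probabilities into a system-level success estimate, can be sketched as a series-system Monte Carlo. The stage names and probabilities here are invented placeholders, not figures from the report.

```python
import random

# Hypothetical per-activation failure probabilities, as might be estimated
# from technical assistance reviews and functional drills.
FAILURE_PROBS = {
    "request_and_approval": 0.02,
    "stockpile_delivery":   0.05,
    "local_distribution":   0.08,
    "dispensing":           0.10,
}

def simulate_response(rng):
    """One simulated activation: the response succeeds only if every stage works."""
    return all(rng.random() > p for p in FAILURE_PROBS.values())

rng = random.Random(0)
trials = 100_000
success = sum(simulate_response(rng) for _ in range(trials)) / trials

# For independent stages in series the analytic answer is the product
# of the stage reliabilities: 0.98 * 0.95 * 0.92 * 0.90 ≈ 0.771.
analytic = 0.98 * 0.95 * 0.92 * 0.90
print(round(success, 3), round(analytic, 3))
```

    The value of the simulation form over the closed-form product is that correlated failures, partial degradation, and repair branches can be added without changing the estimation machinery.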

  2. A Physically Based Correlation of Irradiation-Induced Transition Temperature Shifts for RPV Steels

    Energy Technology Data Exchange (ETDEWEB)

    Eason, Ernest D. [Modeling and Computing Services, LLC; Odette, George Robert [UCSB; Nanstad, Randy K [ORNL; Yamamoto, Takuya [ORNL

    2007-11-01

    The reactor pressure vessels (RPVs) of commercial nuclear power plants are subject to embrittlement due to exposure to high-energy neutrons from the core, which causes changes in material toughness properties that increase with radiation exposure and are affected by many variables. Irradiation embrittlement of RPV beltline materials is currently evaluated using Regulatory Guide 1.99 Revision 2 (RG1.99/2), which presents methods for estimating the shift in Charpy transition temperature at 30 ft-lb (TTS) and the drop in Charpy upper shelf energy (ΔUSE). The purpose of the work reported here is to improve on the TTS correlation model in RG1.99/2 using the broader database now available and current understanding of embrittlement mechanisms. The USE database and models have not been updated since the publication of NUREG/CR-6551 and, therefore, are not discussed in this report. The revised embrittlement shift model is calibrated and validated on a substantially larger, better-balanced database compared to prior models, including over five times the amount of data used to develop RG1.99/2. It also contains about 27% more data than the most recent update to the surveillance shift database, in 2000. The key areas expanded in the current database relative to the database available in 2000 are low-flux, low-copper, and long-time, high-fluence exposures, all areas that were previously relatively sparse. All old and new surveillance data were reviewed for completeness, duplicates, and discrepancies in cooperation with the American Society for Testing and Materials (ASTM) Subcommittee E10.02 on Radiation Effects in Structural Materials. In the present modeling effort, a 10% random sample of data was reserved from the fitting process, and most aspects of the model were validated with that sample as well as other data not used in calibration. The model is a hybrid, incorporating both physically motivated features and empirical calibration to the U.S. power reactor surveillance
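    The validation protocol described above, reserving a 10% random sample of the surveillance database before fitting, is easy to make concrete. The record fields below are placeholders, not the actual surveillance variables.

```python
import random

def split_calibration_validation(records, holdout_frac=0.10, seed=42):
    """Shuffle a copy of the records and reserve holdout_frac of them
    from the fitting process for later validation of the model."""
    rng = random.Random(seed)
    records = records[:]          # leave the caller's list untouched
    rng.shuffle(records)
    n_holdout = max(1, int(round(holdout_frac * len(records))))
    return records[n_holdout:], records[:n_holdout]

# Toy surveillance records with placeholder fields.
records = [{"id": i, "fluence": 1e19 * (i + 1), "tts": 20.0 + i} for i in range(50)]
fit_set, validation_set = split_calibration_validation(records)
print(len(fit_set), len(validation_set))  # 45 5
```

    Fitting on `fit_set` and checking residuals on `validation_set` guards against the empirical terms of a hybrid model simply memorizing the calibration data.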

  3. Physics-Based Conceptual Design Flying Qualities Analysis using OpenVSP and VSPAero, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's OpenVSP tool suite provides a common parametrically driven geometry model for many different analyses for aircraft and is primarily used in the conceptual...

  4. Non-traditional Physics-based Inverse Approaches for Determining a Buried Object’s Location

    Science.gov (United States)

    2008-09-01

    Excerpts from the full text: target responses are characterized by the parameterization of the time-decay curve in dipole models (Pasion and Oldenburg, 2001) or by the amplitudes of responding magnetic sources in the NSMS model, the representations most commonly in use. According to the simple dipole model (Pasion and Oldenburg, 2001), the secondary magnetic field due to the dipole m is B = (μ0 / 4πr³)[3(m · r̂)r̂ − m]. Reference: L. R. Pasion and D. W. Oldenburg (2001), "A discrimination algorithm for UXO using time domain electromagnetics." J. Environ

  5. System-level tools and reconfigurable computing for next-generation HWIL systems

    Science.gov (United States)

    Stark, Derek; McAulay, Derek; Cantle, Allan J.; Devlin, Malachy

    2001-08-01

    Previous work has been presented on the creation of computing architectures called DIME, which addressed the particular computing demands of hardware-in-the-loop systems. These demands include low latency, high data rates and interfacing. While it is essential to have a capable platform for handling and processing of the data streams, the tools must also complement this so that a systems engineer is able to construct the final system. The paper will present the work in the area of integration of system-level design tools, such as MATLAB and SIMULINK, with a reconfigurable computing platform. This will demonstrate how algorithms can be implemented and simulated in a familiar rapid application development environment before they are automatically transposed for downloading directly to the computing platform. This complements the established control tools, which handle the configuration and control of the processing systems, leading to a tool suite for system development and implementation. As the development tools have evolved, the core processing platform has also been enhanced. These improved platforms are based on dynamically reconfigurable computing, utilizing FPGA technologies, and parallel processing methods that more than double the performance and data bandwidth capabilities. This offers support for the processing of images in Infrared Scene Projectors with 1024 × 1024 resolutions at 400 Hz frame rates. The processing elements will be using the latest generation of FPGAs, which implies that the presented systems will be rated in terms of Tera (10¹²) operations per second.

  6. The next generation in optical transport semiconductors: IC solutions at the system level

    Science.gov (United States)

    Gomatam, Badri N.

    2005-02-01

    In this tutorial overview, we survey some of the challenging problems facing Optical Transport and their solutions using new semiconductor-based technologies. Advances in 0.13 µm CMOS, SiGe/HBT and InP/HBT IC process technologies and mixed-signal design strategies are the fundamental breakthroughs that have made these solutions possible. In combination with innovative packaging and transponder/transceiver architectures, IC approaches have clearly demonstrated enhanced optical link budgets with simultaneously lower (perhaps the lowest to date) cost and manufacturability tradeoffs. This paper will describe: *Electronic Dispersion Compensation, broadly viewed as the overcoming of dispersion-based limits to OC-192 links and the extension of link budgets, *Error Control/Coding, also known as Forward Error Correction (FEC), *Adaptive Receivers for signal quality monitoring for real