WorldWideScience

Sample records for modeling process includes

  1. Atmosphere-soil-vegetation model including CO2 exchange processes: SOLVEG2

    International Nuclear Information System (INIS)

    Nagai, Haruyasu

    2004-11-01

    A new atmosphere-soil-vegetation model named SOLVEG2 (SOLVEG version 2) was developed to study the heat, water, and CO2 exchanges between the atmosphere and the land surface. The model consists of one-dimensional multilayer sub-models for the atmosphere, soil, and vegetation. It also includes sophisticated processes for solar and long-wave radiation transmission in the vegetation canopy and for CO2 exchanges among the atmosphere, soil, and vegetation. Although the model usually simulates only the vertical variation of variables in the surface-layer atmosphere, soil, and vegetation canopy, using meteorological data as top boundary conditions, it can also be coupled with a three-dimensional atmosphere model. In this paper, details of SOLVEG2, which includes the function of coupling with the atmosphere model MM5, are described. (author)

  2. Modeling and analysis of the affinity filtration process, including broth feeding, washing, and elution steps.

    Science.gov (United States)

    He, L Z; Dong, X Y; Sun, Y

    1998-01-01

    Affinity filtration is a developing protein purification technique that combines the high selectivity of affinity chromatography with the high processing speed of membrane filtration. In this work a lumped kinetic model was developed to describe the whole affinity filtration process, including the broth feeding, contaminant washing, and elution steps. Affinity filtration experiments were conducted to evaluate the model using bovine serum albumin as a model protein and a highly substituted Blue Sepharose as the affinity adsorbent. The model, with nonadjustable parameters, agreed fairly well with the experimental results. The performance of affinity filtration in processing a crude broth containing contaminant proteins was then analyzed by computer simulations using the lumped model. The simulation results show that there is an optimal protein loading for obtaining the maximum recovery yield of the desired protein at a constant purity for each operating condition. Concentrating the crude broth is beneficial for increasing the recovery yield of the desired protein. For a constant amount of affinity adsorbent, the recovery yield can be enhanced by decreasing the solution volume in the stirred tank, owing to the increase of the adsorbent weight fraction. The lumped kinetic model was found to be simple and useful in analyzing the whole affinity filtration process.
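
    The abstract does not give the model equations. As a rough illustration of what a lumped adsorption kinetic model of this kind can look like, the sketch below integrates a single Langmuir-type rate law through hypothetical feeding, washing, and elution steps; all rate constants, capacities, volumes, and step durations are illustrative assumptions, not the values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical lumped Langmuir-type kinetics for a stirred affinity-filtration tank:
#   dq/dt   = ka*c*(qmax - q) - kd*q          adsorption/desorption on the adsorbent
#   V*dc/dt = F*(c_in - c) - m_ads*dq/dt      liquid-phase protein balance with through-flow
ka, kd, qmax = 0.05, 0.02, 100.0   # mL/(mg*min), 1/min, mg/g   (illustrative values)
V, m_ads = 1000.0, 20.0            # tank volume (mL), adsorbent mass (g)

def rhs(t, y, F, c_in, kd_eff):
    c, q = y
    dq = ka * c * (qmax - q) - kd_eff * q
    dc = (F * (c_in - c) - m_ads * dq) / V
    return [dc, dq]

# (step, duration in min, flow in mL/min, inlet conc in mg/mL, effective desorption rate)
steps = [("feed", 30.0, 50.0, 2.0, kd),
         ("wash", 20.0, 50.0, 0.0, kd),
         ("elute", 20.0, 50.0, 0.0, 50.0 * kd)]  # elution modelled as strongly enhanced desorption

y, t0 = [0.0, 0.0], 0.0
for name, dur, F, c_in, kd_eff in steps:
    sol = solve_ivp(rhs, (t0, t0 + dur), y, args=(F, c_in, kd_eff), max_step=0.1)
    y, t0 = sol.y[:, -1], t0 + dur
    print(f"after {name:5s}: liquid conc {y[0]:6.3f} mg/mL, adsorbed {y[1]:6.1f} mg/g")
```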

  3. Collisional-radiative model including recombination processes for W27+ ion★

    Science.gov (United States)

    Murakami, Izumi; Sasaki, Akira; Kato, Daiji; Koike, Fumihiro

    2017-10-01

    We have constructed a collisional-radiative (CR) model for W27+ ions including 226 configurations with n ≤ 9 and l ≤ 5 for spectroscopic diagnostics. We newly include recombination processes in the model, and this is the first extreme ultraviolet spectrum calculated for the recombining plasma component. Calculated spectra in the 40-70 Å range for the ionizing and recombining plasma components show three similar strong lines and one line, weak in the recombining plasma component, at 45-50 Å, and many weak lines at 50-65 Å for both components. Recombination processes do not contribute much to the spectrum around 60 Å for the W27+ ion. Dielectronic satellite lines also make only a minor contribution to the spectrum of the recombining plasma component. The dielectronic recombination (DR) rate coefficient from W28+ to W27+ ions is also calculated with the same atomic data as in the CR model. We found that a larger set of energy levels including many autoionizing states gave larger DR rate coefficients, but our rate agrees within a factor of 6 with other works at electron temperatures around 1 keV, at which W27+ and W28+ ions are usually observed in plasmas. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, and Grzegorz Karwasz.

  4. Finding practical phenomenological models that include both photoresist behavior and etch process effects

    Science.gov (United States)

    Jung, Sunwook; Do, Thuy; Sturtevant, John

    2015-03-01

    For more than five decades, the semiconductor industry has overcome technology challenges with innovative ideas that have continued to enable Moore's Law. It is clear that multi-patterning lithography is vital for 20 nm half pitch using 193i. Multi-patterning exposure sequences and pattern multiplication processes can create complicated tolerance accounting due to the variability associated with the component processes. It is therefore essential to ensure good predictive accuracy of the compact etch models used in multi-patterning simulation. New model forms have been developed to account for etch bias behavior at 20 nm and below. The new modeling components show good results in terms of global fitness and improved prediction capability for specific features. We have also investigated a new methodology to make the etch model aware of 3D resist profiles.

  5. Steady-state analysis of activated sludge processes with a settler model including sludge compression.

    Science.gov (United States)

    Diehl, S; Zambrano, J; Carlsson, B

    2016-01-01

    A reduced model of a completely stirred-tank bioreactor coupled to a settling tank with recycle is analyzed in its steady states. In the reactor, the concentrations of one dominant particulate biomass and one soluble substrate component are modelled. While the biomass decay rate is assumed to be constant, the growth kinetics can depend on both substrate and biomass concentrations, and can optionally model substrate inhibition. Compressive and hindered settling phenomena are included using the Bürger-Diehl settler model, which consists of a partial differential equation. Steady-state solutions of this partial differential equation are obtained from an ordinary differential equation, which makes steady-state analysis of the entire plant difficult. A key result is that the ordinary differential equation can be replaced with an approximate algebraic equation, which simplifies the model analysis. This algebraic equation takes the location of the sludge blanket during normal operation into account, allowing the limiting flux capacity caused by compressive settling to be easily included in the steady-state mass balance equations for the entire plant system. This approach yields more realistic solutions than previously published reduced models, which rely on simpler settler assumptions. The steady-state concentrations, solids residence time, and wastage flow ratio are functions of the recycle ratio. Solutions are shown for various growth kinetics and for different values of the biomass decay rate, influent volumetric flow, and substrate concentration. Copyright © 2015 Elsevier Ltd. All rights reserved.
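
    As a much simpler stand-in for the compressive-settling analysis described above, the sketch below solves the classic steady-state mass balances of a completely mixed activated sludge reactor with Monod growth and an ideal, non-reactive settler with recycle. It is only meant to show the kind of algebraic steady-state relations involved; all parameter values are hypothetical and the Bürger-Diehl settler physics is not reproduced.

```python
# Classic steady-state design equations for a completely mixed activated sludge
# reactor with an ideal (non-reactive) settler and solids recycle.
# All parameter values below are illustrative, not taken from the article.
mu_max = 4.0   # maximum specific growth rate (1/d)
Ks     = 20.0  # half-saturation constant (mg COD/L)
Y      = 0.5   # biomass yield (mg VSS / mg COD)
b      = 0.1   # biomass decay rate (1/d)
S_in   = 300.0 # influent substrate (mg COD/L)
HRT    = 0.3   # hydraulic residence time V/Q (d)
SRT    = 8.0   # solids residence time (d), set by the wastage flow

# Effluent substrate from the growth/decay balance at steady state:
#   1/SRT = mu_max*S/(Ks+S) - b
S = Ks * (1.0 + b * SRT) / (SRT * (mu_max - b) - 1.0)
# Reactor biomass from the substrate balance over reactor + settler loop:
X = (SRT / HRT) * Y * (S_in - S) / (1.0 + b * SRT)
print(f"effluent substrate S = {S:.1f} mg/L, reactor biomass X = {X:.0f} mg VSS/L")
```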

  6. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes

    Energy Technology Data Exchange (ETDEWEB)

    García-Gen, Santiago [Department of Chemical Engineering, Institute of Technology, University of Santiago de Compostela, 15782 Santiago de Compostela (Spain); Sousbie, Philippe; Rangaraj, Ganesh [INRA, UR50, Laboratoire de Biotechnologie de l’Environnement, Avenue des Etangs, Narbonne F-11100 (France); Lema, Juan M. [Department of Chemical Engineering, Institute of Technology, University of Santiago de Compostela, 15782 Santiago de Compostela (Spain); Rodríguez, Jorge, E-mail: jrodriguez@masdar.ac.ae [Department of Chemical Engineering, Institute of Technology, University of Santiago de Compostela, 15782 Santiago de Compostela (Spain); Institute Centre for Water and Environment (iWater), Masdar Institute of Science and Technology, PO Box 54224 Abu Dhabi (United Arab Emirates); Steyer, Jean-Philippe; Torrijos, Michel [INRA, UR50, Laboratoire de Biotechnologie de l’Environnement, Avenue des Etangs, Narbonne F-11100 (France)

    2015-01-15

    Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficient estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a continuous co-digestion experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate the disintegration and hydrolysis kinetic parameters of solid wastes and to validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments individually treating fruit and vegetable wastes (among other residues), following a new protocol for batch tests. In addition, decoupled disintegration kinetics for the readily and slowly biodegradable fractions of the solid wastes were considered. Calibrated parameters from the batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating five fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR for 15 weeks at organic loading rates ranging between 2.0 and 4.7 g VS/(L·d). The model (built in Matlab/Simulink) fitted the experimental results well in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
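
    A minimal sketch of the decoupled first-order disintegration idea quoted in the highlights: the degradable solids are split into a readily and a slowly biodegradable fraction, each disintegrating with its own first-order rate constant. The rate constants and fraction split below are hypothetical, not the calibrated ADM1 values of the paper.

```python
import numpy as np

# Decoupled first-order disintegration of a solid waste split into readily (r)
# and slowly (s) biodegradable fractions; values are illustrative only.
k_r, k_s = 1.0, 0.15      # disintegration rate constants (1/d)
f_r, f_s = 0.6, 0.4       # readily / slowly biodegradable fractions of the degradable VS
X0 = 20.0                 # initial degradable solids (g VS/L)

t = np.linspace(0.0, 30.0, 301)                                   # batch time (d)
X_remaining = X0 * (f_r * np.exp(-k_r * t) + f_s * np.exp(-k_s * t))
released = X0 - X_remaining   # substrate handed on to hydrolysis/uptake (could be plotted)

frac_10d = 1.0 - (f_r * np.exp(-k_r * 10.0) + f_s * np.exp(-k_s * 10.0))
print(f"fraction of degradable VS disintegrated after 10 d: {frac_10d:.2f}")
```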

  7. Modelling of safety barriers including human and organisational factors to improve process safety

    DEFF Research Database (Denmark)

    Markert, Frank; Duijm, Nijs Jan; Thommesen, Jacob

    2013-01-01

    Assessment Methodology for IndustrieS, see Salvi et al 2006). ARAMIS employs the bow-tie approach to modelling hazardous scenarios, and it suggests the outcome of auditing safety management to be connected to a semi-quantitative assessment of the quality of safety barriers. ARAMIS discriminates a number...... of safety barrier (passive, automated, or involving human action). Such models are valuable for many purposes, but are difficult to apply to more complex situations, as the influences are to be set individually for each barrier. The approach described in this paper is trying to improve the state...

  8. Including Organizational Cultural Parameters in Work Processes

    National Research Council Canada - National Science Library

    Handley, Holly A; Heacox, Nancy J

    2004-01-01

    .... In order to represent the organizational impact on the work process, five organizational cultural parameters were identified and included in an algorithm for modeling and simulation of cultural...

  9. A Self-consistent Model of the Coronal Heating and Solar Wind Acceleration Including Compressible and Incompressible Heating Processes

    Science.gov (United States)

    Shoda, Munehito; Yokoyama, Takaaki; Suzuki, Takeru K.

    2018-02-01

    We propose a novel one-dimensional model that includes both shock and turbulence heating and quantify how these processes contribute to heating the corona and driving the solar wind. Compressible MHD simulations allow us to automatically consider shock formation and dissipation, while turbulent dissipation is modeled via a one-point closure based on Alfvén wave turbulence. Numerical simulations were conducted with different photospheric perpendicular correlation lengths λ0, which is a critical parameter of Alfvén wave turbulence, and different root-mean-square photospheric transverse-wave amplitudes δv0. For the various λ0, we obtain a low-temperature chromosphere, a high-temperature corona, and a supersonic solar wind. Our analysis shows that turbulence heating is always dominant when λ0 ≲ 1 Mm. This result does not mean that we can ignore the compressibility, because the analysis indicates that the compressible waves and their associated density fluctuations enhance the Alfvén wave reflection and therefore the turbulence heating. The density fluctuation and the cross-helicity are strongly affected by λ0, while the coronal temperature and mass-loss rate depend weakly on λ0.
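
    The abstract does not spell out the closure. In many Alfvén-wave-driven solar wind models, a "one-point closure based on Alfvén wave turbulence" is a phenomenological heating rate written in terms of the Elsässer amplitudes z± and the perpendicular correlation length λ⊥; a commonly used form, shown here only as an illustration of the type of closure and not necessarily the exact expression of this paper, is:

```latex
% Phenomenological turbulent heating rate per unit volume, with Elsasser amplitudes
% z^\pm, mass density \rho, perpendicular correlation length \lambda_\perp and an
% order-unity constant c_d:
Q_{\mathrm{turb}} \;=\; \rho\, c_d\,
  \frac{|z^{+}|^{2}\,|z^{-}| + |z^{-}|^{2}\,|z^{+}|}{4\,\lambda_{\perp}}
```

    In such models λ⊥ is typically taken to expand with the flux-tube radius from its photospheric value λ0, which is why λ0 controls the turbulent heating in the abstract above.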

  10. A practical kinetic model that considers endproduct inhibition in anaerobic digestion processes by including the equilibrium constant.

    Science.gov (United States)

    Hoh, C Y; Cord-Ruwisch, R

    1996-09-05

    The classical Michaelis-Menten model is widely used as the basis for modeling a number of biological systems. As the model does not consider the inhibitory effect of endproducts that accumulate in virtually all bioprocesses, it is often modified to prevent the overestimation of reaction rates when products have accumulated. Traditional approaches to model modification use the inclusion of irreversible, competitive, and noncompetitive inhibition factors. This article demonstrates that these inhibition factors are insufficient to predict product inhibition of reactions that are close to the dynamic equilibrium. All models investigated were found to violate thermodynamic laws, as they predicted positive reaction rates for reactions that were endergonic due to high endproduct concentrations. For modeling of biological processes that operate close to the dynamic equilibrium (e.g., anaerobic processes), it is critical to prevent the prediction of positive reaction rates when the reaction has already reached the dynamic equilibrium. This can be achieved by using a reversible kinetic model. However, the major drawback of the reversible kinetic model is the large number of empirical parameters it requires. These parameters are difficult to determine and prone to experimental error. For this reason, the reversible model is not practical in the modeling of biological processes. This article uses the fundamentals of steady-state kinetics and thermodynamics to establish an equation for the reversible kinetic model that is of practical use in bioprocess modeling. The behavior of this equilibrium-based model is compared with Michaelis-Menten-based models that use traditional inhibition factors. The equilibrium-based model did not require any empirical inhibition factor to correctly predict when reaction rates must be zero due to the free energy change being zero. For highly exergonic reactions, the equilibrium-based model did not deviate significantly from the Michaelis-Menten-based models.
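
    A minimal sketch of the idea: a standard Michaelis-Menten rate stays positive no matter how much product has accumulated, whereas a rate damped by a thermodynamic driving-force factor of the form (1 − Q/Keq), equivalently (1 − exp(ΔG/RT)), goes to zero at equilibrium and reverses beyond it. The kinetic constants, the example reaction S ⇌ P, and the concentrations below are hypothetical; this illustrates the concept described in the abstract, not the authors' exact equation.

```python
import numpy as np

R, T = 8.314, 298.15          # J/(mol K), K

def michaelis_menten(S, vmax=10.0, Km=2.0):
    """Classical MM rate: remains positive even past equilibrium."""
    return vmax * S / (Km + S)

def equilibrium_limited(S, P, Keq, vmax=10.0, Km=2.0):
    """MM rate damped by a thermodynamic factor (1 - Q/Keq).

    Q = P/S is the reaction quotient of a simple S <-> P conversion; the factor
    equals 1 - exp(dG/RT) and vanishes exactly at the dynamic equilibrium."""
    Q = P / max(S, 1e-12)
    return vmax * S / (Km + S) * (1.0 - Q / Keq)

S, Keq = 5.0, 4.0             # equilibrium is reached at P = Keq*S = 20
for P in (0.0, 10.0, 20.0, 30.0):
    dG = R * T * np.log((P / S) / Keq) if P > 0 else -np.inf
    print(f"P={P:5.1f}  dG={dG / 1000.0 if np.isfinite(dG) else dG:8.2f} kJ/mol  "
          f"MM rate={michaelis_menten(S):5.2f}  "
          f"equilibrium-limited rate={equilibrium_limited(S, P, Keq):6.2f}")
```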

  11. Unified model to the Tungsten inert Gas welding process including the cathode, the plasma and the anode

    International Nuclear Information System (INIS)

    Brochard, M.

    2009-06-01

    In this work, a 2D axially symmetric model of the TIG arc welding process was developed in order to predict, for given welding parameters, the variables needed by a designer of a welded assembly: the heat input to the work piece, the weld pool geometry, etc. The model, developed with the Cast3M finite element software, deals with the physical phenomena acting in each part of the process: the cathode, the plasma, the work piece with its weld pool, and the interfaces between these parts. To solve the model, the thermohydraulic equations are coupled with the electromagnetic equations, which are calculated in part using the least-squares finite element method. Model validation began by comparing the results obtained with those available in the scientific literature. This step points out the action of each force in the weld pool and the contribution of each heat flux to the energy balance. Finally, to validate the predictiveness of the model, experimental and numerical sensitivity analyses were conducted using a design-of-experiments approach. The effects of the process current, the arc gap, and the electrode tip angle on the weld pool geometry, the energy transferred to the work piece, and the arc efficiency were studied. The good agreement obtained by the developed model for these outputs shows that the physics of the process is well reproduced. (author)

  12. 2D Numerical Modelling of the Resin Injection Pultrusion Process Including Experimental Resin Kinetics and Temperature Validation

    DEFF Research Database (Denmark)

    Rasmussen, Filip Salling; Sonne, Mads Rostgaard; Larsen, Martin

    In the present study, a two-dimensional (2D) transient Eulerian thermo-chemical analysis of a carbon fibre epoxy thermosetting Resin Injection Pultrusion (RIP) process is carried out. The numerical model is implemented using the well known unconditionally stable Alternating Direction Implicit (ADI...... inside the composite profile are validated by comparison with experimental measurements and good agreement is found....

  13. Including product features in process redesign

    DEFF Research Database (Denmark)

    Hvam, Lars; Hauksdóttir, Dagný; Mortensen, Niels Henrik

    2017-01-01

    This article suggests a visual modelling method for integrating models of product features with business process models for redesigning the business processes involving specifications of customer-tailored products and services. The current methods for redesigning these types of business processes...... do not take into account how the product features are applied throughout the process, which makes it difficult to obtain a comprehensive understanding of the activities in the processes and to generate significant improvements. The suggested approach models the product family using the so-called product variant master and the business process modelling notation for modelling the process flow. The product model is combined with the process map by identifying features used in each step of the process flow. Additionally, based on the information absorbed from the integrated model, the value stream...

  14. Coupled physical/biogeochemical modeling including O2-dependent processes in the Eastern Boundary Upwelling Systems: application in the Benguela

    Directory of Open Access Journals (Sweden)

    E. Gutknecht

    2013-06-01

    The Eastern Boundary Upwelling Systems (EBUS) contribute to one fifth of the global catches in the ocean. Often associated with Oxygen Minimum Zones (OMZs), EBUS represent key regions for the oceanic nitrogen (N) cycle. Important bioavailable N loss due to denitrification and anammox processes, as well as greenhouse gas emissions (e.g., N2O), also occurs in these EBUS. However, their dynamics are currently crudely represented in global models. In the climate change context, improving our capability to properly represent these areas is crucial due to anticipated changes in the winds, productivity, and oxygen content. We developed a biogeochemical model (BioEBUS) taking into account the main processes linked with EBUS and associated OMZs. We implemented this model in a 3-D realistic coupled physical/biogeochemical configuration in the Namibian upwelling system (northern Benguela) using the high-resolution hydrodynamic ROMS model. We present here a validation using in situ and satellite data as well as diagnostic metrics and sensitivity analyses of key parameters and N2O parameterizations. The impact of parameter values on the OMZ off Namibia, on N loss, and on N2O concentrations and emissions is detailed. The model realistically reproduces the vertical distribution and seasonal cycle of observed oxygen, nitrate, and chlorophyll a concentrations, as well as the rates of microbial processes (e.g., NH4+ and NO2− oxidation, NO3− reduction, and anammox). Based on our sensitivity analyses, biogeochemical parameter values associated with organic matter decomposition, vertical sinking, and nitrification play a key role for the low-oxygen water content, N loss, and N2O concentrations in the OMZ. Moreover, the explicit parameterization of both steps of nitrification, ammonium oxidation to nitrate with nitrite as an explicit intermediate, is necessary to improve the representation of microbial activity linked with the OMZ. The simulated minimum oxygen

  15. Segment Tracking via a Spatiotemporal Linking Process including Feedback Stabilization in an n-D Lattice Model

    Directory of Open Access Journals (Sweden)

    Florentin Wörgötter

    2009-11-01

    Model-free tracking is important for solving tasks such as moving-object tracking and action recognition in cases where no prior object knowledge is available. For this purpose, we extend the concept of spatially synchronous dynamics in spin-lattice models to the spatiotemporal domain to track segments within an image sequence. The method is related to synchronization processes in neural networks and based on superparamagnetic clustering of data. Spin interactions result in the formation of clusters of correlated spins, providing an automatic labeling of corresponding image regions. The algorithm obeys detailed balance. This is an important property as it allows for consistent spin-transfer across subsequent frames, which can be used for segment tracking. Therefore, in the tracking process the correct equilibrium will always be found, which is an important advance as compared with other more heuristic tracking procedures. In the case of long image sequences, i.e., movies, the algorithm is augmented with a feedback mechanism, further stabilizing segment tracking.
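
    In superparamagnetic clustering, neighbouring pixels interact with a similarity-dependent coupling and bonds are "frozen" with a temperature-dependent probability, so that connected clusters of correlated spins label image segments. The sketch below performs a single bond-freezing sweep on one grayscale frame in that spirit; the coupling form, temperature, and test image are illustrative assumptions, and the spin dynamics, spatiotemporal linking, and feedback stabilization of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def superparamagnetic_segments(img, temperature=0.1, sigma=0.1):
    """One bond-freezing sweep of superparamagnetic-style clustering on a grayscale image.

    Neighbouring pixels i, j get a coupling J_ij = exp(-(g_i - g_j)^2 / (2*sigma^2));
    each bond is frozen with probability p = 1 - exp(-J_ij / T), and connected
    components of frozen bonds are returned as segment labels (single frame only).
    """
    h, w = img.shape
    parent = np.arange(h * w)          # union-find forest over pixel indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for di, dj in [(0, 1), (1, 0)]:    # right and down neighbours
        a = img[:h - di, :w - dj]
        b = img[di:, dj:]
        J = np.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))
        freeze = rng.random(J.shape) < 1.0 - np.exp(-J / temperature)
        for y, x in zip(*np.nonzero(freeze)):
            union(y * w + x, (y + di) * w + (x + dj))

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

# Two flat regions separated by a step edge: the clustering should keep them apart.
img = np.zeros((20, 20)); img[:, 10:] = 1.0
img += rng.normal(0.0, 0.02, img.shape)
labels = superparamagnetic_segments(img)
print("number of segments found:", len(np.unique(labels)))
```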

  16. Model of spur processes in aqueous radiation chemistry including spur overlap and a novel initial hydrated electron distribution

    International Nuclear Information System (INIS)

    Short, D.R.

    1980-01-01

    Results are presented from computer calculations based upon an improved diffusion-kinetic model of the spur, which includes a novel initial distribution for the hydrated electron and an approximate mathematical treatment of the overlap of spurs in three dimensions. Experimental data for the decay of the hydrated electron and hydroxyl radical before one microsecond in electron-pulse-irradiated, solute-free and air-free water are fit within experimental uncertainty by adjustment of the initial spatial distributions of spur intermediates and the average energy deposited in the spur. Using the same values of these parameters, the hydrated electron decay is computed for times from 1 ps to 10 μs after the radiation pulse. The results of such calculations for various conditions of pulse dose and concentrations of scavengers of individual primary chemical species in the spur are compared with corresponding experimental data obtained predominantly from water and aqueous solutions irradiated with 10 to 15 MeV electron pulses. Very good agreement between calculated and experimental hydrated electron decay in pure water is observed for the entire time range studied when a pulse dose of approximately 7900 rads is modeled, but the calculated and experimental curves are observed to deviate for times greater than 10 nanoseconds when low pulse doses and low scavenger concentrations are considered. It is shown that this deviation between experimental and calculated hydrated electron decay cannot be explained by assuming the presence of a hydrated electron scavenging impurity, nor by employing a distribution of nearest-neighbor interspur distances to refine the overlap approximation

  17. Numerical modeling of AA2024-T3 friction stir welding process for residual stress evaluation, including softening effects

    DEFF Research Database (Denmark)

    Sonne, Mads Rostgaard; Carlone, Pierpaolo; Palazzo, Gaetano S.

    2014-01-01

    In the present paper, a numerical finite element model of the precipitation hardenable AA2024-T3 aluminum alloy, consisting of a heat transfer analysis based on the Thermal Pseudo Mechanical model for heat generation, and a sequentially coupled quasi-static stress analysis is proposed. Metallurgi...

  18. Including Organizational Cultural Parameters in Work Processes

    National Research Council Canada - National Science Library

    Handley, Holly A; Heacox, Nancy J

    2004-01-01

    ... between decision-makers of different nationalities. In addition to nationality, a decision-maker is also a member of an organization and brings this organizational culture to his role in the work process, where it may also affect his task performance...

  19. SEEPAGE MODEL FOR PA INCLUDING DRIFT COLLAPSE

    International Nuclear Information System (INIS)

    C. Tsang

    2004-01-01

    The purpose of this report is to document the predictions and analyses performed using the seepage model for performance assessment (SMPA) for both the Topopah Spring middle nonlithophysal (Tptpmn) and lower lithophysal (Tptpll) lithostratigraphic units at Yucca Mountain, Nevada. Look-up tables of seepage flow rates into a drift (and their uncertainty) are generated by performing numerical simulations with the seepage model for many combinations of the three most important seepage-relevant parameters: the fracture permeability, the capillary-strength parameter 1/α, and the percolation flux. The percolation flux values chosen take into account flow focusing effects, which are evaluated based on a flow-focusing model. Moreover, multiple realizations of the underlying stochastic permeability field are conducted. Selected sensitivity studies are performed, including the effects of an alternative drift geometry representing a partially collapsed drift from an independent drift-degradation analysis (BSC 2004 [DIRS 166107]). The intended purpose of the seepage model is to provide results of drift-scale seepage rates under a series of parameters and scenarios in support of the Total System Performance Assessment for License Application (TSPA-LA). The SMPA is intended for the evaluation of drift-scale seepage rates under the full range of parameter values for three parameters found to be key (fracture permeability, the van Genuchten 1/α parameter, and percolation flux) and drift degradation shape scenarios in support of the TSPA-LA during the period of compliance for postclosure performance [Technical Work Plan for: Performance Assessment Unsaturated Zone (BSC 2002 [DIRS 160819], Section I-4-2-1)]. The flow-focusing model in the Topopah Spring welded (TSw) unit is intended to provide an estimate of flow focusing factors (FFFs) that (1) bridge the gap between the mountain-scale and drift-scale models, and (2) account for variability in local percolation flux due to

  20. Computer Simulation of the Solidification Process Including Air Gap Formation

    Directory of Open Access Journals (Sweden)

    Skrzypczak T.

    2017-12-01

    The paper presents an approach to the numerical modelling of alloy solidification in a permanent mold and of transient heat transport between the casting and the mold in two-dimensional space. A gap of time-dependent width, called the "air gap" and filled with a heat-conducting gaseous medium, is included in the model. The coefficient of thermal conductivity of the gas filling the space between the casting and the mold is small enough to introduce significant thermal resistance into the heat transport process. The mathematical model of heat transport is based on the partial differential equation of heat conduction, written independently for the solidifying region and the mold. An appropriate solidification model based on the latent heat of solidification is also included in the mathematical description. These equations are supplemented by appropriate initial and boundary conditions. The formation of the air gap depends on the thermal deformations of the mold and the casting. The numerical model is based on the finite element method (FEM) with independent spatial discretization of the interacting regions. This results in a multi-mesh problem because the considered regions are disconnected.
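
    To make the role of the air gap concrete, here is a minimal 1D explicit finite-difference sketch of transient conduction in a casting and a mold separated by a thin gas-filled gap treated as a contact resistance. The latent-heat release, the thermo-mechanical evolution of the gap width, and the 2D multi-mesh FEM of the paper are not reproduced; all material data and the fixed gap width are illustrative assumptions.

```python
import numpy as np

# 1D explicit finite-difference sketch: casting | air gap (contact resistance) | mold.
k_c, rho_c, cp_c = 150.0, 2700.0, 900.0     # casting (Al-like): W/mK, kg/m3, J/kgK
k_m, rho_m, cp_m = 40.0, 7800.0, 500.0      # mold (steel-like)
k_gas, d_gap = 0.05, 2e-4                   # gap gas conductivity (W/mK) and width (m)
R_gap = d_gap / k_gas                        # thermal resistance of the air gap (m2K/W)

n, dx = 40, 2.5e-3                           # 40 cells per domain, 2.5 mm each
Tc = np.full(n, 700.0)                       # casting initial temperature (C)
Tm = np.full(n, 25.0)                        # mold initial temperature (C)
alpha_c, alpha_m = k_c / (rho_c * cp_c), k_m / (rho_m * cp_m)
dt = 0.2 * dx**2 / max(alpha_c, alpha_m)     # stable explicit time step

for _ in range(2000):
    q_gap = (Tc[-1] - Tm[0]) / R_gap                         # heat flux across the gap (W/m2)
    Tc_new, Tm_new = Tc.copy(), Tm.copy()
    Tc_new[1:-1] += alpha_c * dt / dx**2 * (Tc[2:] - 2 * Tc[1:-1] + Tc[:-2])
    Tm_new[1:-1] += alpha_m * dt / dx**2 * (Tm[2:] - 2 * Tm[1:-1] + Tm[:-2])
    Tc_new[0] = Tc_new[1]                                    # insulated far side of casting
    Tc_new[-1] += dt / (rho_c * cp_c * dx) * (k_c * (Tc[-2] - Tc[-1]) / dx - q_gap)
    Tm_new[0] += dt / (rho_m * cp_m * dx) * (q_gap - k_m * (Tm[0] - Tm[1]) / dx)
    Tm_new[-1] = 25.0                                        # mold outer surface held cool
    Tc, Tm = Tc_new, Tm_new

print(f"casting surface {Tc[-1]:.1f} C, mold surface {Tm[0]:.1f} C after {2000 * dt:.1f} s")
```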

  1. Seepage Model for PA Including Drift Collapse

    International Nuclear Information System (INIS)

    Li, G.; Tsang, C.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on the long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to

  2. Seepage Model for PA Including Drift Collapse

    Energy Technology Data Exchange (ETDEWEB)

    G. Li; C. Tsang

    2000-12-20

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on the long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in

  3. An Integrated Biochemistry Laboratory, Including Molecular Modeling

    Science.gov (United States)

    Wolfson, Adele J.; Hall, Mona L.; Branham, Thomas R.

    1996-11-01

    purity of the final preparation. Using mini-gels (BioRad Mini-Protean II apparatus), each group can pour, run, and stain their own 8 × 7.3-cm gel within the lab period; destaining can be carried out at any time afterward. The main contaminating band observed is ovalbumin, at a molecular weight of 46,000. Computer Modeling Using the program Quanta (MSI, Burlington, MA) on Indigo workstations (Silicon Graphics, Hudson, MA), the students retrieve coordinates from an MSI version of the Protein Data Bank, display the structure, and rationalize what changes would occur with a mutated form of the protein. Even for those who do not have Quanta or analogous programs, structural coordinates are available through the Internet. Students are prepared for their independent use of the molecular modeling workstations through a series of tutorials during the course of the semester. These exercises require that the students become familiar with specific applications of Quanta, including setting secondary conformation and hydrogen bonds, energy calculations, selectively displaying parts of molecules, measuring interatomic distances, and editing existing proteins. This introduction to macromolecular modeling is comparable to that suggested by Harvey and Tan (17) as a brief introduction to the field. Peer Review For each writing assignment (short paper and grant proposal), one week of lab is devoted to the peer review process. Students are to come to lab with a draft of their paper and a cover letter to their reviewers, which states how far they believe they are in the writing process; what they like and don't like about their work at this stage; and in what specific areas they need help (e.g., audience level, organization, use of references). They exchange papers, reading two or three during the course of the lab period. For each paper, they fill out a peer review form, which requires that they summarize the paper; look for clarity of presentation, appropriate citations, and use of others

  4. Grand unified models including extra Z bosons

    International Nuclear Information System (INIS)

    Li Tiezhong

    1989-01-01

    The grand unified theories (GUT) of simple Lie groups including extra Z bosons are discussed. Under the author's hypothesis there are only SU(5+m), SO(6+4n) and E6 groups. A general discussion of SU(5+m) is given, and then SU(6) and SU(7) are considered. In SU(6) the 15+6*+6* fermion representations are used, which differ from others in fermion content, Yukawa couplings and breaking scales. A concept of clans of particles, which are not families, is suggested. These clans consist of extra Z bosons and the corresponding fermions of that scale. All of the fermions in the clans are down quarks, except for the standard-model clan, which consists of Z bosons and 15 fermions; therefore, the spectrum of the hadrons composed of these down quarks differs from the hadrons known at present

  5. Enhanced battery model including temperature effects

    NARCIS (Netherlands)

    Rosca, B.; Wilkins, S.

    2013-01-01

    Within electric and hybrid vehicles, batteries are used to provide/buffer the energy required for driving. However, battery performance varies throughout the temperature range specific to automotive applications, and as such, models that describe this behaviour are required. This paper presents a

  6. A hydrodynamic model for granular material flows including segregation effects

    Directory of Open Access Journals (Sweden)

    Gilberg Dominik

    2017-01-01

    The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times a macroscopic model is a natural choice. Therefore, we couple a mixture-theory-based segregation model to a hydrodynamic model of Navier-Stokes type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow, as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, e.g., in the packing density. The modeling focuses on dry granular material flows of two particle types differing only in size, but the model can easily be extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model, the rotational mixing of small and large particles in a tumbler is simulated.

  7. A hydrodynamic model for granular material flows including segregation effects

    Science.gov (United States)

    Gilberg, Dominik; Klar, Axel; Steiner, Konrad

    2017-06-01

    The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times a macroscopic model is a natural choice. Therefore, we couple a mixture-theory-based segregation model to a hydrodynamic model of Navier-Stokes type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow, as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, e.g., in the packing density. The modeling focuses on dry granular material flows of two particle types differing only in size, but the model can easily be extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model, the rotational mixing of small and large particles in a tumbler is simulated.
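
    The Thornton and Gray segregation model referenced above is usually written as an advection-diffusion equation for the small-particle volume fraction, with a segregation flux of the form Sφ(1−φ) along gravity and diffusive remixing. The sketch below is a 1D finite-volume discretization of that equation in an isolated shear layer, with hypothetical segregation and diffusion coefficients; it illustrates the kind of sub-model being coupled, not the authors' full coupled hydrodynamic system.

```python
import numpy as np

# 1D finite-volume sketch of a Gray/Thornton-type segregation equation for the
# small-particle volume fraction phi(z, t) across a layer of depth H:
#   d(phi)/dt + d/dz( -S*phi*(1-phi) ) = d/dz( D * d(phi)/dz )
# S (segregation velocity scale) and D (diffusive remixing) are hypothetical.
H, n = 0.1, 100                 # layer depth (m), number of cells
dz = H / n
S, D = 1e-3, 1e-5               # m/s, m2/s
phi = np.full(n, 0.5)           # start from a homogeneous 50/50 mixture
dt = 0.4 * min(dz / S, dz**2 / (2 * D))

for _ in range(20000):
    # segregation flux of small particles (downwards, towards z = 0) at interior faces
    phi_face = 0.5 * (phi[:-1] + phi[1:])
    F_seg = -S * phi_face * (1.0 - phi_face)
    F_dif = -D * (phi[1:] - phi[:-1]) / dz
    F = np.concatenate(([0.0], F_seg + F_dif, [0.0]))   # no flux through top/bottom
    phi -= dt / dz * (F[1:] - F[:-1])                   # conservative update

print(f"small-particle fraction near base: {phi[0]:.2f}, near surface: {phi[-1]:.2f}")
```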

  8. Modeling heart rate variability including the effect of sleep stages

    Science.gov (United States)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for the heart rate variability (HRV) of a healthy individual during sleep, with the assumption that heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems, including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking the sleep architecture into account is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard linear HRV analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction should also allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.
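
    A minimal sketch of the general idea of stage-dependent HRV generation: take a hypnogram and draw RR intervals whose mean, variability, and short-term correlation depend on the current sleep stage. The stage parameters, the toy hypnogram, and the simple AR(1) noise below are illustrative assumptions and are far cruder than the model in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (not fitted) mean RR interval (s), short-term variability (s) and
# lag-one correlation for each sleep stage.
stage_params = {"wake":  (0.85, 0.060, 0.6),
                "REM":   (0.90, 0.055, 0.7),
                "light": (1.00, 0.040, 0.8),
                "deep":  (1.05, 0.025, 0.9)}

# A toy hypnogram: (stage, duration in minutes).
hypnogram = [("wake", 10), ("light", 40), ("deep", 30), ("light", 20),
             ("REM", 20), ("light", 40), ("deep", 20), ("REM", 30)]

rr = []
x = 0.0                                    # AR(1) state carrying short-term correlations
for stage, minutes in hypnogram:
    mean, sd, rho = stage_params[stage]
    elapsed = 0.0
    while elapsed < minutes * 60.0:
        x = rho * x + np.sqrt(1.0 - rho**2) * rng.normal()
        beat = mean + sd * x               # one RR interval (s)
        rr.append(beat)
        elapsed += beat

rr = np.array(rr)
print(f"{rr.size} beats, mean RR {rr.mean() * 1000:.0f} ms, SDNN {rr.std() * 1000:.0f} ms")
```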

  9. Biosphere Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.

  10. Biosphere Process Model Report

    International Nuclear Information System (INIS)

    Schmitt, J.

    2000-01-01

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.

  11. Foam process models.

    Energy Technology Data Exchange (ETDEWEB)

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

    In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses an homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.

  12. Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling

    International Nuclear Information System (INIS)

    Karvonen, T.

    2013-11-01

    Posiva Oy is responsible for implementing a final disposal program for the spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed of at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to upconing of deep, highly saline groundwater, which is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore, a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility to estimate the influence of open drillholes in a case where the overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones, HZ056, HZ146, BFZ100 and HZ039, were added to the model. In addition, zones HZ20A and HZ20B intersect each other in the new structure model, which influences the salinity upconing caused by leakages in the shafts. The aim of the modelling of the long-term influence of ONKALO, the shafts and the repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and the shafts under present-day conditions. The influence of the shafts was computed using eight different values for the total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for the shafts was influenced by the fact that upconing of saline water increases TDS values close to the repository areas, although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for the HZ20 zones was proposed to be 40 l/min: about 5 l/min from the present-day leakages to the access tunnel, 25 l/min from

  13. Project Interface Requirements Process Including Shuttle Lessons Learned

    Science.gov (United States)

    Bauch, Garland T.

    2010-01-01

    Most failures occur at interfaces between organizations and hardware. Processing interface requirements at the start of a project life cycle will reduce the likelihood of costly interface changes and failures later. This can be done by adding Interface Control Documents (ICDs) to the Project top-level drawing tree, providing technical direction to the Projects for interface requirements, and funding the interface requirements function directly from the Project Manager's office. The interface requirements function within the Project Systems Engineering and Integration (SE&I) Office would work in line with the project element design engineers early in the life cycle to enhance communication and negotiate technical issues between the elements. This function would act as the technical arm of the Project Manager to help ensure that the Project cost, schedule, and risk objectives can be met during the life cycle. Some ICD lessons learned during the Space Shuttle Program (SSP) life cycle include the use of hardware interface photos in the ICD, progressive life cycle design certification by analysis, test, and operations experience, assigning interface design engineers to Element Interface (EI) and Project technical panels, and linking interface design drawings with project build drawings.

  14. Process, including membrane separation, for separating hydrogen from hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Richard W. (Palo Alto, CA); Lokhandwala, Kaaeid A. (Union City, CA); He, Zhenjie (Fremont, CA); Pinnau, Ingo (Palo Alto, CA)

    2001-01-01

    Processes for providing improved methane removal and hydrogen reuse in reactors, particularly in refineries and petrochemical plants. The improved methane removal is achieved by selective purging, by passing gases in the reactor recycle loop across membranes selective in favor of methane over hydrogen, and capable of exhibiting a methane/hydrogen selectivity of at least about 2.5 under the process conditions.
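
    To illustrate what a methane/hydrogen selectivity of about 2.5 means for such a purge stream, here is a simple solution-diffusion estimate of the permeate composition for a binary CH4/H2 mixture at negligible permeate pressure. The feed composition is hypothetical, and real recycle-loop calculations would also account for the stage cut and the finite pressure ratio.

```python
# Rough estimate of permeate enrichment for a methane-selective membrane with a
# CH4/H2 selectivity of about 2.5, assuming a binary feed and a permeate pressure
# low enough that the permeate composition follows the flux ratio directly.
alpha = 2.5          # CH4/H2 selectivity (from the abstract)
x_ch4 = 0.20         # CH4 mole fraction in the reactor recycle gas (assumed)

flux_ratio = alpha * x_ch4 / (1.0 - x_ch4)       # CH4 flux / H2 flux
y_ch4 = flux_ratio / (1.0 + flux_ratio)          # CH4 mole fraction in the permeate
print(f"permeate CH4 fraction ~ {y_ch4:.2f} (vs {x_ch4:.2f} in the feed)")
```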

  15. A fast and robust hepatocyte quantification algorithm including vein processing

    Directory of Open Access Journals (Sweden)

    Homeyer André

    2010-03-01

    Background: Quantification of different types of cells is often needed for the analysis of histological images. In our project, we compute the relative number of proliferating hepatocytes for the evaluation of the regeneration process after partial hepatectomy in normal rat livers. Results: Our automatic approach for hepatocyte (HC) quantification is suitable for the analysis of an entire digitized histological section given in the form of a series of images. It is the main part of an automatic hepatocyte quantification tool that allows the computation of the ratio between the number of proliferating HC nuclei and the total number of HC nuclei for a series of images in one processing run. The processing pipeline allows us to obtain desired and valuable results for a wide range of images with different properties without additional parameter adjustment. Comparing the obtained segmentation results with a manually retrieved segmentation mask, which is considered to be the ground truth, we achieve results with sensitivity above 90% and a false positive fraction below 15%. Conclusions: The proposed automatic procedure gives results with high sensitivity and a low false positive fraction and can be applied to process entire stained sections.
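
    The quality measures reported above (sensitivity above 90%, false positive fraction below 15%) can be computed by comparing a predicted segmentation mask against a manual ground-truth mask. The sketch below uses a pixel-based definition and tiny synthetic masks; the paper's exact definition (e.g., pixel-based versus nucleus-based counting) may differ, so treat this as one common convention rather than the authors' procedure.

```python
import numpy as np

def segmentation_scores(predicted, ground_truth):
    """Sensitivity and false-positive fraction of a binary segmentation mask.

    sensitivity              = TP / (TP + FN)   fraction of true nucleus pixels found
    false positive fraction  = FP / (FP + TP)   fraction of detected pixels that are wrong
    (Pixel-based definitions; other conventions exist.)"""
    predicted = predicted.astype(bool)
    ground_truth = ground_truth.astype(bool)
    tp = np.count_nonzero(predicted & ground_truth)
    fp = np.count_nonzero(predicted & ~ground_truth)
    fn = np.count_nonzero(~predicted & ground_truth)
    return tp / (tp + fn), fp / (fp + tp)

# Tiny synthetic example standing in for HC-nucleus masks.
gt = np.zeros((50, 50), dtype=bool); gt[10:30, 10:30] = True
pred = np.zeros_like(gt); pred[12:32, 10:30] = True
sens, fpf = segmentation_scores(pred, gt)
print(f"sensitivity = {sens:.2f}, false positive fraction = {fpf:.2f}")
```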

  16. Cognitive Development Includes Global and Domain-Specific Processes

    Science.gov (United States)

    Kail, Robert V.

    2004-01-01

    Global accounts of cognitive development, best illustrated by Piaget's theory, dominated the field until the 1970s and 1980s, when they were gradually superseded by domain-specific accounts. In this article I present evidence suggesting that both global and domain-specific processes make important contributions to cognitive development, and I…

  17. Chloroplast Protein Turnover: The Influence of Extraplastidic Processes, Including Autophagy

    Directory of Open Access Journals (Sweden)

    Masanori Izumi

    2018-03-01

    Most assimilated nutrients in the leaves of land plants are stored in chloroplasts as photosynthetic proteins, where they mediate CO2 assimilation during growth. During senescence or under suboptimal conditions, chloroplast proteins are degraded, and the amino acids released during this process are used to produce young tissues, seeds, or respiratory energy. Protein degradation machineries contribute to the quality control of chloroplasts by removing damaged proteins caused by excess energy from sunlight. Whereas previous studies revealed that chloroplasts contain several types of intraplastidic proteases that likely derived from an endosymbiosed prokaryotic ancestor of chloroplasts, recent reports have demonstrated that multiple extraplastidic pathways also contribute to chloroplast protein turnover in response to specific cues. One such pathway is autophagy, an evolutionarily conserved process that leads to the vacuolar or lysosomal degradation of cytoplasmic components in eukaryotic cells. Here, we describe and contrast the extraplastidic pathways that degrade chloroplasts. This review shows that diverse pathways participate in chloroplast turnover during sugar starvation, senescence, and oxidative stress. Elucidating the mechanisms that regulate these pathways will help decipher the relationship among the diverse pathways mediating chloroplast protein turnover.

  18. Track structure for low energy ions including charge exchange processes

    International Nuclear Information System (INIS)

    Uehara, S.; Nikjoo, H.

    2002-01-01

    The model and development of a new generation of Monte Carlo track structure codes are described. The code LEPHIST simulates the full slowing down of low-energy proton history tracks in the range 1 keV-1 MeV, and the code LEAHIST simulates low-energy alpha particle history tracks in the range 1 keV-8 MeV in water. All primary ion interactions are followed down to 1 keV and all electrons down to 1 eV. Tracks of secondary electrons ejected by the ions were traced using the electron code KURBUC. Microdosimetric parameters derived by analysis of the generated tracks are presented. (author)

  19. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

    Process Modeling Style focuses on aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work products

  20. Dynamic hysteresis modeling including skin effect using diffusion equation model

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Souad, E-mail: souadhamada@yahoo.fr [LSP-IE: Research Laboratory, Electrical Engineering Department, University of Batna, 05000 Batna (Algeria); Louai, Fatima Zohra, E-mail: fz_louai@yahoo.com [LSP-IE: Research Laboratory, Electrical Engineering Department, University of Batna, 05000 Batna (Algeria); Nait-Said, Nasreddine, E-mail: n_naitsaid@yahoo.com [LSP-IE: Research Laboratory, Electrical Engineering Department, University of Batna, 05000 Batna (Algeria); Benabou, Abdelkader, E-mail: Abdelkader.Benabou@univ-lille1.fr [L2EP, Université de Lille1, 59655 Villeneuve d’Ascq (France)

    2016-07-15

    An improved dynamic hysteresis model is proposed for the prediction of hysteresis loop of electrical steel up to mean frequencies, taking into account the skin effect. In previous works, the analytical solution of the diffusion equation for low frequency (DELF) was coupled with the inverse static Jiles-Atherton (JA) model in order to represent the hysteresis behavior for a lamination. In the present paper, this approach is improved to ensure the reproducibility of measured hysteresis loops at mean frequency. The results of simulation are compared with the experimental ones. The selected results for frequencies 50 Hz, 100 Hz, 200 Hz and 400 Hz are presented and discussed.
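
    For reference, the static Jiles-Atherton model to which the low-frequency diffusion-equation solution is coupled is commonly written in the following textbook form (the "inverse" variant used in the paper takes B rather than H as the independent variable; Ms, a, k, c, and α are the five JA parameters, and δ = ±1 follows the sign of dH/dt):

```latex
% Static Jiles-Atherton model (direct form): effective field, anhysteretic
% magnetization, irreversible component, and total magnetization.
H_e = H + \alpha M, \qquad
M_{an}(H_e) = M_s\!\left[\coth\!\left(\frac{H_e}{a}\right) - \frac{a}{H_e}\right],
\frac{dM_{irr}}{dH} = \frac{M_{an} - M_{irr}}{k\,\delta - \alpha\,(M_{an} - M_{irr})}, \qquad
M = c\,M_{an} + (1 - c)\,M_{irr}
```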

  1. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models....... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety...... to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....

  2. Stochastic modelling of two-phase flows including phase change

    International Nuclear Information System (INIS)

    Hurisse, O.; Minier, J.P.

    2011-01-01

    Stochastic modelling has already been developed and applied for single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. for steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint which arises from volume conservation. To illustrate the whole approach, some remarks are finally proposed for two-fluid models. (authors)

  3. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  4. Modeling Electric Double-Layers Including Chemical Reaction Effects

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    2014-01-01

    A physicochemical and numerical model for the transient formation of an electric double-layer between an electrolyte and a chemically-active flat surface is presented, based on a finite elements integration of the nonlinear Nernst-Planck-Poisson model including chemical reactions. The model works...

  5. WWTP Process Tank Modelling

    DEFF Research Database (Denmark)

    Laursen, Jesper

    solution of the Navier-Stokes equations in a multiphase scheme. After a general introduction to the activated sludge tank as a system, the activated sludge tank model is gradually set up in separate stages. The individual sub-processes that often occur in activated sludge tanks are initially......-process models, the last part of the thesis, where the integrated process tank model is tested on three examples of activated sludge systems, is initiated. The three case studies are introduced with an increasing degree of model complexity. All three cases take their basis in Danish municipal wastewater treatment...... plants. The first case study involves the modeling of an activated sludge tank undergoing a special control strategy with the intention of minimizing the sludge loading on the subsequent secondary settlers during storm events. The applied model is a two-phase model, where the sedimentation of sludge...

  6. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  7. UML in business process modeling

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2013-03-01

    Full Text Available Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on a proper understanding of the functionality of information systems that shall support the activity of the organization. A number of business process modeling notations were popularized in practice in recent decades. The most significant of these notations include the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of the research is discussed. Section 2 presents selected case study results. The paper is concluded with a summary.

  8. Progressive IRP Models for Power Resources Including EPP

    Directory of Open Access Journals (Sweden)

    Yiping Zhu

    2017-01-01

    Full Text Available With a view to optimizing regional power supply and demand, the paper carries out effective planning and scheduling of supply- and demand-side resources, including the energy efficiency power plant (EPP), to achieve the targets of benefit, cost, and environmental constraints. In order to highlight the characteristics of different supply and demand resources under economic, environmental, and carbon constraints, three planning models with progressive constraints are constructed. Applying the three models to the same example shows that the best solutions of the different models differ. The planning model including EPP has obvious advantages when pollutant and carbon emission constraints are considered, which confirms the low-cost and low-emission advantages of EPP. The construction of progressive IRP models for power resources considering EPP has a certain reference value for guiding the planning and layout of EPP among other power resources and for achieving cost and environmental objectives.

  9. Model Process Control Language

    Data.gov (United States)

    National Aeronautics and Space Administration — The MPC (Model Process Control) language enables the capture, communication and preservation of a simulation instance, with sufficient detail that it can be...

  10. Business Model Process Configurations

    DEFF Research Database (Denmark)

    Taran, Yariv; Nielsen, Christian; Thomsen, Peter

    2015-01-01

    strategic preference, as part of their business model innovation activity planned. Practical implications – This paper aimed at strengthening researchers and, particularly, practitioner’s perspectives into the field of business model process configurations. By insuring an [abstracted] alignment between......Purpose – The paper aims: 1) To develop systematically a structural list of various business model process configuration and to group (deductively) these selected configurations in a structured typological categorization list. 2) To facilitate companies in the process of BM innovation......, by developing (inductively) an ontological classification framework, in view of the BM process configurations typology developed. Design/methodology/approach – Given the inconsistencies found in the business model studies (e.g. definitions, configurations, classifications) we adopted the analytical induction...

  11. Modeling nuclear processes by Simulink

    Science.gov (United States)

    Rashid, Nahrul Khair Alang Md

    2015-04-01

    Modelling and simulation are essential parts of the study of dynamic system behaviour. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, radionuclide decay process, delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.

  12. Modeling nuclear processes by Simulink

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my [Faculty of Engineering, International Islamic University Malaysia, Jalan Gombak, Selangor (Malaysia)

    2015-04-29

    Modelling and simulation are essential parts of the study of dynamic system behaviour. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, radionuclide decay process, delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
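
    A minimal illustration of one of the examples named above, the reactor point kinetics equations with delayed neutrons, can be reproduced outside Simulink with any ODE solver. The sketch below (in Python with SciPy rather than MATLAB/Simulink) integrates the point kinetics equations with a single delayed-neutron group and a small step reactivity insertion; all parameter values are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative point-kinetics parameters (assumed values, not from the paper)
        beta = 0.0065    # delayed-neutron fraction
        Lam = 1.0e-4     # neutron generation time [s]
        lam = 0.08       # precursor decay constant for a single group [1/s]
        rho = 0.001      # step reactivity inserted at t = 0

        def point_kinetics(t, y):
            n, c = y                                  # neutron density, precursor concentration
            dn = (rho - beta) / Lam * n + lam * c     # point-kinetics equation
            dc = beta / Lam * n - lam * c             # delayed-neutron precursor balance
            return [dn, dc]

        # Start from steady state at nominal power: n = 1, C = beta * n / (lam * Lam)
        y0 = [1.0, beta / (lam * Lam)]
        sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, max_step=0.01)
        print(f"relative power after 10 s: {sol.y[0, -1]:.2f}")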

  13. Simple suggestions for including vertical physics in oil spill models

    International Nuclear Information System (INIS)

    D'Asaro, Eric; University of Washington, Seatle, WA

    2001-01-01

    Current models of oil spills include no vertical physics. They neglect the effect of vertical water motions on the transport and concentration of floating oil. Some simple ways to introduce vertical physics are suggested here. The major suggestion is to routinely measure the density stratification of the upper ocean during oil spills in order to develop a database on the effect of stratification. (Author)

  14. Command Process Modeling & Risk Analysis

    Science.gov (United States)

    Meshkat, Leila

    2011-01-01

    Commanding Errors may be caused by a variety of root causes. It's important to understand the relative significance of each of these causes for making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within it. These models include simulation analysis and probabilistic risk assessment models.

  15. Exclusive queueing model including the choice of service windows

    Science.gov (United States)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm for agents considering the numbers of agents and the distances to all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
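
    The logit-based choice rule described above can be sketched in a few lines; the utility specification and coefficient values below are assumptions for illustration, not the calibrated parameters of the paper.

        import numpy as np

        def window_choice_probabilities(n_waiting, distances, beta_n=1.0, beta_d=0.5):
            """Logit choice over service windows.

            The utility of window i is taken as -(beta_n * n_waiting[i] + beta_d * distances[i]);
            fewer waiting agents and a shorter distance make a window more attractive.
            """
            utilities = -(beta_n * np.asarray(n_waiting, float) + beta_d * np.asarray(distances, float))
            expu = np.exp(utilities - utilities.max())   # subtract the max for numerical stability
            return expu / expu.sum()

        # Three windows: current queue lengths and distances (in floor cells)
        print(window_choice_probabilities([2, 0, 5], [3.0, 8.0, 1.0]))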

  16. Properties of spatial Cox process models

    DEFF Research Database (Denmark)

    Møller, Jesper

    Probabilistic properties of Cox processes of relevance for statistical modelling and inference are studied. Particularly, we study the most important classes of Cox processes, including log Gaussian Cox processes, shot noise Cox processes, and permanent Cox processes. We consider moment properties...... and point process operations such as thinning, displacements, and superpositioning. We also discuss how to simulate specific Cox processes....
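
    As a small illustration of one member of these classes, the sketch below simulates a Thomas process, a shot-noise Cox process whose random driving intensity is a sum of Gaussian kernels centred at Poisson parent points; the window and parameter values are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_thomas(kappa=10.0, mu=20.0, sigma=0.03, window=1.0):
            """Thomas process (a shot-noise Cox process) on a square observation window.

            kappa : intensity of the Poisson parent process
            mu    : mean number of offspring per parent
            sigma : standard deviation of the Gaussian offspring dispersion
            """
            n_parents = rng.poisson(kappa * window**2)
            parents = rng.uniform(0.0, window, size=(n_parents, 2))
            clusters = [p + rng.normal(0.0, sigma, size=(rng.poisson(mu), 2)) for p in parents]
            pts = np.vstack(clusters) if clusters else np.empty((0, 2))
            inside = np.all((pts >= 0.0) & (pts <= window), axis=1)   # keep points inside the window
            return pts[inside]

        pts = simulate_thomas()
        print(f"{len(pts)} points simulated")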

  17. The Brookhaven Process Optimization Models

    Energy Technology Data Exchange (ETDEWEB)

    Pilati, D. A.; Sparrow, F. T.

    1979-01-01

    The Brookhaven National Laboratory Industry Model Program (IMP) has undertaken the development of a set of industry-specific process-optimization models. These models are to be used for energy-use projections, energy-policy analyses, and process technology assessments. Applications of the models currently under development show that system-wide energy impacts may be very different from engineering estimates, selected investment tax credits for cogeneration (or other conservation strategies) may have the perverse effect of increasing industrial energy use, and that a proper combination of energy taxes and investment tax credits is more socially desirable than either policy alone. A section is included describing possible extensions of these models to answer questions or address other systems (e.g., a single plant instead of an entire industry).

  18. Decree 316/011. It approves the bases for the selection process of oil companies for hydrocarbon exploration and exploitation in the Republica Oriental del Uruguay offshore Round II, including the respective model contract

    International Nuclear Information System (INIS)

    2011-01-01

    This decree approves the bases for oil companies interested in hydrocarbon exploration and exploitation in the Republica Oriental del Uruguay. Fossil energy research is regulated by the energy sector under rules defined by the executive. Ancap evaluates the company proposals in relation to different topics such as drilling and processing, electromagnetism, sea floor sediment samples, oil well evidence and seismic information

  19. Simulation of Injection Molding Process Including Mold Filling and Compound Curing

    Directory of Open Access Journals (Sweden)

    Mohamad Reza Erfanian

    2012-12-01

    Full Text Available The present work reports and discusses the results of a 3D simulation of the injection molding process of a rubber compound that includes the mold filling stage and material curing, using a computer code developed in the “UDF” (user-defined function) part of the Fluent 6.3 CAE software. The data obtained from a rheometer (MDR 2000) are used to characterize the rubber material in order to find the cure model parameters which appear in the curing model. Because of the non-Newtonian behavior of rubber, a non-Newtonian viscosity model was used in this work, and the viscosity parameters were computed by means of viscometry tests with an RPA. After calculation of the physical and curing properties, the vulcanization process was simulated for a complex rubber article with non-uniform thickness by solving the continuity, momentum, energy and curing process equations. The predicted filling and curing times in a complex 3D rubber part are compared with experimentally measured data, which confirmed the accuracy and applicability of the method.
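
    The cure model fitted from the MDR data is not given in the abstract; as a hedged stand-in, the sketch below integrates a generic Kamal-type autocatalytic cure law at a fixed temperature and reads off the time to 90% cure. The rate constants and exponents are invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Generic Kamal-type cure kinetics at a fixed mold temperature (illustrative constants)
        k1, k2, m, n = 1e-3, 5e-2, 1.0, 1.5   # rate constants [1/s] and reaction exponents

        def cure_rate(t, y):
            a = min(y[0], 0.9999)             # keep (1 - a)**n well defined near full cure
            return [(k1 + k2 * a**m) * (1.0 - a)**n]

        sol = solve_ivp(cure_rate, (0.0, 2000.0), [0.0], dense_output=True, max_step=1.0)
        t = np.linspace(0.0, 2000.0, 2001)
        alpha = sol.sol(t)[0]                 # degree of cure along the time grid
        t90 = t[np.searchsorted(alpha, 0.90)]
        print(f"time to 90% cure: {t90:.0f} s")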

  20. Optimization of Processing and Modeling Issues for Thin Film Solar Cell Devices Including Concepts for The Development of Polycrystalline Multijunctions: Annual Report; 24 August 1998-23 August 1999

    Energy Technology Data Exchange (ETDEWEB)

    Birkmire, R.W.; Phillips, J.E.; Shafarman, W.N.; Eser, E.; Hegedus, S.S.; McCandless, B.E. (Institute of Energy Conversion)

    2000-08-25

    This report describes results achieved during phase 1 of a three-phase subcontract to develop and understand thin-film solar cell technology associated with CuInSe{sub 2} and related alloys, a-Si and its alloys, and CdTe. Modules based on all these thin films are promising candidates to meet DOE long-range efficiency, reliability, and manufacturing cost goals. The critical issues being addressed under this program are intended to provide the science and engineering basis for the development of viable commercial processes and to improve module performance. The generic research issues addressed are: (1) quantitative analysis of processing steps to provide information for efficient commercial-scale equipment design and operation; (2) device characterization relating the device performance to materials properties and process conditions; (3) development of alloy materials with different bandgaps to allow improved device structures for stability and compatibility with module design; (4) development of improved window/heterojunction layers and contacts to improve device performance and reliability; and (5) evaluation of cell stability with respect to illumination, temperature, and ambient and with respect to device structure and module encapsulation.

  1. Optimization of Processing and Modeling Issues for Thin Film Solar Cell Devices Including Concepts for the Development of Polycrystalline Multijunctions Annual Subcontract Report, 24 August 1999 - 23 August 2000

    Energy Technology Data Exchange (ETDEWEB)

    Birkmire, R. W.; Phillips, J. E.; Shafarman, W. N.; Eser, E.; Hegedus, S. S.; McCandless, B. E.

    2001-11-14

    This report describes the results achieved during Phase I of a three-phase subcontract to develop and understand thin-film solar cell technology associated with CuInSe2 and related alloys, a-Si and its alloys, and CdTe. Modules based on all these thin films are promising candidates to meet DOE long-range efficiency, reliability, and manufacturing cost goals. The critical issues being addressed under this program are intended to provide the science and engineering basis for developing viable commercial processes and to improve module performance. The generic research issues addressed are: (1) quantitative analysis of processing steps to provide information for efficient commercial-scale equipment design and operation; (2) device characterization relating the device performance to materials properties and process conditions; (3) development of alloy materials with different bandgaps to allow improved device structures for stability and compatibility with module design; (4) development of improved window/heterojunction layers and contacts to improve device performance and reliability; and (5) evaluation of cell stability with respect to illumination, temperature, and ambient, and with respect to device structure and module encapsulation.

  2. Including spatial data in nutrient balance modelling on dairy farms

    Science.gov (United States)

    van Leeuwen, Maricke; van Middelaar, Corina; Stoof, Cathelijne; Oenema, Jouke; Stoorvogel, Jetse; de Boer, Imke

    2017-04-01

    The Annual Nutrient Cycle Assessment (ANCA) calculates the nitrogen (N) and phosphorus (P) balance at a dairy farm, while taking into account the subsequent nutrient cycles of the herd, manure, soil and crop components. Since January 2016, Dutch dairy farmers have been required to use ANCA in order to increase understanding of nutrient flows and to minimize nutrient losses to the environment. A nutrient balance calculates the difference between nutrient inputs and outputs. Nutrients enter the farm via purchased feed, fertilizers, deposition and fixation by legumes (nitrogen), and leave the farm via milk, livestock, manure, and roughages. A positive balance indicates the extent to which N and/or P are lost to the environment via gaseous emissions (N), leaching, run-off and accumulation in soil. A negative balance indicates that N and/or P are depleted from soil. ANCA was designed to calculate average nutrient flows on farm level (for the herd, manure, soil and crop components). ANCA was not designed to perform calculations of nutrient flows at the field level, as it uses averaged nutrient inputs and outputs across all fields, and it does not include field-specific soil characteristics. Land management decisions, however, such as the level of N and P application, are typically taken at the field level given the specific crop and soil characteristics. Therefore the information that ANCA provides is likely not sufficient to support farmers' decisions on land management to minimize nutrient losses to the environment. This is particularly a problem when land management and soils vary between fields. For an accurate estimate of nutrient flows in a given farming system that can be used to optimize land management, the spatial scale of nutrient inputs and outputs (and thus the effect of land management and soil variation) could be essential. Our aim was to determine the effect of the spatial scale of nutrient inputs and outputs on modelled nutrient flows and nutrient use efficiencies
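
    The farm-gate balance described above is plain arithmetic, nutrients in minus nutrients out; the toy numbers below (all invented, in kg N per year) contrast a single farm-level surplus with per-field balances to show why spatially averaged inputs can hide field-level surpluses and deficits.

        # Illustrative nitrogen balance; every figure here is invented.
        inputs = {"purchased feed": 9000, "fertilizer": 6000, "deposition": 800, "fixation": 400}
        outputs = {"milk": 5200, "livestock": 600, "manure exported": 1200, "roughage sold": 900}
        farm_surplus = sum(inputs.values()) - sum(outputs.values())
        print(f"farm-level N surplus: {farm_surplus} kg N/year")

        # The same farm broken down by field: (area in ha, N applied per ha, N removed in crop per ha)
        fields = {
            "maize field":   (10, 320, 210),
            "grass field A": (15, 280, 260),
            "grass field B": (8, 150, 190),
        }
        for name, (area, applied, removed) in fields.items():
            print(f"{name}: {(applied - removed) * area:+d} kg N/year ({applied - removed:+d} kg N/ha)")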

  3. Single-Phase Bundle Flows Including Macroscopic Turbulence Model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Jun; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Yoon, Seok Jong; Cho, Hyoung Kyu [Seoul National University, Seoul (Korea, Republic of)

    2016-05-15

    To deal with the various thermal hydraulic phenomena caused by rapid changes of fluid properties when an accident happens, securing mechanistic approaches as much as possible may reduce the uncertainty arising from improper applications of experimental models. In this study, the turbulence mixing model, which is well defined by experiments in subchannel analysis codes such as VIPRE, COBRA, and MATRA, is replaced by a macroscopic k-e turbulence model, which represents a mathematically derived approach. The performance of CUPID with the macroscopic turbulence model is validated against several bundle experiments: the CNEN 4x4 and PNL 7x7 rod bundle tests. In this study, the macroscopic k-e model has been validated for application to subchannel analysis. It has been implemented in the CUPID code and validated against the CNEN 4x4 and PNL 7x7 rod bundle tests. The results showed that the macroscopic k-e turbulence model can reproduce the experiments properly.

  4. Characterization of carbon/carbon composites prepared by different processing routes including liquid pitch densification process

    International Nuclear Information System (INIS)

    Dekeyrel, Alixe; Dourges, Marie-Anne; Weisbecker, Patrick; Pailler, Rene; Allemand, Alexandre; Teneze, Nicolas; Epherre, Jean-Francois

    2013-01-01

    Carbon/carbon composites with an apparent density higher than 1.80 g/cm{sup 3} were prepared using a multi-step densification process. This consists of a pre-densification step followed by pitch impregnation/pyrolysis (I/P) cycles carried out under moderate pressure. Three pre-densification methods were investigated to significantly increase the apparent density of a raw preform to about 1.4 g/cm{sup 3}. These were: (i) impregnation by carbonaceous powder slurry, (ii) film boiling chemical vapor infiltration, (iii) impregnation with a combination of synthetic pitch I/P and carbonaceous powder slurry. Composites were prepared from each of these three pre-densified materials, using a liquid pitch processing route with four I/P cycles with M50 petroleum pitch, under moderate pressures (10 MPa). As a reference, a carbon/carbon composite was prepared using four I/P cycles with pitch. All four composites had different microstructural characteristics and different thermal properties. The influence of processing on thermal properties is discussed in relation to the microstructural characteristics. (authors)

  5. Speech-language therapists' process of including significant others in aphasia rehabilitation.

    Science.gov (United States)

    Hallé, Marie-Christine; Le Dorze, Guylaine; Mingant, Anne

    2014-11-01

    Although aphasia rehabilitation should include significant others, it is currently unknown how this recommendation is adopted in speech-language therapy practice. Speech-language therapists' (SLTs) experience of including significant others in aphasia rehabilitation is also understudied, yet a better understanding of clinical reality would be necessary to facilitate implementation of best evidence pertaining to family interventions. To explore the process through which SLTs work with significant others of people with aphasia in rehabilitation settings. Individual semi-structured interviews were conducted with eight SLTs who had been working with persons with aphasia in rehabilitation centres for at least 1 year. Grounded theory principles were applied in analysing interview transcripts. A theoretical model was developed representing SLTs' process of working with significant others of persons with aphasia in rehabilitation. Including significant others was perceived as challenging, yet a bonus to their fundamental patient-centred approach. Basic interventions with significant others when they were available included information sharing. If necessary, significant others were referred to social workers or psychologists or the participants collaborated with those professionals. Participants rarely and only under specific conditions provided significant others with language exercises or trained them to communicate better with the aphasic person. As a result, even if participants felt satisfied with their efforts to offer family and friends interventions, they also had unachieved ideals, such as having more frequent contacts with significant others. If SLTs perceived work with significant others as a feasible necessity, rather than as a challenging bonus, they could be more inclined to include family and friends within therapy with the aim to improve their communication with the person with aphasia. SLTs could also be more satisfied with their practice. In order to

  6. Global atmospheric model for mercury including oxidation by bromine atoms

    Directory of Open Access Journals (Sweden)

    C. D. Holmes

    2010-12-01

    Full Text Available Global models of atmospheric mercury generally assume that gas-phase OH and ozone are the main oxidants converting Hg0 to HgII and thus driving mercury deposition to ecosystems. However, thermodynamic considerations argue against the importance of these reactions. We demonstrate here the viability of atomic bromine (Br as an alternative Hg0 oxidant. We conduct a global 3-D simulation with the GEOS-Chem model assuming gas-phase Br to be the sole Hg0 oxidant (Hg + Br model and compare to the previous version of the model with OH and ozone as the sole oxidants (Hg + OH/O3 model. We specify global 3-D Br concentration fields based on our best understanding of tropospheric and stratospheric Br chemistry. In both the Hg + Br and Hg + OH/O3 models, we add an aqueous photochemical reduction of HgII in cloud to impose a tropospheric lifetime for mercury of 6.5 months against deposition, as needed to reconcile observed total gaseous mercury (TGM concentrations with current estimates of anthropogenic emissions. This added reduction would not be necessary in the Hg + Br model if we adjusted the Br oxidation kinetics downward within their range of uncertainty. We find that the Hg + Br and Hg + OH/O3 models are equally capable of reproducing the spatial distribution of TGM and its seasonal cycle at northern mid-latitudes. The Hg + Br model shows a steeper decline of TGM concentrations from the tropics to southern mid-latitudes. Only the Hg + Br model can reproduce the springtime depletion and summer rebound of TGM observed at polar sites; the snowpack component of GEOS-Chem suggests that 40% of HgII deposited to snow in the Arctic is transferred to the ocean and land reservoirs, amounting to a net deposition flux to the Arctic of 60 Mg a−1. Summertime events of depleted Hg0 at Antarctic sites due to subsidence are much better simulated by

  7. Thematic report: Macroeconomic models including specifically social and environmental aspects

    OpenAIRE

    Kratena, Kurt

    2015-01-01

    WWWforEurope Deliverable No. 8, 30 pages A significant reduction of the global environmental consequences of European consumption and production activities is the main objective of the policy simulations carried out in this paper. For this purpose, three different modelling approaches have been chosen. Two macroeconomic models following the philosophy of consistent stock-flow accounting for the main institutional sectors (households, firms, banks, central bank and government) are used for...

  8. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
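
    A minimal sketch of the idea, assuming a plain Gaussian mixture rather than the authors' exact likelihood: adding radial velocity as a third feature lets the EM algorithm separate two spatially overlapping groups that differ kinematically. The data below are synthetic.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)

        # Two synthetic groups that overlap on the sky but differ in radial velocity (km/s)
        group_a = np.column_stack([rng.normal(0.0, 1.0, 200), rng.normal(0.0, 1.0, 200),
                                   rng.normal(-10.0, 2.0, 200)])
        group_b = np.column_stack([rng.normal(0.5, 1.0, 200), rng.normal(0.3, 1.0, 200),
                                   rng.normal(10.0, 2.0, 200)])
        data = np.vstack([group_a, group_b])      # columns: x, y, radial velocity

        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(data)
        labels = gmm.predict(data)
        print("recovered cluster sizes:", np.bincount(labels))
        print("fitted mean radial velocities:", np.round(gmm.means_[:, 2], 1))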

  9. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available The calculation of unsteady air loads is an essential step in any aeroelastic analysis. The subsonic doublet lattice method (DLM) is used extensively for this purpose due to its simplicity and reliability. The body models available with the popular...

  10. GREENSCOPE: Sustainable Process Modeling

    Science.gov (United States)

    EPA researchers are responding to environmental problems by incorporating sustainability into process design and evaluation. EPA researchers are also developing a tool that allows users to assess modifications to existing and new chemical processes to determine whether changes in...

  11. Auditory processing models

    DEFF Research Database (Denmark)

    Dau, Torsten

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook...... will present topics on signal processing which are important in a specific area of acoustics. These will be of interest to specialists in these areas because they will be presented from their technical perspective, rather than a generic engineering approach to signal processing. Non-specialists, or specialists...

  12. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  13. A thermal lens model including the Soret effect

    International Nuclear Information System (INIS)

    Cabrera, Humberto; Sira, Eloy; Rahn, Kareem; Garcia-Sucre, Maximo

    2009-01-01

    In this letter we generalize the thermal lens model to account for the Soret effect in binary liquid mixtures. This formalism permits the precise determination of the Soret coefficient in a steady-state situation. The theory is experimentally verified using the measured values in the ethanol/water mixtures. The time evolution of the Soret signal has been used to derive mass-diffusion times from which mass-diffusion coefficients were calculated. (Author)

  14. Including lateral interactions into microkinetic models of catalytic reactions

    DEFF Research Database (Denmark)

    Hellman, Anders; Honkala, Johanna Karoliina

    2007-01-01

    In many catalytic reactions lateral interactions between adsorbates are believed to have a strong influence on the reaction rates. We apply a microkinetic model to explore the effect of lateral interactions and how to efficiently take them into account in a simple catalytic reaction. Three different approximations are investigated: site, mean-field, and quasichemical approximations. The obtained results are compared to accurate Monte Carlo numbers. In the end, we apply the approximations to a real catalytic reaction, namely, ammonia synthesis....
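
    To make the mean-field idea concrete, the sketch below solves a Frumkin-type isotherm, i.e. Langmuir adsorption in which the effective adsorption constant is shifted by a coverage-proportional lateral interaction energy. This is a generic illustration of the mean-field approximation, not the ammonia-synthesis model of the paper, and all parameter values are assumed.

        import numpy as np

        kB = 8.617e-5   # Boltzmann constant [eV/K]

        def mean_field_coverage(K0=50.0, pressure=1.0, eps=0.15, T=500.0, tol=1e-10):
            """Steady-state coverage with a mean-field lateral repulsion eps (eV per monolayer).

            Solves theta / (1 - theta) = K0 * p * exp(-eps * theta / (kB * T)) by damped fixed-point iteration.
            """
            theta = 0.5
            for _ in range(1000):
                K_eff = K0 * pressure * np.exp(-eps * theta / (kB * T))
                theta_new = K_eff / (1.0 + K_eff)
                if abs(theta_new - theta) < tol:
                    break
                theta = 0.5 * (theta + theta_new)   # damping keeps the iteration stable
            return theta

        print(f"coverage without lateral interactions: {mean_field_coverage(eps=0.0):.3f}")
        print(f"coverage with repulsive interactions:  {mean_field_coverage(eps=0.15):.3f}")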

  15. A stochastic model of gene expression including splicing events

    OpenAIRE

    Penim, Flávia Alexandra Mendes

    2014-01-01

    Master's thesis, Bioinformatics and Computational Biology, Universidade de Lisboa, Faculdade de Ciências, 2014. Proteins carry out the great majority of the catalytic and structural work within an organism. The RNA templates used in their synthesis determine their identity, and this is dictated by which genes are transcribed. Therefore, gene expression is the fundamental determinant of an organism’s nature. The main objective of this thesis was to develop a stochastic computational model a...

  16. Mathematical modeling of biological processes

    CERN Document Server

    Friedman, Avner

    2014-01-01

    This book on mathematical modeling of biological processes includes a wide selection of biological topics that demonstrate the power of mathematics and computational codes in setting up biological processes with a rigorous and predictive framework. Topics include: enzyme dynamics, spread of disease, harvesting bacteria, competition among live species, neuronal oscillations, transport of neurofilaments in axon, cancer and cancer therapy, and granulomas. Complete with a description of the biological background and biological question that requires the use of mathematics, this book is developed for graduate students and advanced undergraduate students with only basic knowledge of ordinary differential equations and partial differential equations; background in biology is not required. Students will gain knowledge on how to program with MATLAB without previous programming experience and how to use codes in order to test biological hypothesis.
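
    The book works its examples in MATLAB; as a hedged Python analogue of one of the listed topics (spread of disease), the sketch below integrates the classic SIR equations with arbitrary illustrative parameters.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, gamma = 0.3, 0.1    # illustrative transmission and recovery rates [1/day]

        def sir(t, y):
            s, i, r = y           # susceptible, infected and recovered fractions
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        sol = solve_ivp(sir, (0.0, 160.0), [0.99, 0.01, 0.0], dense_output=True)
        t = np.linspace(0.0, 160.0, 5)
        print("infected fraction over time:", np.round(sol.sol(t)[1], 3))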

  17. Parton recombination model including resonance production. RL-78-040

    International Nuclear Information System (INIS)

    Roberts, R.G.; Hwa, R.C.; Matsuda, S.

    1978-05-01

    Possible effects of resonance production on the meson inclusive distribution in the fragmentation region are investigated in the framework of the parton recombination model. From a detailed study of the data on vector-meson production, a reliable ratio of the vector-to-pseudoscalar rates is determined. Then the influence of the decay of the vector mesons on the pseudoscalar spectrum is examined, and the effect is found to be no more than 25% for x > 0.5. The normalizations of the non-strange antiquark distributions are still higher than those in a quiescent proton. The agreement between the calculated results and the data remains very good. 36 references

  18. Extending PSA models including ageing and asset management - 15291

    International Nuclear Information System (INIS)

    Martorell, S.; Marton, I.; Carlos, S.; Sanchez, A.I.

    2015-01-01

    This paper proposes a new approach to Ageing Probabilistic Safety Assessment (APSA) modelling, which is intended to be used to support risk-informed decisions on the effectiveness of maintenance management programs and technical specification requirements of critical equipment of Nuclear Power Plants (NPP) within the framework of Risk Informed Decision Making according to R.G. 1.174 principles. This approach focuses on incorporating not only equipment ageing but also the effectiveness of maintenance and the efficiency of surveillance testing explicitly into APSA models and data. This methodology is applied to a motor-operated valve of the auxiliary feed water system (AFWS) of a PWR. This simple application example focuses on a critical piece of safety-related equipment of an NPP in order to evaluate the risk impact of considering different approaches to APSA and the combined effect of equipment ageing and of maintenance and testing alternatives along the NPP design life. The risk impact of several alternatives in maintenance strategy is discussed

  19. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA

    2017-04-01

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
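
    A hedged toy version of such an index: treat the recharge process as a random choice between two candidate models, each with its own random parameter, and estimate the fraction of output variance explained by everything that defines that process, Var(E[Y | recharge]) / Var(Y). The models, prior weights, and the scalar output below are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(2)
        precip = 1000.0   # fixed precipitation forcing [mm/year]

        def sample_recharge():
            """One realization of the recharge process: model choice plus its parameter."""
            if rng.random() < 0.5:                             # model R1, prior weight 0.5 (assumed)
                return rng.uniform(0.1, 0.3) * precip          # linear precipitation-to-recharge model
            return 2.0 * precip ** rng.uniform(0.55, 0.65)     # model R2, a different structure

        def sample_conductivity():
            """Geology process: log-normal hydraulic conductivity (assumed parameterization)."""
            return rng.lognormal(mean=0.0, sigma=0.5)

        def output(recharge, conductivity):
            return recharge / conductivity                     # toy scalar model output

        n_outer, n_inner = 1000, 200
        cond_means, all_y = [], []
        for _ in range(n_outer):
            r = sample_recharge()                              # freeze the recharge process
            y = np.array([output(r, sample_conductivity()) for _ in range(n_inner)])
            cond_means.append(y.mean())
            all_y.append(y)

        ps_recharge = np.var(cond_means) / np.var(np.concatenate(all_y))
        print(f"process sensitivity index of recharge: {ps_recharge:.2f}")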

  20. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  1. INNOVATION PROCESS MODELLING

    Directory of Open Access Journals (Sweden)

    JANUSZ K. GRABARA

    2011-01-01

    Full Text Available Modelling phenomena in accordance with the structural approach enables one to simplify the observed relations and to present the classification grounds. An example may be a model of organisational structure identifying the logical relations between particular units and presenting the division of authority and work.

  2. Animal models and conserved processes

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-09-01

    Full Text Available Abstract Background The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? Methods We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Results Evolution through natural selection has employed components and processes both to produce the same outcomes among species but also to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. Conclusion We conclude that even the presence of conserved processes is

  3. Animal models and conserved processes.

    Science.gov (United States)

    Greek, Ray; Rice, Mark J

    2012-09-10

    The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Evolution through natural selection has employed components and processes both to produce the same outcomes among species but also to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. We conclude that even the presence of conserved processes is insufficient for inter-species extrapolation when the trait or response

  4. Big-bang nucleosynthesis with a long-lived CHAMP including He4 spallation process

    Science.gov (United States)

    Yamanaka, Masato; Jittoh, Toshifumi; Kohri, Kazunori; Koike, Masafumi; Sato, Joe; Sugai, Kenichi; Yazaki, Koichi

    2014-03-01

    We propose helium-4 spallation processes induced by a long-lived stau in supersymmetric standard models, and investigate the impact of these processes on light-element abundances. We show that, as long as the phase space of the helium-4 spallation processes is open, they are more important than stau-catalyzed fusion and hence constrain the stau properties. This talk is based on the work of ref. [1].

  5. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    The subject of this thesis is to develop a methodological framework that can systematically guide mathematical model building for better understanding of multi-enzyme processes. In this way, opportunities for process improvements can be identified by analyzing simulations of either existing...... features of the process and provides the information required to structure the process model by using a step-by-step procedure with the required tools and methods. In this way, this framework increases efficiency of the model development process with respect to time and resources needed (fast and effective...... in the scientific literature. Reliable mathematical models of such multi-catalytic schemes can exploit the potential benefit of these processes. In this way, the best outcome of the process can be obtained understanding the types of modification that are required for process optimization. An effective evaluation...

  6. Modeling of column apparatus processes

    CERN Document Server

    Boyadjiev, Christo; Boyadjiev, Boyan; Popova-Krumova, Petya

    2016-01-01

    This book presents a new approach for the modeling of chemical and interphase mass transfer processes in industrial column apparatuses, using convection-diffusion and average-concentration models. The convection-diffusion type models are used for a qualitative analysis of the processes and to assess the main, small and slight physical effects, and then reject the slight effects. As a result, the process mechanism can be identified. It also introduces average concentration models for quantitative analysis, which use the average values of the velocity and concentration over the cross-sectional area of the column. The new models are used to analyze different processes (simple and complex chemical reactions, absorption, adsorption and catalytic reactions), and make it possible to model the processes of gas purification with sulfur dioxide, which form the basis of several patents.

  7. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Business Process Modeling: Perceived Benefits

    Science.gov (United States)

    Indulska, Marta; Green, Peter; Recker, Jan; Rosemann, Michael

    The process-centered design of organizations and information systems is globally seen as an appropriate response to the increased economic pressure on organizations. At the methodological core of process-centered management is process modeling. However, business process modeling in large initiatives can be a time-consuming and costly exercise, making it potentially difficult to convince executive management of its benefits. To date, and despite substantial interest and research in the area of process modeling, the understanding of the actual benefits of process modeling in academia and practice is limited. To address this gap, this paper explores the perception of benefits derived from process modeling initiatives, as reported through a global Delphi study. The study incorporates the views of three groups of stakeholders - academics, practitioners and vendors. Our findings lead to the first identification and ranking of 19 unique benefits associated with process modeling. The study in particular found that process modeling benefits vary significantly between practitioners and academics. We argue that the variations may point to a disconnect between research projects and practical demands.

  9. Chemical Process Modeling and Control.

    Science.gov (United States)

    Bartusiak, R. Donald; Price, Randel M.

    1987-01-01

    Describes some of the features of Lehigh University's (Pennsylvania) process modeling and control program. Highlights the creation and operation of the Chemical Process Modeling and Control Center (PMC). Outlines the program's philosophy, faculty, technical program, current research projects, and facilities. (TW)

  10. Chapter 1: Standard Model processes

    OpenAIRE

    Becher, Thomas

    2017-01-01

    This chapter documents the production rates and typical distributions for a number of benchmark Standard Model processes, and discusses new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  11. A Multiobjective Optimization Including Results of Life Cycle Assessment in Developing Biorenewables-Based Processes.

    Science.gov (United States)

    Helmdach, Daniel; Yaseneva, Polina; Heer, Parminder K; Schweidtmann, Artur M; Lapkin, Alexei A

    2017-09-22

    A decision support tool has been developed that uses global multiobjective optimization based on 1) the environmental impacts, evaluated within the framework of full life cycle assessment; and 2) process costs, evaluated by using rigorous process models. This approach is particularly useful in developing biorenewable-based energy solutions and chemicals manufacturing, for which multiple criteria must be evaluated and optimization-based decision-making processes are particularly attractive. The framework is demonstrated by using a case study of the conversion of terpenes derived from biowaste feedstocks into reactive intermediates. A two-step chemical conversion/separation sequence was implemented as a rigorous process model and combined with a life cycle model. A life cycle inventory for crude sulfate turpentine was developed, as well as a conceptual process of its separation into pure terpene feedstocks. The performed single- and multiobjective optimizations demonstrate the functionality of the optimization-based process development and illustrate the approach. The most significant advance is the ability to perform multiobjective global optimization, resulting in identification of a region of Pareto-optimal solutions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
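
    As a minimal sketch of the multiobjective core of such a tool, the function below filters a set of candidate process designs, each scored on cost and on a life-cycle environmental indicator, down to the Pareto-optimal (non-dominated) ones; the candidate values are invented.

        import numpy as np

        def pareto_front(objectives):
            """Return a boolean mask of non-dominated rows; both objectives are minimized."""
            obj = np.asarray(objectives, dtype=float)
            mask = np.ones(len(obj), dtype=bool)
            for i in range(len(obj)):
                # row i is dominated if some row is <= in every objective and < in at least one
                dominated_by = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
                if dominated_by.any():
                    mask[i] = False
            return mask

        # Candidate designs: (process cost [arbitrary units], life-cycle impact [kg CO2-eq per kg product])
        candidates = [(4.2, 1.8), (3.9, 2.6), (5.1, 1.2), (4.0, 2.7), (6.0, 1.1), (4.5, 1.9)]
        keep = pareto_front(candidates)
        print("Pareto-optimal designs:", [c for c, k in zip(candidates, keep) if k])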

  12. Business process modeling in healthcare.

    Science.gov (United States)

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making which is focused on achieving their objective of providing quality medical assistance. In this chapter, the application of business process modelling using the Business Process Modelling Notation (BPMN) standard is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in health care processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  13. Collapse models and perceptual processes

    International Nuclear Information System (INIS)

    Ghirardi, Gian Carlo; Romano, Raffaele

    2014-01-01

    Theories including a collapse mechanism were presented several years ago. They are based on a modification of standard quantum mechanics in which nonlinear and stochastic terms are added to the evolution equation. Their principal merits derive from the fact that they are mathematically precise schemes accounting, on the basis of a unique universal dynamical principle, both for the quantum behavior of microscopic systems and for the reduction associated with measurement processes and the classical behavior of macroscopic objects. Since such theories qualify themselves not as new interpretations but as modifications of the standard theory, they can, in principle, be tested against quantum mechanics. Recently, various investigations identifying possible crucial tests have been discussed. In spite of the extreme difficulty of performing such tests, it seems that recent technological developments allow one at least to put precise limits on the parameters characterizing the modifications of the evolution equation. Here we will simply mention some of the recent investigations in this direction, while we will mainly concentrate our attention on the way in which collapse theories account for definite perceptual processes. The differences between the case of reductions induced by perceptions and those related to measurement procedures by means of standard macroscopic devices will be discussed. On this basis, we suggest a precise experimental test of collapse theories involving conscious observers. We make plausible, by discussing a toy model in detail, that the modified dynamics can give rise to quite small but systematic errors in the visual perceptual process.

  14. Path modeling and process control

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Rodionova, O.; Pomerantsev, A.

    2007-01-01

    Many production processes are carried out in stages. At the end of each stage, the production engineer can analyze the intermediate results and correct process parameters (variables) of the next stage. Both analysis of the process and correction of the process parameters at the next stage should be performed with regard to the foreseeable output property y, and with respect to an admissible range of correcting actions for the parameters of the next stage. In this paper the basic principles of path modeling are presented. The mathematics is presented for processes having only one stage, having two stages, and having three or more stages. The methods are applied to process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. It is shown how the methods of path modeling can be applied to estimate variables...

  15. Model for safety reports including descriptive examples; Mall foer saekerhetsrapporter med beskrivande exempel

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-01

    Several safety reports will be produced in the process of planning and constructing the system for disposal of high-level radioactive waste in Sweden. The present report gives a model, with detailed examples, of how these reports should be organized and what steps they should include. In the near future safety reports will deal with the encapsulation plant and the repository. Later reports will treat operation of the handling systems and the repository.

  16. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
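
    For readers unfamiliar with the underlying machinery, the sketch below shows plain value iteration on a small, made-up Markov decision process. It illustrates only the MDP component; the measurement model in the abstract additionally links latent student traits to how closely observed actions follow the optimal policy, which is not reproduced here.

```python
# Minimal value-iteration sketch for a tiny, hypothetical MDP. The transition
# probabilities and rewards are made-up numbers used only for illustration.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
# P[a, s, s'] = transition probability; R[s, a] = expected reward
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 0.1], [0.0, 0.2], [1.0, 0.0]])

V = np.zeros(n_states)
for _ in range(500):                       # iterate the Bellman optimality operator
    Q = R + gamma * np.einsum("ask,k->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)                  # greedy policy implied by the values
print("state values:", np.round(V, 3), "greedy policy:", policy)
```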

  17. Process and Context in Choice Models

    DEFF Research Database (Denmark)

    Ben-Akiva, Moshe; Palma, André de; McFadden, Daniel

    2012-01-01

    We develop a general framework that extends choice models by including an explicit representation of the process and context of decision making. Process refers to the steps involved in decision making. Context refers to factors affecting the process, focusing in this paper on social networks....... The extended choice framework includes more behavioral richness through the explicit representation of the planning process preceding an action and its dynamics and the effects of context (family, friends, and market) on the process leading to a choice, as well as the inclusion of new types of subjective data...

  18. Simple Models for Process Control

    Czech Academy of Sciences Publication Activity Database

    Gorez, R.; Klán, Petr

    2011-01-01

    Roč. 22, č. 2 (2011), s. 58-62 ISSN 0929-2268 Institutional research plan: CEZ:AV0Z10300504 Keywords : process models * PID control * second order dynamics Subject RIV: JB - Sensors, Measurement, Regulation

  19. BALANCED SCORECARDS EVALUATION MODEL THAT INCLUDES ELEMENTS OF ENVIRONMENTAL MANAGEMENT SYSTEM USING AHP MODEL

    Directory of Open Access Journals (Sweden)

    Jelena Jovanović

    2010-03-01

    Full Text Available The research is oriented towards improvement of the environmental management system (EMS) using the BSC (Balanced Scorecard) model, which presents a strategic model of measurement and improvement of organisational performance. The research presents an approach for involving environmental management objectives and metrics (proposed by the literature review) in the conventional BSC of the AD "Barska plovidba" organisation. Further, we test the creation of an ECO-BSC model based on the business activities of non-profit organisations in order to improve the environmental management system in parallel with other systems of management. Using this approach we may obtain four models of BSC that include elements of the environmental management system for AD "Barska plovidba". Taking into account that implementation and evaluation need a long period of time in AD "Barska plovidba", the final choice will be based on ISO/IEC 14598 (Information technology - Software product evaluation) and ISO 9126 (Software engineering - Product quality) using the AHP method. Those standards are usually used for evaluation of the quality of software products and computer programs that serve in an organisation as support and factors for development. The AHP model will be based on evaluation criteria suggested by the ISO 9126 standard and on types of evaluation from two evaluation teams. Members of team 1 will be experts in BSC and environmental management systems who are not employed in the AD "Barska Plovidba" organisation. The members of team 2 will be managers of the AD "Barska Plovidba" organisation (including managers from the environmental department). Merging the results of the two previously created AHP models, one can obtain the most appropriate BSC that includes elements of the environmental management system. The chosen model will at the same time present a suggested approach for including ecological metrics in a conventional BSC model for a firm that has at least one ECO strategic orientation.
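
    As background on the AHP step mentioned above, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The 3x3 judgment matrix is illustrative only and is not taken from the study's evaluation teams.

```python
# Minimal AHP sketch: derive priority weights from a pairwise comparison
# matrix via the principal eigenvector and compute the consistency ratio.
# The 3x3 judgment matrix below is illustrative only, not data from the study.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                        # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)                 # consistency index
RI = 0.58                                       # Saaty's random index for n = 3
CR = CI / RI                                    # consistency ratio (< 0.1 is acceptable)
print("weights:", np.round(weights, 3), "CR:", round(CR, 3))
```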

  20. Analysis Technique for Exhaust Gas Including PFCs from Microelectronics Manufacturing Processes

    Science.gov (United States)

    Tomita, Nobuyasu; Isaki, Ryuichiro

    In the manufacturing processes of semiconductors and liquid crystal displays (LCDs), perfluorocompounds (PFCs), sulfur hexafluoride (SF6) and nitrogen trifluoride (NF3), which have a high greenhouse effect, are used in large quantities. To reduce emissions of these gases, the following countermeasures are taken: 1. optimization of PFC usage; 2. utilization of alternative gases; 3. installation of scrubbers for exhaust gas treatment. To inspect the effect of the countermeasures introduced for PFC emission reduction, it is necessary to analyze the PFCs in the exhaust gas. In this report, we describe an analysis technique for exhaust gas including PFCs from microelectronics manufacturing processes.

  1. Model feedstock supply processing plants

    Directory of Open Access Journals (Sweden)

    V. M. Bautin

    2013-01-01

    Full Text Available A model of raw material supply for processing enterprises that form part of a vertically integrated structure for the production and processing of dairy raw materials is developed. The model is distinguished by its orientation towards achieving a cumulative effect for the integrated structure, which acts as the criterion function; its maximization is reached by optimizing capacities, volumes of raw material deliveries and their qualitative characteristics, the costs of industrial processing of raw materials, and the demand for dairy products.

  2. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards...

  3. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    2010-01-01

    In reduced form default models, the instantaneous default intensity is the classical modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative...

  4. Modelling Hospital Materials Management Processes

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-06-01

    First, the paper proposes an integrated and detailed analysis and description model for hospital materials management data and tasks, which is able to tackle information from patient requirements to usage, from replenishment requests to supplying and handling activities. The model takes account of medical risk reduction, traceability and streamlined process perspectives. Second, the paper translates this information into a business process model and a mathematical formalization. The study provides a useful guide to the various relevant technology-related, management and business issues, laying the foundations of an efficient reengineering of the supply chain to reduce healthcare costs and improve the quality of care.

  5. Including Effects of Water Stress on Dead Organic Matter Decay to a Forest Carbon Model

    Science.gov (United States)

    Kim, H.; Lee, J.; Han, S. H.; Kim, S.; Son, Y.

    2017-12-01

    Decay of dead organic matter is a key process of carbon (C) cycling in forest ecosystems. The change in decay rate depends on temperature sensitivity and moisture conditions. The Forest Biomass and Dead organic matter Carbon (FBDC) model includes a decay sub-model considering temperature sensitivity, yet does not consider moisture conditions as a driver of changes in the decay rate. This study aimed to improve the FBDC model by adding a water stress function to the decay sub-model. Soil C sequestration under climate change was then simulated with the FBDC model including the water stress function. The water stress functions were determined with data from a decomposition study on Quercus variabilis forests and Pinus densiflora forests of Korea, and the adjustment parameters of the functions were determined for both species. The water stress functions were based on the ratio of precipitation to potential evapotranspiration. Including the water stress function increased the explained variance of the decay rate by 19% for the Q. variabilis forests and 7% for the P. densiflora forests, respectively. The increase in explained variance resulted from the large difference in temperature range and precipitation range across the decomposition study plots. During the experimental period, the mean annual temperature range was less than 3°C, while the annual precipitation ranged from 720 mm to 1466 mm. Application of the water stress functions to the FBDC model constrained the increasing trend of temperature sensitivity under climate change, and thus increased the model-estimated soil C sequestration (Mg C ha-1) by 6.6 for the Q. variabilis forests and by 3.1 for the P. densiflora forests, respectively. The addition of water stress functions increased the reliability of the decay rate estimation and could contribute to reducing the bias in estimating soil C sequestration under varying moisture conditions. Acknowledgement: This study was supported by Korea Forest Service (2017044B10-1719-BB01)
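
    The abstract specifies the driver of the water stress function (the precipitation to potential evapotranspiration ratio) but not its functional form. The sketch below therefore assumes a Q10-type temperature response scaled by a simple saturating moisture modifier purely for illustration; the FBDC model's actual equations and parameters may differ.

```python
# Hedged sketch of a dead-organic-matter decay rate with an assumed Q10
# temperature response scaled by a water stress modifier driven by the
# precipitation / potential-evapotranspiration ratio. Functional forms and
# parameter values are illustrative assumptions, not those of the FBDC model.
import math

def temperature_factor(temp_c: float, q10: float = 2.0, ref_temp: float = 10.0) -> float:
    """Assumed Q10-type temperature sensitivity relative to a reference temperature."""
    return q10 ** ((temp_c - ref_temp) / 10.0)

def water_stress_factor(precip_mm: float, pet_mm: float, half_sat: float = 0.5) -> float:
    """Saturating modifier in [0, 1) driven by the P/PET ratio (illustrative form)."""
    ratio = precip_mm / max(pet_mm, 1e-9)
    return ratio / (half_sat + ratio)

def decay_rate(k_base: float, temp_c: float, precip_mm: float, pet_mm: float) -> float:
    """Annual decay rate constant modified by temperature and moisture."""
    return k_base * temperature_factor(temp_c) * water_stress_factor(precip_mm, pet_mm)

# Example: same base rate under a dry year (720 mm) and a wet year (1466 mm)
for p in (720.0, 1466.0):
    print(p, round(decay_rate(0.2, 12.0, p, pet_mm=900.0), 4))
```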

  6. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret...

  7. Including policy and management in socio-hydrology models: initial conceptualizations

    Science.gov (United States)

    Hermans, Leon; Korbee, Dorien

    2017-04-01

    Socio-hydrology studies the interactions in coupled human-water systems. So far, the use of dynamic models that capture the direct feedback between societal and hydrological systems has been dominant. What has not yet been included with any particular emphasis is the policy or management layer, which is a central element in, for instance, integrated water resources management (IWRM) or adaptive delta management (ADM). Studying the direct interactions between human-water systems generates knowledge that eventually helps influence these interactions in ways that may ensure better outcomes - for society and for the health and sustainability of water systems. This influence sometimes occurs through spontaneous emergence, uncoordinated by societal agents - private sector, citizens, consumers, water users. However, the term 'management' in IWRM and ADM also implies an additional coordinated attempt through various public actors. This contribution is a call to include the policy and management dimension more prominently in the research focus of the socio-hydrology field, and offers first conceptual variables that should be considered in attempts to include this policy or management layer in socio-hydrology models. This is done by drawing on existing frameworks for studying policy processes throughout both planning and implementation phases. These include frameworks such as the advocacy coalition framework, collective learning and policy arrangements, which all emphasise longer-term dynamics and feedbacks between actor coalitions in strategic planning and implementation processes. A case about longer-term dynamics in the management of the Haringvliet in the Netherlands is used to illustrate the paper.

  8. Distillation modeling for a uranium refining process

    International Nuclear Information System (INIS)

    Westphal, B.R.

    1996-01-01

    As part of the spent fuel treatment program at Argonne National Laboratory, a vacuum distillation process is being employed for the recovery of uranium following an electrorefining process. Distillation of a salt electrolyte, containing a eutectic mixture of lithium and potassium chlorides, from uranium is achieved by a simple batch operation and is termed "cathode processing". The incremental distillation of electrolyte salt will be modeled by an equilibrium expression and on a molecular basis since the operation is conducted under moderate vacuum conditions. As processing continues, the two models will be compared and analyzed for correlation with actual operating results. Possible factors that may contribute to aberrations from the models include impurities at the vapor-liquid boundary, distillate reflux, anomalous pressure gradients, and mass transport phenomena at the evaporating surface. Ultimately, the purpose of either process model is to enable the parametric optimization of the process
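
    The abstract states that incremental salt distillation will be modeled by an equilibrium expression but does not give it. As a generic stand-in, the sketch below integrates a Rayleigh-type batch distillation with an assumed constant relative volatility; it is not the ANL cathode-processing model.

```python
# Hedged sketch of incremental (Rayleigh) batch distillation with a constant
# relative volatility. This is a generic equilibrium-based batch model for
# illustration only; the ANL equilibrium and molecular-flow expressions are
# not given in the abstract and are not reproduced here.
def rayleigh_step(moles: float, x: float, d_moles: float, alpha: float):
    """Advance one small distillation increment; return remaining moles and liquid mole fraction."""
    y = alpha * x / (1.0 + (alpha - 1.0) * x)        # equilibrium vapor composition
    new_moles = moles - d_moles
    new_x = (moles * x - d_moles * y) / new_moles    # component balance on the liquid
    return new_moles, new_x

moles, x = 100.0, 0.5        # initial charge and volatile-component mole fraction (made up)
alpha, d_moles = 4.0, 0.01   # assumed relative volatility and increment size
while moles > 20.0:          # boil off 80% of the charge in small increments
    moles, x = rayleigh_step(moles, x, d_moles, alpha)
print("final liquid fraction of volatile component:", round(x, 4))
```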

  9. Neuroscientific model of motivational process.

    Science.gov (United States)

    Kim, Sung-Il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  10. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  11. Modelling Mediterranean agro-ecosystems by including agricultural trees in the LPJmL model

    Science.gov (United States)

    Fader, M.; von Bloh, W.; Shi, S.; Bondeau, A.; Cramer, W.

    2015-11-01

    In the Mediterranean region, climate and land use change are expected to impact on natural and agricultural ecosystems by warming, reduced rainfall, direct degradation of ecosystems and biodiversity loss. Human population growth and socioeconomic changes, notably on the eastern and southern shores, will require increases in food production and put additional pressure on agro-ecosystems and water resources. Coping with these challenges requires informed decisions that, in turn, require assessments by means of a comprehensive agro-ecosystem and hydrological model. This study presents the inclusion of 10 Mediterranean agricultural plants, mainly perennial crops, in an agro-ecosystem model (Lund-Potsdam-Jena managed Land - LPJmL): nut trees, date palms, citrus trees, orchards, olive trees, grapes, cotton, potatoes, vegetables and fodder grasses. The model was successfully tested in three model outputs: agricultural yields, irrigation requirements and soil carbon density. With the development presented in this study, LPJmL is now able to simulate in good detail and mechanistically the functioning of Mediterranean agriculture with a comprehensive representation of ecophysiological processes for all vegetation types (natural and agricultural) and in a consistent framework that produces estimates of carbon, agricultural and hydrological variables for the entire Mediterranean basin. This development paves the way for further model extensions aiming at the representation of alternative agro-ecosystems (e.g. agroforestry), and opens the door for a large number of applications in the Mediterranean region, for example assessments of the consequences of land use transitions, the influence of management practices and climate change impacts.

  12. Process Models for Security Architectures

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2006-01-01

    Full Text Available This paper presents a model for an integrated security system, which can be implemented in any organization. It is based on security-specific standards and taxonomies such as ISO 7498-2 and the Common Criteria. The functionalities are derived from the classes proposed in the Common Criteria document. In the paper we present the process model for each functionality and we also focus on the specific components.

  13. Mathematical modelling in economic processes.

    Directory of Open Access Journals (Sweden)

    L.V. Kravtsova

    2008-06-01

    Full Text Available The article considers a number of methods for the mathematical modelling of economic processes and the possibilities of using Excel spreadsheets to obtain optimal solutions to problems or to perform financial calculations with the help of the built-in functions.

  14. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret...

  15. Investigating the Process of Process Modeling with Eye Movement Analysis

    OpenAIRE

    Pinggera, Jakob; Furtner, Marco; Martini, Markus; Sachse, Pierre; Reiter, Katharina; Zugal, Stefan; Weber, Barbara

    2015-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models by analyzing the modeler's interactions with the modeling environment. In this paper we aim to complement previous insights on the modeler's modeling behavior with data gathered by tracking the modeler's eye movements when engaged in the act of modeling. We present preliminary results and outline directions for future research to triangulate toward a more comprehensive un...

  16. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  17. Steady-State Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    illustrate the “equation oriented” approach as well as the “sequential modular” approach to solving complex flowsheets for steady state applications. The applications include the Williams-Otto plant, the hydrodealkylation (HDA) of toluene, conversion of ethylene to ethanol and a bio-ethanol process....

  18. Analog modelling of obduction processes

    Science.gov (United States)

    Agard, P.; Zuo, X.; Funiciello, F.; Bellahsen, N.; Faccenna, C.; Savva, D.

    2012-04-01

    Obduction corresponds to one of plate tectonics oddities, whereby dense, oceanic rocks (ophiolites) are presumably 'thrust' on top of light, continental ones, as for the short-lived, almost synchronous Peri-Arabic obduction (which took place along thousands of km from Turkey to Oman in c. 5-10 Ma). Analog modelling experiments were performed to study the mechanisms of obduction initiation and test various triggering hypotheses (i.e., plate acceleration, slab hitting the 660 km discontinuity, ridge subduction; Agard et al., 2007). The experimental setup comprises (1) an upper mantle, modelled as a low-viscosity transparent Newtonian glucose syrup filling a rigid Plexiglas tank and (2) high-viscosity silicone plates (Rhodrosil Gomme with PDMS iron fillers to reproduce densities of continental or oceanic plates), located at the centre of the tank above the syrup to simulate the subducting and the overriding plates - and avoid friction on the sides of the tank. Convergence is simulated by pushing on a piston at one end of the model with velocities comparable to those of plate tectonics (i.e., in the range 1-10 cm/yr). The reference set-up includes, from one end to the other (~60 cm): (i) the piston, (ii) a continental margin containing a transition zone to the adjacent oceanic plate, (iii) a weakness zone with variable resistance and dip (W), (iv) an oceanic plate - with or without a spreading ridge, (v) a subduction zone (S) dipping away from the piston and (vi) an upper, active continental margin, below which the oceanic plate is being subducted at the start of the experiment (as is known to have been the case in Oman). Several configurations were tested and over thirty different parametric tests were performed. Special emphasis was placed on comparing different types of weakness zone (W) and the extent of mechanical coupling across them, particularly when plates were accelerated. Displacements, together with along-strike and across-strike internal deformation in all

  19. Model for amorphous aggregation processes

    Science.gov (United States)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

    The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatinlike protein, and α -lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  20. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets.

    Science.gov (United States)

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-03-03

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.
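
    To make the role of the configurable parameters concrete, the sketch below runs a highly simplified Monte Carlo of ideal discovery latency for one advertiser and one scanner on a single advertising channel; the interval values are examples and the chipset scanning gaps measured in the paper are deliberately not modelled.

```python
# Simplified Monte Carlo sketch of BLE neighbor discovery latency on a single
# advertising channel under ideal, standard-specified behaviour: advertising
# events occur every advInterval plus a uniform 0-10 ms advDelay, and the
# scanner listens during scanWindow out of every scanInterval. Chipset
# non-idealities (extra scanning gaps) reported in the paper are not modelled.
import random

def discovery_latency(adv_interval=0.100, scan_interval=0.200, scan_window=0.050,
                      adv_duration=0.0004, max_time=60.0):
    """Return the time (s) of the first advertising packet fully inside a scan window."""
    t = random.uniform(0.0, adv_interval)            # random phase of the first event
    while t < max_time:
        offset = t % scan_interval                   # position within the scanner cycle
        if offset + adv_duration <= scan_window:     # packet fits inside the open window
            return t
        t += adv_interval + random.uniform(0.0, 0.010)
    return max_time                                  # not discovered within the horizon

random.seed(1)
samples = [discovery_latency() for _ in range(2000)]
print("mean latency [s]:", round(sum(samples) / len(samples), 3))
```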

  1. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets

    Directory of Open Access Journals (Sweden)

    David Perez-Diaz de Cerio

    2017-03-01

    Full Text Available The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.

  2. Models of transport processes in concrete

    International Nuclear Information System (INIS)

    Pommersheim, J.M.; Clifton, J.R.

    1991-01-01

    An approach being considered by the US Nuclear Regulatory Commission for disposal of low-level radioactive waste is to place the waste forms in concrete vaults buried underground. The vaults would need a service life of 500 years. Approaches for predicting the service life of concrete of such vaults include the use of mathematical models. Mathematical models are presented in this report for the major degradation processes anticipated for the concrete vaults, which are corrosion of steel reinforcement, sulfate attack, acid attack, and leaching. The models mathematically represent rate controlling processes including diffusion, convection, and reaction and sorption of chemical species. These models can form the basis for predicting the life of concrete under in-service conditions. 33 refs., 6 figs., 7 tabs
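
    Among the rate-controlling processes listed, diffusion is the simplest to illustrate. The sketch below solves Fick's second law with an explicit finite-difference scheme for a species penetrating a concrete wall from one exposed face; the diffusion coefficient, geometry and time horizon are placeholder values, not parameters from the report.

```python
# Minimal explicit finite-difference sketch of Fick's second law,
# dC/dt = D * d2C/dx2, for a species diffusing into a concrete wall from one
# face held at a fixed surface concentration. Geometry, D and the time horizon
# are placeholder values for illustration, not parameters from the report.
import numpy as np

D = 1.0e-12            # effective diffusion coefficient [m^2/s] (placeholder)
L, nx = 0.10, 51       # wall thickness [m] and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D # stable explicit time step (<= 0.5*dx^2/D)
years = 50.0
steps = int(years * 365.25 * 24 * 3600 / dt)

C = np.zeros(nx)
C[0] = 1.0             # exposed face: normalized surface concentration
for _ in range(steps):
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2.0 * C[1:-1] + C[:-2])
    C[-1] = C[-2]      # zero-flux condition at the inner face

print("relative concentration at mid-depth after 50 years:", round(C[nx // 2], 3))
```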

  3. A structural model for the in vivo human cornea including collagen-swelling interaction.

    Science.gov (United States)

    Cheng, Xi; Petsche, Steven J; Pinsky, Peter M

    2015-08-06

    A structural model of the in vivo cornea, which accounts for tissue swelling behaviour, for the three-dimensional organization of stromal fibres and for collagen-swelling interaction, is proposed. Modelled as a binary electrolyte gel in thermodynamic equilibrium, the stromal electrostatic free energy is based on the mean-field approximation. To account for active endothelial ionic transport in the in vivo cornea, which modulates osmotic pressure and hydration, stromal mobile ions are shown to satisfy a modified Boltzmann distribution. The elasticity of the stromal collagen network is modelled based on three-dimensional collagen orientation probability distributions for every point in the stroma obtained by synthesizing X-ray diffraction data for azimuthal angle distributions and second harmonic-generated image processing for inclination angle distributions. The model is implemented in a finite-element framework and employed to predict free and confined swelling of stroma in an ionic bath. For the in vivo cornea, the model is used to predict corneal swelling due to increasing intraocular pressure (IOP) and is adapted to model swelling in Fuchs' corneal dystrophy. The biomechanical response of the in vivo cornea to a typical LASIK surgery for myopia is analysed, including tissue fluid pressure and swelling responses. The model provides a new interpretation of the corneal active hydration control (pump-leak) mechanism based on osmotic pressure modulation. The results also illustrate the structural necessity of fibre inclination in stabilizing the corneal refractive surface with respect to changes in tissue hydration and IOP. © 2015 The Author(s).

  4. Process modeling study of the CIF incinerator

    International Nuclear Information System (INIS)

    Hang, T.

    1995-01-01

    The Savannah River Site (SRS) plans to begin operating the Consolidated Incineration Facility (CIF) in 1996. The CIF will treat liquid and solid low-level radioactive, mixed and RCRA hazardous wastes generated at SRS. In addition to experimental test programs, process modeling was applied to provide guidance in the areas of safety, environmental regulation compliance, process improvement and optimization. A steady-state flowsheet model was used to calculate material/energy balances and to track key chemical constituents throughout the process units. Dynamic models were developed to predict the CIF transient characteristics in normal and abnormal operation scenarios. Predictions include the rotary kiln heat transfer, dynamic responses of the CIF to fluctuations in the solid waste feed or upsets in the system equipment, performance of the control system, air inleakage in the kiln, etc. This paper reviews the modeling study performed to assist in the deflagration risk assessment.

  5. Chain binomial models and binomial autoregressive processes.

    Science.gov (United States)

    Weiss, Christian H; Pollett, Philip K

    2012-09-01

    We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation. © 2011, The International Biometric Society.
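
    To illustrate the connection described above, the sketch below simulates a binomial AR(1) process written in chain-binomial form: occupied patches persist by binomial thinning while empty patches are colonized independently. The parameter values are illustrative and the paper's estimation approaches are not reproduced.

```python
# Sketch of a binomial AR(1) process X_t = a∘X_{t-1} + b∘(N - X_{t-1}), where
# '∘' denotes binomial thinning, interpreted as extinction-colonization
# dynamics on N patches. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_binomial_ar1(N=50, a=0.7, b=0.2, T=200, x0=10):
    """Occupied-patch counts: survivors of X_{t-1} plus colonizations of empty patches."""
    x = np.empty(T, dtype=int)
    x[0] = x0
    for t in range(1, T):
        survivors = rng.binomial(x[t - 1], a)        # occupied patches that stay occupied
        colonized = rng.binomial(N - x[t - 1], b)    # empty patches that become occupied
        x[t] = survivors + colonized
    return x

x = simulate_binomial_ar1()
pi = 0.2 / (1 - 0.7 + 0.2)     # stationary occupancy probability b / (1 - a + b)
print("sample mean:", x.mean(), "theoretical mean:", 50 * pi)
```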

  6. Habitability of super-Earth planets around other suns: models including Red Giant Branch evolution.

    Science.gov (United States)

    von Bloh, W; Cuntz, M; Schröder, K-P; Bounama, C; Franck, S

    2009-01-01

    The unexpected diversity of exoplanets includes a growing number of super-Earth planets, i.e., exoplanets with masses of up to several Earth masses and a similar chemical and mineralogical composition as Earth. We present a thermal evolution model for a 10 Earth-mass planet orbiting a star like the Sun. Our model is based on the integrated system approach, which describes the photosynthetic biomass production and takes into account a variety of climatological, biogeochemical, and geodynamical processes. This allows us to identify a so-called photosynthesis-sustaining habitable zone (pHZ), as determined by the limits of biological productivity on the planetary surface. Our model considers solar evolution during the main-sequence stage and along the Red Giant Branch as described by the most recent solar model. We obtain a large set of solutions consistent with the principal possibility of life. The highest likelihood of habitability is found for "water worlds." Only mass-rich water worlds are able to realize pHZ-type habitability beyond the stellar main sequence on the Red Giant Branch.

  7. Modelling of Water Cooled Fuel Including Design Basis and Severe Accidents. Proceedings of a Technical Meeting

    International Nuclear Information System (INIS)

    2015-11-01

    The demands on nuclear fuel have recently been increasing, and include transient regimes, higher discharge burnup and longer fuel cycles. This has resulted in an increase of loads on fuel and core internals. In order to satisfy these demands while ensuring compliance with safety criteria, new national and international programmes have been launched and advanced modelling codes are being developed. The Fukushima Daiichi accident has particularly demonstrated the need for adequate analysis of all aspects of fuel performance to prevent a failure and also to predict fuel behaviour were an accident to occur.This publication presents the Proceedings of the Technical Meeting on Modelling of Water Cooled Fuel Including Design Basis and Severe Accidents, which was hosted by the Nuclear Power Institute of China (NPIC) in Chengdu, China, following the recommendation made in 2013 at the IAEA Technical Working Group on Fuel Performance and Technology. This recommendation was in agreement with IAEA mid-term initiatives, linked to the post-Fukushima IAEA Nuclear Safety Action Plan, as well as the forthcoming Coordinated Research Project (CRP) on Fuel Modelling in Accident Conditions. At the technical meeting in Chengdu, major areas and physical phenomena, as well as types of code and experiment to be studied and used in the CRP, were discussed. The technical meeting provided a forum for international experts to review the state of the art of code development for modelling fuel performance of nuclear fuel for water cooled reactors with regard to steady state and transient conditions, and for design basis and early phases of severe accidents, including experimental support for code validation. A round table discussion focused on the needs and perspectives on fuel modelling in accident conditions. This meeting was the ninth in a series of IAEA meetings, which reflects Member States’ continuing interest in nuclear fuel issues. The previous meetings were held in 1980 (jointly with

  8. Exergoeconomic performance optimization for a steady-flow endoreversible refrigeration model including six typical cycles

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lingen; Kan, Xuxian; Sun, Fengrui; Wu, Feng [College of Naval Architecture and Power, Naval University of Engineering, Wuhan 430033 (China)

    2013-07-01

    The operation of a universal steady-flow endoreversible refrigeration cycle model consisting of a constant thermal-capacity heating branch, two constant thermal-capacity cooling branches and two adiabatic branches is viewed as a production process with exergy as its output. The finite-time exergoeconomic performance optimization of the refrigeration cycle is investigated by taking the profit rate as the optimization objective. The relations between the profit rate and the temperature ratio of the working fluid, between the COP (coefficient of performance) and the temperature ratio of the working fluid, as well as the optimal relation between the profit rate and the COP of the cycle are derived. The focus of this paper is to seek a compromise between economics (profit rate) and the utilization factor (COP) for endoreversible refrigeration cycles by searching for the optimum COP at maximum profit, which is termed the finite-time exergoeconomic performance bound. Moreover, performance analysis and optimization of the model are carried out in order to investigate the effect of the cycle process on the performance of the cycles using a numerical example. The results obtained herein include the performance characteristics of endoreversible Carnot, Diesel, Otto, Atkinson, Dual and Brayton refrigeration cycles.

  9. Rosemary Aromatization of Extra Virgin Olive Oil and Process Optimization Including Antioxidant Potential and Yield

    Directory of Open Access Journals (Sweden)

    Erkan Karacabey

    2016-08-01

    Full Text Available Aromatization of olive oil, especially with spices and herbs, has been a widely used technique throughout the ages in Mediterranean diets. The present study focused on the aromatization of olive oil by rosemary (Rosmarinus officinalis L.). The aromatization process was optimized by response surface methodology as a function of the malaxation conditions (temperature and time). To the authors' best knowledge, this is the first time that oil yield performance has been examined together with antioxidant potential and pigments under the effect of aromatization parameters. For all oil samples, the values of free acidity, peroxide value, K232 and K270 as quality parameters fell within the ranges established for the highest quality category "extra virgin oil". Oil yield (mL oil/kg olive paste) changed from 158 to 208 with respect to the design parameters. Total phenolic content and free radical scavenging activity as antioxidant potential of the olive oil samples varied in the range of 182.44 - 348.65 mg gallic acid equivalent/kg oil and 28.91 - 88.75 % inhibition of 2,2-diphenyl-1-picrylhydrazyl (DPPH•), respectively. Total contents of carotenoid, chlorophyll and pheophytin a as pigments in the oil samples were found to be in the range of 0.09 - 0.48 mg carotenoid/kg oil, 0.11 - 0.96 mg chlorophyll/kg oil, and 0.15 - 4.44 mg pheo a/kg oil, respectively. The proposed models for the yield, pigment and antioxidant potential responses were found to be good enough for successful prediction of the experimental results. Total phenolics, carotenoids and free radical scavenging activity of the aromatized olive oil and the oil yield were maximized together, and the optimal conditions were determined as 25°C, 84 min, and 2 % (rosemary/olive paste; w/w).

  10. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes on the d-dimensional unit sphere Sd . These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel which we assume is a complex covariance function...... and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach....

  11. Models of memory: information processing.

    Science.gov (United States)

    Eysenck, M W

    1988-01-01

    A complete understanding of human memory will necessarily involve consideration of the active processes involved at the time of learning and of the organization and nature of representation of information in long-term memory. In addition to process and structure, it is important for theory to indicate the ways in which stimulus-driven and conceptually driven processes interact with each other in the learning situation. Not surprisingly, no existent theory provides a detailed specification of all of these factors. However, there are a number of more specific theories which are successful in illuminating some of the component structures and processes. The working memory model proposed by Baddeley and Hitch (1974) and modified subsequently has shown how the earlier theoretical construct of the short-term store should be replaced with the notion of working memory. In essence, working memory is a system which is used both to process information and to permit the transient storage of information. It comprises a number of conceptually distinct, but functionally interdependent components. So far as long-term memory is concerned, there is evidence of a number of different kinds of representation. Of particular importance is the distinction between declarative knowledge and procedural knowledge, a distinction which has received support from the study of amnesic patients. Kosslyn has argued for a distinction between literal representation and propositional representation, whereas Tulving has distinguished between episodic and semantic memories. While Tulving's distinction is perhaps the best known, there is increasing evidence that episodic and semantic memory differ primarily in content rather than in process, and so the distinction may be of less theoretical value than was originally believed.(ABSTRACT TRUNCATED AT 250 WORDS)

  12. The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation

    Science.gov (United States)

    Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models, do not require specification of wall distance, and have similar or reduced computational effort compared with these models.

  13. Modeling pellet impact drilling process

    Science.gov (United States)

    Kovalyov, A. V.; Ryabchikov, S. Ya; Isaev, Ye D.; Ulyanova, O. S.

    2016-03-01

    The paper describes pellet impact drilling, which could be used to increase the drilling speed and the rate of penetration when drilling hard rocks. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The experiments conducted have allowed modeling the process of pellet impact drilling, which creates the scientific and methodological basis for engineering design of drilling operations under different geo-technical conditions.

  14. Integrated modelling in materials and process technology

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri

    2008-01-01

    Integrated modelling of entire process sequences and the subsequent in-service conditions, and multiphysics modelling of the single process steps are areas that increasingly support optimisation of manufactured parts. In the present paper, three different examples of modelling manufacturing...... processes from the viewpoint of combined materials and process modelling are presented: solidification of thin walled ductile cast iron, integrated modelling of spray forming and multiphysics modelling of friction stir welding. The fourth example describes integrated modelling applied to a failure analysis...

  15. A visual analysis of the process of process modeling

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.; Pinggera, J.; Reijers, H.A.; Weber, B.; Poels, G.

    2015-01-01

    The construction of business process models has become an important requisite in the analysis and optimization of processes. The success of the analysis and optimization efforts heavily depends on the quality of the models. Therefore, a research domain emerged that studies the process of process

  16. Consultation on microbiological criteria for foods to be further processed including by irradiation

    International Nuclear Information System (INIS)

    1989-01-01

    Many foods carry microorganisms that may have serious consequences for the health of the consumer. There is thus often a need for processing to eliminate the resulting health hazards. Concern has been expressed that treatments, especially irradiation, might be applied to clean up food that has not been hygienically processed. Adherence to good manufacturing practice can greatly assist food processors to ensure food quality and safety. Figs

  17. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  18. Speech-Language Therapists' Process of Including Significant Others in Aphasia Rehabilitation

    Science.gov (United States)

    Hallé, Marie-Christine; Le Dorze, Guylaine; Mingant, Anne

    2014-01-01

    Background: Although aphasia rehabilitation should include significant others, it is currently unknown how this recommendation is adopted in speech-language therapy practice. Speech-language therapists' (SLTs) experience of including significant others in aphasia rehabilitation is also understudied, yet a better understanding of clinical…

  19. Derivative processes for modelling metabolic fluxes

    Science.gov (United States)

    Žurauskienė, Justina; Kirk, Paul; Thorne, Thomas; Pinney, John; Stumpf, Michael

    2014-01-01

    Motivation: One of the challenging questions in modelling biological systems is to characterize the functional forms of the processes that control and orchestrate molecular and cellular phenotypes. Recently proposed methods for the analysis of metabolic pathways, for example, dynamic flux estimation, can only provide estimates of the underlying fluxes at discrete time points but fail to capture the complete temporal behaviour. To describe the dynamic variation of the fluxes, we additionally require the assumption of specific functional forms that can capture the temporal behaviour. However, it also remains unclear how to address the noise which might be present in experimentally measured metabolite concentrations. Results: Here we propose a novel approach to modelling metabolic fluxes: derivative processes that are based on multiple-output Gaussian processes (MGPs), which are a flexible non-parametric Bayesian modelling technique. The main advantages that follow from the MGP approach include the natural non-parametric representation of the fluxes and the ability to impute the missing data in between the measurements. Our derivative process approach allows us to model changes in metabolite derivative concentrations and to characterize the temporal behaviour of metabolic fluxes from time course data. Because the derivative of a Gaussian process is itself a Gaussian process, we can readily link metabolite concentrations to metabolic fluxes and vice versa. Here we discuss how this can be implemented in an MGP framework and illustrate its application to simple models, including nitrogen metabolism in Escherichia coli. Availability and implementation: R code is available from the authors upon request. Contact: j.norkunaite@imperial.ac.uk; m.stumpf@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24578401
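
    The central property used in the record above is that differentiation is a linear operator, so a Gaussian process and its derivative are jointly Gaussian. The single-output sketch below exploits this to predict time derivatives (a proxy for fluxes) from noisy concentration measurements with a fixed RBF kernel; the multiple-output construction and hyperparameter inference of the paper are not reproduced.

```python
# Minimal sketch of the derivative-process idea for a single-output GP with an
# RBF kernel: f'(t) is jointly Gaussian with f(t), so the time derivative can
# be predicted directly from noisy "concentration" measurements. The kernel
# hyperparameters are fixed by hand here purely for illustration.
import numpy as np

def rbf(t1, t2, var=1.0, ell=1.0):
    return var * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / ell**2)

def rbf_dt1(t1, t2, var=1.0, ell=1.0):
    """d k(t1, t2) / d t1 -- cross-covariance between f'(t1) and f(t2)."""
    return -(t1[:, None] - t2[None, :]) / ell**2 * rbf(t1, t2, var, ell)

rng = np.random.default_rng(3)
t_obs = np.linspace(0.0, 6.0, 15)
y_obs = np.sin(t_obs) + 0.05 * rng.normal(size=t_obs.size)   # noisy observations
t_new = np.linspace(0.0, 6.0, 7)

K = rbf(t_obs, t_obs) + 0.05**2 * np.eye(t_obs.size)
alpha = np.linalg.solve(K, y_obs)
dmean = rbf_dt1(t_new, t_obs) @ alpha                        # posterior mean of f'(t_new)
print(np.round(dmean, 2))           # should roughly track cos(t_new)
print(np.round(np.cos(t_new), 2))
```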

  20. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  1. ICT Enabling More Energy Efficient Processes, Including e-Invoicing as a Case

    NARCIS (Netherlands)

    Bomhof, F.W.; Hoorik, P.M. van; Hoeve, M.C.

    2012-01-01

    ICT has the potential to enable a low carbon economy, as pointed out by many studies. One example of the energy (and CO2) saving potential of ICT is illustrated in this chapter: how much energy (and emissions) can be saved if the invoicing process is redesigned? Although there is a net positive

  2. Method of solution preparation of polyolefin class polymers for electrospinning processing included

    Science.gov (United States)

    Rabolt, John F. (Inventor); Lee, Keun-Hyung (Inventor); Givens, Steven R. (Inventor)

    2011-01-01

    A process to make a polyolefin fiber which has the following steps: mixing at least one polyolefin into a solution at room temperature or a slightly elevated temperature to form a polymer solution and electrospinning at room temperature said polymer solution to form a fiber.

  3. Deterministic geologic processes and stochastic modeling

    International Nuclear Information System (INIS)

    Rautman, C.A.; Flint, A.L.

    1991-01-01

    Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues

  4. A Mathematical Model of Cigarette Smoldering Process

    Directory of Open Access Journals (Sweden)

    Chen P

    2014-12-01

    A mathematical model for a smoldering cigarette has been proposed. In the analysis of the cigarette combustion and pyrolysis processes, a receding burning front is defined, which has a constant temperature (~450 °C) and divides the cigarette into two zones, the burning zone and the pyrolysis zone. The char combustion processes in the burning zone and the pyrolysis of virgin tobacco and evaporation of water in the pyrolysis zone are included in the model. The hot gases flowing from the burning zone are assumed to go out as sidestream smoke during smoldering. The internal heat transport is characterized by effective thermal conductivities in each zone. Thermal conduction of cigarette paper and convective and radiative heat transfer at the outer surface were also considered. The governing partial differential equations were solved using an integral method. Model predictions of smoldering speed as well as temperature and density profiles in the pyrolysis zone for different kinds of cigarettes were found to agree with the experimental data. The model also predicts the coal length and the maximum coal temperatures during smoldering conditions. The model provides a relatively fast and efficient way to simulate the cigarette burning processes. It offers a practical tool for exploring important parameters for cigarette smoldering processes, such as tobacco components, properties of cigarette paper, and heat generation in the burning zone and its dependence on the mass burn rate.

  5. Modeling of Reaction Processes Controlled by Diffusion

    International Nuclear Information System (INIS)

    Revelli, Jorge

    2003-01-01

    Stochastic modeling is quite powerful in science and technology. The techniques derived from this process have been used with great success in laser theory, biological systems and chemical reactions. Besides, they provide a theoretical framework for the analysis of experimental results in the field of particle diffusion in ordered and disordered materials. In this work we analyze transport processes in one-dimensional fluctuating media, which are media that change their state in time. This fact induces changes in the movements of the particles, giving rise to different phenomena and dynamics that will be described and analyzed in this work. We present some random walk models to describe these fluctuating media. These models include state transitions governed by different dynamical processes. We also analyze the trapping problem in a lattice by means of a simple model which predicts a resonance-like phenomenon. Also we study effective diffusion processes over surfaces due to random walks in the bulk. We consider different boundary conditions and transition movements. We derive expressions that describe diffusion behaviors constrained by bulk restrictions and the dynamics of the particles. Finally it is important to mention that the theoretical results obtained from the models proposed in this work are compared with Monte Carlo simulations. We find, in general, excellent agreement between the theory and the simulations.

  6. Similarity metrics for surgical process models.

    Science.gov (United States)

    Neumuth, Thomas; Loebe, Frank; Jannin, Pierre

    2012-01-01

    The objective of this work is to introduce a set of similarity metrics for comparing surgical process models (SPMs). SPMs are progression models of surgical interventions that support quantitative analyses of surgical activities, supporting systems engineering or process optimization. Five different similarity metrics are presented and proven. These metrics deal with several dimensions of process compliance in surgery, including granularity, content, time, order, and frequency of surgical activities. The metrics were experimentally validated using 20 clinical data sets each for cataract interventions, craniotomy interventions, and supratentorial tumor resections. The clinical data sets were controllably modified in simulations, which were iterated ten times, resulting in a total of 600 simulated data sets. The simulated data sets were subsequently compared to the original data sets to empirically assess the predictive validity of the metrics. We show that the results of the metrics for the surgical process models correlate significantly with the degree of modification, i.e., the metrics meet predictive validity. The clinical use of the metrics was demonstrated exemplarily by assessing the learning curves of observers during surgical process model acquisition. Measuring similarity between surgical processes is a complex task. However, metrics for computing the similarity between surgical process models are needed in many applications in the field of medical engineering. These metrics are essential whenever two SPMs need to be compared, such as during the evaluation of technical systems, the education of observers, or the determination of surgical strategies. These metrics are key figures that provide a solid base for medical decisions, such as during validation of sensor systems for use in operating rooms in the future. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Process for removing and detoxifying cadmium from scrap metal including mixed waste

    International Nuclear Information System (INIS)

    Kronberg, J.W.

    1994-01-01

    Cadmium-bearing scrap from nuclear applications, such as neutron shielding and reactor control and safety rods, must usually be handled as mixed waste since it is radioactive and the cadmium in it is both leachable and highly toxic. Removing the cadmium from this scrap, and converting it to a nonleachable and minimally radioactive form, would greatly simplify disposal or recycling. A process now under development will do this by shredding the scrap; leaching it with reagents which selectively dissolve out the cadmium; reprecipitating the cadmium as its highly insoluble sulfide; then fusing the sulfide into a glassy matrix to bring its leachability below EPA limits before disposal. Alternatively, the cadmium may be recovered for reuse. A particular advantage of the process is that all reagents (except the glass frit) can easily be recovered and reused in a nearly closed cycle, minimizing the risk of radioactive release. The process does not harm common metals such as aluminum, iron and stainless steel, and is also applicable to non-nuclear cadmium-bearing scrap such as nickel-cadmium batteries

  8. Fast phase processing in off-axis holography by CUDA including parallel phase unwrapping.

    Science.gov (United States)

    Backoach, Ohad; Kariv, Saar; Girshovitz, Pinhas; Shaked, Natan T

    2016-02-22

    We present a parallel processing implementation for rapid extraction of quantitative phase maps from off-axis holograms on the graphics processing unit (GPU) of the computer using Compute Unified Device Architecture (CUDA) programming. To obtain an efficient implementation, we parallelized both the wrapped phase map extraction algorithm and the two-dimensional phase unwrapping algorithm. In contrast to previous implementations, we utilized an unweighted least squares phase unwrapping algorithm that better suits parallelism. We compared the proposed algorithm's run times on the CPU and the GPU of the computer for various sizes of off-axis holograms. Using the GPU implementation, we extracted the unwrapped phase maps from the recorded off-axis holograms at 35 frames per second (fps) for 4-megapixel holograms, and at 129 fps for 1-megapixel holograms, which represents the fastest processing frame rates obtained so far, to the best of our knowledge. We then used common-path off-axis interferometric imaging to quantitatively capture the phase maps of a micro-organism with rapid flagellum movements.
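
    As a rough CPU-side sketch of the unweighted least-squares unwrapping step mentioned above (the classic DCT/Poisson formulation, not the authors' CUDA kernels), the wrapped phase map can be unwrapped as follows; the NumPy/SciPy implementation and all names are our own illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    # Wrap values into the interval [-pi, pi)
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def unwrap_lsq(psi):
    """Unweighted least-squares 2-D phase unwrapping (DCT/Poisson formulation).

    psi: 2-D wrapped phase map in radians. Returns the unwrapped phase
    up to an additive constant.
    """
    M, N = psi.shape
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))   # wrapped column differences
    dy[:-1, :] = wrap(np.diff(psi, axis=0))   # wrapped row differences

    # Divergence of the wrapped gradient: right-hand side of the Poisson equation
    rho = dx.copy()
    rho[:, 1:] -= dx[:, :-1]
    rho += dy
    rho[1:, :] -= dy[:-1, :]

    # Solve the discrete Poisson equation with Neumann boundaries via the DCT
    rho_hat = dctn(rho, norm="ortho")
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                          # avoid 0/0; the overall offset is arbitrary
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm="ortho")

# Example: recover a synthetic quadratic phase surface from its wrapped version
y, x = np.mgrid[0:256, 0:256] / 256.0
true_phase = 40.0 * (x ** 2 + y ** 2)
recovered = unwrap_lsq(wrap(true_phase))
print(np.allclose(recovered - recovered.mean(), true_phase - true_phase.mean(), atol=0.1))
```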

  9. Big-bang nucleosynthesis with a long-lived charged massive particle including He4 spallation processes

    Science.gov (United States)

    Jittoh, Toshifumi; Kohri, Kazunori; Koike, Masafumi; Sato, Joe; Sugai, Kenichi; Yamanaka, Masato; Yazaki, Koichi

    2011-08-01

    We propose helium-4 spallation processes induced by a long-lived stau in supersymmetric standard models, and investigate the impact of these processes on light-element abundances. We show that, as long as the phase space of helium-4 spallation processes is open, they are more important than stau-catalyzed fusion and hence constrain the stau properties.

  10. Investigation of Techno-Stress Levels of Teachers Who Were Included in Technology Integration Processes

    Science.gov (United States)

    Çoklar, Ahmet Naci; Efilti, Erkan; Sahin, Yusef Levent; Akçay, Arif

    2016-01-01

    Techno-stress is defined as a modern adaptation disorder resulting from the failure in coping with new technologies in a healthy way. Techno-stress affects many occupational groups, including teachers. FATIH project and many other previous studies conducted in Turkey in recent years have necessitated the use of technology for teachers. The present…

  11. Process For Controlling Flow Rate Of Viscous Materials Including Use Of Nozzle With Changeable Openings

    Science.gov (United States)

    Ellingson, William A.; Forster, George A.

    1999-11-02

    Apparatus and a method for controlling the flow rate of viscous materials through a nozzle includes an apertured main body and an apertured end cap coupled together and having an elongated, linear flow channel extending the length thereof. An end of the main body is disposed within the end cap and includes a plurality of elongated slots concentrically disposed about and aligned with the flow channel. A generally flat cam plate having a center aperture is disposed between the main body and end cap and is rotatable about the flow channel. A plurality of flow control vane assemblies are concentrically disposed about the flow channel and are coupled to the cam plate. Each vane assembly includes a vane element disposed adjacent the end of the flow channel. Rotation of the cam plate in a first direction causes a corresponding rotation of each of the vane elements for positioning the individual vane elements over the aperture in the end cap blocking flow through the flow channel, while rotation in an opposite direction removes the vane elements from the aperture and positions them about the flow channel in a nested configuration in the full open position, with a continuous range of vane element positions available between the full open and closed positions.

  12. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    Directory of Open Access Journals (Sweden)

    Miroslaw Luft

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.
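
    As a generic illustration of the state variable method for a series-parallel (LCC) resonant tank driven by a square-wave inverter voltage, the three state equations can be integrated numerically as sketched below; the component values and the Python/SciPy formulation are our own assumptions, not the parameters or the Maple procedures of the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative series-parallel (LCC) resonant tank: L and Cs in series,
# Cp in parallel with the load R. Values are placeholders.
L, Cs, Cp, R = 100e-6, 1e-6, 1e-6, 10.0      # H, F, F, ohm
f_sw, Vdc = 10e3, 100.0                      # switching frequency (Hz), DC link (V)

def u(t):
    # Idealised square-wave output of the inverter bridge
    return Vdc if (t * f_sw) % 1.0 < 0.5 else -Vdc

def rhs(t, x):
    iL, vCs, vCp = x                          # state variables
    diL = (u(t) - vCs - vCp) / L              # KVL around the series path
    dvCs = iL / Cs                            # series capacitor current = iL
    dvCp = (iL - vCp / R) / Cp                # node equation at the parallel capacitor
    return [diL, dvCs, dvCp]

sol = solve_ivp(rhs, (0.0, 2e-3), [0.0, 0.0, 0.0], max_step=1e-7)
print(f"peak load voltage ~ {abs(sol.y[2]).max():.1f} V")
```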

  13. Mathematical model of thyristor inverter including a series-parallel resonant circuit

    OpenAIRE

    Luft, M.; Szychta, E.

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  14. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    OpenAIRE

    Miroslaw Luft; Elzbieta Szychta

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  15. Modeling of Pem Fuel Cell Systems Including Controls and Reforming Effects for Hybrid Automotive Applications

    National Research Council Canada - National Science Library

    Boettner, Daisie

    2001-01-01

    This study develops models for a stand-alone Proton Exchange Membrane (PEM) fuel cell stack, a direct-hydrogen fuel cell system including auxiliaries, and a methanol reforming fuel cell system for integration into a vehicle performance simulator...

  16. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to allow very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  17. Spectral line intensities of NeVII for non-equilibrium ionization plasma including dielectronic recombination processes

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Izumi; Kato, Takako [National Inst. for Fusion Science, Toki, Gifu (Japan); Safronova, U.

    1999-01-01

    We have calculated the dielectronic recombination rate coefficients from Li-like Ne (Ne7+) ions to Be-like Ne (Ne6+) ions for selected excited states of Ne6+ ions. A collisional-radiative model (CRM) for Ne6+ ions is constructed to calculate the population density of each excited state in non-equilibrium ionization plasmas, including recombining processes. NeVII spectral line intensities and the radiative power loss are calculated with the CRM. A density effect caused by collisional excitation from the metastable state 2s2p 3P is found at an electron density of 10^5 - 10^17 cm^-3. The collisional excitations between excited states become important at high electron temperature T_e > ~100 eV. (author)

  18. Modeling of the Direct Current Generator Including the Magnetic Saturation and Temperature Effects

    Directory of Open Access Journals (Sweden)

    Alfonso J. Mercado-Samur

    2013-11-01

    In this paper, the inclusion of the temperature effect on the field resistance in the direct current generator model DC1A, which is valid for stability studies, is proposed. First, the linear generator model is presented; then the effect of magnetic saturation and the change in the resistance value due to the temperature produced by the field current are included. Experimental results are compared with model simulations to validate the model. A direct current generator model which is a better representation of the generator is obtained. Visual comparison between simulations and experimental results shows the success of the proposed model, because it presents the lowest error of the compared models. The accuracy of the proposed model is confirmed by a Modified Normalized Sum of Squared Errors index equal to 3.8979%.
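
    A minimal sketch of the two effects added to a linear DC1A-type model, assuming a linear copper temperature coefficient for the field resistance and a polynomial saturation curve fitted to measured data; the coefficients below are hypothetical, not those identified in the paper.

```python
# Illustrative sketch: temperature-dependent field resistance and a fitted saturation curve.
ALPHA_CU = 0.00393          # copper temperature coefficient, 1/degC (textbook value)

def field_resistance(r_ref, t, t_ref=25.0):
    """Field-winding resistance corrected for operating temperature."""
    return r_ref * (1.0 + ALPHA_CU * (t - t_ref))

def saturation_emf(i_f, coeffs=(0.0, 1.2, -0.05)):
    """Open-circuit EMF vs field current from a fitted polynomial (hypothetical coefficients)."""
    return sum(c * i_f ** k for k, c in enumerate(coeffs))

print(field_resistance(100.0, 75.0))   # ohms at 75 degC for a 100-ohm winding at 25 degC
print(saturation_emf(2.0))             # per-unit EMF at 2 A field current
```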

  19. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  20. Combining prior knowledge with data driven modeling of a batch distillation column including start-up

    NARCIS (Netherlands)

    van Lith, PF; Betlem, BHL; Roffel, B

    2003-01-01

    This paper presents the development of a simple model which describes the product quality and production over time of an experimental batch distillation column, including start-up. The model structure is based on a simple physical framework, which is augmented with fuzzy logic. This provides a way

  1. Enhanced UWB Radio Channel Model for Short-Range Communication Scenarios Including User Dynamics

    DEFF Research Database (Denmark)

    Kovacs, Istvan Zsolt; Nguyen, Tuan Hung; Eggers, Patrick Claus F.

    2005-01-01

    The proposed channel model represents an enhancement of the existing IEEE 802.15.3a/4a PAN channel model, where antenna and user-proximity effects are not included. Our investigations showed that significant variations of the received wideband power and time-delay signal clustering are possible due to the human body...

  2. Process Correlation Analysis Model for Process Improvement Identification

    Directory of Open Access Journals (Sweden)

    Su-jin Choi

    2014-01-01

    Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  3. Including natural systems into the system engineering process: benefits to spaceflight and beyond

    Science.gov (United States)

    Studor, George

    2014-03-01

    How did we get to the point where we don't have time to be inspired by the wonders of Nature? Our office walls, homes and city streets are so plain that even when we do escape to a retreat with nature all around us, we may be blind to its magnificence. Yet there are many who have applied what can be known of natural systems (NS) to create practical solutions, but often definite applications for them are lacking. Mimicry of natural systems is not only more possible than ever before, but the education and research programs in many major universities are churning out graduates with a real appreciation for Nature's complex integrated systems. What if these skills and perspectives were employed in the teams of systems engineers and the technology developers that support them to help the teams think "outside-the-box" of manmade inventions? If systems engineers (SE) and technology developers regularly asked the question, "what can we learn from Nature that will help us?" as a part of their processes, they would discover another set of potential solutions. Biomimicry and knowledge of natural systems is exploding. What does this mean for systems engineering and technology? Some disciplines such as robotics and medical devices must consider nature constantly. Perhaps it's time for all technology developers and systems engineers to perceive natural systems experts as potential providers of the technologies they need.

  4. Including operational data in QMRA model: development and impact of model inputs.

    Science.gov (United States)

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis (QMRA) approach, has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-normal for concentrations above the detection limit (DL), and a uniform distribution for concentrations below the DL. The choice of modelling approach was found to change the computed risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full-scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
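
    A minimal Monte Carlo sketch of the kind of QMRA calculation described here, with a mixed raw-water concentration distribution (uniform below the detection limit, log-normal above it); all parameter values, the dose-response form, and the treatment-credit distribution are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                      # Monte Carlo iterations (hypothetical setting)

DL = 0.1                         # detection limit, (oo)cysts/L (assumed)
p_below_dl = 0.4                 # fraction of samples below the DL (assumed)

# Mixed source-water concentration model: uniform below the DL,
# log-normal above it (as described in the abstract).
below = rng.uniform(0.0, DL, size=n)
above = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n)   # assumed parameters
conc = np.where(rng.random(n) < p_below_dl, below, above)

log_removal = rng.normal(3.0, 0.5, size=n)   # treatment credit, assumed distribution
ingestion = 1.0                              # litres of unboiled tap water per day (assumed)

dose = conc * 10.0 ** (-log_removal) * ingestion
r = 0.0199                                   # exponential dose-response parameter (illustrative)
p_daily = 1.0 - np.exp(-r * dose)
p_annual = 1.0 - (1.0 - p_daily) ** 365

print(f"mean annual risk of infection: {p_annual.mean():.2e}")
```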

  5. Mathematical modeling of the hypothalamic–pituitary–adrenal gland (HPA) axis, including hippocampal mechanisms

    DEFF Research Database (Denmark)

    Andersen, Morten; Vinther, Frank; Ottesen, Johnny T.

    2013-01-01

    This paper presents a mathematical model of the HPA axis. The HPA axis consists of the hypothalamus, the pituitary and the adrenal glands, in which the three hormones CRH, ACTH and cortisol interact through receptor dynamics. Furthermore, it has been suggested that receptors in the hippocampus have an influence on the axis. A model is presented with three coupled, non-linear differential equations, with the hormones CRH, ACTH and cortisol as variables. The model includes the known features of the HPA axis, and includes the effects from the hippocampus through its impact on CRH in the hypothalamus.

  6. From business value model to coordination process model

    NARCIS (Netherlands)

    Fatemi, Hassan; Wieringa, Roelf J.; Poler, R.; van Sinderen, Marten J.; Sanchis, R.

    2009-01-01

    The increased complexity of business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business and process perspectives, and for mutually aligning these perspectives. Business value modeling and coordination process modeling both are necessary

  7. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    The article outlines the application of the process approach to the functional description of a designed IT system supporting the operations of a secret office, which processes classified documents. The article describes the application of the method of incremental modeling of business processes according to the BPMN model to the description of the processes currently implemented ("as is") in a manual manner and the target processes ("to be") using RFID technology for the purpose of their automation. Additionally, examples of applying structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency are presented. A possible extension of the process analysis method is the application of a process warehouse and process mining methods.

  8. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  9. Towards the Automated Annotation of Process Models

    NARCIS (Netherlands)

    Leopold, H.; Meilicke, C.; Fellmann, M.; Pittke, F.; Stuckenschmidt, H.; Mendling, J.

    2016-01-01

    Many techniques for the advanced analysis of process models build on the annotation of process models with elements from predefined vocabularies such as taxonomies. However, the manual annotation of process models is cumbersome and sometimes even hardly manageable taking the size of taxonomies into account.

  10. Aqueous Electrolytes: Model Parameters and Process Simulation

    DEFF Research Database (Denmark)

    Thomsen, Kaj

    This thesis deals with aqueous electrolyte mixtures. The Extended UNIQUAC model is used to describe the excess Gibbs energy of such solutions. Extended UNIQUAC parameters for the twelve ions Na+, K+, NH4+, H+, Cl-, NO3-, SO42-, HSO4-, OH-, CO32-, HCO3-, and S2O82- are estimated. A computer program including a steady-state process simulator for the design, simulation, and optimization of fractional crystallization processes is presented.

  11. Properties of spatial Cox process models

    DEFF Research Database (Denmark)

    Møller, Jesper

    2005-01-01

    Particularly, we study the most important classes of Cox processes, including log Gaussian Cox processes, shot noise Cox processes, and permanent Cox processes. We consider moment properties and point process operations such as thinning, displacements, and superposition. We also discuss how to simulate specific Cox processes.

  12. Dynamic experiments with high bisphenol-A concentrations modelled with an ASM model extended to include a separate XOC degrading microorganism

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Press-Kristensen, Kåre; Vanrolleghem, P.A.

    2009-01-01

    Dynamic experiments with the endocrine disrupting XOC bisphenol-A (BPA) in an activated sludge process with real wastewater were used to hypothesize an ASM-based process model including aerobic growth of a specific BPA-degrading microorganism and sorption of BPA to sludge. A parameter estimation method was developed, which...

  13. A computational model of human auditory signal processing and perception

    OpenAIRE

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell t...

  14. Property Modelling for Applications in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical-chemical properties of pure chemicals and their mixtures play an important role in the design of chemicals-based products and the processes that manufacture them. Although the use of experimental data in the design and analysis of chemicals-based products and their processes is desirable, ... tools such as a database, a property model library, model parameter regression, and property-model-based product-process design will be presented. The database contains pure component and mixture data for a wide range of organic chemicals. The property models are based on the combined group contribution and atom ... The use of these modeling tools in the design and analysis of chemical product-process design, including biochemical processes, will be highlighted.

  15. Modification of TOUGH2 to Include the Dusty Gas Model for Gas Diffusion; TOPICAL

    International Nuclear Information System (INIS)

    WEBB, STEPHEN W.

    2001-01-01

    The GEO-SEQ Project is investigating methods for geological sequestration of CO2. This project, which is directed by LBNL and includes a number of other industrial, university, and national laboratory partners, is evaluating computer simulation methods including TOUGH2 for this problem. The TOUGH2 code, which is a widely used code for flow and transport in porous and fractured media, includes simplified methods for gas diffusion based on a direct application of Fick's law. As shown by Webb (1998) and others, the Dusty Gas Model (DGM) is better than Fick's law for modeling gas-phase diffusion in porous media. In order to improve gas-phase diffusion modeling for the GEO-SEQ Project, the EOS7R module in the TOUGH2 code has been modified to include the Dusty Gas Model as documented in this report. In addition, the liquid diffusion model has been changed from a mass-based formulation to a mole-based model. Modifications for separate and coupled diffusion in the gas and liquid phases have also been completed. The results from the DGM are compared to Fick's law behavior for TCE and PCE diffusion across a capillary fringe. The differences are small due to the relatively high permeability (k = 10^-11 m^2) of the problem and the small mole fraction of the gases. Additional comparisons for lower permeabilities and higher mole fractions may be useful.

  16. Dusty Plasma Modeling of the Fusion Reactor Sheath Including Collisional-Radiative Effects

    International Nuclear Information System (INIS)

    Dezairi, Aouatif; Samir, Mhamed; Eddahby, Mohamed; Saifaoui, Dennoun; Katsonis, Konstantinos; Berenguer, Chloe

    2008-01-01

    The structure and the behavior of the sheath in Tokamak collisional plasmas have been studied. The sheath is modeled taking into account the presence of dust and the effects of charged-particle collisions and radiative processes. The latter may allow for optical diagnostics of the plasma.

  17. Extending the formal model of a spatial data infrastructure to include volunteered geographical information

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2011-07-01

    ... up-to-date VGI, have led to the integration of VGI into some SDIs. Therefore it is necessary to rethink our formal model of an SDI to accommodate VGI. We started our rethinking process with the SDI stakeholders in an attempt to establish which changes...

  18. A compositional process control model and its application to biochemical processes.

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.

    1999-01-01

    A compositional generic process control model is presented which has been applied to control enzymatic biochemical processes. The model has been designed at a conceptual and formal level using the compositional development method DESIRE, and includes processes for analysis, planning and simulation.

  19. A Compositional Process Control Model and its Application to Biochemical Processes

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.

    2002-01-01

    A compositional generic process control model is presented which has been applied to control enzymatic biochemical processes. The model has been designed at a conceptual and formal level using the compositional development method DESIRE, and includes processes for analysis, planning and simulation.

  20. Numerical Acoustic Models Including Viscous and Thermal losses: Review of Existing and New Methods

    DEFF Research Database (Denmark)

    Andersen, Peter Risby; Cutanda Henriquez, Vicente; Aage, Niels

    2017-01-01

    This work presents an updated overview of numerical methods including acoustic viscous and thermal losses. Numerical modelling of viscothermal losses has gradually become more important due to the general trend of making acoustic devices smaller. Not including viscothermal acoustic losses in such numerical computations will therefore lead to inaccurate or even wrong results. Both Finite Element Method (FEM) and Boundary Element Method (BEM) formulations are available that incorporate these loss mechanisms. Including viscothermal losses in FEM computations can be computationally very demanding; FEM and BEM methods including viscothermal dissipation are compared and investigated.

  1. Direct-phase-variable model of a synchronous reluctance motor including all slot and winding harmonics

    International Nuclear Information System (INIS)

    Obe, Emeka S.; Binder, A.

    2011-01-01

    A detailed model in direct-phase variables of a synchronous reluctance motor operating at mains voltage and frequency is presented. The model includes the stator and rotor slot openings, the actual winding layout and the reluctance rotor geometry. Hence, all mmf and permeance harmonics are taken into account. It is seen that non-negligible harmonics introduced by slots are present in the inductances computed by the winding function procedure. These harmonics are usually ignored in d-q models. The machine performance is simulated in the stator reference frame to depict the difference between this new direct-phase model including all harmonics and the conventional rotor reference frame d-q model. Saturation is included by using a polynomial fitting the variation of d-axis inductance with stator current obtained by the finite-element software FEMAG DC. The detailed phase-variable model can yield torque pulsations comparable to those obtained from finite elements while the d-q model cannot.

  2. Process intensification of delignification and enzymatic hydrolysis of delignified cellulosic biomass using various process intensification techniques including cavitation.

    Science.gov (United States)

    Nagula, Karuna Narsappa; Pandit, Aniruddha Bhalchandra

    2016-08-01

    Different methods of pretreatment of Napier grass have been tried, including alkali treatment, treatment with ultrasound, biological treatment using laccase enzyme, and combined ultrasound-laccase treatment. For alkali pretreatment, the optimized conditions were 0.3% w/v sodium hydroxide at a temperature of 80°C and a treatment time of 2 h, giving 86% delignification. For the physical method, ultrasound treatment at a temperature of 45°C for 2 h, operating at a frequency of 24 kHz and a power of 100 W, gave 18% delignification. For laccase pretreatment, the optimized conditions were a 300 rpm impeller speed and an enzyme concentration of 10 U/g of Napier grass, giving 50% delignification with cellulose. The optimized conditions for delignification using the combined ultrasound-enzymatic treatment were 24 kHz and 100 W, giving 75% delignification in 6 h. An enhancement in lignin degradation by 25% and a reduction in treatment time from 12 to 6 h were achieved compared to laccase treatment alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Business Process Modelling based on Petri nets

    Directory of Open Access Journals (Sweden)

    Qin Jianglong

    2017-01-01

    Business process modelling is the way business processes are expressed. Business process modelling is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises to achieve internal information system integration and reuse, but also help enterprises to achieve external collaboration. Based on the prototype Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net. It is a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company proved that the modelling algorithm is correct and effective.
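
    A toy place/transition net, sketched here only to make the token-firing semantics concrete; the net, place names and transitions are hypothetical and far simpler than the extended generalized stochastic Petri net of the paper, which additionally carries time and cost factors.

```python
# Minimal place/transition Petri net sketch (illustrative, not the paper's model).
# marking: token count per place; transitions: (input places, output places).
marking = {"received": 1, "registered": 0, "archived": 0}
transitions = {
    "register": ({"received": 1}, {"registered": 1}),
    "archive":  ({"registered": 1}, {"archived": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n        # consume input tokens
    for p, n in post.items():
        marking[p] += n        # produce output tokens

for t in ["register", "archive"]:
    if enabled(t):
        fire(t)
print(marking)                 # {'received': 0, 'registered': 0, 'archived': 1}
```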

  4. Modeling process flow using diagrams

    NARCIS (Netherlands)

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process improvement projects.

  5. Modeling of Temperature-Dependent Noise in Silicon Nanowire FETs including Self-Heating Effects

    OpenAIRE

    Anandan, P.; Malathi, N.; Mohankumar, N.

    2014-01-01

    Silicon nanowires are leading the CMOS era towards the downsizing limit, and their nature effectively suppresses short-channel effects. Accurate modeling of thermal noise in nanowires is crucial for RF applications of emerging nano-CMOS technologies. In this work, a temperature-dependent model for silicon nanowires including the self-heating effects has been derived and its effects on device parameters have been observed. The power spectral density as a function of thermal resi...

  6. Dipole model analysis of highest precision HERA data, including very low Q2's

    International Nuclear Information System (INIS)

    Luszczak, A.; Kowalski, H.

    2016-12-01

    We analyse, within a dipole model, the final, inclusive HERA DIS cross section data in the low-x region, using fully correlated errors. We show that these highest precision data are very well described within the dipole model framework starting from Q^2 values of 3.5 GeV^2 up to the highest values of Q^2 = 250 GeV^2. To analyze saturation effects we evaluated the data including also the very low Q^2 region, 0.35 < Q^2 < 3.5 GeV^2; the fits including this region show a preference for the saturation ansatz.
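
    For context, a widely used saturation ansatz in dipole-model analyses of HERA data is the GBW-type parameterisation sketched below; the functional form is standard, but the parameter values shown are generic placeholders rather than the fit results of this analysis.

```python
import numpy as np

def gbw_dipole_sigma(r, x, sigma0=23.0, x0=3.0e-4, lam=0.29):
    """GBW-type saturated dipole cross section (illustrative parameters).

    r : dipole size in GeV^-1, x : Bjorken x.
    Returns the dipole cross section in the same units as sigma0 (e.g., mb).
    """
    Qs2 = (x0 / x) ** lam                         # saturation scale squared, GeV^2
    return sigma0 * (1.0 - np.exp(-r ** 2 * Qs2 / 4.0))

print(gbw_dipole_sigma(r=1.0, x=1e-3))
```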

  7. modeling grinding modeling grinding processes as micro processes

    African Journals Online (AJOL)

    eobe

    ... into two parts: static specific chip formation energy and dynamic specific chip formation ... the ratio of static normal chip formation force to static tangential chip formation force and the ratio ... grinding processing parameters to the friction coefficient between workpiece and grinding wheel. From equation (20), the calculation ...

  8. The No-Core Gamow Shell Model: Including the continuum in the NCSM

    CERN Document Server

    Barrett, B R; Michel, N; Płoszajczak, M

    2015-01-01

    We are witnessing an era of intense experimental efforts that will provide information about the properties of nuclei far from the line of stability, regarding resonant and scattering states as well as (weakly) bound states. This talk describes our formalism for including these necessary ingredients into the No-Core Shell Model by using the Gamow Shell Model approach. Applications of this new approach, known as the No-Core Gamow Shell Model, both to benchmark cases as well as to unstable nuclei will be given.

  9. Conceptualizing a Dynamic Fall Risk Model Including Intrinsic Risks and Exposures.

    Science.gov (United States)

    Klenk, Jochen; Becker, Clemens; Palumbo, Pierpaolo; Schwickert, Lars; Rapp, Kilan; Helbostad, Jorunn L; Todd, Chris; Lord, Stephen R; Kerse, Ngaire

    2017-11-01

    Falls are a major cause of injury and disability in older people, leading to serious health and social consequences including fractures, poor quality of life, loss of independence, and institutionalization. To design and provide adequate prevention measures, accurate understanding and identification of a person's individual fall risk are important. However, to date, the performance of fall risk models is weak compared with models estimating, for example, cardiovascular risk. This deficiency may result from 2 factors. First, current models consider risk factors to be stable for each person and not to change over time, an assumption that does not reflect real-life experience. Second, current models do not consider the interplay of individual exposure, including type of activity (eg, walking, undertaking transfers) and the environmental risks (eg, lighting, floor conditions) in which the activity is performed. Therefore, we posit a dynamic fall risk model consisting of intrinsic risk factors that vary over time and exposure (activity in context). eHealth sensor technology (eg, smartphones) begins to enable the continuous measurement of both of the above factors. We illustrate our model with examples of real-world falls from the FARSEEING database. This dynamic framework for fall risk adds important aspects that may improve understanding of fall mechanisms, fall risk models, and the development of fall prevention interventions. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  10. Modeling of cylindrical surrounding gate MOSFETs including the fringing field effects

    International Nuclear Information System (INIS)

    Gupta, Santosh K.; Baishya, Srimanta

    2013-01-01

    A physically based analytical model for surface potential and threshold voltage including the fringing gate capacitances in cylindrical surround gate (CSG) MOSFETs has been developed. Based on this a subthreshold drain current model has also been derived. This model first computes the charge induced in the drain/source region due to the fringing capacitances and considers an effective charge distribution in the cylindrically extended source/drain region for the development of a simple and compact model. The fringing gate capacitances taken into account are outer fringe capacitance, inner fringe capacitance, overlap capacitance, and sidewall capacitance. The model has been verified with the data extracted from 3D TCAD simulations of CSG MOSFETs and was found to be working satisfactorily. (semiconductor devices)

  11. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
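
    A baseline comparison of the two small-world characteristics discussed here can be reproduced with a standard Watts-Strogatz graph; the degree-distribution extension developed in the paper is not implemented in this sketch, and the parameters are illustrative.

```python
import networkx as nx

# Standard Watts-Strogatz small-world graph; the paper's model additionally
# imposes a tunable degree distribution, which is not reproduced here.
n, k, p = 1000, 10, 0.1          # nodes, nearest neighbours, rewiring probability
G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)

L = nx.average_shortest_path_length(G)   # characteristic path length
C = nx.average_clustering(G)             # clustering coefficient
print(f"L = {L:.2f}, C = {C:.3f}")
```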

  12. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  13. Mathematical Modeling: A Structured Process

    Science.gov (United States)

    Anhalt, Cynthia Oropesa; Cortez, Ricardo

    2015-01-01

    Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…

  14. Process Modeling With Inhomogeneous Thin Films

    Science.gov (United States)

    Machorro, R.; Macleod, H. A.; Jacobson, M. R.

    1986-12-01

    Designers of optical multilayer coatings commonly assume that the individual layers will be ideally homogeneous and isotropic. In practice, it is very difficult to control the conditions involved in the complex evaporation process sufficiently to produce such ideal films. Clearly, changes in process parameters, such as evaporation rate, chamber pressure, and substrate temperature, affect the microstructure of the growing film, frequently producing inhomogeneity in structure or composition. In many cases, these effects are interdependent, further complicating the situation. However, this process can be simulated on powerful, interactive, and accessible microcomputers. In this work, we present such a model and apply it to estimate the influence of an inhomogeneous layer on multilayer performance. Presently, the program simulates film growth, thermal expansion and contraction, and thickness monitoring procedures, and includes the effects of uncertainty in these parameters or noise. Although the model is being developed to cover very general cases, we restrict the present discussion to isotropic and nondispersive quarterwave layers to understand the particular effects of inhomogeneity. We studied several coating designs and related results and tolerances to variations in evaporation conditions. The model is composed of several modular subprograms, is written in Fortran, and is executed on an IBM-PC with 640 K of memory. The results can be presented in graphic form on a monochrome monitor. We are currently installing and implementing color capability to improve the clarity of the multidimensional output.

  15. Including an ocean carbon cycle model into iLOVECLIM (v1.0)

    NARCIS (Netherlands)

    Bouttes, N.; Roche, D.M.V.A.P.; Mariotti, V.; Bopp, L.

    2015-01-01

    The atmospheric carbon dioxide concentration plays a crucial role in the radiative balance and as such has a strong influence on the evolution of climate. Because of the numerous interactions between climate and the carbon cycle, it is necessary to include a model of the carbon cycle within a

  16. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubininа

    2015-06-01

    The essence of process-oriented enterprise management has been examined in the article. The content and types of information technology have been analyzed, given the complexity and differentiation of existing methods as well as the specificity of the language and terminology of enterprise business process modeling. The theoretical aspects of business process modeling have been reviewed, and modern traditional modeling techniques that have received practical application in the visualization model of retailers' activity have been studied in the article. In the course of the theoretical analysis of the modeling methods it was found that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes due to its integrated systemology capabilities. A visualized simulation model of the retailers' business process "sales" ("as is") was designed using a combination of UFO-elements, with the aim of further practical formalization and optimization of the given business process.

  17. Spherical Process Models for Global Spatial Statistics

    KAUST Repository

    Jeong, Jaehong

    2017-11-28

    Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have compelled statisticians to use the chordal distance to compute the covariance matrix in many applications instead, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed. We discuss the issues that arise when dealing with spherical data sets on a global scale and provide references to recent literature. We review the current approaches to building process models on spheres, including the differential operator, the stochastic partial differential equation, the kernel convolution, and the deformation approaches. We illustrate realizations obtained from Gaussian processes with different covariance structures and the use of isotropic and nonstationary covariance models through deformations and geographical indicators for global surface temperature data. To assess the suitability of each method, we compare their log-likelihood values and prediction scores, and we end with a discussion of related research problems.
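
    The geodesic-versus-chordal distinction discussed here can be made concrete with a short sketch; the haversine formula and the exponential covariance are standard, while the radius, range parameter, and test points are illustrative.

```python
import numpy as np

def geodesic(lat1, lon1, lat2, lon2, R=6371.0):
    # Great-circle (geodesic) distance on a sphere of radius R (haversine formula), in km
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def chordal(lat1, lon1, lat2, lon2, R=6371.0):
    # Straight-line distance through the sphere between the same two points
    return 2 * R * np.sin(geodesic(lat1, lon1, lat2, lon2, R) / (2 * R))

# Exponential covariance evaluated with both metrics (illustrative parameters)
sigma2, range_km = 1.0, 2000.0
d_geo = geodesic(0.0, 0.0, 0.0, 90.0)     # two points a quarter of the equator apart
d_chd = chordal(0.0, 0.0, 0.0, 90.0)
print(sigma2 * np.exp(-d_geo / range_km), sigma2 * np.exp(-d_chd / range_km))
```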

  18. Modelling heat processing of dairy products

    NARCIS (Netherlands)

    Hotrum, N.; Fox, M.B.; Lieverloo, H.; Smit, E.; Jong, de P.; Schutyser, M.A.I.

    2010-01-01

    This chapter discusses the application of computer modelling to optimise the heat processing of milk. The chapter first reviews types of heat processing equipment used in the dairy industry. Then, the types of objectives that can be achieved using model-based process optimisation are discussed.

  19. How visual cognition influences process model comprehension

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2017-01-01

    Process analysts and other professionals extensively use process models to analyze business processes and identify performance improvement opportunities. Therefore, it is important that such models can be easily and properly understood. Previous research has mainly focused on two types of factors

  20. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    Coolen, F.P.A.

    1997-01-01

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, which can also be considered as a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, which is an important statistical method to deal with reliability or survival data including right-censored observations. As for the product-limit estimator, the model considered in this paper aims at not using any information other than that provided by observed data, but our model fits into the robust Bayesian context, which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time-axis, and as such it is also related to life-tables.
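
    For reference, the classical product-limit (Kaplan-Meier) estimator that the paper relates its inferences to can be sketched as follows; the small data set at the end is invented purely for illustration.

```python
import numpy as np

def product_limit(times, events):
    """Kaplan-Meier (product-limit) estimate of the survival function.

    times  : observed times (failure or right-censoring)
    events : 1 if a failure was observed, 0 if the observation was right-censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_unique = np.unique(times[events == 1])     # distinct failure times
    surv, t_out, s_out = 1.0, [], []
    for t in t_unique:
        n_at_risk = np.sum(times >= t)           # still under observation just before t
        d = np.sum((times == t) & (events == 1)) # failures at t
        surv *= 1.0 - d / n_at_risk
        t_out.append(t)
        s_out.append(surv)
    return np.array(t_out), np.array(s_out)

# Example: failure data with two right-censored observations
t, s = product_limit([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1])
print(dict(zip(t, np.round(s, 3))))
```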

  1. MEMLS3&a: Microwave Emission Model of Layered Snowpacks adapted to include backscattering

    Directory of Open Access Journals (Sweden)

    M. Proksch

    2015-08-01

    The Microwave Emission Model of Layered Snowpacks (MEMLS was originally developed for microwave emissions of snowpacks in the frequency range 5–100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS is set up in a way that snow input parameters can be derived by objective measurement methods which avoid fitting procedures of the scattering efficiency of snow, required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment campaign in Sodankylä, Finland. We find a reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in Matlab and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.

  2. Computer modeling of lung cancer diagnosis-to-treatment process.

    Science.gov (United States)

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan

    2015-08-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model for the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their application in healthcare are introduced, and the approach to derive a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
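
    A minimal sketch of the Markov-chain style of analysis mentioned here, using an absorbing chain whose states and transition probabilities are entirely hypothetical (they are not taken from the paper's data).

```python
import numpy as np

# Hypothetical diagnosis-to-treatment pathway as an absorbing Markov chain.
# Transient states: referral, imaging, staging; absorbing state: treatment.
states = ["referral", "imaging", "staging", "treatment"]
P = np.array([
    [0.2, 0.8, 0.0, 0.0],   # referral
    [0.0, 0.3, 0.7, 0.0],   # imaging
    [0.0, 0.0, 0.4, 0.6],   # staging
    [0.0, 0.0, 0.0, 1.0],   # treatment (absorbing)
])

Q = P[:3, :3]                          # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix
steps_to_treatment = N.sum(axis=1)     # expected number of steps from each transient state
print(dict(zip(states[:3], steps_to_treatment.round(2))))
```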

  3. Modeling process flow using diagrams

    OpenAIRE

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process improvement projects. The paper finds that traditional diagrams, such as the flowchart, the VSM, and OR-type of diagrams, have severe limitations, miss certain elements, or are based on implicit but cons...

  4. Modelling of additive manufacturing processes: a review and classification

    Science.gov (United States)

    Stavropoulos, Panagiotis; Foteinopoulos, Panagis

    2018-03-01

    Additive manufacturing (AM) is a very promising technology; however, there are a number of open issues related to the different AM processes. The literature on modelling the existing AM processes is reviewed and classified. A categorization of the different AM processes in process groups, according to the process mechanism, has been conducted and the most important issues are stated. Suggestions are made as to which approach is more appropriate according to the key performance indicator desired to be modelled and a discussion is included as to the way that future modelling work can better contribute to improving today's AM process understanding.

  5. TS Fuzzy Model-Based Controller Design for a Class of Nonlinear Systems Including Nonsmooth Functions

    DEFF Research Database (Denmark)

    Vafamand, Navid; Asemani, Mohammad Hassan; Khayatiyan, Alireza

    2018-01-01

    This paper proposes a novel robust controller design for a class of nonlinear systems including hard nonlinearity functions. The proposed approach is based on Takagi-Sugeno (TS) fuzzy modeling, nonquadratic Lyapunov function, and nonparallel distributed compensation scheme. In this paper, a novel TS modeling of the nonlinear dynamics with signum functions is proposed. This model can exactly represent the original nonlinear system with hard nonlinearity while the discontinuous signum functions are not approximated. Based on the bounded-input-bounded-output stability scheme and L₁ performance criterion, new robust controller design conditions in terms of linear matrix inequalities are derived. Three practical case studies, electric power steering system, a helicopter model and servo-mechanical system, are presented to demonstrate the importance of such class of nonlinear systems comprising …

  6. A roller chain drive model including contact with guide-bars

    DEFF Research Database (Denmark)

    Pedersen, Sine Leergaard; Hansen, John Michael; Ambrósio, J. A. C.

    2004-01-01

    A model of a roller chain drive is developed and applied to the simulation and analysis of roller chain drives of large marine diesel engines. The model includes the impact with guide-bars that are the motion delimiter components on the chain strands between the sprockets. The main components … as continuous force. The model of the roller-chain drive now proposed departs from an earlier model where two contact/impact methods are proposed to describe the contact between the rollers of the chain and the teeth of the sprockets. These different formulations are based on unilateral constraints … In the continuous force method the roller-sprocket contact is represented by forces applied on each seated roller and in the respective sprocket teeth. These forces are functions of the pseudo penetrations between roller and sprocket, impacting velocities and a restitution coefficient. In the continuous force …

  7. Prospects for genetically modified non-human primate models, including the common marmoset.

    Science.gov (United States)

    Sasaki, Erika

    2015-04-01

    Genetically modified mice have contributed much to studies in the life sciences. In some research fields, however, mouse models are insufficient for analyzing the molecular mechanisms of pathology or as disease models. Often, genetically modified non-human primate (NHP) models are desired, as they are more similar to human physiology, morphology, and anatomy. Recent progress in studies of the reproductive biology in NHPs has enabled the introduction of exogenous genes into NHP genomes or the alteration of endogenous NHP genes. This review summarizes recent progress in the production of genetically modified NHPs, including the common marmoset, and future perspectives for realizing genetically modified NHP models for use in life sciences research. Copyright © 2015 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  8. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore their sensitivity to model outputs such as soil moisture, evapotranspiration and runoff using uncoupled simulations and coupled seasonal forecasts. Additionally we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system. Orth, R., E. Dutra, and F. Pappenberger, 2016: Improving weather predictability by

  9. Description of the new version 4.0 of the tritium model UFOTRI including user guide

    International Nuclear Information System (INIS)

    Raskob, W.

    1993-08-01

    In view of the future operation of fusion reactors, the release of tritium may play a dominant role during normal operation as well as after accidents. Because of its physical and chemical properties, which differ significantly from those of other radionuclides, the model UFOTRI for assessing the radiological consequences of accidental tritium releases has been developed. It describes the behaviour of tritium in the biosphere and calculates the radiological impact on individuals and the population due to the direct exposure and by the ingestion pathways. Processes such as the conversion of tritium gas into tritiated water (HTO) in the soil, re-emission after deposition and the conversion of HTO into organically bound tritium, are considered. The use of UFOTRI in its probabilistic mode shows the spectrum of the radiological impact together with the associated probability of occurrence. A first model version was established in 1991. As the ongoing work on investigating the main processes of tritium behaviour in the environment has produced new results, the model has been improved in several respects. The report describes the changes incorporated into the model since 1991. Additionally, it provides the updated user guide for handling the revised UFOTRI version, which will be distributed to interested organizations. (orig.) [de]

  10. Global Reference Atmospheric Models, Including Thermospheres, for Mars, Venus and Earth

    Science.gov (United States)

    Justh, Hilary L.; Justus, C. G.; Keller, Vernon W.

    2006-01-01

    This document is the viewgraph slides of the presentation. Marshall Space Flight Center's Natural Environments Branch has developed Global Reference Atmospheric Models (GRAMs) for Mars, Venus, Earth, and other solar system destinations. Mars-GRAM has been widely used for engineering applications including systems design, performance analysis, and operations planning for aerobraking, entry descent and landing, and aerocapture. Preliminary results are presented, comparing Mars-GRAM with measurements from Mars Reconnaissance Orbiter (MRO) during its aerobraking in Mars thermosphere. Venus-GRAM is based on the Committee on Space Research (COSPAR) Venus International Reference Atmosphere (VIRA), and is suitable for similar engineering applications in the thermosphere or other altitude regions of the atmosphere of Venus. Until recently, the thermosphere in Earth-GRAM has been represented by the Marshall Engineering Thermosphere (MET) model. Earth-GRAM has recently been revised. In addition to including an updated version of MET, it now includes an option to use the Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar Extended Model (NRLMSISE-00) as an alternate thermospheric model. Some characteristics and results from Venus-GRAM and Earth-GRAM thermospheres are also presented.

  11. A numerical model including PID control of a multizone crystal growth furnace

    Science.gov (United States)

    Panzarella, Charles H.; Kassemi, Mohammad

    1992-01-01

    This paper presents a 2D axisymmetric combined conduction and radiation model of a multizone crystal growth furnace. The model is based on a programmable multizone furnace (PMZF) designed and built at NASA Lewis Research Center for growing high quality semiconductor crystals. A novel feature of this model is a control algorithm which automatically adjusts the power in any number of independently controlled heaters to establish the desired crystal temperatures in the furnace model. The control algorithm eliminates the need for numerous trial and error runs previously required to obtain the same results. The finite element code, FIDAP, used to develop the furnace model, was modified to directly incorporate the control algorithm. This algorithm, which presently uses PID control, and the associated heat transfer model are briefly discussed. Together, they have been used to predict the heater power distributions for a variety of furnace configurations and desired temperature profiles. Examples are included to demonstrate the effectiveness of the PID controlled model in establishing isothermal, Bridgman, and other complicated temperature profiles in the sample. Finally, an example is given to show how the algorithm can be used to change the desired profile with time according to a prescribed temperature-time evolution.
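
    As a rough illustration of the control idea described above, the following sketch applies a discrete PID update to a lumped first-order heater model until a setpoint temperature is approached. The plant parameters and controller gains are invented for the example and are unrelated to the PMZF furnace or the FIDAP implementation.

```python
# Minimal discrete PID loop driving a lumped (single-node) heater model toward a
# setpoint; gains and thermal parameters are illustrative, not from the furnace model.
dt = 1.0                   # time step, s
tau, gain = 120.0, 0.05    # hypothetical plant time constant (s) and K/W steady-state gain
Kp, Ki, Kd = 40.0, 0.5, 5.0

T, T_amb, T_set = 300.0, 300.0, 900.0   # temperatures in K
integral, prev_err = 0.0, T_set - T

for step in range(600):
    err = T_set - T
    integral += err * dt
    derivative = (err - prev_err) / dt
    power = max(0.0, Kp * err + Ki * integral + Kd * derivative)  # heater power, W (no cooling)
    prev_err = err
    # First-order plant: dT/dt = (gain*power - (T - T_amb)) / tau
    T += dt * (gain * power - (T - T_amb)) / tau

print(f"temperature after {600 * dt:.0f} s: {T:.1f} K (setpoint {T_set:.0f} K)")
```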

  12. A New Circuit Model for Spin-Torque Oscillator Including Perpendicular Torque of Magnetic Tunnel Junction

    Directory of Open Access Journals (Sweden)

    Hyein Lim

    2013-01-01

    Full Text Available Spin-torque oscillator (STO) is a promising new technology for the future RF oscillators, which is based on the spin-transfer torque (STT) effect in magnetic multilayered nanostructure. It is expected to provide a larger tunability, smaller size, lower power consumption, and higher level of integration than the semiconductor-based oscillators. In our previous work, a circuit-level model of the giant magnetoresistance (GMR) STO was proposed. In this paper, we present a physics-based circuit-level model of the magnetic tunnel junction (MTJ)-based STO. The MTJ-STO model includes the effect of perpendicular torque that has been ignored in the GMR-STO model. The variations of three major characteristics, generation frequency, mean oscillation power, and generation linewidth of an MTJ-STO with respect to the amount of perpendicular torque, are investigated, and the results are applied to our model. The operation of the model was verified by HSPICE simulation, and the results show an excellent agreement with the experimental data. The results also prove that a full circuit-level simulation with MTJ-STO devices can be made with our proposed model.

  13. Nonlinear flow model of multiple fractured horizontal wells with stimulated reservoir volume including the quadratic gradient term

    Science.gov (United States)

    Ren, Junjie; Guo, Ping

    2017-11-01

    The real fluid flow in porous media is consistent with the mass conservation which can be described by the nonlinear governing equation including the quadratic gradient term (QGT). However, most of the flow models have been established by ignoring the QGT and little work has been conducted to incorporate the QGT into the flow model of the multiple fractured horizontal (MFH) well with stimulated reservoir volume (SRV). This paper first establishes a semi-analytical model of an MFH well with SRV including the QGT. Introducing the transformed pressure and flow-rate function, the nonlinear model of a point source in a composite system including the QGT is linearized. Then the Laplace transform, principle of superposition, numerical discrete method, Gaussian elimination method and Stehfest numerical inversion are employed to establish and solve the seepage model of the MFH well with SRV. Type curves are plotted and the effects of relevant parameters are analyzed. It is found that the nonlinear effect caused by the QGT can increase the flow capacity of fluid flow and influence the transient pressure positively. The relevant parameters not only have an effect on the type curve but also affect the error in the pressure calculated by the conventional linear model. The proposed model, which is consistent with the mass conservation, reflects the nonlinear process of the real fluid flow, and thus it can be used to obtain more accurate transient pressure of an MFH well with SRV.
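
    The workflow described above solves the model in Laplace space and returns to the time domain with Stehfest numerical inversion. The snippet below shows a generic Stehfest inversion applied, for checking purposes, to a transform with a known analytical inverse; it is not the authors' seepage model.

```python
import math

def stehfest_weights(N=12):
    """Stehfest coefficients V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j) * math.factorial(j - 1)
                * math.factorial(k - j) * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) via the Stehfest algorithm."""
    V = stehfest_weights(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Illustrative check: F(s) = 1 / (s + 1) has the exact inverse f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```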

  14. Numerical modelling of reflood processes

    International Nuclear Information System (INIS)

    Glynn, D.R.; Rhodes, N.; Tatchell, D.G.

    1983-01-01

    The use of a detailed computer model to investigate the effects of grid size and the choice of wall-to-fluid heat-transfer correlations on the predictions obtained for reflooding of a vertical heated channel is described. The model employs equations for the momentum and enthalpy of vapour and liquid and hence accounts for both thermal non-equilibrium and slip between the phases. Empirical correlations are used to calculate interphase and wall-to-fluid friction and heat-transfer as functions of flow regime and local conditions. The empirical formulae have remained fixed with the exception of the wall-to-fluid heat-transfer correlations. These have been varied according to the practices adopted in other computer codes used to model reflood, namely REFLUX, RELAP and TRAC. Calculations have been performed to predict the CSNI standard problem number 7, and the results are compared with experiment. It is shown that the results are substantially grid-independent, and that the choice of correlation has a significant influence on the general flow behaviour, the rate of quenching and on the maximum cladding temperature predicted by the model. It is concluded that good predictions of reflooding rates can be obtained with particular correlation sets. (author)

  15. Branching process models of cancer

    CERN Document Server

    Durrett, Richard

    2015-01-01

    This volume develops results on continuous time branching processes and applies them to study rate of tumor growth, extending classic work on the Luria-Delbruck distribution. As a consequence, the authors calculate the probability that mutations that confer resistance to treatment are present at detection and quantify the extent of tumor heterogeneity. As applications, the authors evaluate ovarian cancer screening strategies and give rigorous proofs for results of Heano and Michor concerning tumor metastasis. These notes should be accessible to students who are familiar with Poisson processes and continuous time. Richard Durrett is mathematics professor at Duke University, USA. He is the author of 8 books, over 200 journal articles, and has supervised more than 40 Ph.D. students. Most of his current research concerns the applications of probability to biology: ecology, genetics, and most recently cancer.
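
    As a toy illustration of the continuous-time branching processes treated in the volume, the sketch below runs a Gillespie-style simulation of a two-type birth-death process in which sensitive cells can acquire a resistance mutation at division; all rates and the mutation probability are invented for the example.

```python
import random

# Hypothetical per-cell rates (per unit time) for a two-type branching process:
# type 0 = treatment-sensitive, type 1 = resistant.
birth = [1.0, 1.1]
death = [0.9, 0.9]
mu = 1e-3      # probability that a sensitive division yields a resistant daughter

def simulate(n0=100, t_max=20.0, seed=1):
    rng = random.Random(seed)
    n = [n0, 0]
    t = 0.0
    while t < t_max and sum(n) > 0:
        rates = [n[0] * birth[0], n[0] * death[0], n[1] * birth[1], n[1] * death[1]]
        total = sum(rates)
        t += rng.expovariate(total)                  # waiting time to the next event
        r = rng.uniform(0.0, total)
        if r < rates[0]:                             # sensitive cell divides
            n[1 if rng.random() < mu else 0] += 1    # daughter may carry a resistance mutation
        elif r < rates[0] + rates[1]:                # sensitive cell dies
            n[0] -= 1
        elif r < rates[0] + rates[1] + rates[2]:     # resistant cell divides
            n[1] += 1
        else:                                        # resistant cell dies
            n[1] -= 1
    return n

runs = [simulate(seed=s) for s in range(200)]
print("fraction of runs with resistant cells at t_max:",
      sum(1 for n in runs if n[1] > 0) / len(runs))
```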

  16. Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models

    Science.gov (United States)

    Brown, Clifford A.

    2016-01-01

    The effect of finite span on the jet-surface interaction noise source and the jet mixing noise shielding and reflection effects is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed that all surfaces are semi-infinite. This extended abstract briefly describes the experimental setup and data, leaving the empirical modeling aspects for the final paper.

  17. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  18. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  19. Modeling of Temperature-Dependent Noise in Silicon Nanowire FETs including Self-Heating Effects

    Directory of Open Access Journals (Sweden)

    P. Anandan

    2014-01-01

    Full Text Available Silicon nanowires are leading the CMOS era towards the downsizing limit, and their nature effectively suppresses short-channel effects. Accurate modeling of thermal noise in nanowires is crucial for RF applications of nano-CMOS emerging technologies. In this work, a temperature-dependent model for silicon nanowires including the self-heating effects has been derived and its effects on device parameters have been observed. The power spectral density as a function of thermal resistance shows significant improvement as the channel length decreases. The effects of thermal noise including self-heating of the device are explored. Moreover, significant reduction in noise with respect to channel thermal resistance, gate length, and biasing is analyzed.

  20. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. The empirical validation has a residual character, because the conclusions are based on comparisons between simulated outputs and experimental measurements. This methodology guides the detection of failures of the simulation model. Furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated. Sensitivity analysis: it can be made with a DSA, differential sensitivity analysis, and with a MCSA, Monte-Carlo sensitivity analysis. Search for the optimal domains of the input parameters: a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis: this analysis has been made in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings (Esp.) is presented, studying the behavior of building components in a Test Cell of LECE of CIEMAT. (Author) 17 refs
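
    The Monte-Carlo sensitivity analysis (MCSA) step can be sketched generically: sample the uncertain input parameters, run the simulation model, and rank the inputs by how strongly they correlate with the output. The toy model and parameter ranges below are placeholders, not the thermal model validated on the LECE test cell.

```python
import numpy as np

def toy_model(u_wall, g_window, ach, rng):
    """Stand-in for a detailed simulation; returns a scalar output (e.g. a peak indoor temperature)."""
    return 20.0 + 8.0 * g_window - 3.0 * u_wall - 1.5 * ach + rng.normal(0.0, 0.2)

# Hypothetical uncertain inputs and their ranges (placeholders, not the real model's parameters).
ranges = {"u_wall": (0.2, 1.2), "g_window": (0.3, 0.8), "ach": (0.2, 2.0)}

rng = np.random.default_rng(0)
n = 2000
draws = {name: rng.uniform(lo, hi, n) for name, (lo, hi) in ranges.items()}
outputs = np.array([toy_model(draws["u_wall"][i], draws["g_window"][i], draws["ach"][i], rng)
                    for i in range(n)])

# Rank inputs by Pearson correlation with the output (sign indicates direction of influence).
for name, values in draws.items():
    r = np.corrcoef(values, outputs)[0, 1]
    print(f"{name:9s} correlation with output: {r:+.2f}")
```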

  1. Producing high-accuracy lattice models from protein atomic coordinates including side chains.

    Science.gov (United States)

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models.
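
    The coordinate RMSD criterion that LatFit optimises can be written down compactly. The snippet below computes the RMSD between a set of atomic coordinates and a candidate lattice model after optimal superposition with the Kabsch algorithm; the coordinates are random placeholders and the code is a generic illustration, not LatFit itself.

```python
import numpy as np

def rmsd_after_superposition(X, Y):
    """RMSD between point sets X and Y (n x 3) after optimal rotation/translation (Kabsch)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    H = Xc.T @ Yc
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    diff = Xc @ R.T - Yc
    return float(np.sqrt((diff ** 2).sum() / len(X)))

# Placeholder data: a random backbone trace and a crude "on-lattice" version of it.
rng = np.random.default_rng(0)
protein = np.cumsum(rng.normal(size=(50, 3)), axis=0) * 1.5
lattice_model = np.round(protein / 1.5) * 1.5
print("RMSD between original and lattice model:", rmsd_after_superposition(protein, lattice_model))
```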

  2. A High-Rate, Single-Crystal Model including Phase Transformations, Plastic Slip, and Twinning

    Energy Technology Data Exchange (ETDEWEB)

    Addessio, Francis L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bronkhorst, Curt Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bolme, Cynthia Anne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Explosive Science and Shock Physics Division; Brown, Donald William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Cerreta, Ellen Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lebensohn, Ricardo A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lookman, Turab [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Mayeur, Jason Rhea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Morrow, Benjamin M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Rigg, Paulo A. [Washington State Univ., Pullman, WA (United States). Dept. of Physics. Inst. for Shock Physics

    2016-08-09

    An anisotropic, rate-­dependent, single-­crystal approach for modeling materials under the conditions of high strain rates and pressures is provided. The model includes the effects of large deformations, nonlinear elasticity, phase transformations, and plastic slip and twinning. It is envisioned that the model may be used to examine these coupled effects on the local deformation of materials that are subjected to ballistic impact or explosive loading. The model is formulated using a multiplicative decomposition of the deformation gradient. A plate impact experiment on a multi-­crystal sample of titanium was conducted. The particle velocities at the back surface of three crystal orientations relative to the direction of impact were measured. Molecular dynamics simulations were conducted to investigate the details of the high-­rate deformation and pursue issues related to the phase transformation for titanium. Simulations using the single crystal model were conducted and compared to the high-­rate experimental data for the impact loaded single crystals. The model was found to capture the features of the experiments.

  3. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    Science.gov (United States)

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integration of information through different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inferences for single causal (source) and two causal (for two senses such as visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for integration of visual and proprioceptive information. This model is based on population coding which is able to mimic multisensory integration of neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors of the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model
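
    The traditional single-cause Bayesian integration that the neural model is compared against has a simple closed form for Gaussian cues: the fused estimate is a reliability-weighted average of the visual and proprioceptive estimates. The sketch below shows that baseline computation with made-up noise levels; it is not the recurrent population-coding model of the paper.

```python
# Reliability-weighted (inverse-variance) fusion of a visual and a proprioceptive
# estimate of hand position -- the standard single-cause Bayesian baseline.
def fuse(x_vis, var_vis, x_prop, var_prop):
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Hypothetical cues (cm): vision is assumed less noisy than proprioception here.
x_hat, var_hat = fuse(x_vis=10.0, var_vis=0.5, x_prop=12.0, var_prop=2.0)
print(f"fused estimate = {x_hat:.2f} cm, fused variance = {var_hat:.2f} cm^2")

# Training that reduces a modality's variance pulls the fused estimate toward that modality.
print(fuse(x_vis=10.0, var_vis=0.5, x_prop=12.0, var_prop=0.8))
```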

  4. Controlled Carbon Source Addition to an Alternating Nitrification-Denitrification Wastewater Treatment Process Including Biological P Removal

    DEFF Research Database (Denmark)

    Isaacs, Steven Howard; Henze, Mogens

    1995-01-01

    The paper investigates the effect of adding an external carbon source on the rate of denitrification in an alternating activated sludge process including biological P removal. Two carbon sources were examined, acetate and hydrolysate derived from biologically hydrolyzed sludge. Preliminary batch … process, the addition of either carbon source to the anoxic zone also resulted in an instantaneous and fairly reproducible increase in the denitrification rate. Some release of phosphate associated with the carbon source addition was observed. With respect to nitrogen removal, these results indicate that external carbon source addition may serve as a suitable control variable to improve process performance.

  5. A 3D model of the oculomotor plant including the pulley system

    Science.gov (United States)

    Viegener, A.; Armentano, R. L.

    2007-11-01

    Early models of the oculomotor plant only considered the eye globes and the muscles that move them. Recently, connective tissue structures have been found enveloping the extraocular muscles (EOMs) and firmly anchored to the orbital wall. These structures act as pulleys; they determine the functional origin of the EOMs and, in consequence, their effective pulling direction. A three dimensional model of the oculomotor plant, including pulleys, has been developed and simulations in Simulink were performed during saccadic eye movements. Listing's law was implemented based on the supposition that there exists an eye orientation related signal. The inclusion of the pulleys in the model makes this assumption plausible and simplifies the problem of the plant noncommutativity.

  6. Double-gate junctionless transistor model including short-channel effects

    International Nuclear Information System (INIS)

    Paz, B C; Pavanello, M A; Ávila-Herrera, F; Cerdeira, A

    2015-01-01

    This work presents a physically based model for double-gate junctionless transistors (JLTs), continuous in all operation regimes. To describe short-channel transistors, short-channel effects (SCEs), such as increase of the channel potential due to drain bias, carrier velocity saturation and mobility degradation due to vertical and longitudinal electric fields, are included in a previous model developed for long-channel double-gate JLTs. To validate the model, an analysis is made by using three-dimensional numerical simulations performed in a Sentaurus Device Simulator from Synopsys. Different doping concentrations, channel widths and channel lengths are considered in this work. Besides that, the series resistance influence is numerically included and validated for a wide range of source and drain extensions. In order to check if the SCEs are appropriately described, besides drain current, transconductance and output conductance characteristics, the following parameters are analyzed to demonstrate the good agreement between model and simulation and the SCEs occurrence in this technology: threshold voltage (V_TH), subthreshold slope (S) and drain induced barrier lowering. (paper)

  7. Case study modelling for an ettringite treatment process ...

    African Journals Online (AJOL)

    The process modelled in this study includes the formation of ettringite and the recovery of gibbsite through the decomposition of recycled ettringite. The modelling of this process was done using PHREEQC and the results presented in this paper are based on the outcome of different case studies that investigated how the ...

  8. Product Trial Processing (PTP): a model approach from ...

    African Journals Online (AJOL)

    Product Trial Processing (PTP): a model approach from the consumer's perspective. ... Global Journal of Social Sciences ... The constructs used in the model of the consumer's processing of product trial include experiential and non-experiential attributes, perceived validity of product trial, consumer perceived expertise, ...

  9. Modeling the distribution of ammonia across Europe including bi-directional surface–atmosphere exchange

    Directory of Open Access Journals (Sweden)

    R. J. Wichink Kruit

    2012-12-01

    Full Text Available A large shortcoming of current chemistry transport models (CTM) for simulating the fate of ammonia in the atmosphere is the lack of a description of the bi-directional surface–atmosphere exchange. In this paper, results of an update of the surface–atmosphere exchange module DEPAC, i.e. DEPosition of Acidifying Compounds, in the chemistry transport model LOTOS-EUROS are discussed. It is shown that with the new description, which includes bi-directional surface–atmosphere exchange, the modeled ammonia concentrations increase almost everywhere, in particular in agricultural source areas. The reason is that by using a compensation point the ammonia lifetime and transport distance is increased. As a consequence, deposition of ammonia and ammonium decreases in agricultural source areas, while it increases in large nature areas and remote regions especially in southern Scandinavia. The inclusion of a compensation point for water reduces the dry deposition over sea and allows reproducing the observed marine background concentrations at coastal locations to a better extent. A comparison with measurements shows that the model results better represent the measured ammonia concentrations. The concentrations in nature areas are slightly overestimated, while the concentrations in agricultural source areas are still underestimated. Although the introduction of the compensation point improves the model performance, the modeling of ammonia remains challenging. Important aspects are emission patterns in space and time as well as a proper approach to deal with the high concentration gradients in relation to model resolution. In short, the inclusion of a bi-directional surface–atmosphere exchange is a significant step forward for modeling ammonia.

  10. PtProcess: An R Package for Modelling Marked Point Processes Indexed by Time

    Directory of Open Access Journals (Sweden)

    David Harte

    2010-10-01

    Full Text Available This paper describes the package PtProcess which uses the R statistical language. The package provides a unified approach to fitting and simulating a wide variety of temporal point process or temporal marked point process models. The models are specified by an intensity function which is conditional on the history of the process. The user needs to provide routines for calculating the conditional intensity function. Then the package enables one to carry out maximum likelihood fitting, goodness of fit testing, simulation and comparison of models. The package includes the routines for the conditional intensity functions for a variety of standard point process models. The package is intended to simplify the fitting of point process models indexed by time in much the same way as generalized linear model programs have simplified the fitting of various linear models. The primary examples used in this paper are earthquake sequences but the package is intended to have a much wider applicability.
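
    The central object in this modelling approach is the conditional intensity λ(t | history), and maximum likelihood fitting works with log L = Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt. Since PtProcess is an R package, the sketch below merely restates that likelihood in Python for a simple exponential-kernel self-exciting intensity with invented parameters; it does not reproduce the package's API.

```python
import math

def cond_intensity(t, history, mu=0.5, alpha=0.8, beta=1.2):
    """lambda(t | H_t) for a Hawkes-type process: background rate mu plus exponentially
    decaying contributions from all past events (illustrative parameters)."""
    return mu + alpha * sum(math.exp(-beta * (t - s)) for s in history if s < t)

def log_likelihood(times, T, mu=0.5, alpha=0.8, beta=1.2):
    """log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt (closed form for this kernel)."""
    ll = sum(math.log(cond_intensity(t, times[:i], mu, alpha, beta))
             for i, t in enumerate(times))
    compensator = mu * T + (alpha / beta) * sum(1.0 - math.exp(-beta * (T - s)) for s in times)
    return ll - compensator

events = [0.7, 1.1, 1.3, 2.9, 3.0, 3.2, 5.6]   # hypothetical event times on [0, T]
print("log-likelihood:", log_likelihood(events, T=6.0))
```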

  11. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    Science.gov (United States)

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory-requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we want to present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
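
    The Monte Carlo use of an integrated process model can be sketched as follows: propagate random draws of process parameters through a chain of simple unit-operation response models and count how often the final CQA leaves its specification. The two unit operations, their response surfaces and all numbers below are invented placeholders, not the models from the cited process characterization study.

```python
import random

SPEC_MAX_IMPURITY = 0.2          # hypothetical CQA specification (e.g. % impurity)

def fermentation(temperature, ph):
    """Toy response model: impurity generated upstream (placeholder coefficients)."""
    return 0.4 + 0.05 * abs(temperature - 37.0) + 0.3 * abs(ph - 7.0)

def chromatography(load, flow_rate):
    """Toy response model: fraction of impurity removed downstream (placeholder)."""
    removal = 0.7 - 0.02 * (load - 20.0) - 0.05 * (flow_rate - 1.0)
    return min(max(removal, 0.0), 0.95)

random.seed(42)
oos = 0
n_runs = 50_000
for _ in range(n_runs):
    # Draw process parameters from their assumed operating distributions.
    temperature = random.gauss(37.0, 0.8)
    ph = random.gauss(7.0, 0.15)
    load = random.gauss(20.0, 2.0)
    flow_rate = random.gauss(1.0, 0.1)
    impurity_in = fermentation(temperature, ph)
    impurity_out = impurity_in * (1.0 - chromatography(load, flow_rate))
    if impurity_out > SPEC_MAX_IMPURITY:
        oos += 1

print(f"estimated OOS probability: {oos / n_runs:.4f}")
```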

  12. Health Promotion Behavior of Chinese International Students in Korea Including Acculturation Factors: A Structural Equation Model.

    Science.gov (United States)

    Kim, Sun Jung; Yoo, Il Young

    2016-03-01

    The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model including acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who have resided in Korea for longer than 6 months. The data were analyzed using structural equation modeling. The p value of the final model is .31. The fitness parameters of the final model such as goodness of fit index, adjusted goodness of fit index, normed fit index, non-normed fit index, and comparative fit index were more than .95. Root mean square of residual and root mean square error of approximation also met the criteria. Self-esteem, perceived health status, acculturative stress and acculturation level had direct effects on health promotion behavior of the participants, and the model explained 30.0% of the variance. The Chinese students in Korea with higher self-esteem, perceived health status, acculturation level, and lower acculturative stress reported higher health promotion behavior. The findings can be applied to develop health promotion strategies for this population. Copyright © 2016. Published by Elsevier B.V.

  13. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase-interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints are considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and cover therefore practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows the implementation of an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  14. Processes in healthcare teams that include nurse practitioners: what do patients and families perceive to be effective?

    Science.gov (United States)

    Kilpatrick, Kelley; Jabbour, Mira; Fortin, Chantal

    2016-03-01

    To explore patient and family perceptions of team effectiveness of teams that include nurse practitioners in acute and primary care. Nurse practitioners provide safe and effective care. Patients are satisfied with the care provided by nurse practitioners. Research examining patient and family perceptions of team effectiveness following the implementation of nurse practitioners in teams is lacking. A descriptive qualitative design was used. We used purposeful sampling to identify participants in four clinical specialties. We collected data from March 2014-January 2015 using semi-structured interviews and demographic questionnaires. Content analysis was used. Descriptive statistics were generated. Participants (n = 49) believed that the teams were more effective after the implementation of a nurse practitioner and this was important to them. They described processes that teams with nurse practitioners used to effectively provide care. These processes included improved communication, involvement in decision-making, cohesion, care coordination, problem-solving, and a focus on the needs of patients and families. Participants highlighted the importance of interpersonal team dynamics. A human approach, trust, being open to discussion, listening to patient and family concerns and respect were particularly valued by participants. Different processes emerged as priorities when data were examined by speciality. However, communication, trust and taking the time to provide care were the most important processes. The study provides new insights into the views of patients and families and micro-level processes in teams with nurse practitioners. The relative importance of each process varied according to the patient's health condition. Patients and providers identified similar team processes. Future research is needed to identify how team processes influence care outcomes. The findings can support patients, clinicians and decision-makers to determine the processes to focus on to

  15. A satellite relative motion model including J_2 and J_3 via Vinti's intermediary

    Science.gov (United States)

    Biria, Ashley D.; Russell, Ryan P.

    2018-03-01

    Vinti's potential is revisited for analytical propagation of the main satellite problem, this time in the context of relative motion. A particular version of Vinti's spheroidal method is chosen that is valid for arbitrary elliptical orbits, encapsulating J_2, J_3, and generally a partial J_4 in an orbit propagation theory without recourse to perturbation methods. As a child of Vinti's solution, the proposed relative motion model inherits these properties. Furthermore, the problem is solved in oblate spheroidal elements, leading to large regions of validity for the linearization approximation. After offering several enhancements to Vinti's solution, including boosts in accuracy and removal of some singularities, the proposed model is derived and subsequently reformulated so that Vinti's solution is piecewise differentiable. While the model is valid for the critical inclination and nonsingular in the element space, singularities remain in the linear transformation from Earth-centered inertial coordinates to spheroidal elements when the eccentricity is zero or for nearly equatorial orbits. The new state transition matrix is evaluated against numerical solutions including the J_2 through J_5 terms for a wide range of chief orbits and separation distances. The solution is also compared with side-by-side simulations of the original Gim-Alfriend state transition matrix, which considers the J_2 perturbation. Code for computing the resulting state transition matrix and associated reference frame and coordinate transformations is provided online as supplementary material.

  16. Process-Based Modeling of Constructed Wetlands

    Science.gov (United States)

    Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.

    2007-12-01

    Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water-quality standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (nitrogen and phosphorus primarily) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
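
    The Monod-type kinetics mentioned above couple substrate consumption to biomass growth and to oxygen limitation. The sketch below integrates a deliberately reduced two-substrate version (organic carbon and dissolved oxygen) with invented coefficients; it is far simpler than the ASM-based reaction network of the simulator.

```python
import numpy as np

# Hypothetical Monod parameters for aerobic degradation of organic matter.
mu_max, Ks, Ko = 4.0, 10.0, 0.2       # 1/d, mg COD/L, mg O2/L
Y, k_d = 0.5, 0.1                     # yield (g biomass / g COD), decay rate (1/d)
kLa, O2_sat = 2.0, 9.0                # oxygen transfer coefficient (1/d), saturation (mg/L)

def derivatives(S, O2, X):
    """Dual-Monod growth limited by substrate S and dissolved oxygen O2."""
    growth = mu_max * (S / (Ks + S)) * (O2 / (Ko + O2)) * X
    dS = -growth / Y
    dO2 = kLa * (O2_sat - O2) - (1.0 - Y) / Y * growth
    dX = growth - k_d * X
    return dS, dO2, dX

# Explicit Euler integration over 5 days.
S, O2, X = 200.0, 6.0, 50.0           # mg/L COD, mg/L O2, mg/L biomass
dt = 0.001
for _ in range(int(5.0 / dt)):
    dS, dO2, dX = derivatives(S, O2, X)
    S, O2, X = max(S + dt * dS, 0.0), max(O2 + dt * dO2, 0.0), X + dt * dX

print(f"after 5 d: COD = {S:.1f} mg/L, O2 = {O2:.2f} mg/L, biomass = {X:.1f} mg/L")
```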

  17. A Comprehensive and Harmonized Digital Forensic Investigation Process Model.

    Science.gov (United States)

    Valjarevic, Aleksandar; Venter, Hein S

    2015-11-01

    Performing a digital forensic investigation (DFI) requires a standardized and formalized process. There is currently no international standard, nor does a global, harmonized DFI process (DFIP) exist. The authors studied existing state-of-the-art DFIP models and concluded that there are significant disparities pertaining to the number of processes, the scope, the hierarchical levels, and concepts applied. This paper proposes a comprehensive model that harmonizes existing models. An effort was made to incorporate all types of processes proposed by the existing models, including those aimed at achieving digital forensic readiness. The authors introduce a novel class of processes called concurrent processes. This is a novel contribution that should, together with the rest of the model, enable more efficient and effective DFI, while ensuring admissibility of digital evidence. Ultimately, the proposed model is intended to be used for different types of DFI and should lead to standardization. © 2015 American Academy of Forensic Sciences.

  18. General hypothesis and shell model for the synthesis of semiconductor nanotubes, including carbon nanotubes

    Science.gov (United States)

    Mohammad, S. Noor

    2010-09-01

    Semiconductor nanotubes, including carbon nanotubes, have vast potential for new technology development. The fundamental physics and growth kinetics of these nanotubes are still obscured. Various models developed to elucidate the growth suffer from limited applicability. An in-depth investigation of the fundamentals of nanotube growth has, therefore, been carried out. For this investigation, various features of nanotube growth, and the role of the foreign element catalytic agent (FECA) in this growth, have been considered. Observed growth anomalies have been analyzed. Based on this analysis, a new shell model and a general hypothesis have been proposed for the growth. The essential element of the shell model is the seed generated from segregation during growth. The seed structure has been defined, and the formation of a droplet from this seed has been described. A modified definition of the droplet exhibiting adhesive properties has also been presented. Various characteristics of the droplet, required for alignment and organization of atoms into tubular forms, have been discussed. Employing the shell model, plausible scenarios for the formation of carbon nanotubes, and the variation in the characteristics of these carbon nanotubes have been articulated. Experimental evidence, for example for the formation of a shell around a core, the dipole characteristics of the seed, and the existence of nanopores in the seed, has been presented. It appears to justify the validity of the proposed model. The diversities of nanotube characteristics, fundamentals underlying the creation of bamboo-shaped carbon nanotubes, and the impurity generation on the surface of carbon nanotubes have been elucidated. The catalytic action of FECA on growth has been quantified. The applicability of the proposed model to the nanotube growth by a variety of mechanisms has been elaborated. These mechanisms include the vapor-liquid-solid mechanism, the oxide-assisted growth mechanism, the self

  19. ECO: a generic eutrophication model including comprehensive sediment-water interaction.

    Science.gov (United States)

    Smits, Johannes G C; van Beek, Jan K L

    2013-01-01

    The content and calibration of the comprehensive generic 3D eutrophication model ECO for water and sediment quality is presented. Based on a computational grid for water and sediment, ECO is used as a tool for water quality management to simulate concentrations and mass fluxes of nutrients (N, P, Si), phytoplankton species, detrital organic matter, electron acceptors and related substances. ECO combines integral simulation of water and sediment quality with sediment diagenesis and closed mass balances. Its advanced process formulations for substances in the water column and the bed sediment were developed to allow for a much more dynamic calculation of the sediment-water exchange fluxes of nutrients as resulting from steep concentration gradients across the sediment-water interface than is possible with other eutrophication models. ECO is intended to more accurately calculate the accumulation of organic matter and nutrients in the sediment, and to allow for more accurate prediction of phytoplankton biomass and water quality in response to mitigative measures such as nutrient load reduction. ECO was calibrated for shallow Lake Veluwe (The Netherlands). Due to restoration measures this lake underwent a transition from hypertrophic conditions to moderately eutrophic conditions, leading to the extensive colonization by submerged macrophytes. ECO reproduces observed water quality well for the transition period of ten years. The values of its process coefficients are in line with ranges derived from literature. ECO's calculation results underline the importance of redox processes and phosphate speciation for the nutrient return fluxes. Among other things, the results suggest that authigenic formation of a stable apatite-like mineral in the sediment can contribute significantly to oligotrophication of a lake after a phosphorus load reduction.

  20. A new model for including the effect of fly ash on biochemical methane potential.

    Science.gov (United States)

    Gertner, Pablo; Huiliñir, César; Pinto-Villegas, Paula; Castillo, Alejandra; Montalvo, Silvio; Guerrero, Lorna

    2017-10-01

    The modelling of the effect of trace elements on anaerobic digestion, and specifically the effect of fly ash, has been scarcely studied. Thus, the present work was aimed at the development of a new function that allows accumulated methane models to predict the effect of FA on the volume of methane accumulation. For this purpose, five fly ash concentrations (10, 25, 50, 250 and 500 mg/L) using raw and pre-treated sewage sludge were used to calibrate the new function, while three fly ash concentrations (40, 150 and 350 mg/L) were used for validation. Three models for accumulated methane volume (the modified Gompertz equation, the logistic function, and the transfer function) were evaluated. The results showed that methane production increased in the presence of FA when the sewage sludge was not pre-treated, while with pretreated sludge there is inhibition of methane production at FA concentrations higher than 50 mg/L. In the calibration of the proposed function, it fits well with the experimental data under all the conditions, including the inhibition and stimulating zones, with the values of the parameters of the methane production models falling in the range of those reported in the literature. For validation experiments, the model succeeded in representing the behavior of new experiments in both the stimulating and inhibiting zones, with NRMSE and R² ranging from 0.3577 to 0.03714 and 0.2209 to 0.9911, respectively. Thus, the proposed model is robust and valid for the studied conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
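
    Of the three accumulated-methane models evaluated, the modified Gompertz equation is the most widely used: M(t) = P·exp{−exp[(Rm·e/P)(λ − t) + 1]}. The snippet below fits that equation to a synthetic biochemical methane potential curve with scipy; the data and initial guesses are placeholders, and the fly-ash correction function proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    """Accumulated methane M(t): P = potential (mL), Rm = max rate (mL/d), lam = lag (d)."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Synthetic BMP data (days, mL CH4) standing in for a measured curve.
t_obs = np.arange(0, 30, 2, dtype=float)
y_obs = modified_gompertz(t_obs, 320.0, 28.0, 2.5) + np.random.default_rng(1).normal(0, 5, t_obs.size)

popt, _ = curve_fit(modified_gompertz, t_obs, y_obs, p0=[300.0, 20.0, 1.0])
P_fit, Rm_fit, lam_fit = popt
print(f"P = {P_fit:.1f} mL, Rm = {Rm_fit:.1f} mL/d, lambda = {lam_fit:.2f} d")
```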

  1. A curved multi-component aerosol hygroscopicity model framework: Part 2 – Including organic compounds

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2005-01-01

    Full Text Available This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Kohler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5–6 dicarboxylic acids, down to low relative humidity conditions. By comparison with electrodynamic balance data, it was also found that the model was capable of capturing the behaviour of aqueous aerosols containing Suwannee River Fulvic acid, a structure previously used to represent the functionality of complex oxidised macromolecules often found in atmospheric aerosols. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between model predictions and measurements increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly

  2. A curved multi-component aerosol hygroscopicity model framework: Part 2 Including organic compounds

    Science.gov (United States)

    Topping, D. O.; McFiggans, G. B.; Coe, H.

    2005-05-01

    This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Kohler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5-6 dicarboxylic acids, down to low relative humidity conditions. By comparison with electrodynamic balance data, it was also found that the model was capable of capturing the behaviour of aqueous aerosols containing Suwannee River Fulvic acid, a structure previously used to represent the functionality of complex oxidised macromolecules often found in atmospheric aerosols. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between model predictions and measurements increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly soluble in water

  3. Modeling of Cloud/Radiation Processes for Cirrus Cloud Formation

    National Research Council Canada - National Science Library

    Liou, K

    1997-01-01

    This technical report includes five reprints and pre-prints of papers associated with the modeling of cirrus cloud and radiation processes as well as remote sensing of cloud optical and microphysical...

  4. A generalized model for optimal transport of images including dissipation and density modulation

    KAUST Repository

    Maas, Jan

    2015-11-01

    © EDP Sciences, SMAI 2015. In this paper the optimal transport and the metamorphosis perspectives are combined. For a pair of given input images, geodesic paths in the space of images are defined as minimizers of a resulting path energy. To this end, the underlying Riemannian metric measures the rate of transport cost and the rate of viscous dissipation. Furthermore, the model is capable of dealing with strongly varying image contrast and explicitly allows for sources and sinks in the transport equations, which are incorporated in the metric related to the metamorphosis approach by Trouvé and Younes. In the non-viscous case with source term, existence of geodesic paths is proven in the space of measures. The proposed model is explored on the range from merely optimal transport to strongly dissipative dynamics. For this model a robust and effective variational time discretization of geodesic paths is proposed. This requires minimizing a discrete path energy consisting of a sum of consecutive image matching functionals. These functionals are defined on corresponding pairs of intensity functions and on associated pairwise matching deformations. Existence of time discrete geodesics is demonstrated. Furthermore, a finite element implementation is proposed and applied to instructive test cases and to real images. In the non-viscous case this is compared to the algorithm proposed by Benamou and Brenier, including a discretization of the source term. Finally, the model is generalized to define discrete weighted barycentres with applications to textures and objects.

  5. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs based on the heat apparent capacity method was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based both on numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model has been identified for optimization. The use of the generic optimization program called GenOpt® coupled to the building simulation code enabled the determination of an adequate set of parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
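
    The heat apparent capacity method mentioned above can be illustrated with a very small one-dimensional conduction sketch in which the latent heat is folded into a temperature-dependent capacity. All material properties, the Gaussian melting peak and the boundary temperatures below are assumed for illustration; this is not the validated multi-zone model of the paper.

```python
import numpy as np

# Illustrative PCM properties (assumed values, not from the paper)
K = 0.2        # W m-1 K-1, thermal conductivity
RHO = 800.0    # kg m-3, density
CP = 2000.0    # J kg-1 K-1, sensible specific heat
LAT = 1.8e5    # J kg-1, latent heat of fusion
T_MELT = 26.0  # degC, melting temperature
DT_MELT = 1.0  # degC, width of the melting range

def c_apparent(t):
    """Apparent heat capacity: sensible part plus a normalised Gaussian latent peak."""
    gauss = np.exp(-((t - T_MELT) / DT_MELT) ** 2) / (DT_MELT * np.sqrt(np.pi))
    return CP + LAT * gauss

# 1-D slab, explicit finite differences (time step chosen within the stability limit)
nx, dx, dt = 21, 0.002, 5.0          # 4 cm slab, 5 s time step
temp = np.full(nx, 20.0)             # initial temperature, degC
t_hot, t_cold = 35.0, 20.0           # fixed boundary temperatures

for _ in range(int(6 * 3600 / dt)):  # six hours of simulated time
    lap = (temp[2:] - 2.0 * temp[1:-1] + temp[:-2]) / dx**2
    temp[1:-1] += dt * K * lap / (RHO * c_apparent(temp[1:-1]))
    temp[0], temp[-1] = t_hot, t_cold

print("mid-plane temperature after 6 h: %.2f degC" % temp[nx // 2])
```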

  6. A multiscale model for glioma spread including cell-tissue interactions and proliferation.

    Science.gov (United States)

    Engwer, Christian; Knappitsch, Markus; Surulescu, Christina

    2016-04-01

    Glioma is a broad class of brain and spinal cord tumors arising from glia cells, which are the main brain cells that can develop into neoplasms. They are highly invasive and lead to irregular tumor margins which are not precisely identifiable by medical imaging, thus rendering a precise enough resection very difficult. The understanding of glioma spread patterns is hence essential for both radiological therapy as well as surgical treatment. In this paper we propose a multiscale model for glioma growth including interactions of the cells with the underlying tissue network, along with proliferative effects. Our current accounting for two subpopulations of cells to accommodate proliferation according to the go-or-grow dichotomy is an extension of the setting in [16]. As in that paper, we assume that cancer cells use neuronal fiber tracts as invasive pathways. Hence, the individual structure of brain tissue seems to be decisive for the tumor spread. Diffusion tensor imaging (DTI) is able to provide such information, thus opening the way for patient-specific modeling of glioma invasion. Starting from a multiscale model involving subcellular (microscopic) and individual (mesoscale) cell dynamics, we perform a parabolic scaling to obtain an approximating reaction-diffusion-transport equation on the macroscale of the tumor cell population. Numerical simulations based on DTI data are carried out in order to assess the performance of our modeling approach.
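
    A heavily simplified, one-dimensional analogue of the macroscopic reaction-diffusion-transport limit described above is sketched below, with a spatially varying scalar diffusivity standing in for the DTI-derived tensor. All coefficients are invented for illustration; this is not the authors' multiscale model.

```python
import numpy as np

# Minimal 1-D analogue of a macroscopic glioma model:
#   du/dt = d/dx( D(x) du/dx ) + rho * u * (1 - u)
# D(x) is larger on a "fiber tract" segment to mimic preferred invasion routes.
nx, dx, dt = 200, 0.05, 0.01                          # cm grid spacing, time step in days
x = np.arange(nx) * dx
diff = np.where((x > 4.0) & (x < 6.0), 0.10, 0.02)    # cm^2/day, illustrative
rho = 0.05                                            # 1/day, proliferation rate

u = np.exp(-((x - 2.0) / 0.3) ** 2)                   # initial scaled tumour cell density

for _ in range(int(180 / dt)):                        # simulate ~180 days
    flux = diff[:-1] * (u[1:] - u[:-1]) / dx          # D du/dx at cell faces
    u[1:-1] += dt * ((flux[1:] - flux[:-1]) / dx + rho * u[1:-1] * (1 - u[1:-1]))
    u[0], u[-1] = u[1], u[-2]                         # zero-flux boundaries

print("invasion front (u > 0.05) reaches x = %.1f cm" % x[u > 0.05].max())
```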

  7. Analysis of electronic models for solar cells including energy resolved defect densities

    Energy Technology Data Exchange (ETDEWEB)

    Glitzky, Annegret

    2010-07-01

    We introduce an electronic model for solar cells including energy resolved defect densities. The resulting drift-diffusion model corresponds to a generalized van Roosbroeck system with additional source terms, coupled with ODEs containing space and energy as parameters for all defect densities. The system has to be considered in heterostructures and with mixed boundary conditions from device simulation. We give a weak formulation of the problem. If the boundary data and the sources are compatible with thermodynamic equilibrium, the free energy along solutions decays monotonically. In other cases it may be increasing, but we estimate its growth. We establish boundedness and uniqueness results and prove the existence of a weak solution. This is done by considering a regularized problem, showing its solvability and the boundedness of its solutions independently of the regularization level. (orig.)
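
    For reference, a standard textbook statement of the van Roosbroeck drift-diffusion system is given below (electrostatic potential ψ, doping profile C, recombination rate R); the energy-resolved defect ODEs and the additional source terms introduced in the paper are not reproduced here.

```latex
% Standard van Roosbroeck system (without the energy-resolved defect ODEs
% and the extra source terms added in the paper above)
\begin{align*}
  -\nabla\cdot(\varepsilon\nabla\psi) &= q\,(p - n + C), \\
  \partial_t n - \tfrac{1}{q}\,\nabla\cdot j_n &= -R(n,p),
    & j_n &= -q\,\mu_n\, n\,\nabla\psi + q\,D_n\,\nabla n, \\
  \partial_t p + \tfrac{1}{q}\,\nabla\cdot j_p &= -R(n,p),
    & j_p &= -q\,\mu_p\, p\,\nabla\psi - q\,D_p\,\nabla p.
\end{align*}
```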

  8. Effect of including decay chains on predictions of equilibrium-type terrestrial food chain models

    International Nuclear Information System (INIS)

    Kirchner, G.

    1990-01-01

    Equilibrium-type food chain models are commonly used for assessing the radiological impact to man from environmental releases of radionuclides. Usually these do not take into account build-up of radioactive decay products during environmental transport. This may be a potential source of underprediction. For estimating consequences of this simplification, the equations of an internationally recognised terrestrial food chain model have been extended to include decay chains of variable length. Example calculations show that for releases from light water reactors as expected both during routine operation and in the case of severe accidents, the build-up of decay products during environmental transport is generally of minor importance. However, a considerable number of radionuclides of potential radiological significance have been identified which show marked contributions of decay products to calculated contamination of human food and resulting radiation dose rates. (author)
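
    The build-up of decay products during environmental transport discussed above follows the usual linear decay-chain (Bateman) equations, which can be integrated with a matrix exponential as in the sketch below. The chain and half-lives are approximate illustrative values, not taken from the cited model.

```python
import numpy as np
from scipy.linalg import expm

# Build-up of a decay product along a chain: parent -> daughter -> stable.
# Here Sr-90 -> Y-90 -> Zr-90 with approximate half-lives; dN/dt = A @ N.
half_lives_d = np.array([28.8 * 365.25, 64.0 / 24.0, np.inf])   # days
lam = np.log(2.0) / half_lives_d                                # decay constants, 1/day

A = np.array([[-lam[0],     0.0,      0.0],
              [ lam[0], -lam[1],      0.0],
              [    0.0,  lam[1], -lam[2]]])

n0 = np.array([1.0e15, 0.0, 0.0])          # initial atoms: pure parent

for t in (1.0, 10.0, 100.0):               # days after deposition
    n = expm(A * t) @ n0
    print("t = %5.0f d   Sr-90: %.3e   Y-90: %.3e atoms" % (t, n[0], n[1]))
```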

  9. Design and modeling of an advanced marine machinery system including waste heat recovery and removal of sulphur oxides

    DEFF Research Database (Denmark)

    Frimann Nielsen, Rasmus; Haglind, Fredrik; Larsen, Ulrik

    2014-01-01

    the efficiency of machinery systems. The wet sulphuric acid process is an effective way of removing flue gas sulphur oxides from land-based coal-fired power plants. Moreover, organic Rankine cycles (ORC) are suitable for heat to power conversion for low temperature heat sources. This paper describes the design...... and modeling of a highly efficient machinery system which includes the removal of exhaust gas sulphur oxides. The system consists of a two-stroke diesel engine, the wet sulphuric process for sulphur removal, a conventional steam Rankine cycle and an ORC. Results of numerical modeling efforts suggest...... that an ORC placed after the conventional waste heat recovery system is able to extract the sulphuric acid from the exhaust gas, while at the same time increase the combined cycle thermal efficiency by 2.6%. The findings indicate that the technology has potential in marine applications regarding both energy...

  10. Toward including the effect of manufacturing processes in the pre-estimated losses of the switched reluctance motor

    Directory of Open Access Journals (Sweden)

    Eyhab El-Kharahi

    2015-03-01

    Full Text Available The estimation of losses plays a key role in the process of building any electrical machine. Losses are usually estimated at the design stage by taking the characteristic of the electrical steel from the catalogue and calculating the losses from it. However, this approach is inaccurate, because the electrical steel undergoes several manufacturing processes while the machine is being built, which directly affect its magnetic properties and hence its characteristic: the B–H curve obtained from the catalogue no longer applies. Moreover, when the machine is loaded and rotating, further important changes to the B–H characteristic of the electrical steel occur, such as stress on the laminated iron. Accordingly, the pre-estimated losses are far from the actual losses, because they were estimated from catalogue data alone. In order to estimate the losses precisely, significant factors of the manufacturing processes must therefore be included. The paper introduces a systematic estimation of the losses that includes the effect of one of the manufacturing factors. Similarly, any other manufacturing factor can be included in the pre-design loss estimation.
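
    One simple way to fold a manufacturing effect into a pre-design loss estimate, in the spirit of the paper, is to apply a multiplicative degradation (build) factor to a Steinmetz-type core-loss fit of the catalogue data, as sketched below. The coefficients, the function name and the 30% penalty are placeholders, not values or methods from the paper.

```python
# Specific core loss from a Steinmetz-type fit of catalogue data,
#   p_fe = k_h * f * B**alpha + k_e * (f * B)**2     [W/kg]
# with a multiplicative "build factor" representing degradation of the
# lamination after cutting/punching. All coefficients are placeholders.

def core_loss_w_per_kg(b_peak, freq, k_h=0.02, alpha=1.8, k_e=5.0e-5,
                       build_factor=1.0):
    hysteresis = k_h * freq * b_peak ** alpha
    eddy = k_e * (freq * b_peak) ** 2
    return build_factor * (hysteresis + eddy)

# Catalogue-based estimate vs. an estimate with a 30% manufacturing penalty
loss_catalogue = core_loss_w_per_kg(1.5, 400.0)
loss_as_built = core_loss_w_per_kg(1.5, 400.0, build_factor=1.3)
print("catalogue: %.2f W/kg   as built: %.2f W/kg" % (loss_catalogue, loss_as_built))
```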

  11. Process modeling for the Integrated Nonthermal Treatment System (INTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.W.

    1997-04-01

    This report describes the process modeling done in support of the Integrated Nonthermal Treatment System (INTS) study. This study was performed to supplement the Integrated Thermal Treatment System (ITTS) study and comprises five conceptual treatment systems that treat DOE contact-handled mixed low-level wastes (MLLW) at temperatures of less than 350°F. ASPEN PLUS, a chemical process simulator, was used to model the systems. Nonthermal treatment systems were developed as part of the INTS study and include sufficient processing steps to treat the entire inventory of MLLW. The final result of the modeling is a process flowsheet with a detailed mass and energy balance. In contrast to the ITTS study, which modeled only the main treatment system, the INTS study modeled each of the various processing steps with ASPEN PLUS, release 9.1-1. Trace constituents, such as radionuclides and minor pollutant species, were not included in the calculations.
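
    At flowsheet level, the mass balances produced by such models reduce, for each component, to linear balances around the unit operations. The toy recycle loop below shows that structure only; the unit operations, split fraction and removal efficiency are invented and bear no relation to the INTS flowsheets or to ASPEN PLUS itself.

```python
import numpy as np

# Component mass balance around a mixer, a treatment unit that destroys 90%
# of the incoming contaminant, and a splitter recycling 20% of the outlet.
# Unknowns: x = [mixer_out, treat_out, recycle, product]  (kg/h of contaminant)
feed = 100.0

A = np.array([[ 1.0,  0.0, -1.0, 0.0],   # mixer:    mixer_out = feed + recycle
              [-0.1,  1.0,  0.0, 0.0],   # treat:    treat_out = 0.1 * mixer_out
              [ 0.0, -0.2,  1.0, 0.0],   # splitter: recycle   = 0.2 * treat_out
              [ 0.0, -0.8,  0.0, 1.0]])  # splitter: product   = 0.8 * treat_out
b = np.array([feed, 0.0, 0.0, 0.0])

mixer_out, treat_out, recycle, product = np.linalg.solve(A, b)
print("product stream: %.2f kg/h of contaminant (%.1f%% overall removal)"
      % (product, 100.0 * (1.0 - product / feed)))
```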

  12. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it should therefore be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area has already been the subject of research in the past: thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed publications and existing quality measures do not, however, reflect all important attributes of business process model clarity, simplicity and completeness. It would therefore be appropriate to add new measures of quality.

  13. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2010-12-01

    With 4 million ha currently grown for ethanol in Brazil only, approximately half the global bioethanol production in 2005 (Smeets 2008), and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Indeed, ethanol made from biomass is currently the most widespread option for alternative transportation fuels. It was originally promoted as a carbon neutral energy resource that could bring energy independence to countries and local opportunities to farmers, until attention was drawn to its environmental and socio-economical drawbacks. It is still not clear to which extent it is a solution or a contributor to climate change mitigation. Dynamic Global Vegetation models can help address these issues and quantify the potential impacts of biofuels on ecosystems at scales ranging from on-site to global. The global agro-ecosystem model ORCHIDEE describes water, carbon and energy exchanges at the soil-atmosphere interface for a limited number of natural and agricultural vegetation types. In order to integrate agricultural management to the simulations and to capture more accurately the specificity of crops' phenology, ORCHIDEE has been coupled with the agronomical model STICS. The resulting crop-oriented vegetation model ORCHIDEE-STICS has been used so far to simulate temperate crops such as wheat, corn and soybean. As a generic ecosystem model, each grid cell can include several vegetation types with their own phenology and management practices, making it suitable to spatial simulations. Here, ORCHIDEE-STICS is altered to include sugar cane as a new agricultural Plant functional Type, implemented and parametrized using the STICS approach. An on-site calibration and validation is then performed based on biomass and flux chamber measurements in several sites in Australia and variables such as LAI, dry weight, heat fluxes and respiration are used to evaluate the ability of the model to simulate the specific

  14. Modelling income processes with lots of heterogeneity

    DEFF Research Database (Denmark)

    Browning, Martin; Ejrnæs, Mette; Alvarez, Javier

    2010-01-01

    We model earnings processes allowing for lots of heterogeneity across agents. We also introduce an extension to the linear ARMA model which allows the initial convergence to the long run to be different from that implied by the conventional ARMA model. This is particularly important for unit root
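
    The sketch below simulates earnings histories from an ARMA(1,1) process whose persistence, MA coefficient and innovation variance differ across agents. It illustrates only the general idea of agent-level heterogeneity; the parameter distributions are assumptions, and the authors' extended initial-convergence specification is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_years = 1000, 30

# Agent-specific ARMA(1,1) parameters drawn from simple illustrative distributions
phi = rng.uniform(0.3, 0.95, n_agents)     # AR(1) persistence
theta = rng.uniform(-0.3, 0.3, n_agents)   # MA(1) coefficient
sigma = rng.uniform(0.05, 0.25, n_agents)  # innovation standard deviation

y = np.zeros((n_agents, n_years))
eps_prev = np.zeros(n_agents)
for t in range(1, n_years):
    eps = rng.normal(0.0, sigma)           # one innovation per agent
    y[:, t] = phi * y[:, t - 1] + eps + theta * eps_prev
    eps_prev = eps

# Cross-sectional variance grows with age because of heterogeneous persistence
print("variance at t=5: %.4f   at t=29: %.4f" % (y[:, 5].var(), y[:, 29].var()))
```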

  15. Counting Processes for Retail Default Modeling

    DEFF Research Database (Denmark)

    Kiefer, Nicholas Maximilian; Larson, C. Erik

    in a discrete state space. In a simple case, the states could be default/non-default; in other models relevant for credit modeling the states could be credit scores or payment status (30 dpd, 60 dpd, etc.). Here we focus on the use of stochastic counting processes for mortgage default modeling, using data...

  16. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    Science.gov (United States)

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  17. Bringing Value-Based Perspectives to Care: Including Patient and Family Members in Decision-Making Processes

    Directory of Open Access Journals (Sweden)

    Graeme Kohler

    2017-11-01

    Full Text Available Research has shown a gap in consistent application of system-level strategies that can effectively translate organizational policies around patient and family engagement into practice. Methods The broad objective of this initiative was to develop a system-level implementation strategy to include patient and family advisors (PFAs) at decision-making points in primary healthcare (PHC) based on well-established evidence and literature. In this opportunity sponsored by the Canadian Foundation for Healthcare Improvement (CFHI), a co-design methodology, also well-established, was applied in identifying and developing a suitable implementation strategy to engage PFAs as members of quality teams in PHC. Diabetes management centres (DMCs) was selected as the pilot site to develop the strategy. Key steps in the process included review of evidence, review of the current state in PHC through engagement of key stakeholders and a co-design approach. Results The project team included a diverse representation of members from the PHC system including patient advisors, DMC team members, system leads, providers, Public Engagement team members and CFHI improvement coaches. Key outcomes of this 18-month long initiative included development of a working definition of patient and family engagement, development of a Patient and Family Engagement Resource Guide and evaluation of the resource guide. Conclusion This novel initiative provided us with an opportunity to develop a supportive system-wide implementation plan and a strategy to include PFAs in decision-making processes in PHC. The well-established co-design methodology further allowed us to include value-based (customer-driven) quality and experience of care perspectives of several important stakeholders including patient advisors. The next step will be to implement the strategy within DMCs and spread it across PHC, both locally and provincially, with a focus on sustainability.

  18. A curved multi-component aerosol hygroscopicity model framework: 2 Including organics

    Science.gov (United States)

    Topping, D. O.; McFiggans, G. B.; Coe, H.

    2004-12-01

    This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Köhler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5-6 dicarboxylic acids, down to low relative humidity conditions. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between predicted and measured data increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly soluble in water, significant deviations from measured surface tension depression behaviour were predicted with both model formalisms tested. A Sensitivity analysis showed that such variation is likely to lead to predicted growth factors within the measurement uncertainty for growth factor taken in the sub-saturated regime. Greater

  19. A Hydrological Concept including Lateral Water Flow Compatible with the Biogeochemical Model ForSAFE

    Directory of Open Access Journals (Sweden)

    Giuliana Zanchi

    2016-03-01

    Full Text Available The study presents a hydrology concept developed to include lateral water flow in the biogeochemical model ForSAFE. The hydrology concept was evaluated against data collected at Svartberget in the Vindeln Research Forest in Northern Sweden. The results show that the new concept allows simulation of a saturated and an unsaturated zone in the soil, as well as of water flow to the stream that is comparable to measurements. The most relevant differences compared to streamflow measurements are that the model simulates a higher base flow in winter and lower flow peaks after snowmelt. These differences are mainly caused by the assumptions made to regulate the percolation at the bottom of the simulated soil columns. The capability to simulate lateral flows and a saturated zone in ForSAFE can greatly improve the simulation of chemical exchange in the soil and the export of elements from the soil to watercourses. Such a model can help improve the understanding of how environmental changes in the forest landscape will influence chemical loads to surface waters.

  20. Integrated Sachs-Wolfe effect in a quintessence cosmological model: Including anisotropic stress of dark energy

    International Nuclear Information System (INIS)

    Wang, Y. T.; Xu, L. X.; Gui, Y. X.

    2010-01-01

    In this paper, we investigate the integrated Sachs-Wolfe effect in the quintessence cold dark matter model with constant equation of state and constant speed of sound in the dark energy rest frame, including the dark energy perturbation and its anisotropic stress. Comparing with the ΛCDM model, we find that the integrated Sachs-Wolfe (ISW) power spectra are affected by the different background evolutions and by the dark energy perturbation. As we change the speed of sound from 1 to 0 in the quintessence cold dark matter model with given state parameters, it is found that the inclusion of dark energy anisotropic stress makes the variation of the magnitude of the ISW source uncertain, due to the anticorrelation between the speed of sound and the ratio of the dark energy density perturbation contrast to the dark matter density perturbation contrast in the ISW source term. Thus, the magnitude of the ISW source term is governed by the competition between the change in the factor (1 + 3ĉ_s^2/2) and the change in δ_de/δ_m as ĉ_s^2 is varied.
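
    For reference, the standard line-of-sight expression for the ISW temperature anisotropy, sourced by the time variation of the metric potentials, is recalled below; the paper's specific source term with the (1 + 3ĉ_s^2/2) factor is not reproduced here.

```latex
% Standard line-of-sight ISW anisotropy: conformal-Newtonian-gauge potentials
% Phi and Psi, conformal time eta, primes denoting derivatives with respect to
% eta, evaluated along the line of sight in direction n-hat.
\left(\frac{\Delta T}{T}\right)_{\rm ISW}(\hat n)
  = \int_{\eta_*}^{\eta_0}
    \bigl[\Phi'(\eta,\mathbf{x}) + \Psi'(\eta,\mathbf{x})\bigr]
    \Big|_{\mathbf{x} = (\eta_0-\eta)\hat n}\, d\eta .
```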

  1. Expanded rock blast modeling capabilities of DMC{_}BLAST, including buffer blasting

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S. [Sandia National Labs., Albuquerque, NM (United States); Tidman, J.P.; Chung, S.H. [ICI Explosives (Canada)

    1996-12-31

    A discrete element computer program named DMC_BLAST (Distinct Motion Code) has been under development since 1987 for modeling rock blasting. This program employs explicit time integration and uses spherical or cylindrical elements that are represented as circles in 2-D. DMC_BLAST calculations compare favorably with data from actual bench blasts. The blast modeling capabilities of DMC_BLAST have been expanded to include independently dipping geologic layers, top surface, bottom surface and pit floor. The pit can also now be defined using coordinates based on the toe of the bench. A method for modeling decked explosives has been developed which allows accurate treatment of the inert materials (stemming) in the explosive column and approximate treatment of different explosives in the same blasthole. A DMC_BLAST user can specify decking through a specific geologic layer with either inert material or a different explosive. Another new feature of DMC_BLAST is specification of an uplift angle which is the angle between the normal to the blasthole and a vector defining the direction of explosive loading on particles adjacent to the blasthole. A buffer (choke) blast capability has been added for situations where previously blasted material is adjacent to the free face of the bench preventing any significant lateral motion during the blast.
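
    A minimal discrete-element step of the kind used by such codes, with circular elements, a linear normal spring-dashpot contact law and explicit (symplectic Euler) time integration, is sketched below for two discs. The stiffness, damping and set-up are illustrative assumptions, not DMC_BLAST parameters or its contact law.

```python
import numpy as np

# Two circular elements; linear normal spring-dashpot contact; explicit integration.
radius, mass = 0.05, 1.0                     # m, kg
k_n, c_n, dt = 1.0e5, 5.0, 1.0e-5            # N/m, N s/m, s

pos = np.array([[0.0, 0.0], [0.12, 0.0]])    # disc centres (m)
vel = np.array([[1.0, 0.0], [0.0, 0.0]])     # disc 1 moves toward disc 2

for step in range(20000):                    # 0.2 s of simulated time
    d = pos[1] - pos[0]
    dist = np.linalg.norm(d)
    overlap = 2.0 * radius - dist
    force = np.zeros((2, 2))
    if overlap > 0.0:                        # discs in contact
        normal = d / dist
        v_rel = np.dot(vel[0] - vel[1], normal)
        f = (k_n * overlap + c_n * v_rel) * normal
        force[0] -= f                        # push disc 1 back
        force[1] += f                        # push disc 2 forward
    vel += dt * force / mass                 # symplectic Euler update
    pos += dt * vel

print("final x-velocities (m/s):", np.round(vel[:, 0], 3))
```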

  2. Modeling closed nuclear fuel cycles processes

    Energy Technology Data Exchange (ETDEWEB)

    Shmidt, O.V. [A.A. Bochvar All-Russian Scientific Research Institute for Inorganic Materials, Rogova, 5a street, Moscow, 123098 (Russian Federation); Makeeva, I.R. [Zababakhin All-Russian Scientific Research Institute of Technical Physics, Vasiliev street 13, Snezhinsk, Chelyabinsk region, 456770 (Russian Federation); Liventsov, S.N. [Tomsk Polytechnic University, Tomsk, Lenin Avenue, 30, 634050 (Russian Federation)

    2016-07-01

    Computer models of processes are necessary for determination of optimal operating conditions for closed nuclear fuel cycle (NFC) processes. Computer models can be quickly changed in accordance with new and fresh data from experimental research. 3 kinds of process simulation are necessary. First, the VIZART software package is a balance model development used for calculating the material flow in technological processes. VIZART involves taking into account of equipment capacity, transport lines and storage volumes. Secondly, it is necessary to simulate the physico-chemical processes that are involved in the closure of NFC. The third kind of simulation is the development of software that allows the optimization, diagnostics and control of the processes which implies real-time simulation of product flows on the whole plant or on separate lines of the plant. (A.C.)

  3. A catchment-scale groundwater model including sewer pipe leakage in an urban system

    Science.gov (United States)

    Peche, Aaron; Fuchs, Lothar; Spönemann, Peter; Graf, Thomas; Neuweiler, Insa

    2016-04-01


  4. Modelling and control of a microgrid including photovoltaic and wind generation

    Science.gov (United States)

    Hussain, Mohammed Touseef

    Extensive increase of distributed generation (DG) penetration and the existence of multiple DG units at distribution level have introduced the notion of the micro-grid. This thesis develops a detailed non-linear and small-signal dynamic model of a microgrid that includes PV, wind and conventional small scale generation along with their power electronics interfaces and the filters. The models developed evaluate the amount of generation mix from various DGs for satisfactory steady-state operation of the microgrid. In order to understand the interaction of the DGs with the microgrid system, two simpler configurations were considered initially. The first consists of a microalternator, PV and their electronics, and the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state space models of each microgrid are developed. Small signal analysis showed that a large participation of PV/wind can drive the microgrid to the brink of instability without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV and conventional generation was investigated next. The findings for the smaller systems were verified through nonlinear and small signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance the microgrid operation. The potential of various control inputs to provide additional damping to the system has been evaluated through decomposition techniques. The signals identified to have damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and PV inverter phase angle were the best inputs for enhanced stability boundaries.
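
    The small-signal analysis described above comes down to inspecting the eigenvalues of the linearised state matrix. The sketch below shows that step on a made-up 3x3 matrix; the actual microgrid model is of much higher order and its state matrix is not reproduced here.

```python
import numpy as np

# Small-signal stability check for x_dot = A x: eigenvalues and damping ratios.
# The matrix below is an invented example, not the thesis model.
A = np.array([[ -0.5,  10.0,   0.0],
              [-10.0,  -0.5,   5.0],
              [  0.0,  -2.0,  -1.0]])

eigvals = np.linalg.eigvals(A)
for lam in eigvals:
    damping = -lam.real / abs(lam) if abs(lam) > 0 else 1.0
    print("mode %8.3f %+8.3fj   damping ratio %.3f" % (lam.real, lam.imag, damping))

print("stable:", bool(np.all(eigvals.real < 0)))
```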

  5. An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models

    Science.gov (United States)

    Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.

    2001-01-01

    This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models for shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on the data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of the bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Varying the bottom albedo by replacing the sea grass bed with a coral sand bottom also has an appreciable effect on the heat budget of the shallow regions. We believe that the differences in the heat budget for the shallow areas will have an influence on the local circulation processes and especially on the evaporative and long-wave heat losses for these areas. The ultimate effects on humidity and cloudiness of the region are expected to be significant as well.

  6. Ultrasonic-assisted manufacturing processes: Variational model and numerical simulations

    KAUST Repository

    Siddiq, Amir

    2012-04-01

    We present a computational study of ultrasonic assisted manufacturing processes including sheet metal forming, upsetting, and wire drawing. A fully variational porous plasticity model is modified to include ultrasonic softening effects and then utilized to account for instantaneous softening when ultrasonic energy is applied during deformation. Material model parameters are identified via inverse modeling, i.e. by using experimental data. The versatility and predictive ability of the model are demonstrated and the effect of ultrasonic intensity on the manufacturing process at hand is investigated and compared qualitatively with experimental results reported in the literature. © 2011 Elsevier B.V. All rights reserved.

  7. Extending Galactic Habitable Zone Modeling to Include the Emergence of Intelligent Life.

    Science.gov (United States)

    Morrison, Ian S; Gowanlock, Michael G

    2015-08-01

    Previous studies of the galactic habitable zone have been concerned with identifying those regions of the Galaxy that may favor the emergence of complex life. A planet is deemed habitable if it meets a set of assumed criteria for supporting the emergence of such complex life. In this work, we extend the assessment of habitability to consider the potential for life to further evolve to the point of intelligence--termed the propensity for the emergence of intelligent life, φI. We assume φI is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the sterilizing effects of nearby supernovae. The times between supernova events provide windows of opportunity for the evolution of intelligence. We developed a model that allows us to analyze these window times to generate a metric for φI, and we examine here the spatial and temporal variation of this metric. Even under the assumption that long time durations are required between sterilizations to allow for the emergence of intelligence, our model suggests that the inner Galaxy provides the greatest number of opportunities for intelligence to arise. This is due to the substantially higher number density of habitable planets in this region, which outweighs the effects of a higher supernova rate in the region. Our model also shows that φI is increasing with time. Intelligent life emerged at approximately the present time at Earth's galactocentric radius, but a similar level of evolutionary opportunity was available in the inner Galaxy more than 2 Gyr ago. Our findings suggest that the inner Galaxy should logically be a prime target region for searches for extraterrestrial intelligence and that any civilizations that may have emerged there are potentially much older than our own.

  8. Modelling Template for the Development of the Process Flowsheet

    DEFF Research Database (Denmark)

    Fedorova, Marina; Gani, Rafiqul

    2015-01-01

    provides building blocks for the templates (generic models previously developed); 3) computer aided methods and tools, that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation. In this work, the integrated use of all three......Models are playing important roles in design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates......, where the experimental effort could be focused.In this contribution a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its tools integration and model import and export capabilities, is presented. Modelling templates...

  9. porewater chemistry experiment at Mont Terri rock laboratory. Reactive transport modelling including bacterial activity

    International Nuclear Information System (INIS)

    Tournassat, Christophe; Gaucher, Eric C.; Leupin, Olivier X.; Wersin, Paul

    2010-01-01

    Document available in extended abstract form only. An in-situ test in the Opalinus Clay formation, termed the Porewater Chemistry (PC) experiment, was run for a period of five years. It was based on the concept of diffusive equilibration, whereby traced water with a composition close to that expected in the formation was continuously circulated and monitored in a packed-off borehole. The main original focus was to obtain reliable data on the pH/pCO2 of the pore water, but because of unexpected microbially-induced redox reactions, the objective was then changed to elucidate the biogeochemical processes happening in the borehole and to understand their impact on pCO2 and pH in the low-permeability clay formation. The biologically perturbed chemical evolution of the PC experiment was simulated with reactive transport models. The aim of this modelling exercise was to develop a 'minimal' model able to reproduce the chemical evolution of the PC experiment, i.e. the chemical evolution of solute inorganic and organic compounds (organic carbon, dissolved inorganic carbon, etc.) that are coupled with each other through the simultaneous occurrence of biological transformation of solute or solid compounds, in-diffusion and out-diffusion of solute species and precipitation/dissolution of minerals (in the borehole and in the formation). An accurate description of the initial chemical conditions in the surrounding formation, together with simplified kinetic rules mimicking the different phases of bacterial activity, allowed the evolution of all main measured parameters (e.g. pH, TOC) to be reproduced. Analyses from the overcoring and these simulations demonstrate the high buffer capacity of Opalinus Clay with respect to chemical perturbations due to bacterial activity. This pH buffering capacity is mainly attributed to the carbonate system as well as to the reactivity of the clay surfaces. Glycerol leaching from the pH-electrode might be the primary organic source responsible for

  10. MODELLING PURCHASING PROCESSES FROM QUALITY ASPECTS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-12-01

    Full Text Available Management has the fundamental task of identifying and directing the primary and specific processes within the purchasing function, applying an up-to-date information infrastructure. ISO 9001:2000 defines a process as a number of interrelated or interacting activities transforming inputs into outputs, and the "process approach" as the systematic identification and management of the processes employed within the organization and, in particular, of the interactions among those processes. To direct a quality management system using the process approach, the organization has to determine the map of its general (basic) processes. Primary processes are determined on the grounds of their interrelationships and their impact on satisfying customers' needs. To make a proper choice of general business processes, it is necessary to determine the entire business flow, beginning with the customer demand and ending with the delivery of the products or services provided. In the next step, the process model is converted into a data model, which is essential for the implementation of the information system enabling automation, monitoring, measurement, inspection, analysis and improvement of the key purchasing processes. In this paper, the methodology and some results of an investigation into the development of an IS for the purchasing process are given from the quality perspective.

  11. Three-dimensional model analysis and processing

    CERN Document Server

    Yu, Faxin; Luo, Hao; Wang, Pinghui

    2011-01-01

    This book focuses on five hot research directions in 3D model analysis and processing in computer science:  compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.

  12. Value-Oriented Coordination Process Modeling

    NARCIS (Netherlands)

    Fatemi, Hassan; van Sinderen, Marten J.; Wieringa, Roelf J.; Hull, Richard; Mendling, Jan; Tai, Stefan

    Business webs are collections of enterprises designed to jointly satisfy a consumer need. Designing business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business value and coordination process perspectives, and for mutually aligning these

  13. Intestinal bacterial overgrowth includes potential pathogens in the carbohydrate overload models of equine acute laminitis.

    Science.gov (United States)

    Onishi, Janet C; Park, Joong-Wook; Prado, Julio; Eades, Susan C; Mirza, Mustajab H; Fugaro, Michael N; Häggblom, Max M; Reinemeyer, Craig R

    2012-10-12

    Carbohydrate overload models of equine acute laminitis are used to study the development of lameness. It is hypothesized that a diet-induced shift in cecal bacterial communities contributes to the development of the pro-inflammatory state that progresses to laminar failure. It is proposed that vasoactive amines, protease activators and endotoxin, all bacterial derived bioactive metabolites, play a role in disease development. Questions regarding the oral bioavailability of many of the bacterial derived bioactive metabolites remain. This study evaluates the possibility that a carbohydrate-induced overgrowth of potentially pathogenic cecal bacteria occurs and that bacterial translocation contributes toward the development of the pro-inflammatory state. Two groups of mixed-breed horses were used, those with laminitis induced by cornstarch (n=6) or oligofructan (n=6) and non-laminitic controls (n=8). Cecal fluid and tissue homogenates of extra-intestinal sites including the laminae were used to enumerate Gram-negative and -positive bacteria. Horses that developed Obel grade 2 lameness revealed a significant overgrowth of potentially pathogenic Gram-positive and Gram-negative intestinal bacteria within the cecal fluid. Although colonization of extra-intestinal sites with potentially pathogenic bacteria was not detected, results of this study indicate that cecal/colonic lymphadenopathy and eosinophilia develop in horses progressing to lameness. It is hypothesized that the pro-inflammatory state in carbohydrate overload models of equine acute laminitis is driven by an immune response to the rapid overgrowth of Gram-positive and Gram-negative cecal bacterial communities in the gut. Further equine research is indicated to study the immunological response, involving the lymphatic system that develops in the model. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. CFD simulations and reduced order modeling of a refrigerator compartment including radiation effects

    International Nuclear Information System (INIS)

    Bayer, Ozgur; Oskay, Ruknettin; Paksoy, Akin; Aradag, Selin

    2013-01-01

    Highlights: ► Free convection in a refrigerator is simulated including radiation effects. ► Heat rates are affected drastically when radiation effects are considered. ► 95% of the flow energy can be represented by using one spatial POD mode. - Abstract: Considering the engineering problem of natural convection in domestic refrigerator applications, this study aims to simulate the fluid flow and temperature distribution in a single commercial refrigerator compartment by using the experimentally determined temperature values as the specified constant wall temperature boundary conditions. The free convection in refrigerator applications is evaluated as a three-dimensional (3D), turbulent, transient and coupled non-linear flow problem. Radiation heat transfer mode is also included in the analysis. According to the results, taking radiation effects into consideration does not change the temperature distribution inside the refrigerator significantly; however the heat rates are affected drastically. The flow inside the compartment is further analyzed with a reduced order modeling method called Proper Orthogonal Decomposition (POD) and the energy contents of several spatial and temporal modes that exist in the flow are examined. The results show that approximately 95% of all the flow energy can be represented by only using one spatial mode
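
    The POD step mentioned in the highlights is, in practice, a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below performs that bookkeeping on synthetic snapshots (the dominant structure and the noise level are invented), not on the refrigerator CFD fields.

```python
import numpy as np

# Proper Orthogonal Decomposition of a snapshot matrix via the SVD.
rng = np.random.default_rng(1)
n_points, n_snapshots = 5000, 120

# One dominant spatial structure with a slow temporal modulation plus weak noise
base = np.sin(np.linspace(0.0, 3.0 * np.pi, n_points))
snapshots = (np.outer(base, 1.0 + 0.1 * np.sin(np.linspace(0, 6, n_snapshots)))
             + 0.01 * rng.standard_normal((n_points, n_snapshots)))

mean_field = snapshots.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = s**2 / np.sum(s**2)                 # fractional energy of each mode
print("energy captured by mode 1: %.1f%%" % (100.0 * energy[0]))
print("modes needed for 99% energy:", int(np.searchsorted(np.cumsum(energy), 0.99) + 1))
```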

  15. Modeling of Dielectric Heating within Lyophilization Process

    Directory of Open Access Journals (Sweden)

    Jan Kyncl

    2014-01-01

    Full Text Available A process of lyophilization of paper books is modeled. The process of drying is controlled by a dielectric heating system. From the physical viewpoint, the task represents a 2D coupled problem described by two partial differential equations for the electric and temperature fields. The material parameters are supposed to be temperature-dependent functions. The continuous mathematical model is solved numerically. The methodology is illustrated with some examples whose results are discussed.

  16. Cost Models for MMC Manufacturing Processes

    Science.gov (United States)

    Elzey, Dana M.; Wadley, Haydn N. G.

    1996-01-01

    Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory and there is growing interest in their transition to industrial production. However, research conducted to date has almost exclusively focused on overcoming the technical barriers to producing high-quality material and little attention has been given to the economic feasibility of these laboratory approaches and process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCMs are cost analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the-art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and cost models are presented for both processes and the resulting predicted quality-cost curves are presented and discussed.

  17. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  18. Business Process Modeling Notation - An Overview

    Directory of Open Access Journals (Sweden)

    Alexandra Fortiş

    2006-01-01

    Full Text Available BPMN is an industry standard created to offer a common and user-friendly notation to all the participants in a business process. The present paper aims to briefly present the main features of this notation, as well as an interpretation of some of the main patterns characterizing a business process modeled in terms of its workflows.

  19. Qualitative simulation in formal process modelling

    International Nuclear Information System (INIS)

    Sivertsen, Elin R.

    1999-01-01

    In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Being represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated from the potential of such a combination in safety analysis based on models comprising both software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories (author) (ml)

  20. Controlled Carbon Source Addition to an Alternating Nitrification-Denitrification Wastewater Treatment Process Including Biological P Removal

    DEFF Research Database (Denmark)

    Isaacs, Steven Howard; Henze, Mogens

    1995-01-01

    The paper investigates the effect of adding an external carbon source on the rate of denitrification in an alternating activated sludge process including biological P removal. Two carbon sources were examined, acetate and hydrolysate derived from biologically hydrolyzed sludge. Preliminary batch...... experiments performed in 5 liter bottles indicated that the denitrification rate can be instantaneously increased through the addition of either carbon source. The amount by which the rate was increased depended on the amount of carbon added. In the main experiments performed in a pilot scale alternating...... process, the addition of either carbon source to the anoxic zone also resulted in an instantaneous and fairly reproducible increase in the denitrification rate. Some release of phosphate associated with the carbon source addition was observed. With respect to nitrogen removal, these results indicate...

  1. Bringing Value-Based Perspectives to Care: Including Patient and Family Members in Decision-Making Processes.

    Science.gov (United States)

    Kohler, Graeme; Sampalli, Tara; Ryer, Ashley; Porter, Judy; Wood, Les; Bedford, Lisa; Higgins-Bowser, Irene; Edwards, Lynn; Christian, Erin; Dunn, Susan; Gibson, Rick; Ryan Carson, Shannon; Vallis, Michael; Zed, Joanna; Tugwell, Barna; Van Zoost, Colin; Canfield, Carolyn; Rivoire, Eleanor

    2017-03-06

    Recent evidence shows that patient engagement is an important strategy in achieving a high performing healthcare system. While there is considerable evidence of implementation initiatives in the direct care context, there is limited investigation of implementation initiatives in the decision-making context as it relates to program planning, service delivery and developing policies. Research has also shown a gap in consistent application of system-level strategies that can effectively translate organizational policies around patient and family engagement into practice. The broad objective of this initiative was to develop a system-level implementation strategy to include patient and family advisors (PFAs) at decision-making points in primary healthcare (PHC) based on well-established evidence and literature. In this opportunity sponsored by the Canadian Foundation for Healthcare Improvement (CFHI), a co-design methodology, also well-established, was applied in identifying and developing a suitable implementation strategy to engage PFAs as members of quality teams in PHC. Diabetes management centres (DMCs) was selected as the pilot site to develop the strategy. Key steps in the process included review of evidence, review of the current state in PHC through engagement of key stakeholders and a co-design approach. The project team included a diverse representation of members from the PHC system including patient advisors, DMC team members, system leads, providers, Public Engagement team members and CFHI improvement coaches. Key outcomes of this 18-month long initiative included development of a working definition of patient and family engagement, development of a Patient and Family Engagement Resource Guide and evaluation of the resource guide. This novel initiative provided us with an opportunity to develop a supportive system-wide implementation plan and a strategy to include PFAs in decision-making processes in PHC. The well-established co-design methodology further allowed us to

  2. The (Mathematical) Modeling Process in Biosciences.

    Science.gov (United States)

    Torres, Nestor V; Santos, Guido

    2015-01-01

    In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model and optimization and system management derived from the analysis of the mathematical model. All along this work the main features and shortcomings of the process are analyzed and a set of rules that could help in the task of modeling any biological system are presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology.

  3. Models and Modelling Tools for Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    2016-01-01

    -process design. Illustrative examples highlighting the need for efficient model-based systems will be presented, where the need for predictive models for innovative chemical product-process design will be highlighted. The examples will cover aspects of chemical product-process design where the idea of the grand......The design, development and reliability of a chemical product and the process to manufacture it, need to be consistent with the end-use characteristics of the desired product. One of the common ways to match the desired product-process characteristics is through trial and error based experiments......, which can be expensive and time consuming. An alternative approach is the use of a systematic model-based framework according to an established work-flow in product-process design, replacing some of the time consuming and/or repetitive experimental steps. The advantages of the use of a model...

  4. Multiscale soil-landscape process modeling

    NARCIS (Netherlands)

    Schoorl, J.M.; Veldkamp, A.

    2006-01-01

    The general objective of this chapter is to illustrate the role of soils and geomorphological processes in the multiscale soil-lanscape context. Included in this context is the fourth dimension (temporal dimension) and the human role (fifth dimension)

  5. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Winter, Anatol; Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

    The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels......) including estimation of their "petrophysical" properties (e.g. absolute permeability). 3) Mathematical modelling and computer studies of multiphase transport through pore space using mathematical network models. 4) Investigation of link between pore-scale and macroscopic recovery mechanisms....

  6. Extending Model Checking To Object Process Validation

    NARCIS (Netherlands)

    van Rein, H.

    2002-01-01

    Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent

  7. Hierarchical Structured Model for Nonlinear Dynamical Processes ...

    African Journals Online (AJOL)

    The mathematical representation of the process, in this context, is by a set of linear stochastic differential equations (SDE) with unique solutions. The problem of realization is that of constructing the dynamical system by looking at the problem of scientific model building. In model building, one must be able to calculate the ...
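
    A linear SDE of the kind referred to above can be integrated numerically with the Euler-Maruyama scheme, as in the sketch below for an Ornstein-Uhlenbeck process; the drift and diffusion coefficients are illustrative, not tied to the article's model.

```python
import numpy as np

# Euler-Maruyama simulation of a linear (Ornstein-Uhlenbeck type) SDE,
#   dX_t = -a X_t dt + b dW_t,
# the simplest member of the class of linear SDEs with unique solutions.
rng = np.random.default_rng(42)
a, b = 1.5, 0.3            # drift and diffusion coefficients (assumed)
dt, n_steps, n_paths = 1e-3, 5000, 2000

x = np.full(n_paths, 1.0)  # common initial condition X_0 = 1
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    x += -a * x * dt + b * dw

# The stationary variance of this SDE is b^2 / (2a); compare with the sample
print("sample variance: %.4f   theoretical: %.4f" % (x.var(), b**2 / (2 * a)))
```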

  8. Numerical approaches to expansion process modeling

    Directory of Open Access Journals (Sweden)

    G. V. Alekseev

    2017-01-01

    Full Text Available Forage production is currently undergoing a period of intensive renovation and introduction of advanced technologies and equipment. Methods such as barley toasting, grain extrusion, steaming and flattening of grain, boiling-bed explosion, infrared treatment of cereals and legumes followed by flattening, and single or double granulation of purified whole grain without humidification in matrix presses followed by grinding of the granules are used more and more often. These methods require special apparatuses, machines and auxiliary equipment, designed on the basis of different kinds of mathematical models. In roasting, simulation of the heat fields arising in the working chamber provides conditions under which part of the starch decomposes to monosaccharides, which makes the grain sweetish; however, due to protein denaturation, the digestibility of the protein and the availability of amino acids decrease somewhat. Grain is roasted mainly for young animals, in order to teach them to eat feed at an early age, to stimulate the secretory activity of digestion and to promote development of the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and to various types of fungi, which largely avoids possible diseases of the gastrointestinal tract. This method has found wide application directly on farms. Legumes such as peas, soy, lupine and lentils are also processed before being fed to animals: these feeds are preliminarily ground and then cooked or steamed for 30–40 minutes to 1 hour in the feed mill. Such processing inactivates the anti-nutrients they contain, which otherwise reduce the effectiveness of their use. After processing, legumes are used as protein supplements in an amount of 25–30% of the total nutritional value of the diet. It is recommended, however, to cook and steam only grain of good quality; poor-quality grain that has been stored for a long time and damaged by pathogenic microflora is subject to

  9. A comparison of different ways of including baseline counts in negative binomial models for data from falls prevention trials.

    Science.gov (United States)

    Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M

    2018-01-01

    A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
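
    The three ways of including the baseline count compared above can be illustrated with standard negative binomial regression tooling. The sketch below uses Python's statsmodels as a stand-in (the paper does not prescribe software); the simulated data, variable names and the fixed dispersion value are illustrative assumptions, not the paper's motivating dataset.

        import numpy as np
        import statsmodels.api as sm

        # Illustrative data: follow-up falls (y), baseline falls, and a treatment indicator.
        rng = np.random.default_rng(0)
        n = 200
        treat = rng.integers(0, 2, n)
        baseline = rng.poisson(2.0, n) + 1          # +1 avoids log(0); handling of zero baselines is a modelling choice
        y = rng.poisson(np.exp(0.3 + 0.5 * np.log(baseline) - 0.4 * treat))

        # NB-logged: log-transformed baseline count as a regressor
        X = sm.add_constant(np.column_stack([treat, np.log(baseline)]))
        nb_logged = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

        # NB-offset: log baseline enters with its coefficient fixed at 1
        X2 = sm.add_constant(treat)
        nb_offset = sm.GLM(y, X2, family=sm.families.NegativeBinomial(alpha=1.0),
                           offset=np.log(baseline)).fit()

        # NB-unlogged: raw baseline count as a regressor
        X3 = sm.add_constant(np.column_stack([treat, baseline]))
        nb_unlogged = sm.GLM(y, X3, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

        print(nb_logged.params, nb_offset.params, nb_unlogged.params)

    Cook and Wei's conditional NB model is not shown; it requires joint modelling of the baseline and follow-up counts rather than a single GLM call.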

  10. Biomolecular Modeling in a Process Dynamics and Control Course

    Science.gov (United States)

    Gray, Jeffrey J.

    2006-01-01

    I present modifications to the traditional course entitled, "Process dynamics and control," which I renamed "Modeling, dynamics, and control of chemical and biological processes." Additions include the central dogma of biology, pharmacokinetic systems, population balances, control of gene transcription, and large-scale…

  11. Model for Simulating a Spiral Software-Development Process

    Science.gov (United States)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code

  12. Explosive Bubble Modelling by Noncausal Process

    OpenAIRE

    Christian Gouriéroux; Jean-Michel Zakoian

    2013-01-01

    The linear mixed causal and noncausal autoregressive processes often provide a better fit to economic and financial time series than the standard causal linear autoregressive processes. By considering the example of the noncausal Cauchy autoregressive process, we show that it might be explained by the special associated nonlinear causal dynamics. Indeed, this causal dynamics can include a unit root, bubble phenomena, or asymmetric cycles often observed on financial markets. The noncausal Cauchy...

  13. Evolution of the continental upper mantle : numerical modelling of thermo-chemical convection including partial melting

    NARCIS (Netherlands)

    de Smet, J.H.

    1999-01-01

    This thesis elaborates on the evolution of the continental upper mantle based on numerical modelling results. The descriptive and explanatory basis is formed by a numerical thermo-chemical convection model. The model evolution starts in the early Archaean about 4 billion years ago. The model follows

  14. Evolution of the continental upper mantle : numerical modelling of thermo-chemical convection including partial melting

    NARCIS (Netherlands)

    Smet, J.H. de

    1999-01-01

    This thesis elaborates on the evolution of the continental upper mantle based on numerical modelling results. The descriptive and explanatory basis is formed by a numerical thermo-chemical convection model. The model evolution starts in the early Archaean about 4 billion years ago. The model

  15. Filament winding cylinders. I - Process model

    Science.gov (United States)

    Lee, Soo-Yong; Springer, George S.

    1990-01-01

    A model was developed which describes the filament winding process of composite cylinders. The model relates the significant process variables such as winding speed, fiber tension, and applied temperature to the thermal, chemical and mechanical behavior of the composite cylinder and the mandrel. Based on the model, a user friendly code was written which can be used to calculate (1) the temperature in the cylinder and the mandrel, (2) the degree of cure and viscosity in the cylinder, (3) the fiber tensions and fiber positions, (4) the stresses and strains in the cylinder and in the mandrel, and (5) the void diameters in the cylinder.

  16. A Thermal Evolution Model of the Earth Including the Biosphere, Continental Growth and Mantle Hydration

    Science.gov (United States)

    Höning, D.; Spohn, T.

    2014-12-01

    By harvesting solar energy and converting it to chemical energy, photosynthetic life plays an important role in the energy budget of Earth [2]. This leads to alterations of chemical reservoirs eventually affecting the Earth's interior [4]. It has further been speculated [3] that the formation of continents may be a consequence of the evolution of life. A steady state model [1] suggests that the Earth without its biosphere would evolve to a steady state with a smaller continent coverage and a drier mantle than is observed today. We present a model including (i) parameterized thermal evolution, (ii) continental growth and destruction, and (iii) mantle water regassing and outgassing. The biosphere enhances the production rate of sediments which eventually are subducted. These sediments are assumed to (i) carry water to depth bound in stable mineral phases and (ii) have the potential to suppress shallow dewatering of the underlying sediments and crust due to their low permeability. We run a Monte Carlo simulation for various initial conditions and treat all those parameter combinations as successes which result in the fraction of continental crust coverage observed for present-day Earth. Finally, we simulate the evolution of an abiotic Earth using the same set of parameters but a reduced rate of continental weathering and erosion. Our results suggest that the origin and evolution of life could have stabilized the large continental surface area of the Earth and its wet mantle, leading to the relatively low mantle viscosity we observe at present. Without photosynthetic life on our planet, the Earth would be geodynamically less active due to a drier mantle, and would have a smaller fraction of continental coverage than observed today. References: [1] Höning, D., Hansen-Goos, H., Airo, A., Spohn, T., 2014. Biotic vs. abiotic Earth: A model for mantle hydration and continental coverage. Planetary and Space Science 98, 5-13. [2] Kleidon, A., 2010. Life, hierarchy, and the

  17. Planar optical waveguide based sandwich assay sensors and processes for the detection of biological targets including early detection of cancers

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Jennifer S [Santa Fe, NM; Swanson, Basil I [Los Alamos, NM; Shively, John E [Arcadia, CA; Li, Lin [Monrovia, CA

    2009-06-02

    An assay element is described that includes recognition ligands adapted for binding to carcinoembryonic antigen (CEA), bound to a film on a single-mode planar optical waveguide, the film being selected from a membrane, a polymerized bilayer membrane, and a self-assembled monolayer containing polyethylene glycol or polypropylene glycol groups. An assay process for detecting the presence of CEA is also described. The process includes injecting a possible CEA-containing sample into a sensor cell including the assay element, maintaining the sample within the sensor cell for a time sufficient for binding to occur between CEA present within the sample and the recognition ligands, injecting a solution including a reporter ligand into the sensor cell, and interrogating the sample within the sensor cell with excitation light from the waveguide. The excitation light is provided by an evanescent field of the single mode penetrating into the biological target-containing sample to a distance of less than about 200 nanometers from the waveguide, thereby exciting any bound reporter ligand within that distance and resulting in a detectable signal.

  18. Modeling Aspects of Activated Sludge Processes Part l l: Mathematical Process Modeling and Biokinetics of Activated Sludge Processes

    International Nuclear Information System (INIS)

    AbdElHaleem, H.S.; EI-Ahwany, A. H.; Ibrahim, H.I.; Ibrahim, G.

    2004-01-01

    Mathematical process modeling and biokinetics of the activated sludge process were reviewed considering different types of models. The task group models ASM1, ASM2 and ASM3, developed by Henze et al., were evaluated, considering the assumptions of each model and the different processes of which every model consists. It is revealed that ASM1 contains some defects that are avoided in ASM3. Based on homogeneity, models can be classified into homogeneous models, characterized by treating the activated sludge process as one phase; in this type of model the internal mass transfer inside the flocs is neglected, and hence the resulting kinetic parameters can be considered inaccurate. The other type is the heterogeneous model, which considers mass transfer operations in addition to the biochemical reaction processes; hence the resulting kinetic parameters can be considered more accurate than those of the homogeneous type.

  19. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage and the algorithm for developing an integrative model for it are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.

  20. From Business Value Model to Coordination Process Model

    Science.gov (United States)

    Fatemi, Hassan; van Sinderen, Marten; Wieringa, Roel

    The increased complexity of business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business and process perspectives, and for mutually aligning these perspectives. Business value modeling and coordination process modeling both are necessary for a good e-business design, but these activities have different goals and use different concepts. Nevertheless, the resulting models should be consistent with each other because they refer to the same system from different perspectives. Hence, checking the consistency between these models or producing one based on the other would be of high value. In this paper we discuss the issue of achieving consistency in multi-level e-business design and give guidelines to produce consistent coordination process models from business value models in a stepwise manner.

  1. Numerical modeling of atmospheric washout processes

    International Nuclear Information System (INIS)

    Bayer, D.; Beheng, K.D.; Herbert, F.

    1987-01-01

    For the washout of particles from the atmosphere by clouds and rain, one has to distinguish between processes which act in the first phase of cloud development, when condensation nuclei build up in saturated air (Nucleation Aerosol Scavenging, NAS), and those which act during subsequent cloud development. In the second case particles are taken up by cloud droplets or by falling raindrops via collision (Collision Aerosol Scavenging, CAS). The physics of both processes is described. For the CAS process a numerical model is presented. The report contains a documentation of the mathematical equations and the computer programs (FORTRAN). (KW) [de]
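
    As background to the CAS mechanism described above, the collision scavenging of particles of diameter d by falling drops is commonly summarised by a scavenging coefficient; a generic textbook form (not necessarily the exact formulation documented in the report) is

        \Lambda(d) = \int_0^\infty \frac{\pi}{4}\, D^{2}\, U_t(D)\, E(D,d)\, N(D)\, \mathrm{d}D ,

    where D is the drop diameter, U_t(D) the drop terminal velocity, E(D,d) the collision (collection) efficiency and N(D) the drop size distribution; the particle concentration then decays as dc/dt = -\Lambda c.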

  2. In-situ biogas upgrading process: modeling and simulations aspects

    DEFF Research Database (Denmark)

    Lovato, Giovanna; Alvarado-Morales, Merlin; Kovalovszki, Adam

    2017-01-01

    Biogas upgrading processes by in-situ hydrogen (H2) injection are still challenging and could benefit from a mathematical model to predict system performance. Therefore, a previous model on anaerobic digestion was updated and expanded to include the effect of H2 injection into the liquid phase of...

  3. Numerical modelling of seawater intrusion in Shenzhen (China) using a 3D density-dependent model including tidal effects

    Science.gov (United States)

    Lu, Wei; Yang, Qingchun; Martín, Jordi D.; Juncosa, Ricardo

    2013-04-01

    During the 1990s, groundwater overexploitation resulted in seawater intrusion in the coastal aquifer of Shenzhen city, China. Although water supply facilities have been improved and have alleviated seawater intrusion in recent years, groundwater overexploitation is still of great concern in some local areas. In this work we present a three-dimensional density-dependent numerical model developed with the FEFLOW code, which is aimed at simulating the extent of seawater intrusion while including tidal effects and different groundwater pumping scenarios. Model calibration, using hydraulic heads and reported chloride concentrations, has been performed based on the data from 14 boreholes, which were monitored from May 2008 to December 2009. A fairly good fit between the observed and computed values was obtained by a manual trial-and-error method. Model predictions have been carried out 3 years ahead with the calibrated model, taking into account high, medium and low tide levels and different groundwater exploitation schemes. The model results show that tide-induced seawater intrusion significantly affects the groundwater levels and concentrations near the estuary of the Dasha river, which implies that an important hydraulic connection exists between this river and the groundwater, even considering that some anti-seepage measures were taken in the river bed. Two pumping scenarios were considered in the calibrated model in order to predict the future changes in water levels and chloride concentration. The numerical results reveal a decreasing tendency of seawater intrusion if groundwater exploitation does not exceed an upper bound of about 1.32 × 10^4 m3/d. The model results also provide insights for controlling seawater intrusion in such coastal aquifer systems.

  4. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A

    1997-01-01

    This book is a completely updated, greatly expanded version of the previously successful volume by the author. The Second Edition includes new results and data, and discusses a unified framework and rationale for designing and evaluating image processing algorithms.Written from the viewpoint that image processing supports remote sensing science, this book describes physical models for remote sensing phenomenology and sensors and how they contribute to models for remote-sensing data. The text then presents image processing techniques and interprets them in terms of these models. Spectral, s

  5. Enzymatic corn wet milling: engineering process and cost model.

    Science.gov (United States)

    Ramírez, Edna C; Johnston, David B; McAloon, Andrew J; Singh, Vijay

    2009-01-21

    Enzymatic corn wet milling (E-milling) is a process derived from conventional wet milling for the recovery and purification of starch and co-products, using proteases to eliminate the need for sulfites and decrease the steeping time. In 2006, the total starch production in the USA by conventional wet milling equaled 23 billion kilograms, including modified starches and starches used for sweeteners and ethanol production [1]. Process engineering and cost models for an E-milling process have been developed for a processing plant with a capacity of 2.54 million kg of corn per day (100,000 bu/day). These models are based on the previously published models for a traditional wet milling plant with the same capacity. The E-milling process includes grain cleaning, pretreatment, enzymatic treatment, germ separation and recovery, fiber separation and recovery, gluten separation and recovery and starch separation. Information for the development of the conventional models was obtained from a variety of technical sources including commercial wet milling companies, industry experts and equipment suppliers. Additional information for the present models was obtained from our own experience with the development of the E-milling process and trials in the laboratory and at the pilot plant scale. The models were developed using process and cost simulation software (SuperPro Designer) and include processing information such as composition and flow rates of the various process streams, descriptions of the various unit operations and detailed breakdowns of the operating and capital cost of the facility. Based on the information from the model, we can estimate the cost of production per kilogram of starch using the input prices for corn, enzyme and other wet milling co-products. The work presented here describes the E-milling process and compares the process, the operation and costs with the conventional process. The E-milling process was found to be cost competitive with the conventional
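
    The cost-per-kilogram estimate described above is, at its core, a mass-and-money balance; the sketch below illustrates only that arithmetic. All prices, yields and flows are hypothetical placeholders, not values taken from the SuperPro Designer models.

        # Hypothetical mass-and-money balance for the cost of starch production (illustration only).
        corn_in_kg_per_day = 2.54e6          # plant capacity quoted in the abstract
        starch_yield = 0.62                  # assumed kg starch per kg corn (placeholder)
        corn_price = 0.15                    # $/kg corn (placeholder)
        enzyme_cost = 0.004                  # $/kg corn processed (placeholder)
        other_operating = 0.02               # utilities, labour, capital charge, $/kg corn (placeholder)
        coproduct_credit = 0.03              # germ, fiber and gluten credits, $/kg corn (placeholder)

        cost_per_kg_corn = corn_price + enzyme_cost + other_operating - coproduct_credit
        starch_out_kg_per_day = corn_in_kg_per_day * starch_yield
        cost_per_kg_starch = cost_per_kg_corn * corn_in_kg_per_day / starch_out_kg_per_day

        print(f"{cost_per_kg_starch:.3f} $/kg starch")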

  6. Enzymatic corn wet milling: engineering process and cost model

    Directory of Open Access Journals (Sweden)

    McAloon Andrew J

    2009-01-01

    Full Text Available Abstract Background Enzymatic corn wet milling (E-milling) is a process derived from conventional wet milling for the recovery and purification of starch and co-products, using proteases to eliminate the need for sulfites and decrease the steeping time. In 2006, the total starch production in the USA by conventional wet milling equaled 23 billion kilograms, including modified starches and starches used for sweeteners and ethanol production [1]. Process engineering and cost models for an E-milling process have been developed for a processing plant with a capacity of 2.54 million kg of corn per day (100,000 bu/day). These models are based on the previously published models for a traditional wet milling plant with the same capacity. The E-milling process includes grain cleaning, pretreatment, enzymatic treatment, germ separation and recovery, fiber separation and recovery, gluten separation and recovery and starch separation. Information for the development of the conventional models was obtained from a variety of technical sources including commercial wet milling companies, industry experts and equipment suppliers. Additional information for the present models was obtained from our own experience with the development of the E-milling process and trials in the laboratory and at the pilot plant scale. The models were developed using process and cost simulation software (SuperPro Designer®) and include processing information such as composition and flow rates of the various process streams, descriptions of the various unit operations and detailed breakdowns of the operating and capital cost of the facility. Results Based on the information from the model, we can estimate the cost of production per kilogram of starch using the input prices for corn, enzyme and other wet milling co-products. The work presented here describes the E-milling process and compares the process, the operation and costs with the conventional process. Conclusion The E-milling process

  7. Reverse Osmosis Processing of Organic Model Compounds and Fermentation Broths

    Science.gov (United States)

    2006-04-01

    ...key species found in the fermentation broth: ethanol, butanol, acetic acid, oxalic acid, lactic acid, and butyric acid. Correlations of the rejection... [Postprint AFRL-ML-TY-TP-2007-4545; published as R. Diltz et al., "Reverse osmosis processing of organic model compounds and fermentation broths," Bioresource Technology 98 (2007) 686–695.]

  8. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
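
    The central idea of the time-transformation approach can be written compactly. Assuming, as a sketch of the general construction rather than the authors' exact parameterisation, a monotone operational-time map u = g(t; \theta) on which the process is homogeneous with intensity matrix Q, the calendar-time transition probabilities and intensities are

        P(t_1, t_2) = \exp\bigl\{ \bigl[\, g(t_2;\theta) - g(t_1;\theta) \,\bigr]\, Q \bigr\}, \qquad q_{rs}(t) = q_{rs}\, g'(t;\theta),

    and the panel-data likelihood is built from these transition probabilities, with \theta and Q estimated jointly (e.g., by Fisher scoring as in the paper).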

  9. Software Engineering Laboratory (SEL) cleanroom process model

    Science.gov (United States)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  10. Dynamic experiments with high bisphenol-A concentrations modelled with an ASM model extended to include a separate XOC degrading microorganism.

    Science.gov (United States)

    Lindblom, Erik; Press-Kristensen, Kåre; Vanrolleghem, Peter A; Mikkelsen, Peter S; Henze, Mogens

    2009-07-01

    The perspective of this work is to develop a model, which can be used to better understand and optimize wastewater treatment plants that are able to remove xenobiotic organic compounds (XOCs) in combination with removal of traditional pollutants. Results from dynamic experiments conducted with the endocrine disrupting XOC bisphenol-A (BPA) in an activated sludge process with real wastewater were used to hypothesize an ASM-based process model including aerobic growth of a specific BPA-degrading microorganism and sorption of BPA to sludge. A parameter estimation method was developed, which simultaneously utilizes steady-state background concentrations and dynamic step response data, as well as conceptual simplifications of the plant configuration. Validation results show that biodegradation of BPA is sensitive to operational conditions before and during the experiment and that the proposed model structure is capable of capturing important characteristics of the observed BPA removal, thus increasing the potential for generalizing knowledge obtained from plant specific experiments.
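
    A minimal sketch of the kind of extension described is given below: two added state variables (dissolved BPA and a specific BPA-degrading biomass) with Monod growth plus linear sorption to sludge, written on top of an otherwise unspecified ASM background. The rate expressions, parameter values and the use of scipy are illustrative assumptions, not the calibrated model from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative parameters (assumed, not the paper's calibrated values)
        mu_max = 0.05     # 1/h, max growth rate of the BPA degrader
        K_S    = 0.02     # g BPA / m3, half-saturation constant
        Y      = 0.4      # g biomass COD / g BPA, yield
        b      = 0.005    # 1/h, decay rate
        k_d    = 0.5      # m3 / kg TSS, linear sorption partition coefficient
        X_TSS  = 3.0      # kg TSS / m3, sludge concentration (held constant here)

        def rhs(t, y):
            S_BPA, X_BPA = y                      # dissolved BPA and specific degrader biomass
            growth = mu_max * S_BPA / (K_S + S_BPA) * X_BPA
            dS = -growth / Y                      # aerobic degradation of dissolved BPA
            dX = growth - b * X_BPA               # growth minus decay of the degrader
            return [dS, dX]

        sol = solve_ivp(rhs, (0.0, 48.0), [0.1, 0.01], max_step=0.1)
        S_total = sol.y[0] * (1.0 + k_d * X_TSS)  # total BPA if the sorbed fraction stays at linear equilibrium
        print(sol.y[0][-1], S_total[-1])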

  11. Analysis of Two Stroke Marine Diesel Engine Operation Including Turbocharger Cut-Out by Using a Zero-Dimensional Model

    Directory of Open Access Journals (Sweden)

    Cong Guan

    2015-06-01

    Full Text Available In this article, the operation of a large two-stroke marine diesel engine including various cases with turbocharger cut-out was thoroughly investigated by using a modular zero-dimensional engine model built in MATLAB/Simulink environment. The model was developed by using as a basis an in-house modular mean value engine model, in which the existing cylinder block was replaced by a more detailed one that is capable of representing the scavenging ports-cylinder-exhaust valve processes. Simulation of the engine operation at steady state conditions was performed and the derived engine performance parameters were compared with the respective values obtained by the engine shop trials. The investigation of engine operation under turbocharger cut-out conditions in the region from 10% to 50% load was carried out and the influence of turbocharger cut-out on engine performance including the in-cylinder parameters was comprehensively studied. The recommended schedule for the combination of the turbocharger cut-out and blower activation was discussed for the engine operation under part load conditions. Finally, the influence of engine operating strategies on the annual fuel savings, CO2 emissions reduction and blower operating hours for a Panamax container ship operating at slow steaming conditions is presented and discussed.

  12. Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model

    Science.gov (United States)

    Li, Weidong; Rukavina, Paul

    2012-01-01

    In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…

  13. ETM documentation update – including modelling conventions and manual for software tools

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    , it summarises the work done during 2013, and it also contains presentations for promotion of fusion as a future element in the electricity generation mix and presentations for the modelling community concerning model development and model documentation – in particular for TIAM collaboration workshops....

  14. Causally nonseparable processes admitting a causal model

    International Nuclear Information System (INIS)

    Feix, Adrien; Araújo, Mateus; Brukner, Caslav

    2016-01-01

    A recent framework of quantum theory with no global causal order predicts the existence of ‘causally nonseparable’ processes. Some of these processes produce correlations incompatible with any causal order (they violate so-called ‘causal inequalities’ analogous to Bell inequalities ) while others do not (they admit a ‘causal model’ analogous to a local model ). Here we show for the first time that bipartite causally nonseparable processes with a causal model exist, and give evidence that they have no clear physical interpretation. We also provide an algorithm to generate processes of this kind and show that they have nonzero measure in the set of all processes. We demonstrate the existence of processes which stop violating causal inequalities but are still causally nonseparable when mixed with a certain amount of ‘white noise’. This is reminiscent of the behavior of Werner states in the context of entanglement and nonlocality. Finally, we provide numerical evidence for the existence of causally nonseparable processes which have a causal model even when extended with an entangled state shared among the parties. (paper)

  15. Stochastic differential equation model to Prendiville processes

    International Nuclear Information System (INIS)

    Granita; Bahar, Arifah

    2015-01-01

    The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work started from the forward Kolmogorov equation of the continuous time Markov chain of the Prendiville process, which was then formulated in the form of a central-difference approximation. The approximation was then used in the Fokker-Planck equation in relation to the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation; therefore, the mean and variance functions of the Prendiville process could be easily found from the explicit solution
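
    For orientation, the diffusion approximation of a birth-death chain with birth rate \lambda_n and death rate \mu_n has drift \lambda_n - \mu_n and squared diffusion \lambda_n + \mu_n. Applied to a Prendiville-type chain confined to [N_1, N_2] with linearly decreasing growth, a representative SDE (a sketch of the standard construction, not necessarily the authors' notation) is

        dX_t = \bigl[\alpha\,(N_2 - X_t) - \beta\,(X_t - N_1)\bigr]\,dt
               + \sqrt{\alpha\,(N_2 - X_t) + \beta\,(X_t - N_1)}\;dW_t .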

  16. Stochastic differential equation model to Prendiville processes

    Energy Technology Data Exchange (ETDEWEB)

    Granita, E-mail: granitafc@gmail.com [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); Bahar, Arifah [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); UTM Center for Industrial & Applied Mathematics (UTM-CIAM) (Malaysia)

    2015-10-22

    The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work started from the forward Kolmogorov equation of the continuous time Markov chain of the Prendiville process, which was then formulated in the form of a central-difference approximation. The approximation was then used in the Fokker-Planck equation in relation to the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation; therefore, the mean and variance functions of the Prendiville process could be easily found from the explicit solution.

  17. The Probability Model of Expectation Disconfirmation Process

    Directory of Open Access Journals (Sweden)

    Hui-Hsin HUANG

    2015-06-01

    Full Text Available This paper proposes a probability model to explore the dynamic process of customer satisfaction. Based on expectation disconfirmation theory, satisfaction is constructed from the customer's expectation before the buying behavior and the perceived performance after purchase. An experiment is designed to measure expectation disconfirmation effects, and the collected data are used to estimate overall satisfaction and to calibrate the model. The results show a good fit between the model and the real data. The model has applications in business marketing for managing relationship satisfaction.

  18. Serverification of Molecular Modeling Applications: The Rosetta Online Server That Includes Everyone (ROSIE)

    Science.gov (United States)

    Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju

    2013-01-01

    The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507

  19. Dispersive processes in models of regional radionuclide migration. Technical memorandum

    International Nuclear Information System (INIS)

    Evenson, D.E.; Dettinger, M.D.

    1980-05-01

    Three broad areas of concern in the development of aquifer scale transport models will be local scale diffusion and dispersion processes, regional scale dispersion processes, and numerical problems associated with the advection-dispersion equation. Local scale dispersion processes are fairly well understood and accessible to observation. These processes will generally be dominated in large scale systems by regional processes, or macro-dispersion. Macro-dispersion is primarily the result of large scale heterogeneities in aquifer properties. In addition, the effects of many modeling approximations are often included in the process. Because difficulties arise in parameterization of this large scale phenomenon, parameterization should be based on field measurements made at the same scale as the transport process of interest or else partially circumvented through the application of a probabilistic advection model. Other problems associated with numerical transport models include difficulties with conservation of mass, stability, numerical dissipation, overshoot, flexibility, and efficiency. We recommend the random-walk model formulation for Lawrence Livermore Laboratory's purposes as the most flexible, accurate and relatively efficient modeling approach that overcomes these difficulties

  20. A neurolinguistic model of grammatical construction processing.

    Science.gov (United States)

    Dominey, Peter Ford; Hoen, Michel; Inui, Toshio

    2006-12-01

    One of the functions of everyday human language is to communicate meaning. Thus, when one hears or reads the sentence, "John gave a book to Mary," some aspect of an event concerning the transfer of possession of a book from John to Mary is (hopefully) transmitted. One theoretical approach to language referred to as construction grammar emphasizes this link between sentence structure and meaning in the form of grammatical constructions. The objective of the current research is to (1) outline a functional description of grammatical construction processing based on principles of psycholinguistics, (2) develop a model of how these functions can be implemented in human neurophysiology, and then (3) demonstrate the feasibility of the resulting model in processing languages of typologically diverse natures, that is, English, French, and Japanese. In this context, particular interest will be directed toward the processing of novel compositional structure of relative phrases. The simulation results are discussed in the context of recent neurophysiological studies of language processing.

  1. A process algebra model of QED

    International Nuclear Information System (INIS)

    Sulis, William

    2016-01-01

    The process algebra approach to quantum mechanics posits a finite, discrete, determinate ontology of primitive events which are generated by processes (in the sense of Whitehead). In this ontology, primitive events serve as elements of an emergent space-time and of emergent fundamental particles and fields. Each process generates a set of primitive elements, using only local information, causally propagated as a discrete wave, forming a causal space termed a causal tapestry. Each causal tapestry forms a discrete and finite sampling of an emergent causal manifold (space-time) M and emergent wave function. Interactions between processes are described by a process algebra which possesses 8 commutative operations (sums and products) together with a non-commutative concatenation operator (transitions). The process algebra possesses a representation via nondeterministic combinatorial games. The process algebra connects to quantum mechanics through the set valued process and configuration space covering maps, which associate each causal tapestry with sets of wave functions over M. Probabilities emerge from interactions between processes. The process algebra model has been shown to reproduce many features of the theory of non-relativistic scalar particles to a high degree of accuracy, without paradox or divergences. This paper extends the approach to a semi-classical form of quantum electrodynamics. (paper)

  2. Understanding uncertainty in process-based hydrological models

    Science.gov (United States)

    Clark, M. P.; Kavetski, D.; Slater, A. G.; Newman, A. J.; Marks, D. G.; Landry, C.; Lundquist, J. D.; Rupp, D. E.; Nijssen, B.

    2013-12-01

    Building an environmental model requires making a series of decisions regarding the appropriate representation of natural processes. While some of these decisions can already be based on well-established physical understanding, gaps in our current understanding of environmental dynamics, combined with incomplete knowledge of properties and boundary conditions of most environmental systems, make many important modeling decisions far more ambiguous. There is consequently little agreement regarding what a 'correct' model structure is, especially at relatively larger spatial scales such as catchments and beyond. In current practice, faced with such a range of decisions, different modelers will generally make different modeling decisions, often on an ad hoc basis, based on their balancing of process understanding, the data available to evaluate the model, the purpose of the modeling exercise, and their familiarity with or investment in an existing model infrastructure. This presentation describes development and application of multiple-hypothesis models to evaluate process-based hydrologic models. Our numerical model uses robust solutions of the hydrology and thermodynamic governing equations as the structural core, and incorporates multiple options to represent the impact of different modeling decisions, including multiple options for model parameterizations (e.g., below-canopy wind speed, thermal conductivity, storage and transmission of liquid water through soil, etc.), as well as multiple options for model architecture, that is, the coupling and organization of different model components (e.g., representations of sub-grid variability and hydrologic connectivity, coupling with groundwater, etc.). Application of this modeling framework across a collection of different research basins demonstrates that differences among model parameterizations are often overwhelmed by differences among equally-plausible model parameter sets, while differences in model architecture lead

  3. Modelling the effect of acoustic waves on the thermodynamics and kinetics of phase transformation in a solution: Including mass transportation

    Science.gov (United States)

    Haqshenas, S. R.; Ford, I. J.; Saffari, N.

    2018-01-01

    Effects of acoustic waves on a phase transformation in a metastable phase were investigated in our previous work [S. R. Haqshenas, I. J. Ford, and N. Saffari, "Modelling the effect of acoustic waves on nucleation," J. Chem. Phys. 145, 024315 (2016)]. We developed a non-equimolar dividing surface cluster model and employed it to determine the thermodynamics and kinetics of crystallisation induced by an acoustic field in a mass-conserved system. In the present work, we developed a master equation based on a hybrid Szilard-Fokker-Planck model, which accounts for mass transportation due to acoustic waves. This model can determine the kinetics of nucleation and the early stage of growth of clusters including the Ostwald ripening phenomenon. It was solved numerically to calculate the kinetics of an isothermal sonocrystallisation process in a system with mass transportation. The simulation results show that the effect of mass transportation for different excitations depends on the waveform as well as the imposed boundary conditions and tends to be noticeable in the case of shock waves. The derivations are generic and can be used with any acoustic source and waveform.
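
    The cluster-kinetics backbone of such a master equation is, in its simplest Szilard/Becker-Döring form (shown here as a generic sketch without the acoustic and mass-transport terms added in the paper),

        \frac{dn_i}{dt} = J_{i-1} - J_i, \qquad J_i = k_i^{+}\, n_1 n_i - k_{i+1}^{-}\, n_{i+1}, \quad i \ge 2,

    with monomer conservation \frac{dn_1}{dt} = -2 J_1 - \sum_{i \ge 2} J_i in a mass-conserved system. Per the abstract, the hybrid model of the paper extends a backbone of this kind with a Fokker-Planck treatment and acoustically driven mass transportation.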

  4. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  5. Retort process modelling for Indian traditional foods.

    Science.gov (United States)

    Gokhale, S V; Lele, S S

    2014-11-01

    Indian traditional staple and snack food is typically a heterogeneous recipe that incorporates varieties of vegetables, lentils and other ingredients. Modelling the retorting process of multilayer pouch-packed Indian food was achieved using a lumped-parameter approach. A unified model is proposed to estimate the cold point temperature. Initial process conditions, retort temperature and % solid content were the independent variables with significant effects. A model was developed using a combination of vegetable solids and water, which was then validated using four traditional Indian vegetarian products: Pulav (steamed rice with vegetables), Sambar (south Indian style curry containing mixed vegetables and lentils), Gajar Halawa (carrot based sweet product) and Upama (wheat based snack product). The predicted and experimental values of the temperature profile matched within ±10 % error, which is a good match considering that the food is a multi-component system. Thus the model will be useful as a tool to reduce the number of trials required to optimize retorting of various Indian traditional vegetarian foods.
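
    For context, the classical lumped-parameter description of the cold-point temperature during retorting is the Ball heat-penetration form (given here as the standard textbook expression and an assumed functional form; the unified model in the paper additionally accounts for solids content and initial conditions),

        T(t) = T_R - j\,(T_R - T_0)\,10^{-t/f_h},

    where T_R is the retort temperature, T_0 the initial product temperature, f_h the heating rate index and j the lag factor.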

  6. A Mechanical Model of Brownian Motion for One Massive Particle Including Slow Light Particles

    Science.gov (United States)

    Liang, Song

    2018-01-01

    We provide a connection between Brownian motion and a classical mechanical system. Precisely, we consider a system of one massive particle interacting with an ideal gas, evolved according to non-random mechanical principles, via interaction potentials, without any assumption requiring that the initial velocities of the environmental particles should be restricted to be "fast enough". We prove the convergence of the (position, velocity)-process of the massive particle under a certain scaling limit, such that the mass of the environmental particles converges to 0 while the density and the velocities of them go to infinity, and give the precise expression of the limiting process, a diffusion process.

  7. Internet User Behaviour Model Discovery Process

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The Academy of Economic Studies has more than 45000 students and about 5000 computers with Internet access which are connected to the AES network. Students can access the Internet on these computers through a proxy server which stores information about the way the Internet is accessed. In this paper, we describe the process of discovering Internet user behaviour models by analyzing proxy server raw data, and we emphasize the importance of such models for the e-learning environment.
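
    As a flavour of the first step of such a discovery process, the sketch below aggregates a Squid-style proxy access log into per-user request counts and most-visited hosts. The log format, field positions and file name are assumptions for illustration, since the paper does not specify the proxy software's exact output.

        from collections import Counter, defaultdict
        from urllib.parse import urlparse

        per_user_requests = Counter()
        per_user_hosts = defaultdict(Counter)

        # Assumed whitespace-separated format: timestamp elapsed client_ip code/status bytes method url user ...
        with open("access.log") as fh:
            for line in fh:
                fields = line.split()
                if len(fields) < 8:
                    continue                      # skip malformed lines
                url, user = fields[6], fields[7]
                host = urlparse(url).netloc or url
                per_user_requests[user] += 1
                per_user_hosts[user][host] += 1

        # Report the ten heaviest users and their most visited host
        for user, count in per_user_requests.most_common(10):
            top_host, top_count = per_user_hosts[user].most_common(1)[0]
            print(user, count, top_host, top_count)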

  8. Process model development for optimization of forged disk manufacturing processes

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, C.E.; Gunasekera, J.S. [Ohio Univ., Athens, OH (United States). Center for Advanced Materials Processing; Malas, J.C. [Wright Labs., Wright Patterson AFB, OH (United States). Materials Directorate

    1997-12-31

    This paper addresses the development of a system which will enable the optimization of an entire processing sequence for a forged part. Typically such a sequence may involve several stages and alternative routes of manufacturing a given part. It is important that such a system be optimized globally (rather than locally, as is the current practice) in order to achieve improvements in affordability, producibility, and performance. This paper demonstrates the development of a simplified forging model, discusses techniques for searching and reducing a very large design space, and presents an objective function to evaluate the cost of a design sequence.

  9. Transverse Crack Modeling and Validation in Rotor Systems, Including Thermal Effects

    Directory of Open Access Journals (Sweden)

    N. Bachschmid

    2003-01-01

    Full Text Available This article describes a model that allows the simulation of the static behavior of a transverse crack in a horizontal rotor under the action of weight and other possible static loads, and of the dynamic behavior of a cracked rotating shaft. The crack breathes; that is, the mechanism of the crack's opening and closing is ruled by the stress exerted on the cracked section by the external loads. In a rotor, the stresses are time-dependent with a period equal to the period of rotation; thus, the crack periodically breathes. An original, simplified model allows cracks of various shapes to be modeled and thermal stresses to be taken into account, as they may influence the opening and closing mechanism. The proposed method was validated by using two criteria. First, the crack's breathing mechanism simulated by the model was compared with the results obtained by a nonlinear, three-dimensional finite element model calculation, and good agreement in the results was observed. Then the proposed model allowed the development of the equivalent cracked beam. The results of this model were compared with those obtained by the three-dimensional finite element model. Also in this case, there was good agreement in the results.

  10. A model for firm-specific strategic wisdom : including illustrations and 49 guiding questions

    NARCIS (Netherlands)

    van Straten, Roeland Peter

    2017-01-01

    This PhD thesis provides an answer to the question ‘How may one think strategically’. It does so by presenting a new prescriptive ‘Model for Firm-Specific Strategic Wisdom’. This Model aims to guide any individual strategist in his or her thinking from a state of firm-specific ‘ignorance’ to a state

  11. Complete Loss and Thermal Model of Power Semiconductors Including Device Rating Information

    DEFF Research Database (Denmark)

    Ma, Ke; Bahman, Amir Sajjad; Beczkowski, Szymon

    2015-01-01

    models, only the electrical loadings are focused on and treated as design variables, while the device rating is normally pre-defined by experience with limited design flexibility. Consequently, a more complete loss and thermal model is proposed in this paper, which takes into account not only the electrical...

  12. Numerical models of single- and double-negative metamaterials including viscous and thermal losses

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Sánchez-Dehesa, José

    2017-01-01

    detailed understanding on how viscous and thermal losses affect the setups at different frequencies. The modeling of a simpler single-negative metamaterial also broadens this overview. Both setups have been modeled with quadratic BEM meshes. Each sample, scaled at two different sizes, has been represented...

  13. Gasification of biomass in a fixed bed downdraft gasifier--a realistic model including tar.

    Science.gov (United States)

    Barman, Niladri Sekhar; Ghosh, Sudip; De, Sudipta

    2012-03-01

    This study presents a model for fixed bed downdraft biomass gasifiers that considers tar as one of the gasification products. A representative tar composition, along with its mole fractions as available in the literature, was used as an input parameter within the model. The study used an equilibrium approach for the applicable gasification reactions and also considered possible deviations from equilibrium to further upgrade the equilibrium model so as to validate a range of reported experimental results. A heat balance was applied to predict the gasification temperature, and the predicted values were compared with results reported in the literature. A comparative study was made with some reference models available in the literature and also with experimental results reported in the literature. Finally, the variation of gasifier performance predicted by this validated model for different air-fuel ratios and moisture contents is also discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
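
    The equilibrium formulation referred to above is typically written as a single global reaction; a generic form with tar lumped as a pseudo-species CH_xO_y (a sketch of the usual setup, with the specific tar composition taken from the literature the paper cites) is

        \mathrm{CH}_a\mathrm{O}_b\mathrm{N}_c + w\,\mathrm{H_2O} + m\,(\mathrm{O_2} + 3.76\,\mathrm{N_2}) \rightarrow
            n_{\mathrm{H_2}}\mathrm{H_2} + n_{\mathrm{CO}}\mathrm{CO} + n_{\mathrm{CO_2}}\mathrm{CO_2}
            + n_{\mathrm{H_2O}}\mathrm{H_2O} + n_{\mathrm{CH_4}}\mathrm{CH_4} + n_{\mathrm{tar}}\mathrm{CH}_x\mathrm{O}_y
            + \Bigl(\tfrac{c}{2} + 3.76\,m\Bigr)\mathrm{N_2},

    closed by the C, H and O balances plus the equilibrium constants of the water-gas shift and methanation reactions, with empirical correction factors wherever the equilibrium assumption is relaxed.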

  14. A statistical model for estimation of fish density including correlation in size, space, time and between species from research survey data

    DEFF Research Database (Denmark)

    Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter

    2014-01-01

    Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...

  15. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    International Nuclear Information System (INIS)

    Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.

    2009-11-01

    the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as it is measured for some outcrops and most lineament maps, is definitely worth being investigated as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability as an admissible variation of model parameters or as the division of the site into subdomains with distinct DFN models is a critical point of the modeling effort. Moreover, the DFN model encompasses a part of uncertainty, due to inherent data uncertainties and sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data, including recordings of fracture intercept positions, pole orientations and relative uncertainties, are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect, locally, the main characteristics of the geology and fracturing properties. From that

  16. A viscoplastic model including anisotropic damage for the time dependent behaviour of rock

    Science.gov (United States)

    Pellet, F.; Hajdu, A.; Deleruyelle, F.; Besnus, F.

    2005-08-01

    This paper presents a new constitutive model for the time dependent mechanical behaviour of rock which takes into account both viscoplastic behaviour and evolution of damage with respect to time. This model is built by associating a viscoplastic constitutive law to the damage theory. The main characteristics of this model are the account of a viscoplastic volumetric strain (i.e. contractancy and dilatancy) as well as the anisotropy of damage. The latter is described by a second rank tensor. Using this model, it is possible to predict delayed rupture by determining time to failure, in creep tests for example. The identification of the model parameters is based on experiments such as creep tests, relaxation tests and quasi-static tests. The physical meaning of these parameters is discussed and comparisons with lab tests are presented. The ability of the model to reproduce the delayed failure observed in tertiary creep is demonstrated as well as the sensitivity of the mechanical response to the rate of loading. The model could be used to simulate the evolution of the excavated damage zone around underground openings.

  17. Potential transformation of trace species including aircraft exhaust in a cloud environment. The 'Chedrom model'

    Energy Technology Data Exchange (ETDEWEB)

    Ozolin, Y.E.; Karol, I.L. [Main Geophysical Observatory, St. Petersburg (Russian Federation); Ramaroson, R. [Office National d'Etudes et de Recherches Aerospatiales (ONERA), 92 - Chatillon (France)

    1997-12-31

    A box model for coupled gaseous and aqueous phases is used for a sensitivity study of the potential transformation of trace gases in a cloud environment. The rate of this transformation decreases as the pH in the droplets decreases, as the photodissociation rates inside the cloud decrease, and as the droplet size increases. Model calculations show the potential formation of H2O2 in the aqueous phase and the transformation of gaseous HNO3 into NOx in a cloud. The model is applied to explore the evolution of aircraft exhaust in a plume inside a cloud. (author) 10 refs.
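
    The CHEDROM chemistry itself is not given in the abstract. The sketch below only illustrates the generic structure of such a coupled gas/aqueous box model for a single trace species, with Henry's-law partitioning into droplets and a first-order aqueous-phase sink; all rate constants and the liquid water content are arbitrary placeholder values, not parameters of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Placeholder parameters (illustrative only)
        k_mt   = 5e-3   # gas <-> droplet mass-transfer rate, 1/s
        H_star = 1e4    # dimensionless effective Henry's-law constant (C_aq/C_gas at equilibrium)
        k_aq   = 1e-4   # first-order aqueous-phase reaction rate, 1/s
        lwc    = 3e-7   # liquid water volume fraction of the cloud (dimensionless)

        def box(t, y):
            c_gas, c_aq = y                          # c_gas per litre of air, c_aq per litre of water
            uptake = k_mt * (c_gas - c_aq / H_star)  # net transfer into the droplets
            return [-lwc * uptake,                   # gas phase loses what the droplets gain
                    uptake - k_aq * c_aq]            # aqueous phase gains uptake, loses to reaction

        sol = solve_ivp(box, (0.0, 3600.0), [1.0, 0.0], max_step=10.0)
        print("gas-phase fraction remaining after 1 h:", sol.y[0, -1])

    Larger droplets would enter such a model through a smaller mass-transfer rate, which is one way to read the droplet-size dependence reported above.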

  18. A Two-Dimensional Modeling Procedure to Estimate the Loss Equivalent Resistance Including the Saturation Effect

    Directory of Open Access Journals (Sweden)

    Rosa Ana Salas

    2013-11-01

    We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in the time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a 2D Finite Element Method, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and saturation regions. Excellent agreement is found between the experimental data and the computational results.

  19. Model Identification of Integrated ARMA Processes

    Science.gov (United States)

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  20. Modeling as a Decision-Making Process

    Science.gov (United States)

    Bleiler-Baxter, Sarah K.; Stephens, D. Christopher; Baxter, Wesley A.; Barlow, Angela T.

    2017-01-01

    The goal in this article is to support teachers in better understanding what it means to model with mathematics by focusing on three key decision-making processes: Simplification, Relationship Mapping, and Situation Analysis. The authors use the Theme Park task to help teachers develop a vision of how students engage in these three decision-making…

  1. Kinetics and modeling of anaerobic digestion process

    DEFF Research Database (Denmark)

    Gavala, Hariklia N.; Angelidaki, Irini; Ahring, Birgitte Kiær

    2003-01-01

    Anaerobic digestion modeling started in the early 1970s when the need for design and efficient operation of anaerobic systems became evident. At that time not only was the knowledge about the complex process of anaerobic digestion inadequate but also there were computational limitations. Thus...

  2. Querying Business Process Models with VMQL

    DEFF Research Database (Denmark)

    Störrle, Harald; Acretoaie, Vlad

    2013-01-01

    . In this paper, we apply VMQL to the Business Process Modeling Notation (BPMN) to evaluate the second claim. We explore the adaptations required, and re-evaluate the usability of VMQL in this context. We find similar results to earlier work, thus both supporting our claims and establishing the usability of VMQL...

  3. Numerical modeling and simulation in various processes

    Directory of Open Access Journals (Sweden)

    Eliza Consuela ISBĂŞOIU

    2011-12-01

    Economic modeling offers the manager rigour in his actions and multiple ways of connecting existing resources with the objectives pursued over a given period of time, allowing better and faster thinking and decision making without distorting reality.

  4. Aligning Grammatical Theories and Language Processing Models

    Science.gov (United States)

    Lewis, Shevaun; Phillips, Colin

    2015-01-01

    We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…

  5. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  6. Modelling the Colour of Strawberry Spread During Storage, Including Effects of Technical Variations

    Directory of Open Access Journals (Sweden)

    Kadivec Mirta

    2016-12-01

    The colour of freshly processed strawberry spread changes relatively rapidly from a bright red to a dull red, which then makes its appearance generally less acceptable for consumers. The colours of strawberry spreads produced under several processing conditions were measured under different storage conditions. Additional sugar and colorant had only slight effects on the colour decay, while exclusion of oxygen and daylight did not affect this process. The only condition that clearly maintained the freshly processed appearance was storage at 4°C. Hexagonal bottles were filled with the strawberry spreads and their colour was repeatedly measured at the six sides of the bottles, using a Minolta chroma meter. Data were analysed using non-linear indexed regression analysis based on a logistic function for the three colour aspects a*, b* and L*. This approach captured the variation in the data with improved reliability (adjusted R2 > 90%) and also allowed better interpretation of the processes involved. All variations in the data could be attributed to technical variation.
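
    As an illustration of the kind of logistic description mentioned above (not the authors' indexed-regression code; the data points and starting values below are invented), a single colour coordinate can be fitted like this:

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic_decay(t, y_end, y_start, k, t_half):
            """Colour coordinate decaying from y_start towards y_end over storage time t (days)."""
            return y_end + (y_start - y_end) / (1.0 + np.exp(k * (t - t_half)))

        # Hypothetical a* readings over storage time
        t_days = np.array([0, 7, 14, 28, 42, 56, 84], dtype=float)
        a_star = np.array([34.1, 33.2, 30.8, 26.5, 24.0, 22.9, 22.1])

        popt, _ = curve_fit(logistic_decay, t_days, a_star, p0=[22.0, 34.0, 0.1, 25.0])
        residuals = a_star - logistic_decay(t_days, *popt)
        r2 = 1.0 - residuals.var() / a_star.var()
        print("fitted parameters:", popt, " R^2 =", round(r2, 3))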

  7. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our...

  8. A Lumped Thermal Model Including Thermal Coupling and Thermal Boundary Conditions for High Power IGBT Modules

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Ma, Ke; Blaabjerg, Frede

    2018-01-01

    Detailed thermal dynamics of high power IGBT modules are important information for the reliability analysis and thermal design of power electronic systems. However, the existing thermal models have their limits to correctly predict this complicated thermal behavior in the IGBTs: the typically used thermal models based on one-dimensional RC lumps have limits to provide temperature distributions inside the device; moreover, some variable factors in real-field applications, like the cooling and heating conditions of the converter, cannot be adapted. On the other hand, the more advanced three-dimensional thermal models based on the Finite Element Method (FEM) need massive computations, which make the long-term thermal dynamics difficult to calculate. In this paper, a new lumped three-dimensional thermal model is proposed, which can be easily characterized from FEM simulations and can acquire the critical...
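
    The model proposed in the paper is a three-dimensional lumped network characterized from FEM simulations. As a much simpler illustration of the lumped-RC idea it builds on, the sketch below steps a one-dimensional Foster network (junction to ambient) through a power-loss time series; the thermal resistances and time constants are invented for the example and are not device data.

        import numpy as np

        # Hypothetical Foster network: (R_i [K/W], tau_i [s]) pairs from junction to ambient
        foster = [(0.02, 0.01), (0.05, 0.1), (0.08, 1.0), (0.10, 10.0)]

        def junction_temperature(p_loss, dt, t_ambient=40.0):
            """Junction temperature for a power-loss time series, explicit update of each RC branch."""
            states = np.zeros(len(foster))           # temperature drop across each RC pair
            t_junction = np.empty_like(p_loss)
            for k, p in enumerate(p_loss):
                for i, (r, tau) in enumerate(foster):
                    # dT_i/dt = (P * R_i - T_i) / tau_i, integrated with forward Euler
                    states[i] += dt * (p * r - states[i]) / tau
                t_junction[k] = t_ambient + states.sum()
            return t_junction

        dt = 1e-3
        p = np.full(20000, 150.0)                    # 150 W of losses for 20 s
        print("junction temperature after 20 s: %.1f degC" % junction_temperature(p, dt)[-1])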

  9. Advanced Modeling of Ramp Operations including Departure Status at Secondary Airports, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project addresses three modeling elements relevant to NASA's IADS research and ATD-2 project, two related to ramp operations at primary airports and one related...

  10. Extending the Scope of the Acculturation/Pidginization Model to Include Cognition.

    Science.gov (United States)

    Schumann, John H.

    1990-01-01

    Examines five cognitive models for second-language acquisition (SLA) and assesses how each might account for the Pidginized interlanguage found in the early stages of second-language acquisition. (23 references) (JL)

  11. An integrated computable general equilibrium model including multiple types and uses of water

    OpenAIRE

    Luckmann, Jonas Jens

    2015-01-01

    Water is a scarce resource in many regions of the world and competition for water is an increasing problem. To counteract this trend, policies are needed that regulate the supply of and demand for water. As water is used in many economic activities, water-related management decisions usually have complex implications. Economic simulation models have proven useful for the ex-ante assessment of the consequences of policy changes. Specifically, Computable General Equilibrium (CGE) models are very suitable to ana...

  12. Transverse Crack Modeling and Validation in Rotor Systems Including Thermal Effects

    Directory of Open Access Journals (Sweden)

    N. Bachschmid

    2004-01-01

    In this article, a model is described that allows one to simulate the static behavior of a transversal crack in a horizontal rotor, under the action of the weight and other possible static loads, and the dynamical behavior of the rotating cracked shaft. The crack “breathes”, i.e., the mechanism of opening and closing of the crack is ruled by the stress acting on the cracked section due to the external loads; in a rotor the stress is time-dependent with a period equal to the period of rotation, thus the crack “periodically breathes.” An original simplified model is described that allows cracks of different shapes to be modeled and thermal stresses to be taken into account, since they may influence the opening and closing mechanism. The proposed method has been validated using two criteria. Firstly, the crack “breathing” mechanism, simulated with the model, has been compared with the results obtained by a nonlinear 3-D FEM calculation and a good agreement in the results has been observed. Secondly, the proposed model allows the development of the equivalent cracked beam. The results of this model are compared with those obtained by the above-mentioned 3-D FEM. There is a good agreement in the results in this case as well.

  13. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2011-12-01

    Sugarcane is currently the most efficient bioenergy crop with regard to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels if they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observed LAI data are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can satisfactorily reproduce the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production at different locations and under different technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.

  14. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability

    Science.gov (United States)

    Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.

    2018-02-01

    A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface and L is the Obukhov length corresponding to Ri and η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, though the overestimation is less severe in unstable than in stable cases.

  15. A Novel Mean-Value Model of the Cardiovascular System Including a Left Ventricular Assist Device.

    Science.gov (United States)

    Ochsner, Gregor; Amacher, Raffael; Schmid Daners, Marianne

    2017-06-01

    Time-varying elastance models (TVEMs) are often used for simulation studies of the cardiovascular system with a left ventricular assist device (LVAD). Because these models are computationally expensive, they cannot be used for long-term simulation studies. In addition, their equilibria are periodic solutions, which prevent the extraction of a linear time-invariant model that could be used e.g. for the design of a physiological controller. In the current paper, we present a new type of model to overcome these problems: the mean-value model (MVM). The MVM captures the behavior of the cardiovascular system by representative mean values that do not change within the cardiac cycle. For this purpose, each time-varying element is manually converted to its mean-value counterpart. We compare the derived MVM to a similar TVEM in two simulation experiments. In both cases, the MVM is able to fully capture the inter-cycle dynamics of the TVEM. We hope that the new MVM will become a useful tool for researchers working on physiological control algorithms. This paper provides a plant model that enables for the first time the use of tools from classical control theory in the field of physiological LVAD control.

  16. Modeling of the mechanical alloying process

    Science.gov (United States)

    Maurice, D.; Courtney, T. H.

    1992-01-01

    Two programs have been developed to compute the dimensional and property changes that occur with repetitive impacts during the mechanical alloying process. The more sophisticated of the programs also maintains a running count of the fractions of particles present and from this calculates a population distribution. The programs predict powder particle size and shape changes in accord with the accepted stages of powder development during mechanical alloying of ductile species. They also predict hardness and lamellar thickness changes with processing, again with reasonable agreement with experimental results. These predictions offer support of the model (and thereby give insight into the possible 'actual' happenings of mechanical alloying) and hence allow refinement and calibration of the myriad aspects of the model. They also provide a vehicle for establishing control over the dimensions and properties of the output powders used for consolidation, thereby facilitating optimization of the consolidation process.

  17. Managing risks in business model innovation processes

    DEFF Research Database (Denmark)

    Taran, Yariv; Boer, Harry; Lindgren, Peter

    2010-01-01

    Companies today, in some industries more than others, invest more capital and resources just to stay competitive, develop more diverse solutions, and increasingly start thinking more radically when considering their business models. However, despite the understanding that business model (BM) innovation is a risky enterprise, many companies are still choosing not to apply any risk management in the BM innovation process. The objective of this paper is to develop a better understanding of how risks are handled in the practice of BM innovation. An analysis of the BM innovation experiences of two industrial companies shows that both companies are experiencing high levels of uncertainty and complexity during their innovation processes and are, consequently, struggling to find new processes for handling the risks involved. Based on the two companies' experiences, various testable propositions are put forward, which link success and failure to the way companies appreciate and handle the risks involved in BM innovation.

  18. Statistical model for high energy inclusive processes

    International Nuclear Information System (INIS)

    Pomorisac, B.

    1980-01-01

    We propose a statistical model of inclusive processes. The model is an extension of the model proposed by Scalapino and Sugar for the inclusive distributions in rapidity. The model is defined in terms of a random variable on the full phase space of the produced particles and in terms of a Lorentz-invariant probability distribution. We suggest that the Lorentz invariance is broken spontaneously, which may describe the observed anisotropy of the inclusive distributions. Based on this model we calculate the distribution in transverse momentum. An explicit calculation is given of the one-particle inclusive cross sections and the two-particle correlation. The results give a fair representation of the shape of one-particle inclusive cross sections, and a positive correlation for the emitted particles. The relevance of our results to experiments is discussed

  19. Study of dissolution process and its modelling

    Directory of Open Access Journals (Sweden)

    Juan Carlos Beltran-Prieto

    2017-01-01

    The use of mathematical concepts and language aiming to describe and represent the interactions and dynamics of a system is known as a mathematical model. Mathematical modelling finds a huge number of successful applications in a vast range of scientific, social and engineering fields, including biology, chemistry, physics, computer science, artificial intelligence, bioengineering, finance and economics. In this research, we aim to propose a mathematical model that predicts the dissolution of a solid material immersed in a fluid. The developed model can be used to evaluate the rate of mass transfer and the mass transfer coefficient. Further research is expected to be carried out to use the model as a base to develop useful models for the pharmaceutical industry to gain information about the dissolution of medicaments in the bloodstream, which could play a key role in the formulation of medicaments.
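
    The abstract does not state the model equations. A classical starting point for this kind of dissolution model is the Noyes-Whitney/film-theory form below, which may or may not coincide with the authors' formulation:

        \[
        \frac{\mathrm{d}C}{\mathrm{d}t} \;=\; \frac{k_{L} A}{V}\,\bigl(C_{s} - C\bigr),
        \qquad
        C(t) \;=\; C_{s}\Bigl(1 - e^{-k_{L} A t / V}\Bigr)
        \quad \text{(constant } A,\; C(0)=0\text{)},
        \]

    where C is the bulk concentration, C_s the solubility, A the surface area of the solid, V the liquid volume and k_L the mass-transfer coefficient. Fitting measured concentration-time data to this solution yields k_L A, and hence the mass transfer coefficient once the surface area is known.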

  20. Fundamentals of Numerical Modelling of Casting Processes

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Pryds, Nini; Thorborg, Jesper

    Fundamentals of Numerical Modelling of Casting Processes comprises a thorough presentation of the basic phenomena that need to be addressed in numerical simulation of casting processes. The main philosophy of the book is to present the topics in view of their physical meaning, whenever possible......, rather than relying strictly on mathematical formalism. The book, aimed both at the researcher and the practicing engineer, as well as the student, is naturally divided into four parts. Part I (Chapters 1-3) introduces the fundamentals of modelling in a 1-dimensional framework. Part II (Chapter 4......) presents the most important aspects of solidification theory related to modelling. Part III (Chapter 5) describes the fluid flow phenomena and in part IV (Chapter 6) the stress-strain analysis is addressed. For all parts, both numerical formulations as well as some important analytical solutions...

  1. Temperature Modelling of the Biomass Pretreatment Process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Jensen, Jakob M.

    2012-01-01

    In a second generation biorefinery, the biomass pretreatment stage has an important contribution to the efficiency of the downstream processing units involved in biofuel production. Most of the pretreatment process occurs in a large pressurized thermal reactor that presents an irregular temperature distribution. Therefore, an accurate temperature model is critical for observing the biomass pretreatment. Moreover, the biomass is also pushed with a constant horizontal speed along the reactor in order to ensure a continuous throughput. The goal of this paper is to derive a temperature model that captures the environmental temperature differences inside the reactor using distributed parameters. A Kalman filter is then added to account for any missing dynamics and the overall model is embedded into a temperature soft sensor. The operator of the plant will be able to observe the temperature in any...
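
    As a minimal illustration of the soft-sensor idea described above (a scalar random-walk temperature state corrected by noisy measurements, not the distributed-parameter model of the paper; all noise levels and temperatures are invented):

        import numpy as np

        def kalman_soft_sensor(measurements, q=0.05, r=1.0, x0=160.0, p0=10.0):
            """Scalar Kalman filter: random-walk temperature model, measurement noise variance r."""
            x, p = x0, p0
            estimates = []
            for z in measurements:
                p = p + q                    # predict: random-walk process noise
                k = p / (p + r)              # Kalman gain
                x = x + k * (z - x)          # update with the new temperature reading
                p = (1.0 - k) * p
                estimates.append(x)
            return np.array(estimates)

        rng = np.random.default_rng(0)
        true_temp = 165.0 + 2.0 * np.sin(np.linspace(0, 6, 200))   # slowly varying reactor temperature
        noisy = true_temp + rng.normal(0.0, 1.0, size=true_temp.size)
        filtered = kalman_soft_sensor(noisy)
        print("RMS error raw: %.2f K, filtered: %.2f K"
              % (np.sqrt(np.mean((noisy - true_temp) ** 2)),
                 np.sqrt(np.mean((filtered - true_temp) ** 2))))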

  2. Modeling veterans healthcare administration disclosure processes

    Energy Technology Data Exchange (ETDEWEB)

    Beyeler, Walter E; DeMenno, Mercy B.; Finley, Patrick D.

    2013-09-01

    As with other large healthcare organizations, medical adverse events at the Department of Veterans Affairs (VA) facilities can expose patients to unforeseen negative risks. VHA leadership recognizes that properly handled disclosure of adverse events can minimize potential harm to patients and negative consequences for the effective functioning of the organization. The work documented here seeks to help improve the disclosure process by situating it within the broader theoretical framework of issues management, and to identify opportunities for process improvement through modeling disclosure and reactions to disclosure. The computational model will allow a variety of disclosure actions to be tested across a range of incident scenarios. Our conceptual model will be refined in collaboration with domain experts, especially by continuing to draw on insights from VA Study of the Communication of Adverse Large-Scale Events (SCALE) project researchers.

  3. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
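
    For reference, the Nash-Sutcliffe efficiency mentioned in point (1) is

        \[
        \mathrm{NSE} \;=\; 1 - \frac{\sum_{i}\,(O_{i} - S_{i})^{2}}{\sum_{i}\,(O_{i} - \bar{O})^{2}},
        \]

    where O_i are the observations, S_i the simulated values and O-bar the observed mean. NSE = 1 is a perfect fit, while NSE <= 0 means the model predicts no better than the observed mean; because very different simulations can share similar NSE values, a single statistic of this kind may fail to discriminate between realistic and unrealistic parameterisations, which is the point made above.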

  4. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    Science.gov (United States)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
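
    A rough sketch of the workflow described above, using scikit-learn with random placeholder data standing in for the actual attribute set (composition-dependent elemental-property statistics plus Voronoi-tessellation-derived structure attributes, which are not reproduced here), and a random forest standing in for the paper's ensemble of decision trees:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # X: rows = compounds, columns = composition + Voronoi-derived structure attributes
        # y: DFT formation enthalpies in eV/atom; random data stands in for an OQMD-style set
        rng = np.random.default_rng(42)
        X = rng.normal(size=(5000, 145))
        y = X[:, :10].sum(axis=1) * 0.05 + rng.normal(scale=0.02, size=5000)

        model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
        mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
        print("cross-validated MAE: %.3f eV/atom" % mae.mean())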

  5. Representing vegetation processes in hydrometeorological simulations using the WRF model

    DEFF Research Database (Denmark)

    Nielsen, Joakim Refslund

    For accurate predictions of weather and climate, it is important that the land surface and its processes are well represented. In a mesoscale model the land surface processes are calculated in a land surface model (LSM). These processes include exchanges of energy, water and momentum between...... data and the default vegetation data in WRF were further used in high-resolution simulations over Denmark down to cloud-resolving scale (3 km). Results from two spatial resolutions were compared to investigate the influence of parametrized and resolved convection. The simulations using the parametrized...

  6. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    Energy Technology Data Exchange (ETDEWEB)

    Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))

    2009-11-15

    the lineament scale (k_t = 2) on the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m, and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as measured for some outcrops and most lineament maps, is worth investigating as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability, either as an admissible variation of model parameters or as a division of the site into subdomains with distinct DFN models, is a critical point of the modeling effort. Moreover, the DFN model carries a part of uncertainty, due to uncertainties inherent in the data and to sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data, including records of fracture intercept positions, pole orientations and their relative uncertainties, are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect, locally, geology

  7. Including fluid shear viscosity in a structural acoustic finite element model using a scalar fluid representation

    Science.gov (United States)

    Cheng, Lei; Li, Yizeng; Grosh, Karl

    2013-01-01

    An approximate boundary condition is developed in this paper to model fluid shear viscosity at the boundaries of a coupled fluid-structure system. The effect of shear viscosity is approximated by a correction term to the inviscid boundary condition, written in terms of second-order in-plane derivatives of pressure. Both thin and thick viscous boundary layer approximations are formulated; the latter subsumes the former. These approximations are used to develop a variational formulation, upon which a viscous finite element method (FEM) model is based, requiring only minor modifications to the boundary integral contributions of an existing inviscid FEM model. Since this FEM formulation has only one degree of freedom for pressure, it holds a great computational advantage over the conventional viscous FEM formulation, which requires discretization of the full set of linearized Navier-Stokes equations. The results from the thick viscous boundary layer approximation are found to be in good agreement with the prediction from a Navier-Stokes model. When applicable, the thin viscous boundary layer approximation also gives accurate results with computational simplicity compared to the thick boundary layer formulation. Direct comparison of simulation results using the boundary layer approximations and a full, linearized Navier-Stokes model is made and used to evaluate the accuracy of the approximate technique. Guidelines are given for the parameter ranges over which the thick and thin boundary approximations can be accurately applied to a fluid-structure interaction problem. PMID:23729844

  8. Results of including geometric nonlinearities in an aeroelastic model of an F/A-18

    Science.gov (United States)

    Buttrill, Carey S.

    1989-01-01

    An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.

  9. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    Science.gov (United States)

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
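
    The exact model of the paper is not reproduced in the abstract. The sketch below fits a widely used basic three-compartment HIV model (target cells T, infected cells I, free virus V, overall drug efficacy eta) to hypothetical log viral-load data by nonlinear least squares; this corresponds only to the initial-estimate step mentioned above, not to the full MCMC analysis, and all parameter values and data are invented.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def hiv_ode(t, y, lam, d, k, delta, p, c, eta):
            T, I, V = y
            return [lam - d * T - (1 - eta) * k * T * V,     # target cells
                    (1 - eta) * k * T * V - delta * I,       # productively infected cells
                    p * I - c * V]                           # free virus

        def log_viral_load(params, t_obs, y0):
            sol = solve_ivp(hiv_ode, (0.0, t_obs[-1]), y0, t_eval=t_obs,
                            args=tuple(params), rtol=1e-6)
            return np.log10(np.clip(sol.y[2], 1e-3, None))

        # Hypothetical observations: log10 viral load over 30 days on therapy
        t_obs = np.linspace(0.0, 30.0, 20)
        y0 = [1e6, 1e4, 1e5]
        truth = [1e4, 0.01, 2.4e-8, 0.7, 1000.0, 13.0, 0.85]
        data = log_viral_load(truth, t_obs, y0) + np.random.default_rng(1).normal(0.0, 0.1, t_obs.size)

        fit = least_squares(lambda p: log_viral_load(p, t_obs, y0) - data,
                            x0=[5e3, 0.02, 1e-8, 0.5, 800.0, 10.0, 0.7],
                            bounds=(0.0, [1e6, 1.0, 1e-6, 5.0, 1e4, 50.0, 1.0]))
        print("estimated drug efficacy eta ~", round(fit.x[-1], 2))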

  10. Modelling of bypass transition including the pseudolaminar part of the boundary layer

    Energy Technology Data Exchange (ETDEWEB)

    Prihoda, J.; Hlava, T. [Ceska Akademie Ved, Prague (Czech Republic). Inst. of Thermomechanics; Kozel, K. [Ceske Vysoke Uceni Technicke, Prague (Czech Republic). Faculty of Mechanical Engineering

    1999-12-01

    The boundary-layer transition in turbomachinery is accelerated by a number of parameters, especially by the free-stream turbulence. This so-called bypass transition is usually modelled by means of one-equation or two-equation turbulence models based on turbulent viscosity. The use of transport equations for turbulent energy and for the dissipation rate in these models is questionable before the onset of the last stage of the transition, i.e. before the formation of turbulent spots. The approximations used for production and turbulent diffusion are the weak points of turbulence models with turbulent viscosity in the pseudolaminar boundary layer, as the Boussinesq assumption on turbulent viscosity is not fulfilled in this part of the boundary layer. In order to obtain a more reliable prediction of the transitional boundary layer, Mayle and Schulz (1997) proposed for the solution of the pseudolaminar boundary layer a special 'laminar-kinetic-energy' equation based on the analysis of the laminar boundary layer in flows with velocity fluctuations. The effect of production and turbulent diffusion on the development of turbulent energy in the pseudolaminar boundary layer was tested using a two-layer turbulence model. (orig.)

  12. Beam dynamics in high intensity cyclotrons including neighboring bunch effects: Model, implementation, and application

    Directory of Open Access Journals (Sweden)

    J. J. Yang (杨建俊

    2010-06-01

    Space-charge effects, being one of the most significant collective effects, play an important role in high intensity cyclotrons. However, for cyclotrons with small turn separation, other existing effects are of equal importance. Interactions of radially neighboring bunches are also present, but their combined effects have not yet been investigated in any great detail. In this paper, a new particle-in-cell-based self-consistent numerical simulation model is presented for the first time. The model covers neighboring bunch effects and is implemented in the three-dimensional object-oriented parallel code OPAL-cycl, a flavor of the OPAL framework. We discuss this model together with its implementation and validation. Simulation results are presented from the PSI 590 MeV ring cyclotron in the context of the ongoing high intensity upgrade program, which aims to provide a beam power of 1.8 MW (CW) at the target destination.

  13. A Simple Model of Fields Including the Strong or Nuclear Force and a Cosmological Speculation

    Directory of Open Access Journals (Sweden)

    David L. Spencer

    2016-10-01

    Full Text Available Reexamining the assumptions underlying the General Theory of Relativity and calling an object's gravitational field its inertia, and acceleration simply resistance to that inertia, yields a simple field model where the potential (kinetic energy of a particle at rest is its capacity to move itself when its inertial field becomes imbalanced. The model then attributes electromagnetic and strong forces to the effects of changes in basic particle shape. Following up on the model's assumption that the relative intensity of a particle's gravitational field is always inversely related to its perceived volume and assuming that all black holes spin, may create the possibility of a cosmic rebound where a final spinning black hole ends with a new Big Bang.

  14. Modeling of the dynamics of wind to power conversion including high wind speed behavior

    DEFF Research Database (Denmark)

    Litong-Palima, Marisciel; Bjerge, Martin Huus; Cutululis, Nicolaos Antonio

    2016-01-01

    This paper proposes and validates an efficient, generic and computationally simple dynamic model for the conversion of the wind speed at hub height into the electrical power by a wind turbine. This proposed wind turbine model was developed as a first step to simulate wind power time series...... speed shutdowns and restarts are represented as on–off switching rules that govern the output of the wind turbine at extreme wind speed conditions. The model uses the concept of equivalent wind speed, estimated from the single point (hub height) wind speed using a second-order dynamic filter...... measurements available from the DONG Energy offshore wind farm Horns Rev 2. Copyright © 2015 John Wiley & Sons, Ltd....
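
    A minimal sketch of the two ingredients named above: an equivalent wind speed obtained from the hub-height signal with a second-order low-pass filter, and an on/off rule with hysteresis for the high-wind-speed shutdowns and restarts. The filter time constant, power curve and wind-speed thresholds are placeholders, not the validated values of the paper.

        import numpy as np

        def equivalent_wind_speed(v_hub, dt, tau=10.0):
            """Critically damped second-order low-pass filter as a stand-in for rotor averaging."""
            v_eq = np.empty_like(v_hub)
            x1, x2 = v_hub[0], 0.0                      # filter states: output and its derivative
            for i, v in enumerate(v_hub):
                x2 += dt * (v - x1 - 2.0 * tau * x2) / tau**2
                x1 += dt * x2
                v_eq[i] = x1
            return v_eq

        def power_output(v_eq, cut_out=25.0, re_cut_in=22.0):
            """Interpolated power curve plus hysteresis switching for storm shutdown/restart."""
            curve_v = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0, 25.0])      # m/s
            curve_p = np.array([0.0, 0.0, 0.6, 2.2, 3.5, 3.6, 3.6])         # MW
            p, running = np.zeros_like(v_eq), True
            for i, v in enumerate(v_eq):
                if running and v > cut_out:
                    running = False                     # high wind speed shutdown
                elif not running and v < re_cut_in:
                    running = True                      # restart once the wind has dropped back
                p[i] = np.interp(v, curve_v, curve_p) if running else 0.0
            return p

        dt = 1.0
        wind = 18.0 + 8.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, 3600)) \
               + np.random.default_rng(2).normal(0.0, 1.0, 3600)
        print("mean power: %.2f MW" % power_output(equivalent_wind_speed(wind, dt)).mean())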

  15. A computational model of human auditory signal processing and perception.

    Science.gov (United States)

    Jepsen, Morten L; Ewert, Stephan D; Dau, Torsten

    2008-07-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.

  16. Modelling Of Manufacturing Processes With Membranes

    Science.gov (United States)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2015-07-01

    The current objectives to increase the standards of quality and efficiency in manufacturing processes can be achieved only through the best combination of inputs, independent of spatial distance between them. This paper proposes modelling production processes based on membrane structures introduced in [4]. Inspired from biochemistry, membrane computation [4] is based on the concept of membrane represented in its formalism by the mathematical concept of multiset. The manufacturing process is the evolution of a super cell system from its initial state according to the given actions of aggregation. In this paper we consider that the atomic production unit of the process is the action. The actions and the resources on which the actions are produced, are distributed in a virtual network of companies working together. The destination of the output resources is specified by corresponding output events.

  17. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. In the present study, we focus on large scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet, and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10^12 m^-3, which is unusual for a winter and moderate solar activity (F10.7 = 130) period. An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionospheric convection velocities in the early afternoon (with the northward electric field reaching 150 mV m^-1 and corresponding frictional heating of the ions up to 2500 K). The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region, and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement obtained between the model and the observations demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found, which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  19. Evaluation of European air quality modelled by CAMx including the volatility basis set scheme

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2016-08-01

    Full Text Available Four periods of EMEP (European Monitoring and Evaluation Programme intensive measurement campaigns (June 2006, January 2007, September–October 2008 and February–March 2009 were modelled using the regional air quality model CAMx with VBS (volatility basis set approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the period of February–March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA. Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2 and ozone (O3 were found to be overestimated for all the four periods, with O3 having the largest mean bias during June 2006 and January–February 2007 periods (8.9 pbb and 12.3 ppb mean biases respectively. In contrast, nitrogen dioxide (NO2 and carbon monoxide (CO were found to be underestimated for all the four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all the four periods with average biases ranging from −2.1 to 1.0 µg m−3. Comparisons with AMS (aerosol mass spectrometer measurements at different sites in Europe during February–March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurements data were performed. For February–March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period bringing

  20. Molecular Modeling of Aerospace Polymer Matrices Including Carbon Nanotube-Enhanced Epoxy

    Science.gov (United States)

    Radue, Matthew S.

    Carbon fiber (CF) composites are increasingly replacing metals used in major structural parts of aircraft, spacecraft, and automobiles. The current limitations of carbon fiber composites are addressed through computational material design by modeling the salient aerospace matrix materials. Molecular Dynamics (MD) models of epoxies with and without carbon nanotube (CNT) reinforcement and models of pure bismaleimides (BMIs) were developed to elucidate structure-property relationships for improved selection and tailoring of matrices. The influence of monomer functionality on the mechanical properties of epoxies is studied using the Reax Force Field (ReaxFF). From deformation simulations, the Young's modulus, yield point, and Poisson's ratio are calculated and analyzed. The results demonstrate an increase in stiffness and yield strength with increasing resin functionality. Comparison between the network structures of distinct epoxies is further advanced by the Monomeric Degree Index (MDI). Experimental validation demonstrates that the MD results correctly predict the relationship between the Young's moduli of all epoxies modeled. Therefore, the ReaxFF is confirmed to be a useful tool for studying the mechanical behavior of epoxies. While epoxies have been well studied using MD, there has been no concerted effort to model cured BMI polymers, due to the complexity of the network-forming reactions. A novel, adaptable crosslinking framework is developed for implementing 5 distinct cure reactions of Matrimid-5292 (a BMI resin) and investigating the network structure using MD simulations. The influence of the different cure reactions and of the extent of curing is analyzed for several thermo-mechanical properties, such as mass density, glass transition temperature, coefficient of thermal expansion, elastic moduli, and thermal conductivity. The developed crosslinked models correctly predict experimentally observed trends for various properties. Finally, the epoxies modeled (di-, tri-, and tetra

  1. A program for confidence interval calculations for a Poisson process with background including systematic uncertainties: POLE 1.0

    Science.gov (United States)

    Conrad, Jan

    2004-04-01

    A Fortran 77 routine has been developed to calculate confidence intervals with and without systematic uncertainties using a frequentist confidence interval construction with a Bayesian treatment of the systematic uncertainties. The routine can account for systematic uncertainties in the background prediction and signal/background efficiencies. The uncertainties may be separately parametrized by a Gauss, log-normal or flat probability density function (PDF), though since a Monte Carlo approach is chosen to perform the necessary integrals a generalization to other parameterizations is particularly simple. Full correlation between signal and background efficiency is optional. The ordering schemes for frequentist construction currently supported are the likelihood ratio ordering (also known as Feldman-Cousins) and Neyman ordering. Optionally, both schemes can be used with conditioning, meaning the probability density function is conditioned on the fact that the actual outcome of the background process can not have been larger than the number of observed events. Program summary - Title of program: POLE version 1.0. Catalogue identifier: ADTA. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTA. Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: None. Computer for which the program is designed: DELL PC, 1 GB, 2.0 GHz Pentium IV. Operating system under which the program has been tested: RH Linux 7.2, Kernel 2.4.7-10. Programming language used: Fortran 77. Memory required to execute with typical data: ~1.6 Mbytes. No. of bytes in distributed program, including test data, etc.: 373745. No. of lines in distributed program, including test data, etc.: 2700. Distribution format: tar gzip file. Keywords: Confidence interval calculation, Systematic uncertainties. Nature of the physical problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with known background in presence of
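
    As a rough illustration of the likelihood-ratio (Feldman-Cousins) ordering that the routine supports, the sketch below builds the confidence belt for a Poisson signal with known background and no systematic uncertainties; the Bayesian smearing of efficiencies that POLE adds on top is omitted here.

        import numpy as np
        from scipy.stats import poisson

        def fc_interval(n_obs, b, cl=0.90, s_max=30.0, n_max=60):
            """Feldman-Cousins interval for a Poisson signal s with known background b."""
            ns = np.arange(n_max + 1)
            accepted_s = []
            for s in np.arange(0.0, s_max, 0.01):
                prob = poisson.pmf(ns, s + b)
                s_best = np.maximum(ns - b, 0.0)               # best non-negative signal for each n
                ratio = prob / poisson.pmf(ns, s_best + b)     # likelihood ratio used for ordering
                order = np.argsort(ratio)[::-1]                # take n values with largest ratio first
                coverage, accepted_n = 0.0, set()
                for n in order:
                    accepted_n.add(int(n))
                    coverage += prob[n]
                    if coverage >= cl:
                        break
                if n_obs in accepted_n:
                    accepted_s.append(s)
            return min(accepted_s), max(accepted_s)

        print("90%% CL interval for n_obs = 4, b = 1.5: [%.2f, %.2f]" % fc_interval(4, 1.5))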

  2. Dimensional modeling: beyond data processing constraints.

    Science.gov (United States)

    Bunardzic, A

    1995-01-01

    The focus of information processing requirements is shifting from the on-line transaction processing (OLTP) issues to the on-line analytical processing (OLAP) issues. While the former serves to ensure the feasibility of the real-time on-line transaction processing (which has already exceeded a level of up to 1,000 transactions per second under normal conditions), the latter aims at enabling more sophisticated analytical manipulation of data. The OLTP requirements, or how to efficiently get data into the system, have been solved by applying the Relational theory in the form of Entity-Relation model. There is presently no theory related to OLAP that would resolve the analytical processing requirements as efficiently as Relational theory provided for the transaction processing. The "relational dogma" also provides the mathematical foundation for the Centralized Data Processing paradigm in which mission-critical information is incorporated as 'one and only one instance' of data, thus ensuring data integrity. In such surroundings, the information that supports business analysis and decision support activities is obtained by running predefined reports and queries that are provided by the IS department. In today's intensified competitive climate, businesses are finding that this traditional approach is not good enough. The only way to stay on top of things, and to survive and prosper, is to decentralize the IS services. The newly emerging Distributed Data Processing, with its increased emphasis on empowering the end user, does not seem to find enough merit in the relational database model to justify relying upon it. Relational theory proved too rigid and complex to accommodate the analytical processing needs. In order to satisfy the OLAP requirements, or how to efficiently get the data out of the system, different models, metaphors, and theories have been devised. All of them are pointing to the need for simplifying the highly non-intuitive mathematical constraints found

  3. The economic production lot size model extended to include more than one production rate

    DEFF Research Database (Denmark)

    Larsen, Christian

    2005-01-01

    We study an extension of the economic production lot size model, where more than one production rate can be used during a cycle. Moreover, the production rates, as well as their corresponding runtimes are decision variables. We decompose the problem into two subproblems. First, we show that all...
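
    For orientation, a hedged sketch of the classical single-production-rate economic production lot size formula that this model extends; the symbols (setup cost K, demand rate D, production rate P, holding cost h) are generic textbook notation, not necessarily the paper's.

```latex
% Classical economic production lot size (single production rate P > D);
% K = setup cost, D = demand rate, h = holding cost per unit per time.
Q^{*} = \sqrt{\frac{2 K D}{h \left(1 - D/P\right)}}
```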

  4. The economic production lot size model extended to include more than one production rate

    DEFF Research Database (Denmark)

    Larsen, Christian

    2001-01-01

    We study an extension of the economic production lot size model, where more than one production rate can be used during a cycle. Moreover, the production rates, as well as their corresponding runtimes, are decision variables. First, we show that all production rates should be chosen in the interval...

  5. Static aeroelastic analysis including geometric nonlinearities based on reduced order model

    Directory of Open Access Journals (Sweden)

    Changchuan Xie

    2017-04-01

    Full Text Available This paper describes a method proposed for modeling large deflections of aircraft in nonlinear aeroelastic analysis by developing a reduced order model (ROM). The method is applied to solving the static aeroelastic and static aeroelastic trim problems of flexible aircraft containing geometric nonlinearities; meanwhile, the non-planar effects of aerodynamics and the follower force effect have been considered. ROMs are computationally inexpensive mathematical representations compared to the traditional nonlinear finite element method (FEM), especially in aeroelastic solutions. The structural modeling approach presented here is based on the combined modal/finite element (MFE) method, which characterizes the stiffness nonlinearities; this structural model is applied as the ROM in the aeroelastic analysis. Moreover, the non-planar aerodynamic force is computed by the non-planar vortex lattice method (VLM). Structure and aerodynamics can be coupled with the surface spline method. The results show that both the static aeroelastic analysis and the trim analysis based on the structural ROM agree well with FEM-based analysis and experimental results.

  6. Loss and thermal model for power semiconductors including device rating information

    DEFF Research Database (Denmark)

    Ma, Ke; Bahman, Amir Sajjad; Beczkowski, Szymon

    2014-01-01

    pre-defined by experience with poor design flexibility. Consequently a more complete loss and thermal model is proposed in this paper, which takes into account not only the electrical loading but also the device rating as input variables. The quantified correlation between the power loss, thermal...
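
    As a hedged illustration of how electrical loading and device-rating information typically enter such a loss and thermal model (the paper's exact formulation may differ), a commonly used datasheet-based estimate splits the loss into conduction and switching terms and scales the switching energies to the rated test conditions:

```latex
% Conduction + switching losses for a power device, scaled to rated test
% conditions (V_ref, I_ref taken from the datasheet); all symbols generic.
P_{\mathrm{cond}} = V_{0}\, I_{\mathrm{avg}} + r\, I_{\mathrm{rms}}^{2}, \qquad
P_{\mathrm{sw}} = f_{\mathrm{sw}} \left(E_{\mathrm{on}} + E_{\mathrm{off}}\right)
                  \frac{V_{\mathrm{dc}}}{V_{\mathrm{ref}}}\,\frac{I}{I_{\mathrm{ref}}}, \qquad
T_{j} = T_{a} + \left(P_{\mathrm{cond}} + P_{\mathrm{sw}}\right) R_{\mathrm{th}}
```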

  7. Social Rationality as a Unified Model of Man (Including Bounded Rationality)

    NARCIS (Netherlands)

    Lindenberg, Siegwart

    2001-01-01

    In 1957, Simon published a collection of his essays under the title of “Models of Man: Social and Rational”. In the preface, he explains the choice for this title: All of the essays “are concerned with laying foundations for a science of man that will comfortably accommodate his dual nature as a

  8. Simple vibration modeling of structural fuzzy with continuous boundary by including two-dimensional spatial memory

    DEFF Research Database (Denmark)

    Friis, Lars; Ohlrich, Mogens

    2008-01-01

    is considered as one or more fuzzy substructures that are known in some statistical sense only. Experiments have shown that such fuzzy substructures often introduce a damping in the master which is much higher than the structural losses account for. A special method for modeling fuzzy substructures with a one...

  9. Situational effects of the school factors included in the dynamic model of educational effectiveness

    NARCIS (Netherlands)

    Creemers, Bert; Kyriakides, Leonidas

    We present results of a longitudinal study in which 50 schools, 113 classes and 2,542 Cypriot primary students participated. We tested the validity of the dynamic model of educational effectiveness and especially its assumption that the impact of school factors depends on the current situation of

  10. Modeling the elastic behavior of ductile cast iron including anisotropy in the graphite nodules

    DEFF Research Database (Denmark)

    Andriollo, Tito; Thorborg, Jesper; Hattel, Jesper Henri

    2016-01-01

    by means of a 3D periodic unit cell model. In this respect, an explicit procedure to enforce both periodic displacement and periodic traction boundary conditions in ABAQUS is presented, and the importance of fulfilling the traction continuity conditions at the unit cell boundaries is discussed. It is shown...

  11. Engineered Barrier System Degradation, Flow, and Transport Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2000-07-17

    The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining 4 describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: ''Water Distribution and Removal Model, Physical and Chemical Environment Model, Radionuclide Transport Model, and Multiscale Thermohydrologic Model''. One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting nuclear waste. A number of alternatives were considered by the Project for different EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II).

  12. Engineered Barrier System Degradation, Flow, and Transport Process Model Report

    International Nuclear Information System (INIS)

    E.L. Hardin

    2000-01-01

    The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining 4 describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: ''Water Distribution and Removal Model, Physical and Chemical Environment Model, Radionuclide Transport Model, and Multiscale Thermohydrologic Model''. One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting nuclear waste. A number of alternatives were considered by the Project for different EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II)

  13. Mathematical modeling of the flash converting process

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, H.Y.; Perez-Tello, M.; Riihilahti, K.M. [Utah Univ., Salt Lake City, UT (United States)

    1996-12-31

    An axisymmetric mathematical model for the Kennecott-Outokumpu flash converting process for converting solid copper matte to copper is presented. The model is an adaptation of the comprehensive mathematical model formerly developed at the University of Utah for the flash smelting of copper concentrates. The model incorporates the transport of momentum, heat, mass, and reaction kinetics between gas and particles in a particle-laden turbulent gas jet. The standard k-ε model is used to describe gas-phase turbulence in an Eulerian framework. The particle phase is treated from a Lagrangian viewpoint which is coupled to the gas phase via the source terms in the Eulerian gas-phase governing equations. Matte particles were represented as Cu2S·yFeS, and assumed to undergo homogeneous oxidation to Cu2O, Fe3O4, and SO2. A reaction kinetics mechanism involving both external mass transfer of oxygen gas to the particle surface and diffusion of oxygen through the porous oxide layer is proposed to estimate the particle oxidation rate. Predictions of the mathematical model were compared with the experimental data collected in a bench-scale flash converting facility. Good agreement between the model predictions and the measurements was obtained. The model was used to study the effect of different gas-injection configurations on the overall fluid dynamics in a commercial size flash converting shaft. (author)

  14. Residence time modeling of hot melt extrusion processes.

    Science.gov (United States)

    Reitz, Elena; Podhaisky, Helmut; Ely, David; Thommes, Markus

    2013-11-01

    The hot melt extrusion process is a widespread technique to mix viscous melts. The residence time of material in the process frequently determines the product properties. An experimental setup and a corresponding mathematical model were developed to evaluate residence time and residence time distribution in twin screw extrusion processes. The extrusion process was modeled as the convolution of a mass transport process described by a Gaussian probability function, and a mixing process represented by an exponential function. The residence time of the extrusion process was determined by introducing a tracer at the extruder inlet and measuring the tracer concentration at the die. These concentrations were fitted to the residence time model, and an adequate correlation was found. Different parameters were derived to characterize the extrusion process including the dead time, the apparent mixing volume, and a transport related axial mixing. A 2³ design of experiments was performed to evaluate the effect of powder feed rate, screw speed, and melt viscosity of the material on the residence time. All three parameters affect the residence time of material in the extruder. In conclusion, a residence time model was developed to interpret experimental data and to get insights into the hot melt extrusion process. Copyright © 2013 Elsevier B.V. All rights reserved.
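
    A minimal numerical sketch of the convolution model described above (a Gaussian transport term convolved with an exponential mixing term); the dead time, spread, and mixing time constant used here are illustrative, not fitted values from the study.

```python
# Residence time distribution E(t) as a Gaussian (plug flow + dispersion)
# convolved with an exponential (ideal mixer); parameter values assumed.
import numpy as np

t = np.linspace(0, 300, 3001)            # time axis in seconds
dt = t[1] - t[0]
t_dead, sigma, tau = 60.0, 10.0, 30.0    # assumed dead time, spread, mixing time

transport = np.exp(-0.5 * ((t - t_dead) / sigma) ** 2)
transport /= transport.sum() * dt        # normalize transport term to unit area
mixing = np.exp(-t / tau) / tau          # exponential mixing element (unit area)

rtd = np.convolve(transport, mixing)[:t.size] * dt   # E(t) of the extruder
print("mean residence time ~", np.sum(t * rtd) * dt)  # approx. t_dead + tau
```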

  15. Modelling and control of crystallization process

    Directory of Open Access Journals (Sweden)

    S.K. Jha

    2017-03-01

    Full Text Available Batch crystallizers are predominantly used in chemical industries like pharmaceuticals, food industries and specialty chemicals. The nonlinear nature of the batch process leads to difficulties when the objective is to obtain a uniform Crystal Size Distribution (CSD). In this study, a linear PI controller is designed using classical controller tuning methods for controlling the crystallizer outlet temperature by manipulating the inlet jacket temperature; however, the response is not satisfactory. A simple PID controller cannot guarantee a satisfactory response, which is why an optimal controller is designed to keep the concentration and temperature in a range that suits our needs. Any typical process operation has constraints on states, inputs and outputs, so a nonlinear process needs to be operated satisfying the constraints. Hence, a nonlinear controller like the Generic Model Controller (GMC), which is similar in structure to the PI controller, is implemented. It minimizes the derivative of the squared error, thus improving the output response of the process. Minimization of crystal size variation is considered as the objective function in this study. A model predictive controller is also designed, which uses an advanced optimization algorithm to minimize the error while linearizing the process. Constraints are fed into the MPC toolbox in MATLAB, and the prediction and control horizons and performance weights are tuned using the Sridhar and Cooper method. The performance of all three controllers (PID, GMC and MPC) is compared, and it is found that MPC is superior in terms of settling time and percentage overshoot.
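
    To illustrate the baseline linear PI strategy mentioned above (not the GMC or MPC designs), here is a minimal discrete PI loop regulating the crystallizer outlet temperature through the jacket temperature, with the crystallizer approximated as a first-order lag; all gains, time constants, and setpoints are assumed for illustration.

```python
# Discrete PI control of a first-order "crystallizer temperature" model;
# all numerical values are illustrative assumptions, not tuned values.
dt, tau, K = 1.0, 50.0, 1.0          # time step (s), process lag (s), process gain
Kp, Ki = 2.0, 0.05                   # assumed PI gains
setpoint, T, integral = 35.0, 25.0, 0.0

for _ in range(600):
    error = setpoint - T
    integral += error * dt
    Tj = 25.0 + Kp * error + Ki * integral        # manipulated jacket temperature
    T += dt / tau * (K * (Tj - 25.0) + 25.0 - T)  # first-order process response

print(f"temperature after 600 s: {T:.2f} degC")    # should settle near the 35 degC setpoint
```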

  16. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Sonnenthale, E.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are

  17. Including Antenna Models in Microwave Imaging for Breast-Cancer Screening

    DEFF Research Database (Denmark)

    Rubæk, Tonny; Meincke, Peter

    2006-01-01

    Microwave imaging is emerging as a tool for screening for breast cancer, but the lack of methods for including the characteristics of the antennas of the imaging systems in the imaging algorithms limits their performance. In this paper, a method for incorporating the full antenna characteristics...

  18. Numerical models of single- and double-negative metamaterials including viscous and thermal losses

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Sánchez-Dehesa, José

    2017-01-01

    Negative index acoustic metamaterials are artificial structures made of subwavelength units arranged in a lattice, whose effective acoustic parameters, bulk modulus and mass density, can be negative. In these materials, sound waves propagate inside the periodic structure, assumed rigid, showing...... extraordinary properties. We are interested in two particular cases: a double-negative metamaterial, where both parameters are negative at some frequencies, and a single-negative metamaterial with negative bulk modulus within a broader frequency band. In previous research involving the double-negative...... detailed understanding on how viscous and thermal losses affect the setups at different frequencies. The modeling of a simpler single-negative metamaterial also broadens this overview. Both setups have been modeled with quadratic BEM meshes. Each sample, scaled at two different sizes, has been represented...

  19. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated...... of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling....

  20. Paint Pavement Marking Performance Prediction Model That Includes the Impacts of Snow Removal Operations

    Science.gov (United States)

    2011-03-01

    Hypothesized that snow plows wear down mountain road pavement markings. Craig et al. (2007) found that edge lines degrade more slowly than center/skip lines; 2007 ... retroreflectivity to create the models. They discovered that paint pavement markings last 80% longer on Portland Cement Concrete than on Asphalt Concrete at low AADT ... retroreflectivity, while yellow markings lost 21%. Lu and Barter attributed the sizable degradation to snow removal, sand application, and studded tires.

  1. An earth outgoing longwave radiation climate model. II - Radiation with clouds included

    Science.gov (United States)

    Yang, Shi-Keng; Smith, G. Louis; Bartman, Fred L.

    1988-01-01

    The model of the outgoing longwave radiation (OLWR) of Yang et al. (1987) is modified by accounting for the presence of clouds and their influence on OLWR. Cloud top temperature was adjusted so that the calculation agreed with NOAA scanning radiometer measurements. Cloudy sky cases were calculated for global average, zonal average, and worldwide distributed cases. The results were found to agree well with satellite observations.

  2. Modelling topical photodynamic therapy treatment including the continuous production of Protoporphyrin IX

    Science.gov (United States)

    Campbell, C. L.; Brown, C. T. A.; Wood, K.; Moseley, H.

    2016-11-01

    Most existing theoretical models of photodynamic therapy (PDT) assume a uniform initial distribution of the photosensitive molecule, Protoporphyrin IX (PpIX). This is an adequate assumption when the prodrug is systemically administered; however, for topical PDT this is no longer a valid assumption. Topical application and subsequent diffusion of the prodrug result in an inhomogeneous distribution of PpIX, especially after short incubation times, prior to light illumination. In this work a theoretical simulation of PDT where the PpIX distribution depends on the incubation time and the treatment modality is described. Three steps of the PpIX production are considered. The first is the distribution of the topically applied prodrug, the second is the conversion from the prodrug to PpIX, and the third is the light distribution, which affects the PpIX distribution through photobleaching. The light distribution is modelled using a Monte Carlo radiation transfer model and indicates treatment depths of around 2 mm during daylight PDT and approximately 3 mm during conventional PDT. The results suggest that treatment depths are not only limited by the light penetration but also by the PpIX distribution.
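
    For orientation, the quoted treatment depths are of the order set by the effective light penetration depth in tissue; a hedged sketch using the standard diffusion-approximation expressions (with generic absorption and reduced scattering coefficients, not values from the study) is:

```latex
% Fluence rate decay with depth z and effective penetration depth delta
% (diffusion approximation; mu_a, mu_s' are generic tissue coefficients).
\phi(z) \approx \phi_{0}\, e^{-\mu_{\mathrm{eff}} z}, \qquad
\mu_{\mathrm{eff}} = \sqrt{3\,\mu_{a}\left(\mu_{a} + \mu_{s}'\right)}, \qquad
\delta = 1/\mu_{\mathrm{eff}}
```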

  3. Situational effects of the school factors included in the dynamic model of educational effectiveness

    Directory of Open Access Journals (Sweden)

    Bert Creemers

    2009-08-01

    Full Text Available We present results of a longitudinal study in which 50 schools, 113 classes and 2,542 Cypriot primary students participated. We tested the validity of the dynamic model of educational effectiveness and especially its assumption that the impact of school factors depends on the current situation of the school and on the type of problems/difficulties the school is facing. Reference is made to the methods used to test this assumption of the dynamic model by measuring school effectiveness in mathematics, Greek language, and religious education over two consecutive school years. The main findings are as follows. School factors were found to have situational effects. Specifically, the development of a school policy for teaching and the school evaluation of policy for teaching were found to have stronger effects in schools where the quality of teaching at classroom level was low. Moreover, time stability in the effectiveness status of schools was identified and thereby changes in the functioning of schools were found not to have a significant impact on changes in the effectiveness status of schools. Implications of the findings for the development of the dynamic model and suggestions for further research are presented.

  4. Comparison of lead isotopes with source apportionment models, including SOM, for air particulates

    International Nuclear Information System (INIS)

    Gulson, Brian; Korsch, Michael; Dickson, Bruce; Cohen, David; Mizon, Karen; Michael Davis, J.

    2007-01-01

    We have measured high precision lead isotopes in PM2.5 particulates from a highly-trafficked site (Mascot) and rural site (Richmond) in the Sydney Basin, New South Wales, Australia to compare with isotopic data from total suspended particulates (TSP) from other sites in the Sydney Basin and evaluate relationships with source fingerprints obtained from multi-element PM2.5 data. The isotopic data for the period 1998 to 2004 show seasonal peaks and troughs that are more pronounced in the rural site for the PM2.5 samples but are consistent with the TSP. The Self Organising Map (SOM) method has been applied to the multi-element PM2.5 data to evaluate its use in obtaining fingerprints for comparison with standard statistical procedures (ANSTO model). As seasonal effects are also significant for the multi-element data, the SOM modelling is reported as site and season dependent. At the Mascot site, the ANSTO model exhibits decreasing 206Pb/204Pb ratios with increasing contributions of fingerprints for 'secondary smoke' (industry), 'soil', 'smoke' and 'seaspray'. Similar patterns were shown by SOM winter fingerprints for both sites. At the rural site, there are large isotopic variations but for the majority of samples these are not associated with increased contributions from the main sources with the ANSTO model. For two winter sampling times, there are increased contributions from 'secondary industry', 'smoke', 'soil' and seaspray with one time having a source or sources of Pb similar to that of Mascot. The only positive relationship between increasing 206Pb/204Pb ratio and source contributions is found at the rural site using the SOM summer fingerprints, both of which show a significant contribution from sulphur. Several of the fingerprints using either model have significant contributions from black carbon (BC) and/or sulphur (S) that probably derive from diesel fuels and industrial sources. Increased contributions from sources with the SOM summer

  5. Changes of microbial substrate metabolic patterns through a wastewater reuse process, including WWTP and SAT concerning depth.

    Science.gov (United States)

    Takabe, Yugo; Kameda, Ippei; Suzuki, Ryosuke; Nishimura, Fumitake; Itoh, Sadahiko

    2014-09-01

    In this study, changes of microbial substrate metabolic patterns by BIOLOG assay were discussed through a sequential wastewater reuse process, which includes activated sludge and treated effluent in wastewater treatment plant and soil aquifer treatment (SAT), especially focussing on the surface sand layer in conjunction with the vadose zone, concerning sand depth. A SAT pilot-scale reactor, in which the height of packed sand was 237 cm (vadose zone: 17 cm and saturated zone 220 cm), was operated and fed continuously by discharged anaerobic-anoxic-oxic (A2O) treated water. Continuous water quality measurements over a period of 10 months indicated that the treatment performance of the reactor, such as 83.2% dissolved organic carbon removal, appeared to be stable. Core sampling was conducted for the surface sand to a 30 cm depth, and the sample was divided into six 5 cm sections. Microbial activities, as evaluated by fluorescein diacetate, sharply decreased with increasing distance from the surface of the 30 cm core sample, which included significant decreases only 5 cm from the top surface. A similar microbial metabolic pattern containing a high degree of carbohydrates was obtained among the activated sludge, A2O treated water (influent to the SAT reactor) and the 0-5 cm layer of sand. Meanwhile, the 10-30 cm sand core layers showed dramatically different metabolic patterns containing a high degree of carboxylic acid and esters, and it is possible that the metabolic pattern exhibited by the 5-10 cm layer is at a midpoint of the changing pattern. This suggests that the removal of different organic compounds by biodegradation would be expected to occur in the activated sludge and in the SAT sand layers immediately below 5 cm from the top surface. It is possible that changes in the composition of the organic matter and/or transit of the limiting factor for microbial activities from carbon to phosphorus might have contributed to the observed dramatic changes

  6. A review of cell-scale multiphase flow modeling, including water management, in polymer electrolyte fuel cells

    International Nuclear Information System (INIS)

    Andersson, M.; Beale, S.B.; Espinoza, M.; Wu, Z.; Lehnert, W.

    2016-01-01

    on the transport processes inside the porous GDL are extensively discussed. The selection of a computational approach, for the two-phase flow within a GDL or GC, for example, should be based on the computational resources available, concerns about time and scale (microscale, cell scale, stack scale or system scale), as well as accuracy requirements. One important feature, included in some computational approaches, is the possibility to track the front between the liquid and the gas phases. To build a PEFC model, one must make a large number of assumptions. Some assumptions have a negligible effect on the results and reliability of the model. However, other assumptions may significantly affect the result. It is strongly recommended in any modeling paper to clearly state the assumptions being implemented, for others to be able to judge the work. It is important to note that a large fraction of the expressions that presently are used to describe the transport processes inside PEFC GDLs were originally developed to describe significantly different systems, such as sand or rocks. Moreover, the flow pattern maps and pressure drop correlations of two phase flow in micro channels may not be applicable for GCs due to one side wall being porous, with the resulting interaction between the GDL and GC.

  7. Theoretical modelling of carbon deposition processes

    International Nuclear Information System (INIS)

    Marsh, G.R.; Norfolk, D.J.; Skinner, R.F.

    1985-01-01

    Work based on capsule experiments in the BNL Gamma Facility, aimed at elucidating the chemistry involved in the formation of carbonaceous deposit on CAGR fuel pin surfaces is described. Using a data-base derived from capsule experiments together with literature values for the kinetics of the fundamental reactions, a chemical model of the gas-phase processes has been developed. This model successfully reproduces the capsule results, whilst preliminary application to the WAGR coolant circuit indicates the likely concentration profiles of various radical species within the fuel channels. (author)

  8. Theoretical Modelling of Intercultural Communication Process

    Directory of Open Access Journals (Sweden)

    Mariia Soter

    2016-08-01

    Full Text Available The definitions of the concepts of “communication”, “intercultural communication”, and “model of communication” are analyzed in the article. The basic components of the communication process are singled out. A model of intercultural communication is developed. Communicative, behavioral and complex skills for optimal organization of intercultural communication, establishment of productive contact with a foreign partner to achieve mutual understanding, and searching for acceptable ways of organizing interaction and cooperation for both communicants are highlighted in the article. It is noted that intercultural communication, through interaction between people, affects the development of different aspects of cultures.

  9. Including the effects of elastic compressibility and volume changes in geodynamical modeling of crust-lithosphere-mantle deformation

    Science.gov (United States)

    de Monserrat, Albert; Morgan, Jason P.

    2016-04-01

    Materials in Earth's interior are exposed to thermomechanical (e.g. variations in stress/pressure and temperature) and chemical (e.g. phase changes, serpentinization, melting) processes that are associated with volume changes. Most geodynamical codes assume the incompressible Boussinesq approximation, where changes in density due to temperature or phase change affect buoyancy, yet volumetric changes are not allowed, and mass is not locally conserved. Elastic stresses induced by volume changes due to thermal expansion, serpentinization, and melt intrusion should cause 'cold' rocks to fail in a brittle manner at ~1% strain. When failure/yielding is an important rheological feature, we think it plausible that volume-change-linked stresses may have a significant influence on the localization of deformation. Here we discuss a new Lagrangian formulation for "elasto-compressible-visco-plastic" flow. In this formulation, the continuity equation has been generalised from a Boussinesq incompressible formulation to include recoverable, elastic, volumetric deformations linked to the local state of mean compressive stress. This formulation differs from the 'anelastic approximation' used in compressible viscous flow in that pressure- and temperature-dependent volume changes are treated as elastic deformation for a given pressure, temperature, and composition/phase. This leads to a visco-elasto-plastic formulation that can model the effects of thermal stresses, pressure-dependent volume changes, and local phase changes. We use a modified version of the (Miliman-based) FEM code M2TRI to run a set of numerical experiments for benchmarking purposes. Three benchmarks are being used to assess the accuracy of this formulation: (1) model the effects on density of a compressible mantle under the influence of gravity; (2) model the deflection of a visco-elastic beam under the influence of gravity, and its recovery when gravitational loading is artificially removed; (3) model the stresses
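
    One common way of writing the elastically compressible continuity equation that replaces the Boussinesq constraint ∇·v = 0 is sketched below; K is the bulk modulus, α the thermal expansivity, and the sign conventions and any additional source terms may differ from the authors' exact formulation.

```latex
% Generalized continuity with elastic (pressure) and thermal volume changes;
% Boussinesq incompressibility is recovered for K -> infinity, alpha -> 0.
\nabla \cdot \mathbf{v} \;=\; -\frac{1}{K}\,\frac{D p}{D t} \;+\; \alpha\,\frac{D T}{D t}
```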

  10. Environmental Enteric Dysfunction Includes a Broad Spectrum of Inflammatory Responses and Epithelial Repair Processes

    Directory of Open Access Journals (Sweden)

    Jinsheng Yu

    2016-03-01

    Full Text Available Background & Aims: Environmental enteric dysfunction (EED, a chronic diffuse inflammation of the small intestine, is associated with stunting in children in the developing world. The pathobiology of EED is poorly understood because of the lack of a method to elucidate the host response. This study tested a novel microarray method to overcome limitation of RNA sequencing to interrogate the host transcriptome in feces in Malawian children with EED. Methods: In 259 children, EED was measured by lactulose permeability (%L. After isolating low copy numbers of host messenger RNA, the transcriptome was reliably and reproducibly profiled, validated by polymerase chain reaction. Messenger RNA copy number then was correlated with %L and differential expression in EED. The transcripts identified were mapped to biological pathways and processes. The children studied had a range of %L values, consistent with a spectrum of EED from none to severe. Results: We identified 12 transcripts associated with the severity of EED, including chemokines that stimulate T-cell proliferation, Fc fragments of multiple immunoglobulin families, interferon-induced proteins, activators of neutrophils and B cells, and mediators that dampen cellular responses to hormones. EED-associated transcripts mapped to pathways related to cell adhesion, and responses to a broad spectrum of viral, bacterial, and parasitic microbes. Several mucins, regulatory factors, and protein kinases associated with the maintenance of the mucous layer were expressed less in children with EED than in normal children. Conclusions: EED represents the activation of diverse elements of the immune system and is associated with widespread intestinal barrier disruption. Differentially expressed transcripts, appropriately enumerated, should be explored as potential biomarkers. Keywords: Environmental Enteropathy, Fecal Transcriptome, Stunting, Intestinal Inflammation

  11. Empirical process modeling in fast breeder reactors

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Endou, A.

    1998-01-01

    A non-linear multi-input/single-output (MISO) empirical model is introduced for monitoring vital system parameters in a nuclear reactor environment. The proposed methodology employs a scheme of non-parametric smoothing that models the local dynamics of each fitting point individually, as opposed to global modeling techniques--such as multi-layer perceptrons (MLPs)--that attempt to capture the dynamics of the entire design space. The motivation for employing local models in monitoring arises from the desire to capture localized idiosyncrasies of the dynamic system using independent estimators. This approach alleviates the effect of negative interference between old and new observations, enhancing the model's prediction capabilities. Modeling the behavior of any given system comes down to a trade-off between variance and bias. The building blocks of the proposed approach are tailored to each data set through two separate, adaptive procedures in order to optimize the bias-variance reconciliation. Hetero-associative schemes of the technique presented exhibit insensitivity to sensor noise and provide the operator with accurate predictions of the actual process signals. A comparison between the local model and MLP prediction capabilities is performed, and the results favor the local method. The data used to demonstrate the potential of local regression have been obtained during two startup periods of the Monju fast breeder reactor (FBR)
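
    A minimal sketch of the local-modelling idea (a kernel-weighted local linear regression fitted independently at each query point, rather than one global fit); the bandwidth and synthetic data are illustrative assumptions and this is not the authors' code.

```python
# Gaussian-kernel local linear regression: each query point gets its own
# weighted least-squares fit from nearby observations only.
import numpy as np

def local_linear(x_train, y_train, x_query, bandwidth=0.3):
    preds = []
    for xq in x_query:
        w = np.exp(-0.5 * ((x_train - xq) / bandwidth) ** 2)     # locality weights
        X = np.column_stack([np.ones_like(x_train), x_train - xq])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)   # weighted least squares
        preds.append(beta[0])                                    # local intercept = fit at xq
    return np.array(preds)

x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.default_rng(2).standard_normal(200)
print(local_linear(x, y, np.array([2.0, 5.0, 8.0])))
```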

  12. Modeling of processes in the tourism sector

    Directory of Open Access Journals (Sweden)

    Salamatina Victoriya, S.

    2015-06-01

    Full Text Available In modern conditions, tourism is becoming a budget-forming sector for a number of Russian regions. In this regard, it is of interest to simulate the processes occurring in the tourism business, because they are affected by many random parameters arising from economic, political, geographic, and other factors. To improve and develop tourism-business management systems, economic-mathematical methods are systematically introduced in this area, because increased competitiveness requires continuous and constructive change. The results of applying this economic-mathematical apparatus allow a more systematic and internally unified analysis and evaluation of further processes in tourism. A feature of some economic processes typical of tourist activities is that the effect of a factor on the process indicators appears not immediately but gradually, after a certain time, with a certain lag. This delay must be accounted for when developing mathematical models of tourism business processes. In this case, it is advisable to apply to the simulation of such processes the economic-mathematical formalism of optimal control, called game theory.

  13. Markov State Model of Ion Assembling Process.

    Science.gov (United States)

    Shevchuk, Roman

    2016-05-12

    We study the process of ion assembly in aqueous solution by means of molecular dynamics. In this article, we present a method to study many-particle assembly using the Markov state model formalism. We observed that at NaCl concentrations higher than 1.49 mol/kg, the system tends to form a big ionic cluster composed of roughly 70-90% of the total number of ions. Using Markov state models, we estimated the average time needed for the system to make a transition from a disordered state to a state with a big ionic cluster. Our results suggest that the characteristic time to form an ionic cluster is a negative exponential function of the salt concentration. Moreover, we defined and analyzed three different kinetic states of a single ion particle. These states correspond to different particle locations during the nucleation process.
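
    A minimal sketch of the Markov-state-model bookkeeping involved (counting transitions in a discretized trajectory at a chosen lag time and reading off an implied timescale); the state labels and trajectory here are synthetic, not the NaCl simulation data.

```python
# Estimate a Markov state model transition matrix from a discrete
# trajectory and compute the slowest implied relaxation timescale.
import numpy as np

traj = np.array([0, 0, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2, 0, 1, 2, 2])  # state label per frame
n_states, lag = 3, 1

C = np.zeros((n_states, n_states))
for i, j in zip(traj[:-lag], traj[lag:]):
    C[i, j] += 1                           # transition counts at the chosen lag
T = C / C.sum(axis=1, keepdims=True)       # row-normalized transition matrix

eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
timescale = -lag / np.log(eigvals[1])      # slowest implied timescale (in frames)
print(T, timescale)
```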

  14. Coupled modeling of land hydrology–regional climate including human carbon emission and water exploitation

    Directory of Open Access Journals (Sweden)

    Zheng-Hui Xie

    2017-06-01

    Full Text Available Carbon emissions and water use are two major kinds of human activities. To reveal whether these two activities can modify the hydrological cycle and climate system in China, we conducted two sets of numerical experiments using the regional climate model RegCM4. In the first experiment, used to study the climatic responses to human carbon emissions, the model was configured over all of China because the impacts of carbon emissions can be detected across the whole country. Results from the first experiment revealed that near-surface air temperature may significantly increase from 2007 to 2059 at a rate exceeding 0.1 °C per decade in most areas across the country; southwestern and southeastern China also showed increasing trends in summer precipitation, with rates exceeding 10 mm per decade over the same period. In summer, only northern China showed an increasing trend of evapotranspiration, with increase rates ranging from 1 to 5 mm per decade; in winter, increase rates ranging from 1 to 5 mm per decade were observed in most regions. These effects are believed to be caused by global warming from human carbon emissions. In the second experiment, used to study the effects of human water use, the model was configured over a limited region—the Haihe River Basin in northern China—because, compared with human carbon emissions, the effects of human water use are much more local and regional, and the Haihe River Basin is the most typical region in China that suffers from both intensive human groundwater exploitation and surface water diversion. We incorporated a scheme of human water regulation into RegCM4 and conducted the second experiment. Model outputs showed that the groundwater table severely declined by ∼10 m in 1971–2000 through human groundwater over-exploitation in the basin; in fact, current conditions are so extreme that even reducing the pumping rate by half cannot eliminate the groundwater depletion cones observed in the area

  15. Able but unintelligent: including positively stereotyped black subgroups in the stereotype content model.

    Science.gov (United States)

    Walzer, Amy S; Czopp, Alexander M

    2011-01-01

    The stereotype content model (SCM) posits that warmth and competence are the key components underlying judgments about social groups. Because competence can encompass different components (e.g., intelligence, talent) different group members may be perceived to be competent for different reasons. Therefore, we believe it may be important to specify the type of competence being assessed when examining perceptions of groups that are positively stereotyped (i.e., Black athletes and musical Blacks). Consistent with the SCM, these subgroups were perceived as high in competence-talent but not in competence-intelligence and low in warmth. Both the intelligence and talent frame of competence fit in the SCM's social structural hypothesis.

  16. Environmental Modeling Framework using Stacked Gaussian Processes

    OpenAIRE

    Abdelfatah, Kareem; Bao, Junshu; Terejanu, Gabriel

    2016-01-01

    A network of independently trained Gaussian processes (StackedGP) is introduced to obtain predictions of quantities of interest with quantified uncertainties. The main applications of the StackedGP framework are to integrate different datasets through model composition, enhance predictions of quantities of interest through a cascade of intermediate predictions, and to propagate uncertainties through emulated dynamical systems driven by uncertain forcing variables. By using analytical first an...
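
    A minimal sketch of the stacking idea using scikit-learn Gaussian processes (not the authors' StackedGP code): one GP maps the input to an intermediate quantity, a second GP maps that quantity to the output, and the first GP's predictive uncertainty is propagated by Monte Carlo sampling; the data, kernels, and noise levels are illustrative assumptions.

```python
# Two stacked GPs with Monte Carlo propagation of the intermediate uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 40)[:, None]
z = np.sin(x).ravel() + 0.05 * rng.standard_normal(40)   # intermediate quantity
y = z ** 2 + 0.05 * rng.standard_normal(40)              # final quantity of interest

gp1 = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(x, z)            # x -> z
gp2 = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(z[:, None], y)   # z -> y

x_new = np.array([[2.5]])
mu1, sd1 = gp1.predict(x_new, return_std=True)
z_samples = rng.normal(mu1[0], sd1[0], size=(500, 1))     # sample the intermediate GP
mu2, sd2 = gp2.predict(z_samples, return_std=True)
y_samples = rng.normal(mu2, sd2)                          # push samples through GP2
print(y_samples.mean(), y_samples.std())                  # propagated mean and uncertainty
```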

  17. Deep inelastic processes and the parton model

    International Nuclear Information System (INIS)

    Altarelli, G.

    The lecture was intended as an elementary introduction to the physics of deep inelastic phenomena from the point of view of theory. General formulae and facts concerning inclusive deep inelastic processes in the form: l+N→l'+hadrons (electroproduction, neutrino scattering) are first recalled. The deep inelastic annihilation e+e−→hadrons is then envisaged. The light cone approach, the parton model and their relation are mainly emphasized
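
    For reference, the standard kinematic variables and the leading-order parton-model relation for the structure function that such lectures build on can be sketched as follows (q is the exchanged boson momentum, p the nucleon momentum, e_q and f_q the quark charges and densities):

```latex
% Deep inelastic kinematics and the leading-order parton-model relation.
Q^{2} = -q^{2}, \qquad x = \frac{Q^{2}}{2\, p \cdot q}, \qquad
F_{2}(x) = \sum_{q} e_{q}^{2}\, x\, f_{q}(x)
```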

  18. Observational Data-Driven Modeling and Optimization of Manufacturing Processes

    OpenAIRE

    Sadati, Najibesadat; Chinnam, Ratna Babu; Nezhad, Milad Zafar

    2017-01-01

    The dramatic increase of observational data across industries provides unparalleled opportunities for data-driven decision making and management, including the manufacturing industry. In the context of production, data-driven approaches can exploit observational data to model, control and improve the process performance. When supplied by observational data with adequate coverage to inform the true process performance dynamics, they can overcome the cost associated with intrusive controlled de...

  19. Description and Application of A Model of Seepage under A Weir Including Mechanical Clogging

    Directory of Open Access Journals (Sweden)

    Sroka Zbigniew

    2014-07-01

    Full Text Available The paper discusses seepage flow under a damming structure (a weir) in view of mechanical clogging in a thin layer at the upstream site. It was assumed that in this layer flow may be treated as one-dimensional (perpendicular to the layer), while elsewhere flow was modelled as two-dimensional. The solution in both zones was obtained in discrete form using the finite element method and the Euler method. The effect of the clogging layer on seepage flow was modelled using the third-kind boundary condition. Seepage parameters in the clogging layer were estimated based on laboratory tests conducted by Skolasińska [2006]. A typical problem was simulated to indicate how clogging affects the seepage rate and other parameters of the flow. Results showed that clogging at the upstream site has a significant effect on the distribution of seepage velocity and hydraulic gradients. The flow underneath the structure decreases with time, but these changes are relatively slow.
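
    The third-kind boundary condition used for the clogging layer can be sketched, under the usual thin-layer (leakage) assumption, as a Robin condition relating the normal seepage flux to the head drop across the layer; k_c, d_c, and H denote an assumed layer conductivity, layer thickness, and upstream reservoir head, not necessarily the paper's notation.

```latex
% Robin (third-kind) condition on the upstream face: flux through the thin
% clogging layer is proportional to the head difference across it.
-K\,\frac{\partial h}{\partial n} \;=\; \frac{k_{c}}{d_{c}}\,\bigl(h - H\bigr)
```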

  20. Modeling radiation dosimetry to predict cognitive outcomes in pediatric patients with CNS embryonal tumors including medulloblastoma

    International Nuclear Information System (INIS)

    Merchant, Thomas E.; Kiehna, Erin N.; Li Chenghong; Shukla, Hemant; Sengupta, Saikat; Xiong Xiaoping; Gajjar, Amar; Mulhern, Raymond K.

    2006-01-01

    Purpose: Model the effects of radiation dosimetry on IQ among pediatric patients with central nervous system (CNS) tumors. Methods and Materials: Pediatric patients with CNS embryonal tumors (n = 39) were prospectively evaluated with serial cognitive testing, before and after treatment with postoperative, risk-adapted craniospinal irradiation (CSI) and conformal primary-site irradiation, followed by chemotherapy. Differential dose-volume data for 5 brain volumes (total brain, supratentorial brain, infratentorial brain, and left and right temporal lobes) were correlated with IQ after surgery and at follow-up by use of linear regression. Results: When the dose distribution was partitioned into 2 levels, both had a significantly negative effect on longitudinal IQ across all 5 brain volumes. When the dose distribution was partitioned into 3 levels (low, medium, and high), exposure to the supratentorial brain appeared to have the most significant impact. For most models, each Gy of exposure had a similar effect on IQ decline, regardless of dose level. Conclusions: Our results suggest that radiation dosimetry data from 5 brain volumes can be used to predict decline in longitudinal IQ. Despite measures to reduce radiation dose and treatment volume, the volume that receives the highest dose continues to have the greatest effect, which supports current volume-reduction efforts

  1. On the modelling of semi-insulating GaAs including surface tension and bulk stresses

    Energy Technology Data Exchange (ETDEWEB)

    Dreyer, W.; Duderstadt, F.

    2004-07-01

    Necessary heat treatment of single crystal semi-insulating Gallium Arsenide (GaAs), which is deployed in micro- and opto-electronic devices, generates undesirable liquid precipitates in the solid phase. The appearance of precipitates is influenced by surface tension at the liquid/solid interface and deviatoric stresses in the solid. The central quantity for the description of the various aspects of phase transitions is the chemical potential, which can be additively decomposed into a chemical and a mechanical part. In particular the calculation of the mechanical part of the chemical potential is of crucial importance. We determine the chemical potential in the framework of the St. Venant-Kirchhoff law, which gives an appropriate stress/strain relation for many solids in the small strain regime. We establish criteria which allow the correct replacement of the St. Venant-Kirchhoff law by the simpler Hooke law. The main objectives of this study are: (i) We develop a thermo-mechanical model that describes diffusion and interface motion, which both are strongly influenced by surface tension effects and deviatoric stresses. (ii) We give an overview and outlook on problems that can be posed and solved within the framework of the model. (iii) We calculate non-standard phase diagrams, i.e. those that take into account surface tension and non-deviatoric stresses, for GaAs above 786 °C, and we compare the results with classical phase diagrams without these phenomena. (orig.)

  2. Effect of neurosteroids on a model lipid bilayer including cholesterol: An Atomic Force Microscopy study.

    Science.gov (United States)

    Sacchi, Mattia; Balleza, Daniel; Vena, Giulia; Puia, Giulia; Facci, Paolo; Alessandrini, Andrea

    2015-05-01

    Amphiphilic molecules, which have a biological effect on specific membrane proteins, could also affect lipid bilayer properties, possibly resulting in a modulation of the overall membrane behavior. In light of this consideration, it is important to study the possible effects of amphiphilic molecules of pharmacological interest on model systems which recapitulate some of the main properties of biological plasma membranes. In this work we studied the effect of a neurosteroid, Allopregnanolone (3α,5α-tetrahydroprogesterone or Allo), on a model bilayer composed of the ternary lipid mixture DOPC/bSM/chol. We chose ternary mixtures which present, at room temperature, a phase coexistence of liquid ordered (Lo) and liquid disordered (Ld) domains and which reside near a critical point. We found that Allo, which is able to strongly partition into the lipid bilayer, induces a marked increase in the bilayer area and modifies the relative proportion of the two phases, favoring the Ld phase. We also found that the neurosteroid shifts the miscibility temperature to higher values, similarly to what happens when the cholesterol concentration is decreased. Interestingly, an isoform of Allo, isoAllopregnanolone (3β,5α-tetrahydroprogesterone or isoAllo), known to inhibit the effects of Allo on GABAA receptors, has an opposite effect on the bilayer properties. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Near Field Environment Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R.A. Wagner

    2000-11-14

    Waste emplacement and activities associated with construction of a repository system potentially will change environmental conditions within the repository system. These environmental changes principally result from heat generated by the decay of the radioactive waste, which elevates temperatures within the repository system. Elevated temperatures affect distribution of water, increase kinetic rates of geochemical processes, and cause stresses to change in magnitude and orientation from the stresses resulting from the overlying rock and from underground construction activities. The recognition of this evolving environment has been reflected in activities, studies and discussions generally associated with what has been termed the Near-Field Environment (NFE). The NFE interacts directly with waste packages and engineered barriers as well as potentially changing the fluid composition and flow conditions within the mountain. As such, the NFE defines the environment for assessing the performance of a potential Monitored Geologic Repository at Yucca Mountain, Nevada. The NFE evolves over time, and therefore is not amenable to direct characterization or measurement in the ambient system. Analysis or assessment of the NFE must rely upon projections based on tests and models that encompass the long-term processes of the evolution of this environment. This NFE Process Model Report (PMR) describes the analyses and modeling based on current understanding of the evolution of the near-field within the rock mass extending outward from the drift wall.

  4. MODELLING OF POSTSEISMIC PROCESSES IN SUBDUCTION ZONES

    Directory of Open Access Journals (Sweden)

    Irina S. Vladimirova

    2012-01-01

    Full Text Available Large intraplate subduction earthquakes are generally accompanied by prolonged and intense postseismic anomalies. In the present work, viscoelastic relaxation in the upper mantle and the asthenosphere is considered as a main mechanism responsible for the occurrence of such postseismic effects. The study of transient processes is performed on the basis of data on postseismic processes accompanying the first Simushir earthquake on 15 November 2006 and the Maule earthquake on 27 February 2010. The methodology of modelling a viscoelastic relaxation process after a large intraplate subduction earthquake is presented. A priori parameters of the selected model describing observed postseismic effects are adjusted by minimizing deviations between modeled surface displacements and actual surface displacements recorded by geodetic methods, through solving the corresponding inverse problems. The presented methodology yielded estimations of Maxwell's viscosity of the asthenosphere of the central Kuril Arc and also of central Chile. Besides, postseismic slip distribution patterns were obtained for the focus of the Simushir earthquake of 15 November 2006 (Mw=8.3) (Figure 3), and distribution patterns of seismic and postseismic slip were determined for the focus of the Maule earthquake of 27 February 2010 (Mw=8.8) (Figure 6). These estimations and patterns can provide for prediction of the intensity of viscoelastic stress attenuation in the asthenosphere; anomalous values should be taken into account as adjustment factors when analyzing inter-seismic deformation in order to ensure correct estimation of the accumulated elastic seismogenic potential.
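
    For orientation, the characteristic decay time of such viscoelastic postseismic transients is set by the Maxwell relaxation time, sketched here with η the asthenospheric viscosity and μ the shear modulus (generic notation, not the authors').

```latex
% Maxwell relaxation time controlling the decay of postseismic deformation.
\tau_{M} = \frac{\eta}{\mu}
```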

  5. Coupled Modeling of Rhizosphere and Reactive Transport Processes

    Science.gov (United States)

    Roque-Malo, S.; Kumar, P.

    2017-12-01

    The rhizosphere, as a bio-diverse plant root-soil interface, hosts many hydrologic and biochemical processes, including nutrient cycling, hydraulic redistribution, and soil carbon dynamics among others. The biogeochemical function of root networks, including the facilitation of nutrient cycling through absorption and rhizodeposition, interaction with micro-organisms and fungi, contribution to biomass, etc., plays an important role in myriad Critical Zone processes. Despite this knowledge, the role of the rhizosphere on watershed-scale ecohydrologic functions in the Critical Zone has not been fully characterized, and specifically, the extensive capabilities of reactive transport models (RTMs) have not been applied to these hydrobiogeochemical dynamics. This study uniquely links rhizospheric processes with reactive transport modeling to couple soil biogeochemistry, biological processes, hydrologic flow, hydraulic redistribution, and vegetation dynamics. Key factors in the novel modeling approach are: (i) bi-directional effects of root-soil interaction, such as simultaneous root exudation and nutrient absorption; (ii) multi-state biomass fractions in soil (i.e. living, dormant, and dead biological and root materials); (iii) expression of three-dimensional fluxes to represent both vertical and lateral interconnected flows and processes; and (iv) the potential to include the influence of non-stationary external forcing and climatic factors. We anticipate that the resulting model will demonstrate the extensive effects of plant root dynamics on ecohydrologic functions at the watershed scale and will ultimately contribute to a better characterization of efflux from both agricultural and natural systems.

  6. Validating Computational Cognitive Process Models across Multiple Timescales

    Science.gov (United States)

    Myers, Christopher; Gluck, Kevin; Gunzelmann, Glenn; Krusmark, Michael

    2010-12-01

    Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.

  7. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP), which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information in the MMPP model framework on its statistical properties. In particular, we look at three types of time-varying covariates, namely temperature, sea level pressure, and relative humidity, that are thought to affect the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
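
    To make the likelihood machinery concrete, the sketch below evaluates the standard matrix-exponential likelihood of a two-state MMPP for a handful of inter-arrival times. All numerical values are invented, and the covariate dependence studied in the paper (rates driven by temperature, sea level pressure and humidity) is only indicated in the comments, not implemented.

        # Hedged sketch: log-likelihood of a two-state Markov modulated Poisson process
        # (MMPP) for observed inter-arrival times, using the standard matrix-exponential
        # form L = pi' [prod_i expm((Q - Lambda) tau_i) Lambda] 1. Covariate effects, as
        # in the paper, would enter by letting the arrival rates depend on temperature,
        # pressure and humidity; every value below is illustrative only.
        import numpy as np
        from scipy.linalg import expm

        def mmpp_loglik(tau, Q, lam, pi0):
            """tau: inter-arrival times; Q: generator matrix; lam: state arrival rates."""
            Lam = np.diag(lam)
            A = Q - Lam
            like_vec = pi0.copy()            # row vector carried through the matrix product
            loglik = 0.0
            for t in tau:
                like_vec = like_vec @ expm(A * t) @ Lam
                scale = like_vec.sum()       # rescale to avoid numerical underflow
                loglik += np.log(scale)
                like_vec /= scale
            return loglik

        Q = np.array([[-0.2, 0.2],
                      [0.5, -0.5]])          # switching between a "dry-ish" and a "wet" state (per hour)
        lam = np.array([0.1, 3.0])           # bucket-tip rates in each state (per hour)
        pi0 = np.array([0.7, 0.3])           # assumed initial state distribution
        tau = np.array([0.3, 0.1, 0.05, 2.0, 4.5, 0.2])   # hours between tips (made up)
        print(mmpp_loglik(tau, Q, lam, pi0))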

  8. Product Trial Processing (PTP): a model approach from ...

    African Journals Online (AJOL)

    Product trial is described as a consumer's first usage experience with a company's brand or product, and it is most important in determining brand attributes and the intention to make a purchase. Among the constructs used in the model of consumers' processing of product trial are experiential and non-experiential ...

  9. Modeling process-structure-property relationships for additive manufacturing

    Science.gov (United States)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

    This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale multi-physics process modeling starts from the micro-scale to establish a mechanistic heat source model, to the meso-scale models of individual powder particle evolution, and finally to the macro-scale model to simulate the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for novel rapid design and optimization, which is motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  10. MODEL OF QUALITY MANAGEMENT OF TECHNOLOGICAL PROCESSES OF THE GRAIN PROCESSING AND MILL ENTERPRISES

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaia

    2014-01-01

    Full Text Available Summary. This paper presents a model for quality management of the technological processes of grain processing and milling enterprises. Flour-grinding production is an important part of the agro-industrial complex because it provides the main food product of the population, flour. Analytical indicators of the quality of the technological process are presented. A matrix of expert estimates of the i-th quality level for given combinations of parameter values, arranged according to the scheme of a complete factorial experiment, is constructed. A model for calculating the preparation of raw material for milling, which characterizes the main quality attributes of the processed raw material, is considered. For the purpose of managing the quality of the technological processes of a flour mill, a mathematical model is developed that includes the calculation of two groups of assessment indicators: the quality of raw material preparation for grinding and the quality of conducting the technological process. An algorithm for the analytical assessment of the quality indicators of the technological process of flour-grinding enterprises is proposed, covering the selection of waste, the selection of bran, the compliance rate of the output of flour-grinding products, and the compliance rate of product moisture. The quality management of the technological process of high-quality grinding is assessed using the example of several leading flour-grinding enterprises of the Central Federal District. A two-dimensional model of quality management of the technological process is constructed, based on analytical quality assessment indicators, an assessment of the quality of raw material preparation for grinding, and an optimum effective condition of the technological process. It is shown that quality management at the enterprise provides for the collection, processing and analysis of information on the condition of material streams and production at all of their stages.

  11. Including temperature in a wavefunction description of the dynamics of the quantum Rabi model

    Science.gov (United States)

    Werther, Michael; Grossmann, Frank

    2018-01-01

    We present a wavefunction methodology to account for finite temperature initial conditions in the quantum Rabi model. The approach is based on the Davydov Ansatz together with a statistical sampling of the canonical harmonic oscillator initial density matrix. Equations of motion are gained from a variational principle and numerical results are compared to those of the thermal Hamiltonian approach. For a system consisting of a single spin and a single oscillator and for moderate coupling strength, we compare our new results with full quantum ones as well as with other Davydov-type results based on alternative sampling/summation strategies. All of these perform better than the ones based on the thermal Hamiltonian approach. The best agreement is shown by a Boltzmann weighting of individual eigenstate propagations. Extending this to a bath of many oscillators will, however, be very demanding numerically. The use of any one of the investigated stochastic sampling approaches will then be favorable.
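
    The "Boltzmann weighting of individual eigenstate propagations" singled out above is the standard construction of a thermal expectation value from pure-state propagations. In the notation assumed here (eigenstates |n> of the initial thermal Hamiltonian with energies E_n and inverse temperature beta):

        \langle \hat{O}(t) \rangle_\beta
          = \frac{1}{Z} \sum_n e^{-\beta E_n}\,
            \langle \psi_n(t) \,|\, \hat{O} \,|\, \psi_n(t) \rangle ,
        \qquad
        Z = \sum_n e^{-\beta E_n}, \qquad
        | \psi_n(t) \rangle = e^{-i \hat{H} t / \hbar} \, | n \rangle .

    Each eigenstate is propagated once and the results are averaged with Boltzmann weights; the cost grows with the number of thermally occupied states, which is why the abstract notes that extending this strategy to a bath of many oscillators becomes numerically demanding.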

  12. Dynamic Analysis of Thick Plates Including Deep Beams on Elastic Foundations Using Modified Vlasov Model

    Directory of Open Access Journals (Sweden)

    Korhan Ozgan

    2013-01-01

    Full Text Available Dynamic analysis of foundation plate-beam systems with transverse shear deformation is presented using the modified Vlasov foundation model. The finite element formulation of the problem is derived by using an 8-node (PBQ8) finite element based on Mindlin plate theory for the plate and a 2-node Hughes element based on Timoshenko beam theory for the beam. A selective reduced integration technique is used to avoid the shear locking problem in the evaluation of the stiffness matrices for both elements. The effects of beam thickness, the aspect ratio of the plate, and subsoil depth on the response of the plate-beam-soil system are analyzed. Numerical examples show that the displacements, bending moments and shear forces are changed significantly by adding the beams.

  13. Robust and Adaptive OMR System Including Fuzzy Modeling, Fusion of Musical Rules, and Possible Error Detection

    Directory of Open Access Journals (Sweden)

    Bloch Isabelle

    2007-01-01

    Full Text Available This paper describes a system for optical music recognition (OMR) in the case of monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both the image analysis level and the high-level interpretation level. Thus, a recognition and segmentation method is designed that allows dealing with common printing defects and numerous symbol interconnections. Then, musical rules are modeled and integrated in order to make a consistent decision. This high-level interpretation step relies on the fuzzy sets and possibility framework, since it allows dealing with symbol variability, flexibility, and imprecision of music rules, and merging all these heterogeneous pieces of information. Other innovative features are the indication of potential errors and the possibility of applying learning procedures in order to gain in robustness. Experiments conducted on a large database show that the proposed method constitutes an interesting contribution to OMR.

  14. Energy-based fatigue model for shape memory alloys including thermomechanical coupling

    Science.gov (United States)

    Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong

    2016-03-01

    This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.
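
    As a rough illustration of how an energy-based criterion of this kind is used for life prediction, the sketch below evaluates a generic power-law life equation driven by the stabilized-cycle hysteresis energy with a loading-rate correction. Both the functional form and every coefficient are assumptions chosen only to reproduce the qualitative trend reported above (faster loading, shorter life); the paper's actual criterion is derived from the measured stored energy at crack initiation.

        # Hedged sketch: a generic energy-based fatigue life law of the form
        # N_f = C * (W_cycle)**(-m) / f(rate). The power-law form, the rate factor and
        # all coefficients are illustrative assumptions, not the criterion proposed in
        # the paper, which is built from the measured irreversible hysteresis energy.
        def fatigue_life(hysteresis_energy_mj_per_m3, strain_rate, C=5.0e4, m=1.2, rate_exp=0.3):
            """Estimate cycles to failure from stabilized-cycle hysteresis energy (MJ/m^3)."""
            rate_factor = (strain_rate / 1.0e-3) ** rate_exp   # normalized to a reference strain rate
            return C * hysteresis_energy_mj_per_m3 ** (-m) / rate_factor

        for rate in (1.0e-4, 1.0e-3, 1.0e-2):                  # faster loading -> shorter predicted life
            print(rate, round(fatigue_life(10.0, rate)))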

  15. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Explicitly considering those errors, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out, where radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with that of rain gauges and the reference data helps to investigate the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, when larger uncertainties are assumed for the RCs, they remain useful up to a certain level for areal rainfall estimation and discharge simulation.

  16. A phantom model demonstration of tomotherapy dose painting delivery, including managed respiratory motion without motion management

    Energy Technology Data Exchange (ETDEWEB)

    Kissick, Michael W; Mo Xiaohu; McCall, Keisha C; Mackie, Thomas R [Department of Medical Physics, Wisconsin Institutes for Medical Research, 111 Highland Avenue, University of Wisconsin-Madison, Madison, WI 53705 (United States); Schubert, Leah K [Radiation Oncology Department, University of Nebraska Medical Center, Omaha, NE 68198 (United States); Westerly, David C, E-mail: mwkissick@wisc.ed [Department of Radiation Oncology, University of Colorado Denver, Aurora, CO 80045 (United States)

    2010-05-21

    The aim of the study was to demonstrate a potential alternative scenario for accurate dose-painting (non-homogeneous planned dose) delivery at 1 cm beam width with helical tomotherapy (HT) in the presence of 1 cm, three-dimensional, intra-fraction respiratory motion, but without any active motion management. A model dose-painting experiment was planned and delivered to the average position (proper phase of a 4DCT scan) with three spherical PTV levels to approximate dose painting to compensate for hypothetical hypoxia in a model lung tumor. Realistic but regular motion was produced with the Washington University 4D Motion Phantom. A small spherical Virtual Water(TM) phantom was used to simulate a moving lung tumor inside of the LUNGMAN(TM) anthropomorphic chest phantom to simulate realistic heterogeneity uncertainties. A piece of 4 cm Gafchromic EBT(TM) film was inserted into the 6 cm diameter sphere. TomoTherapy, Inc., DQA(TM) software was used to verify the delivery performed on a TomoTherapy Hi-Art II(TM) device. The dose uncertainty in the purposeful absence of motion management and in the absence of large, low frequency drifts (periods greater than the beam width divided by the couch velocity) or randomness in the breathing displacement yields very favorable results. Instead of interference effects, only small blurring is observed because of the averaging of many breathing cycles and beamlets and the avoidance of interference. Dose painting during respiration with helical tomotherapy is feasible in certain situations without motion management. A simple recommendation is to make respiration as regular as possible without low frequency drifting. The blurring is just small enough to suggest that it may be acceptable to deliver without motion management if the motion is equal to the beam width or smaller (at respiration frequencies) when registered to the average position.

  17. A phantom model demonstration of tomotherapy dose painting delivery, including managed respiratory motion without motion management

    International Nuclear Information System (INIS)

    Kissick, Michael W; Mo Xiaohu; McCall, Keisha C; Mackie, Thomas R; Schubert, Leah K; Westerly, David C

    2010-01-01

    The aim of the study was to demonstrate a potential alternative scenario for accurate dose-painting (non-homogeneous planned dose) delivery at 1 cm beam width with helical tomotherapy (HT) in the presence of 1 cm, three-dimensional, intra-fraction respiratory motion, but without any active motion management. A model dose-painting experiment was planned and delivered to the average position (proper phase of a 4DCT scan) with three spherical PTV levels to approximate dose painting to compensate for hypothetical hypoxia in a model lung tumor. Realistic but regular motion was produced with the Washington University 4D Motion Phantom. A small spherical Virtual Water(TM) phantom was used to simulate a moving lung tumor inside of the LUNGMAN(TM) anthropomorphic chest phantom to simulate realistic heterogeneity uncertainties. A piece of 4 cm Gafchromic EBT(TM) film was inserted into the 6 cm diameter sphere. TomoTherapy, Inc., DQA(TM) software was used to verify the delivery performed on a TomoTherapy Hi-Art II(TM) device. The dose uncertainty in the purposeful absence of motion management and in the absence of large, low frequency drifts (periods greater than the beam width divided by the couch velocity) or randomness in the breathing displacement yields very favorable results. Instead of interference effects, only small blurring is observed because of the averaging of many breathing cycles and beamlets and the avoidance of interference. Dose painting during respiration with helical tomotherapy is feasible in certain situations without motion management. A simple recommendation is to make respiration as regular as possible without low frequency drifting. The blurring is just small enough to suggest that it may be acceptable to deliver without motion management if the motion is equal to the beam width or smaller (at respiration frequencies) when registered to the average position.

  18. Multimodal Similarity Gaussian Process Latent Variable Model.

    Science.gov (United States)

    Song, Guoli; Wang, Shuhui; Huang, Qingming; Tian, Qi

    2017-09-01

    Data from real applications involve multiple modalities representing content with the same semantics from complementary aspects. However, relations among heterogeneous modalities are simply treated as observations to fit by existing work, and the parameterized modality-specific mapping functions lack the flexibility to adapt directly to the content divergence and semantic complexity of multimodal data. In this paper, we build our work on the Gaussian process latent variable model (GPLVM) to learn non-parametric mapping functions and transform heterogeneous modalities into a shared latent space. We propose the multimodal Similarity Gaussian Process latent variable model (m-SimGP), which learns the mapping functions between the intra-modal similarities and the latent representation. We further propose the multimodal distance-preserved similarity GPLVM (m-DSimGP) to preserve the intra-modal global similarity structure, and the multimodal regularized similarity GPLVM (m-RSimGP), which encourages similar/dissimilar points to be similar/dissimilar in the latent space. We also propose m-DRSimGP, which combines the distance preservation in m-DSimGP and the semantic preservation in m-RSimGP to learn the latent representation. The overall objective functions of the four models are solved by simple and scalable gradient descent techniques. They can be applied to various tasks to discover the nonlinear correlations and to obtain comparable low-dimensional representations for heterogeneous modalities. On five widely used real-world data sets, our approaches outperform existing models on cross-modal content retrieval and multimodal classification.

  19. A Harmonized Process Model for Digital Forensic Investigation Readiness

    OpenAIRE

    Valjarevic , Aleksandar; Venter , Hein

    2013-01-01

    Digital forensic readiness enables an organization to prepare itself to perform digital forensic investigations in an efficient and effective manner. The benefits include enhancing the admissibility of digital evidence, better utilization of resources and greater incident awareness. However, a harmonized process model for digital forensic readiness does not currently exist and, thus, there is a lack of effective and standardized implementations...

  20. Modeling Dynamic Regulatory Processes in Stroke

    Science.gov (United States)

    McDermott, Jason E.; Jarman, Kenneth; Taylor, Ronald; Lancaster, Mary; Shankaran, Harish; Vartanian, Keri B.; Stevens, Susan L.; Stenzel-Poore, Mary P.; Sanfilippo, Antonio

    2012-01-01

    The ability to examine the behavior of biological systems in silico has the potential to greatly accelerate the pace of discovery in diseases, such as stroke, where in vivo analysis is time intensive and costly. In this paper we describe an approach for in silico examination of responses of the blood transcriptome to neuroprotective agents and subsequent stroke through the development of dynamic models of the regulatory processes observed in the experimental gene expression data. First, we identified functional gene clusters from these data. Next, we derived ordinary differential equations (ODEs) from the data relating these functional clusters to each other in terms of their regulatory influence on one another. Dynamic models were developed by coupling these ODEs into a model that simulates the expression of regulated functional clusters. By changing the magnitude of gene expression in the initial input state it was possible to assess the behavior of the networks through time under varying conditions since the dynamic model only requires an initial starting state, and does not require measurement of regulatory influences at each time point in order to make accurate predictions. We discuss the implications of our models on neuroprotection in stroke, explore the limitations of the approach, and report that an optimized dynamic model can provide accurate predictions of overall system behavior under several different neuroprotective paradigms. PMID:23071432
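
    A minimal sketch of the kind of coupled-ODE cluster model described above is given below, using an invented three-cluster influence matrix and decay rates; the paper derives its equations from the measured expression data rather than from assumed coefficients.

        # Hedged sketch: expression of each functional gene cluster evolving under the
        # regulatory influence of the others, dx_i/dt = sum_j W_ij * x_j - d_i * x_i.
        # The influence matrix W, decay rates d and initial state are invented for
        # illustration; only an initial starting state is needed to run the model forward.
        import numpy as np
        from scipy.integrate import solve_ivp

        W = np.array([[0.0, 0.8, -0.4],
                      [-0.5, 0.0, 0.6],
                      [0.3, -0.2, 0.0]])     # regulatory influences between three clusters
        d = np.array([0.3, 0.4, 0.2])        # first-order decay of each cluster's expression

        def rhs(t, x):
            return W @ x - d * x

        x0 = np.array([1.0, 0.2, 0.0])       # "initial input state", e.g. after preconditioning
        sol = solve_ivp(rhs, (0.0, 24.0), x0, t_eval=np.linspace(0.0, 24.0, 25))
        print(sol.y[:, -1])                  # predicted cluster expression at 24 h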

  1. Modeling dynamic regulatory processes in stroke.

    Directory of Open Access Journals (Sweden)

    Jason E McDermott

    Full Text Available The ability to examine the behavior of biological systems in silico has the potential to greatly accelerate the pace of discovery in diseases, such as stroke, where in vivo analysis is time intensive and costly. In this paper we describe an approach for in silico examination of responses of the blood transcriptome to neuroprotective agents and subsequent stroke through the development of dynamic models of the regulatory processes observed in the experimental gene expression data. First, we identified functional gene clusters from these data. Next, we derived ordinary differential equations (ODEs) from the data relating these functional clusters to each other in terms of their regulatory influence on one another. Dynamic models were developed by coupling these ODEs into a model that simulates the expression of regulated functional clusters. By changing the magnitude of gene expression in the initial input state it was possible to assess the behavior of the networks through time under varying conditions since the dynamic model only requires an initial starting state, and does not require measurement of regulatory influences at each time point in order to make accurate predictions. We discuss the implications of our models on neuroprotection in stroke, explore the limitations of the approach, and report that an optimized dynamic model can provide accurate predictions of overall system behavior under several different neuroprotective paradigms.

  2. Exposing earth surface process model simulations to a large audience

    Science.gov (United States)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors and can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and super storm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea ice free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Datasets, storyboards, and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they look at numerical model results, that underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical

  3. Virtual models of the HLA class I antigen processing pathway.

    Science.gov (United States)

    Petrovsky, Nikolai; Brusic, Vladimir

    2004-12-01

    Antigen recognition by cytotoxic CD8 T cells is dependent upon a number of critical steps in MHC class I antigen processing, including proteasomal cleavage, TAP transport into the endoplasmic reticulum, and MHC class I binding. Based on extensive experimental data relating to each of these steps there is now the capacity to model individual antigen processing steps with a high degree of accuracy. This paper demonstrates the potential to bring together models of individual antigen processing steps, for example proteasome cleavage, TAP transport, and MHC binding, to build highly informative models of functional pathways. In particular, we demonstrate how an artificial neural network model of TAP transport was used to mine a HLA-binding database so as to identify HLA-binding peptides transported by TAP. This integrated model of antigen processing provided the unique insight that HLA class I alleles apparently constitute two separate classes: those that are TAP-efficient for peptide loading (HLA-B27, -A3, and -A24) and those that are TAP-inefficient (HLA-A2, -B7, and -B8). Hence, using this integrated model we were able to generate novel hypotheses regarding antigen processing, and these hypotheses are now capable of being tested experimentally. This model confirms the feasibility of constructing a virtual immune system, whereby each additional step in antigen processing is incorporated into a single modular model. Accurate models of antigen processing have implications for the study of basic immunology as well as for the design of peptide-based vaccines and other immunotherapies.
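
    A toy sketch of the integration step (chaining a TAP-transport filter with an HLA-binding filter) is given below. The two scoring functions are crude placeholders standing in for the paper's artificial neural network TAP model and the HLA-binding data; thresholds and example peptides are arbitrary.

        # Hedged sketch of the integration idea: combine a TAP-transport score with an
        # HLA-binding score and keep only peptides predicted to pass both steps. The
        # scoring functions are trivial placeholders, not the neural network or binding
        # models used in the paper.
        def tap_transport_score(peptide):
            """Placeholder: favour a hydrophobic C-terminus, standing in for the ANN model."""
            return 1.0 if peptide[-1] in "FILMVWY" else 0.2

        def hla_binding_score(peptide, allele="HLA-B27"):
            """Placeholder affinity score; a real model would be allele-specific."""
            return 0.9 if len(peptide) == 9 else 0.3

        def predicted_epitopes(peptides, tap_cut=0.5, bind_cut=0.5):
            return [p for p in peptides
                    if tap_transport_score(p) >= tap_cut and hla_binding_score(p) >= bind_cut]

        print(predicted_epitopes(["SIINFEKLV", "GILGFVFTL", "AAAAKAAAAA"]))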

  4. Improvement in genetic evaluation of female fertility in dairy cattle using multiple-trait models including milk production traits

    DEFF Research Database (Denmark)

    Sun, C; Madsen, P; Lund, M S

    2010-01-01

    This study investigated the improvement in genetic evaluation of fertility traits by using production traits as secondary traits (MILK = 305-d milk yield, FAT = 305-d fat yield, and PROT = 305-d protein yield). Data including 471,742 records from first lactations of Denmark Holstein cows, covering...... (DATAC1, which only contained the first crop daughters) for proven bulls. In addition, the superiority of the models was evaluated by expected reliability of EBV, calculated from the prediction error variance of EBV. Based on these criteria, the models combining milk production traits showed better model...... stability and predictive ability than single-trait models for all the fertility traits, except for nonreturn rate within 56 d after first service. The stability and predictive ability for the model including MILK or PROT were similar to the model including all 3 milk production traits and better than...

  5. GREENSCOPE: A Method for Modeling Chemical Process ...

    Science.gov (United States)

    Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered around three focal points. One is a taxonomy of impacts that describe the indicators and provide absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators has specific data that is necessary for their calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evalua
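
    The "best and worst limits" described above suggest a simple normalization in which each indicator is scored by where its measured value falls between those limits. The sketch below shows one plausible form of such a score; the limits, the example indicator and the clipping rule are illustrative assumptions, not GREENSCOPE's published definitions.

        # Hedged sketch of the scoring idea: an indicator's measured value is placed on
        # an absolute scale between its defined worst-case and best-case limits, giving
        # a dimensionless percentage. Limits and values are illustrative only.
        def indicator_score(actual, worst, best):
            """Percent of the way from the worst-case limit to the best-case target."""
            score = 100.0 * (actual - worst) / (best - worst)
            return max(0.0, min(100.0, score))            # clip values outside the defined limits

        # Example: a hypothetical "mass of waste per kg product" indicator (lower is better)
        print(indicator_score(actual=0.8, worst=2.0, best=0.0))   # -> 60.0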

  6. Regional modelling of future African climate north of 15°S including greenhouse warming and land degradation

    Energy Technology Data Exchange (ETDEWEB)

    Paeth, H. [Geographical Institute, University of Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany); Thamm, H.P. [Geographical Institute, University of Bonn, Bonn (Germany)

    2007-08-15

    Previous studies have highlighted the crucial role of land degradation in tropical African climate. This effect urgently has to be taken into account when predicting future African climate under enhanced greenhouse conditions. Here, we present time slice experiments of African climate until 2025, using a high-resolution regional climate model. A plausible scenario of future land use changes, involving vegetation loss and soil degradation, is prescribed simultaneously with increasing greenhouse-gas concentrations in order to detect where the different forcings counterbalance or reinforce each other. This approach allows us to define the regions of highest vulnerability with respect to future freshwater availability and food security in tropical and subtropical Africa and may provide a decision basis for political measures. The model simulates a considerable reduction in precipitation amount until 2025 over most of tropical Africa, amounting in places to more than 500 mm (20-40% of the annual sum), particularly in the Congo Basin and the Sahel Zone. The change is strongest in boreal summer and basically reflects the pattern of maximum vegetation cover during the seasonal cycle. The related change in the surface energy fluxes induces a substantial near-surface warming of up to 7°C. According to the modified temperature gradients over tropical Africa, the summer monsoon circulation intensifies and transports more humid air masses into the southern part of West Africa. This humidifying effect is overcompensated by a remarkable decrease in surface evaporation, leading to an overall drying tendency over most of Africa. Extreme daily rainfall events become stronger in autumn but less intense in spring. Summer and autumn appear to be characterized by more severe heat waves over sub-Saharan West Africa. In addition, the Tropical Easterly Jet is weakening, leading to enhanced drought conditions in the Sahel Zone. All these results suggest that the local impact of land

  7. Landau quantized dynamics and spectra for group-VI dichalcogenides, including a model quantum wire

    Science.gov (United States)

    Horing, Norman J. M.

    2017-06-01

    This work is concerned with the derivation of the Green's function for Landau-quantized carriers in the Group-VI dichalcogenides. In the spatially homogeneous case, the Green's function is separated into a Peierls phase factor and a translationally invariant part which is determined in a closed form integral representation involving only elementary functions. The latter is expanded in an eigenfunction series of Laguerre polynomials. These results for the retarded Green's function are presented in both position and momentum representations, and yet another closed form representation is derived in circular coordinates in terms of the Bessel wave function of the second kind (not to be confused with the Bessel function). The case of a quantum wire is also addressed, representing the quantum wire in terms of a model one-dimensional δ(x)-potential profile. This retarded Green's function for propagation directly along the wire is determined exactly in terms of the corresponding Green's function for the system without the δ(x)-potential, and the Landau quantized eigenenergy dispersion relation is examined. The thermodynamic Green's function for the dichalcogenide carriers in a normal magnetic field is formulated here in terms of its spectral weight, and its solution is presented in a momentum/integral representation involving only elementary functions, which is subsequently expanded in Laguerre eigenfunctions and presented in both momentum and position representations.

  8. Landau quantized dynamics and spectra for group-VI dichalcogenides, including a model quantum wire

    Directory of Open Access Journals (Sweden)

    Norman J. M. Horing

    2017-06-01

    Full Text Available This work is concerned with the derivation of the Green’s function for Landau-quantized carriers in the Group-VI dichalcogenides. In the spatially homogeneous case, the Green’s function is separated into a Peierls phase factor and a translationally invariant part which is determined in a closed form integral representation involving only elementary functions. The latter is expanded in an eigenfunction series of Laguerre polynomials. These results for the retarded Green’s function are presented in both position and momentum representations, and yet another closed form representation is derived in circular coordinates in terms of the Bessel wave function of the second kind (not to be confused with the Bessel function). The case of a quantum wire is also addressed, representing the quantum wire in terms of a model one-dimensional δ(x)-potential profile. This retarded Green’s function for propagation directly along the wire is determined exactly in terms of the corresponding Green’s function for the system without the δ(x)-potential, and the Landau quantized eigenenergy dispersion relation is examined. The thermodynamic Green’s function for the dichalcogenide carriers in a normal magnetic field is formulated here in terms of its spectral weight, and its solution is presented in a momentum/integral representation involving only elementary functions, which is subsequently expanded in Laguerre eigenfunctions and presented in both momentum and position representations.

  9. Discovering Reference Process Models by Mining Process Variants

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    Recently, a new generation of adaptive Process-Aware Information Systems (PAIS) has emerged, which allows for dynamic process and service changes (e.g., to insert, delete, and move activities and service executions in a running process). This, in turn, has led to a large number of process variants

  10. Multifunctional multiscale composites: Processing, modeling and characterization

    Science.gov (United States)

    Qiu, Jingjing

    Carbon nanotubes (CNTs) demonstrate extraordinary properties and show great promise in enhancing out-of-plane properties of traditional polymer/fiber composites and enabling functionality. However, current manufacturing challenges hinder the realization of their potential. In the dissertation research, both experimental and computational efforts have been conducted to investigate effective manufacturing techniques for CNT-integrated multiscale composites. The fabricated composites demonstrated significant improvements in physical properties, such as tensile strength, tensile modulus, inter-laminar shear strength, thermal dimension stability and electrical conductivity. Such multiscale composites were truly multifunctional with the addition of CNTs. Furthermore, a novel hierarchical multiscale modeling method was developed in this research. Molecular dynamics (MD) simulation offered a reasonable explanation of CNT dispersion and motion in polymer solution. Bi-mode finite-extensible-nonlinear-elastic (FENE) dumbbell simulation was used to analyze the influence of CNT length distribution on the stress tensor and shear-rate-dependent viscosity. Based on the simulated viscosity profile and empirical equations from experiments, a macroscale flow simulation model based on the finite element method (FEM) was developed and validated to predict resin flow behavior in the processing of CNT-enhanced multiscale composites. The proposed multiscale modeling method provided a comprehensive understanding of micro/nano flow in both atomistic detail and at the mesoscale. The simulation model can be used to optimize process design and control of the mold-filling process in multiscale composite manufacturing. This research provided systematic investigations into CNT-based multiscale composites. The results from this study may be used to leverage the benefits of CNTs and open up new application opportunities for high-performance multifunctional multiscale composites. Keywords. Carbon

  11. Innovative process engineering: a generic model of the innovation process

    OpenAIRE

    Pénide, Thomas; Gourc, Didier; Pingaud, Hervé; Peillon, Philippe

    2013-01-01

    Innovation can be represented as a knowledge transformation process perceived with different levels of granularity. The milestones of this process allow each of its steps to be assessed and set up feedback loops that will be highlighted. This innovation process is a good starting point to understand innovation and then to manage it. Best practices being patterns of processes, we describe innovation best practices as compulsory steps in our innovation process. To put into p...

  12. A Fully Coupled Computational Model of the Silylation Process

    Energy Technology Data Exchange (ETDEWEB)

    G. H. Evans; R. S. Larson; V. C. Prantil; W. S. Winters

    1999-02-01

    This report documents the development of a new finite element model of the positive tone silylation process. Model development makes use of pre-existing Sandia technology used to describe coupled thermal-mechanical behavior in deforming metals. Material properties and constitutive models were obtained from the literature. The model is two-dimensional and transient and focuses on the part of the lithography process in which crosslinked and uncrosslinked resist is exposed to a gaseous silylation agent. The model accounts for the combined effects of mass transport (diffusion of silylation agent and reaction product), chemical reaction resulting in the uptake of silicon and material swelling, the generation of stresses, and the resulting material motion. The influence of stress on diffusion and reaction rates is also included.
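
    A stripped-down sketch of the transport-reaction core of such a model is given below: one-dimensional diffusion of the silylation agent into the resist with first-order silicon uptake, solved by explicit finite differences. The swelling, stress coupling and two-dimensional finite element treatment of the actual model are omitted, and every parameter value is an assumption for illustration.

        # Hedged sketch: 1-D diffusion of the silylation agent into the resist with
        # first-order uptake of silicon, explicit finite differences. Swelling, stress
        # coupling and the 2-D geometry of the real model are omitted; all values are
        # illustrative placeholders.
        import numpy as np

        nx, L = 101, 1.0e-6                  # grid points, film thickness (m)
        dx = L / (nx - 1)
        D = 1.0e-14                          # diffusivity of the silylation agent (m^2/s)
        k = 1.0e2                            # first-order reaction rate constant (1/s)
        dt = 0.2 * dx * dx / D               # stable explicit time step
        c = np.zeros(nx)                     # agent concentration (normalized)
        silicon = np.zeros(nx)               # reacted (incorporated) silicon

        for _ in range(20000):
            c[0] = 1.0                                       # gas-side surface held at unit activity
            lap = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2   # discrete Laplacian
            react = k * c[1:-1]
            c[1:-1] += dt * (D * lap - react)
            silicon[1:-1] += dt * react
            c[-1] = c[-2]                                    # zero-flux substrate boundary

        print("silicon uptake near surface vs. mid-depth:", silicon[1], silicon[nx // 2])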

  13. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average cost method of MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters, in terms of stochastic optimization models for road network management, motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of an MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
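
    The dual linear programming route to an average-cost MDP policy can be illustrated on a toy pavement chain, as in the sketch below: the decision variables are stationary state-action frequencies, flow-balance and normalization constraints are imposed, and the maintenance policy is read off from which action carries the probability mass in each state. States, actions, transition probabilities and costs are all invented for illustration, not the calibrated Victorian data.

        # Hedged sketch: average-cost MDP solved via its linear program on a toy 3-state
        # pavement chain (good / fair / poor) with two actions (routine maintenance vs.
        # rehabilitation). All probabilities and costs are invented.
        import numpy as np
        from scipy.optimize import linprog

        S, A = 3, 2
        P = np.zeros((A, S, S))
        P[0] = [[0.7, 0.3, 0.0],             # action 0: routine maintenance
                [0.0, 0.6, 0.4],
                [0.0, 0.0, 1.0]]
        P[1] = [[0.9, 0.1, 0.0],             # action 1: rehabilitation
                [0.8, 0.2, 0.0],
                [0.7, 0.3, 0.0]]
        cost = np.array([[1.0, 6.0],         # cost(state, action): agency plus user costs
                         [3.0, 7.0],
                         [10.0, 9.0]])

        # Variables x[s, a] = stationary state-action frequencies, ordered (a=0, s=0..2), (a=1, s=0..2).
        c = cost.T.flatten()
        A_eq = np.zeros((S + 1, S * A))
        for j in range(S):                                    # flow balance for each state j
            for a in range(A):
                for s in range(S):
                    A_eq[j, a * S + s] = (1.0 if s == j else 0.0) - P[a, s, j]
        A_eq[S, :] = 1.0                                      # frequencies sum to one
        b_eq = np.zeros(S + 1)
        b_eq[S] = 1.0

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
        x = res.x.reshape(A, S)
        policy = x.argmax(axis=0)             # action carrying the mass; unvisited states default to 0
        print("average cost:", round(res.fun, 3), "policy (per state):", policy)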

  14. Modeling the Object-Oriented Software Process: OPEN and the Unified Process

    NARCIS (Netherlands)

    van den Berg, Klaas; Aksit, Mehmet; van den Broek, P.M.

    A short introduction to software process modeling is presented, particularly object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, the quality assurance in the Unified Process tool (formerly called Objectory) is

  15. Modelling of the Heating Process in a Thermal Screw

    Science.gov (United States)

    Zhang, Xuan; Veje, Christian T.; Lassen, Benny; Willatzen, Morten

    2012-11-01

    The procedure of efficiently separating dry-stuff (proteins), fat, and water is an important process in the handling of waste products from industrial and commercial meat manufacturers. One of the sub-processes in a separation facility is a thermal screw, where the raw material (after proper mincing) is heated in order to melt fat, coagulate protein, and free water. This process is very energy-consuming and the efficiency of the product is highly dependent on accurate temperature control of the process. A key quality parameter is the time that the product is maintained at temperatures within a certain threshold. A detailed mathematical model for the heating process in the thermal screw is developed and analysed. The model is formulated as a set of partial differential equations including the latent heat for the melting of fat and the boiling of water, respectively. The product is modelled by three components: water, fat and dry-stuff (bones and proteins). The melting of the fat component is captured as a plateau in the product temperature. The model effectively captures the product outlet temperature and the energy consumed. Depending on the raw material composition, "soft" or "dry", the model outlines the heat injection and screw speeds necessary to obtain optimal output quality.
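
    The temperature plateau mentioned above falls out naturally from an enthalpy (apparent heat capacity) formulation, sketched below for a single lumped product element heated at constant power. The heat capacity, latent heat, melting temperature and heating rate are rough placeholder values, and the full model's spatial PDEs and water-boiling plateau are not reproduced.

        # Hedged sketch: lumped enthalpy balance with a melting plateau for the fat
        # fraction. Property values are placeholders, not the model's parameters.
        CP = 3.2e3          # J/(kg K), sensible heat capacity of the minced product (assumed)
        LF = 1.5e5          # J/kg, effective latent heat of the fat fraction (assumed)
        T_MELT = 40.0       # degC, nominal fat melting temperature (assumed)

        def temperature_from_enthalpy(h):
            """Invert a piecewise enthalpy-temperature relation with a melting plateau."""
            h_melt_start = CP * T_MELT
            if h < h_melt_start:                      # below melting: purely sensible heating
                return h / CP
            if h < h_melt_start + LF:                 # inside the plateau: latent heat absorbed
                return T_MELT
            return T_MELT + (h - h_melt_start - LF) / CP

        h, dt, q = 0.0, 1.0, 1500.0                   # specific heating power, J/(kg s) (assumed)
        history = []
        for step in range(300):
            h += q * dt
            history.append(temperature_from_enthalpy(h))
        print(history[50], history[170], history[290])   # before, on, and after the plateau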

  16. Strategies to Automatically Derive a Process Model from a Configurable Process Model Based on Event Data

    Directory of Open Access Journals (Sweden)

    Mauricio Arriagada-Benítez

    2017-10-01

    Full Text Available Configurable process models are frequently used to represent business workflows and other discrete event systems among different branches of large organizations: they unify commonalities shared by all branches and describe their differences, at the same time. The configuration of such models is usually done manually, which is challenging. On the one hand, when the number of configurable nodes in the configurable process model grows, the size of the search space increases exponentially. On the other hand, the person performing the configuration may lack the holistic perspective to make the right choice for all configurable nodes at the same time, since choices influence each other. Nowadays, information systems that support the execution of business processes create event data reflecting how processes are performed. In this article, we propose three strategies (based on exhaustive search, genetic algorithms and a greedy heuristic that use event data to automatically derive a process model from a configurable process model that better represents the characteristics of the process in a specific branch. These strategies have been implemented in our proposed framework and tested in both business-like event logs as recorded in a higher educational enterprise resource planning system and a real case scenario involving a set of Dutch municipalities.

  17. Dynamic modeling and validation of a lignocellulosic enzymatic hydrolysis process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Sin, Gürkan

    2013-01-01

    The enzymatic hydrolysis process is one of the key steps in second generation biofuel production. After being thermally pretreated, the lignocellulosic material is liquefied by enzymes prior to fermentation. The scope of this paper is to evaluate a dynamic model of the hydrolysis process...... on a demonstration scale reactor. The following novel features are included: the application of the Convection–Diffusion–Reaction equation to a hydrolysis reactor to assess transport and mixing effects; the extension of a competitive kinetic model with enzymatic pH dependency and hemicellulose hydrolysis...
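
    For reference, the Convection-Diffusion-Reaction equation applied to such a tubular hydrolysis reactor has, in one spatial dimension, the generic form below; the notation (species concentrations C_i, axial velocity v, dispersion coefficients D_i, kinetic rate terms r_i) is assumed here rather than taken from the paper.

        \frac{\partial C_i}{\partial t}
          + v \, \frac{\partial C_i}{\partial z}
          = D_i \, \frac{\partial^2 C_i}{\partial z^2}
          + r_i(\mathbf{C}, \mathrm{pH}, T),

    where the reaction terms carry the competitive enzymatic kinetics, including their pH dependency, while the convective and dispersive terms capture the transport and mixing effects mentioned above.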

  18. Modeling and optimization of wet sizing process

    International Nuclear Information System (INIS)

    Thai Ba Cau; Vu Thanh Quang and Nguyen Ba Tien

    2004-01-01

    Mathematical simulation on the basis of Stokes' law has been carried out for the wet sizing process on cylindrical equipment of laboratory and semi-industrial scale. The model consists of mathematical equations describing the relations between variables, such as: the residence time distribution function of emulsion particles in the separating zone of the equipment, depending on the flow rate, height, diameter and structure of the equipment; and the size distribution function of the fine and coarse fractions, depending on the residence time distribution function of the emulsion particles, the characteristics of the material being processed (such as specific density and shape), and the characteristics of the classification medium (such as specific density and viscosity). An experimental model was developed from data collected on experimental cylindrical equipment with a sedimentation chamber of diameter x height equal to 50 x 40 cm, for an emulsion of zirconium silicate in water. Using this experimental model allows the optimal flow rate to be determined in order to obtain a product with the desired grain size, in terms of average size or size distribution function. (author)
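
    The Stokes'-law basis of the model reduces, for a single spherical particle, to the familiar terminal settling velocity; the sketch below evaluates it for values loosely representative of zirconium silicate in water (all numbers illustrative).

        # Hedged sketch: the Stokes settling velocity underlying a wet-sizing
        # (sedimentation) model. Particles whose terminal velocity exceeds the upward
        # flow in the separation zone report to the coarse product.
        def stokes_velocity(diameter_m, rho_particle, rho_fluid, viscosity):
            """Terminal settling velocity v = g * (rho_p - rho_f) * d^2 / (18 * mu)."""
            g = 9.81
            return g * (rho_particle - rho_fluid) * diameter_m ** 2 / (18.0 * viscosity)

        d50 = 10e-6                                           # 10 micron particle
        v = stokes_velocity(d50, rho_particle=4600.0, rho_fluid=1000.0, viscosity=1.0e-3)
        print("settling velocity: %.2e m/s" % v)              # ~2e-4 m/s for these values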

  19. Modelling and control of dynamic systems using gaussian process models

    CERN Document Server

    Kocijan, Juš

    2016-01-01

    This monograph opens up new horizons for engineers and researchers in academia and in industry dealing with or interested in new developments in the field of system identification and control. It emphasizes guidelines for working solutions and practical advice for their implementation rather than the theoretical background of Gaussian process (GP) models. The book demonstrates the potential of this recent development in probabilistic machine-learning methods and gives the reader an intuitive understanding of the topic. The current state of the art is treated along with possible future directions for research. Systems control design relies on mathematical models and these may be developed from measurement data. This process of system identification, when based on GP models, can play an integral part of control design in data-based control and its description as such is an essential aspect of the text. The background of GP regression is introduced first with system identification and incorporation of prior know...
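
    As a flavour of what GP-based identification looks like in practice, the sketch below fits a squared-exponential-kernel GP one-step-ahead model y[k+1] = f(y[k], u[k]) to data from a made-up nonlinear system. Hyperparameters are fixed by hand rather than optimized, and nothing beyond the general idea is taken from the book.

        # Hedged sketch: GP regression as a one-step-ahead dynamic model. The "true"
        # system, hyperparameters and noise level are invented for illustration.
        import numpy as np

        def sqexp_kernel(A, B, length=1.0, variance=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return variance * np.exp(-0.5 * d2 / length**2)

        rng = np.random.default_rng(1)
        u = rng.uniform(-1.0, 1.0, 120)                       # input signal
        y = np.zeros(121)
        for k in range(120):                                  # unknown "true" system
            y[k + 1] = 0.8 * y[k] + 0.4 * np.tanh(u[k]) + 0.02 * rng.standard_normal()

        X = np.column_stack([y[:-1], u])                      # regressors (y[k], u[k])
        t = y[1:]                                             # targets y[k+1]
        noise = 1.0e-3
        K = sqexp_kernel(X, X) + noise * np.eye(len(X))
        alpha = np.linalg.solve(K, t)

        x_star = np.array([[0.5, 0.2]])                       # predict next output from (y, u)
        k_star = sqexp_kernel(x_star, X)
        mean = k_star @ alpha                                 # GP posterior mean
        var = sqexp_kernel(x_star, x_star) - k_star @ np.linalg.solve(K, k_star.T)
        print("one-step prediction: %.3f +/- %.3f" % (mean[0], np.sqrt(var[0, 0])))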

  20. Deconstructing crop processes and models via identities

    DEFF Research Database (Denmark)

    Porter, John Roy; Christensen, Svend

    2013-01-01

    This paper is part review and part opinion piece; it has three parts of increasing novelty and speculation in approach. The first presents an overview of how some of the major crop simulation models approach the issue of simulating the responses of crops to changing climatic and weather variables, mainly atmospheric CO2 concentration and increased and/or varying temperatures. It illustrates an important principle in models of a single cause having alternative effects and vice versa. The second part suggests some features, mostly missing in current crop models, that need to be included in the future, focussing on extreme events such as high temperature or extreme drought. The final opinion part is speculative but novel. It describes an approach to deconstruct resource use efficiencies into their constituent identities or elements based on the Kaya-Porter identity, each of which can...

  1. Use of mathematical modelling in electron beam processing: A guidebook

    International Nuclear Information System (INIS)

    2010-01-01

    The use of electron beam irradiation for industrial applications, like the sterilization of medical devices or cross-linking of polymers, has a long and successful track record and has proven itself to be a key technology. Emerging fields, including environmental applications of ionizing radiation, the sterilization of complex medical and pharmaceutical products or advanced material treatment, require the design and control of even more complex irradiators and irradiation processes. Mathematical models can aid the design process, for example by calculating absorbed dose distributions in a product, long before any prototype is built. They support process qualification through impact assessment of process variable uncertainties, and can be an indispensable teaching tool for technologists in training in the use of radiation processing. The IAEA, through various mechanisms, including its technical cooperation programme, coordinated research projects, technical meetings, guidelines and training materials, is promoting the use of radiation technologies to minimize the effects of harmful contaminants and develop value added products originating from low cost natural and human made raw materials. The need to publish a guidebook on the use of mathematical modelling for design processes in the electron beam treatment of materials was identified through the increased interest of radiation processing laboratories in Member States and as a result of recommendations from several IAEA expert meetings. In response, the IAEA has prepared this report using the services of an expert in the field. This publication should serve as both a guidebook and introductory tutorial for the use of mathematical modelling (using mostly Monte Carlo methods) in electron beam processing. The emphasis of this guide is on industrial irradiation methodologies with a strong reference to existing literature and applicable standards. Its target audience is readers who have a basic understanding of electron

  2. Process modeling for the Integrated Thermal Treatment System (ITTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Liebelt, K.H.; Brown, B.W.; Quapp, W.J.

    1995-09-01

    This report describes the process modeling done in support of the integrated thermal treatment system (ITTS) study, Phases 1 and 2. ITTS consists of an integrated systems engineering approach for uniform comparison of widely varying thermal treatment technologies proposed for treatment of the contact-handled mixed low-level wastes (MLLW) currently stored in the U.S. Department of Energy complex. In the overall study, 19 systems were evaluated. Preconceptual designs were developed that included all of the various subsystems necessary for a complete installation, from waste receiving through to primary and secondary stabilization and disposal of the processed wastes. Each system included the necessary auxiliary treatment subsystems so that all of the waste categories in the complex were fully processed. The objective of the modeling task was to perform mass and energy balances of the major material components in each system. Modeling of trace materials, such as pollutants and radioactive isotopes, were beyond the present scope. The modeling of the main and secondary thermal treatment, air pollution control, and metal melting subsystems was done using the ASPEN PLUS process simulation code, Version 9.1-3. These results were combined with calculations for the remainder of the subsystems to achieve the final results, which included offgas volumes, and mass and volume waste reduction ratios.

  3. Model systems for life processes on Mars

    Science.gov (United States)

    Mitz, M. A.

    1974-01-01

    In the evolution of life forms nonphotosynthetic mechanisms are developed. The question remains whether a total life system could evolve which is not dependent upon photosynthesis. In trying to visualize life on other planets, the photosynthetic process has problems. On Mars, the high intensity of light at the surface is a concern and alternative mechanisms need to be defined and analyzed. In the UV search for alternate mechanisms, several different areas may be identified. These involve activated inorganic compounds in the atmosphere, such as the products of photodissociation of carbon dioxide and the organic material which may be created by natural phenomena. In addition, a life system based on the pressure of the atmospheric constituents, such as carbon dioxide, is a possibility. These considerations may be important for the understanding of evolutionary processes of life on another planet. Model systems which depend on these alternative mechanisms are defined and related to presently planned and future planetary missions.

  4. Clarifying the use of aggregated exposures in multilevel models: self-included vs. self-excluded measures.

    Directory of Open Access Journals (Sweden)

    Etsuji Suzuki

    Full Text Available Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models (self-included and self-excluded models) and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of the cluster-mean centering procedure is further discussed in terms of hypothetical interventions. When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model (self-included or self-excluded) is suitable for a given situation
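
    The distinction is easy to state in code: the self-included measure is the ordinary group mean, while the self-excluded measure is the leave-one-out mean of the other group members. The sketch below computes both on invented workplace social capital data; column names and values are illustrative only.

        # Hedged sketch: self-included (group mean) vs. self-excluded (leave-one-out mean)
        # aggregated exposures on toy data. Column names and values are invented.
        import pandas as pd

        df = pd.DataFrame({
            "workplace": ["A", "A", "A", "B", "B"],
            "social_capital": [3.0, 4.0, 5.0, 2.0, 4.0],
        })

        grp = df.groupby("workplace")["social_capital"]
        group_sum = grp.transform("sum")
        group_n = grp.transform("count")

        df["self_included"] = group_sum / group_n                       # ordinary group mean
        df["self_excluded"] = (group_sum - df["social_capital"]) / (group_n - 1)
        print(df)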

  5. Clarifying the use of aggregated exposures in multilevel models: self-included vs. self-excluded measures.

    Science.gov (United States)

    Suzuki, Etsuji; Yamamoto, Eiji; Takao, Soshi; Kawachi, Ichiro; Subramanian, S V

    2012-01-01

    Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models-self-included model and self-excluded model-and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of cluster-mean centering procedure is further discussed in terms of hypothetical interventions. When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model-self-included or self-excluded-is suitable for a given situation, particularly

  6. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  7. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  8. Comments on: Spatiotemporal models for skewed processes

    KAUST Repository

    Genton, Marc G.

    2017-09-04

    We would first like to thank the authors for this paper that highlights the important problem of building models for non-Gaussian space-time processes. We will hereafter refer to the paper as SGV, and we also would like to acknowledge and thank them for providing us with the temporally detrended temperatures, plotted in their Figure 1, along with the coordinates of the twenty-one locations and the posterior means of the parameters for the MA1 model. We find much of interest to discuss in this paper, and as we progress through points of interest, we pose some questions to the authors that we hope they will be able to address.

  9. Combining prior knowledge with data-driven modeling of a batch distillation column including start-up

    NARCIS (Netherlands)

    van Lith, P.F.; van Lith, Pascal F.; Betlem, Bernardus H.L.; Roffel, B.

    2003-01-01

    This paper presents the development of a simple model which describes the product quality and production over time of an experimental batch distillation column, including start-up. The model structure is based on a simple physical framework, which is augmented with fuzzy logic. This provides a way

  10. Specification of e-business process model for PayPal online payment process using Reo

    OpenAIRE

    Xie, M.

    2005-01-01

    E-business process modeling allows business analysts to better understand and analyze the business processes, and eventually to use software systems to automate (parts of) these business processes to achieve higher profit. To support e-business process modeling, many business process modeling languages have been used as tools. However, many existing business process modeling languages lack (a) formal semantics, (b) formal computational model, and (c) an integrated view of the busi...

  11. Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java

    Science.gov (United States)

    Courtney, Mary F.; Stix, Allen

    2006-01-01

    Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…
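
    The article itself works in Java, but the least-significant-bit (LSB) idea behind such steganography exercises is language-independent. A minimal sketch in Python (illustrative only; the pixel list and message are hypothetical), embedding a message in the lowest bit of each 8-bit pixel value; note that this naive scheme survives lossless PNG saving but not lossy JPEG re-encoding:

```python
# Minimal LSB steganography sketch over a flat list of 8-bit pixel values.
def embed(pixels, message):
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit          # overwrite the least significant bit
    return out

def extract(pixels, n_chars):
    bits = [p & 1 for p in pixels[:n_chars * 8]]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode()

stego = embed([128] * 64, "Hi")               # hypothetical 64-pixel gray image
print(extract(stego, 2))                      # -> "Hi"
```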

  12. 40 CFR Appendix A to Part 419 - Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

    Science.gov (United States)

    2010-07-01

    ... Extraction; 37. Clay Contacting—Percolation; 38. Wax Sweating; 39. Acid Treating; 40. Phenol Extraction. Reforming and Alkylation Processes: 8. H2SO4 Alkylation; 12. Catalytic Reforming [50 FR 28528, July 12, 1985; 50...]. ... Catalytic Cracking; 7. Moving Bed Catalytic Cracking; 10. Hydrocracking; 15. Delayed Coking; 16. Fluid Coking; 54...

  13. Privatization processes in banking: Motives and models

    Directory of Open Access Journals (Sweden)

    Ristić Života

    2006-01-01

    Full Text Available The paper consists of three methodologically and causally connected thematic parts: the first part deals with the crucial motives and models of the privatization processes in the USA and EU, with a particular analytical focus on the Herfindahl-Hirschman doctrine of the collective domination index, as well as on the essence of merger-acquisition and take-over models. The second thematic part of the paper, as a logical continuation of the first one, presents a brief comparative analysis of the motives and models implemented in bank privatization in the south-eastern European countries, with a particular focus on identifying the interests of foreign investors, an optimal volume and price of the investment, and an assessment of the finalized privatizations in those countries. The final part of the paper, which stems theoretically and practically from the first two parts and thus forms an interdependent and compatible thematic whole with them, presents qualitative and quantitative aspects of the analysis of finalized privatizations and/or sale-purchases of Serbian banks, with particular focus on IPO and IPOPLUS as the prevailing models for future sale-purchases in the privatization of Serbian banks.
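
    For readers unfamiliar with the Herfindahl-Hirschman index (HHI) mentioned above, it is simply the sum of squared market shares. A minimal sketch with hypothetical bank shares (the numbers are illustrative, not taken from the paper):

```python
# Minimal sketch: Herfindahl-Hirschman index from market shares in percent.
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

banks = [30, 25, 20, 15, 10]      # hypothetical market shares, summing to 100
print(hhi(banks))                 # 2250; antitrust practice commonly flags values
                                  # above roughly 1800-2500 as concentrated markets
```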

  14. Designing a Process for Tracking Business Model Change

    DEFF Research Database (Denmark)

    Groskovs, Sergejs

    The paper has adopted a design science research approach to design and verify with key stakeholders a fundamental management process of revising KPIs (key performance indicators), including those indicators that are related to business model change. The paper proposes a general guide … that may alter the business model of the firm. The decision-making process about which metrics to track affects what management’s attention is focused on during the year. The rather streamlined process outlined here is capable of facilitating swift responses to environmental changes in local markets … by establishing new KPIs on an ongoing basis together with the business units on the ground, and is thus of key importance to strategic management of the firm. The paper concludes with a discussion of its methodological compliance to design science research guidelines and revisits the literature in process …

  15. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    … increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately … to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC

  16. Modeling Aspects Of Activated Sludge Processes Part I: Process Modeling Of Activated Sludge Flocculation And Sedimentation

    International Nuclear Information System (INIS)

    Ibrahim, H. I.; EI-Ahwany, A.H.; Ibrahim, G.

    2004-01-01

    Reviews of process modeling of activated sludge flocculation and sedimentation consider activated sludge floc characteristics such as morphology, viable and non-viable cell ratio, density and water content. Bioflocculation and its kinetics were studied considering the characteristics of bioflocculation and explaining the theory of Divalent Cation Bridging, which describes the major role of cations in bioflocculation. Activated sludge flocculation process modeling was studied considering mass transfer limitations, from Clifft and Andrew, 1981, and Benefild and Molz, 1983, through Henze, 1987, up to Tyagi, 1996, and G. Ibrahim et al., 2002. Models of aggregation and breakage of flocs were studied by Spicer and Pratsinis, 1996, and Biggs, 2002. The size distribution of flocs influences mass transfer and biomass separation in the activated sludge process. Therefore, it is of primary importance to establish the role of specific process operation factors, such as sludge loading, dynamic sludge age and dissolved oxygen, on this distribution, with special emphasis on the formation of primary particles

  17. Including cetaceans in multi-species assessment models using strandings data: why, how and what can we do about it?

    Directory of Open Access Journals (Sweden)

    Camilo Saavedra

    2014-07-01

    questions. Most multispecies models include interactions between commercially exploited species, since those data are more readily available. However, information is needed on at least both the main prey and predators of a selected stock. In the case of European hake, the species we have focused our research on, cetaceans are their main predator, particularly common and bottlenose dolphins, which have been estimated to remove annually, in the Atlantic shelf waters of the Iberian Peninsula, an amount similar to that caught by the Spanish and Portuguese fleets (Santos et al., 2013). The European hake is one of the main species fished by the Spanish and Portuguese fleets operating in the area, and one where more research activity has been concentrated, hence there is plenty of available biological information on growth, reproduction and trophic interactions. As a result, a population model has been built which uses trophic interactions to investigate the relationships between hake and other species. The European hake population is currently divided into two stocks, north and south. The southern hake stock, distributed along the Atlantic coast of the Iberian Peninsula, is annually assessed by the International Council for the Exploration of the Sea (ICES) and the Spanish Institute of Oceanography (IEO). For the assessment of this stock, "Gadget", a multi-specific modeling framework, is used. Gadget allows the building of minimum realistic models that integrate the main trophic relationships among selected species considered to reflect the main processes in the system. Modeling cetacean populations can allow us to include complex trophic relationships in multispecies models. Furthermore, it will also be a tool to help cetacean conservation by guiding possible management measures that ensure their viability or recovery. All cetacean species are protected by national and international legislation (e.g. the Habitats Directive). However, modeling cetacean dynamics has a number of problems

  18. Developing a model for assessing biomass processing technologies within a local biomass processing depot.

    Science.gov (United States)

    Bals, Bryan D; Dale, Bruce E

    2012-02-01

    One solution to the supply chain challenges of cellulosic biofuels is a network of local biomass processing depots (LBPDs) that can produce stable, dense, intermediate commodities and valuable co-products prior to shipping to a refinery. A techno-economic model of an LBPD facility that could incorporate multiple technologies and products was developed in Microsoft Excel to be used to economically and environmentally evaluate potential LBPD systems. In this study, three technologies (ammonia fiber expansion or AFEX™ pretreatment, fast pyrolysis, and leaf protein processing) were assessed for profitability. Pyrolysis was slightly profitable under the base conditions, leaf protein processing was highly unprofitable, and AFEX was profitable if biomass drying was not required. This model can be adapted to multiple feedstocks and end uses, including both economic and environmental modeling. Copyright © 2011 Elsevier Ltd. All rights reserved.
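
    A sketch of the kind of annualized profitability screen such a depot-level techno-economic model performs (all figures below are hypothetical placeholders, not inputs or results of the authors' Excel model):

```python
# Minimal sketch: annualized profit of a hypothetical local biomass processing depot.
def annual_profit(capex, life_yr, rate, opex_per_t, revenue_per_t, tonnes):
    # capital recovery factor converts up-front capex into an equivalent annual charge
    crf = rate * (1 + rate) ** life_yr / ((1 + rate) ** life_yr - 1)
    return tonnes * (revenue_per_t - opex_per_t) - capex * crf

# e.g. a depot handling 100,000 t/yr of pretreated biomass (illustrative values)
print(annual_profit(capex=20e6, life_yr=15, rate=0.10,
                    opex_per_t=55.0, revenue_per_t=85.0, tonnes=100_000))
```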

  19. Mathematical Modeling of Aerodynamic Space-to-Surface Flight with Trajectory for Avoid Intercepting Process

    OpenAIRE

    Gornev, Serge

    2006-01-01

    Modeling has been carried out for a Space-to-Surface system, defining an optimal trajectory for targeting in the terminal phase that avoids an intercepting process. The modeling includes models for simulating the atmosphere, the speed of sound, aerodynamic flight, and navigation by an infrared system. The modeling and simulation include statistical analysis of the modeling results.

  20. Self-similar Gaussian processes for modeling anomalous diffusion

    Science.gov (United States)

    Lim, S. C.; Muniandy, S. V.

    2002-08-01

    We study some Gaussian models for anomalous diffusion, which include the time-rescaled Brownian motion, two types of fractional Brownian motion, and models associated with fractional Brownian motion based on the generalized Langevin equation. Gaussian processes associated with these models satisfy the anomalous diffusion relation which requires the mean-square displacement to vary with t^α, 0 < α < 2. Since the two types of fractional Brownian motion and the time-rescaled Brownian motion all have the same probability distribution function, the Slepian theorem can be used to compare their first passage time distributions, which are different. Finally, in order to model anomalous diffusion with a variable exponent α(t) it is necessary to consider the multifractional extensions of these Gaussian processes.
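
    A minimal sketch of the standard construction behind such models (not the authors' code): simulate fractional Brownian motion from the fractional-Gaussian-noise covariance and check that the ensemble mean-square displacement scales as t^α with α = 2H:

```python
# Minimal sketch: fractional Brownian motion via Cholesky factorization of the
# fractional-Gaussian-noise covariance, plus a check of the MSD ~ t**(2H) scaling.
import numpy as np

def fbm_paths(n_steps, hurst, n_paths, rng):
    k = np.arange(n_steps)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))      # fGn autocovariance
    cov = gamma[np.abs(k[:, None] - k[None, :])]        # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov)
    noise = L @ rng.standard_normal((n_steps, n_paths)) # correlated increments
    return np.cumsum(noise, axis=0)                     # fBm = cumulative sum of fGn

rng = np.random.default_rng(0)
H = 0.35                                                # subdiffusion, alpha = 2H = 0.7
paths = fbm_paths(256, H, 2000, rng)
t = np.arange(1, 257)
msd = np.mean(paths ** 2, axis=1)
slope = np.polyfit(np.log(t), np.log(msd), 1)[0]
print(f"fitted alpha ~ {slope:.2f} (expected {2 * H})")
```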

  1. Big-bang nucleosynthesis with a long-lived charged massive particle including {sup 4}He spallation processes in a bound state

    Energy Technology Data Exchange (ETDEWEB)

    Jittoh, Toshifumi; Kohri, Kazunori; Koike, Masafumi; Sato, Joe; Sugai, Kenichi; Yamanaka, Masato; Yazaki, Koichi [Department of Physics, Saitama University, Shimo-okubo, Sakura-ku, Saitama, 338-8570 (Japan); Theory Center, Institute of Particle and Nuclear Studies, KEK (High Energy Accelerator Research Organization), 1-1 Oho, Tsukuba 305-0801 (Japan); Maskawa Institute for Science and Culture, Kyoto Sangyo University, Kyoto 603-8555 (Japan); Hashimoto Mathematical Physics Laboratory, Nishina Accelerator Research Center, RIKEN, Wako, Saitama 351-0198 and Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2012-07-27

    We propose helium-4 spallation processes induced by long-lived stau in supersymmetric standard models, and investigate an impact of the processes on light elements abundances. We show that, as long as the phase space of helium-4 spallation processes is open, they are more important than stau-catalyzed fusion and hence constrain the stau property.

  2. Big-bang nucleosynthesis with a long-lived charged massive particle including 4He spallation processes in a bound state

    Science.gov (United States)

    Jittoh, Toshifumi; Kohri, Kazunori; Koike, Masafumi; Sato, Joe; Sugai, Kenichi; Yamanaka, Masato; Yazaki, Koichi

    2012-07-01

    We propose helium-4 spallation processes induced by long-lived stau in supersymmetric standard models, and investigate an impact of the processes on light elements abundances. We show that, as long as the phase space of helium-4 spallation processes is open, they are more important than stau-catalyzed fusion and hence constrain the stau property.

  3. Measuring the precision of multi-perspective process models

    NARCIS (Netherlands)

    Mannhardt, Felix; De Leoni, Massimiliano; Reijers, Hajo A.; Van Der Aalst, Wil M P

    2016-01-01

    Process models need to reflect the real behavior of an organization’s processes to be beneficial for several use cases, such as process analysis, process documentation and process improvement. One quality criterion for a process model is that it should be precise and not express more behavior than

  4. Uncertainty modeling process for semantic technology

    Directory of Open Access Journals (Sweden)

    Rommel N. Carvalho

    2016-08-01

    Full Text Available The ubiquity of uncertainty across application domains generates a need for principled support for uncertainty management in semantically aware systems. A probabilistic ontology provides constructs for representing uncertainty in domain ontologies. While the literature has been growing on formalisms for representing uncertainty in ontologies, there remains little guidance in the knowledge engineering literature for how to design probabilistic ontologies. To address the gap, this paper presents the Uncertainty Modeling Process for Semantic Technology (UMP-ST, a new methodology for modeling probabilistic ontologies. To explain how the methodology works and to verify that it can be applied to different scenarios, this paper describes step-by-step the construction of a proof-of-concept probabilistic ontology. The resulting domain model can be used to support identification of fraud in public procurements in Brazil. While the case study illustrates the development of a probabilistic ontology in the PR-OWL probabilistic ontology language, the methodology is applicable to any ontology formalism that properly integrates uncertainty with domain semantics.

  5. Geochemical modelization of differentiation processes by crystallization

    International Nuclear Information System (INIS)

    Cebria, J.M.; Lopez Ruiz, J.

    1994-01-01

    During crystallization processes, major and trace elements and stable isotopes fractionate, whereas radiogenic isotopes do not change. The different equations proposed allow us to reproduce the variation in major and trace elements during these differentiation processes. In the case of simple fractional crystallization, the residual liquid is impoverished in compatible elements faster than it is enriched in incompatible elements as crystallization proceeds. During in situ crystallization the incompatible elements evolve in a similar way to the case of simple fractional crystallization, but the enrichment rate of the moderately incompatible elements is slower and the compatible elements do not suffer a depletion as strong as the one observed during simple fractional crystallization, even for higher f values. In a periodically replenished magma chamber, if all the liquid present is removed at the end of each cycle, the magma follows patterns similar to those generated by simple fractional crystallization. On the contrary, if the liquid fraction that crystallizes during each cycle and the one that is extruded at the end of the cycle are small, the residual liquid shows compositions similar to those that would be obtained by equilibrium crystallization. Modelling crystallization processes is in general less difficult than modelling partial melting. If a rock series is the result of simple fractional crystallization, a C_L^i - C_L^j plot, in which i is a compatible element and j is highly incompatible, allows us to obtain a good approximation to the initial liquid composition. Additionally, log C_L^i - log C_L^j diagrams, in which i is a highly incompatible element, allow us to identify steps in the process and to calculate the bulk distribution coefficients of the trace elements during each step
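
    The simple fractional crystallization case above follows the standard Rayleigh law, C_L/C_0 = F^(D-1), where F is the remaining melt fraction and D the bulk partition coefficient. A minimal sketch (generic formula, not tied to the paper's data) showing why compatible elements are depleted far faster than incompatible elements are enriched:

```python
# Minimal sketch: Rayleigh fractionation of the residual liquid.
import numpy as np

F = np.linspace(1.0, 0.1, 10)                  # melt fraction remaining
for D, label in [(0.01, "highly incompatible"), (2.0, "compatible")]:
    ratio = F ** (D - 1.0)                     # C_L / C_0
    print(f"{label:20s} D = {D:4.2f}   C_L/C_0 at F = 0.1: {ratio[-1]:.2f}")
```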

  6. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    International Nuclear Information System (INIS)

    Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.

    2012-01-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS)) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011–0.013) clinical factor was “previous abdominal surgery.” As second significant (p = 0.012–0.016) factor, “cardiac history” was included in all three rectal bleeding fits, whereas including “diabetes” was significant (p = 0.039–0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003–0.006) from the inclusion of the baseline toxicity score. For all models rectal bleeding fits had the highest AUC (0.77) where it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions
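
    A minimal sketch of the generic LKB formulation referred to above, with a clinical risk factor folded in as a dose-modifying factor on TD50 (the DVH, parameter values, and the dose-modifying-factor approach are illustrative assumptions, not the paper's fitted model):

```python
# Minimal sketch: LKB NTCP from a differential DVH via the generalized EUD.
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses_gy, volumes, n, m, td50, dmf=1.0):
    v = np.asarray(volumes, float) / np.sum(volumes)        # fractional volumes
    geud = np.sum(v * np.asarray(doses_gy, float) ** (1.0 / n)) ** n
    t = (geud - dmf * td50) / (m * dmf * td50)              # dmf < 1 models a risk factor
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))                 # standard normal CDF

dvh_dose = [20, 40, 60, 70]          # Gy, hypothetical DVH bins
dvh_vol = [0.4, 0.3, 0.2, 0.1]
print(lkb_ntcp(dvh_dose, dvh_vol, n=0.09, m=0.13, td50=81.0))            # baseline
print(lkb_ntcp(dvh_dose, dvh_vol, n=0.09, m=0.13, td50=81.0, dmf=0.9))   # with risk factor
```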

  7. Recursive Gaussian Process Regression Model for Adaptive Quality Monitoring in Batch Processes

    Directory of Open Access Journals (Sweden)

    Le Zhou

    2015-01-01

    Full Text Available In chemical batch processes with slow responses and a long duration, it is time-consuming and expensive to obtain sufficient normal data for statistical analysis. With the persistent accumulation of the newly evolving data, the modelling becomes adequate gradually and the subsequent batches will change slightly owing to the slow time-varying behavior. To efficiently make use of the small amount of initial data and the newly evolving data sets, an adaptive monitoring scheme based on the recursive Gaussian process (RGP) model is designed in this paper. Based on the initial data, a Gaussian process model and the corresponding SPE statistic are constructed at first. When the new batches of data are included, a strategy based on the RGP model is used to choose the proper data for model updating. The performance of the proposed method is finally demonstrated by a penicillin fermentation batch process and the result indicates that the proposed monitoring scheme is effective for adaptive modelling and online monitoring.
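
    A minimal sketch of a Gaussian process used as an adaptive quality model (synthetic data; plain re-fitting when a new batch is included, which only approximates the recursive update the paper proposes, and the SPE monitoring statistic is omitted):

```python
# Minimal sketch: RBF-kernel GP regression refreshed when a new batch of data arrives.
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    return sf ** 2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = rbf(x_test, x_test).diagonal() - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mean, var

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 15); y = np.sin(x) + 0.1 * rng.standard_normal(15)
xs = np.linspace(0, 10, 5)
m0, v0 = gp_posterior(x, y, xs)
x_new = rng.uniform(0, 10, 5); y_new = np.sin(x_new) + 0.1 * rng.standard_normal(5)
m1, v1 = gp_posterior(np.concatenate([x, x_new]), np.concatenate([y, y_new]), xs)
print(np.round(v0, 3), np.round(v1, 3))   # predictive variance typically shrinks as data accrue
```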

  8. Can a one-layer optical skin model including melanin and inhomogeneously distributed blood explain spatially resolved diffuse reflectance spectra?

    Science.gov (United States)

    Karlsson, Hanna; Pettersson, Anders; Larsson, Marcus; Strömberg, Tomas

    2011-02-01

    Model based analysis of calibrated diffuse reflectance spectroscopy can be used for determining oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as chromophores and compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber optic probe with source-detector separations 0.4 and 1.2 mm. Absolute calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance. The two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller separation was a different light scattering decay with wavelength, while the tissue fraction of Hb and saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.

  9. Heat source model for welding process

    International Nuclear Information System (INIS)

    Doan, D.D.

    2006-10-01

    One of the major industrial stakes of welding simulation relates to the control of the mechanical effects of the process (residual stress, distortions, fatigue strength...). These effects are directly dependent on the temperature evolutions imposed during the welding process. To model this thermal loading, an original method is proposed instead of the usual methods like the equivalent heat source approach or the multi-physical approach. This method is based on the estimation of the weld pool shape together with the heat flux crossing the liquid/solid interface, from experimental data measured in the solid part. Its originality consists in solving an inverse Stefan problem specific to the welding process, and it is shown how to estimate the parameters of the weld pool shape. To solve the heat transfer problem, the liquid/solid interface is modeled by a Bezier curve (2-D) or a Bezier surface (3-D). This approach is well adapted to a wide diversity of weld pool shapes met for the majority of the current welding processes (TIG, MIG-MAG, Laser, FE, Hybrid). The number of parameters to be estimated is small, ranging from 2 to 5 in 2D and from 7 to 16 in 3D, according to the cases considered. A sensitivity study leads to specifying the location of the sensors, their number and the set of measurements required for a good estimate. The application of the method to test results of TIG welding on thin stainless steel sheets, in fully penetrated and partially penetrated configurations, shows that only one measurement point is enough to estimate the various weld pool shapes in 2D, and two points in 3D, whether the penetration is full or not. In the last part of the work, a methodology is developed for the transient analysis. It is based on the Duvaut transformation, which overcomes the discontinuity at the liquid metal interface and therefore gives a continuous variable over the whole spatial domain. Moreover, it allows working on a fixed mesh grid, and the new inverse problem is equivalent to identifying a source
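
    A minimal sketch of the Bezier parameterization idea: a 2-D liquid/solid interface described by a cubic Bezier curve whose few control-point coordinates would be the unknowns of the inverse problem (the control points below are hypothetical):

```python
# Minimal sketch: a weld-pool boundary as a cubic Bezier curve (Bernstein form).
import numpy as np

def bezier(control_points, n=50):
    p = np.asarray(control_points, dtype=float)      # shape (4, 2)
    t = np.linspace(0.0, 1.0, n)[:, None]
    b = np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])
    return b @ p                                     # (n, 2) points on the curve

# hypothetical pool: ~4 mm wide at the surface, ~1.5 mm deep
ctrl = [(-2.0, 0.0), (-1.5, -2.0), (1.5, -2.0), (2.0, 0.0)]
xy = bezier(ctrl)
print(xy[[0, 25, -1]])   # end points sit on the surface, mid-point near maximum depth
```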

  10. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  11. Mechanical-mathematical modeling for landslide process

    Science.gov (United States)

    Svalova, V.

    2009-04-01

    500 m and displacement of the landslide in plan of over 1 m. The last serious activation of the landslide took place in 2002, with a movement of 53 cm. A catastrophic activation of the deep blockglide landslide in the area of Khoroshevo in Moscow took place in 2006-2007. A crack 330 m long appeared in the old sliding circus, along which a new creeping block 220 m long separated from the plateau and began sinking, with the displaced surface of the plateau reaching 12 m. Such an activation of the landslide process had not been observed in Moscow since the mid-nineteenth century. The sliding area of Khoroshevo had been stable for a long time without manifestations of activity. Revealing the causes of the deformation and developing ways of protection against deep landslide motions is an extremely topical and difficult problem, whose solution is necessary for the preservation of valuable historical monuments and modern city constructions. The reasons for the activation and possible protective measures are discussed. The structure of a monitoring system for urban territories is elaborated. A mechanical-mathematical model of a highly viscous fluid was used for modeling the behavior of matter on landslide slopes. The continuity equation and an approximate Navier-Stokes equation for slow motion in a thin layer were used. The results of the modelling make it possible to determine the location of the highest velocity on the landslide surface, which could be the best place for positioning a monitoring post. The model can be used for the calibration of monitoring equipment and makes it possible to investigate some fundamental aspects of the movement of matter on a landslide slope.
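
    A generic thin-layer (lubrication) formulation of the kind such a highly viscous fluid model typically rests on (a standard textbook form, stated here as an assumption rather than the author's exact equations): for a layer of thickness h(x,t), density ρ and viscosity μ on a slope of angle θ,

    $$\frac{\partial h}{\partial t}+\frac{\partial}{\partial x}\left[\frac{\rho g h^{3}}{3\mu}\left(\sin\theta-\cos\theta\,\frac{\partial h}{\partial x}\right)\right]=0,\qquad \bar{u}=\frac{\rho g h^{2}}{3\mu}\left(\sin\theta-\cos\theta\,\frac{\partial h}{\partial x}\right),$$

    where the depth-averaged velocity ū is largest where the layer is thick and the driving slope term is large, which is why such a model can point to the most informative position for a monitoring post.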

  12. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    Full Text Available In general, any activity requires a longer course of action, often characterized by a degree of uncertainty and insecurity in terms of the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the different variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all the issues relevant to management decision analysis over a real economic horizon. Often in such cases, it is considered that the simulation technique is the only alternative available. Using simulation techniques to study real-world systems often requires laborious work. Carrying out a simulation experiment is a process that takes place in several stages.

  13. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported with Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on the information and material flow, together with a methodology for the implementation of a KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
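
    A toy sketch of the kind of experiment such a simulation model supports (a simple time-stepped approximation rather than a full discrete-event engine; all probabilities are hypothetical): vary the number of kanban cards and observe the effect on lost demand.

```python
# Minimal sketch: one workstation whose WIP is capped by a fixed number of kanban cards.
import random

def simulate(n_kanban, days=10_000, p_demand=0.6, p_finish=0.7, seed=42):
    rng = random.Random(seed)
    finished, wip, lost = n_kanban, 0, 0   # all cards start attached to finished stock
    for _ in range(days):
        if rng.random() < p_demand:        # a customer order arrives
            if finished > 0:
                finished -= 1              # ship from the supermarket;
                wip += 1                   # the freed card authorizes new production
            else:
                lost += 1                  # stockout: the order is lost
        if wip > 0 and rng.random() < p_finish:
            wip -= 1                       # the workstation completes one unit
            finished += 1
    return lost / days

for k in (1, 2, 4, 8):
    print(k, "kanban cards -> lost-demand rate", round(simulate(k), 3))
```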

  14. Elliptic Determinantal Processes and Elliptic Dyson Models

    Science.gov (United States)

    Katori, Makoto

    2017-10-01

    We introduce seven families of stochastic systems of interacting particles in one dimension corresponding to the seven families of irreducible reduced affine root systems. We prove that they are determinantal in the sense that all spatio-temporal correlation functions are given by determinants controlled by a single function called the spatio-temporal correlation kernel. For the four families A_{N-1}, B_N, C_N and D_N, we identify the systems of stochastic differential equations solved by these determinantal processes, which will be regarded as the elliptic extensions of the Dyson model. Here we use the notion of martingales in probability theory and the elliptic determinant evaluations of the Macdonald denominators of irreducible reduced affine root systems given by Rosengren and Schlosser.

  15. A finite element model of the lower limb during stance phase of gait cycle including the muscle forces.

    Science.gov (United States)

    Diffo Kaze, Arnaud; Maas, Stefan; Arnoux, Pierre-Jean; Wolf, Claude; Pape, Dietrich

    2017-12-07

    Results of finite element (FE) analyses can give insight into musculoskeletal diseases if physiological boundary conditions, which include the muscle forces during specific activities of daily life, are considered in the FE modelling. So far, many simplifications of the boundary conditions are currently made. This study presents an approach for FE modelling of the lower limb for which muscle forces were included. The stance phase of normal gait was simulated. Muscle forces were calculated using a musculoskeletal rigid body (RB) model of the human body, and were subsequently applied to a FE model of the lower limb. It was shown that the inertial forces are negligible during the stance phase of normal gait. The contact surfaces between the parts within the knee were modelled as bonded. Weak springs were attached to the distal tibia for numerical reasons. Hip joint reaction forces from the RB model and those from the FE model were similar in magnitude with relative differences less than 16%. The forces of the weak spring were negligible compared to the applied muscle forces. The maximal strain was 0.23% in the proximal region of the femoral diaphysis and 1.7% in the contact zone between the tibia and the fibula. The presented approach based on FE modelling by including muscle forces from inverse dynamic analysis of musculoskeletal RB model can be used to perform analyses of the lower limb with very realistic boundary conditions. In the present form, this model can be used to better understand the loading, stresses and strains of bones in the knee area and hence to analyse osteotomy fixation devices.

  16. Global stability for infectious disease models that include immigration of infected individuals and delay in the incidence

    Directory of Open Access Journals (Sweden)

    Chelsea Uggenti

    2018-03-01

    Full Text Available We begin with a detailed study of a delayed SI model of disease transmission with immigration into both classes. The incidence function allows for a nonlinear dependence on the infected population, including mass action and saturating incidence as special cases. Due to the immigration of infectives, there is no disease-free equilibrium and hence no basic reproduction number. We show there is a unique endemic equilibrium and that this equilibrium is globally asymptotically stable for all parameter values. The results include vector-style delay and latency-style delay. Next, we show that previous global stability results for an SEI model and an SVI model that include immigration of infectives and non-linear incidence but not delay can be extended to systems with vector-style delay and latency-style delay.
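
    A minimal numerical sketch of a delayed SI model with immigration into both classes and saturating incidence (a generic form with hypothetical parameter values, integrated with a fixed-step Euler scheme and a history buffer for the delayed term; not the paper's analysis):

```python
# Minimal sketch: delayed SI model with immigration, incidence beta*S(t)*I(t-tau)/(1+a*I(t-tau)).
def simulate(tau=5.0, dt=0.01, t_end=200.0):
    lam_s, lam_i = 10.0, 0.5            # immigration into S and I
    beta, a, mu, d = 0.02, 0.1, 0.1, 0.05
    n_lag = int(tau / dt)
    S, I = 100.0, 1.0
    hist = [I] * (n_lag + 1)            # constant initial history I(t) = I(0) for t <= 0
    for _ in range(int(t_end / dt)):
        I_lag = hist[0]
        inc = beta * S * I_lag / (1.0 + a * I_lag)   # saturating incidence
        S += dt * (lam_s - inc - mu * S)
        I += dt * (lam_i + inc - (mu + d) * I)
        hist.pop(0); hist.append(I)
    return S, I

print(simulate())   # trajectories settle, consistent with a globally stable endemic equilibrium
```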

  17. Modeling of flash calcination process during clay activation

    International Nuclear Information System (INIS)

    Borrajo Perez, Ruben; Gonzalez Bayon, Juan Jose; Sanchez Rodriguez, Andy A.

    2011-01-01

    Pozzolanic activity in some materials can be increased by means of different processes, among which thermal activation is one of the most promising. The activation process, occurring at high temperatures and velocities, produces a material with better characteristics. In the last few years, pozzolans with high reactivity during the early days of curing have been produced. Temperature is an important parameter in the activation process and, as a consequence, the activation units must accommodate temperature variation to allow the use of different raw materials, each of them with different characteristics. Considering the high price of kaolin in the market, new materials are being tested, such as clayey soils, which after a sedimentation process produce a clay that has turned out to be a suitable raw material when the kinetics of the pozzolanic reaction is considered. Additionally, other materials with higher levels of kaolin are being used with good results. This paper deals with the modeling of the thermal, hydrodynamic and dehydroxylation processes undergone by solid particles exposed to a hot gas stream. The models employed are discussed; the velocity and temperature of the particles are obtained as functions of the carrier gas parameters. The calculations include the heat losses and, finally, the model predicts the residence time needed to complete the activation process. (author)
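
    A minimal sketch of the lumped particle-heating step such a model includes, using the Ranz-Marshall correlation for the gas-particle heat transfer coefficient (all property values are hypothetical placeholders, not the paper's data):

```python
# Minimal sketch: time for a small clay particle in a hot gas stream to reach a
# target (dehydroxylation) temperature, lumped-capacitance with Ranz-Marshall.
import math

def heat_up(d_p=50e-6, rho_p=2600.0, cp_p=900.0, T0=300.0,
            T_gas=1100.0, T_target=800.0, u_slip=2.0,
            k_g=0.06, rho_g=0.35, mu_g=4e-5, cp_g=1100.0, dt=1e-4):
    Re = rho_g * u_slip * d_p / mu_g
    Pr = cp_g * mu_g / k_g
    Nu = 2.0 + 0.6 * math.sqrt(Re) * Pr ** (1.0 / 3.0)     # Ranz-Marshall correlation
    h = Nu * k_g / d_p
    area = math.pi * d_p ** 2
    heat_cap = rho_p * cp_p * math.pi * d_p ** 3 / 6.0     # particle thermal capacity, J/K
    T, t = T0, 0.0
    while T < T_target:
        T += dt * h * area * (T_gas - T) / heat_cap
        t += dt
    return t

print(f"time to reach the target temperature: {heat_up() * 1e3:.1f} ms")
```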

  18. [Standardization and modeling of surgical processes].

    Science.gov (United States)

    Strauss, G; Schmitz, P

    2016-12-01

    Due to the technological developments around the operating room, surgery in the twenty-first century is undergoing a paradigm shift. Which technologies have already been integrated into the surgical routine? How can a favorable cost-benefit balance be achieved by the implementation of new software-based assistance systems? This article presents the state of the art technology as exemplified by a semi-automated operation system for otorhinolaryngology surgery. The main focus is on systems for implementation of digital handbooks and navigational functions in situ. On the basis of continuous development in digital imaging, decisions may by facilitated by individual patient models thus allowing procedures to be optimized. The ongoing digitization and linking of all relevant information enable a high level of standardization in terms of operating procedures. This may be used by assistance systems as a basis for complete documentation and high process reliability. Automation of processes in the operating room results in an increase in quality, precision and standardization so that the effectiveness and efficiency of treatment can be improved; however, care must be taken that detrimental consequences, such as loss of skills and placing too much faith in technology must be avoided by adapted training concepts.

  19. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    E. Sonnenthal; N. Spycher

    2001-02-05

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data

  20. Mathematical Formulation Requirements and Specifications for the Process Models

    International Nuclear Information System (INIS)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

    2010-01-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments naturally